5 tests to validate the quality of your data:

Missing Data:

  • Descriptive statistics can be used to find missing data
  • Tools like SQL/Excel/R can also be used to look for missing data
  • Some of the attributes of a field may be missing, like the postal code in an address field

Non-standardized:

  • Check if all the values are standardized: Google, Google Inc & Alphabet might need to be standardized and categorized as Alphabet
  • Different Date formats used in the same field (MM/DD/YYYY and DD/MM/YYYY)

Incomplete:

  • Total size of data (# of rows/columns): Sometimes you may not have all the rows you were expecting (e.g. 100k rows for each of your 100k customers), and if that's not the case then that tells us we don't have the complete dataset at hand

Erroneous:

  • Outlier: If someone's age is 250 then that's an outlier, but it's also an error somewhere in the data pipeline that needs to be fixed; outliers can be detected by creating quick data visualizations
  • Data type mismatch: If a text value appears in a field where the other entries are integers, that's also an error

Duplicates:

  • Duplicates can be introduced into the data, e.g. the same rows repeated in the dataset, so those need to be de-duplicated (see the sketch below)
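
Here is a minimal R sketch of quick checks covering each of the five tests above. It assumes a hypothetical data frame called customers with columns like company and age, so adjust the names to your own data:

    # Missing data: count NAs in each column
    colSums(is.na(customers))

    # Non-standardized values: list the distinct spellings of a company name
    unique(customers$company)

    # Incomplete: is the row count what we expected (e.g. one row per customer)?
    nrow(customers)

    # Erroneous: a quick visualization to spot outliers such as age = 250
    boxplot(customers$age)

    # Data type mismatch: check the type of each column
    str(customers)

    # Duplicates: count and then remove fully duplicated rows
    sum(duplicated(customers))
    customers <- customers[!duplicated(customers), ]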

Hope that helps!

Paras Doshi

This post is sponsored by MockInterview.co. If you are looking for data science jobs, check out 75+ data science interview questions!

Journal of Statistical Software paper on tidying data:

Data cleaning takes up a lot of time during a data science process. That's not necessarily a bad thing, and the time spent cleaning data is worthwhile in most cases. To that end, I was researching frameworks that might help me make this process a little faster. As part of my research, I found a Journal of Statistical Software paper written by Hadley Wickham which lays out a really good framework for "tidying" data, which is part of the data cleaning process.

The author does a great job of defining tidy data:

1. Each variable forms a column.
2. Each observation forms a row.
3. Each type of observational unit forms a table.

He then applies this definition to five common examples of messy data:

1. Column headers are values, not variable names.
2. Multiple variables are stored in one column.
3. Variables are stored in both rows and columns.
4. Multiple types of observational units are stored in the same table.
5. A single observational unit is stored in multiple tables.

The paper also contains some sample R code. You can read it here: http://vita.had.co.nz/papers/tidy-data.pdf
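
As a quick illustration of the first problem (column headers that are values, not variable names), here is a minimal sketch using the tidyr package; the sales data frame and its column names are made up for the example, and the paper's own sample code uses earlier packages rather than pivot_longer:

    library(tidyr)

    # Messy: the column headers "2023" and "2024" are values of a year variable
    sales <- data.frame(
      product = c("A", "B"),
      `2023`  = c(10, 20),
      `2024`  = c(15, 25),
      check.names = FALSE
    )

    # Tidy: one column per variable (product, year, units), one row per observation
    tidy_sales <- pivot_longer(sales, cols = c("2023", "2024"),
                               names_to = "year", values_to = "units")
    tidy_sales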

As a student preparing for data analyst & data science roles, should I generalize vs specialize?

This question was posted on the Springboard forum.

Here’s my answer:

It depends on your target industry & where it is in its life-cycle.

An industry's life-cycle has four stages: Startup, Growth, Maturity and Decline.

Generalization is great in the earlier stages. If you are targeting jobs at startups, generalize. You should know enough about a lot of things.

T-shaped professionals are great for the Growth stage. They specialize in something but still know enough about a lot of things. E.g. a Sr. Growth/Marketing Analyst who knows enough about analytics & data science to be dangerous but specializes in marketing.

Specialization is great for mature industries. Specialists know a lot about a few things. E.g. statisticians in the insurance industry who have made careers out of building risk models.

Any advice for moving into data science from business intelligence?

This was asked on Reddit: Any advice for moving into data science from business intelligence?

Here’s my answer:

I come from a "Business Intelligence" background and currently work as a Sr. Data Scientist. I found that you need two things to transition into data science:

Data Culture: A company where the data culture is such that managers/executives ask big questions that need a data science approach to solve them. If your end-consumers are still asking a bunch of "what" questions, then your company might NOT be ready for data science. But if your CEO comes to you and says "hey, I got the customer list with the info I asked for, but can you help me understand which of these customers might churn next quarter?", then you have a data science problem at hand. So, try to find companies that have this culture.

Skills: You also need to upgrade your skills to be able to solve data science problems. BI focuses heavily on technology and automation, so you may need to unlearn a few things. For example, automation is not always important, since you might work on problems where a model is needed to make predictions just a couple of times; trying to automate wouldn't be optimal in that case. Also, BI relies heavily on tools, but in data science you'll need deeper domain knowledge & a problem-solving approach along with technical skills.

Also, I personally moved from BI (as a consultant) -> Analytics (as an Analytics Manager) -> Data Science (as a Sr. Data Scientist), and this path has been super helpful for me. I recommend transitioning into analytics first and then eventually breaking into data science.

Hope that helps!

How do I learn #SQL for #data analysis?

Step 1:

This is a good starting point: SQL School Table of Contents

OR, this: Learn SQL

Both of these resources were put together by an analytics vendor and are targeted towards beginners.

Step 2:

Review this Quora Thread: How do I learn SQL?

Participate in competitions like this: Solve SQL Code Challenges

Step 3:

If you'd like to go more in-depth, then check out a few books:

  1. Head First SQL
  2. Learn SQL the Hard Way
  3. Certification books/material from a database vendor

Hope that helps!

Single variable linear regression: Calculating baseline prediction, SSE, SST, R2 & RMSE:

Introduction:

This post is focused on basic concepts in linear regression and I will share how to calculate baseline prediction, SSE, SST, R2 and RMSE for a single variable linear regression.

Dataset:

The following figure shows three data points, (0, 2), (1, 2) and (1, 8), and the best-fit regression line: y = 3x + 2.

The x-coordinate, or “x”, is our independent variable and the y-coordinate, or “y”, is our dependent variable.

Baseline Prediction:

The baseline prediction is just the average of the values of the dependent variable. So in this case:

(2 + 2 + 8) / 3 = 4

It doesn't take the independent variable into account and just predicts the same value for every observation. We'll see in a minute why the baseline prediction is important.

Here’s what the baseline model would look like:

[Figure: regression baseline model]

SSE:

SSE stands for Sum of Squared Errors.

Error is the difference between the actual and predicted values. Using the regression line y = 3x + 2, the predicted values for the three points are 2, 5 and 5.

So SSE in this case:

= (2 – 2)^2 + (2 – 5)^2 + (8 – 5)^2

= 0 + 9 + 9

= 18

SST:

SST stands for Total Sum of Squares.

Step 1 is to take the difference between the actual values of the dependent variable and the baseline prediction.

Step 2 is to square each of those differences and add them up.

So in this case:

= (2 – 4)^2 + (2 – 4)^2 + (8 – 4)^2

= 24

R2:

Now R2 is 1 – (SSE/SST)

So in this case:

= 1 – (18/24)

= 0.25

RMSE:

RMSE stands for Root Mean Squared Error. It can be computed using:

Square Root of (SSE/N), where N is the number of data points.

So in this case, it’s:

SQRT (18/3) = 2.45
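
To tie the steps together, here is a short R sketch that reproduces these numbers, using the three points (0, 2), (1, 2) and (1, 8) inferred from the calculations above:

    # Data points inferred from the worked example
    x <- c(0, 1, 1)
    y <- c(2, 2, 8)

    baseline  <- mean(y)            # 4
    predicted <- 3 * x + 2          # 2, 5, 5 from the line y = 3x + 2

    sse  <- sum((y - predicted)^2)  # 18
    sst  <- sum((y - baseline)^2)   # 24
    r2   <- 1 - sse / sst           # 0.25
    rmse <- sqrt(sse / length(y))   # 2.449...

    c(baseline = baseline, SSE = sse, SST = sst, R2 = r2, RMSE = rmse)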

 

Is the R data science course from DataCamp worth the money?

Question (on Quora): Is the R data science course from DataCamp worth the money?

Answer:

It depends on your learning style.

If you like watching videos then Coursera/Udacity might be better.

If you like reading then a book/e-book might be better.

If you like hands-on learning then something like DataCamp is a great choice. I think they have monthly plans, so it's much cheaper to try them out. When I subscribed, it was around $30/month, and I found it was worth it. Also, if you want to see whether "hands-on" is how you learn best, try swirl: Learn R, in R (it's free!). DataCamp has a free course on R too, so you could try that as well.

Also, if you want free unlimited access for 2 days then try this link: https://www.datacamp.com/invite/G8yVkTrwR3Khn
