All things data newsletter #16

Standard

(if this newsletter was forwarded to you then you can subscribe here: https://insightextractor.com/)

The goal of this newsletter is to promote continuous learning for data science and engineering professionals. To achieve this goal, I’ll be sharing articles across various sources that I found interesting. The following 5 articles/videos made the cut for today’s newsletter.

(1) Data & AI landscape 2020

Really good review of the year 2020 in the data & AI landscape. Look at all those logos representing a bunch of companies tackling various data and AI challenges — it’s an exciting time to be in data! Read here

2020 Data and AI Landscape
Image Source

(2) Self-Service Analytics

Tooling is the easy part; it’s the follow-up steps that are needed to truly achieve a culture that is independently data-driven. Read here

(3) What is the difference between data pipeline and ETL?

Really good back-to-basics video on the difference between a data pipeline and ETL.

(4) Delivering High Quality Analytics at Netflix

I loved this video! It talks about how to ensure data quality throughout your data stack.

(5) Introduction to data lakes and analytics on AWS

I have another great YouTube video for you. This one introduces you to various AWS tools for data and analytics.

Thanks for reading! Now it’s your turn: Which article did you love the most and why?

All things Data Newsletter #15 (#dataengineering #datascience #data #analytics)


(if this newsletter was forwarded to you then you can subscribe here: https://insightextractor.com/)

The goal of this newsletter is to promote continuous learning for data science and engineering professionals. To achieve this goal, I’ll be sharing articles across various sources that I found interesting. The following 5 articles made the cut for today’s newsletter.

(1) Scaling data

Fantastic article by Crystal Widjaja on scaling data. It shares a really good framework for building analytics maturity and how to think about building capabilities to navigate each stage. Must read! Here

Image Source: reforge

(2) Building startup’s data infrastructure in 1-Hour

Good video that touches on multiple tools. Watch here: https://www.youtube.com/watch?v=WOSrRTaNIm0 (it’s a little outdated since it was shared in 2019, two years ago, but the architecture is still helpful)

(3) Analytics lesson learned

If you haven’t read Lean Analytics, I recommend it! After that, you should read this free companion, which covers 12 good analytics case studies. Read here

(4) Organizing data teams

How do you organize data teams? Completely centralized under a data leader? Or decentralized, reporting into leaders of business functions? Some good thoughts here

Image Source

(5) Metrics layer is a missing piece in modern data stack

This is a good article that encourages you to think about adding a metrics layer to your data stack. In the last newsletter, I shared an article about Airbnb’s Minerva metrics layer, and this article does a good job of providing additional reasons to build something similar. Read here

Thanks for reading! Now it’s your turn: Which article did you love the most and why?

All things data newsletter #12 (#dataengineer #datascience)


(if this newsletter was forwarded to you then you can subscribe here: https://insightextractor.com/)

The goal of this newsletter is to promote continuous learning for data science and engineering professionals. To achieve this goal, I’ll be sharing articles across various sources that I found interesting. The following 5 articles made the cut for today’s newsletter.

Why did Dropbox pick Apache Superset as its data exploration tool?

Apache Superset is gaining momentum, and if you want to understand the reasons behind that, you can start by reading this article here

Growth: Adjacent User Theory

I love the framing in this LinkedIn post here, where Nimit Jain says that great Growth PM output looks like: “We discovered 2 new user segments who are struggling to proceed at 2 key steps in the funnel and simplified the product for them via A/B experiments. This lead to conversion improvement of 5-10% at these steps so far. We are now working to figure the next segment of users to focus on.” You can read about the Adjacent User theory here

SQL window functions

Need intro to SQL window functions? Read this
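If you want to see a window function run end to end before diving into the tutorial, here is a minimal, self-contained sketch using Python’s built-in sqlite3 module. The table and data are made up for illustration, and window-function support requires SQLite 3.25+ (bundled with modern Python builds).

```python
import sqlite3

# In-memory database with a toy orders table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('alice', 30), ('alice', 50), ('bob', 20), ('bob', 80);
""")

# RANK() OVER (PARTITION BY ...) ranks each order within its customer,
# without collapsing rows the way GROUP BY would.
rows = conn.execute("""
    SELECT customer, amount,
           RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
    FROM orders
""").fetchall()

for customer, amount, rnk in rows:
    print(customer, amount, rnk)
```

The key idea: a window function computes a value per row over a “window” of related rows, so you keep row-level detail while adding aggregate context.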

Luigi vs Airflow

Really good matrix comparing two popular ETL workflow platforms. Read here

A data engineer’s point of view on data democratization

If more people can easily access data that was previously inaccessible to them, that’s a good thing. This is a good read on the various things to consider; read here

Apache Superset growth within Dropbox:

superset adoption data graphics
Image Source: Dropbox Tech Blog

Thanks for reading! Now it’s your turn: Which article did you love the most and why?

All things data newsletter #11 (#dataengineer, #datascience)


(if this newsletter was forwarded to you then you can subscribe here: https://insightextractor.com/)

The goal of this newsletter is to promote continuous learning for data science and engineering professionals. To achieve this goal, I’ll be sharing articles across various sources that I found interesting. The following 5 articles made the cut for today’s newsletter.

1. AWS re:Invent ML, Data and Analytics announcements

Really good recap of all the ML, data, and analytics announcements at AWS re:Invent 2020 here

2. How to build production workflow with SQL modeling

A really good example of how a data engineering team at Shopify applied software engineering best practices to analytics code. Read here

Image Source

3. Back to basics: What are different data pipeline components and types?

Must know basic concepts for every data engineer here

4. Back to basics: SQL window functions

I was interviewing a senior candidate earlier this week, and it was unfortunate to see basic mistakes while writing SQL window functions. Don’t let that happen to you. Good tutorial here

5. 300+ data science interview questions

Good library of data science interview questions and answers

Thanks for reading! Now it’s your turn: Which article did you love the most and why?

All things data newsletter #10 (#dataengineer #datascience)


(if this newsletter was forwarded to you then you can subscribe here: https://insightextractor.com/)

The goal of this newsletter is to promote continuous learning for data science and engineering professionals. To achieve this goal, I’ll be sharing articles across various sources that I found interesting. The following 5 articles made the cut for today’s newsletter.

1. Architecture for Telemetry data

A good reminder that the software development architecture can be significantly simplified for capturing telemetry data here

2. 5 popular job titles for data engineers

This post here lists 5 popular job titles: data engineer, data architect, data warehouse engineer — I think analytics engineer is missing from that list, but it’s a good post nonetheless. I hope we get some consolidation and standardization of these job titles over the next few cycles.

3. [Podcast] startup growth strategy and building Gojek data team – Crystal Widjaja

Really good podcast, highly recommended! here

4. Tenets for data cleaning

A must-read technical whitepaper from the legendary Hadley Wickham. These principles form the foundation on top of which the R ecosystem gained a lot of adoption momentum, and the Python community uses similar tenets. Must read! here and here

5. Magic metrics indicating that a startup probably has product/market fit, from Andrew Chen

A must-follow Growth leader!

  1. Cohort Retention curves flatten (stickiness)
  2. Actives/Reg > 25% (validates TAM)
  3. Power user curve showing a smile

Image Source

Thanks for reading! Now it’s your turn: Which article did you love the most and why?

All things data engineering & science newsletter #7


(if this newsletter was forwarded to you then you can subscribe here: https://insightextractor.com/)

The goal of this newsletter is to promote continuous learning for data science and engineering professionals. To achieve this goal, I’ll be sharing articles across various sources that I found interesting. The following 5 articles made the cut for today’s newsletter.

1. Why is a data scientist not a data engineer?

Good post on the difference between a data engineer and a data scientist, and why you need both roles on a data team. I chuckled when one of the sections explained why data engineering != Spark, since I completely agree that these roles shouldn’t be boxed into just one or two tools! Read the full post here

2. Correlation vs Causation:

1 picture = 1000 words!

Image Source
3. Best Practices from Facebook’s growth team:

Read Chamath Palihapitiya and Andy John’s response to this Quora question here

4. Simple mental model for handling “big data” workloads
Image Source
5. Five things to do as a data scientist in your first 90 days that will have a big impact.

Eric Weber gives 5 tips on what to do as a new data scientist to have a big impact. Read here

Thanks for reading! Now it’s your turn: Which article did you love the most and why?

Data Engineering and Data Science Newsletter #6


The goal of this Insight Extractor’s newsletter is to promote continuous learning for data science and engineering professionals. To achieve this goal, I’ll be sharing articles across various sources that I found interesting. The following 5 articles made the cut for today’s newsletter.

1. How do you measure Word of mouth for growth analytics?

Some really good research and methodologies on how to measure word of mouth for growth analytics. Read here

Source
2. Lean data science

Really good insights, like “measure business performance and not model performance”, with the end goal of delivering business value instead of focusing too much on the algorithm. Read here

3. Good data storytelling: Emoji use in the new normal

Read this to get inspired about how to tell stories through data. Really well done! Go here

Source
4. Why is Data engineering important?

Good post that explains the importance of data engineering here

Source
5. Five things you should know about Data engineering career

This is a good post to read along with the article above on the importance of data engineers. Both of these articles give you a good mental model to explain the role and assess whether this is the right fit for you if you are considering this career track. Read here

Thanks for reading! Now it’s your turn: Which article did you love the most and why?

Four Tenets for effective Metrics Design


The goal of this blog post is to provide four tenets for effective metrics design.

Four Tenets for effective Metrics Design

What is a tenet?

A tenet is a principle honored by a group of people.

Why is effective metrics design important?

Metrics help with business decision-making. Picking the right metric increases the odds of making decisions through data vs. gut/intuition, which can be the difference between success and failure.

Four Tenets for effective metrics design:

  1. We will prioritize quality over quantity of metrics: Prioritizing quality over quantity is important because if teams are tracking many metrics, it’s hard for decision-makers to swarm on the areas that matter most. Having many metrics also decreases the odds of each metric meeting the bar for quality. If instead you have a few metrics that are well thought out and meet the other tenets listed in this post, you increase the odds of building a solid data-driven culture. I am not being prescriptive about the right number of metrics; you should experiment and figure that out. However, I can give you a range: anything less than 3 key metrics might be too few, and more than 15 is a sign that you need to trim down the list.
  2. We will design metrics that are behavior changing (aka actionable): A litmus test for this: ask your business decision-makers to articulate what they will do if the metric 1) goes up N% (let’s say 5%), 2) stays flat, or 3) goes down N%. They should have a clear answer for at least two of the three scenarios above, and if they can’t map a behavior change or action to the metric, then it is not as important as you think. This is a sign that you can cut it from your “must-have” metrics list. That doesn’t mean you stop tracking it, but it gives you a framework to prioritize other metrics over it, or to iterate on the metric design until it is behavior changing.
  3. We will design metrics that are easy to understand: If your metrics are hard to understand, it’s harder to act on them, so being easy to understand is a prerequisite for metrics that are behavior changing. Beyond increasing the odds of a metric being actionable, you are also making it appeal to a wider audience across your teams instead of just the key business decision makers. Having a wide group of people understand your metrics is key to a solid data-driven culture.
  4. We will design metrics that are easy to compare: Metrics that are easy to compare across time periods, customer segments, and other business constructs are easier to understand and act on. For example, if I tell you that we had 1,000 paying customers last week and 1,000 this week, that doesn’t give you enough signal about whether that’s good or bad. But if I share that last week our conversion rate was 2.3% and this week it is 2.1%, then you know that something needs to be fixed in your conversion funnel, given the 20 bps drop. Ratios and rates are easy to compare, so one tactical tip: to make your metrics easy to compare, see if a ratio or rate makes sense in your case. And if your metrics are easy to compare, that increases the odds of them being behavior changing, as the example shows.
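To make the arithmetic behind that comparison concrete, here is a tiny sketch. The visitor counts are hypothetical, chosen so the rates come out near the 2.3% and 2.1% figures above; 1 bps (basis point) is 0.01 percentage points.

```python
def conversion_rate(paying_customers, visitors):
    """Fraction of visitors who became paying customers."""
    return paying_customers / visitors

# Same count of paying customers both weeks, but different traffic:
last_week = conversion_rate(1000, 43478)   # ~2.3%
this_week = conversion_rate(1000, 47619)   # ~2.1%

# Express the week-over-week change in basis points.
delta_bps = (this_week - last_week) * 10_000
print(round(delta_bps))  # about -20, i.e. a 20 bps drop
```

This is exactly why the raw count (1,000 both weeks) hides the problem while the rate exposes it: the denominator changed.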

Conclusion:

In this blog post, you learned four tenets for effective metrics design.

What are your tips for picking good metrics? Would love to hear your thoughts!

Five Tenets for effective data visualization


A tenet is a principle honored by a group of people. As a reader of this blog, you work with data, and data visualization is an important element in your day-to-day work. So, to help you build effective data visualizations, I created the tenets below, which are simple to follow. This work is based on multiple sources, which I’ll reference below.

Five Tenets for effective data visualization:

  1. We will strive to understand customer needs
  2. We will tell the truth
  3. We will bias for simplicity
  4. We will pick the right chart
  5. We will select colors strategically

Examples for each tenet are listed below:

  • We will strive to understand customer needs

Defining and knowing your audience is very important before diving into the other tenets. Doing this will increase your probability of delivering an effective data visualization.

h/t to Mike Rubin for suggesting this over on LinkedIn here

  • We will tell the Truth

We won’t be dishonest with data. See the example below, where Fox News deliberately started the bar chart y-axis at a non-zero number to make the delta look much bigger than it actually is.

Source: Link

  • We will bias for Simplicity

3-D charts increase complexity for end users, so we won’t use something like that and will instead opt for simplicity.

  • We will pick the right chart

I have linked some resources here

  • We will select colors strategically

Source here

Conclusion:

In this post, I shared five tenets that will help you build effective data visualization.

Data Culture Mental Model


What is Data Culture?

First, let’s define culture: “The set of shared values, goals, and practices that characterizes a group of people” Source

Now, building on that to define data culture: what is the set of shared values? Decisions will be made based on insights generated through data. And the group of people represents all decision makers in the organization. So, in other words:

An org that has a great data culture will have a group of decision makers that uses data & insights to make decisions.

Why is building data culture important?

There are two ways to make decisions: one that uses data and one that doesn’t. My hypothesis is that decisions made through data are less wrong. To make this happen in your org, you need a plan. In the sections below, I’ll share the key ingredients and a mental model for building a data culture.

What are the ingredients for a successful data culture?

It’s the 3 P’s: Platform, Process, and People, and continuously iterating on and improving each of the P’s to improve data culture.

How to build data culture?

Here’s a mental model for a leader within an org:

  1. Understand data needs and prioritize
  2. Hire the right people
  3. Set team goals and define success
  4. Build something that people use
  5. Iterate on the data product and make it better
  6. Launch and communicate broadly
  7. Provide Training & Support
  8. Celebrate wins and communicate progress against goals
  9. Continue to build and identify next set of data needs

Disclaimer: The opinions are my own and don’t represent my employer’s view.