Applying Fast and Slow Thinking to Data Analytics

by Theresa Kushner

Five tips to harness the power of both systems of thought and get just the right pace and perspective to your thinking.

In reading Daniel Kahneman’s “Thinking, Fast and Slow,” it occurred to me that we, as data analysts, can get stuck in the “fast” thinking part of his concept.

We seek answers from our data, and when it confirms what we believe, we file the data as "evidence" or "proof" and move on to the next problem or challenge. We have little time to reflect on what our data means for the businesses we serve.

For data scientists, analysts and those who ply their trade in this field, thinking “fast” without applying some “slow” thinking is dangerous.

A quick review of "Thinking, Fast and Slow"

Let's first review what Kahneman is saying in this must-read book. In it, he asks the reader to think about thinking.

Your mind is always active, awake and asleep, and it is always sorting through facts and data that confirm, lead or mislead your thoughts.

He proffers the idea that perceptions, biases and quick judgments are the instincts that have kept us alive and thriving for millions of years. These are the thoughts that tell us what in our environment might hurt or kill us.

That's good. But unfortunately, in our modern world, we also have perceptions about race, religion and politics, and data that both supports and contradicts those perceptions.

This is where it becomes a challenge for data analysts.

With Kahneman figuratively over my shoulder, here's some advice for analysts who are faced each day with the challenge of explaining analyses that contradict long-held beliefs or, scarier still, confirm them.

Here are five things you can do to make sure that you harness the power of both systems of thought and bring just the right pace and perspective to your thinking.

1. Question everything, even when it’s good.

Most F1000 companies have, over the years, developed surveys that help them understand the satisfaction of their employees. The higher the score, the happier the executives. But what if the score is off-the-charts high?

At one high-tech firm, that was exactly the situation. I remember asking the team what they would have done had the scores been extremely negative instead of extremely positive.

The response: they would have mounted a well-coordinated, all-out effort to determine what could be done through management, policy and cultural changes. But when the result was so high, no one questioned it.

With a little slow thinking, perhaps the teams might have sought to understand whether too much emphasis was placed on the scores and, therefore, people felt like they would be punished if scores were not high.

That’s one theory that could have been explored with some questioning around the results.

2. Make sure that the data you use in your algorithm or model is devoid of biases.

For example, one of the most common sources of analysis bias is extracting data with all the variables believed to be valuable, then discovering that 50% of the records in the data set lack complete information for one of those variables.

If you run your model with this incomplete information, you have biased the result.  If you exclude the variable, you may not get the results you need. Also, make sure that you have consulted both IT and business people about the metadata surrounding your algorithms.
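Before running a model, it is worth quantifying exactly how incomplete each variable is, so the drop-or-keep decision is deliberate rather than accidental. Here is a minimal sketch in Python using pandas; the dataset, column names and the 40% threshold are all illustrative assumptions, not from the article.

```python
import pandas as pd

# Hypothetical customer data set (illustrative values only)
df = pd.DataFrame({
    "account_id": [1, 2, 3, 4],
    "revenue": [100.0, None, 250.0, None],   # half the records are missing this variable
    "region": ["east", "west", "east", "west"],
})

# Share of missing values per variable
missing_share = df.isna().mean()
print(missing_share)

# Flag variables whose missingness exceeds a chosen threshold (assumed 40% here)
suspect = missing_share[missing_share > 0.4].index.tolist()
print("Variables to investigate before modeling:", suspect)
```

A report like this turns the trade-off in the paragraph above into an explicit decision: for each flagged variable, you choose whether to impute, exclude or go back to the source, and you can document why.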

Often, we bias our findings because we don't understand that "account" in sales means something very different from "account" in finance.

Understanding the context in which data and models exist is most important.

Tips 3 through 5 and Takeaway