Came across this question on Quora today: What are the most subtle ways to deceive people with statistics?
The answers are legendary. Going to be one for the bookmarks.
Came across a very interesting and persuasive video on baseball via Kottke.org today. It’s a great example of what an interesting question, effective visualisation, and some statistical knowledge can do.
The question the video seeks to answer is the following: what would happen if baseball player Barry Bonds, who happened to play one of his greatest (if not the greatest) baseball seasons ever in 2004, played without a baseball bat?
I’m not a baseball fan, and frankly quite a number of the things that were mentioned in the video were lost on me. But I’m a fan of interesting statistics and great visualisations, and this definitely had both.
And despite my having a few doubts about its conclusion (the results seem too good to be true – watch to the end!), it’s convincing and definitely worth a watch if you’re into either baseball or statistical visualisations.
Imagine for a moment that you want to implement a new sales initiative that you think will transform your business. The problem is, you’re not too sure if it’d work.
You decide, prudently, that maybe a pilot test would be good: let’s roll out the initiative to just a small subset of the company, the pilot group, and see how it performs.
If it performs well, great, we roll it out to the rest of the company. If it performs badly, no drama – we simply stop the initiative at the pilot stage and don’t roll it out to the rest of the company. The cost of the pilot would be negligible compared to the full implementation.
After consulting with your team, you decide that your pilot group will be based on geography. You pick a region you know well, with relatively homogeneous customers who are extremely receptive to your idea.
You bring your idea to your boss, who likes it and agrees to be the project sponsor. However, he tells you in no uncertain terms that in order for the initiative to go beyond a pilot, you need to show conclusively that it has a positive sales impact. You have no doubt it has, and you readily agree, “of course!”
Knowing that measurement is a little outside your area of expertise, you consult your resident data scientist on the best way to “show conclusively” that your idea works. He advises you that the best way to do that would be through an A/B test.
“Split the customers in your pilot group, the region you’ve picked, randomly into two,” your data scientist says. “Let one group be the ‘control’ group, on which you do nothing, and the other be the ‘test’ group, on which you roll out the initiative on. If your test group performs statistically better than the control group — I’ll advise you later on how to do that — you know you’ve got a winning initiative on your hands.”
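As an aside, here’s a minimal sketch of the statistical comparison the data scientist is alluding to, assuming you end up with daily sales figures for each group. The numbers are made up, and the choice of a two-sample t-test is my own illustration, not anything he prescribes:

```python
# A toy comparison of test vs. control daily sales (all numbers made up).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical daily sales for each group over a 90-day pilot.
control_sales = rng.normal(loc=10_000, scale=1_500, size=90)
test_sales = rng.normal(loc=10_600, scale=1_500, size=90)  # with the initiative

# Two-sample t-test: is the difference in means likely to be chance?
t_stat, p_value = stats.ttest_ind(test_sales, control_sales)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value (conventionally < 0.05) suggests the initiative's
# lift is unlikely to be down to random variation alone.
```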
You think about it, but have your doubts. “But,” you say, “wouldn’t that mean that I would only impact a portion of the pilot group? I can’t afford to potentially lose out on any sales – can’t I roll it out to the whole region and have some other group, outside the pilot, be the control?”
Your data scientist thinks about it for a moment, but doesn’t look convinced.
“You can, but it wouldn’t be strictly A/B testing if you were to do that. Your pilot group was based on geography. Customers in other geographies won’t have the exact same characteristics as customers in your pilot geography. If they were to perform differently, it could be down to a host of other factors, like environmental differences; or cultural differences; or perhaps even sales budget differences.”
You’re caught in two minds. On the one hand, you want this to be scientific and prove beyond a doubt the efficacy of the initiative.
On the other hand, having an initiative that brings in an additional $2 million in revenue looks better than one that brings in an additional $1.5 million, due to having a control group you can’t impact.
Why would you want to lose $500,000 when you know your idea works?
What do you do?
Without a culture of experimentation, it’s extremely difficult for me to recommend that you actually stick by the principles of proper experimentation and go for the rigorous A/B route. There’s a real agency problem here.
You, as the originator of the idea, have a stake in trying to make sure the idea works. Even though it’d have just been a pilot, having it fail means you’d have wasted time and resources. Your credibility might take a hit. In a way, you don’t want to rigorously test your idea if you don’t have to. You just want to show it works.
Even if it means an ineffective idea is stopped before more funds are channeled to an ultimately worthless cause, for you there’s really no benefit. Good for the company; bad for you.
In the end, I think it takes a very confident leader to go through with the proper A/B testing route, especially in a culture not used to proper experimentation. It’s simply not easy to walk away from potential revenue gains through holding out a control group, or scrapping a project because of poor results in the pilot phase.
But it is the leader who rigorously tests his or her ideas, who boldly assumes and cautiously validates, who will earn the respect of those around them. In the long run, it is this leader who will not be busy fighting fires, attempting to save doomed-to-fail initiatives.
Without these low-value initiatives on this leader’s plate, there will be more resources that can be channeled to more promising ventures. It is this leader who will catch the Black Swans, projects with massive impacts.
I leave you with a passage from an article I really enjoyed from the Harvard Business Review called The Discipline of Business Experimentation, which is a great example of a business actually following through with scrapping an initiative after the poor results of a business experiment:
When Kohl’s was considering adding a new product category, furniture, many executives were tremendously enthusiastic, anticipating significant additional revenue. A test at 70 stores over six months, however, showed a net decrease in revenue. Products that now had less floor space (to make room for the furniture) experienced a drop in sales, and Kohl’s was actually losing customers overall. Those negative results were a huge disappointment for those who had advocated for the initiative, but the program was nevertheless scrapped. The Kohl’s example highlights the fact that experiments are often needed to perform objective assessments of initiatives backed by people with organizational clout.
Can you imagine if they decided not to do a proper test?
What if they thought, “let’s not waste time; if we don’t get on the furniture bandwagon now our competitors are going to eat us alive!” and jumped in with both feet, skipping the “testing” phase?
Or what if the person who proposed the idea felt threatened that, should the initiative fail, it would make him or her look bad, and decided to cherry-pick examples of stores for which it worked well? (An only too real and too frequent possibility when companies don’t conduct proper experiments.)
It would, I have little doubt, have led to very poor results.
And now imagine if this happened with every single initiative the company came up with, large or small. No tests, just straight from dream to reality.
But unfortunately, in so many companies, that’s just the case.
I need to have a data-dump on the sales forecasting process and forecasts.
On optimistic and pessimistic forecasting
On the granularity of forecasting
On building great predictive models
On overfitting a model and “perfect” models
So many ideas – have to expand on some of these one of these days.
I had KFC (Kentucky Fried Chicken) for breakfast yesterday. Chicken rice porridge and a “breakfast” wrap (that oddly enough didn’t seem to contain any chicken).
It was decent, and I liked it.
So I was quite excited when I saw that the receipt had a link to an online customer satisfaction survey, for which I would get a free piece of chicken if I completed it. A pretty good deal, I thought.
But I couldn’t help but wonder about how useful it was to KFC.
Surely the survey responses would over-represent people who like their food (and, to a certain degree, their service)? If I hated their food and/or their service, and swore never to go back there again, what good would offering me a free piece of chicken do?
These are the people you probably most want to hear from, and yet they have absolutely no incentive to complete such a survey (in all likelihood, being normal people like us, they’d vote with their dollars and just never patronise the store again, instead of submitting feedback).
It would, in short, be far from a representative survey.
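To make the distortion concrete, here’s a toy simulation (with entirely made-up response rates) in which happier customers are more likely to fill in the survey:

```python
# Toy simulation of survey self-selection. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# True satisfaction of 100,000 customers on a 1-5 scale.
satisfaction = rng.integers(1, 6, size=100_000)

# Made-up response rates: delighted customers answer 30% of the time,
# unhappy ones almost never (they just don't come back).
response_rate = {1: 0.02, 2: 0.05, 3: 0.10, 4: 0.20, 5: 0.30}
responds = rng.random(satisfaction.size) < np.vectorize(response_rate.get)(satisfaction)

print("True mean satisfaction:    ", satisfaction.mean())           # ~3.0
print("Surveyed mean satisfaction:", satisfaction[responds].mean()) # ~4.1
```

The survey’s average comes out a full point higher than reality, without a single dishonest answer.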
I just hope that those interpreting the results, and those on the receiving end of said interpretation, understand the limitations of just such a survey, and discount the very likely amplified, far-too-positive results.
And if the results are lukewarm instead of three-Michelin-stars-worthy? Then oh dear.
Andrew McAfee posted about a very intriguing study on personality, gender, and age in relation to language. In essence, the study looked at the correlation between the words people used in their Facebook statuses and their personality, gender, and age.
You’ll know why I say it’s intriguing when you take a look at some of the findings. Especially interesting are the word maps.
Here’s one showing the words used by extraverted vs. introverted people, and by emotional stability (i.e. personality). Neurotic people are sad, angry, and existential. Emotionally stable people are… hmm… outdoorsy/active? As McAfee mentioned in his post, it’s an interesting correlation between these sorts of activities and emotional stability, but one in which cause and effect are difficult to determine. Does physical activity lead to a more emotionally stable personality, or do emotionally stable people just tend towards physical activity?
I’m pretty much a 60/40 introvert (60% introvert, 40% extravert), so I’m always intrigued by studies on introversion, and I just couldn’t ignore the huge “anime” (and its related terms, like “Pokemon”) popping up in the introversion word map. I do wonder how much of a part cultural influence (i.e. a person’s country of origin/residence) plays. And did you notice the number of emoticons in that map? Me too 🙂
And here’s the word map for males vs. females. I love this one. It seems the biggest things on females’ minds are shopping and relationships, while for males it’s all about sex and games. As McAfee mentions on his blog, this “does not reflect well at all on my gender”.
And here’s one for age. My guess as to why daughters are more talked about by the 30-to-65 age groups is that women are the ones talking about them (men just talk about sex and sports). In the gender map, relationships dominate what women talk about (apart from chocolate and shopping), and from my experience of TV watching, women don’t really talk about sons because sons pretty much take care of themselves. Daughters, on the other hand, are always worth worrying about.
I could imagine fiction writers using these to build character dialogues; or academics building ever more insightful anthropological maps; or marketers crafting targeted campaigns. It’s a really imaginative use of big data, and one that I think is brilliant.
Who says Big Data’s failed?
I can’t believe I didn’t write about it before today: the difference between uncertainty and risk.
I’d originally thought that uncertainty and risk were one and the same: if you were uncertain about taking some action, and had to decide whether or not to take it, then it was a risky action to take.
But it’s not like that.
Risk involves known odds. Known probabilities. Known possible outcomes. Uncertainty does not.
Let’s say that you have to throw a die whose outcome determines whether you live or die. If it’s four or greater you live; if it’s three or less you die. That’s a risk. But it’s not uncertain, because the odds and outcomes are known.
If you were not given the conditions under which you’d live or die, so that you didn’t know which range of values determined which fate, things would get pretty uncertain. You wouldn’t know whether throwing any number from 1 through 6 meant you lived or died, or even whether living or dying were among the outcomes you could expect.
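The risk half of that is fully computable; here’s the arithmetic as a quick sketch:

```python
# Under risk, the rule and the outcomes are known, so the odds are computable.
die_faces = [1, 2, 3, 4, 5, 6]
p_live = sum(1 for face in die_faces if face >= 4) / len(die_faces)
print(f"P(live) = {p_live}")  # 0.5 -- a grim but informed decision

# Under uncertainty there is nothing to compute: we know neither the
# cut-off rule nor even whether living and dying are the possible outcomes.
```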
To use another analogy, it’s like playing Russian Roulette without knowing how many bullets there are in the chambers and not knowing if the gun is real in the first place.
Under conditions of risk you’re making an informed decision.
Under conditions of uncertainty, however, the only informed part of the decision is acknowledging the uncertainty itself: “I know the outcomes and odds are uncertain, but I’m going ahead anyway.”
The Singapore government announced a while back that they were going to start an initiative to try to reduce peak period crowds on our public rail system or MRT (Mass Rapid Transit). The initiative involved providing free and subsidised travel for passengers on selected trips during the morning off-peak period.
This initiative kicked off two days ago. Two days on, some people are wondering if it has made any difference – trains seem as packed as they were before, and those who were already taking trains during the free travel periods have found little to no difference in the number of passengers from before.
But before we draw any conclusions – aside from the fact that it’s only day two and far too early to conclude anything – we have to realise that this initiative kicked in at a time filled with confounding variables.
What we’re trying to measure here is whether the government initiative has worked by reducing peak period travel. So we’re trying to see if there’s a relationship between the [Government Initiative] and [Fewer Peak Period Passengers]; or more precisely, whether [Government Initiative] caused [Fewer Peak Period Passengers].
A confounding variable is an additional variable (one we’d rather do without) that obscures the relationship between the variables we’re trying to measure, because its introduction impacts the end result. Let me give you an example.
Let’s say there’s a group of people who are hard of hearing. You discover that they love listening to loud music and have, in fact, done so for at least the last five years. You might conclude that listening to loud music makes you hard of hearing.
But let’s say that you then discover that this group of people all used to operate jackhammers, and were subject to loud noises for most of their working lives. Would you be as confident of your conclusion now?
What if these people were in their 80s? Would that change your mind yet again?
Loud music, operating jackhammers, and age can all contribute to hearing loss. Drawing conclusions from this group to make predictions on hearing loss is going to be tough. You just can’t quite single out one cause for hearing loss.
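A toy simulation (with entirely made-up effect sizes) shows how a confounder can manufacture a correlation out of nothing:

```python
# Toy confounding demo: jackhammer years drive BOTH loud-music listening
# and hearing loss, so music and hearing loss correlate even though
# music has no direct effect here at all. All effect sizes are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

jackhammer_years = rng.uniform(0, 30, size=n)                        # confounder
loud_music_hours = 2 + 0.3 * jackhammer_years + rng.normal(0, 2, n)  # no real effect
hearing_loss_db = 5 + 1.0 * jackhammer_years + rng.normal(0, 5, n)

r = np.corrcoef(loud_music_hours, hearing_loss_db)[0, 1]
print(f"correlation(music, hearing loss) = {r:.2f}")  # clearly positive anyway
```

Naively reading that correlation as “loud music causes hearing loss” would be exactly the mistake described above.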
So, as I was saying, any analysis of the MRT rides this week is definitely going to be badly confounded by, at the very least, everything else that happens to be going on at the same time.
Confounding is a terrible thing to have when you’re trying to measure cause and effect. I remember having been involved in several performance measurement initiatives, all happening at the same time, designed to improve sales numbers.
The problem with such initiatives is that you could never really know how much of an impact a particular initiative had on the overall sales results. You could know the impact of all the initiatives put together, but any single one would probably have been affected by others because, as mentioned before, they were all happening at the same time.
It’s difficult to get management to agree to put off trying initiatives simply because you want to get a more accurate measurement. It’s like telling a get-rich-quick addict to try only one get-rich-quick scheme at a time to find out what really works. It just doesn’t happen.
But when you don’t know what initiative works and what doesn’t, you can’t afford to drop even a single one of them. And juggling all of them can get pretty expensive.
I was doing some secondary research on the web, trying to gather statistics on small businesses and websites, when I realised that there just wasn’t much reliable data around, and that the majority of the statistics on the web were referencing themselves. (This is like when an article on website A points to 50% of small businesses not having websites in 2011, a statistic it obtained from an article written in 2009 on website B, which got its information from website C, which was incidentally quoting an unconfirmed “Internet Research expert” who wrote it on some tech forum, citing some old and unconfirmed piece he remembered reading a couple of years ago.)
It reminded me of an article I read on how Wikipedia is subject to this sort of self-referencing too.