I’m in the midst of reading “Thinking, Fast and Slow” by Daniel Kahneman – an extremely interesting book if you have any interest in how you and others form decisions. I won’t discuss the entire book here, as I obviously haven’t finished it yet, but I would like to write a thing or two about a chapter I recently read, since it really resonated with me as something I, first of all, should have known – but certainly should remember moving forward. And one of the best ways of remembering, for me, seems to be trying to explain it to others.

As the title suggests, this has to do with something called the “law of small numbers”. Most people seem to acknowledge that statistics based on large samples produce more accurate results, but fail to recognise that statistics based on smaller samples are not only less accurate, but also produce more extreme outcomes.

Why is this important? I thought I was fully aware of the pitfalls, at least the inaccuracy part – but having read this chapter I realized I wasn’t.

My reasoning, and probably a lot of other people’s, was that if you take a small sample of a larger whole, the small sample of course won’t be as accurate – but it will show a tendency. This CAN be absolutely false. Danish media even slipped big time on this during a recent election, when their early exit poll claimed the wrong victor.

But why is this? How can statistics on small numbers show the complete opposite of statistics performed on the full sample? It has to do with the fact that small samples produce more extreme outcomes. I will use Daniel Kahneman’s example, as it made this really clear for me.

From the same urn, two very patient marble counters take turns. Jack draws 4 marbles on each trial, Jill draws 7. They both record each time they observe a homogeneous sample – all white or all red. If they go on long enough, Jack will observe such extreme outcomes more often than Jill – by a factor of 8 (the expected percentages are 12.5% and 1.56%). Again, no hammer, no causation, but a mathematical fact: samples of 4 marbles yield extreme results more often than samples of 7 marbles do.
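You can check those percentages yourself with a quick simulation. This is my own sketch, not from the book; it assumes a large 50/50 urn, so each draw is effectively an independent coin flip:

```python
import random

def homogeneous_rate(sample_size, trials=200_000, seed=42):
    """Estimate how often a sample of `sample_size` marbles, drawn from a
    large 50/50 red-and-white urn, comes out all one colour."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # True = red, False = white; each marble is an independent 50/50 draw.
        draws = [rng.random() < 0.5 for _ in range(sample_size)]
        if all(draws) or not any(draws):  # all red or all white
            hits += 1
    return hits / trials

jack = homogeneous_rate(4)  # theory: 2 * 0.5**4 = 12.5%
jill = homogeneous_rate(7)  # theory: 2 * 0.5**7 ≈ 1.56%
print(f"Jack (n=4): {jack:.2%}, Jill (n=7): {jill:.2%}, ratio ≈ {jack / jill:.1f}")
```

Run it and Jack’s rate lands near 12.5%, Jill’s near 1.56% – roughly the factor of 8 Kahneman quotes.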

This really made it “click” for me. Of course they do. Small samples are not only more inaccurate but – and this is the very important part – they yield more extreme outcomes.

But as the book so beautifully describes, almost everyone can miss this fact. The Gates Foundation made a huge $1.7 billion investment based on findings that tried to pinpoint which schools produced the best grades. One of the findings was that small schools seemed to outperform larger ones by a factor of 4, which led to larger schools being split into smaller units. The only problem was that the size of the school had nothing to do with the grades. If they had asked which schools produced the lowest grades, the answer would once again have been the small schools. The larger schools simply produced more “average” results because they had more students – larger samples. The small schools had fewer students – smaller samples, which, as we have learned, produce more extreme outcomes.
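The school effect is easy to reproduce with made-up numbers. In this sketch (purely hypothetical data, not the Gates Foundation’s) every student’s score comes from the exact same distribution, so school size has no causal effect at all – yet the small schools crowd both the top and the bottom of the ranking:

```python
import random
import statistics

def school_means(n_schools, school_size, seed):
    """Average score per school, with every student drawn from the
    same distribution (mean 500, sd 100) regardless of school size."""
    rng = random.Random(seed)
    return [statistics.mean(rng.gauss(500, 100) for _ in range(school_size))
            for _ in range(n_schools)]

small = school_means(500, 50, seed=1)    # 500 small schools, 50 students each
large = school_means(500, 2000, seed=2)  # 500 large schools, 2000 students each

ranked = sorted([(m, "small") for m in small] + [(m, "large") for m in large],
                reverse=True)
top_50 = [kind for _, kind in ranked[:50]]
bottom_50 = [kind for _, kind in ranked[-50:]]
print("small schools in top 50:   ", top_50.count("small"))
print("small schools in bottom 50:", bottom_50.count("small"))
```

Both counts come out near 50: the small schools dominate the best *and* the worst results, even though no school is actually better than any other.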

Correlation does not equal causation.

This knowledge has given me a whole new perspective on statistics. I am amazed at how often media, marketers or even politicians use statistics based on very small samples as “proof” of their claims – and for the most part they totally get away with it. Moving forward, I hope to be more observant and aware of this fallacy, to keep myself from making bad decisions based on what I, in the past, might have considered good, valid information.