Experience vs. Memory – duration neglect and peak-end rule

Just finished Thinking, Fast and Slow – and what an awesome book! I have already written a bit about it in older posts, and my previous post compares it to a couple of other popular behavioral science books. If you find behavioral science and psychology the least bit exciting, you'll probably love this book. As written before, it is not a book you just sprint through; as one quote on the back states: "Buy it fast. Read it slowly. It will change the way you think!"

But I just wanted to share one last anecdote from the book before moving on. Towards the end of the book there are chapters outlining the two selves: the experiencing self and the remembering self. The names are rather self-explanatory, meaning the difference between what you actually experience in the moment versus what you remember the experience as.

There have been some rather interesting studies in this area. For example, people undergoing different types of surgery have been equipped with devices that let them rate the pain of the experience at small intervals during the surgery, on a scale from 1 to 10. This is the experiencing self. Afterwards they are asked to rate the pain of the experience as a whole, again on a scale from 1 to 10. That is the remembering self.

What seemed to emerge from those studies was that it was not the total amount of pain that mattered – the surgery with the most pain during the experience was not the one rated as most painful by the remembering self. Nor was it the total time spent in pain. Instead, it seemed to be the surgery where the pain towards the end was highest. If the pain tapered off towards the end of the surgery, people generally remembered it as less painful than they actually experienced it to be. This is known as the peak-end rule. The duration of the pain did not matter to the remembering self – known as duration neglect – and the only real determining factor seemed to be how the surgery felt towards the end.

To test this, Daniel Kahneman ran a study where the participants' hands were submerged in a very cold ice bath. As in the surgery studies, the subjects were equipped with devices to rate their experience during the trial and were then asked afterwards to rate the experience as a whole. The first trial was 4 minutes in ice-cold water. The next trial was the same 4 minutes in ice-cold water, but then another 3-4 minutes where slightly warmer water was released into the bowl without the subject knowing, so the temperature rose just enough to make the end less uncomfortable. Finally, for the third trial, they were asked to choose whether to repeat trial 1 or trial 2 – and as you have probably figured, the vast majority chose trial number 2, even though any rational observer would have chosen number 1.
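Out of curiosity, here is a minimal sketch of how that result can come about. It assumes, as the usual formulation of the peak-end rule does, that the remembering self scores an episode roughly by the average of its worst moment and its final moment, while the experiencing self effectively adds up pain over time. The pain ratings below are invented for illustration, not taken from the study.

```python
# Toy comparison of "experienced" versus "remembered" pain for two trials.
# The ratings are made up; they only mimic the shape of the two cold-water
# trials (trial 2 = trial 1 plus a milder tail).

def total_pain(ratings):
    """What the experiencing self lives through: pain summed over the whole episode."""
    return sum(ratings)

def remembered_pain(ratings):
    """Peak-end rule: memory scores the episode by the average of the worst
    moment and the final moment, ignoring duration."""
    return (max(ratings) + ratings[-1]) / 2

trial_1 = [8, 8, 8, 8]           # 4 intervals of ice-cold water, pain rated 0-10
trial_2 = [8, 8, 8, 8, 6, 5, 4]  # same 4 intervals, plus a slightly warmer tail

for name, trial in [("trial 1", trial_1), ("trial 2", trial_2)]:
    print(name, "- total pain:", total_pain(trial), "- remembered pain:", remembered_pain(trial))

# trial 1 - total pain: 32 - remembered pain: 8.0
# trial 2 - total pain: 47 - remembered pain: 6.0
```

Trial 2 contains strictly more pain in total, yet its peak-end score is lower, which is the pattern the subjects' choices followed.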

This is the peak-end rule and duration neglect at work. I find it fascinating and amusing how these completely irrational factors play into our lives. It also raises some interesting questions, for instance: should you prolong some surgeries artificially to taper off the pain towards the end, thereby giving the patient a more pleasant memory of the surgery? Or, in the less serious department: was your entire experience of a concert really ruined because it started raining at the end?

Trying to be aware of the peak-end rule and duration neglect can make you less likely to get fooled by them. But as Daniel Kahneman writes somewhere, even though he has studied all these factors for decades, he still gets fooled by them from time to time – we just have to acknowledge and live with our irrational selves to the best of our abilities.

Books on psychology, our irrational mind, thinking and decisions

There are several books on the topic of decision-making, a lot more than I will ever read, but here are a couple of recommendations if the topic is of interest to you.

If you aren't interested – maybe you should be. "Surprisingly", we are not as rational as we might think. Our feelings, perceptions and mood, along with other factors, play a far greater role than we would like them to. In a "perfect" world we would not take two opposing stances on the same topic just because of different wording, or be "tricked" into making a different choice just because of a simple marketing trick.

Check this example from Predictably Irrational.

The Economist runs a campaign with the following options:

  1. Internet-only subscription $59
  2. Print-only subscription $125
  3. Print-and-Internet subscription $125

Dan Ariely (the author) ran an experiment on 100 students at MIT, and this is what they opted for:

  1. Internet-only subscription $59 – 16 students
  2. Print-only subscription $125 – 0 students
  3. Print-and-Internet subscription $125 – 84 students

You would most likely also have chosen the third option, and with good reason – it seems like the best deal. But were you somehow influenced by the mere presence of the Print-only option, which of course no one in their right mind would choose? If that option did not influence the selection, removing it should of course yield roughly the same spread of choices. He then ran the same experiment, but without the Print-only option, and this is how people opted:

  1. Internet-only $59 – 68 students
  2. Print-and-Internet $125 – 32 students

If people chose rationally, this would of course not be the case. But as the example clearly shows, the presence of an option that no one would consider totally alters the decisions – and trust me, marketers know this!

But why do we do this? The "decoy" acts as something to compare option 3 with. We are not sure whether we want internet or print, but with the Print-only option we have a comparison that makes Print-and-Internet look like a good deal.

This can be deployed by real estate agents trying to sell you a house by showing you three houses: the first a bit out of town, the second in the city, and the third also in the city but in need of some repairs and in poorer condition than the second. This would, as our example shows, make you more likely to opt for the good-condition city house. And the applications are numerous: vacations, cars, computers, etc.

So if you want to be a bit more aware of how your decisions are shaped and make more rational decisions, you should definitely give one of these books a read. But which one?

How We Decide is by far the one that made the least impression on me. Not that it is a bad book; there are some good examples in it, but just not as many "aha" or "I could have done that" moments as in the others. It simply did not engage me quite as much. I read it first and found it interesting, but with the other options available I would go for one of those instead.

Predictably Irrational (PI) is by far the most entertaining and engaging. It is easy to relate to most of the examples, and it is very well written. It is a hard-to-put-down type of book. It is not as thorough as Thinking, Fast and Slow, but if you are not really sure how entertaining it is to read about psychology and your own mind, I would highly recommend starting with Predictably Irrational.

Thinking, Fast and Slow is, as mentioned above, the most thorough. It is not as easily readable as PI, in that it makes you think much harder and sometimes presents rather complex theories and ideas. It digs a lot deeper than PI and has far more material; PI even quotes some of Daniel Kahneman's discoveries. My recommendation would be to start with PI and, if you are hungry for the hardcore stuff, go buy Thinking, Fast and Slow. You could read it as your first psychology book on decisions, but then you should be very, very curious; otherwise it might seem a little too theoretical. PI is an engaging read for almost everyone, while Thinking, Fast and Slow is an engaging read if you find the topic engaging, I would say.

I am always open to new book recommendations, so please let me know if you have any or if you have comments about the books mentioned.

Happy reading!

Law of small numbers in statistics

I'm in the midst of reading "Thinking, Fast and Slow" by Daniel Kahneman. An extremely interesting book if you have any interest in how you and others form decisions. I will not discuss the entire book here, as obviously I haven't finished it yet, but I would like to write a thing or two about a chapter I recently read, since it really resonated with me as something I, first of all, should have known, but certainly should remember moving forward. And one of the best ways of remembering, for me, seems to be trying to explain it to others.

As the title suggests, this has to do with something called the "law of small numbers". Most people seem to acknowledge that statistics based on large samples produce more accurate results, but fail to recognise that statistics based on smaller samples are not only more inaccurate but also produce more extreme outcomes.

Why is this important? I thought I was fully aware of the pitfalls, at least the inaccuracy part – but having read this chapter I realized I wasn't.

My reasoning, and probably a lot of other people's, went like this: if you take a small sample of a larger whole, the small sample of course won't be as accurate, but it will still show a tendency. This CAN be absolutely false. Danish media even slipped up big time during a recent election, failing to recognise exactly this, when their early exit poll named the wrong victor.

But why is this? How can statistics on small samples show the complete opposite of statistics performed on the full sample? It has to do with the fact that small samples produce more extreme outcomes. I will use Daniel Kahneman's example, as it made it really easy for me to understand.

From the same urn, two very patient marble counters take turns. Jack draws 4 marbles on each trial, Jill draws 7. They both record each time they observe a homogeneous sample – all white or all red. If they go on long enough, Jack will observe such extreme outcomes more often than Jill – by a factor of 8 (the expected percentages are 12.5% and 1.56%). Again no hammer, no causation, but a mathematical fact: samples of 4 marbles yield extreme results more often than samples of 7 marbles do.
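The numbers are easy to verify. Assuming the urn holds equal numbers of red and white marbles (so each draw comes up red or white with probability 0.5, roughly independently), a homogeneous sample of n draws happens with probability 2 * 0.5^n. A few lines of Python confirm the figures quoted above:

```python
# Probability of drawing an all-red or all-white sample of n marbles,
# assuming a 50/50 urn and effectively independent draws.
def p_homogeneous(n):
    return 2 * 0.5 ** n  # "all red" plus "all white"

print(p_homogeneous(4))                      # 0.125    -> 12.5% for Jack's samples of 4
print(p_homogeneous(7))                      # 0.015625 -> ~1.56% for Jill's samples of 7
print(p_homogeneous(4) / p_homogeneous(7))   # 8.0      -> the "factor of 8"
```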

This really made it “click” for me. Of course they do. Small samples are not only more inaccurate but – and this is the very important part – they yield more extreme outcomes.

But as the book so beautifully describes, almost everyone can miss this fact. The Gates Foundation made a huge $1.7 billion investment based on findings that tried to pinpoint which schools produced the best grades. One of the findings was that small schools seemed to outperform the larger ones by a factor of 4, which led to splitting larger schools into smaller units. The only problem was that the size of the school had nothing to do with the grades: if they had asked which schools produced the lowest grades, once again it would have been the small schools. The larger schools simply produced more "average" results because they had more students and thereby larger samples. The small schools had fewer students and thereby smaller samples, which, as we have learned, produce more extreme outcomes.
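You can see the same mechanism in a few lines of simulation. In the sketch below every student's grade is drawn from the same distribution, so school size has no effect whatsoever on quality, yet the small schools still crowd both the top and the bottom of the ranking. The school sizes and the grade scale are invented for illustration.

```python
# Simulated "school rankings" where size has no causal effect on grades:
# every student's grade comes from the same normal distribution.
import random

random.seed(1)

def average_grade(n_students):
    # identical grade distribution for every school, regardless of size
    return sum(random.gauss(7.0, 2.0) for _ in range(n_students)) / n_students

schools = [("small", 30)] * 500 + [("large", 1000)] * 500
results = sorted(((label, average_grade(n)) for label, n in schools),
                 key=lambda r: r[1], reverse=True)

top_50 = [label for label, _ in results[:50]]
bottom_50 = [label for label, _ in results[-50:]]

print("small schools among the 50 best: ", top_50.count("small"))
print("small schools among the 50 worst:", bottom_50.count("small"))
# Both counts come out at or near 50: the small schools dominate *both* tails,
# purely because their averages are based on fewer students and so vary more.
```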

Correlation does not equal causation.

This knowledge has given me a whole new perspective on statistics. I am amazed at how often the media, marketers or even politicians use statistics based on very small samples as "proof" of their claims. And for the most part they totally get away with it. But moving forward, I hope to be more observant and aware of this fallacy, to keep myself from making bad decisions based on what I, in the past, might have considered good, valid information.