CRITICAL THINKING – Cognitive Biases: Reference Dependence and Loss Aversion [HD]

(intro music) My name is Laurie Santos. I teach psychology at Yale University, and today I want to talk to you about reference dependence and loss aversion. This lecture is part of a series on cognitive biases.

Imagine that you're a doctor heading a medical team that's trying to fight a new strain of deadly flu, one that's currently spreading at an alarming rate. The new flu is so devastating that six hundred million people have already been infected, and if nothing is done, all of them will die. The good news is that there are two drugs available to treat the disease, and your team can decide which one to put into mass production. Clinical trials show that if you go with the first drug, drug A, you'll be able to save two hundred million of the infected people. The second option is drug B, which has a one-third chance of saving all six hundred million people, but a two-thirds chance that no one infected will be saved. Which drug do you pick?

You probably thought drug A was the best one. After all, with drug A, two hundred million people will be saved for sure, which is a pretty good outcome. But now imagine that your team is faced with a slightly different choice. This time, it's between drug C and drug D. If you choose drug C, four hundred million infected people will die for sure. If you choose drug D, there's a one-third chance that no one infected will die, and a two-thirds chance that six hundred million infected people will die. Which drug do you choose in this case? I bet you probably went with drug D. After all, a chance that no one will die seems like a pretty good bet.

If you picked drug A in the first scenario and drug D in the second, you're not alone. When behavioral economists Danny Kahneman and Amos Tversky gave these scenarios to college students, seventy-two percent of people said that drug A was better than B, and seventy-eight percent of people said that drug D was better than C.

But let's take a slightly different look at both sets of outcomes. In fact, let's depict both choices in terms of the number of people who will live and die. Here's your first choice: drug A will save two hundred million people for sure, and for drug B, there's a one-third chance that all six hundred million infected people will be saved and a two-thirds chance that no one infected will be saved. And now, let's do the same thing for drugs C and D. Surprisingly, you can now see that the two options are identical. Drugs A and C will save two hundred million people, while four hundred million people are certain to die. And with both drug B and drug D, you have a one-third chance of saving all six hundred million people and a two-thirds chance of saving no one.

We can argue about whether it's better to save two hundred million people for sure, or to take a one-third chance of saving all of them. But one thing should be clear from the example: it's pretty weird to prefer drug A over B at the same time as you prefer drug D over C. After all, they're exactly the same drugs with slightly different labels.
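To see the arithmetic behind that equivalence, here's a quick sketch in Python (my addition, not part of the lecture) that computes the expected number of survivors for each drug. The lottery descriptions are taken straight from the scenarios above:

    # Expected survivors, in millions, out of 600M infected.
    # Each drug is a list of (probability, survivors) outcomes.
    TOTAL = 600

    def expected_survivors(lottery):
        return sum(p * saved for p, saved in lottery)

    drugs = {
        "A": [(1.0, 200)],                  # 200M saved for sure
        "B": [(1/3, TOTAL), (2/3, 0)],      # 1/3: all saved; 2/3: none
        "C": [(1.0, TOTAL - 400)],          # 400M die for sure = 200M saved
        "D": [(1/3, TOTAL), (2/3, 0)],      # 1/3: no one dies; 2/3: all die
    }

    for name, lottery in drugs.items():
        print(name, expected_survivors(lottery))  # every drug: 200.0

Note that C and D aren't merely equal to A and B in expectation; they are the very same lotteries, just written down in terms of deaths instead of lives saved.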
Why does a simple change in wording change our judgments about exactly the same options? Kahneman and Tversky figured out that this strange effect results from two classic biases that affect human choice, biases known as "reference dependence" and "loss aversion."

"Reference dependence" just refers to the fact that we think about our decisions not in terms of absolutes, but relative to some status quo or baseline. This is why, when you find a dollar on the ground, you don't think about that dollar as part of your entire net worth. Instead, you think in terms of the change that the dollar made to your status quo. You think, "Hey, I'm one dollar richer!" Because of reference dependence, you don't think of the options presented earlier in terms of the absolute number of lives saved. Instead, you frame each choice relative to some status quo.
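As a toy illustration of that framing (again my own sketch with a hypothetical helper, not anything from the lecture), the same final outcome, two hundred million people alive, codes as a gain from one reference point and a loss from another:

    # Reference dependence: whether an outcome feels like a gain or a
    # loss depends on the baseline it's measured against.
    def framed_change(final, reference):
        delta = final - reference
        return delta, ("gain" if delta >= 0 else "loss")

    alive = 200  # million alive under drugs A or C

    # Scenario 1's implicit baseline: zero people saved so far.
    print(framed_change(alive, reference=0))    # (200, 'gain')  "200M saved"

    # Scenario 2's implicit baseline: all 600M still alive.
    print(framed_change(alive, reference=600))  # (-400, 'loss') "400M lost"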
And that's why the wording matters. The first scenario is described in terms of the number of lives saved. That's your reference point: you're thinking in terms of how many additional lives you can save. In the second, you think in terms of how many more lives you could lose. And that second part, worrying about losing lives, leads to the second bias that's affecting your choices: loss aversion.

Loss aversion is our reluctance to make choices that lead to losses. We don't like losing stuff, whether it's money, or lives, or even candy. We have an instinct to avoid potential losses at all costs. Economists have found that loss aversion causes us to do a bunch of irrational stuff. Loss aversion causes people to hold onto property that's losing value in the housing market, just because they don't want to sell their assets at a loss. Loss aversion also leads people to invest more poorly, even to avoid risky stocks that will do well overall, because we're afraid of a small probability of losses.
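One standard way to formalize this asymmetry is the value function from Kahneman and Tversky's prospect theory, in which losses are weighted more steeply than equivalent gains. Here's a rough sketch using the median parameter estimates (α = β ≈ 0.88, λ ≈ 2.25) that Tversky and Kahneman reported in 1992; treat the numbers as illustrative rather than definitive:

    # Prospect-theory value function:
    #   v(x) = x**alpha            for gains  (x >= 0)
    #   v(x) = -lam * (-x)**beta   for losses (x < 0)
    ALPHA, BETA, LAM = 0.88, 0.88, 2.25

    def value(x):
        """Subjective value of a change x relative to the reference point."""
        return x ** ALPHA if x >= 0 else -LAM * ((-x) ** BETA)

    print(value(100))   # ~  57.5: how good a 100-unit gain feels
    print(value(-100))  # ~ -129.5: a same-sized loss hurts over twice as much

Because λ > 1, the curve is steeper on the loss side, which is exactly the "losses loom larger than gains" intuition the lecture describes.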
Loss aversion causes us to latch onto the fact that drugs C and D involve losing lives. Our aversion to any potential loss causes us to avoid drug C and to go with drug D, which offers a chance of not losing anyone. Our loss aversion isn't as activated when we hear about drugs A and B: both of them involve saving people, so why not go with the safe option, drug A over drug B?

Merely describing the outcomes differently changes which scenarios we find more aversive. If losses are mentioned, we want to reduce them as much as possible, so much so that we take on a bit more risk than we usually like. So describing the decision one way, as opposed to another, can cause us to make a completely different choice. Even in a life-or-death decision like this, we're at the mercy of how our minds interpret information. And how our minds interpret information is at the mercy of our cognitive biases.
is at the mercy of our cognitive biases. Subtitles by the Amara.org community

33 thoughts on "CRITICAL THINKING – Cognitive Biases: Reference Dependence and Loss Aversion [HD]"

  • Loss aversion is a big deal for me, but I still did the basic math. Not to brag, but I saw that drugs A and C had the same outcome, as did drugs B and D. And even going on gut alone, at the level of loss aversion, drug D has a terrible outcome: the 66% chance of total loss jumped out at me, and that's a risk I would never take. The evidence that so many people failed to see through a simple linguistic smokescreen is interesting, and it's definitely a fact to be reckoned with.

  • That's kind of interesting, because no matter which drug you pick, each individual person will have a 1/3 chance of survival. The only thing that changes is survival at the group level.

  • Wouldn't it be logical to pick the option where you're guaranteed to save some people rather than risk losing them all? To me it makes sense from a biological standpoint.

  • I highly recommend reading the book Thinking, Fast and Slow by Daniel Kahneman, one of the aforementioned psychologists who made that test. Quite possibly the best psychology book ever written, and it really does change your perspective on life.

  • Careful with that wording. When describing C and D, you describe C as "400 million people will certainly die." That means 400 million will certainly die, but perhaps everyone will die, or only a few will survive. It would have been clearer as "the deaths will be limited to 400 million people." This poor wording led me a bit astray, reluctantly picking D over C, since a 1/3 chance of saving everyone is better than an uncertain chance of saving nobody.

  • I wonder what bias it causes in viewers when she mentions "I teach at <some perceived 'smart' school – Yale>". Does that make this seem better than the same material taught by somebody from South Central X State University?

  • This example is false: A is equivalent to B. If you can say with 33.33% probability that 100% of the population will be saved, then you can say that 33.33% of the population will be saved with 100% certainty, which is exactly 200 million!

  • The problem with this video is that one drug saves an arbitrary number of people, while the other saves a fraction of an unknown whole. How could one possibly know which option is better? If there's a 1/3 chance of saving a third of the whole population, that's 7+ billion people vs. 100 million. The choice should be obvious: a 33% chance of saving everybody is the best option. That is 2.3+ billion people vs. 100 million.

  • Actually, the descriptions of the two drugs' actions weren't identically stated. They'd only be identical if you added a further clarification.
    I.e. Drug C: 400m people will certainly die [BUT everyone else will live].
    Otherwise I know that 400m fatalities are assured but the survival of the others is indeterminate. Your description as it stands could suggest AT LEAST 400m will die.
    I assume that in the original survey the statements were better clarified?

  • I did the math real quick (1/3 of 6 is 2) and saw it as the same… What happens when you realize it is a trick question? What I'm learning from these videos is that people who participate in case studies are not good at math, and thus we make judgments about people because of it… A fun case study would be to run these math experiments with math/engineering majors vs. psychology majors…

  • The problem is the wording. She says that if you choose drug C, 400M people will die for sure. This statement does not mean any of the remaining 200M people will be saved. In other words, it doesn't say anything about how many people will be saved by drug C. Not a good example.

  • While I was granting that 600 million was the total number of people in jeopardy, it was still very easy to spot that the two scenarios were equivalent.

  • Find a dollar.
    1. Shiny shiny money power precious!
    2. It would take me 3 minutes to earn this!

    Hmm… I think when playing an MMORPG you think more in terms of 2, IRL more in terms of 1?

  • Damn, this one didn't work on me because I thought she was trying to trick me 🙁 I wonder whether, if I hadn't thought that, I still would've chosen C.

  • Honestly, I chose drug D not because of avoiding loss, but because I forgot that the population was 600 million. If they said 2/3 chance that all people would die, they would get substantially different results. That’s a flaw in the question, because it’s testing your attention rather than your cognitive bias directly.

  • Couldn't another explanation for the percentages be that the roughly 1/4 of people who chose against the cognitive bias were just better at the math? That seems like a simpler explanation.
