are Green) but do much better when presented with facts that
lend themselves to causal stories (85 percent of taxi-related accidents involve Green taxis—those drivers must be maniacs!).
Unfortunately, our confidence in the stories we tell ourselves is more a function of their narrative plausibility than
their accuracy and logical coherence.
We Tell Ourselves Stories in Order to Live
An important central theme—one that is highly pertinent to actuarial science and risk management—emerges from the large
catalogue of mental heuristics and biases Kahneman discusses.
He writes that our mental processes fall into two categories,
which he labels “System 1” and “System 2.” Kahneman cautions
that Systems 1 and 2 should not be viewed as literal components
of our psychological makeup. Rather, they are useful fictions that
help us discuss two major classes of mental operations. System 1
mental operations are rapid and automatic. Biased toward belief
and confirmation rather than analysis and skepticism, they tend
to jump to conclusions and infer causal relations based on thin,
“cognitively available” evidence, and they tend to neglect the importance of evidence that is not emotionally vivid or in plain sight.
In contrast, System 2 mental operations are slow and deliberate,
seeking logical rather than “narrative” or “associative” coherence.
So prone are we to make judgments based on sketchy evi-
dence that Kahneman calls the human mind “a machine for
jumping to conclusions.” A nice illustration is the “Linda” story.
In a famous experiment, Kahneman and Tversky described a
character called “Linda” to various audiences:
Linda is 31 years old, single, outspoken, and very bright. She
majored in philosophy. As a student, she was deeply concerned
with issues of discrimination and social justice, and also par-
ticipated in anti-nuclear demonstrations.
Next Kahneman and Tversky asked their audiences:
Which alternative is more probable?
■ Linda is a bank teller.
■ Linda is a bank teller and is active in the feminist movement.
They found that about 85 percent to 90 percent of students
at prominent universities chose the second option, even though
doing so violates basic logic. After all, feminist bank tellers are
a subset of the set of all bank tellers.
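The conjunction rule the audiences violated can be verified with two lines of arithmetic. The probabilities below are illustrative assumptions, not figures from the experiment; the inequality holds no matter what values are chosen:

```python
# A minimal numeric sketch of the conjunction rule behind the Linda
# problem. The two probabilities are assumed for illustration only.
p_teller = 0.05                  # P(Linda is a bank teller) -- assumed
p_feminist_given_teller = 0.95   # P(feminist | bank teller) -- assumed

# The conjunction "teller AND feminist" can never exceed "teller" alone,
# because multiplying by a probability (a number <= 1) cannot increase it.
p_both = p_teller * p_feminist_given_teller

assert p_both <= p_teller  # holds for ANY choice of probabilities
```

However vividly the feminist detail fits Linda's description, it can only shrink, never grow, the probability of the combined statement.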
The Linda experiment elegantly highlights the dominance
of System 1 thinking in our everyday judgments and decision-making. It takes time, effort, and physical energy to seek out and
logically evaluate evidence. So unless System 2 is on guard, our
minds tend to weave associatively coherent narratives based on
stereotypes and convenient bits of easily available information.
And we are unconscious of doing so. Nassim Taleb’s phrase “the
narrative fallacy” nicely captures this tendency.
A related tendency is evaluating evidence in terms of its con-
tribution to a coherent causal narrative rather than in terms of
its statistical strength. In other words, we often mistake ran-
dom noise for causal signal. Taleb calls this being “fooled by
randomness.”
An example involves a recent attempt by the Gates Founda-
tion to identify the types of public schools that tend to be most
successful. Prompted by research suggesting that the most suc-
cessful schools tend to be small, the foundation invested in
creating smaller schools and in breaking existing larger schools
into smaller ones. Other institutions, including the Pew Chari-
table Trust and the Annenberg Foundation, followed suit. The
problem: Poor schools also tend to be small. Small schools aren’t
better than average; they just display more variability because of
their size. Once again, people constructed causal, associatively
coherent narratives (“small schools allow for more personal at-
tention…”) to explain purely statistical variation in need of no
explanation.
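The statistical point can be sketched in a short simulation (school sizes, the score distribution, and the counts below are assumptions chosen purely for illustration): every student draws from the same score distribution, so school size has no causal effect at all, yet the average scores of small schools scatter far more widely than those of large ones:

```python
# A simulation of the small-schools effect: smaller samples produce
# more extreme averages in BOTH directions, so the "best" (and "worst")
# schools look small even when size has no causal influence on quality.
import random

random.seed(0)

def school_mean(n_students):
    # Every student's score comes from the same distribution
    # (mean 500, std dev 100), regardless of school size.
    return sum(random.gauss(500, 100) for _ in range(n_students)) / n_students

small = [school_mean(20) for _ in range(1000)]   # 1,000 small schools
large = [school_mean(500) for _ in range(1000)]  # 1,000 large schools

def spread(means):
    return max(means) - min(means)

# Small schools show a much wider spread of average scores, purely
# because the standard error of a mean shrinks as 100 / sqrt(n).
print(spread(small) > spread(large))  # True
```

Ranking schools by average score therefore fills both the top and the bottom of the list with small schools, inviting exactly the kind of causal story the foundations constructed.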