
Learning from Failure
Black Box Thinking:
Why Most People Never Learn From Their Mistakes—But Some Do
By Matthew Syed (Portfolio/Penguin, 2015)

S.O.S. (A Summary of the Summary)

The main ideas of the book:


~ This book is about how, as individuals and organizations, we respond to and learn from failure.
~ Success can only happen when we admit our mistakes, learn from them, and create a culture in which it is
safe to fail.

Why I chose this book:


When a plane goes down, it has a “black box” (now actually orange so it’s easier to find) that captures all conversations
and electronic messages so we can study what went wrong and prevent the error from happening again.

While we don’t have black boxes in education, we can develop what the author calls “black box thinking”—the
willingness to excavate error so we can learn from it. To do this, it is imperative that we create the systems and the
culture in our schools and districts to help educators learn from errors rather than feel stigmatized by them.

Syed’s book is an entertaining, Malcolm Gladwell-esque read complete with examples from every field from health
care to aviation to economics to law to business. You can take it on vacation, lose yourself in it, but I can guarantee it
will propel you to make important changes in how your school handles failure.

Further, this book has enormous implications for how we teach children. It suggests that it is vital that we teach them to
learn not just by being correct, but by being wrong and using errors as opportunities for learning.

How we view failure has a lot to do with mindset. For more on developing a growth mindset, see The Main Idea’s other
resources on the website: Carol Dweck’s Mindset, Angela Duckworth’s Grit, and Jo Boaler’s Mathematical Mindsets.

For a book that helps parents understand why learning from failure is so essential to their children’s success, see The
Main Idea’s BookBit for The Gift of Failure.

www.TheMainIdea.net © The Main Idea 2018. All rights reserved. By Jenn David-Lang
PART I – The Logic of Failure
Two Different Approaches to Error
Two different fields, two different approaches to failure. First, the medical field. In 2005, Elaine, a perfectly healthy thirty-seven-year-
old woman, entered a hospital for a routine sinus operation, and despite the surgeon’s thirty years of experience, the anesthetist’s
sixteen years of experience, and the hospital’s impeccable reputation—the woman died after twenty minutes on the operating table.
What happened? When patients undergo general anesthesia, they need assistance to breathe. In this case, the anesthetist couldn’t get
the mask into Elaine’s mouth. This is a common problem, so he administered drugs to loosen her jaw muscles, and he tried smaller
laryngeal masks, but again, he couldn’t insert them. Two minutes into the procedure the patient was already turning blue. After trying another time, he finally resorted to intubation; however, he faced another obstacle: he couldn’t see the airway. The patient’s oxygen saturation dropped to 40 percent and her heart rate to 50 beats per minute. Three nurses and the surgeon were all on hand. Luckily, there was one
more procedure for just this type of situation—a tracheotomy—in which you cut a hole directly into the throat and insert a tube into
the windpipe. Aware that this was the next step, the most experienced nurse went to get a tracheotomy kit. However, the two doctors
continued to try to force the tube into Elaine’s mouth. The nurse hesitated to speak up about the tracheotomy kit. The doctors frantically continued trying to insert the masks, but twenty minutes in, the patient had fallen into an irreversible coma. When the surgeon
spoke to Elaine’s husband, he said, “Look, Martin, there were some problems during the anesthesia. It is one of those things.
Accidents sometimes happen. We don’t know why…. It was a one-off. I am so sorry.” There was no mention of the failure to perform
a tracheotomy or the nurse’s aborted attempt to interject.

This is a book about how people and organizations react to failure. For safety-critical industries like health care and aviation, dire
consequences result from the way failure is handled. These two industries, in particular, provide a stark contrast in the way each
responds to failure—there is a marked difference in the systems and the culture that drive each field. From early on, the airline
industry has done whatever it can to examine failures and accidents so those errors will never be made again. As a result, the airline
industry has an outstanding safety record. Back in 1912, more than half of Army pilots died in crashes. About a hundred years later, in
2013, over 3 billion passengers traveled on commercial flights and yet there was only one accident per 2.4 million flights. In that same
year, in contrast, the Journal of Patient Safety estimated that there were more than 400,000 preventable deaths in the medical field
(through misdiagnoses, dispensing incorrect drugs, harming patients during surgery, and other avoidable problems). This puts
preventable medical error as the third largest killer in the United States (after heart disease and cancer). This isn’t because the medical
field is staffed by insane, lazy, or homicidal people. In fact, it’s quite the opposite. Most people become doctors out of a deep desire to
help. Instead, it is because the medical field has not learned from its mistakes as successfully or as systematically as aviation has. As Syed writes,
“…a failure to learn from mistakes has been one of the single greatest obstacles to human progress.” It’s a cliché that we should learn
from our mistakes, but this is actually quite difficult to do.

The idea that failure is a negative thing has deep psychological and cultural roots. Rather than admitting error, we go to great lengths
to blame others and hide our mistakes. The problem isn’t simply the failure, it is the attitude toward the failure. In health care, doctors
often feel they must be perfect. As one physician wrote, “The degree of perfection expected by patients is no doubt also a result of
what we doctors have come to believe about ourselves.” This pervasive attitude has made it difficult for doctors to confront mistakes.
In fact, doctors often use euphemisms for failure such as “technical error,” “complication,” or “unanticipated outcome”—language
that was used with Elaine’s husband above. It is this culture that has often kept health care as a “closed loop”—when failure does not
lead to progress because results are misinterpreted or ignored. A classic example of a closed loop was the practice of bloodletting in
medicine. Long ago, bloodletting killed innumerable patients and yet it existed for 1,700 years because the effects of this procedure
were not examined. Because health-care institutions have not historically examined the data from accidents, they have not been able to
discern the patterns that lead to errors and therefore learn from them. In fact, autopsies are rarely performed. Less than 10 percent of
deaths are followed by autopsies and some hospitals perform none at all.

In contrast, an “open loop” leads to progress because feedback is known and acted upon. In aviation, the industry has not condemned
errors or those who have made them, but rather, has used failures as opportunities for all pilots, all airlines, and all regulators to learn
and improve. It is this type of culture that has allowed aviation to learn from failure and improve its safety records drastically. In fact,
aviation not only has a culture that supports learning from failure, but it also has a system set up to support this learning. This is where
the title of the book comes from—each plane is equipped with a “black box” which captures the conversations of the pilots as well as
all instructions sent through electronic systems. When there is an accident, the black box is found, the data is analyzed, and the reason
for the accident is examined as a way to ensure this type of accident never occurs again. Furthermore, investigators—who are
independent of the airlines—are given full access to the data. Then, once a report has been prepared, every pilot in the world has free
access to the results. As Eleanor Roosevelt said, “Learn from the mistakes of others. You can’t live long enough to make them all
yourself.” In contrast, in medicine the adoption rate of new techniques has been much slower than in aviation because information
about failures does not flow as rapidly throughout the system.

Overall, the aviation industry has been able to create both the culture and the systems necessary to remove the stigma from mistakes,
and instead use them as learning opportunities. This book calls on organizations to develop “black box thinking.” Obviously, we can’t
create literal black boxes, but we can create the systems and the culture that will allow us to investigate when we fail and learn those
lessons that will allow us to succeed in the future.

PART II – Cognitive Dissonance
Denying the Facts to Avoid Failure
In Part I, Syed looks at the differences between the way the aviation and health care industries respond to and learn from failure.
Those in the aviation field were able to make dramatic improvements in safety because of the commitment to excavating and learning
from failure. However, part of why those in health care have not been able to learn as much from errors is because of their tendency to
cover up failure. This is true in many fields. Why is this the case? Why do otherwise intelligent and competent people—whether they
work in business, finance, law, or other fields—deny or conceal error? This chapter explores the answer to this question and suggests that it is because of a psychological construct referred to as cognitive dissonance: the discomfort we feel when evidence challenges our beliefs.

One example of cognitive dissonance comes from the field of criminal justice. In 1992, an eleven-year-old girl living in a small town
in Illinois went over to babysit for a recently divorced neighbor who had to go to work. The girl, Holly, often babysat for this family.
However, this time was different. By 8pm an intruder had broken into the apartment and brutally raped and killed Holly. Because this
was such a small community, news traveled fast and everyone was traumatized. Despite interviewing hundreds of people, the police
had no suspects after two weeks. Then they stumbled upon Juan Rivera, a young man with psychological problems who lived nearby.
After several days of interrogations, Rivera finally nodded yes when asked if he committed this crime. He was convicted of first-
degree murder and sentenced to life in prison.

While we may like to think of the criminal justice system as fair and objective, clearly there are a number of wrongful convictions.
However, progress has often been thwarted because of a lack of desire to probe and test the system. As one defense lawyer said,
“Historically, the legal system has been incredibly complacent. When people were convicted, people took it as confirmation that the
system was working just fine. There was very little serious work done on testing the system. In fact, the idea that wrongful conviction
was common seemed outlandish.” Almost no one in the twentieth century conducted systematic tests of police methods, court
procedures, forensic techniques, or anything else. They saw the system as near to perfect and mistakes were written off as “one-offs.”
That all changed in 1984 when a research scientist discovered that DNA evidence could be used to identify the culprits of many crimes far more conclusively. DNA would certainly be able to help exonerate a number of those who had been wrongfully
convicted and were currently behind bars. In fact, hundreds of inmates were released as a result of the new DNA evidence. However,
in Juan Rivera’s case, although the DNA test he applied for in 2005 showed that the semen found in the victim did not match his, he
remained in jail. The prosecutors were deeply invested in having solved this crime and came up with various ways to warp the
evidence. They said that Holly (who was 11 at the time) had had consensual sex before the attack and that was the reason the DNA did
not match Juan Rivera’s. Although this seems preposterous, Rivera remained behind bars for twenty years, until 2012, despite the
incontrovertible evidence. How is this possible? Cognitive dissonance.

Below is another example of how cognitive dissonance gets in the way of admitting error, and therefore learning from it. A cult in the
1950s was run by a housewife, Marian Keech. She claimed that she was a psychic in touch with a godlike figure who told her the
world would end on December 21, 1954. A number of people left their jobs and their homes to come live with Keech who was seen as
their spiritual leader. Clearly, if the world did not end on this date, the followers would have to see this prophecy as a failure.
However, this is not what happened. When the date of the apocalypse came and went without the world ending, the group changed the
“evidence.” Instead, the people said that the godlike figure had been so impressed by the group’s faith and commitment that he
decided to give the world another chance. It was their faith that saved the world. What both the killing of the 11-year-old and the cult
examples show is that when we are confronted with evidence that contradicts our beliefs, rather than admit to error, we are more likely
to reframe the evidence than change our beliefs. This is one of the obstacles to excavating and learning from failure. This happens in
many fields. For example, even when no weapons of mass destruction were found, many of the Republicans who supported George
W. Bush’s war in Iraq did not change their stance. They simply reframed the evidence and the issue at hand.

Cognitive dissonance is what we experience when we are confronted by facts—such as the lack of evidence of WMD, the absence of
the apocalypse, and the DNA results for Juan Rivera—and yet these facts challenge our beliefs. In such cases, we have two choices.
Either we can admit failure, or we can reframe, alter, spin, or ignore the evidence. Unfortunately, because we often have a lot riding
on our careers and reputations, we often choose the latter. Cognitive dissonance is an ingrained human trait. And in fact, the more
invested we are in our judgments, the more likely we are to manipulate the evidence to match our beliefs. When we do this—when we
essentially edit our errors and reframe our failure—we destroy any shred of possibility of learning from our mistakes. Our culture so
strongly stigmatizes error that even hard, cold data cannot sway us from our beliefs if it means admitting error. Even the most
intelligent and rational of people are not immune to the effects of cognitive dissonance.

PART III – Confronting Complexity
Testing for Failure from the Bottom Up
So far, this book has shown that to learn from mistakes you need the right kind of system—one that is set up to excavate mistakes and
use them as an opportunity to learn—and the right kind of mindset—one in which errors are not only expected, but are accepted as
part of the learning process. This section of the book delves more deeply into why this type of system is needed. Historically, we have
followed a more linear model of how we expect change to occur. The typical model looks like this: Research and theory → Technology → Practical applications. But a linear model like this ignores the power of bottom-up testing. For example, when Unilever was manufacturing detergent in the 1970s, the company discovered that the nozzles used in detergent production were inefficient and kept clogging. It employed a team of mathematicians who were experts in the theoretical side of
detergent production—such as high-pressure systems and fluid dynamics—to come up with a perfect nozzle. Their sophisticated
equations yielded a new design. The only problem was, it didn’t work. So, Unilever turned to a group of biologists who knew nothing
about fluid dynamics but who were well versed in testing hypotheses and determining the relationship between failure and success.
Rather than coming up with a new theoretical model, they tried ten versions of the nozzle and tested each one. Then they took the one
that worked the best and created ten slight variations of that version, and subjected those to failure again. After 449 failures, they had
perfected the nozzle. Rather than focusing on a theoretical solution, the biologists depended on a system of trial and error to come up
with a successful nozzle. This mirrors the evolutionary process of natural selection. Unfortunately, we often try to bypass
the messy bottom-up part of the equation, and view the world in a simpler way with a simpler top-down solution.

So, why exactly do we neglect to conduct the messier bottom-up trials and tests of a new idea? One reason is that we view the world
as simpler than it really is. If the world is simple, then all that is required is a simple top-down solution to problems. For example, in
health care, rather than seeing errors as the result of the fact that we are dealing with a complex system, it is easier to blame an
individual. Another obstacle to testing out a new idea is the desire for perfection. A number of tech entrepreneurs face a conflict between designing the perfect product ahead of time and testing out early versions of the product. For example, Nick Swinmurn, the
founder of Zappos, decided not to take the top-down approach. Rather than raising millions in capital, stockpiling a large inventory of
shoes, and developing deep relationships with shoe manufacturers, he started by asking store owners if he could take photos of their
shoes, share these photos online, and then pay the store full price if a customer bought the shoes. This way he learned to deal with
returns, complaints, and all of the real-life messiness involved in an online business. The problem with having perfected theoretical
models is that the real world is a much more complex place.

Another example that shows the world is a complex place and proves the necessity of testing is a crime-reduction program called
“Scared Straight.” This program started in 1978 when seventeen teenagers who were in trouble with the law were taken to Rahway
State Prison in order to be “scared straight.” The idea was that prisoners were going to yell in the teenagers’ faces and tell them what
life was like behind bars, and after three hours of such an experience, these kids would be deterred from a life of crime. After three
hours of being locked up, they got a serious dose of prison life. One girl reported, “It scared the s—t out of me, I didn’t like it at all.”
Another said, “I think it will change my life, I mean I have to cut some of this [crime] out.” A documentary filmmaker followed this
story and showed that 16 of the 17 kids “went straight” after three months. Everyone loved the program, including politicians. The
only problem? The program was a complete failure. However, there were obstacles to finding out about this failure for a while. First of all, there was no randomized controlled trial (RCT)—no test of whether a comparable group of seventeen youths would have returned to crime or not. Second, to get data, a professor who decided to study the program relied on questionnaires sent to parents and guardians of the children. This self-reported data was not necessarily reliable. Furthermore, the professor only examined the data from the questionnaires that were returned—so it was a self-selecting group. When a more rigorous randomized trial was conducted, the results showed that those who participated in Scared Straight were in fact more likely to commit crimes than delinquent youths who had not participated.
By this point the program had spread to many states and even other countries and thousands of prisoners had attended. When this
evidence surfaced, defenders of the program denounced the data and clung to the program even more resolutely. It seemed so
intuitive that kids would be turned around by experiencing life in a prison. However, crime is more complex than that. Many factors
contribute to a youth’s decision to get involved with crime, and it is unlikely that a three-hour visit to a prison will address all of those
factors. As Syed writes, “The glitzy narrative was far more seductive than the boring old data.” This brings up the question, how often
do we actually test our policies and our strategies? How often do we truly reconsider our deeply held beliefs? Too often we see
theories that come from the top down, never get tested, and end up being a waste of time. For example, in the field of education,
politicians have come up with the theory that discipline will improve if students wear school uniforms. But have they considered using
a trial and error process to test this?

PART IV – Small Steps and Giant Leaps
Marginal Gains
One of the most respected economists at MIT, Esther Duflo, examined the relationship between aid spent on Africa and poverty rates
there. While the amount of aid increased from under 5 billion dollars in 1960 to almost 800 billion dollars in 2006, the GDP remained
roughly the same during this time period. The sensible conclusion would seem to be that the aid did not help. But since we have
already read about the results from Scared Straight which seemed to be highly positive until a controlled experiment showed it was a
failure, we know better than to jump to conclusions. And in the African poverty case, there was no controlled experiment so we have
no way of knowing whether the GDP would have drastically declined without the aid or if African countries would have been far
richer had it not been for the negative effects of the aid. In fact, it would have been impossible to conduct a randomized controlled trial (RCT) because there are no other Africas that could have received more or less aid.

This brings up the concept of marginal gains. If you cannot test a larger question, then break it down into smaller parts. In this case, it
might help to ask smaller questions about what might alleviate the poverty—programs to address malaria? Literacy? Road-building?
Education? Infrastructure? If you try one program at a time, then you can run a controlled experiment and see if it is working. In fact,
this is what some economists tried to do to improve educational outcomes—a larger issue—in the Busia and Teso regions of Kenya.
They decided to try handing out free textbooks—a smaller solution—to see if that would improve grades. Then they realized that the
textbooks were in English and this was the third language of the students there. So next they decided to distribute visual aids such as
flipcharts with bright graphics to improve education. Again, they failed. Eventually they found a creative approach to try: a de-
worming medication. While this may seem unrelated, these parasites stunt growth, cause lethargy, and often lead to absenteeism. This
time they found their success. Absenteeism was cut by one-fourth and student achievement improved. While this may seem like a very
gradual way to make improvements, marginal gains can add up and end up being more potent than large changes, particularly if the
large changes are not helping the situation. For example, if the economists had decided to provide textbooks and training for all
schools, then this larger-scale approach would have resulted in no gains at all. In order to avoid larger failures like this, many
companies now conduct randomized controlled trials to see what works. Google, for example, tested forty shades of blue to see which would lead to the greatest number of click-throughs in its toolbar. As a result of the success of this approach, Google began
conducting over 12,000 RCTs a year.

The idea with marginal gains is not that you make small changes and just hope they succeed. Rather, the goal is to break down a larger
problem into smaller ones and then rigorously test solutions for each of these smaller problems. Successful small solutions result when
you look at the data and then engage both creativity and judgment. Each possible solution must be thoroughly examined to see if it
works. Creativity without any feedback goes nowhere. While success certainly comes from these small failures and trying out new
iterations, at times, larger innovations are also needed to move a company forward. Blockbuster, for example, could not be saved with a marginal-gains approach. Small tweaks to its logo, the shelving in its stores, and its approach to discounts were not enough. No amount of marginal improvement could have compensated for the fact that its business model would soon be outdated in comparison to Netflix. The company needed to innovate its entire approach—
something explored in the next chapter.

How Failure Drives Innovation


Sometimes progress comes as a result of small steps. Other times it’s the result of larger changes. For example, the television was not
an iteration of a previous product. Nor was Einstein’s theory of relativity the result of tinkering with Newton’s law of universal
gravitation—in fact it was a repudiation of it. An innovative approach to the vacuum cleaner followed a similar path. James Dyson
runs the British company Dyson, whose inventions are considered to be at the forefront of innovation. Many people assume that
innovative ideas just come out of thin air from inspiration. However, as Dyson’s case shows, creativity is actually something to be worked at. The idea for his innovative vacuum cleaner did not come to him out of thin air. Instead, it started with deep frustration with his own vacuum cleaner, which kept losing suction and did not function well. He opened up the vacuum cleaner and tried to fix it. He
wondered about the idea of a bagless vacuum cleaner. Then when he was visiting a lumberyard he observed a cyclone-like approach to
dealing with dust that he thought might be of use in a vacuum cleaner. After the testing and failure of 5,127 prototypes, his innovative approach to the vacuum cleaner would earn him over 3 billion British pounds through his company. What this example
shows is that creativity is often the result of a problem or a failure. It was the failure of the Hoover vacuum that led Dyson to design a
better one. The ATM was designed to address the problem of needing money when banks were closed. Dropbox was designed to address the problem of not being able to access your files from a different computer. This is the “problem phase” of innovation. Without a problem or a failure, innovation doesn’t take hold. These innovations all arose as responses to problems.

The psychologist Charlan Nemeth conducted an important study about innovation and creativity. She had 265 undergraduates,
divided into five-person teams, come up with solutions to address traffic congestion in San Francisco. One group was told to
brainstorm with no criticisms of each other’s ideas. A second group was given no instructions whatsoever. The third was told to point
out flaws, debate, and criticize each other’s ideas. The results were dramatic. The group that was told to criticize came up with 25
percent more ideas than the other groups. Further studies showed that groups encouraged to dissent not only come up with more ideas,
but also more productive and creative ideas. This suggests that criticism surfaces flaws and then people find creative ways to address
these problems. When no one gave feedback or criticized the ideas in the brainstorming group, people had nothing to respond to.

When people are not encouraged to view the flaws in their ideas, then there is no incentive to push those ideas to a deeper level.
Rather than treating creativity and innovation as fragile, we need to understand that ideas thrive from airing flaws, difficulties, and
problems.

In fact, other studies have shown that rather than the image of the isolated genius, creativity is often sparked by working in teams with
others. Another psychologist, Kevin Dunbar, set up a camera to record everything that happened in biology labs to determine how
scientific breakthroughs occurred. He found that innovations occurred when groups of scientists gathered to critique and discuss their
work, not when scientists were alone with their thoughts. Working with others helped them learn from their failures.

PART V – The Blame Game


Parts V and VI of the book focus on what happens in a culture in which mistakes are not covered up and instead are used as a tool to
drive progress. These sections also examine the external fear of failure—fear of being unfairly blamed or punished—which certainly
undermines the ability to learn from mistakes.

The Psychology of Blame


This chapter explores the psychology of blame. When something goes wrong in an organization, we like to point the finger at
someone. This is particularly the case when the situation is complex and has multiple, intertwining factors. It is far easier to blame an
individual rather than examine the root of the problem. If the previous section focused on the systems that either sweep failure under
the carpet or help us learn from it, this part focuses on the psychological and cultural conditions that lead people to hide their
failures—namely, the fear of blame.

For example, in 2004, Amy Edmondson, a professor at Harvard Business School, decided to conduct a study on the effects of a blame
culture. There is a widespread management belief that if you run a tight system with clear punishments, people will be more diligent
and motivated to avoid mistakes. With this in mind, Edmondson examined the culture at two hospitals (called University Hospital and
Memorial Hospital to protect their anonymity) as they dealt with errors in drug administration. Unfortunately, medication errors are
common. The FDA estimates that 1.3 million patients a year are injured by drug administration errors. According to Edmondson, a
patient can expect one to two medication errors during every hospital visit! Edmondson studied several units in each hospital. In the unit where blame ran most rampant, Memorial Nurse Unit 3, she found a very disciplined culture. The nurse managers dressed impeccably in suits and conducted tough conversations with nurses behind closed doors. They kept their nurses on a short leash, penalizing mistakes, because they believed this held the nurses accountable to the patients. Nurses in this unit said things like, “The environment is unforgiving,” “You’re guilty if you make a mistake,” and “You get put on trial.” To the hospital bosses, the managers seemed to be no-nonsense, tough, and on the side of the most important people of all: the patients.
However, in reality these managers were refusing to address the complexity of each problem and the nurses did not trust them at all.
This approach seemed to work at first because very few mistakes were reported. However, upon closer inspection, it was not that the
nurses in this high-blame unit were committing fewer errors, it was that they were reporting fewer errors for fear of the repercussions.

In contrast, the unit that was the most open and blamed people the least, Memorial Nurse Unit 1, had a very different culture. The
managers of the nurses in this low-blame unit didn’t wear suits; they wore scrubs. They were involved, got their hands dirty, and were
keenly aware of the pressure and the complex problems in a hospital. It wasn’t that they weren’t tough. In fact, their toughness came
out when they demanded to learn from mistakes. This unit ended up making many fewer errors than the high-blame unit. Why? The
nurses in this unit reported many more errors, they learned from these errors, and as a result, they ended up making fewer errors. By
reducing the penalties for errors, the managers had created a culture in which the nurses believed that they would not be punished and
instead, the errors would be treated as learning opportunities.

When people anticipate being blamed for errors, they are more likely to cover them up. Why would you share your error if you think
you are going to be penalized for it? Why would you expose a mistake if you don’t trust your manager? If we want to learn from the
world, we need to acknowledge that it is complex and engage with the errors and problems of the system. Blame might have made
sense in simpler systems, such as a production line where noncompliant workers faced penalties, but in a complex system, the
problem is rarely lack of focus. While it may be easiest to blame the person closest to an error, errors are usually not the result of
negligence, but rather of flaws in the system. Increasing punishment and blame is unlikely to reduce error. Blame without thoroughly
analyzing what caused a problem is one of the worst things an organization can do. Being open about mistakes and error is not a pleasant nicety; rather, it is a necessity for an organization’s culture if that organization hopes to improve.

One way to help stop people from blaming others is to get them to consider the situation from the other person’s point of view. For
example, in one study in which people were shown a video of a driver cutting across lanes, they immediately concluded that the driver
was selfish, impatient, or out of control. They placed all of the blame on the driver. However, there are many reasons for why the
driver might have acted this way. He may have had sun in his eyes, or he may have swerved out of the way and prevented an accident.
Yet the human tendency is to reach for the simplest narrative—that the driver was at fault. People only consider other possibilities
when the question is flipped— “What happened the last time you jumped lanes?”

PART VI – Creating a Growth Culture
Previous sections explored both the role of blame and the fear of failure in undermining learning. This section examines how we can
overcome both of these obstacles to learning. In an interesting study conducted in 2010, the psychologist Jason Moser examined the
brain waves of people when they made mistakes. He divided his subjects into two groups based on a survey. Those who believed
intelligence is fixed were placed in the Fixed Mindset group, and those who believed intelligence can be developed through hard work
were placed in the Growth Mindset group. Typically, there are two responses in the brain when someone makes a mistake. The first
brain signal occurs when the person becomes aware of the mistake. The second happens shortly after this and is a heightened
awareness that comes from a different part of the brain. It comes from the person focusing on the mistake and paying more attention to
it. Studies have shown that people who learn from mistakes tend to have both responses. For his experiment, Moser gave the Growth
and Fixed Mindset groups a simple test. During the test, he put an EEG (electroencephalography) cap on subjects’ heads. He found
that although both groups exhibited the first brain signal equally, the Growth Mindset group experienced the second brain signal three
times as powerfully as the Fixed group. In other words, it was as if the brains of those with a Fixed Mindset ignored the mistake while
those with a Growth Mindset zoomed in on the mistake to pay close attention to it. Next, when performance on the test was re-
assessed, those with the Growth Mindset performed better than the other group. The takeaway became clear. People who engage with
their mistakes improve more than those who don’t. It makes sense that those who are afraid of failure or blame others for their
mistakes would learn less since they are avoiding engaging with their errors. This also explains why entire organizations or companies
may be more or less likely to learn from their mistakes: it depends on whether they have a Growth Mindset culture or not. When an
organization prefers to sweep mistakes under the rug rather than make errors something that the organization transparently and
honestly addresses, this affects how much the organization engages with and therefore learns from error. How do you know if you are
working in a Fixed Mindset organization? If people would agree with statements like, “In this company there is a lot of cheating,
taking shortcuts, and cutting corners” or “In this company people often hide information and keep secrets.”

In another example, the psychologist Angela Duckworth studied what it took to succeed at West Point, a top college and training
academy for aspiring army officers. The cadets in this program go through a very rigorous regimen and about 50 cadets drop out each
year. To predict which candidates would drop out, Duckworth devised a very simple survey asking respondents to rate themselves on
a series of questions such as, “Setbacks don’t discourage me” and “I finish whatever I begin.” She compiled the results into a “grit”
score and found that grittier cadets were far less likely to drop out. Like people with a Growth Mindset, those who scored better on her
grit survey were the people who were much less likely to drop out when they encountered problems, obstacles, and challenges. They
were far more willing to persevere and stay with the program. Rather than seeing setbacks as an indictment of one’s ability, those with
a higher grit score believed they could overcome those failures along the way.

Redefining Failure
After reading about all of these examples and studies, we come to one conclusion: we must redefine failure. Whether at the individual level or the organizational level, we need to stop thinking of failure as an end and instead view it as a means: a means of
learning, a means of improving. Not only does it lighten the load to see failure in this new light, but it provides the promise of
opportunity and growth as a result of our failures. How do we do this? By moving away from praising perfection on tests and
punishing errors. Instead, we need to value it when our young people experiment, try new things, and take risks. The principal of one
high school in London established an annual “failure week” which included assemblies and workshops in which failure was
celebrated. She brought in parents and other professionals to speak about the failure in their lives and what they learned from it. She
shared YouTube clips of famous people practicing, making mistakes, and practicing some more. This was particularly important in a
school setting because, ironically, it is often in school, where students have been so praised for their perfect grades and performance,
that they have not learned to deal with setbacks. At the first sign of struggle, it is these students who feel the most helpless. We need to
teach our students that our errors need not be embarrassing, dirty, or an indictment of our intelligence. To do this, we need to be sure
to praise them for trying, experimenting, persevering, and struggling rather than simply lauding them for getting things right. Failure is
simply a part of life and avoiding it moves us backward, not forward. Once we have developed a more productive mindset about
failure, we can begin to put the systems in place necessary to support our learning from error. We need to institutionalize a process to
examine actual data from our failures and receive feedback in order to move forward. Another suggestion highlighted in the book is to
run a trial. When we run a randomized controlled trial we can test out what will work and what will fail. Like the black box in the title
of this book, the goal is to get individuals and organizations in the habit of excavating errors in order to learn from them and improve
the next time.

THE MAIN IDEA’s Discussion Questions for Black Box Thinking
Overall Questions
1. To get the discussion going, have people discuss one or both of the following quotations (both from p.211 of the book):
“We learn not just by being correct, but also by being wrong.”

“The problem with academia is that it is about being good at remembering things like chemical formulae and theories, because
that is what you have to regurgitate. But children are not allowed to learn through experimenting and experience. This is a great
pity. You need both.”

2. Think of a time you failed or performed badly in something. How did you respond?

Questions for Part I - The Logic of Failure


1. In Part I, author Matthew Syed contrasts the different ways the health care and aviation fields approach error. How would you
describe these differences and why do you think they exist? Do you believe education has an overall approach to failure? Think of a
recent example of a failure in your school/district and how it was handled. Discuss this.
2. There are a number of examples of preventable errors that occur in the medical field (misdiagnoses, dispensing incorrect drugs,
harming the patient during surgery, operating on the wrong part of the body, improper transfusions, postoperative complications, etc.)
—do you think we are inadvertently harming our students in any way with our “errors” in education? Discuss.
3. Syed writes that doctors often use euphemisms for failure such as “technical error,” “complication,” or “unanticipated outcome.”
Do we use any euphemisms for failure in education?
4. Small errors are often warning signs of more catastrophic failure to come. What systems do we have in place to surface smaller
errors in education? How could we do a better job of this?
5. What if we studied failure rather than success in the field of education? What might this look like?

Questions for Part II - Cognitive Dissonance


1. Part II of the book provides several examples of cognitive dissonance—when our beliefs are challenged by evidence—Holly’s
murder, the cult, and weapons of mass destruction. How was it possible for people to continue to believe their own judgments were
accurate when confronted with conflicting evidence?
2. Many of the examples in this book do NOT come from education. Do you think cognitive dissonance is equally alive and thriving
in the field of education? Can you think of an example? Or, how might education be different from these other fields?
3. In recent years, the idea of “data-driven decision making” has taken root in the field of education. How does the concept of
cognitive dissonance affect how you might think about or approach data-driven decision making?

Questions for Part III - Confronting Complexity


1. Syed states that progress is a complex interplay between theoretical ideals and practical trials and applications. Unfortunately, we
often try to bypass the messy bottom-up part of the equation, and view the world in a simpler way with a simpler top-down solution.
Can you think of an example of a top-down solution (in any field) that was too theoretical in nature and didn’t involve enough bottom-
up testing?
2. How do we “conduct tests” in the world of education? What does this look like? How could we do a better job of this?
3. In the Unilever example, the team failed 449 times before coming up with a successful detergent nozzle. How many times has your school/district/classroom failed at something and been honest enough to admit it and learn from it?
4. In the book, there is the mention of one top-down idea in our field of education. Syed says that politicians come up with the theory
that discipline will improve if students wear school uniforms. But, he asks, have they considered using a trial and error process to test
this? What other examples of ideas that sound logical and compelling do we have in education, but which haven’t been tested with a
simple trial and error test? What could we do to test these initiatives?

Questions for Part IV - Small Steps and Giant Leaps


1. The idea of marginal gains was introduced in the book: if you cannot test solutions to a larger issue like poverty, you can break it
down into smaller parts to test. For example, you could test programs to address malaria, literacy, road-building, education, or
infrastructure. If you try one program at a time, then you can run a controlled experiment and see if it is working. What is an example
of a problem in education we might break down into smaller parts to begin to test? How would you break that problem down?
2. There is a big push these days to focus on data in schools. However, data alone does not solve problems. Syed argues that
successful small solutions result from looking at the data, and then engaging creativity, judgment, and feedback. Does your school or
district incorporate these three elements into its data examination process? How might your school’s approach to data be improved?
3. Syed describes how Charlan Nemeth’s experiment with groups shows that when groups encourage criticism in brainstorming, they
come up with more and better solutions than when groups disallow criticism. What are the implications for staff or teacher teams?
Note that even when groups don’t prohibit criticism outright, they often function with a “culture of nice.”
4. Try your own experiment. Have a group of staff conduct a brainstorming session to address a real problem at your school with NO
commentary, feedback, or criticism. Then have the group attack another problem and this time, for each proposed solution, have every
individual share one thought about how the idea might be improved. Which conversation yielded more and better solutions?

Questions for Part V - The Blame Game
1. A big theme in this section is that an organization’s culture influences how people respond to mistakes. How can you build a culture
in your school/district in which mistakes are not suppressed and instead are used as tools to drive progress? Brainstorm some ideas.
2. This section describes two nursing units—a high-blame and a low-blame one. When people anticipate being blamed for errors, they
are more likely to cover them up as the nurses did in the high-blame unit. In 2009, we saw this in education in the largest cheating
scandal on a standardized test in Atlanta (see this New York Times article: https://www.nytimes.com/2015/04/02/us/verdict-reached-in-atlanta-school-testing-trial.html or this longer piece in The New Yorker: https://www.newyorker.com/magazine/2014/07/21/wrong-answer). Consider having your leadership team read one of these articles and discuss how your school/district can avoid this type of approach to high-stakes testing.
3. In education, we are living in a time when the idea of “accountability” has taken hold. How is accountability different from blame?
4. In one study mentioned in Part V, people blamed a driver for cutting across lanes. Then they were asked to look inward, “What
happened the last time you jumped lanes?” Take some typical errors that students make, and turn those inward for teachers to discuss
to better understand those mistakes: When was the last time you arrived late? When was the last time you turned in something late (a
bill, paperwork, etc.)? When was the last time you performed poorly on a task? The same can be done to help administrators better
understand the underlying reasons for the mistakes staff make.

Questions for Part VI - Creating a Growth Culture


1. Given that some people say the most important indicator of a person’s growth mindset is her willingness to try, fail, and learn, what
questions could you ask potential teacher or administrator candidates to determine if they have this quality?
2. Given that the Moser study in this section shows that people with a Fixed Mindset are less likely to pay attention to their errors,
what structures can you put into place to get students to stop and pay attention to their mistakes? One structure I like is from Paul
Bambrick-Santoyo in Driven by Data. He suggests having students reflect on their errors by giving them a template like the excerpt
below (full template on pp.97-98 of the book) in which students examine errors they made on a test and categorize those errors.

STUDENT REFLECTION TEMPLATE


Columns: Questions | Standard/Skill (What skill was tested?) | Did you get the question right or wrong? (Right / Wrong) | Why did you get the question wrong? Be honest. (Careless mistake / Didn’t know how to solve)
Question 1: Algebra substitution: add
Question 2, etc.: Algebra substitution: add 3 numbers

If you have more careless errors than “don’t knows”… you are a RUSHING ROGER.
• In class you: are one of the first to finish; want to say your answer before writing or checking with a partner; often don’t show work; are frustrated when you get assessments back.
• During class you should: SLOW DOWN! Ask the teacher to check your work. Push yourself for perfection; don’t just tell yourself “I get it.”
• During assessments you should: SLOW DOWN – you know you tend to rush. Really double-check your work since you know you make careless errors. Use inverse operations when you have time.

If you have more “don’t knows” than careless errors… you are a BACK-SEAT BETTY.
• In class you: are not always sure that you understand how to do independent work; are sometimes surprised by your quiz scores.
• During class you should: ask questions about HW if you’re not sure it’s perfect. Do all of the problems with the class at the start of class. Use every chance to check in with teachers and classmates.
• During assessments you should: do the problems you’re SURE about first. Take your time on the others and use everything you know. Ask questions right after the assessment while fresh in your mind.

1. If you are a Rushing Roger and you make careless errors, what should you do in your classwork and homework?
2. If you are a Backseat Betty, what should you do when you get a low score on a quiz?

3. If blaming others and fearing failure are both obstacles to learning, what can you do to overcome these in yourselves and help
students overcome them? What can you do to build a culture that normalizes error and supports the growth mindset?
4. Everyone has heard about an individual having a growth or fixed mindset. How can an organization have one? What might a district
or a school look like when it exhibits a growth mindset?
5. Do a jigsaw with several readings about mindset. Have people read Mindset (The Main Idea summary or book) and others read Grit
(the book or The Main Idea’s BookBit) and the math teachers read Mathematical Mindsets (the book or The Main Idea’s BookBit).
Note The Main Idea has discussion questions and PD ideas for each book. Then have the three groups share what they learned.
6. What type of annual event might your school or district establish to highlight failure, like the one in this chapter in which a high school in London established an annual “failure week” with assemblies and workshops in which failure was celebrated? That school brought in parents and other professionals to speak about the failures in their lives and what they learned from them, and shared YouTube clips of famous people practicing, making mistakes, and practicing some more. Brainstorm what your school could do.
7. Once you have developed a more productive mindset about failure, you can begin to put the systems in place necessary to support
learning from error. Brainstorm what your school/district could do to institutionalize a process to examine actual data from failures
and receive feedback in order to move forward.

In conclusion, from everything you’ve learned in this book: are you a “black box” thinker? Is your organization a “black box” organization? What do you take away from this that you could begin to implement to incorporate more “black box” thinking?
