
#144 Gary Klein: Insights For Making Better Decisions

2022/8/9

The Knowledge Project with Shane Parrish

People
Gary Klein
Shane Parrish: Founder and CEO, focused on cybersecurity, investing, and knowledge sharing.
Topics
Gary Klein: This episode explores how to identify experts, sharpen insight, and reduce decision-making errors. Klein argues that true experts are those who can recognize and reflect on their own mistakes. Improving performance means both reducing errors and increasing insights, yet most organizations focus only on reducing errors. Insights arise through three pathways: connection, contradiction, and correction. Organizations tend to suppress insights because insights are disorganizing and disruptive; individuals can cultivate insight by nurturing curiosity, celebrating their own insights, and paying attention to the insights of people around them. Experience is essential for generating insights, but it is not infallible: experienced people may squelch new ideas because of past failures. There is no perfect standard for judging who is an expert, but experts can typically recognize their mistakes and learn from them. Ways to evaluate decision-making ability include scenario exercises, analyzing the rationale behind decisions, and assessing the richness of a person's mental models. A decision journal should record the decision, the goals, the information used, the teams affected, and an after-the-fact reflection. Mental models cover not only how something works but also its limitations, workarounds, and how to anticipate other people's confusion. Learning is a loop of experience, reflection, abstraction, and action, and reflection is key to learning edge cases. Telling stories overcomes the limitations of language and helps people understand and communicate abstract concepts. At work we tend to ignore or explain away unexpected events instead of using them to spark curiosity and learning; people deploy "knowledge shields" to explain away inconvenient data and anomalies, which leads to fixation. At the same skill level, some people stagnate at an intermediate level while others reach mastery, which has to do with unlearning and adaptability: reaching higher levels of expertise requires unlearning, that is, recognizing and correcting flawed beliefs and limitations. People who go through a year full of challenges and mistakes learn more than those who have a smooth year. Cognitive flexibility theory aims to improve adaptability by keeping people from clinging to routines. The premortem is a risk-management method that identifies potential problems by imagining the project has failed; it helps teams surface problems, build a culture of candor, and strengthen trust. Research on decision biases has focused on their negative effects while overlooking their benefits; Klein holds that decision heuristics are usually valuable. The pros and cons of team decision-making depend on the situation: anonymous voting and avoiding the pursuit of consensus are essential, because consensus decisions tend to be conservative and bad for innovation, while anonymous voting helps team members express what they really think. For complex problems, goals are enriched and change as a project progresses, so decision-makers should stay open-minded. When two similar options are nearly balanced in their pros and cons, the choice doesn't much matter; in this "zone of indifference," don't agonize over the perfect decision. The "stop, flip, veto" heuristic can help people escape decision paralysis. ShadowBox is a scenario-based training method that supports learning by comparing one's own decisions with those of experts. The environment strongly shapes decisions, and improving the environment can improve decision quality; in high-pressure settings such as law enforcement, conflict should be avoided through environmental design and communication skills rather than coercion alone, and in interactions with others we should work to build trust rather than simply apply pressure. Shane Parrish: Parrish mainly guides the interview, poses questions, and discusses with Klein, supplementing and probing Klein's points.

Deep Dive

Chapters
Gary Klein discusses how to identify experts, the importance of recognizing mistakes, and the criteria for expertise.

Transcript

One of the things I've examined is how do you know who's an expert? And I identified about seven or eight criteria. One of the criteria I use if I'm going to engage somebody's expertise is I'll ask them: tell me the last mistake you made. Let's talk about that. If the person says, I can't think of any mistakes, to me that means this person may be competent, but is certainly not an expert.

Welcome to the Knowledge Project Podcast. I'm your host, Shane Parrish.

The goal of this show is to master the best of what other people have already figured out. To that end, I sit down with people at the top of their game to uncover what they've learned along the way. Every episode is packed with timeless ideas and insights that you can use in life and business.

If you're listening to this, you're missing out. If you'd like special member-only episodes, access before anyone else, transcripts, and other member-only content, you can join us at fs.blog. Check out the show notes for a link. Today, I'm speaking with Gary Klein.

Gary is a research psychologist famous for pioneering the field of naturalistic decision-making. I've wanted to geek out with Gary on decision-making for years, and this conversation did not disappoint. We talk about why some people stagnate at an intermediate skill level, and some people reach mastery, cognitive flexibility theory, the role of stories, both the ones we tell ourselves and the ones we tell others.

The two ways to make better decisions: gaining better insights and reducing errors, and how we can improve at both of them. Premortems, shadowboxing and how it helps you learn. Fixation errors, cognitive biases, mental models and how we use them, and fast-tracking expertise. It's time to listen and learn.

The IKEA Business Network is now open for small businesses and entrepreneurs. Join for free today to get access to interior design services to help you make the most of your workspace, employee well-being benefits to help you and your people grow, and amazing discounts on travel, insurance, and IKEA purchases, deliveries, and more. Take your small business to the next level when you sign up for the IKEA Business Network for free today by searching IKEA Business Network.

So you're a cognitive psychologist who has spent your entire career observing how other people make decisions. But you took an interesting angle to this. Rather than focus on how to make better decisions directly, you approached it from how we can develop expertise to make better decisions. We can sort of reduce errors

or have better insight, or preferably both, and yet these often seem in conflict with one another. I thought a good place to start this would be what sparks insight and what prevents us from putting our insights into use? Is it because they often contradict the beliefs we hold? An insight definitely is going to be incompatible with beliefs we hold. That's why we're surprised.

I did a study of 120 examples of insights. I wondered where insights came from. And I did the study out of frustration because I would give talks and I would say, if you want to improve performance, there's two things you can do. And there would be like a down arrow, what you want to reduce, and an up arrow, what you want to increase.

And to improve performance, what you want to reduce is errors. And that's what most organizations do. They try to cut down on errors. But they're missing the up arrow of improving, increasing insights. And so most organizations only focus on reducing errors. So I would present this slide and people in the audience would say, yes, that makes sense. That's my organization. All they care about is reducing errors. But then they would ask me the question, so what do you know about insights?

And I would answer, I don't know anything about insights. I just have this one slide. I haven't studied it. And at one point, I used the slide in a talk I gave in Singapore.

And I got asked that question. And then I flew home afterwards. And it was a 17-hour flight. I had a direct flight. That's a long time to be stewing over the fact that I felt like an idiot not being able to tell them what insights were about. So I decided to do a study. So I collected 120 examples of insights to see what's going on and hopefully find out how to improve them.

increase them. And I tried to find a common theme and I couldn't because it turned out that there were three different pathways. One pathway is a connection pathway where we put things together. So there, we're not contradicting anything. I'm going back to your original question. We're not upsetting our mental model. We're just putting different ideas together. Darwin, you know, knowing that there was a change in species, but what's driving the change? And then

He reads Malthus's book about population growth and competition for scarce resources. He read the sixth edition and he puts that together and he said, that's what's driving evolution is a competition for scarce resources. So that's a connection type of insight. A second type of insight is a contradiction insight where something happens that doesn't make sense.

And there you do have to change what you believe or wonder what's going on. And the example I use there is a story I heard from a police officer who was driving around with a partner who is in his first year. And they're stuck in traffic. There's a red light. And the partner, this young guy, looks at the car ahead, which is a new BMW. And he sees the driver take a deep drag on a cigarette.

and then flick the ashes. And he says, who flicks the ashes in a brand new BMW? That doesn't make sense. Something is off here. So they light him up and pull the car over. Sure enough, it was a stolen car.

So that's a second type of insight, which is a contradiction insight, where something happens that you didn't expect. Now, this didn't force anybody to revise their mental model or their thinking. It just allowed them to investigate further.

Now, your question is, where is the resistance coming from? And that's the third type of pathway that we identified, the correction pathway, where you're stuck because you have a flawed belief. And that's the problem. And that's what's hanging you up.

And that's the kind of insight most researchers study because it's easy to study it in a laboratory. And the way we get past that flawed belief, I found about 27 cases of that kind of pathway. And usually what happened was there was a hint, something happened that made them investigate further. And so instead of dismissing

the anomaly, which is what we usually do, they became curious about it. So in terms of insight, in terms of changing our beliefs, that's the secret sauce for this kind of pathway, where even practically for any of the pathways is to become curious about things that don't make sense. Now, what blocks insight in organizations, and this was the really discouraging part of my research on insight, what blocks insight is

is that organizations say they want insights and they want innovation, and they really don't. But they're being serious. They think they want insights. But insights are disorganizing, as you point out. Insights make you change the way you think, make you change all kinds of things, and they may not be right. And so most organizations actually

inhibit insights. When people bring up new ideas, they tend to stifle them. And so that's a big problem. Is that because organizations are mostly focused on the error-reducing side versus the gaining-insight side? And that's the tension between these things. They don't like variance. They don't like things that are outside of the norm. Yes, they like predictability. To be a good manager, you want things to run smoothly.

And insights are not ways of running smoothly. Insights are disorganizing and disruptive. And so that's a major reason that organizations, without even intending to, block the insights that come their way. So with that said, on an individual level, if I'm working in an organization, what can I do within the confines of only what I control, so myself,

to maximize the insights that I observe and put them into use within that environment? And then I guess the second question would sort of be, what can an organization do to facilitate people gaining more insight and putting those insights into use?

You can be deliberate, like you're describing now. You can say, I want to have more insights, which means taking an insight stance rather than a stance that recoils at something new and unexpected and jarring. You can become curious.

you can start to celebrate your own insights. Because if you go through the day, we often have very small insights, things we didn't know, and we make these discoveries. And we just sort of, you know, cruise right past them and use them. But if we make a mistake, we beat ourselves up, right? At least I know I beat myself up. How could I have been so stupid? But I'm less likely to say, gee, that was really clever of me or of people that I'm working with.

So you can start attuning yourself to the insights you have and the insights people around you have so you're more sensitive to that. You can't be curious about everything because then you just waste tons of time, but you can be a little curious for a few seconds and say, I wonder what that means and think about it and imagine it and just get yourself into that kind of a mindset of wondering and speculating.

rather than recoiling at something that's unfamiliar. So that's what you could do as an individual. What can you do as an organization? There, I'm less enthusiastic. If you have an idea that's a bad idea and you make a mistake, that's the down arrow. Everybody can see it, right? It's public. So there's a lot of cost for making a mistake. If you fail to make an insight, nobody will know.

So the reward structure always favors the down arrow. Organizations, if they're really serious about this, can try to create a mechanism so that right now, if I have an idea, I bring it to my boss, I bring it to you, and you bring it to your boss, and it goes up the chain. It only takes one person in that chain to say, no, we better not do it, and then the idea is rejected.

And good ideas are fragile and precious and easily discarded. So they could give you, or they could give me, an option for a review if I think the idea has been rejected prematurely. I can't really appeal, I don't want to make trouble, but there could be

some part of the organization that could re-examine the idea. Because let's take an organization that wants to come up with new ideas for whatever it is that they're doing.

Who's running the organization? People with lots of experience. And I found in my research that experience is essential for coming up with insights. However, people with experience have lots of scar tissue of things that got tried before and failed. And if you look at innovations, usually there's a bad track record. People trying it and failing and then somebody else trying it and failing until eventually it succeeds.

But until you get to the point where it succeeds, you're going to have the experienced people, the senior people in the organization. What are they going to say? They're going to say, you know, we tried that and it didn't work.

This may be the time for it to work. Maybe the technology is mature. Maybe, you know, the world situation has changed. Who knows? But the people who are calling the shots can remember times when this was tried in the past and it failed. And often that's a good enough reason to squelch the idea.

I think that's a really interesting point, especially when it comes to experience, because the people at the top of the organization typically have the most experience, and they probably worked in a job very similar to the one that you're working in now.

The problem is their experience in that role is 15 or 20 years old. And I would imagine that the environment has changed a lot. Whereas the experience of your manager or your team leader or whatever might be more recent, and they would have a more accurate view of what the lay of the land is right now.

Right. So experience, I still found that most of the insights in the sample that I studied would not have arisen unless people had experience. I mean, Darwin didn't just sit in his study and imagine he was on the Beagle. He was watching these various species. He was looking at the variation. He was building his experience base. So experience is important, but

There are other things, there are other factors that are operating and experience is not infallible here. And the people who are calling the shots are risk averse. And that's another issue in organizations. The people at the top who are calling the shots don't want to rock the boat because they're getting ready to retire. They're getting ready to draw a pension.

My sense, I don't have data on this, is they tend to be risk averse. They want to keep things going smoothly until they reach the end of their career at the organization. I want to come back to something you said about experience. Is there a difference between experience and expertise? Right. There is a difference. And so one of the things I've examined is how do you know who's an expert? And I identified about seven or eight criteria.

And the discouraging part is none of them are foolproof. There is no gold standard for who's an expert. Years of experience certainly contribute to expertise, but it isn't the same thing, because there are people who just don't reflect on what happened. They don't learn from what happened. One of the other criteria I use if I'm going to

engage somebody's expertise is I'll ask them: tell me the last mistake you made. Let's talk about that. And if the person says, I can't think of any mistakes, to me that means this person may be competent, but is certainly not an expert. Experts are well aware of their mistakes, and their mistakes eat at them until they can sort of imagine,

What I should have done is this and figure out a way around it. So experts are highly aware of mistakes, but people who are journeymen, many of them stay as journeymen because they want to move on and forget about their mistakes. How do we identify the difference between people who know what they're talking about and not? And to this point, one of the most important decisions I think that we make

And we don't even recognize that we're making a decision is who we listen to and who we decide is credible and who we decide is not credible. And we don't recognize that we're making a decision in that moment. But how do we sort out who is credible and who's not or who knows what they're talking about or who's faking it? Right. That's a really important question. And I think most people do a mediocre job of determining who they can rely on.

And a lot of people go on surface characteristics. So if I carry myself, I stand up straighter than usual. I carry myself with confidence. People are going to listen to me.

And if I express reservations and I use qualifiers, I think this is the case, but I'm not sure. I'm essentially telling people I'm not entirely an expert here. And so I'm disqualifying myself. I'll give you an example. A friend of mine was excited about a scenario-based technique for training.

using decision-making exercises. And that's a precursor for the shadow box method that we're using now. And so he had some exercises that he had developed. He worked in a petrochemical plant

And he had about, oh, 30 years of experience there. And he said, you know, this would be great. Let's use this for people who have just been working outside,

checking valves and climbing ladders and things like that. Now they're inside working the panels, so they've been promoted. And there were two people who had just been promoted. He said, let's get these people up to speed more quickly and let's have them do these decision-making exercises. And one of the people was somebody everybody felt good about.

He just carried himself with great confidence and people were excited about working with him. And the other one was sort of tentative about things and they worried about him. And they ran them through the decision-making exercise and what did they find? The first person didn't really get it. He didn't really understand critical dynamics of the plant.

And the second person for all of his caveats really did get it. He understood what the relationships were, the different causal factors and how they work together. Now, the story has a bad ending because they decided...

The first person, despite all their enthusiasm for him, he wasn't ready. He wasn't going to be safe operating a panel of a petrochemical plant that could blow up. I mean, these are volatile compounds and lots of heat, and it's a dangerous environment. They said, he's not ready. He's going to have to continue outside.

for another year before we can use him on the panel. And the second person was somebody that they were ready to use. And after they did that, nobody wanted to engage with the decision-making exercise. They realized if you don't do well, it could have consequences. And they weren't intending the consequences to be used to evaluate people.

But because they used it in that way, nobody wanted to do any more of that training. That sort of begs the question, I guess, how do we evaluate people's decision making? How do you evaluate their decision making? You run them through a scenario, see what choices they make, find out what their rationale is, find out the reasons that they're picking one thing over another, and use that to determine if you think that their mental model is rich enough.

Another, as I told you, is to ask people: tell me about the last mistake you made. Or you can and should use years of experience, just don't take it all that seriously. So there's a variety of criteria that you can use. You can use their track record. Have they had a track record of successes? But none of these are foolproof.

I may be an investor and I have a track record of calling the market right in the last five times. And people say, wow, he really knows what he's talking about. But there are lots of people who are trying to predict which way the market is going to go. And some of them are going to get it right by chance. And people assume it's because they know what they're doing. And in fact, they're just the lucky ones. So you can't rely on performance.

And you can't rely on years of experience. To your point about explaining rationale and sort of why you made a decision and the variables that you consider and maybe how those variables interact over time,

I think organizations partly get in the way of this because they might write a summary of why they're doing something, but it's so high level that it doesn't actually contain much of the thought or reflection, which also prevents people from learning from other people how they're making decisions and developing expertise. You're exactly right. So what you're saying is that we need to go beyond a simple surface explanation of why people did things.

My colleagues and I use what we call cognitive interviews to get beyond the surface explanation, to find out: how are you sizing this situation up? What were you noticing? And, you know, what inferences were you drawing? And we can be asking people those kinds of questions to get into their head more deeply, rather than, I figured the market was due for a correction, or something like that.

I thought about an idea called a decision scorecard for an employee. I sit down with the employee, and I've actually used it several times, and I say, let's go back over the previous year. What were some of the major decisions you made? And I come to the meeting with what I think were the decisions. The employee comes with their decisions. Then we compare notes, and then we look at the decisions and say, okay,

which ones worked and which ones didn't. Now, some decisions may have been good decisions that didn't work through no fault of the employee. And some may have been bad decisions that worked because the employee got lucky. So you can't just look at the outcome, but you have to look at what was the person thinking about when they made the decision? And what can we learn whether it was a success or a failure?

And my experience is that this decision scorecard method of evaluating people

is much less stressful. And the employees and I enjoy it because we're all learning a lot as we go through it. Evaluating those decisions requires you to have knowledge of what you knew at the time, what you were thinking at the time, not retrospectively going back and trying to piece it together because now you have new information. And we've written a lot at Farnam Street on the concept of decision journals,

I'm curious as to what information, if you were going to create a journal that an employee had to fill out every time they made a decision, what information would you put in that decision journal to then use at these meetings where you're evaluating their decisions later on?

I think the idea of a decision journal is a great idea and I hadn't heard of that before. I would like to know when the employee is making a decision, what is the decision? What are the goals that the employee wants to achieve? The primary goal, but there may be other goals that the employee is aware of. What are the prime pieces of information the employee is using to make the decision?

Who are the other people or teams that are going to be affected by this decision? Those are the things that I'd like to examine. And then in retrospect, you know, we can see were there cues that I should have been paying attention to that I wasn't.

Are there goals that I should have been thinking about that I wasn't? But you have it in the decision journal. You have a record of what was the person thinking about at the time. I think it's a great idea. And for those of you listening, if you go to fs.blog slash dj for decision journal dj, you can get access to the template that we have online. Gary, I'll send that to you after.
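The fields Klein lists map naturally onto a simple record. Here is a minimal sketch in Python; the class and field names are illustrative, not the actual Farnam Street template:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DecisionJournalEntry:
    """One journal entry, capturing the fields Klein says he'd want to see."""
    decision: str                                              # what is being decided
    primary_goal: str                                          # the main goal the decision serves
    other_goals: list[str] = field(default_factory=list)       # secondary goals in play
    key_information: list[str] = field(default_factory=list)   # prime pieces of information used
    affected_parties: list[str] = field(default_factory=list)  # people/teams the decision touches
    # Filled in later, at review time:
    outcome: Optional[str] = None                              # what actually happened
    missed_cues: list[str] = field(default_factory=list)       # cues you should have attended to
    missed_goals: list[str] = field(default_factory=list)      # goals you should have considered

    def reviewed(self) -> bool:
        # An entry counts as reviewed once the outcome has been recorded.
        return self.outcome is not None

entry = DecisionJournalEntry(
    decision="Ship feature X this quarter",
    primary_goal="Grow retention",
    other_goals=["Keep on-call load flat"],
    key_information=["Q2 churn report", "support ticket trends"],
    affected_parties=["platform team", "support"],
)
# At the later review meeting:
entry.outcome = "Shipped late; retention flat"
entry.missed_cues.append("early warning in sprint velocity")
```

Separating the at-decision fields from the at-review fields mirrors Klein's point: the journal records what you knew and wanted at the time, so the retrospective isn't contaminated by hindsight.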

I also think it's really important that people write it down in their handwriting and not on a computer because it's so easy to look at things on a computer and convince ourselves that we didn't write that. And you're going to try to, you know, somebody else wrote that. I was clearly thinking way better than, you know, what it looks like. But when you're confronted with your own handwriting and your own thinking, it does two things. One, when you write it down,

It's actually a bit of reflection. So you're taking this complex decision and you're distilling it, you're reflecting on it. And then you often realize through writing it that you don't understand it as well as you thought, or you know where to go look for a different piece of information to more effectively complete your understanding of it or offer more value. And the other benefit to doing it is that you can look back in the past and you can sort of like figure out patterns in your decisions and

Right. Both of those are important issues.

The act of journaling is a way of helping you inquire better and be curious about things that you hadn't thought about. Yes, all of those are all of the above. I would like to see the decision journal when we're finished here. Awesome. I think we're actually going to publish one that people can buy on Amazon and write at some point this year. Maybe I'll get you to look that over before we do that. I would love your insight into that.

Happy to do it. One of the things, we're down a few rabbit holes here, and I want to come out of this, but you use the term mental models. What do you mean when you use the term mental model? What is that to you? This could be another rabbit hole, but a few years ago, my colleagues Joseph Borders and Ron Vazian and I did a study of mental models in petrochemical plants.

And going into this study, I thought that a mental model was a set of beliefs about how something works, whether it's a piece of equipment, a piece of machinery, or an organization. Here's how it works. Here's what happens. Here's the kinds

of components and how they fit together. So that was my belief going in. And we spent a week at a petrochemical plant running people through a challenging scenario to capture their mental model. And that was part of their mental model, but only part. We found that there was another part of the mental model of not just how does it work, but what are its limitations? What are its boundary conditions? Where can it go wrong?

And people who are experienced are well aware of these boundary conditions. And so that's part of the richness of their mental model. And then we found another component, which is: what are the workarounds that I can use if we run into one of those limitations? How can I recover from it?

And then we found that there was a fourth component. We have actually a matrix of this, a two by two matrix. The last component is somebody with a good mental model can also anticipate where people might get confused, what kinds of mistakes they might make, what kinds of flawed beliefs they might have, so that you need to be more careful as you describe what they're supposed to do.

So my idea of a mental model started out with how something works, and now it includes all four of these components, how it works, what its limitations are, how to work around them, and how to anticipate confusion on the part of people who I'm going to be working with.

Do you think that relates to the learning process? I think people learn with a learning loop. You have an experience, you reflect on the experience. That reflection gives you an abstraction and the abstraction turns into action. And so you have this loop of learning. When you learn a concept from somebody else, you're learning their abstraction. So you're putting an abstraction into use as your action.

And if you take the time to reflect, well, reflection is how we learn those edge cases, how we encode it in our mind where it's more likely to work in this scenario and not. And so in the first case, when you're learning from somebody else, I think you have like pattern matching, but it's exact pattern matching. It's like searching Google with quotes. Whereas in the second case, you're doing pattern matching, but it's associative pattern matching. What do you think of that?

That feels reasonable. I think to do that kind of learning, you're sort of exposing a problem where I get your abstraction, but I haven't lived your experience. So your abstraction, I might draw the wrong lessons from your abstraction. I don't know where it's coming from. And I could ask you to explain it. And language is a means for communication, but it's also a means for miscommunication.

because there's all kinds of ambiguity in the words that we use. And people use the same words and mean different things and then believe that they mean the same thing and can get themselves in trouble that way. So that's a limitation of having you explain your abstraction. A way around that limitation is to ask you about stories.

Where did you come up with this abstraction? What happened? Tell me what went on that changed your mind. And then I can ask you about that event. So you were in this situation and something happened that surprised you. What happened? And why were you surprised? And how did you notice it? And what sense did you make of it?

Stories are not hampered by the limitations of language because now we have an incident account that we can draw on. And many organizations don't take advantage of stories. And we've just finished a project for a petrochemical group about using stories to try to get at these kinds of expertise

and communicate them. Can you expand more on what role stories play? Are they just compressions of ideas? Are they sort of highlighting certain variables or details or omitting certain variables or details or getting us to anchor to something? Stories are doing all of those things. And they're doing it in a way that's really engaging. Like, here's what happened and I didn't know what to do about it.

And then you want to know what did I do about it? How did it end? And I know when people have recounted stories in a couple of groups that we studied when we were doing this project,

We'd say, okay, so what's the takeaway from the story? And everybody had a different takeaway. So that's the richness of stories is what have you learned from the story? What are the implications? And then we, as a small team, just compare notes and realize that because we have different mental models, we have different interpretations of the story and the importance of the story. What makes for an effective story?

There's got to be some sort of mystery, something that happened that wasn't expected. And then the people listening to the story want to know how it wound up. Second is the storyteller.

should be somebody who likes to tell stories. And I've encountered some people who don't like to tell stories. And I'll ask them about stories and they say, I can't remember any stories. So either they're avoiding the issue or their brain isn't wired to be able to access stories.

Or some people think they're telling a story when they just give me a narrative. This happened and that happened and we go through what happened, but there's no story. It doesn't move. It doesn't evolve. There's no transformation.

And the good stories result in an insight. That reminded me, when you were saying that, of the cop and the ashes, right? So he's looking for the contradiction. So he's got a story in his mind, in a way, about how the world works. And we all do when we're driving, because we're not consciously paying attention to the road, but we notice anomalies. So we notice something that is unexpected. And in a vehicle, that's our cue to pay attention

as a driver of a vehicle. For whatever reason, we get encoded with this after years of driving. It's like, okay, now I'm switched on. Maybe my heart races a little bit, but I react. I get out of my trance and my body awakens. It's like I'm in low power mode and then I wake up. Yeah.

But at work, we don't do this. We get woken up, we have a surprise or a contradiction, and then we ignore it or we explain it away or we move on to the next task, instead of going from low power mode to full power mode: what's going on here? Because the surprise would indicate the world is not working the way that I thought it should. So what you want to have is a way to harness the power of curiosity when something didn't happen the way it was supposed to.

And our natural tendency is to dismiss it, to ignore it, and to explain it away.

And you think, well, why can't people think like scientists? Because scientists are careful. But the researchers who have investigated this find that scientists do the same thing. They have something called knowledge shields, the way all of us do: a variety of techniques for explaining away inconvenient data and anomalies. So we have a story about how the world works, and then we notice an anomaly.

And rather than try to fit that anomaly in or explore the anomaly, it's easier, a lower power mode for us, to just continue with our story the way that we're telling it to ourselves and ignore or dismiss the anomaly. That's why we use these knowledge shields, and scientists use the knowledge shields too. Scientists don't want to have anything challenge their theories.

And so you bring up a data point that's going to make my pet theory look a little bit sketchy. And I'm going to say, well, how did you collect those data? Or I'm not really sure that you've interpreted the data correctly. I'm going to find all kinds of ways of dismissing it. Yes.

It leads to fixation because one of our strengths when we encounter a situation is we quickly make sense of it. We use the patterns that we've built up and it usually works, but it doesn't always work. And so some people say we should discourage people from coming up with immediate reactions, but that's ridiculous because that's not the way we think and it would cripple us. So instead you want us to come up with a quick reaction

But if we're wrong, there's going to be an anomaly, and we want to be able to revisit it. The way we break out of fixation is we notice the anomalies. The way we get stuck in fixation and make fixation errors is we explain away the anomalies and hold on to the original wrong impression until it's far too late. Now, for your decision journal,

One of the things that you could capture is: what do you think is going on in the situation now? Describe what's going on. And then, as time goes on, maybe people can jot down data that they receive that are inconsistent with that belief. So you can track when it is that you first started to lose faith in your initial impression.

Could you have questioned it earlier? Did you wait longer than you needed to? I want to switch gears just a little bit. How do you think about the difference between people who stagnate at, say, an intermediate skill level and people who reach mastery of the same skill? Right. So this gets into the issue of expertise and how people get to the higher levels of expertise.

And some models of expertise say that people just acquire more and more information and that's how expertise builds. But that doesn't work because that's like having unexamined experiences.

And you're not coming up with a richer mental model. My speculation, working with a colleague, Holly Baxter, is that people get up to a certain level of performance and then they start to stagnate and they start to plateau. And people who break through and move to the next level are the ones who engage in unlearning.

who realize there are certain conventions they've bought into or beliefs they hold that are either wrong or are limited and don't apply as broadly as they imagine. Is there any sort of practical things that you've seen work particularly well in the wild for people developing mastery in less time, say, than people would expect? So let's imagine two people.

Starting out a job, we'll say worker A and worker B. They're both going to be on the job for a year. And worker A has a lucky year and everything goes smoothly. And there's no disruptions, no problems. It's just a really very, very low stress year. Worker B has had a turbulent year.

All kinds of things have gone wrong. All kinds of adaptations have been necessary. There were mistakes made that had to be identified, diagnosed and recovered. And one thing is like the whole year has been filled with all of this.

Who is the one that you would like to have working with you after that first year? Well, I would choose person B because the surface area of their learning is so much richer and detailed. Exactly. So the second worker, with the turbulent year, has learned much more. What can you do for worker A, who's had the smooth year?

you can take some of these crises that the second worker had and encapsulate them as stories or as scenarios. And you can present them to worker A just to give them vicarious experiences to get them to move forward. So that's something that you can do. And we've seen organizations

doing things like that. And that's actually one of our decision training methods: to provide those vicarious experiences. That's really interesting when you think about organizations as sort of trying to reduce the variability or reduce errors. And then...

at the same time acknowledging that errors create surface area for reflection, experience, expertise, and learning. Yeah, but where people spoil it is they then take it one step too far and say,

the more errors you make, the more you're going to learn, which is true. And so we should encourage people to make errors. And they don't mean that. Organizations don't want to encourage people to make errors, especially in some of the industries that I work in, where errors can be very dangerous. And the fact is, I don't like to make mistakes. If I do something, if I put on a workshop that has gone poorly, my immediate reaction is,

I never want to put another workshop on again. This is just too painful. And it takes me a few days to get out of that emotional funk and to think about it and say, you know, if I had done it this way,

I could have used that problem and it could have been much better. And so by the end of that kind of rumination, that kind of reflection, my reaction is, I can't wait to do another workshop because now I want to try that again.

But the immediate reaction is one of being pretty devastated by a mistake. And so it's going to be a hard case if you tell me, Gary, try to make mistakes. That's how you're going to learn. It's not in my emotional makeup. Maybe it's in yours, Shane, but it's not in my emotional makeup. I don't know if I intentionally make them, but I certainly have more than my statistical fair share of them. What is cognitive flexibility theory?

Cognitive flexibility theory is the notion of trying to help people achieve expertise by preventing them from locking in to routines and standard ways of doing things so that they can become more naturally adaptive.

You can give them vicarious experiences that can't be handled by the usual routines just to force them out of their comfort zone and to get them into a mindset of not only being prepared to be adaptive, but enjoying being adaptive. Because that's another thing that I've noticed about experts.

If they're doing the same job over and over again, they get bored. And then something happens and they can't use the same routines. And the journeymen, they get really frustrated: that's not supposed to happen. I don't know. Who do I call? And the experts, their eyes light up. Okay, we can't use the techniques that have been successful for us up to now. What can we do? What can we invent?

And so the eyes lighting up is like, to me, a key to cognitive flexibility theory is to get people into that mode where they're excited when things don't go as planned and they're going to have to improvise.

Coming back to the beginning, we've talked a lot about gaining better insights. Let's talk about some of the tools that we can use to reduce errors. Maybe we can start with a premortem. What is it? Why is it useful? How do we conduct one? I just sort of invented it for my company, a company I was running back in the late 1980s.

And we mostly had successful projects, but not always. Sometimes our projects would fail. And then we would do an after action review. We'd say, what went wrong? And we'd realize, oh, if we had just done this differently, it might not have ended so badly. And I said, we always have a kickoff meeting when a project starts. Why don't we move that after action review to the front, like a postmortem?

A postmortem is something that you do like in a hospital when a patient has died. And a physician does a postmortem to find out why did the patient die. And the advantage of doing a postmortem and discovering what was the real cause of death or the causes of death is that the physician gets smarter, now has a richer mental model. So it helps the physician.

And now the physician can tell the family, here's why your loved one died. Because they really want to know. And now you can help reduce their pain by giving them an answer. And they can work with that. Everybody benefits from a postmortem except for the patient because the patient is dead.

So we said, instead of doing a postmortem for projects that fail after they fail, let's move it to the beginning. That's why it's called a premortem. And the way it works is, if we're on a team, we take everybody on the team. We're all sitting around the table. And usually we do it at a kickoff meeting. Here's how we're going to carry out the plan.

Now we've spent an hour, hour and a half just through the kickoff meeting. What are we going to do? Who's doing what? What are the roles and functions? We've got all that nailed down. And we've got about 20 minutes left and enough time to do a premortem. And I say, okay, now we're going to do the premortem. Everybody just relax. Sit back in your chairs. Now I'm looking in an imaginary crystal ball.

I actually do have a crystal ball, but I don't carry it around with me all the time. I'm looking at an imaginary crystal ball, and it's now six months from now, and this project has gone off the rails in a major way. It's been a disaster. It's been a failure. It could be six months. It could be a year. Pick your time frame. What matters is the crystal ball is showing failure.

Now, the crystal ball isn't showing us why it failed. It's just showing us that it has failed. Not much is certain, but the crystal ball is infallible. It never lies. Now, everybody around the table, you have a pad of paper in front of you and a pen. Take the next two minutes and write down all the reasons why this project failed. And then I have a timer and I start the timer. And then everybody's writing like mad for the next two minutes.

And then I say, time's up. I've got usually a whiteboard or on Zoom I'll have a virtual whiteboard. And I'll say, now I'm going to go around the room and I'll start if I'm a project leader. Here's the top of my list. Here's the thing that I mentioned. And usually it's something that never came up during the meeting.

The leader and the facilitator could be different people, but in this case, we'll say it's the same person. I want to set a good example that I'm not just coming with something that's absurd, but something that's real. And then I go to the next person. Shane, what do you have on top of your list?

And then, Dave, what do you have on your list that neither Shane nor I have mentioned? And we go around the room and I'm writing these down. And usually we go around the room a second time, sometimes a third. And we have a long list of things that would explain why this project failed. And most of them are things that people hadn't thought about.

And now we're harvesting the collective wisdom and the scar tissue of the people in that room. And we're seeing all the ways this plan could fail, this plan that we were so confident in, so overconfident in. Now our overconfidence has been diminished.

Now, some people have worried that maybe the confidence level got too low. So then we added a last part of the exercise. Look at everything on the whiteboard. Look at all of those items. Now, let's take another two minutes and write down each of us, what can I do personally to try to reduce the chance of some of these happening?

And then people are writing down what they can do. And that's a way of trying to reduce the risk. And so that's how we run a premortem. What we find is it surfaces ideas and flaws that people hadn't considered. But it also creates a culture of candor in the team where people

are starting to get used to expressing problems rather than covering them up. And it creates a sense of trust that I can say something and I'm not going to get criticized for it. The way the premortem works is it reverses the usual dynamic of these meetings.

Often at the end of a meeting, somebody will say, all right, we're just about done with the meeting. Does anybody see any problems? Nobody wants to identify a problem. We've just spent the last hour and a half discussing the plan. Nobody wants to admit that there's a problem. People aren't even thinking about problems. They're all in a go mode. Let's get started.

And there could be consequences of exposing problems. With a premortem, we've reversed that dynamic. The way you show you're smart in a premortem is the quality of the items that you generate. I think you hit on something that is hiding in plain sight, that most people miss, and that we've spent quite a bit of time on ourselves, which is how you run more effective meetings by changing how people signal

value. And you can do this by asking people for unique insights into the problem that nobody else in the room has, instead of everybody coming in and paraphrasing the executive summary, which everybody's read, because they want to signal that they've done the work and they should be in the room and they know what the problem is, even though they're just regurgitating the same terms over and over again.

I think it's really a key factor of the premortem that you're changing how you signal expertise to your coworkers: by coming up with something valuable and uniquely insightful about the problem. Right. So it's sort of like, it's almost theatrical. I mean, there's a certain performative quality.

And we try to do it quickly. I just want to get everybody's entry very quickly so that the energy level stays high. And I want to get one from each person, because often one of the senior members of the team might want to give the entire list, and then everybody else is just in receive mode. And I want to get out of that mode. I want everybody ready. What are the common mistakes that people make in running a premortem?

One of the mistakes is that they just frame it wrong from the beginning. They'll say, okay, we should do a premortem. So what can go wrong? It's not an issue of what can go wrong. We're dealing with a crystal ball that has shown that the plan has failed. We know that it failed.

And our job is to explain why. So saying 'what can go wrong' is too tentative, too vague. And that's, I think, maybe the most common mistake that people make: they don't frame it that way. Another mistake is they let individuals talk too long and everybody else is waiting. This way, we're going around the room at a pretty rapid clip.

And everybody just states one thing at a time. And a third mistake might be that the leader gets cold feet and says, I just want to hear what everybody else has. So instead of creating a culture of trust and candor, the leader is creating a dangerous environment where people can get in trouble and feel threatened, while the leader is

shielded and protected. When we think about reducing errors, one of the common things that comes up is cognitive biases. I'm curious as to what you've seen, what you think about trying to limit cognitive biases. We've talked about one, which is fixation error. And I have my own theory on this, but I'd like to hear your views. All right. So I have to be careful about this, and just to caveat what I'm about to say, that I'm an outlier.

The whole issue of decision and judgment biases is a major line of research. And the heuristics and biases community has been extremely effective in the research they've done and the applications that they've put this work to in the judgment and decision community. So I'm...

I want listeners to understand that the dominant view is that these biases are important and need to be taken into account. Having said that, I don't agree. I don't like the idea of decision biases. I know my sense from the literature is that there's been little, if any, success in de-biasing people. And I think that's a good thing.

because the biases that get identified are essentially related to the heuristics that we've learned and are part of the experience that we have. And they only look like biases in hindsight, when one of them doesn't work. But it's a heuristic. It's not an algorithm. So sometimes heuristics don't work. And the way the research has been done is you give people a task

where the heuristic that they're used to applying is not going to apply in this situation. And you show that people fall into the trap that you've set for them. And you say, you see, their heuristics are flawed, and many people, most people, fall prey to it. But nobody has done research on the positive side of these heuristics.

And so I think that's a bias on the part of the decision bias researchers, that they're only looking at the down arrow. How do these heuristics get us in trouble? And they're not looking at the strengths of these heuristics. And, you know, Danny Kahneman and Amos Tversky, who developed these ideas, they themselves stated that these heuristics are generally useful, even though they're not perfect.

And what's happening is the community is setting up these studies, because it's easy to set up studies where the heuristics get you the wrong answer. And so there's lots of these studies performed, and that's the message that gets compiled, without looking at the upside. So I've written about positive heuristics: all the major heuristics that get criticized are extremely valuable for coming up with insights and making decisions.

So the availability bias. Of course that's going to be helpful if I want to draw on my experience in order to know how to work in a situation. It's not going to be perfect, but that's what my experience buys me. The same thing for the representativeness bias. The anchoring bias: I'm going to anchor on a number. Now, sometimes people do foolish things. If you just give them a random number, they anchor on that. So people are capable of foolishness.

But generally, I've got to start somewhere. And so I draw on my experience to see what this is like, and I'll anchor on that, and I'll adjust from that, because my initial impression, if I have a reasonable amount of expertise, is going to be accurate. As for the heuristics that get criticized, we would be crippled without those heuristics.

And I've created a thought experiment, which is, you know, people worry about emotion clouding our intellect and getting in the way of good decision making. And as it turns out, there's a part of the brain, I forget what the structure is called, where emotion and decision making come together.

And there are some people who have a lesion in this part of the brain. And this is work done by Damasio 15 years ago or so. And he studied people with lesions in that part of the brain where they were unable to draw on their emotions to make decisions. In other words, they were purely rational. They were like Mr. Spock. Their intelligence didn't suffer. Their IQ was as high as it ever was before the lesion.

but their lives became miserable. It would take them 45 minutes to decide what restaurant to go to. They got divorced, they lost their jobs, because they could not use their emotions, because our emotions are a way of drawing on the patterns we've built and on our experiences. So here's the demonstration, the thought experiment I have. If you are a strong believer

in the idea that emotions cloud our thinking and interfere with our decisions and our judgments. We know where this area is. We can use laser surgery, triangulating different laser beams, to create a lesion in this area so you will no longer be troubled by emotions when you make decisions.

And my challenge is to any of the judgment and decision researchers who worry about emotions clouding our decisions: would you have the surgery done? I have yet to find a taker. And I've said, I will pay for the surgery myself.

Now, this would be a great experience, a great opportunity. Nobody has taken me up on this offer. Talk to me a little bit about making decisions in teams of people versus an individual making a decision where boards or committees are sort of asked to come to a decision. What are the pros and cons of a team-based approach? And what have you seen as more effective ways to get teams to make decisions?

In team decision making, teams use different kinds of strategies, and there are different formats. You can have an autocratic team where a leader just makes a decision. You can have teams voting. You can have the vote be anonymous if people are afraid of repercussions. There are situations... like, let's take a pirate ship.

In the old Caribbean days when you had pirate ships around, pirate ships were extremely democratic. There was nobody in charge. They voted who was going to lead them. They voted about what targets they wanted to attack. But once they were in battle, there was a captain.

And the captain's orders had to be obeyed because under that time pressure with that kind of urgency and those kinds of stakes, you needed to switch your decision strategy to something more autocratic. So there's no general rule for how a team should make decisions. It's going to depend on the situation and the context. I can give you an example of what not to do.

There was a movie with Matt Damon a number of years ago called The Martian. And Matt Damon is part of a team that went to Mars, and something happened and they needed to depart. And they gathered everybody together, but they couldn't find Matt Damon. He wasn't available, and it was urgent that they depart. And so they departed without him. They assumed that he had gotten injured or killed. That's why he wasn't there.

So now they're taking off from Mars, they're heading away. And Matt Damon somehow finds his way back to the station and radios them and said, where are you? They realized he's still there. So the decision is, do we go back or not? And the people on the spaceship know that if they go back, it's going to be risky. So they said, let's see if we can come to a consensus about what to do.

I don't like the idea of consensus decisions. And I don't like the idea of a consensus decision in this kind of a dangerous environment. Because as the people in the spaceship went around, there was enormous pressure on everybody to go along with the consensus, which was: we should go back and rescue him. And it was public. And they went back, and they risked all of their lives in order to save him.

What they should have done is had people vote in a way that was secret. And I see this with people who are cross-country skiing in the back country. And you don't want to just cross-country ski where it's level. You want places where there's hills. But if the hills are too great, then you have a risk of an avalanche. And at a certain point, they'll look ahead and say, this could be risky. Do we want to continue or not? And it's the same dynamic.

There's a lot of pressure on people to inhibit their fear and go along with the consensus, rather than finding a way to vote anonymously and realizing that one or two of the members don't want to do it.

Having votes that are anonymous is important. And I also don't like consensus decisions because, in other situations, the consensus tends to be fairly risk averse, unlike with The Martian.

And people want to have something that everybody can agree with. And you've lost your opportunity for somebody who's had an insight. Somebody has a new idea of trying to put that in practice. But the consensus is to try to do the safe thing that's comfortable that everybody can live with rather than to do something that's bold and innovative.

So I have problems with consensus decisions. I have problems with group decisions. I think groups, as a rule, like a heuristic, make terrible decisions. Individuals should make decisions. Groups should provide input. But there should be one sort of person making the decision. But that's my personal sort of experiences that I've lived through, through all of the different roles that I've had in life.

I would agree with that. I was trying to answer your question: how should teams make decisions? No, this is too important, though, right? Because a lot of people agree with that, but they can't change it. So they're part of a committee, and the committee is making a decision, even though everybody on that committee might think it would be more effective if one person

signed their name to the decision, which also would increase the accountability of the process, believe it or not. Because often what I've found with group decision making is that when the decision is correct, everybody in the room was responsible for it. And when it was wrong, everybody in the room tried to convince them not to do it. And so there's no learning that happens from these. But

in the real world, a lot of organizations still insist on making decisions as a group. So the question was sort of like, how do we make more effective decisions as a group? And I think you answered that beautifully. And the second question is, how do we illuminate the insights in a group setting when we are making a decision as a group? How do we find ways to...

surface unique insights into the problem that we might not know. Do you have any experience with that? Oh, well, the premortem method. Right.

It can accomplish that to a great extent. And I think what's important about the premortem method, and what could also be used generally in a group and team environment, is to have people craft their ideas and their interpretations and their assessments individually and privately before they surface them to other people. And the research on brainstorming, where we're all brainstorming together...

The research shows that brainstorming does not produce more ideas or more innovative ideas. And I think teams and groups would be better off having individuals generate their own concepts.

And then after they've done it independently and privately, sharing it with the others. And that would be a good way to generate insights that are hard to surface about why the project might fail. Is there a way to even bring those forward into how we're proceeding with the project? Like, how do we get the insights on the table, even if those insights are defining the problem? Because before you can solve it, you have to define it. And often we have very different... If you were to...

This is a fun experiment, and I suggest everybody who makes decisions in group settings try this. When you go to a meeting to make a decision, have everybody in the room write out, on a piece of paper, the problem that they think they're solving with this decision, and then compare how different and how much variance there is in those problem statements.

And so often somebody has a more unique... What tends to happen in the real world is you go to a meeting, somebody surfaces what the problem is, it clears the minimal standard in everybody's mind, and then, being all type A

individuals, we jump on solving the problem, and we forget to go back and say, do we have the right insight into the problem? Have we defined the problem? And this is where, if you have a sole decision maker instead of a group, you can acknowledge that the responsibility of that decision maker may be to listen to other people's definitions of the problem, but ultimately they're responsible for defining the problem for the group and for themselves.

Okay, so I have a bunch of reactions because you've said a lot of very interesting and exciting things just now. So first of all, I agree with you. Better to have a single leader that they trust. They don't have to think that the leader is infallible, but they just know that the leader is going to be effective at making the most of the team's resources. But one of the traits that you want in a leader

is somebody who can honestly query the team members and see what they think. A lot of times that querying is done in a rote way that doesn't accomplish much. So I know that I'm the leader

but I'm not supposed to say, here's what we're doing. I'm supposed to have some guise of being open to other ideas. So I say, so let's go around the table and see what other people think. What do you think? Okay, good. What do you think? So we're just checking the boxes so that I can say, but I went around the table and I learned what people had in mind. That's not the way to do it. Instead of just checking the boxes,

You want to be curious about what do they think? What are your thoughts about this? In what ways do you think, do you have a different sense of what's going on here or what our goals can be? And let's capture that. So that's the way to go around the table, not in a rote fashion. You're starting to get into the area of wicked problems and difficult problems.

And I'm doing a project on that right now with two colleagues, John Schmidt and Sean Murphy, on wicked problems and how to proceed with wicked problems. And we're finding that the notion of the goals is going to get richer, and it's going to change as we proceed, as we go along.

And you want to capture that. You don't want to lock in to original ideas because almost certainly our original ideas are going to be inadequate. Now, there's also a way to do project reviews that could try to capture more insights. In many organizations, if we're doing a project, well, I'll say, okay, Shane, you're doing this project. Let's have a review every three months.

And now it's a standard review. What resources have you expended? What did you project that you were going to expend? What tasks did you accomplish? Where are we in the milestone chart? And we go through all of those things. But I can also ask you in that review meeting, Shane, in the last three months since the last meeting, has anything happened that surprised you? If we're dealing with a complex situation, with a wicked problem,

That's different from something like painting a room, where everything is predictable. So I say, has anything happened in the last three months that has surprised you? And if you say, no, no, nothing has surprised me, you don't have to worry. That's when you have to worry, because I want to know what surprised you. And now we can start to examine that and be curious about it and see what it might mean

and how we can change our notion of

the problem or the goals or any of the important parameters. I want to switch gears a little bit. Often when we're making a decision, there are two really similar options that are, I wouldn't say equal, but near equal in terms of their anticipated effectiveness at solving the problem. And we spend a lot of time, money, and resources trying to make a perfect decision.

Can you explain the zone of indifference to us? Right. I love the idea of the zone of indifference. And the phenomenon, the way the phenomenon works is if I've got two choices, a terrible option and a wonderful option. Quick, which one do you pick? Okay. That's not a hard decision. As we move them closer together,

So now the terrible option has some positive features, and the great option has some negative consequences. And we get to this point where it's really hard, because the strengths and advantages of one versus the other are almost totally balanced. These are the hardest decisions people ever wrestle with. And the paradox is,

If the advantages and disadvantages of the two options are almost perfectly balanced, it doesn't matter which one we pick. And yet we will spend days, weeks going around in a funk. Which one am I going to choose? Committees spend hours going over it.

And if there's anything that can be valuable and efficient in what I'm discussing, it's this. When you think you're in a zone of indifference, to recognize I'm in a zone of indifference, I'm never going to be able to tease them apart. So I'm just going to pick one and spend my time in more fruitful ways.
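Klein's rule of thumb here can be sketched as a tiny decision helper. The numeric scores, the tolerance threshold, and the coin flip below are hypothetical illustrations of the idea, not anything Klein prescribes:

```python
import random

def choose(score_a: float, score_b: float, tolerance: float = 0.05) -> str:
    """If two options' overall scores fall within a small tolerance of each
    other, we're in the zone of indifference: pick one arbitrarily and move on."""
    if abs(score_a - score_b) <= tolerance:
        return random.choice(["A", "B"])  # near-balanced: more analysis won't help
    return "A" if score_a > score_b else "B"

print(choose(0.82, 0.45))  # clearly separated: prints "A"
print(choose(0.61, 0.63))  # zone of indifference: arbitrary pick, "A" or "B"
```

The point is the shape of the rule, not the numbers: once the difference drops below the threshold where you can reliably tease the options apart, deliberating longer buys you nothing.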

Even if it means flipping a coin. I like that a lot, because I think we do spend so much time; often we get paralyzed by perfection. And I have a heuristic called Stop, Flop, or Know that I use to get myself out of this paralysis. Stop is you've stopped gathering useful information.

Flop is the first lost opportunity, so you're about to lose an option or opportunity. And Know is you know what to do. So Stop, Flop, or Know can get you out of this sort of paralysis. Does that work for you? Yeah, it sounds like a reasonable strategy. Oh, it's not perfect; it's a heuristic, again. But talk to me about ShadowBox. So here's how ShadowBox works.

It's primarily a scenario-based approach. I create a tough, challenging scenario and we go through the scenario. Usually it's in text and I stop it part of the way through and I say, here's decision point one, Shane. At this point, here's four different options, things you can do, courses of action.

Rank order which you would do, from top to bottom, and then write down your reasons, sort of like your decision log. And then we continue the scenario, and then we stop it again: decision point two. Here are three goals that you might pursue. Rank order the importance of each of these goals and write down your reasons. And then we continue, and then we stop it again. Here's decision point three.

Here are five pieces of information that you can pursue that would help you. Rank order the importance of each one of them and write down your reasons. And it could be anything like that. And we just go through the scenario that way. At the same time, I've had a panel of maybe three experts. They've gone through the same scenario that I'm showing you. They've done their ranking and written down their reasons. We've combined their rankings.

We've synthesized the reasons. So now when you go through this scenario and we get to decision point one, here's four different courses of action and you rank order them and you write down your reasons. And then we say, let's see what the experts picked. And then immediately we show you how your ranking compares to the experts. And you want to match the experts. You're really trying to get your ranking to match theirs.

So that's the game part. You want to get that to match. But the real learning is to look at what the experts wrote down as their reasons. Here are the things they noticed. Here are the things they inferred. Here are the things they worried about. Here are the opportunities they saw. And you look at what you wrote down and what they wrote down, and now you are seeing the world through the eyes of experts. But the experts don't have to be there, because that would be a bottleneck.
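The "match to the experts" that Klein describes could be scored in several ways. Here is a minimal sketch using pairwise rank agreement; the metric, the option names, and the scoring choice are all my assumptions for illustration, not details of the actual ShadowBox tool:

```python
from itertools import combinations

def match_score(trainee: list[str], experts: list[str]) -> float:
    """Fraction of option pairs that the trainee orders the same way
    as the expert panel's combined ranking (1.0 = perfect match)."""
    pos_t = {opt: i for i, opt in enumerate(trainee)}
    pos_e = {opt: i for i, opt in enumerate(experts)}
    pairs = list(combinations(trainee, 2))
    agree = sum(
        1 for a, b in pairs
        if (pos_t[a] < pos_t[b]) == (pos_e[a] < pos_e[b])
    )
    return agree / len(pairs)

# Hypothetical decision-point options for a scenario
trainee_ranking = ["hold position", "call backup", "advance", "withdraw"]
expert_ranking = ["call backup", "hold position", "advance", "withdraw"]
print(match_score(trainee_ranking, expert_ranking))  # 5 of 6 pairs agree: ~0.83
```

A score like this makes the feedback immediate, but as Klein says, the real learning comes from comparing the written reasons, not from the number itself.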

They've done their work, and now we just have what we recorded from them, and we can provide it to you as feedback. So you're seeing the world, at least this scenario, through their eyes. So that's how ShadowBox works. And we show that people's match to the experts increases by about 25% in just half a day of training. Now there's another version of ShadowBox

which is video-based, where I'll have a one- or two-minute video of something. And I'll say, here's a situation. It could be a police officer, using body camera footage, or it could be something that's staged in a hospital.

And if it's a hospital, we say, tell me what's happening in this situation. And you watch the scenario from beginning to end. And then we say, now let's start again. And as we go through the scenario, you can pause it at any point you want. And then there's a little circle, and you can put the circle around any cue that you think is important to be watching.

And you do that successively as you go, and you write down why you highlighted that cue, and we go through the scenario. And then after you finish that, we say, now let's go through the same scenario, and we've had experts do the same thing you did. Let's see what the experts were focusing on, and when they were focusing on it, compared to you.

And then what reasons they gave. So this really is seeing the world through the eyes of experts. I love that idea, and I think it's really effective, and we're exploring using it with some of our stuff. When you mentioned the police officer and the body cam, a question that came up for me is: it's one thing to sit

in a room with air conditioning and watch this police officer approach a vehicle. It's a whole other thing for the police officer to be approaching the vehicle.

and be in a different environment. And that sort of brings me to the question of what role does environment play in our decision making? And are there things that we can do to improve our environment that improve our decision making? That's a very important observation because this has come up in some of the work that we've done with use of force decisions.

where if you're a police officer, you might be in a situation: should I use force or not? And sometimes that's just part of the job; that's a horrible part of the job for police officers. But we find that there may sometimes be ways to avoid getting into that decision. The way you structure it, the way you frame it, the way you manage it

can possibly defuse the situation, so that you don't necessarily have to get into a confrontation, that you might be able to avoid it. So yes, structuring the environment is a much better approach. And this is something we have done a little bit with, not as much as we would like.

And when I say we, this is my friend and colleague John Schmidt. How do you get voluntary compliance rather than intimidation? A number of police officers only know one dominant way of getting civilians to comply with them, which is to threaten them, to intimidate them. But we did a project a number of years ago,

my wife Helen and I, to try to see what other techniques there are for gaining voluntary compliance, for avoiding these kinds of direct confrontations. The project was funded by DARPA, the Defense Advanced Research Projects Agency, and its nickname was the Good Strangers Project. And we studied military personnel and we studied police officers

who were known to be good at defusing situations, at creating an environment that was benign rather than high-conflict. And one of the things we learned: I remember one police officer that we interviewed. It was a small office,

and he was a large police officer; it was hard for him to fit into this office. Not only was he large, he was extremely strong. His arms were bigger than my legs. He was a really strong police officer. And he said, I joined the police because I liked the action, and it was exciting. But I noticed, as my career was going on,

I would sometimes get in fights with people, and I almost always won the fight, but there was damage. I was accumulating damage. And then I looked at one of my colleagues. He could get people to do things for him, just by the way he carried himself and the way he talked to them. And I said, I can't continue much longer fighting

in the mode that I am, what's he doing? What's his technique? And he said, ever since then, I've been trying to come up with golden words and I've been jotting them down when I see other police officers use them. And I'm trying to build a repertoire for how to gain voluntary compliance. One of his rules of thumb, maybe the most important one,

in my view, is this. He said, whenever he's in an encounter with a civilian, or even with a lawbreaker, I want that person to trust me more at the end of the encounter than at the beginning. And that governed the way he carried himself: he was carrying himself in a way that would move the needle toward

greater trust rather than diminished trust. And in terms of mindsets, that's such an important mindset shift. His previous mindset was: I want to get people to do what I tell them, so I've got to be tough and I've got to be threatening. And his new mindset was: how can I get them to do what I want and have faith in me?

I think that's a great place to end this conversation, Gary. Thank you so much for your time. Thank you very much for the opportunity. I've really enjoyed the conversation, Shane. The Knowledge Project is produced by the team at Farnam Street. I'd love to get your advice on how to make this the most valuable podcast you listen to. Email me at shane at fs.blog.

You can learn more about the show and find past episodes at fs.blog slash podcast. To get a transcript of this episode, go to fs.blog slash tribe or check out the show notes. Can you do me a small favor? Go online right now and share this episode with one friend who you think would love it. Thanks for listening and learning with us. Till next time.