You yourself are not going to solve the obesity epidemic. You yourself are not going to create world peace. You yourself are not going to solve climate issues. Your brain just isn't going to be big enough. Collections of people, by creating a larger ensemble model, actually have a hope of addressing these problems.
Hello and welcome. I'm Shane Parrish, and this is another episode of The Knowledge Project, a podcast exploring the ideas, methods, and mental models that help you learn from the best of what other people have already figured out. You can learn more and stay up to date at fs.blog.
Before we get to today's guest: I get emails all the time from people saying, I never knew you had a newsletter. We do. It's called Brain Food, and it comes out every Sunday morning, usually at 5:30 a.m. Eastern. It's short, and it contains our recommendations for articles we found online, books, quotes, and more.
It's become one of the most popular things we've ever done. There are hundreds of thousands of subscribers, it's free, and you can learn more at fs.blog/newsletter. That's fs.blog/newsletter. Most of the guests on this podcast are subscribers to the weekly email, so make sure you check it out.
On today's show is Scott Page, professor of complex systems, political science, and economics at the University of Michigan. I reached out to Scott because over Christmas, I read a book that he wrote called The Model Thinker, which is all about how mental models can help you think better. And as you can imagine, this podcast is a deep dive into mental models, thinking tools, and developing your cognitive ability. It's time to listen and learn.
The IKEA Business Network is now open for small businesses and entrepreneurs. Join for free today to get access to interior design services to help you make the most of your workspace, employee well-being benefits to help you and your people grow, and amazing discounts on travel, insurance, and IKEA purchases, deliveries, and more. Take your small business to the next level when you sign up for the IKEA Business Network for free today by searching IKEA Business Network.
You just wrote a book called The Model Thinker, and I want to explore that with you. What are mental models?
A mental model is really just a framework you use to make sense of the world. The Model Thinker, the book, contains three things. One is a general philosophy of models. One is a collection of models that you can play with and understand. And the third is a set of examples of how, in practice, one would apply a variety of models to a problem.
When I think about a mental model, as opposed to a standard mathematical model, the difference is that with a mental model you have to map reality onto the mathematics. Suppose someone says, you should use a linear model here to decide who to hire: take your data and just put it into a linear model. Well, you still have to decide what the variables are. The linear model might contain things like grade point average, work experience, a personality test score. You have to think about which features of reality you attach to the mathematical framework that exists out there. So what I try to do in the book, and also in my own work, is think about exactly that mapping.
The mathematics is beautiful because it's logical, it's right. But reality is messy, confusing, and complex. And so what I see mental models as doing, in some sense, is mapping reality onto the clean, logical structures of mathematics.
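To make that mapping concrete, here is a minimal sketch, with invented candidates and weights. Picking the variables and the weights is exactly the modeling judgment being described, so treat everything in it as an assumption for illustration, not a real hiring rule.

```python
# A minimal sketch of mapping a hiring decision onto a linear model.
# The variables chosen (GPA, years of experience, personality score)
# and the weights are hypothetical -- choosing them IS the modeling act.

candidates = {
    "Avery": {"gpa": 3.6, "experience": 4.0, "personality": 7.5},
    "Blake": {"gpa": 3.9, "experience": 1.0, "personality": 8.0},
}

weights = {"gpa": 2.0, "experience": 1.5, "personality": 0.5}  # assumed, not estimated

def linear_score(features):
    """Score = weighted sum of the features we chose to include."""
    return sum(weights[name] * value for name, value in features.items())

for name, features in candidates.items():
    print(name, round(linear_score(features), 2))
```

The interesting work is in the dictionary of weights and the choice of keys, not the arithmetic.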
We all have mental models, whether we're conscious of them or not. How did you land on this approach?
The approach is this. When you're trained in science, starting in sixth, seventh, eighth grade, you learn a bunch of very simple models, like force equals mass times acceleration, or PV = k, in physics. And in economics you learn things like S = D, supply equals demand. These models are very simple, and the whole idea was: I can explain patterns in the real world, or make sense of the variation we see in the real world, using a single simple equation.
Then, in 1993, I visited the Santa Fe Institute, which is a think tank on complexity. My advisors at the time were very good game theorists: Roger Myerson, who won the Nobel Prize along with Leonid Hurwicz, another of my advisors, and Stanley Reiter was in that group as well. These were people who studied rational choice and how people optimize in social situations. And the Santa Fe Institute was all about the fact that the world is so complex that it's going to be hard to optimize. I wouldn't say I had an intellectual crisis; it was more that I found the disconnect intellectually fascinating. And the disconnect is that
I'm trying to make sense of an extremely complex world using very simple models. What social science has typically done, I think, is say: okay, the world's really complex, here's my model, and I can explain 30% of the variation, or 10% of the variation, or I can explain why these stocks went up in value.
But that means you're missing the other 70% or 90%. And so what a bunch of us, not me alone, have happened upon is this notion of collective intelligence: the idea that one way you can make sense of complexity is by throwing ensembles of models at it. One model may explain 20%, another 15%. It's not that they add up to 100% and explain everything. In fact, there's overlap, and sometimes there are even contradictions in what they explain or predict. But by looking at the world through an ensemble of logically coherent lenses,
you can actually make sense of that complex world. What's fascinating to me is that there's a group of people, some philosophers, some economists, some statisticians, some biologists, playing in this space of collective intelligence. You might think: what are biologists doing in this space? But think of ants. Each individual ant has a mental model: a map of the terrain, of where the food sources are. And the ants can aggregate those maps collectively within the nest. Bees do the same thing within the hive through waggle dances, which communicate where the food is. A bee comes back and dances: I think there's food here. Another bee comes back and dances: I think there's food here. And they aggregate their crude maps of the world.
At the same time that people were thinking about collective intelligence from a purely theoretical perspective, there was a set of people in computer science creating things like random forest algorithms, these giant artificial intelligence systems that were also constructing collective intelligence by combining all sorts of very simple filters. And I think there's a growing consensus that our heads aren't big enough. No individual's head is big enough to make sense of the complexity of the world. You're going to have a set of models of how you think the world works, and I'm going to have a set of models of how I think the world works. But any one of us alone is just too small to make sense of the craziness, the complexity, the sheer dimensionality of the world that sits in front of us.
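That ensemble intuition can be made precise. Here is a toy sketch, with invented predictions, of the diversity prediction theorem Page has written about elsewhere: the squared error of the averaged ensemble equals the average individual squared error minus the diversity of the predictions, so a diverse ensemble is never worse than its average member.

```python
# Toy ensemble: several crude "models" predict the same quantity.
# Each is individually wrong, but their errors partly cancel.
truth = 100.0
model_predictions = [80.0, 95.0, 120.0, 110.0]  # invented predictions

avg_prediction = sum(model_predictions) / len(model_predictions)

avg_individual_sq_error = sum((p - truth) ** 2 for p in model_predictions) / len(model_predictions)
ensemble_sq_error = (avg_prediction - truth) ** 2

# Diversity of the predictions around their own mean.
diversity = sum((p - avg_prediction) ** 2 for p in model_predictions) / len(model_predictions)

# Diversity prediction theorem: ensemble error = average error - diversity.
print(ensemble_sq_error, avg_individual_sq_error - diversity)  # identical by the identity
```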
But collectively, we can make sense of it. Let's take something outside of finance for a second: the obesity epidemic. You could blame it on infrastructure. You could blame it on food. You could blame it on the bacteria in our gut. You could blame it on changes in work-life balance, the lack of physical work, all sorts of things. And to understand any one of the dimensions that contributes to obesity, you'd probably need five or ten years of study just to understand that one piece.
But if you tried to fix the obesity epidemic by changing just that one piece, by climbing that one little hill, you're not going to get very far, because there are probably systems-level feedbacks. There's going to be no silver bullet for something like that. What you can do, by having a collection of people who each know different parts, whose knowledge overlaps, and who have different models of how things work, is get a much deeper understanding, a more holistic approach. And we can talk about this later: I think it leads to a different way of thinking about policy, when you come at these problems from a multiple-model perspective.
So to what extent is it fair to say that cognitive diversity is a group of people who have different models in their heads of how the world works?
It is.
I think I occupy kind of a strange place here, because the book I wrote before this one was called The Diversity Bonus, and that book is about the value of having diverse people in a room. The reason you want diverse people in the room is that different people bring different basic assumptions about how the world works. They construct different mental models of how the world works, and they're going to see different parts of a problem. Look at fluctuations in the stock market, or at the valuation of any particular company.
There are so many dimensions to a company like Amazon or Disney that there's no way any one person can understand it. So what you want is cognitive diversity. And cognitive diversity means people who have literally different sets of models, or different information. One of the things that leads off the book, and that I use a lot when I teach this to undergraduates or general audiences, is something called the wisdom hierarchy. At the bottom, out there, is all this data. Whether you want to call it a fire hose of data or a hairball of data, choose your favorite metaphor, it's all just floating out there. On top of the data is information. Information is how we structure the data. So you may say, unemployment is up. What you're doing is taking tons and tons of data about people having jobs and putting it into a single number. You're categorizing: unemployment's up, inflation's up, and you're using those as your variables.
Someone else might have a very geographic view of things and say: Los Angeles is doing well, Texas is doing well, but the economy in the Midwest is not doing as well.
On top of information is knowledge. Knowledge is understanding correlative or causal relationships between pieces of information. If one piece of information is mass and another is acceleration, then knowledge is that force equals mass times acceleration. If one piece of information is unemployment and another is inflation, you might understand that when unemployment is very, very low, you often get wage inflation. That's a piece of knowledge.
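To make the hierarchy concrete, here is a toy sketch with invented numbers: raw data about individuals is compressed into information (an unemployment rate), and knowledge is a relationship between pieces of information (a rough unemployment-to-wage-inflation rule, used purely as an illustrative assumption).

```python
# Toy walk up the wisdom hierarchy (all numbers invented).

# DATA: raw observations about individuals.
people = [
    {"name": "A", "employed": True},
    {"name": "B", "employed": False},
    {"name": "C", "employed": True},
    {"name": "D", "employed": True},
]

# INFORMATION: structure the data into a single summary number.
unemployment_rate = sum(not p["employed"] for p in people) / len(people)

# KNOWLEDGE: a relationship between pieces of information, e.g. the
# rough rule "very low unemployment often comes with wage inflation."
def expect_wage_inflation(unemployment):
    return unemployment < 0.04  # illustrative threshold, not an estimate

print(unemployment_rate, expect_wage_inflation(unemployment_rate))
```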
Wisdom is understanding which knowledge to bring to bear on a particular problem. Sometimes that means just selecting among the knowledge; other times it means combining and coalescing the knowledge. Let me give two examples from finance. One involves my college roommate, Eric Ball, who was treasurer at Oracle, and it's one of my favorite stories in the book.
Someone comes into his office and says, "Iceland just collapsed." Two models came into his head. One is that you can think of the international financial system as a network of loans and deposits across banks and across countries. The other is a simple supply and demand model. Munger has this wonderful quote about arraying your experiences on a latticework of models.
In this situation, those were Eric's two models: the complicated network of loans and promises to pay, and simple supply and demand. And he looked at the person who walked into his office and said, "Iceland is smaller than Fresno. Go back to work." That was his judgment: it's a tiny country; it's not going to matter very much.
Whereas if the person had walked into his office and said, "BlackRock just failed," he would have said: oh my goodness, I'm not going to use the supply and demand model; I'm going to use the network-of-contracts-and-promises-to-pay model. One of the fabulous things about your site, Farnam Street, is that it's all about these mental models, all these ways people have of making sense of the world. One of the reasons people go to your site, one of the reasons people read business books, one of the reasons we gather knowledge, is in some sense to accumulate knowledge: ways of taking information and understanding the relationships within it. And what we hope to gain is wisdom, by having more knowledge to draw from. But the core philosophy of The Model Thinker is that even if you do the best you can, even if you're a lifelong learner, even if you're constantly amassing models, you're still not going to be up to the task of solving these problems alone.
You yourself are not going to solve the obesity epidemic. You yourself are not going to create world peace. You yourself are not going to solve climate issues. Your brain just isn't going to be big enough. Collections of people, by creating a larger ensemble model, actually have a hope of addressing these problems.
Okay, there's so much I want to dive into there. Let's start with the hierarchy from data to information to knowledge to wisdom. It sounds like we're applying mental models at the knowledge stage, and then wisdom is discerning which models are more relevant than others. Is that an accurate view? And if not, correct me.
I puzzle over this a lot. Every time I think I have an accurate view of it, I reframe how I think about it. I was giving a talk the other day, and someone said: I think the real place where mental models come in is in this very subtle move between data and information. Which is true. Think about how I might describe a city I'm visiting for the first time. Somebody says, tell me about Stockholm, and I immediately start putting it into categories. I might say, well, it's a lot like London. Or I might say, the people are friendly but reserved. You're taking all these experiences and putting them into boxes. So there's a sense in which just the act of going from your raw experiences to information leans on the models you already know. And that's what I've been puzzling over for the last few weeks, which has been fun to think about: if I have a set of models in my head, in that knowledge space, does that bias how I filter the data into information? It probably does.
Of course, because the models are helping you pick out which variables you think will be relevant, and how those variables will interact with one another.
Yeah.
Right. So here's a really great example of that. There's this phenomenon called the wisdom of crowds, the Surowiecki book, where groups of people can make accurate predictions. The reality is that sometimes groups will be successful and sometimes they won't, and one of the reasons we write down models is to figure out which types of diversity will be useful and which won't.
There's been work by a number of people, Kay-Yut Chen at HP Labs, Sanchez-Burks here at the University of Michigan, comparing the two: suppose I have lots of actual data out there, I run a linear regression to try to predict something, and I have that regression compete against people. What you find is that the regression does a lot better than any one person, because the linear regression can include a lot more data and doesn't suffer from biases, all sorts of stuff.
But oftentimes, when you have groups of people compete against the linear models, the groups can beat them. And where they beat them is where the person constructing the linear model, because a linear model is typically constructed by a person, doesn't have a way of including something in the model. One example involved a consumer product, a printer.
The linear model said, "This printer is going to sell, let's say, 400,000 units." When they used a crowd, the crowd was like, "No, it's going to sell 200,000 units." That was a huge difference. They went back and they interrogated people in the crowd saying, "Why do you think this printer is not going to sell? It handles as many pieces of paper. Print quality is this good. The toner cartridge is easy to change." All the attributes that were in the linear model.
And the first words out of people's mouths were: butt-ugly. That is a butt-ugly printer. Well, there was no butt-ugly variable in the regression, because that's a design feature, and it wasn't a very attractive printer. The difficulty with data in those situations, in the form of the linear model, is that it only looks backwards. It can only look at what's happened in the past.
Whereas people, when we're constructing models, are often forward-looking: how are people going to respond to this new design? Now, what works best, ironically, in all these situations is a combination of the linear model and the people.
And this gets to the step from knowledge to wisdom that I find really fun. You could say: oh, so what you should do is average the linear model and the people. That seems not to be true. What you should do instead is this: if the linear model and the people are close in their predictions, you probably should go with the linear model, because it's really well calibrated. It's probably going to be better.
But if they're far apart, if the linear model and the humans are giving very different predictions, then you want to go talk to them, talk to the people and talk to the linear model. You could ask: how do you talk to a linear model? Well, you look at it and ask: What variables are in there? What variables are the people using that the linear model is not? What do the coefficients look like? Has the environment changed? That's the key thing, because the linear model is assuming stationarity.
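Here is a minimal sketch of that reconciliation rule. The forecasts and the threshold are invented; the point is the structure: defer to the calibrated model when the two roughly agree, and treat a large gap as a signal to interrogate both.

```python
# Sketch of the "close -> trust the model, far -> investigate" rule.
# Threshold and forecasts are hypothetical.

def reconcile(model_forecast, crowd_forecast, rel_threshold=0.25):
    gap = abs(model_forecast - crowd_forecast) / model_forecast
    if gap <= rel_threshold:
        # Predictions agree: the well-calibrated statistical model likely wins.
        return model_forecast, "use the linear model"
    # Predictions diverge: interrogate both -- which variables does the crowd
    # use that the model lacks ("butt-ugly")? Has the environment changed,
    # breaking the model's stationarity assumption?
    return None, "investigate before deciding"

print(reconcile(400_000, 390_000))   # close: go with the model
print(reconcile(400_000, 200_000))   # far apart: talk to both
```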
What's fun here is that MIT just started a new college of computing, the first new school they've started in decades. They raised a billion dollars for it.
One of the things they want is people who are bilingual, who can communicate between these really sophisticated artificial intelligence models and the real world. The thing is, people are afraid of just throwing all this information into a giant AI model that spits something out. If you're using relatively simple models, it's easy to be bilingual. It's easy to look deeply at whatever model you've got and ask, "Why is the model saying this?"
I like that a lot. I want to explore something with you. When we're talking about models and where we apply them, whether at the data-to-information filtering stage, the knowledge stage, or the knowledge-to-wisdom stage, it seems we can probably agree that having more models is generally a good thing, but only to the point where they're relevant to the specific problem you're facing.
Having extra models that aren't useful is not good. But the more tools you have in your toolbox, the more likely you can handle a wide variety of jobs.
I think that's right. But there's also an interesting challenge when you think about building teams, or building your own career. In your interview with Atul Gawande, he made this fabulous point that his way of making a contribution to the world was being able to communicate across different types of people in different areas. His parents were doctors, and he'd been trained by doctors, so he'd absorbed what the medical profession was all about. But at the same time, he had this deep interest in science, and in political philosophy, literature, and public policy. And that enabled him to fill what Ron Burt calls a structural hole: there's a network of people studying medicine, and there's a network of people studying politics and public policy, and he can stand between those two networks and make sense of them.
One of the things I talk about in both The Diversity Bonus and The Model Thinker is that you can think of yourself as a toolbox, with some capacity to accumulate tools: mental models, ways of thinking. You could decide to go really deep. You could be one of the world's experts on random forest models, or Lyapunov functions, or one of the world's leading practitioners of signaling models in economics. Alternatively, you could go deep on a handful of models, three or four things you're pretty good at. Or you could do what I think makes a lot of people in the financial space really successful, like my friend Bill Miller: have an awareness of a whole bunch of models, twenty models at your disposal that you can think with, and when you realize one of them may be important, dig a little deeper. That variety of models gives you, I think, two things. One is a robustness, in a portfolio sense: you're less likely to make a big mistake.
It can also give you an incredible bonus, in the sense that two models, rather than giving you the average of their individual performance, often do much better than the average. You get this bonus from thinking with a variety of models. What was super fun about the book, and what's been really rewarding, is laying out this philosophy: to confront the complexity of the modern world, you need a variety of models, using this data, information, knowledge, wisdom idea.
Then what I do is take what I think are the 30 most important models you might know: Markov models, linear models, Colonel Blotto models, Lyapunov functions, systems dynamics models, simple signaling models from game theory, a whole variety of models.
And it was a great exercise: how do you write each of these in seven to twelve pages, in a way that everybody can understand and then use? That was the real challenge. At Basic Books, T.J. Kelleher was the editor of the book, and the editor who wordsmithed it with me would sometimes just say: no one is going to understand this.
And it's a fun thing to do. I can pick up my book and turn to Markov models, for example. Markov models are models where there are states of the world, and transitions between those states. You read the chapter, it's about ten or twelve pages, and I think most people can understand it, and all the math is in a box. Whereas if you go to the Wikipedia page on Markov models, you'll just go: wow, my only hope of understanding this is to get a PhD in statistics. So what I'm trying to do in the book, in some sense, is the same thing Atul is trying to do in his work life, which is to say:
Here's a way to get kind of knee-deep in these things, to understand where they work. No one is going to master all 30 models in this book because you could write a whole PhD on each one of them. You could write 20 PhDs on each one. But the thing is, the awareness is really useful because you might say, "This Colonel Blotto game or these Markov models or these signaling models or these power law distributions, this is really interesting to me." Then you can go a little bit deeper.
So it's really meant to be kind of a reference, but also an awareness document, where you can say: hey, wow, here's a super interesting model I'd never even heard of, and it's really fun to think about.
I want to dive into some of those in a bit. But before we get there, I want to talk a little more about acquiring mental models. How do you pick which models to learn if you're working in an organization, or if you're a student? How would you go about having a conversation with somebody about which mental models to prioritize, and why?
Yeah. So one of the first things you want to ask is: who are the relevant actors? Is this a single actor who's just making a decision?
Or is it a strategic situation, where someone taking an action really has to take into account what someone else is going to do? If I'm thinking about what action to take in a soccer game, or in investing, I've got to think a lot about what other people are doing.
Or you might ask: am I taking some action while embedded in a much larger structure, where things are moving and I'm taking cues from that larger social system? Suppose I'm deciding which book to buy, or which album to download. You can think of that as just me making a decision, but it's not, because you're really making it within a much larger social, cultural, and economic milieu, where you may not even be aware that you're drawing signals from what other people are doing. So you want to think about what you're modeling. Am I modeling a person making an isolated individual choice? Am I modeling a strategic situation? Or am I modeling something much more social and ecological? The second thing you want to ask is how rational
the person making the decision is, with the alternative being not irrational but rule-of-thumb based. Gerd Gigerenzer, a German social scientist, did work with Peter Todd on adaptive toolboxes: the idea that I've just got a set of mental models or tools, and I apply them. This is problem A, I apply tool A; this is problem B, I apply tool B. When I think about where I'm going to do my laundry, or which coffee shop I'm going to go to, I probably don't sit around and rationally think it through. I just follow some routine. Maybe I adapt that routine slowly, maybe I learn a little bit, but for the most part I might just follow rules. So you want to ask:
How are people behaving? Then you can ask yourself: is my logic correct? Colin Camerer, Richard Thaler, people studying behavioral economics, would say that if a decision is repeated a lot, that should move you a little more toward the rational-behavior model, because people should learn. If the stakes are huge, that should move you toward rational behavior. If the decision is being made by an organization rather than an individual, that could move you either way. If it's a big decision by an organization, you could imagine it's going to be done rationally, because you have a committee of people thinking about it in a very careful way. But if it's some standard operating procedure within a large organization, it could be way over at the other extreme: this is just how we do it, this is how we've always done it, this is how we're always going to do it.
So now you're thinking: how do I model the person? Following a rule, or optimizing? Or maybe suffering from some human bias, if it's a human. And then: is it just a decision, is it a game, or is it some sort of social process? Those are, in some sense, the two main questions: in what context is the action taking place, and who's making it? And then I think the really interesting challenge, if it's not a decision, if it's a game or some sort of social process,
is making yourself aware of the challenges of aggregation, in the sense that oftentimes things don't add up the way they're supposed to, or there's a fundamental paradox in the assumptions you're making. For example, Barabasi has a fabulous new book out called The Formula, in which he draws lessons for success from tons and tons of data.
Some of it is about things we talked about before, with Gawande: you want to use your network well, you want to seize opportunities, those sorts of things. But there's a circularity in there, in the sense that if everybody followed that formula, it's not clear that everybody would be successful. Oftentimes these systems contain feedbacks that make them logically inconsistent at the level of the whole. And again, it's a fabulous book, and I think he's right: if you read his book, you'll be able to be more successful. But if everybody read his book, which he would like, the advice couldn't work for everyone.
Right.
So when you think about constructing a model, again, it depends on the context.
You want to think through whether the whole thing is coherent. When you asked me the first question, what I think of as a mental model, I think of it, and I'm a real outlier here, as a way of saying: I'm going to use this mathematical model to make sense of reality. One of the great examples of the value of the formal mathematics is the Markov model. In a Markov model, there's a set of states you could be in: happy, bored, sad, whatever. And there are transitions between those states. Or a market could be volatile or not volatile. And if those transition probabilities are fixed, and if it's possible to get from any state to any other state, then the system goes to an equilibrium.
And it's a unique equilibrium. What that means is that history doesn't matter. One-time interventions don't matter. There's just this vortex drawing the system to one place. What that model forces you to do, then, if you want to argue the world is path dependent, or that a policy intervention is going to make a lasting difference, is to say either that I'm creating a new state that didn't exist before, or that I'm fundamentally changing the transition probabilities.
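This is easy to see in code. A minimal sketch, with made-up transition probabilities among three mood states: whatever distribution you start from, repeatedly applying the same fixed transition matrix pulls you toward the same unique equilibrium, which is why a one-time intervention washes out.

```python
# Minimal Markov chain sketch: fixed transition probabilities among three
# states (happy, bored, sad -- the probabilities are invented).
# Any starting distribution converges to the same unique equilibrium.

P = [  # P[i][j] = probability of moving from state i to state j
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
]

def step(dist):
    """One period: redistribute probability mass along the transitions."""
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

for start in ([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]):  # two very different "histories"
    dist = start
    for _ in range(100):
        dist = step(dist)
    print([round(x, 4) for x in dist])  # same equilibrium either way
```

To make history matter, you have to change P itself, or add a state, which is exactly the argument the model forces you to make.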
And so you get this idea that when somebody is constructing a model, oftentimes they'll say: well, I'm a systems thinker. Then you have them write down their model, and you say: wait a minute, that's a Markov model, and it has a unique equilibrium. Do you think your system has a unique equilibrium? And they say: no, no, it's very contingent, path dependent. And then you say: okay, then your model has to be wrong. You're missing something. There has to be some way of changing those transition probabilities.
So I view it as a very deliberative process within yourself, constructing a model. First you ask: what is the general class of models? Is it a system, a decision, a game? Then once you write it down, you can go to the mathematics, and the mathematics will often tell you, given your assumptions, what must be true about the world. And if that's not what you see, you have to go back and say: well, let me rethink my model.
Do models become a way of surfacing assumptions?
Oh, absolutely. Models force you to get the logic right. They force you to ask: what really matters here in terms of driving people's or firms' behaviors? How do those behaviors interact, how do they aggregate, and how should people respond to that? There's a famous quote from Murray Gell-Mann: imagine how hard physics would be if electrons could think.
I'd written that in a paper, attributed to Murray, and someone said, "I don't think Murray's ever said that." So I went to Murray and said, "Murray, have you ever said this?" He read it, and then he said, "Imagine how difficult physics would be if electrons could think." And he goes, "There, I've said it. You're safe." But one thing about modeling, especially in the space a lot of your listeners are in, because they're people who in some sense define the world, is: imagine how difficult physics would be if electrons could think,
And if electrons could decide on their own laws of physics. So if you're running a large organization, or if you're Secretary of the Treasury, or if you're in any sort of policy position, you get to decide the laws of physics. You get to decide what's legal, what's not legal, what the strategy space is.
When someone constructs a model and decides what their assumptions are, one thing to keep in mind is that one reason people construct models is to build things: buildings, policies, strategies. When you do that, you're defining, in some sense, the state space. You're defining reality. If you tell your traders, we're looking at these ratios, you're defining the game for them. And I think the design aspect
of models is often overlooked and underappreciated. Within the fields, in economics, political science, business, whatever, because there's so much data, there's been a huge shift toward empirical research. If you count the number of papers in the leading journals that are empirical versus theoretical, there's been a massive shift toward empirical work,
which in some sense I applaud. The work is much better: there's much more data, we can get at causality. I'm a huge fan of it. But there's a cost, because a lot of that empirical work is really about nailing down exactly what the size of an effect is. What's the slope of that line? What's the size of the coefficient, and how significant is it? So we can suss out whether
improving teacher quality matters more than reducing class size, and by exactly how much. That's great; I'm all for it. However, that's taking the world as it is. One of the really cool things about models, and I was trained by people who did mechanism design, is asking:
can we, based on our understanding of how people act, redefine the world, construct mechanisms and institutions that work better? If you look at American government at the moment, it's kind of a mess, everything from gerrymandering to the fact that we have an electoral college that made a lot of sense when the states were all roughly equal in size.
Now some states are tiny and still have the same number of senators as states with fifty times as many people. But even how we vote on things: what belongs under the purview of Congress? Why do we have, in some sense, a separate financial system? The Federal Open Market Committee and the Federal Reserve System are quasi-governmental. The FDIC is quasi-governmental, but NASA and the NIH are not quite as quasi-governmental. There's a deep question about which institutions we use where
that is underappreciated. So Jerry Davis, Jenna Bednar, JJ Prescott, and I are running a thing in February at Michigan called Markets, Hierarchies, Democracies, Algorithms, and Ecologies. We're saying: look at all this stuff we have to do. We used to think: okay, we can use markets, hierarchies, democracies, or we can just let it go and see what happens. Like with the roads, we just let it go: you decide to go somewhere, I decide to go somewhere, and then it's a total mess for the most part.
But when we made these decisions about where we have markets, hierarchies, and democracies, they were made in a world where there was no data and no information technology, where we were exchanging beads as opposed to sending bits. Now there's this fifth thing: algorithms. And a lot of things can be done by algorithms instead of markets, hierarchies, and democracies.
So there's a question, given the cost of change for all these institutions: should we be reallocating problems across these different institutional forms? That's a question you can't really touch by running regressions, other than to identify the places where things aren't working. But you can use models to help you think it through. What if we made this a market? What if we made this a democracy?
Right. What if we handed this off to an algorithm?
Yeah. It sounds like we're using multiple models to construct a more accurate view of reality. We might never be able to understand reality completely, but the better we understand it, the better we know what to do. And yet it strikes me as odd that one of the ways we learn to apply models, unconsciously, is through school.
And it's usually a single model, right? You read a chapter in your grade 10 physics book on gravity, and then you get gravity problems. You know which equation to apply to which problem. It's almost an algorithm: I know what the variables are, the school is going to give me the variables, and I'm just going to apply this. We're taught with this one-model view of the world.
Why are we taught that way? And why is that wrong?
I think it was right when we had a much simpler world. Take it in the context of a business decision. You might think: okay, here's how you make a business decision. You figure out what the cost is going to be, then you think about the net inflow of profits. Do the revenues outweigh the costs? Is it going to be positive cash flow?
Now, when you make a business decision, there's a recognition that there's environmental impact. There's an understanding that it's going to affect your ability to attract talent, because it's going to be an interesting problem. There's a question of how it positions you strategically for the long run, of what it does for your data capacity, of what it does for your brand. These decisions are just so much more complex than they were, and there's so much more awareness of that complexity,
that there's no single model that's going to work. When you're in seventh grade, we teach you very simple things, because we're trying to teach you that there's some structure to the world. We want to say: look, here's the power of these physics models. Not only do they explain things you see every day, like why objects fall to the floor, they also explain things you wouldn't have predicted beforehand, like the fact that two objects of different weights fall at exactly the same rate.
Or they can predict things like the existence of the planet Neptune, which nobody knew was out there. So I think we teach people simple models because of the old belief, Plato's famous line about carving nature at its joints, that we could carve nature at its joints, and that for each of those little pieces you could apply one model.
So people will sometimes say: oh, many-model thinking, it's like the blind men and the parts of the elephant. And I say: no, that's almost exactly wrong. There is a sense in which different models look at different parts, but you need the overlap.
Because you can't carve nature at its joints. That's what we've learned over the last 50 or 100 years: the world is a complex place. And so I think the challenge, if you want to become a more nimble thinker,
is to be able to move across these models. But at the same time, if you can't, if that's just not your style, that doesn't mean there's no place for you in the modern economy. On the contrary, it means maybe you should be one of those people who goes deep.
Specialized.
Yeah. You need this weird balance of specialists, super generalists, quasi-specialists, generalists. There are even people who describe their human capital as being shaped like a T, in the sense that there's a whole bunch of things they know a decent amount about, and one thing they know deeply.
Other people describe themselves as shaped like the symbol pi, where there are two things they know pretty deeply, not as deep as the T person, a range of things connecting those two areas of knowledge, and a little bit off each side. I think it's worth your listeners having a discussion with themselves: what are my capacities? Am I someone who's able to learn things really, really deeply? Am I able to learn a lot of stuff? And then think about
a strategy for what sort of human capital you develop. Because you can't make a difference in the world, you can't go out there and do good, you can't take this knowledge and wisdom and make the world a better place, unless you've acquired a set of useful tools, and they've got to be not only individually useful but collectively useful. You could learn 15 different models that are disconnected, that apply in different cases, and never have any sort of whole,
and that might make it hard for you to make a contribution. Or you could say, "I'm going to be someone who learns 30 different models," but if you're not someone who's nimble and able to move across them, that may be more frustrating for you.
As we're talking, one thing that strikes me is that if you're going to prioritize which models to learn, obviously the common ones in your domain or discipline are good to have an understanding of. And then the general-knowledge models that apply across disciplines, because other people are less likely to bring those to the table. So you can become, in a way, your own cognitive diversity machine, if you will. How do you go about iterating through these models once you have them? How do you put them into use?
Are you recommending a checklist sort of approach? How do you mentally store them, walk through them, pick out which ones are relevant and which are not?
So this gets back to a really prescient question you asked earlier, which was:
how do you know how to model something, and how do you think about which assumptions to make? When you think about which models to use and how to play them off one another, you want to ask, again: what is the nature of the thing I'm looking at? And then, not so much with a checklist, you can page through the book, or through your own collection of models, and think about which ones might be relevant. Let me give an example my students love to sit around and play with. There are two models that both deal with competition between high-dimensional things.
One of them is a spatial model. In the spatial model, there's an ideal point. Suppose you have your ideal burrito: it weighs about a pound and a half, it has a certain proportion of meat and rice, and it's hot, but not so hot that you need a giant cup of water next to you. You can think of that as a point in a four-dimensional space: there's a size, a heat, an amount of beef, an amount of rice.
That's your perfect burrito. Then you can imagine all the burritos for sale in Toronto or Ann Arbor or New York, and you can put each of those in the same space. You're going to choose the burrito that's closest to your ideal point. So you might say: oh, this is the best burrito in Chicago. Well, if my ideal point is different from your ideal point, I may think a different burrito is best.
That same model is actually the workhorse model in political science for thinking about which candidate to vote for.
And if we aggregate those votes, nobody's happy.
Yeah, nobody might be happy. But there's another model of the same kind of situation, called the Colonel Blotto game. In the Colonel Blotto game, there's a whole set of fronts, which you can think of as dimensions. But instead of the comparison being spatial,
it's hedonic, in the sense that more is better. If I think about buying a car, more miles per gallon is better, more legroom is better, higher crash-test scores are better, less environmental damage is better. So when I compare two things, it's not about which one is closer to my ideal point; I go across all the dimensions and ask which one wins on more of them.
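Here is a minimal sketch of the two models side by side; the burrito attributes, ideal point, and rival offering are all invented. The spatial model picks whatever is closest to your ideal point; the hedonic, Blotto-style model counts the fronts each option wins, pretending for simplicity that more is better on every dimension.

```python
import math

# Two ways to compare multidimensional offerings (attribute values invented).
ideal = {"size": 1.5, "heat": 5, "meat": 0.4, "rice": 0.3}   # spatial ideal point
burrito_a = {"size": 1.4, "heat": 6, "meat": 0.5, "rice": 0.3}
burrito_b = {"size": 2.0, "heat": 9, "meat": 0.3, "rice": 0.5}

def distance(x, y):
    """Spatial model: smaller distance to the ideal point is better."""
    return math.sqrt(sum((x[k] - y[k]) ** 2 for k in x))

def fronts_won(x, y):
    """Hedonic / Colonel Blotto flavor: count dimensions where x beats y
    (pretending 'more is better' on every dimension, for simplicity)."""
    return sum(1 for k in x if x[k] > y[k])

print("spatial:", distance(ideal, burrito_a), distance(ideal, burrito_b))
print("hedonic, fronts A wins:", fronts_won(burrito_a, burrito_b))
```

The two rules can disagree about the same pair of options, which is the point of carrying both models.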
What's cool about both these models is that when there's a big set of people deciding, a whole bunch of people with different ideal points in the first one, there's generally no winner, no single best thing. So think about it: I go to the University of Michigan, or Northwestern, or Western Ontario, and get a degree, and I apply for a job, competing against seven other people.
Or maybe I'm up for a Rhodes scholarship, and I think: how did I not win this? I'm so great. The thing is, it may be a spatial model, and I just wasn't what people were looking for. Or it may be a hedonic model, a Blotto model, and somebody just happened to beat me on some collection of fronts.
But one of the nice things both models tell us is that there's kind of no best answer, because you win relative to how someone else is positioned. It's strategic; it's more like a game. And there's no best thing you can do unless you happen to know where the other person is.
The nice thing that comes out of that is a sort of calming sensation. My undergrads always feel that if you don't get a job, or a scholarship, or a grad school spot, it's because somebody was better than you. No, it may just be that they were positioned better than you. And that's fine; it's just going to happen. But typically, when you think about maximizing your chance of getting one of those things, you need to ask: is this a spatial situation, where I want to make sure my characteristics are near what they're looking for? Or is it hedonic, where I want to beat my competitors on as many fronts as possible, have the most undergraduate research done, the strongest letters? And a lot of situations are a combination of the two.
What's really useful is having both models in your head. In the same vein, if I'm an advertising firm pitching to be a supplier to a large auto company, that's multidimensional competition. You'd like to have both models in your head: let's think about this as a spatial model, now let's think about it as a purely hedonic competitive model. How would we position ourselves? Where are our competitors? And it gives you, I think,
a way to structure your thinking, which is calming. It also lets you know that if you lose, it's not necessarily because you're worse, and if you win, it's not necessarily because you're better. So it's calming, and it's also humbling. One of the things I deal with a lot in trying to present the value of diversity is that people who are successful have, by definition, always won. They're in power, and they think: I'm here because I'm good. And they typically are good.
They tend to think they're there because they have a lot of ability. And they do have a lot of ability, which means they've got flexibility in terms of what tools they've acquired. But the point is getting them to recognize that for the group to be better, you want people with other tools.
So it's tricky, because successful people think they've won because they're good, when in fact maybe they've won because they happened to have the right combination of talents at the right moment.
I kind of think of that in an evolutionary sense, where a gene mutation today might be selected as valuable, but a million years ago the same mutation might have been
negatively selected, filtered out if you will, because the environment was different, the situation was different. And we apply stories to these random gyrations. That's not to say success is completely random, but there's an element of luck to everything.
And how that luck is weighted varies depending on the circumstances. So you get this really complicated view of the world, and I find that really interesting when we're thinking about how to learn models and how to make better decisions. How do you teach your kids about complexity? Not necessarily university students, though them as well, but how do you teach kids that the world isn't really this simple place, and that here are some general thinking concepts they need to learn? How do you go about instilling that in children?
It's such a fascinating question. There was an article in the New York Times last week about how the upper quintile is spending so much more money on their children than those below, with the idea of making them economically successful. Let's go back to a question you asked earlier, about how in school you learn force equals mass times acceleration. In the economy of 100 years ago, success depended a lot more on you yourself being really, really good: you're a really good lawyer, you're a really good furniture maker. Go back 200 years, and you were successful if you ran your farm well. It was all about your individual ability and hard work. I was reading this fabulous book, The Rise of the Meritocracy, an old book from about 50 years ago, which frames
success as intelligence plus effort. It's actually where the word meritocracy came from. If you imagine the world as a collection of individual silos, and the amount of grain in your silo depends on how intelligent you are and how hard you work, then it is all about your ability to work hard, get A's
in class, and develop skills. That's a very instrumental view of the world. But in a complex world, and again I'll go back to your amazing interview with Atul Gawande, your ability to succeed is going to depend on filling a niche that's valuable. As in Barabasi's book, that could be connecting things, it could be pulling resources and ideas from different places, but it's going to be filling a niche, and that niche could take all sorts of different forms. So when I talk to undergraduates about this, and when I talk to my two sons about it, what I tell them to look for is
something that combines three things. First, you have to really love it; it has to be something you're passionate about, and you've got to love the practice of it. A great basketball player isn't just someone of great ability; it's someone who loves practicing basketball. A great musician has some ability, but they love practicing music. You've really got to enjoy the practice of the thing you're doing. The second thing is you've got to have some innate ability.
My younger son is actually a reasonably good dancer. When he was younger, and there aren't many adult male dancers, the guy who runs the dance studio chased me down after I dropped my son off one day and said, "Is that your son?" I said, "Yeah." He said, "Well, we need adult male dancers." I said, "That talent comes from the other side of the family." He said, "No, it can't." Then he watched me dance for about 30 seconds and said, "You're right. It comes from the other side of the family."
The point is, even if I love dancing, my upper bound at dancing is going to be pretty low. So you've got to have some ability. And the third thing is that you have to be able to connect those things to something useful and meaningful, some way you think it's going to make the world a better place. The thing you're going after has to have some meaning or purpose or value, and you've got to be able to convince yourself of that, and convince others of it.
Because otherwise... one of the things I find fascinating about the academy is that people will be in small departments, and they'll study something, and it gets really interesting to them, and they become the world expert in it. And that's great, because we're advancing knowledge. But outside of their small circle, no one may find it interesting. And I think it's incumbent upon them to think about whether they're using their talents in a way that makes the work interesting to other people, or at least intriguing to people, because I don't think you're adding that much value if only 30 people read your work.
That's a great conversation to have with maybe a 14-to-20-year-old. What if we go younger, right? Like, how do we teach an eight-year-old about not only compounding, but power law distributions? We might not use those names, and we might not use the mathematics behind them, but how do we start instilling models that are lasting?
The way that I think about this is, if the world is changing, there's a core set of models that are probably unchanging, right? Mathematical ones that cross human history and biology, and perhaps all existence. Reciprocation is a great example: it works in human and social systems, and it's also a physics concept. How do we teach our kids (or should we teach our kids, which is maybe a different question) these models? Should those models be learned in school as models, so that you start developing this latticework, this mental book in your head where you're flipping through pages and going, oh, this model might apply here; nope, it doesn't, go to the next model? How do we instill that in our children, even if they don't understand the mathematics behind the models, so that we start understanding that the world is more complex than single models?
And part of your goal is, just as you said, to fill this niche. But one of the ways you're going to fill that niche is that your aggregation of these models, and how you apply them, is going to be more or less valuable in a group setting, in a particular company. And your understanding of how other people are applying models is also going to be a key element of strategy in the future.
So if we can anticipate that our competitors are following models they learned in business school, well, now we know how they're likely to respond to what we're doing, and we're not likely to be surprised. We can use that information to make our business or our company more competitive. Yeah, it's a great question. I think two things come to mind. One is, I think we could do a little more meta teaching. One of the things that people really liked about an online course I did, called Model Thinking, which is a MOOC, is a trope in there that I borrowed from my colleague Mark Newman, when he talks about distributions, which is logic, structure, function. If you see some sort of structure, some pattern out there in the world, there has to be some logic as to how that came to be. And then you also want to ask yourself,
is there some functionality to that structure? Does it matter, right? So when you talked about like normal distributions versus power law distributions, so we'll teach kids the bell curve.
But what we won't do is say, here's the bell curve, and this is a structure; think about all the other structures you could draw. But now we want to ask, which structures do we actually see in nature? And we don't see that many. We see bell curves. We see stretched-out bell curves, which are lognormal. We see power law things. But we rarely see things that have, like, five peaks to them. So why is that? We need a logic that explains the structures we see. What logic underpins normal distributions? You're adding things up. What logic underpins lognormal distributions? You're multiplying things. And what logic gives you power laws? Well, in the book I say there's a bunch of them, right? There's preferential attachment; there's self-organized criticality. There are logics that will give us those distributions. Then you want to ask, does it matter, do we care? And that's an easy thing to convince kids of, because you could say, well, heights are distributed
normally, so that's nice and predictable and seems fair. But if heights were distributed by a power law, there'd be 10,000 people as tall as giraffes, there'd be someone as tall as the Burj Khalifa, and there'd be, you know, something like 170 million people in the United States seven inches tall.
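To make those three logics concrete, here's a minimal sketch in Python (assuming NumPy is available; the sample sizes and parameters are illustrative choices, not from the conversation). Adding up random factors gives a bell curve, multiplying them gives a lognormal, and a power law mechanism gives mostly tiny draws plus rare, enormous ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Logic 1: ADD lots of small random factors -> normal (bell curve).
sums = rng.uniform(0, 1, size=(100_000, 50)).sum(axis=1)

# Logic 2: MULTIPLY lots of small random factors -> lognormal (stretched bell).
products = rng.uniform(0.9, 1.1, size=(100_000, 50)).prod(axis=1)

# Logic 3: a power law (Pareto, tail exponent 1.5) -> mostly tiny, occasionally huge.
power = (1 - rng.uniform(0, 1, size=100_000)) ** (-1 / 1.5)

for name, x in [("add", sums), ("multiply", products), ("power law", power)]:
    print(f"{name:>9}: median={np.median(x):8.2f}  max={x.max():12.2f}  "
          f"max/median={x.max() / np.median(x):10.1f}")
```

Running it, the max-to-median ratio stays modest for the first two and explodes for the third, which is exactly the giraffe-height world.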
And kids are like, whoa, that would be pretty bad. It would be really hard to design buildings as well, right? Because you'd need them to work for the tiny people and the giraffe-sized people. So I think this logic, structure, function thing is really important. I think the other thing that we need to do is give them experiences of using the same
broad idea across a variety of disciplines. So one of the things I did in class that I'm hoping to teach again, because the students just absolutely loved it, but it just didn't work out this year, is a course called Collective Intelligence, where we did a whole bunch of different in-class exercises
to explore where collective intelligence comes from. So here's one example that was, I think, the most fun. Before we go on, what is collective intelligence? So collective intelligence is where the whole is smarter than any one individual unit. You can think of that in a predictive context. This could be the wisdom-of-crowds sort of thing, where people guess the weight of a steer and the crowd's average guess is going to be better than the average individual's guess. That's just a mathematical fact.
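That "mathematical fact" can be written out. It's what Page calls the diversity prediction theorem in his earlier book The Difference (stated here from that book, not from this conversation): for individual guesses of a true value, the crowd's squared error equals the average individual squared error minus the diversity of the guesses.

```latex
% Diversity prediction theorem:
% crowd error = average individual error - diversity of guesses
(c - \theta)^2 \;=\; \frac{1}{n}\sum_{i=1}^{n}(s_i - \theta)^2
\;-\; \frac{1}{n}\sum_{i=1}^{n}(s_i - c)^2,
\qquad c = \frac{1}{n}\sum_{i=1}^{n} s_i
```

Since the diversity term is never negative, the crowd's error can never exceed the average individual's error.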
But here what we're doing is looking at collective intelligence in terms of solving a problem. So here's the setup; it's really fun. I had a graduate student make up a bunch of problems defined over 100-by-100 grids. So imagine a checkerboard that's 100 by 100, and each one of those cells has a value. One of the problems was really simple, what we call a Mount Fuji problem: there's just one big peak, not necessarily right in the center, just kind of in the upper right. There's a huge peak that has the highest value, and everything falls off from that.
Another problem had like five or six little peaks over the landscape, but with one being higher and another one was really, really rugged. So he just created a bunch of problems and I didn't know what they were. Right. That was part of the key. No one knew what the values were.
So I created three teams. One team was the physicists. The physicists got to sit around and first say, okay, which ten points do we check? Then they would get the values from those points. It's kind of like that game Battleship, where they'd say, you know, D7, and we'd say, this is the value at D7. They got five rounds where they got to check ten points each round. And the goal was to find the highest point.
Another group was the Hayekians, the decentralized market, where each person went to pick a point, and then they'd just come back and say, here's the point I picked and here's the value, but there's no coordination. The idea was that you could extract value from comparing everyone's picks, because you could see where other people picked and might want to go near where they were. But you also wanted to build information for yourself and the group by trying other points, right? So there was all sorts of cooperation and competition going on in that group.
The third group was the bees. The bees would point at a square. They couldn't give a number; they couldn't say, you know, A26. They just could point somewhere on this big square, we would approximate what we thought that was, and we would show them the value. And then they had to go back and waggle dance, right? Now, it turns out undergrads won't waggle dance. They're just too insecure. So we had them just dance with their hands. What they did do is point in the direction the good square was in, and the longer they waggled, the better the value. Okay. And then we compared the waggle-dancing bees to the Hayekians to the physicists. And on the easy problem and on the problem with five peaks, the bees did just as well as the physicists.
In fact, on the five peaks, they did, ironically, a tiny bit better than the physicists. So we're talking about this afterwards, and someone says, well, that's because the bees can take a derivative. And everybody's like, what? He goes, well, to solve this you basically have to take derivatives, and the bees could: they could find the highest point so far and then effectively take derivatives, because they could see who was waggling. And it was only on the really hard problem that the physicists did the best. So what you learn from that is that bees, markets, and problem-solving teams are all dealing with high-dimensional problems. If the problem is not super hard, and finding food isn't super hard, then bees and markets are just as good as physicists, because effectively taking derivatives is all you need. But when it gets super hard, the market's not going to work, because you need all sorts of coordination. So that goes back to this thing we talked about before: when to use a market, when to use a democracy, when to use a hierarchy, when to just kind of let it rip.
That probably depends on the difficulty of the problem. But what's cool about that, and this is where you can do things with young kids, is they see, well, here's this idea, collective intelligence, that spans disciplines.
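Here's a toy version of that classroom exercise, a sketch only (assuming NumPy; the landscape shapes and search rules are stand-ins for the real exercise, which isn't specified in detail here). It compares a bee-style search, which concentrates effort near the best point found so far, against uncoordinated random probing, on a smooth Mount Fuji landscape and a rugged one:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100  # the 100-by-100 checkerboard from the story

def make_landscape(rugged):
    # A single smooth peak ("Mount Fuji"), optionally with rugged noise on top.
    x, y = np.meshgrid(np.arange(N), np.arange(N))
    fuji = -((x - 70) ** 2 + (y - 70) ** 2) / 500.0
    return fuji + (rng.normal(0, 3, (N, N)) if rugged else 0)

def bee_search(land, rounds=5, scouts=10):
    # Bee-style search: each round, everyone resamples near the best point
    # found so far (the "waggle dance"), with a little local exploration.
    pts = rng.integers(0, N, size=(scouts, 2))
    best = max(map(tuple, pts), key=lambda p: land[p])
    for _ in range(rounds - 1):
        pts = np.clip(np.array(best) + rng.integers(-10, 11, size=(scouts, 2)), 0, N - 1)
        cand = max(map(tuple, pts), key=lambda p: land[p])
        best = cand if land[cand] > land[best] else best
    return land[best]

def random_probes(land, total=50):
    # Uncoordinated baseline: 50 independent probes, keep the best.
    pts = rng.integers(0, N, size=(total, 2))
    return max(land[tuple(p)] for p in pts)

for rugged in (False, True):
    land = make_landscape(rugged)
    print(f"rugged={rugged}: bees={bee_search(land):7.2f}  "
          f"random={random_probes(land):7.2f}  true max={land.max():7.2f}")
```

On the smooth landscape, following the local gradient works; ruggedness is where it starts to mislead, which is the "hard problem" case where coordinated search wins in the story.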
If you want to teach this... so, in the same class, I'll give one more example, because it was just so fun. There's this amazing game called Rush Hour; I don't know if you've ever seen it. There are little cars and trucks, and you slide them around, and you've got to get this red car out. A card gives you a configuration, and these configurations are rated easy, medium, hard, and very hard, or something like that.
So here's the experiment I do in class. And again, the numbers are too small to call this any sort of scientific result, but it's always worked and it's always been really fun. People play Rush Hour, like an easy one, a medium, a hard, and we time how long it takes them to do each one on average, right? And the harder ones take a lot longer. Then I have them write down models for how to play Rush Hour. So one model might be: solve it backwards, which is, think about how that car is going to get out. Another model is: get the big trucks out of the way. Another model is: move a car forward as far as you can, then move it backwards. And then what I do is
I have another set of people read the mental models from the first set of people, and then play, not the same games, but different games of the same difficulty, and we compute how long it takes them. And what happens is they're just a lot better. What you see is that this is something where it's not tacit knowledge; it's actually learnable knowledge, playing Rush Hour. What I've been struggling with, and you probably have a good idea about this, is I'm trying to come up with a game that's purely tacit, where you can't communicate it. My friend John Miller always jokes that this weekend he's going to read a couple of books on tennis and then go become a professional tennis player. You know, you can't. Yeah. So I'm trying to find a really cool example to juxtapose with Rush Hour; maybe one of your listeners will email it in. You could do it in a classroom setting: some new game where people can learn it, but then there's nothing they can write down that would transfer it. It'd be nice if it didn't involve physical skills either, just mental skills.
That's what makes it hard. Right. It was Adam Robinson, actually, who told me that Rush Hour was one of the best games he knew of to teach thinking skills to young children. Oh, really? Yeah. And we spent the summer playing it this year on vacation. It's a great game to take to restaurants and stuff. And my kids were, at the time, eight and nine, and they would sit through a whole two hours and just play this. It was a totally awesome game. It was fascinating. And as a parent, I just promised them 30 minutes of iPad. If they got through all 40 problems in like three, they were like, oh my God, this is amazing. It's amazing how hard they'll work for that 30 minutes of iPad.
- That's right, get the incentives right. No, but it is funny, though. I think it's because it's a physical game; when I'm doing this in class, I'll say, you only have 10 minutes, and sometimes I'm literally extracting the game from my students' hands, you know? And I'm like, look, you can take it home and bring it back the next day, because it's so much fun. Let's talk about a few of the models that you have in your book before we finish up here. How about I mention three of them and you walk me through how you present them and how you use them? Let's start with power law distributions. Okay.
Power law distributions. Let's start with what they're not. A normal distribution is something like human height, where the average person is 5'10", some people are 5'8", some people are 6'0", and it falls off really fast. In a power law distribution, most events are very, very, very small, and there are occasional huge events. If you look at earthquakes, there are thousands and thousands of tiny, tiny earthquakes, and there's an occasional huge earthquake. If you look at the distribution of city sizes in most countries, there are tons and tons of small towns, and there's an occasional New York, London, Tokyo. If you look at book sales or music sales, most books sell three or four hundred copies; there's the occasional book that sells millions of copies.
And there's a question of what causes power laws. Unlike normal distributions, which come from just adding things up or averaging things, power laws have a bunch of causes. So what I do in the book is go back to this logic, structure, function idea. It's a structure we see a lot, right? This long-tail distribution.
The question is, what causes it? And so I talk about three models in the book. One is something called a preferential attachment model. Imagine there's a set of cities, or a set of books, and the probability I move to a city, or the probability I buy a book, is proportional to the number of other people living in that city or buying that book.
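A minimal sketch of that mechanism in Python (assuming NumPy; the counts and sizes are arbitrary choices for illustration, not figures from the book):

```python
import numpy as np

rng = np.random.default_rng(2)

# Preferential attachment: each newcomer picks a city with probability
# proportional to its current population, so big cities get bigger.
cities = np.ones(1_000)          # 1,000 cities, one founder each
for _ in range(50_000):          # 50,000 movers arrive one at a time
    i = rng.choice(cities.size, p=cities / cities.sum())
    cities[i] += 1

cities.sort()
print("five biggest cities:", cities[-5:].astype(int))  # a few New Yorks
print("median city size:", int(np.median(cities)))      # most stay tiny
```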
We can see right away there's positive feedback. The more people move to New York, the more people move to New York. The more people buy The Tipping Point, ironically, the more people buy The Tipping Point. So The Tipping Point sells a million copies; there's 10 million people in New York. If nobody buys somebody's boring book, then nobody buys the boring book. Another way that power laws form is through random walks. Imagine a firm that starts with somebody founding it, a one-person startup. Now suppose it's equally likely to fail or to hire a second person, and from two people it's equally likely to go back down to one person or up to a third person. The firm's going to exist as long as there's a positive number of workers. If it's a completely random walk, like a coin flip, you can imagine that most firms are going to die really quickly, right? You add an employee, then you fold. You add two employees, then you go down one, up one, down one, down one, and you die.
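Here's that coin-flip firm as a short simulation, a sketch under the stated assumptions (equal odds of hiring and shrinking each step, which is my simplification of the story, plus a cap so every run terminates):

```python
import numpy as np

rng = np.random.default_rng(3)

def firm_lifetime(max_steps=5_000):
    # Coin-flip hiring: each step the firm either adds or loses one employee;
    # it dies the moment it hits zero employees (capped so runs terminate).
    workers, steps = 1, 0
    while workers > 0 and steps < max_steps:
        workers += 1 if rng.random() < 0.5 else -1
        steps += 1
    return steps

lifetimes = np.array([firm_lifetime() for _ in range(5_000)])
print("dead within 10 steps:", f"{(lifetimes <= 10).mean():.0%}")  # most die young
print("five longest-lived:", np.sort(lifetimes)[-5:])              # a few last ages
```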
So that would say that the lifespans of firms can be really short for most of them, but if you happen to get really big, you're going to last a long time. That should be a power law. And it is. It's also true that the lifespans of species, of phyla and genera in ecology, which you can think of as close to perfectly random, also satisfy a power law. And then a third way to get these power laws is from something called self-organized criticality. If I drop grains of sand over a desk, eventually I get a big sand pile. And then if I look at how many grains of sand fall when I drop one more, once the pile is formed, most of the time it'll be very few, but occasionally I'll get these giant avalanches. What's happening is the system is organizing itself to this critical state.
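The classic model here is the Bak-Tang-Wiesenfeld sandpile; a compact sketch (assuming NumPy; the grid size and number of drops are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 20
pile = np.zeros((N, N), dtype=int)

def drop_grain():
    # Drop one grain on a random cell; any cell holding 4+ grains topples,
    # sending one grain to each neighbor (grains at the edge fall off the desk).
    # Returns the avalanche size: how many topplings the one grain triggered.
    pile[tuple(rng.integers(0, N, 2))] += 1
    topples = 0
    while (pile >= 4).any():
        for x, y in zip(*np.where(pile >= 4)):
            pile[x, y] -= 4
            topples += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < N and 0 <= y + dy < N:
                    pile[x + dx, y + dy] += 1
    return topples

sizes = np.array([drop_grain() for _ in range(10_000)])
print("grains causing no topple at all:", f"{(sizes == 0).mean():.0%}")
print("biggest avalanche:", sizes.max())
```

Most drops do nothing; every so often one grain sets off an avalanche that cascades across the whole pile.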
Think of traffic in Los Angeles, or traffic in Toronto or New York. What happens is, it organizes itself to a state where cars are spaced pretty close, and all of a sudden there's one accident and, boom, there's a three-hour delay. So most of the time things are kind of fine, but one accident can lead to a huge event. So now we have a logic that explains the structure. Why does it matter? Well, it clearly matters in the case of things like book sales, music sales, those sorts of things. It means there are going to be some people who are wildly successful and a whole lot of people who are not that successful.
And we may decide that's not fair, right? We may decide that if I'm Malcolm Gladwell, I shouldn't necessarily think, wow, I'm amazing because I sold 4 million books. No, you happened to be the New Yorker writer whose books benefited from those positive feedbacks. So it actually could change how we think about how we tax people. If you thought, no, this person sold these books because they're just so much better, that's a very different story than if you say, no, the natural process of people buying books leads to big winners. Then you start realizing the big winners are as much luck as they are skill. That's really interesting. Let's go to the next model I want to talk about, something that, when I was reading it in your book, I thought, oh, first-year physics: concave and convex. Yeah. Walk me through that one.
So, I got these wrong in first-year physics. Like, the first assignment, I got them mixed up. It was hilarious; all these memories came back. Yeah, no, this is a challenging thing, because there are certain things you almost have to cover, otherwise it's a disservice, right? And so
the basic idea of linearity is that something has the same slope always: the next dollar, the next thing, is worth just as much as the previous thing. And fundamental to so many models throughout the book is some assumption of either concavity, which is diminishing returns, or convexity, which is increasing returns. We just talked about preferential attachment; that's a form of convexity, in that the odds that somebody buys your book increase as more and more people buy your book. The odds that the first person buys The Tipping Point are low, but the odds that the million-and-first person buys it are much higher, because so many people have already bought it. So convexity just means that the odds of something happening, or the payoff from something, increase as more people do it. So many things in the world are the opposite: they're concave. Concavity means that the added value of the next thing diminishes. Take chocolate cake: the next bite of chocolate cake, the next scoop of ice cream, there are just diminishing returns to that. If you think about adding workers to a firm, as you keep adding workers, the value of those additional workers goes down. And that's true of people on teams as well. Suppose I've got an important decision to make. The second person is going to add a lot to the first. The third person will add a lot to the second, and so on. But at some point, you're just not going to add much value. So there tends to be, in team performance on a specific task, a certain level of concavity. I think the challenge for me in writing that chapter was, how do you make concavity and convexity even remotely exciting? Because it's just kind of mainstream math, and the easiest way to teach it is in terms of derivatives: a linear function has a constant derivative, and a concave function's derivative falls off. So what you do is try to make the case that these are in some sense fundamental, and that not recognizing them,
in particular concavity, can lead to really flawed assumptions. In the 1970s, Japan had this really fast growth, and there were all these articles saying Japan was going to overtake the United States in eight years. But the thing is, if you construct the model, you realize that as you industrialize, you're going to grow pretty fast, but there are going to be diminishing returns to that industrialization.
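A sketch of that trap (plain Python; the exponent and investment share are invented for illustration, not calibrated to Japan):

```python
# Linear extrapolation versus concave growth. Output rises with capital^0.3,
# so growth is fast while industrializing, then flattens out.
capital, output = 1.0, []
for year in range(40):
    gdp = capital ** 0.3
    output.append(gdp)
    capital += 0.2 * gdp  # reinvest a fixed share of output

early = output[5] / output[4] - 1                  # growth rate around year 5
naive = output[5] * (1 + early) ** 34              # project that rate to year 40
print(f"growth rate in year 5: {early:.1%}")
print(f"linear-ish projection for year 40: {naive:.1f}")
print(f"what the concave model actually gives: {output[-1]:.1f}")
```

Projecting the early growth rate forward overshoots badly, which is the "overtake the United States in eight years" mistake.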
The same is true of China. If you did a linear projection of China five years ago, you'd have said, oh my gosh, by 2040 China's economy is going to be just enormous. But the reality is that growth is going to fall off, because, as the model shows, in order to maintain anything even close to linear growth, you have to innovate like crazy, massive levels of innovation. So the idea behind the concavity and convexity chapter was to try to get people to recognize that there are diminishing returns to so many things, right? Linear thinking can be dangerous. Linear projections can be really dangerous. And the last model I want to talk about, I guess it's actually more than one model: local interaction models. Yeah, so these are fun. These are like super fun. So local interaction models, there are some simple
computer models; not that convexity and concavity aren't fun, but they're fun for a smaller set of people, I think. These local interaction models are models where you think of people, first off, maybe on a checkerboard, but eventually you can put them on a network.
And what you imagine is that my behavior depends on the people around me. So a simple example I give often is, how do you greet people? Do you shake hands? Do you bow? Do you fist bump? It doesn't matter what you do; what matters is that you do the same thing other people do. Because if you go to bow and I go to shake hands, I'm going to poke your eye out. So it's not going to work. What you want is, in some sense, what we call a pure coordination game: what I'm trying to do is just coordinate with the people I'm interacting with.
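In game-theory terms, the payoffs look like this, and local imitation is enough to lock in pockets of convention. A minimal sketch (plain Python; the ring neighborhood and tie-breaking rule are my simplifications, not from the book):

```python
import random
random.seed(5)

# Pure coordination: matching pays off, mismatching doesn't, and bow/bow is
# exactly as good as shake/shake. Which convention wins is arbitrary.
payoff = {
    ("bow", "bow"): (1, 1),
    ("shake", "shake"): (1, 1),
    ("bow", "shake"): (0, 0),  # somebody gets poked in the eye
    ("shake", "bow"): (0, 0),
}

# Local interaction on a ring: each period a random person copies the
# majority greeting of their two neighbors (ties broken by coin flip).
people = [random.choice(["bow", "shake"]) for _ in range(30)]
for _ in range(300):
    i = random.randrange(30)
    left, right = people[(i - 1) % 30], people[(i + 1) % 30]
    people[i] = left if left == right else random.choice((left, right))

print("".join("B" if p == "bow" else "S" for p in people))
score = sum(payoff[(people[i], people[(i + 1) % 30])][0] for i in range(30))
print("coordination payoff around the ring:", score, "out of 30")
```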
This happens on so many dimensions. So in an earlier book I wrote, called The Difference, I talk about where you store your ketchup. Do you store your ketchup in the fridge, or do you store your ketchup in the cupboard? Again, it doesn't matter what you do. No, it does matter. Always the fridge. Yeah. And the cupboard people think the fridge people are crazy. And once a doctor said to me, Scott, you may think this is funny, but you have to store ketchup in the fridge because it has vinegar in it. And I said, where do you store your vinegar? He said, in the fridge. And the whole room is like, what are you, a crazy person? You don't store vinegar in the fridge. The same is true with soy sauce: there are soy-in-the-fridge people and soy-in-the-cupboard people. And it doesn't matter what you do, but whatever you do takes on a lot of importance. It defines who you are.
So one of the fun things I do in class, I also talk about in the book, is you can imagine you're actually playing a whole series of local interaction models.
And that collection of local interaction solutions you can think of as comprising a part of culture. So my wife, Jenna Bednar, who we mentioned before, is a political scientist. She has papers on this where you can think of a culture as a set of coordinated behaviors across a variety of settings. And so I'll do this in class. I'll say, okay, do people use their phones at the dinner table? Do people take their shoes off in your house? Is the TV on? Do you hug your family? Right, just a whole set of things. And then I have the students vote, using a Google form, on which ones they do. And we find the modal response across all these things, and I'll say, here are the people who are correct, which are the people who do what I do. These are my people; you can move to the front and you all get A's. Yeah.
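You can frame that classroom poll as data: each student is a vector of answers, and the "culture" is the modal profile. A sketch (plain Python; the questions and responses are made up to mirror the examples in the conversation):

```python
from collections import Counter

# Each student is a profile of answers to separate coordination games;
# a 'culture' is just such a profile. (Questions and answers invented here.)
questions = ["phones at dinner", "shoes off inside", "TV on", "hug family",
             "ketchup in fridge"]
students = [
    ("yes", "no",  "yes", "no",  "yes"),
    ("no",  "no",  "yes", "yes", "yes"),
    ("no",  "yes", "no",  "yes", "no"),
    ("no",  "no",  "yes", "yes", "yes"),
]

# The modal answer on each dimension is the class's 'correct' culture.
modal = tuple(Counter(col).most_common(1)[0][0] for col in zip(*students))
for q, m in zip(questions, modal):
    print(f"{q}: {m}")

# 'My people' are whoever matches the modal profile exactly.
print("students matching the modal culture:", sum(s == modal for s in students))
```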
And the best part about teaching this: one time, this is like 10 years ago, this kid comes up after class and he goes, oh my God, oh my God, this explains it. And I'm like, explains what? He's like, my girlfriend's family. And I was like, what? And he goes, everything I do, they do the opposite. And what's great about these local interaction models is that, prior to that, he had thought they were just intrinsically weird people. Right. He just thought, these are crazy people. He goes, they have their own napkins.
Like, everyone always gets their own napkin with a napkin holder. They take their shoes off. They always have the radio on in the house. It was just a whole set of things that they did. They hugged each other, right? And he's like, well, they're hugging me, right? All these things that they did, he had thought were part of their genetic makeup, some essential part of their character, when in fact it was just a series of coordination problems that their family had solved, right? The other example I have in this space that was great: somebody told me this story about how,
at New Year's Eve one year, after she'd been married into this family for 20 years, she said, you know, look, I love the family, they're great, but I hate the boiled cabbage and beet soup on New Year's Eve. Twenty years in, I think I can say that. Turned out everybody hated it, and nobody had ever mentioned it. And it turns out, I guess, the person it was for had been dead for, like, 15 years or so, supposedly, they think, right? Yeah. And then they decided that going forward they would make, like, one ceremonial batch or something. So I think that you don't realize how much of who we are and what we do comes down to these local interaction models. Now let's make this serious for a moment, away from the ketchup and the bowing. When I go work for a firm, or if I'm working in an organization, stock analyst, psychologist, whatever I'm doing, the
mental models that we use are like local interactions, right? I mean, it's like, oh, you're using that mental model; it's easier for me to use that mental model too. And that then works against this diversity, right? So it really becomes a super important thing. And what's also very funny is that maybe your mental model is better than mine, but it's still worth it for me to hang on to my mental model, because it's providing that diversity. So collectively it's worthwhile. But there's going to be, and again, back to the point you raised earlier about evolution, and this is where the many-model thinking becomes fun, you realize: I go work in some organization, or I'm working in some community of practice, and I've got a collection of mental models I'm using. It just becomes easy for me to start coordinating on other people's mental models, using other people's terminology, because
it's more efficient, and you know how to appeal to them, how to persuade them, how to interact with them, how they see the world. And then they're predictable. This kind of goes back to, have you read Ender's Game? No. Oh. One of the key moments in Ender's Game, where Ender is this kid who ends up saving the world, totally fictional book by Orson Scott Card, we just read it with my kids, is Ender saying that to beat his enemies he has to understand them, to ask:
What does this problem look like through the lens of this person? What does it look like through the lens of that person? You mentally walk around the table, and then up and down a hierarchy too: what does it look like to shareholders? What does it look like to the government? What does it look like to all the people who interact with the system? And through that, you can get this more nuanced view of reality. And if you see the problem through everybody else's lens, you know how to talk to them in their language, in a way that might be more likely to appeal to them. Yeah, that's such a great point, because one of the things that I struggle with in this whole space, and I think it's a good place to struggle, is,
as you move from very formal models, like fitting some sort of hierarchical linear model, to abstract perspective-taking, to some notion of a disciplinary approach to a problem. So let me give a very specific example that I find cool to think about, which is the drug approval process. If you look at a company like Gilead or Genentech, somebody constructs a molecule, and then they've got to decide, okay, is this molecule something that we can use to improve people's health? One perspective to take on that is a purely pharmacological perspective: the body chemistry, how does it work, just pure science. But then there's also a sociological perspective: how will people take this? How will it get passed on? Will it get abused? Could it be abused? What uses would it take on? There's also an almost purely organizational-science, business-school perspective: if it's complicated to explain, how do we educate the doctors in how to use it? And then there are also people who understand the political process: what's the likelihood of it getting approved? Even if it works on all these other dimensions, can we get it through the government approval process if it's somehow different from the boxes they use? So what you've got is all these different disciplines to bring to bear. And just like you're saying in this book, if I'm the CEO of Gilead and I've got to make the call, do we take this drug to market, I actually have to hire people who can take all those different perspectives, right? Otherwise I probably won't be CEO for very long, because I'm not going to get it right.
But then, let's make things just a tiny bit less abstract for a moment and think about the traditional arguments for a liberal arts education. The reason you want to read literature from a whole bunch of different vantage points, the reason you don't want to just read the great-man view of Canadian history or US economic history or something like that, is that there were all these other people who experienced that same thing and saw it from a very different perspective. But what's funny here is that when you think about many models, it sounds like I'm making the point that, oh my gosh, people should be spending more time learning technical stuff. And on the one hand, that's kind of true; people should be learning technical stuff. But the core argument I'm making is very similar to the argument that people at the other extreme are making about why a liberal arts education is so important: the ability to do perspective-taking, to learn to see the world through different eyes. I think where the difference is, is that I'm a pragmatist, in a way. I mean, I just see so many opportunities. And so I feel like
I'm coming at it from a much more sort of pragmatic perspective in terms of going out there and making a difference in the world, as opposed to just purely appreciating all these different ways of seeing things.
And the reason that distinction matters is that in literature, it could be that every perspective is worth considering and engaging and thinking about, because there's no end game, ironically, given the name of the book you just mentioned. But if I'm making an investment decision, or if I'm worried about drug approval, or if I'm trying to write a policy to reduce inequality, or if I'm trying to think about how we teach people, there is an end game. There are things we can measure. There are performance characteristics. And so it could very well be the case that you say, "I think we should think about it from this perspective. I think we could use this model." Then we can beta test that perspective and that model and conclude, "No, we shouldn't." The difference, I think, in the approach I'm promoting is that, yes, you throw out a whole bunch of models, but if the spaghetti doesn't stick to the fridge, you let it go for that problem. You probably still keep the model around, because there are going to be other cases where it does work.
But the point is, there are going to be cases where it doesn't. And so- - You don't wanna force it. - No, you don't wanna, so there's a limit to inclusion, right? In the sense that you only wanna be inclusive of things that are actually gonna help you do whatever it is you're trying to do. - I think that's a great place to sort of end this conversation. I feel like we could go on for another few hours.
But I want to thank you so much for your time, Scott. This has been fascinating. Thanks. It's really fun to have these open-ended conversations, and I really appreciate the format, you know, as opposed to simple question-and-answer; it gave me time to elaborate on the book and what I've been thinking. Thank you. Awesome. We'll have to do part two at some point. Thanks. Hey guys, this is Shane again. Just a few more things before we wrap up.
You can find show notes at farnamstreetblog.com slash podcast. That's F-A-R-N-A-M-S-T-R-E-E-T-B-L-O-G dot com slash podcast. You can also find information there on how to get a transcript.
And if you'd like to receive a weekly email from me filled with all sorts of brain food, go to farnamstreetblog.com slash newsletter. This is all the good stuff I've found on the web that week that I've read and shared with close friends, books I'm reading, and so much more. Thank you for listening.