You might like assessment and accountability and objectives and metrics because they make you feel better, because it feels like we're making sure that nothing bad will happen. But you have to recognize it's just a security blanket. It doesn't work. Take off the straitjacket, get rid of the security blanket, and just acknowledge how reality works. And I'd prefer it if reality worked such that if I set an objective, I could just get to it as long as I'm sufficiently determined. That'd be really convenient, but it's just not the way reality works.
Welcome to the Knowledge Project Podcast. I'm your host, Shane Parrish. This podcast is about mastering the best of what other people have already figured out so that you can apply their insights in life and business.
If you're listening to this, you're missing out. If you'd like special member-only episodes, access before anyone else, hand-edited transcripts including my personal highlights, and other member-only content, you can join at fs.blog.com. Check out the show notes for a link. Today I'm speaking with Ken Stanley. Ken is the author of Why Greatness Cannot Be Planned: The Myth of the Objective, a book I read and loved, so I reached out and wanted to chat with him.
Ken is currently deciding his next adventure, but recently led a research team at OpenAI. In this episode, we're going to take a simple idea and take it seriously. That idea is the question of objectives. They seem good when they're modest, but things get a lot more complicated the more ambitious they get.
We also explore the growing obsession with metrification, why we accept failure in science but not when it comes to things like politics or education, why trying harder doesn't always help you achieve the outcomes you seek, why you can't be so tied to your destination that you're not open to the unexpected and unplanned, and why you should avoid ideas that make too much sense. It's time to listen and learn.
The IKEA Business Network is now open for small businesses and entrepreneurs. Join for free today to get access to interior design services to help you make the most of your workspace, employee well-being benefits to help you and your people grow, and amazing discounts on travel, insurance, and IKEA purchases, deliveries, and more. Take your small business to the next level when you sign up for the IKEA Business Network for free today by searching IKEA Business Network.
You wrote a book about questioning the value of objectives, which revealed a surprising paradox: objectives are good when they're modest, but things get more complicated when they're ambitious. Can you expand on that? Yeah, this is something that people don't talk about that often. People don't talk about the problem with setting objectives. There are some controversies in society; this is currently not one of them. But if you think about it, this is something that we do all the time. Basically, one of the deepest facets of our culture, I think, is that we think of accomplishment and achievement and discovery in terms of setting an objective and then pursuing it. And I do research in artificial intelligence; that's my normal job. And in the course of doing that research, we just started to see undeniable evidence that
this approach to achievement has some serious flaws. At first that was mostly a kind of algorithmic realization: okay, well, this has applications and implications for artificial intelligence. But it dawned on us over time that it actually has a lot more implications than just for artificial intelligence.
Because it's not just something people do in the algorithms of artificial intelligence; it's basically what we do all the time in life and in our culture. And it started to seem to me almost urgent that this be brought into a public conversation of some sort. So that's why my co-author, Joel Lehman, and I decided to take the unconventional road of writing a book like this, which is not an AI book, and try to provoke at least a conversation, if not some kind of change in the way that institutions and people structure what they're doing. I think we're going to get into some of the drawbacks to this approach and maybe some of the nuances around it. Before we do: you mentioned that some great ideas were never objectives for anyone, at least until they were discovered: rock and roll, for instance.
I think penicillin is another one. Can you say more on that? Yeah. I mean, it's related to the idea of serendipity; in serendipitous kinds of discoveries, you weren't expecting to make the discovery. And I think the insight is that this is much more common than the kind of narrative that we tell ourselves about how discoveries, inventions, and innovations are made.
And actually, a lot of what we do that facilitates making these kinds of important discoveries is to set ourselves up for effective serendipity, which is not the way we talk about things when we talk about setting an objective and just moving towards it. So, of course, something like rock and roll is a good example: it's not the kind of thing you could set as an objective, because it doesn't even exist as an idea until you run into it. And yet somehow, you know, the pieces were in place when Elvis was there on the scene for him to kind of run into this.
And the question I think that's interesting is why that happens: what kind of situation leads to that, and what kind of person will take advantage of that kind of situation. Are all objectives the same? No. And it's an important caveat to what I'm saying that there are many objectives that I would characterize as just modest.
Like, for example, you know, I want to be able to run for longer or I want to lose some weight or maybe even like I want to get a degree in accounting. Nothing against accounting. But the reason I call those things modest is just because, well, they've been done many times. Like we know that these are achievable things.
And I think those need to be distinguished from what I would call ambitious objectives. So by ambitious, I mean, these are things we don't know how to do. We're not sure how they're going to get accomplished, even though we want to accomplish them.
Curing cancer, or achieving artificial general intelligence, something like that. Those are really ambitious. And so when I critique objectives, or when the book does, it tries to make clear that it's really the ambitious ones we're talking about here. Modest objectives shouldn't be affected by this critique, and it would be verging on kind of nutty, cranky behavior to try to get rid of those. Of course you can set modest objectives.
But you have to remember that a lot of our society runs on the ambitious ones.
Like, we are banking on innovation to save us from all kinds of problems, and also to deliver us into new kinds of worlds. So we depend on this kind of ambitious stuff happening. And the fact that we run things as if it actually happens through objectives is perhaps a self-deception that is really grinding down our efficiency: our ability to take advantage of the resources we have to make these kinds of discoveries by recognizing how they actually work.
So the core problem with ambitious objectives, then, is that in many cases trying harder won't help you achieve the outcomes you're seeking. And as a follow-on to that, you can't be so tied to your vision of accomplishment that you're not open to the unexpected and unplanned. Yeah. So it is true that one of the principles in the book is that you can actually block your own ability to reach an objective by setting it, which is paradoxical.
So grappling with that is, you know, hard and important, and something the book tries to discuss. But yes, we're actually causing ourselves to achieve less in these ambitious cases by setting very ambitious objectives. And one of the things we conventionally do when we set an ambitious objective is also set up some metrics to measure progress towards that objective.
And that's where I think things really get tripped up: these metrics or assessments. We love assessment in our culture; we have a very big assessment culture. And the assessment is basically trying to give us a security blanket so we can feel like we're moving towards the objective and actually making progress. The problem is that a lot of the time, even if your score on a metric is going up in the short run, it doesn't mean you'll get all the way to the point you want to get to.
That's a fundamental problem. It's called deception, and it afflicts all complex problems. And so the fact that we rely so much on these assessments and metrics is itself very deceiving,
and can ultimately cause us to invest a lot in a deceptive path that is actually going to lead to a dead end. And that's why, although it's counterintuitive, it can actually be bad for you to have a very strict objective that you're assessing movement towards. Let's make that tangible. One of the examples you give in the book is schools and improving student performance. And I'm thinking here that's both hard to argue with and also never seems to improve despite billions, if not trillions, of dollars. So progress can't be packed into a single metric. But what should we do instead for something like that, where we also need accountability on behalf of politicians or decision makers who have some skin in the game for their choices? That word accountability always comes up. Is it why we feel like we have to have these metrics and assessments?
Education is a great example of this because the education system does have an objective.
And it's a little bit fuzzy. It's not usually stated explicitly, but I'll characterize it something like this: the objective of the education system is for everyone in it, all of the students, to score perfectly on a bunch of assessment tests. That would be ideal. Of course, we're never going to get quite there, but that would be the ultimate perfect objective. So what leads you to that outcome?
We don't come close to that outcome right now. We have many people whose performance is way below what we want to see. So that's the problem with the education system. And we're trying to solve it through objectives. So we say, okay, what we need is some standardized tests. We're going to blanket basically the entire country with these standardized tests, and other countries do something similar.
And then we have a universal measure or metric that can be used to decide whether progress is going well locally. If what's going on in your local school district is assessed with a global test used across the nation, then we can compare things and see if they're moving in the right direction in different locations. And the problem is that this is subject to the deception problem, or what I would call the objective paradox,
which is that your metrics can look like they're going up a little bit from year to year, but that doesn't necessarily have anything to do with getting to a point where everybody is scoring perfectly, or even above the threshold we would consider acceptable.
And that's borne out by history: it never happens. Every decade or so there's a whole new push to really do this seriously this time, and the same thing happens over and over again. We don't learn from this mistake. The problem isn't that a particular assessment is somehow flawed; the problem is with assessment itself.
We cannot make progress on certain kinds of extremely complex problems, and education is one of those, simply by laying out some assessment system and then trying to follow it towards this global objective, which is incredibly complex to get to. But then what comes into play is what you said: this accountability issue. If you make a critique like the one I just made, and people obviously do make critiques of standardized tests,
the response is usually: OK, well, where's accountability going to come from? This is a way to hold people responsible. And the problem is that it's not necessarily the dichotomy it's presented as, where you either have accountability or you have no assessment at all. There is a possibility of having accountability with a different approach.
But we need an approach that recognizes how you actually make innovative progress on an extremely complex problem, which is what the book tries to get into: how those kinds of discoveries are made. And really, what happens in these problems is that we don't know the stepping stones, this is the key thing, the stepping stones we need to traverse in order to get to the outcome we finally want,
which in this case is this universal high achievement. We don't actually know what we need to cross through. And what we can be almost sure of is that one of the stepping stones is not everybody getting just a tiny bit better next year. That's probably not going to happen, and even if it did, it's probably not going to lead to this kind of universal high achievement. And so it's very deceptive.
So the stepping stones are likely to be counterintuitive. And this is where these metrics that we use across all these institutions start to break down: if the actual stepping stones that lead to where we want to go are counterintuitive, in other words not what you would expect, then the metrics are useless, right? They won't detect those stepping stones, because the stepping stones don't look like what the metrics are trying to detect. And if you think about it, of course they're going to be counterintuitive. It's like a rule. Because if the stepping stones are not counterintuitive, then it's not a hard problem, and we would have solved it already. That's basically what makes a problem hard: you don't know what the stepping stones are. If you do, then you don't have a problem. You don't even need the assessments; just follow the stepping stones.
And so because we don't know the stepping stones, what we need to do is proliferate stepping stones, or stepping-stone candidates: things that could lead to something interesting, though we don't know which one. This is related in some ways to investing. It's like you have a portfolio of ideas, and you don't know which one is going to pay off, but you need the portfolio because you can't make that kind of prediction a priori. And so we would need to have some stepping stones that we invest in that don't ultimately pay off. But if we have a portfolio, then some will pay off and eventually branch to more stepping stones, and some of those will lead to this ultimate holy grail.
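The deception Ken describes can be made concrete in code. The toy world below (the wall, the "key," and the scoring are invented for illustration; they are not from the book) shows a search that greedily improves its metric, distance to the goal, getting stuck, while a search that simply proliferates unexplored states, a crude stand-in for the stepping-stone portfolio, reaches the goal:

```python
# Toy "deceptive" search problem, invented for illustration.
# World: integer positions -5..10. Start at 0, goal at 10.
# A wall blocks entry to position 5 unless you first visit -3,
# the counterintuitive stepping stone that looks like regression.

def neighbors(state):
    pos, has_key = state
    result = []
    for step in (-1, 1):
        new_pos = pos + step
        if not -5 <= new_pos <= 10:
            continue
        if new_pos == 5 and not has_key:
            continue  # the wall: impassable without the "key"
        result.append((new_pos, has_key or new_pos == -3))
    return result

def metric(state):
    return -abs(state[0] - 10)  # higher is better: closer to the goal

def greedy_climb(start=(0, False)):
    """Only accept moves that improve the metric."""
    state = start
    while True:
        best = max(neighbors(state), key=metric)
        if metric(best) <= metric(state):
            return state  # stuck: every move looks like a step backward
        state = best

def explore(start=(0, False)):
    """Ignore the metric; just keep expanding states never seen before."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        if state[0] == 10:
            return state
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

print(greedy_climb())  # (4, False): stuck just short of the wall
print(explore())       # (10, True): reaches the goal via the detour to -3
```

The metric rises steadily right up until it dead-ends at the wall, which is exactly the short-run-improvement trap described above.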
But the thing is, obviously we're going to have some risk, and we're going to have some things that don't work out. And we need to tolerate that, because that's what allows stepping stones to proliferate. And generally, these kinds of assessment and accountability cultures don't allow that. If you have this accountability culture, you're not going to be willing to tolerate things that don't look like they're succeeding with respect to your naive metrics.
And so you need a completely different kind of a culture and you still want accountability because people just can't live without accountability. But I think accountability needs to be much more nuanced. It needs to recognize that what makes something valuable is if it's an interesting stepping stone.
not whether a metric is going up. If there's a teacher out there in some obscure town in the middle of Alabama who does something interesting, the key to getting the education system overall to improve is to disseminate what that teacher did through the social network. It's not necessarily the solution to all the problems we have in education.
But it will be the solution for some people, and they can follow up on that and go to the next stepping stone and see where it leads. Maybe some of these things lead to dead ends, but we can't find out if it doesn't disseminate through the social network, in this case, the network of teachers.
There's nothing set up whatsoever to facilitate that in our system. Everything is centralized and globalized so that everything is assessed with respect to the same kind of criteria. So if something interesting happens in some obscure place, nobody's going to know about it. Nobody can follow up on it. Nobody can think about it, discuss it. But in the new version of assessment, that should be recognized and rewarded.
I can think of ways, but I'll try not to take up too much time with this; we could discuss it if we want. There are ways we can imagine that peer review and things like that could allow us to recognize interesting things. And we'd still have assessment: we wouldn't allow completely crazy things to go on. If somebody proposes, let's just not do anything in school and let the kids run around, that would get caught by something like peer review. But we do need to be able to recognize things that are interesting, which means things that are not objectively detectable through the usual assessment techniques. I have four thoughts that came from that. Wouldn't peer review necessarily push back on anything that's counterintuitive? There's a cultural issue, actually, with that.
If we live in a culture where you're basically under the gun all the time, where your boss is looking at you and saying, if you don't walk the narrow line that we consider to be the accountable line, you're in big trouble, you could lose your job, you could lose funding, then the peer review system will also suffer from that culture. People will be trying to patrol the culture to make sure it's being adhered to. But I don't think that has to be the way it works. What peer review is supposed to do is allow individuals to speak from their individual perspectives. It basically disentangles the individual from this large, global, monolithic view of what it means to be doing something good. And that is not an impossible vision, because how innovation actually happens is through individual connections.
Most people see some idea somebody had and say, that doesn't fit with the usual paradigm, that's not actually a good idea, we're all very sure right now what we need to be doing. But there's some contrarian who sees that idea and says, actually, I can see a lot of potential here.
And if you give them the space to express that, and express means not just writing a critique, which they could, but also following up and actually trying it themselves and building on it, because they see the spark of potential there that other people don't see, then yeah, that is how ideas percolate through networks. Individuals make decisions that are somewhat disentangled from the large mass of consensus in the field.
And I think peer review can facilitate that, but you have to start giving people permission: make it clear that this is a change, that we're not in this universal assessment culture anymore. But we're still going to use peers for accountability. Your peers will see what you do, so you can't just go crazy and do something stupid.
And they will report it if it's something absolutely intolerable, but they have an opportunity to make unique assessments; we're not telling them what they should like. It's just like in academic publishing, where peer review happens as a standard aspect of the culture. You don't tell the reviewers how to think; the reviewers are the experts in their field. A paper is submitted to a journal, and the reviewers, who are other scientists, get to think about it however they want to think about it. But that doesn't mean there aren't big problems with peer review. In fact, I think peer review in scientific publication, and also in the assessment of grant proposals, is flawed, also because of objective thinking.
But it is a place where, if we structure it right, which would be somewhat different from how it's structured even there, I think we can start to escape this kind of global thinking. It's so interesting, because the peer review system also hasn't caught the replication crisis, and crisis may be an overused term, but it hasn't caught that sort of mistake. But it sounds like what you're really saying is that ambitious goals are far off in the future, and we don't know what the next stepping stone is. And so it's better to take an almost evolutionary approach where we're creating these mutations or copying errors, trying all these little experiments, and then we see
which of those experiments leads to some interesting insights or conclusions. And then the idea being we take those conclusions or that interesting insight, and then we propagate it to all the other nodes, almost like nature sort of rewarding variations. And we do this blindly because we don't know
what will yield the best results. Is that how you think of it? Yeah, that's a good characterization. Actually, it's not a coincidence that you bring up evolution, because the genesis of this discussion in the book is that I was working on evolutionary algorithms, a branch of artificial intelligence, and trying to understand what actually allows evolution in nature to make the kinds of incredible innovations that it makes.
If you think about it from a computer science perspective, as opposed to a biology perspective, evolution is a very unique thing in the sense that it's kind of like a search or learning algorithm that discovered everything that was ever created in nature in a single run.
This is very different from what you see in typical machine learning, where we try to solve one very hard problem, all the resources go to that one problem, and that's the objective and that's the run. It's very unusual for a single run to discover the solution to every problem. And by problems I mean: how do you get flight to work? How do you get photosynthesis to work? How do you get human-level intelligence? It's all one run.
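This line of research led to the novelty search algorithm that Ken developed with his co-author Joel Lehman, where selection rewards behaviors that differ from anything seen before, rather than progress toward an objective. Here is a toy sketch of that idea; the domain, genome encoding, and parameters are invented for illustration and are not their actual implementation:

```python
import random

# Toy novelty-search sketch. Genomes are lists of +/-1 steps; a genome's
# "behavior" is just its endpoint. Selection favors behaviors far from
# the archive of previously seen behaviors, with no objective at all.

def behavior(genome):
    return sum(genome)

def novelty(b, archive, k=5):
    # mean distance to the k nearest behaviors seen so far
    dists = sorted(abs(b - other) for other in archive)
    return sum(dists[:k]) / k

def novelty_search(generations=30, pop_size=20, genome_len=10, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice((-1, 1)) for _ in range(genome_len)]
           for _ in range(pop_size)]
    archive = [behavior(g) for g in pop]
    for _ in range(generations):
        # keep the most novel half, refill by mutating the survivors
        pop.sort(key=lambda g: novelty(behavior(g), archive), reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            i = rng.randrange(genome_len)
            child[i] = -child[i]  # flip one step
            children.append(child)
        pop = parents + children
        archive.extend(behavior(c) for c in children)
    return sorted({behavior(g) for g in pop})

print(novelty_search())  # a spread of distinct endpoints, not one "best" value
```

Instead of converging on one solution, the population keeps spreading into unvisited behaviors, which is the single-run, many-discoveries character of nature that Ken is describing.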
And I was trying to understand this from an algorithmic perspective, not like a biology textbook, but: can we actually create algorithms that work in some way analogous to what nature has done? And so this discussion comes from some of the revelations of that work. It seems somewhat remote from it, but it's very connected, actually, because it's based on a recognition of how nature was innovating. And one of the keys you mentioned that I just want to highlight is the word interesting.
When we talk about selection in nature, it's like: who survives? Usually the word interesting isn't what comes up; usually you hear the word fitness. But actually, and I won't go into all the detail, you could take an alternative interpretation of nature: the way it's set up is actually a way of detecting things that are interesting in some sense. Because everybody has to be, as I sometimes put it, a walking Xerox machine.
You have to have within your guts something that will make a copy of you, which is extremely complex, or else your lineage will not persist. And that means pretty much everything is going to be interesting in some sense. It's a very abstract and weird sense, but you can't degenerate into complete meaninglessness, because everybody has to be a walking Xerox machine. And this keeps things honest and kind of interesting.
Now, when we move to other paradigms, like education, or human invention generally, how civilization progresses through the space of possible inventions, then it's different, because here we care about our own view of what's interesting.
Evolution sort of has an arbitrary concept of what's interesting. It just happens to be the way nature is structured. But we care about things that we find interesting as people. And so really the crux of the matter is to really delve into the issue of what is interesting. The education example is an example you could look at for reference. What does it mean to have an interesting teaching technique independent from the immediate assessment implication of it?
And we are afraid of that conversation, I think, as a culture. We do not like talking about whether something's interesting, because, again, it seems to veer around this accountability issue. We don't want to hear your subjective view of why your pet idea is interesting, because we can't assess it in some objective sense. Also, I think one of the problems with peer review is that we don't allow it to really discuss interestingness.
But the truth is, interestingness is the magic sauce, and that's what I think humans are really good at. That is why civilization has created all of the amazing artifacts and genres, musical and artistic and literary genres, over the eons. I mention those just because I don't want this to be only about technology. It's about everything.
It's because we have a nose for the interesting. We're super, super sensitive to what's interesting. We've just gotten to a point in our culture where we're not allowed to talk about it. And I want to point out that it's not unprincipled to let people talk about what's interesting, because we're talking about experts here. We're not saying that in the field of education we're going to go up to some random person on the street and ask them whether some random teacher somewhere did something interesting.
We're talking about the people who are experts in education who have a history, who have actually been in the field for years. The idea that those people, that their opinions about what's interesting are invalid is completely throwing away, I think, decades of societal investment. All of the education that you put into that person from the time they went into kindergarten all the way up to the point that they got out of graduate school. What was the point of all that if we don't trust their judgment on anything subjectively?
It's actually the subjective judgments that are the interesting ones, because the objective judgments are easy. You don't need a degree to just measure something. Some kid takes some test, you get a score, you can average the scores across everybody at the school.
Who needs a degree for that? Anybody could look at that and tell you how it's going. It's the interestingness judgments that require education, experience, deep insight. And so the fact that we are paranoid and afraid and unable to engage with the question of what's interesting is, I think, crippling to our ability to innovate.
That's fascinating in a couple of ways. One is that you mentioned the word judgment, and judgment is subjective, and we're hesitant to say or do anything subjective. So we grab onto these metrics, right? An example would be during COVID: if we were "mutating," we could have done something where the best grade five teacher in your state or your province or your country is now teaching all grade fives. This could have been set up, and you could have had access to the best teachers in the world at whatever level you're at, in whatever subject, or even the best teachers in your state or your country or your city. We could have done this at any of those levels. And we didn't, because that would require, A, trying something that might fail, which I want to talk about in a second, but B, it would require us saying something subjective: that this is a better teacher than somebody else.
And we're so hesitant to do that. What are your thoughts when I say that? Yeah, I agree with the point. We could debate about whether we should have one teacher teaching everybody in New York State or something like that. I'm not saying we shouldn't; we could debate about it, obviously.
But the point that you're making that we can't even consider it because of the structure of the system, that is a problem. Why can't we debate about it in an official sense? I mean, you and I could debate about it and it will have no effect on anything. But the actual official system, the formal system that actually has the gatekeepers inside of it will not ever go through that debate.
Because it's incompatible with the assessment system, and it's just way too unwieldy given the way that bureaucratic system is set up.
And I think that's an example of what is impeding our ability to collect stepping stones, try different things, see their outcomes, and really be flexible and innovative and proliferate ideas in the spirit of nature. But it's multifaceted; you can't point to just one reason for it. It's pervasive across all kinds of levels of the culture that we're afraid of this kind of engagement and exploration, including what you alluded to briefly: that we are afraid to take risks. That's part of it. What if it doesn't work? This could be the end of the world or something. And I think it depends on the domain whether risk is
tolerable. We should acknowledge that risk sometimes isn't worth it. There are some places you don't want to explore, and so maybe there are some places you can't innovate, because you just can't tolerate the risk. This may be true for an individual: you've got a family to take care of, and you can't really go try some crazy thing right now.
Or it could be at a societal level: we can't try a whole new economic system. It might be interesting, but it would potentially cause so much devastation that we just can't take the risk. So we have to acknowledge that some risks are too risky. But I think that's orthogonal to the idea that we can't discuss what's interesting. We can still discuss what's interesting, and we can also talk about whether the risk is too high. And there are certainly systems where risk is tolerable.
And it won't be too devastating. We don't want kids to go through a year of school and learn absolutely nothing.
But that doesn't mean we can't try anything interesting whatsoever either. And so we have to thread that needle carefully depending on the domain. Some domains have lots of room for risk. For example, science funding: the whole point of it is risk, as far as I can see. We don't expect most of these things to work out. I don't know if that's the view of the National Science Foundation, but my view is it's okay if lots of things don't work out.
So in a situation like that, there's not any real downside to doubling down in the way I'm advocating, which is being a lot less objective, a lot more subjective, and a lot more willing to talk about what's interesting. And then in more brittle situations, like the national economy, we have to be a little more cautious.
Because there's only one national economy. But we can do it, I think. We just have to be clear-eyed about what the consequences might be. It's like nature, or evolution if you want to call it that, has no concept of loss aversion. So it'll just keep trying things.
And if something's fit, it'll reproduce, and if it's not, it'll eventually weed itself out, and nature will keep trying things over and over again. It doesn't have a consciousness, so it doesn't think about being wrong, or the consequences of being wrong, or moving backwards to move forward.
It doesn't have any of those concepts, where we do. We accept failure in science, but we won't accept failure when it comes to an education system. So anybody who comes in and tries something else faces this really weird equation, right? Where you have a small, linear upside and an exponential downside if things go wrong. And so it prevents us from trying things.
And so that's why everything always looks like what came before it, with slight wording differences or nuances around it. Yeah, yeah, it's true. That's a good point about nature, how there isn't this kind of assessment. There's nothing like, "Should this mutation go forward? Let's have a committee look at this first before we check into it." It just happens. And sometimes the mutation will be bad, and that lineage won't persist. But what's interesting is when you look at it in aggregate, the system is obviously absolutely prolifically creative. I mean, nature is like the ultimate creative genius.
And that's why: it's not afraid to try things, and it's willing to invest resources into things.
But, of course, we can't be that laissez-faire. We can't just let anything happen anywhere. I understand and acknowledge that people's lives can be affected. But we certainly can swing the pendulum a little bit away from the current assessment-and-accountability paranoia, especially in some domains like science research, where it's very natural to do that. There's no reason to be paranoid in those kinds of domains. In other domains, like education, you have to be a little more careful. But I think it's not working anyway. That's one of the points here, I think: look,
You might like assessment and accountability and objectives and metrics because they make you feel better, because it feels like we're making sure that nothing bad will happen. But you have to recognize it's just a security blanket. It doesn't work. I mean, look at the education system: it never gets any better. So it's just making you feel better, but it's not actually doing anything productive.
So maybe it would be worth it, and actually not any worse, if we did allow some more risk-taking. Which is not just about risk; it's about actually following interesting things. That's really what it's about on the positive side. Risk sounds like it's all negative, but we're talking about following the interesting things.
Maybe things would actually be better if we did that: take off the straitjacket, get rid of the security blanket, and just acknowledge how reality works. And it's unfortunate that reality works this way. I'd prefer if reality worked where, if I set an objective, I could just get to it as long as I'm sufficiently determined to do it. That'd be really convenient, but it's just not the way reality works.
And so reality is inconvenient and difficult and scary. How do you map that to, say, Elon Musk saying something crazy, or what seems crazy, like "we're going to go to Mars in the next eight years"?
which is this objective, and we don't have all those stepping stones. We might have a little bit: he's building custom engines, and there's a little bit of technology that exists. But there's also a psychological value, I think, in some ways, of pulling people toward a pursuit that might never even be reached, right? Mars might be one example, but CEOs and politicians do this too. And at its best, it sort of unites us. It pulls us through sometimes when we're having hard times, and we feel part of something larger than ourselves, part of something meaningful. So there's a psychological angle to pursuing these big ambitions. Yeah, that is an interesting question, because we see these quests that are set up.
Self-driving cars is one of those, which from the perspective of this objective critique is really interesting, because it's a very ambitious objective being set. So it's an example of what I'm critiquing, just completely plainly. And they often are not successful,
I would claim, for the reasons that are in the critique. So the self-driving car thing: back in 2016 or so, I don't know the exact year, people like Musk, but others too, not just him, were saying this is around the corner, like one or two years, you're going to start seeing these services. And it didn't happen.
And the interpretation through what I'm saying would be: well, that's not a surprise. This isn't how innovation actually happens. You don't set some extremely ambitious objective where we don't yet know the stepping stones, then just double down and throw all your money at it, and it happens. That's not how things actually work. And the crux of the argument is: are the stepping stones actually there? That's the real question.
I think often visionaries are interpreted as people who make these statements. "We're going to go to Mars." There's a visionary; let's put the halo on that person. But the thing is, that's just speculation, because we don't yet know the stepping stones to getting humans on Mars. I think a visionary is somebody,
in contrast, who has recognized when the stepping stones actually have snapped into place. Now, that's a person you should follow. And that's a very interesting and unique kind of person, a different kind of person. I think that's more a Steve Jobs type than the Elon Musk kind of thing. I agree with you, though, on the point that it might rally interest in an area. And that connects to the word interestingness.
That could be a positive thing. So it's not all negative, just somebody saying we're going to go to Mars. There could be a positive side, because it moves resources and people's interest into an area which might matter. And so we could still say, well, the predictions here are wrong, but the social effect, the cultural effect, is actually positive. I could see that.
And I think that makes things a little hard, because it's complicated by that. It's not like we can just fully critique somebody like that and dismiss them. But in terms of who you want to lionize, I think you have to be fair and at least acknowledge that that's not what they're actually doing. They're not just saying this is culturally a good place for us to be interested;
they're making claims that are not really well-founded. And so it's not really visionary to make these kinds of claims, and you don't want to go too far in embracing and lionizing this kind of thing. You can say, well, it might have had a positive effect, and we can concede that. But what's really interesting is the people who recognize when the stepping stones are there. Because even the visionaries, the so-called visionaries, are so bad at that. They're the ones who tell us this is right around the corner, this is going to happen, that's going to happen. It never actually ends up being that way.
But it's this rare kind of person who's like, huh, look at the things that we have. Look at the technologies that we have, like screen technology.
Right now, we could actually make this iPhone concept for real. It's actually possible to do it. And that's not just speculation. It's not, "I predict that 10 years from now there's going to be this phone thing that does all these magical things." I actually realized now is the time. That's very hard. People have visions all the time of really cool stuff that might happen, like flying. People were trying to build flying machines for hundreds of years.
They're not particularly impressive people because they said, "We might be able to build a flying machine." They were wrong about how long it was going to take and what was going to go into it. But the Wright brothers were in the right place at the right time. I view the Wright brothers more in that way. They saw that the stepping stones had now snapped into view, and so they saw this actually is the time when this can happen.
Those are the people to be impressed with and to follow, I think. But hold on, at a meta level, and I might be completely wrong here, isn't that a variation? Like, we try this idea now and it doesn't work, and just because it doesn't work doesn't mean that we shouldn't try it, because that in and of itself is a variation. This is how nature proceeds. Yeah, I think this gets into some subtlety. Like, is it ultimately
actually harmful to just try something, even if it's really unrealistic?
And I think it depends on what your motivations are. If you're going in that direction because it's interesting, it might not be harmful. I think the field of AI is kind of like that. At some level there's this really grandiose conception of some human-like computer. And that is, I think, a naive objective right now. We just don't know how to do that. We don't know the stepping stones that lead to it, although they're getting closer perhaps, but they're still not close, I would say.
But at the same time, investigating around the area of algorithms that have intelligent qualities is still valuable, I think, because it's interesting. One thing that happens if you do that is you're unearthing stepping stones that could lead to something else that isn't artificial general intelligence, but is still really valuable.
That's not usually how it's thought of. I guess in some way that would be disappointing: well, I made some progress, but it's never going to get to the AI that I'm envisioning, yet it still caused something cool to happen. But in effect, that's really what's happening. The effect of AI on industry is significant, but it's not because human-level intelligence has emerged, because it hasn't. It's because these things have other implications that are, in the short run, quite useful and interesting.
And so it's sort of a side effect of the fact that people have this grandiose vision that a lot of interest has now focused in, and a lot of stepping stones have been uncovered. And we don't really know where those stepping stones lead. They may not lead to the human level, but they do lead to interesting things. So I think you have to distinguish which type of thing we're talking about. When somebody says something like AI, a self-driving car, a flying car, whatever it is,
are we talking about "this is an interesting space to play around in because we're going to find some interesting stuff"? Or are we literally believing that within two years, or even ten, we're going to be on Mars? And it's not like the answer is extremely critical; you could be wrong and still have gotten a lot of people onto some interesting stepping stones.
But at least I would want to, for myself, try to disentangle that question and decide which type of visionary this is and what type of vision this is, because I think it guides how realistic it is. And it also takes out a lot of inefficiency to recognize why you're doing something.
Because when I investigate AI, I think of myself as basically just looking at stepping stones because they're interesting. I don't necessarily think "this leads to AI" or "this doesn't lead to AI," because I don't know. It's way far off; I can't tell you. So what do you tell your boss? Well, that's not you, but for somebody listening to this: how do I go to my boss and say, you know what, these objectives are really destroying my creative thinking here?
How do I step back? And what do I say? That is a serious problem. That's actually one of the big reasons we wrote the book, because the book is, I think, a weapon, right?
It becomes an argument. Yeah. It's trying to empower people to make an argument to their boss, because it's really scary, I think, to argue like this. It sounds wacky, actually, if you don't have any context, if you haven't listened to this show or read the book or anything. If you just go to your boss and say, look, I'm just doing this because it's interesting, and there's no assessment, we're going to drop the assessment, right?
Just let me do it, because interestingness is really important. I mean, you're risking your job; your boss is going to freak. And it's not only because your boss is a jerk, because he or she is not necessarily a jerk. The real problem for your boss is they have a boss.
How are they going to explain it to their boss? The problem is that this culture percolates through everything, so everybody's trapped in it, even if they believe in what I'm saying. I've talked to a lot of people about this, in a lot of different organizations. The book has brought me before all kinds of audiences that I never would have encountered otherwise. And a lot of the time I meet people who are gatekeepers, people who decide what should go forward and what should not.
And they love the principle of the argument. They're like, this is so inspiring, I'd love to change things around here. But, and there's always this caveat, I answer to this, that, and the other thing, and this is going to be really hard to explain to those people, so I'm not really sure what I can do. And then basically it doesn't lead to any changes. So I'm hoping the book will actually empower people, through this kind of discussion becoming more mainstream, hopefully. I mean, that's idealistic.
People being able to make this argument without seeming ridiculous. This is serious. This is principled. It's based on research, and there's a reason that justifies doing things this way. But isn't this sort of like the creative destruction of capitalism, that it just feeds on itself? You have an interesting idea at work, you can't pursue it, and that becomes a startup.
So you find the next stepping stone, you find something interesting for you, but your workplace won't necessarily allow you to explore it. So you quit your job, you find a couple of like-minded people, and now you get to explore it. Yeah. I mean, I'm only talking about improving things, really. It's not like nothing ever works at all. Obviously we see progress: rock and roll was invented, cool stuff happens, people create startups that change the world. It's not like nothing can happen at all.
At least we're not in that kind of horrible entrenched situation. There are dictatorships and things where it effectively is like that, but we at least have a certain amount of flexibility in our society.
But I'm basically saying it could work a lot better. It shouldn't be the case that in an organization or a business, a corporation where there is a part that is supposed to be facilitating innovation, the only way to actually do something innovative is to quit and start a startup company.
I don't feel bad that that happened, because the startup ended up being really cool. But something is wrong. What is going on inside that organization? They could have captured that idea, but they let it go. And this is, I think, pervasive all over the place because of our objective culture. Things could be a lot better, and life could be a lot better, because dropping out of your job, having to start something new, and risking your career and your finances
kind of sucks, and it really shouldn't be necessary, because this is actually a principled thing to do, especially if you're part of the innovative component. I just want to acknowledge there are parts of companies that shouldn't necessarily be about innovation. Not every single thing being done is about innovation. Some things need to be conservative; some things need to be conserved.
But there are the parts of companies explicitly, supposedly, assigned to innovate, and they actually work in this objective way, which is just completely backwards.
And so do huge agencies like the National Science Foundation, which is, I think, very objectively run. If you look at the criteria for funding, it's very objective: you have to propose to a committee by telling them what your objectives are, assessing whether you can get to those objectives, and being very clear on what the assessment will be.
And it's completely objective. And then it's consensus-driven by a committee, which is another thing I've talked about, and which is not great for doing this kind of thing. So there, things still might happen. I myself have pursued projects that were rejected by those committees, because I basically was pissed off and thought, screw them, I'm going to just do it.
But that's not very ideal. It should work better than that. It shouldn't be that you always have to be a rebel in order to do something principled. You had two sort of insights on decision-making that you mentioned before we got on here. The first was that one rule of thumb you use for deciding on projects is to try to avoid ideas that make too much sense. Can you double-click on that? Yeah, this is kind of fun, because when I say it, people look at me like I'm insane.
Somebody says, look at this great idea. It's usually something in science. "Isn't this exciting? Maybe we should even follow up on this."
And I'm just like, well, I do think it's a good idea, but it makes too much sense, so I really don't find it that exciting. That's a little personal heuristic, which is related to what we're discussing. And what it is for me, the rule of thumb, is that I recognize, because of all this, that the stepping stones that lead somewhere really revolutionary are going to be counterintuitive.
And if you think about it, that makes a lot of sense, because if they were just intuitive, obvious, then we would have crossed them already. Getting to things that are really important or interesting will mean crossing counterintuitive stepping stones. And so that means they won't make sense at first; that's basically the definition of counterintuitive. It's true that in hindsight they'll make sense, because at some point you look back and say, oh, I see why this led to that.
But looking forward, they will be strange and counterintuitive and basically seem like they don't make sense. And think about the theme we're talking about here, which is, in effect, that to achieve our highest goals, we should be willing to abandon them. That doesn't sound like it makes sense at first, which is why it's a good stepping stone, because they're always going to be like that. They don't make sense at first.
But in hindsight, you can look at what we've discussed, and if you're starting to be convinced, it does make sense, but only in hindsight. At first, when you hear a proposal like that, you're like, what the heck is that? That's crazy. And those are the ones I'd really rather pursue, because they're the ones that are going to lead to something revolutionary. If you give me something that makes a lot of sense, I basically think, well, someone's going to follow it. It's guaranteed. It makes sense.
But it's not exciting enough for me, because I know it's going to get followed; someone else can do it. The ones that aren't going to get done are the ones that don't make sense, so I'd rather think of something like that, and that would be more exciting. That goes with your sort of second heuristic, which is trying to imagine the people you know, and if they can predict what you would do next, then you probably shouldn't do that, because if it's predictable, somebody else can and probably will do it. Yeah, so that's another heuristic.
At least I feel much more pleased if I do something next that isn't what you would have expected me to do. That means trying to think upfront: what is the natural next thing that you would think I would do?
And then not doing it, trying to run away from it. And this is related to the concept of novelty. Novelty was a large concept in the book, because novelty is a very large component of interestingness. It's not all of it, but just about anything that's interesting is novel. What do you mean when you say novel? Yeah, novel means it hasn't been tried before. It doesn't look like things that have come before. It's different in some very fundamental way from other things that have existed in the past.
And what's novel changes over time, of course. Think about something like the idea of a little room on top of wheels that can move you from A to B. A hundred or 120 years ago, that would be a super interesting thing if you actually could get something like that. That would be dinner conversation. Can you imagine? How will this little thing on wheels that can go anywhere affect society?
Now, today, that's not novel at all, and that's why it's not interesting. This is not good dinner conversation, because it's been done, and it's been done for 100 years. So it changes. And generally speaking, the next stepping stones, the ones that are going to be interesting, are also going to be novel. So that's generally a rule of thumb. And novelty means running away from where you've been in the past. A lot of objective problem-solving is the opposite; it's about converging along the same path.
Convergence is basically what it means to optimize. You converge toward that global optimum if you're doing well. But it means you're basically sticking to the same path; you're not trying to get away from it. Novelty is the exact opposite: I'm getting away from where I've been in the past.
And that's uncomfortable, of course, by nature, and risky. Because if I've been successful on some path I've taken in the past, and now I'm trying to take a different path, of course that's going to be super uncomfortable. I'm leaving behind everything that makes me feel comfortable and safe.
But then again, it's a heuristic for innovation, because it will be through the novel and the interesting that innovation actually happens. So I think, well, if you can kind of predict what I would do next by looking at a certain trajectory that I've taken, then, the way I think about it, someone will do it. You don't need to be me anymore.
This path has been laid, and I've sort of pushed myself out of relevance, because the path is now clear. I think this is why luminaries kind of get stuck in a rut as they get older. Part of it, I think, is that it's really scary to leave that path, because that's what you're known for, that's what everybody respects you for, and you're comfortable in it. But the problem is it becomes predictable. And so you look less innovative the farther down the path you go and the less you deviate.
And so it takes a lot of energy to actually intentionally deviate. But I think it's a good heuristic if you really want to continue to be innovative. Yeah, that's a good place to end this conversation, Ken. I want to thank you for your time today. It was a fruitful discussion. Thank you so much. Yeah, it was really great to be here. Thanks for listening and learning with us. For a complete list of episodes, show notes, transcripts, and more, go to fs.blog.com.
or just Google The Knowledge Project. Until next time.