Before we dive into this conversation, just a warning that this episode contains mentions of suicide. So please do take care. When it comes to questions of risk, human brains do this black and white thinking. I don't think when you're diagnosed with cancer, it's the time to have a statistics lesson. And I also don't think that we're getting informed consent from people if you just shout numbers at them. But what I think that you could do is you could have somebody who turns it around the other way, right? So who sits down with you and says to you, okay,
What is important to you in your life? What are the things that you value? And what percentage chance of it working would you be willing to tolerate? It's noticing how you feel and interpreting it mathematically rather than trying to put the numbers on how you feel. Hi, I'm Reid Hoffman. And I'm Aria Finger. We want to know how, together, we can use technology like AI to help us shape the best possible future.
With support from Stripe, we ask technologists, ambitious builders, and deep thinkers to help us sketch out the brightest version of the future, and we learn what it'll take to get there. This is possible. ♪
Just over a decade ago, a charismatic mathematician took the stage at TEDx Binghamton University and delivered a bold thesis. Math can help you find the love of your life. Her talk, fit with wisecracks and diagrams, wasn't just a platform for relationship advice. It was a portrait of how math is deployed in our everyday lives, of how numbers, theories, data, and algorithms can allow us to
to better understand ourselves and each other and build toward the best possible future. Millions of views and hundreds of lectures and broadcasts later, Hannah Fry is a leader in a global movement to make math cool.
She was a professor of the mathematics of cities at University College London before joining the mathematics faculty at Cambridge University. Hannah is also a best-selling author and host of many shows, including BBC's The Secret Genius of Modern Life and Uncharted, the Bloomberg original The Future with Hannah Fry, and DeepMind: The Podcast.
Put simply, Hannah finds and shares the many ways that numbers, algorithms, and data are at play in daily life, from dating apps to traffic jams. Of course, math also forms the foundation of AI, providing the tools that enable machines to operate intelligently. Today, Hannah joins us to talk about how math moves us and shapes us, and how we can use technology like AI to elevate humanity. Here's our conversation with Hannah Fry.
Hannah, it is excellent to be here in LinkedIn London with you. Thank you for joining us. Thank you for having me. A delight to be here. So apparently you've said that if you weren't going to be a mathematician, you'd be a hairdresser.
One, why? And two, if you were a hairdresser, how would math play into hairdressing? Okay, I remember saying this, and it is absolutely true as well. When I was 16, it was what I wanted to do. And my mum was like, okay, look, once you finish your GCSEs, just do your A-levels, and then we'll see where we are. And then I finished my A-levels, and she was like, okay, just go and do a quick degree in Maths and Theoretical Physics. And it sort of went on and on and on, and then eventually the hairdressing has fallen by the wayside.
But it was like a genuine ambition of mine. And I can still like tap into that feeling of what I wanted, because I think that there is something, first of all, quite interesting about kind of constructing a three dimensional shape. You're on a sphere. There's sort of some interesting geometrical properties there.
But also, I think what it is, is it's about doing something and then immediately seeing the value in it. And I get the same kind of buzz, I suppose, from doing Instagram videos or YouTube videos. You know, you do something and then you immediately see like the effects of it. It's not like in the academic world. I mean, things are slow burn, right? Slow burn. So, yeah, I think that was what it was about. I was just going to say, it's like when you go to the gym and you have a personal trainer and you want them to be fit. Yeah.
You have great hair. Thank you so much. So, you know what I mean? It's like not a bad backup profession is all I'm saying. Thank you so much, yes. But you did end up, you call yourself an accidental broadcaster, which I love. And so are there things you can't wait to focus on? Like what's percolating in your head right now? I've been trying to get some stuff about artificial intelligence done properly out there for so long. I mean, it's been really difficult to persuade people that this is something
that's worth time and attention that can be told in an entertaining way. And I think that this switch has just happened in the last 18 months or so where people are like, okay, no, we will really address this. So I'm really looking forward to doing some of that. I also, I don't know, I'm in this quite philosophical mood at the moment where I'm thinking a lot about what knowledge means and the edges of knowledge and how we can deal with the fact that there are things that we will never know. You know, I think that most people don't see how math
is so relevant to their lives. One of the things I think is really excellent about your work is to say, how do I make it tangible, practical? When someone starts from a, like, I don't understand math or I'm a little alienated, what are the first things that you try to light up their world and their universe? Because, like, for example, on the AI point that you were making, part of what I was doing with Super Agency is to try to move people from AI fearful, skeptical, to AI curious, right?
And I see a similar arc, which is you're trying to make people math curious. Yeah. Right. And so what are some of the things that you do to get people to suddenly start becoming math curious? Okay. So I think the very first step is that it's not your job to change people's mind about how they feel about the subject. You know, I think a lot of people are kind of really
traumatized, genuinely traumatized by the math that they encounter in schools. And I think it really divides people. Some people, like us, I imagine, you know, end up really drawn to the subject. And I think a lot of people really turn away. And you're never going to switch that. So I think it's about acknowledging where people are and using that as your baseline. But then I think it's about really doing it by stealth. That's a...
It's the vitamin in the Twinkie. Exactly, exactly. It's the vitamin in the Twinkie. Exactly right. That's if you like Twinkies. Fair. Maybe vitamin on a chocolate chip cookie. Done. Just vitamins anywhere. Just like force them in. Because I think that when you have had the luxury of really seeing the world through a mathematical lens, I think that you fully understand that there is...
Almost everything can be viewed through that perspective. I think that it has incredible insights that it can offer you on literally anything. The explosion in sort of productivity and everything that the world has seen in the last 15 years has been based on that, right? It's like the era of big data was the kind of mathification of industry, right?
When you understand that, then I think that you can start elsewhere, right? You can pick basically any topic you like and then show the insights. I often make programs where I never even say the word maths, like I never even mention it.
But it's just about providing these curious, counterintuitive, surprising ways to turn something that people feel like they already know, turn it on its head and show it to them in a completely different way. I just think math is such a way to explain the world. And to your point, I think it's very different by country. At least in the U.S., people are math people and not math people. And that's so ingrained in our head. There's no growth mindset. It's just, well, I'm bad at math. I can't understand it. And you actually had a colleague say to you,
people are scared of mathematicians, let's keep it that way. Like, why do you think people feel that way? And like, how can we not leave math to the pros? I mean, I think that you can weaponize it, actually. I think that when you have knowledge, I think it gives you power, basically.
If you are able to create mathematical models that can impact the way that things are run and you understand them, actually it's very easy to just like draw a wall around you and be like, sorry, you're not, you know, you can't come in. But the thing is at the same time, you know, we are quite literally designing our collective future, particularly now.
Everybody deserves to have a say in that, you know, and I don't think that people should be excluded from it. I'm not saying for a second that everyone's going to be mathematicians. Of course they're not. I would like to just make...
Move the dial a little bit on people understanding that there is this connection between a subject that they think is like numbers and textbooks and whatever. And actually this really living, breathing language that is allowing us to effect change out in the real world. I think I'd like people to understand that this is also maths, right? This is also maths.
and be invited in on the conversation. I know you've spoken a lot about ethics and doing things safely. I think those kinds of conversations, yeah, you should be drawing people into those. In the AI revolution, which I prefer to call the cognitive industrial revolution, what are the ways that people should think about engaging with AI that's mathematically informed? So I think a lot of it is about critical thought. Historically,
Computers have been kind of deterministic machines, right? It's like you put the same sum into a calculator twice, you get the same answer out. And I think that actually we are moving away from that and more towards this much more sort of stochastic, you know, probabilistic space.
And I think that people haven't yet adjusted their mindset to what that actually means. You see this with, you know, hallucinations from large language models, of course. But I think that you also really see it in the ways that people are applying mathematical models or, you know, artificial intelligence models to determine particular outcomes. Right. So, for example...
Let's say that I somehow or other came up with an amazing new algorithm that could find your perfect partner in the entire world, right? But it did it with an 85% accuracy. I think that people are not very good at understanding what that means and what the wider implications are of that. And there are countless examples of that nature, I mean, more and more and more and more, where these things fall short of perfection and will always fall short of perfection, which is fine.
But that gap between sort of perfect and what you end up with is where all kinds of potential problems can arise. You're highlighting...
A particular thing. Aria has heard this from me before: if I had a mathematical wish for humanity, it would be understanding that everything is in probabilities. That it's almost never 0% and almost never 100%. And how you're configuring your navigation path depends on the intersecting probabilities. And yet people tend to collapse it into 100 or 0, which frequently, like if you have 85%, you skate through it 85% of the time, fine.
but then you're ambushed by the 15 when you shouldn't be. I mean, I tend to think of almost everything as sort of spectrum. So I completely agree with you about between zero and 100. But I also think about when you're having arguments with people, it's like right and wrong also exists along this spectrum. And I think the big kind of like societal debates that we've had where people feel so completely polarised
There's a mathematical trick where you deliberately sort of push something to an extreme. So you think of, okay, what would the case be if this particular variable went to infinity or what would it be if this was zero? And you kind of use that to give you information. And I think that the Jesuits have a similar philosophy, right? So when there's an argument about something, you sort of imagine an extreme version of the same problem to help you understand it a little bit better.
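The push-to-extremes trick Hannah describes has a direct analogue when sanity-checking a formula or model: look at what it predicts as a variable goes to zero or grows very large, and see whether that matches intuition. A minimal sketch (the queueing formula here is the standard M/M/1 waiting-time result, chosen purely for illustration; it is not from the conversation):

```python
# Sanity-checking a model by pushing a variable to its extremes.
# Example model: mean time in an M/M/1 queue with arrival rate `lam`
# and service rate `mu`, given by wait = 1 / (mu - lam) for lam < mu.

def expected_wait(lam: float, mu: float) -> float:
    if lam >= mu:
        return float("inf")  # arrivals outpace service: queue grows without bound
    return 1.0 / (mu - lam)

# Extreme 1: arrivals go to zero -> wait should shrink to the bare service time 1/mu.
print(expected_wait(0.0, 2.0))    # 0.5
# Extreme 2: arrivals approach the service rate -> wait should blow up.
print(expected_wait(1.999, 2.0))  # ~1000
```

Both extremes behave the way intuition says they should, which is exactly the information the extreme cases hand you before you trust the model anywhere in between.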
That thing of like not seeing the world in black and white. It's not 100% or zero. It's not yes or no. It's not true or false. Everything exists along a spectrum. I think that's a really helpful way to see the world. I mean, that to me, besides like weeping alongside of you in your documentary about cancer, that actually was the most illuminating moment. I mean, I was talking about it with Reid this morning. So often with health and medicine, I mean, especially as women, it's like, well, if you do that, you're 10 times more likely to get cancer. Mm-hmm.
Well, the chance before was .001, and now it's .01. Your mind can't comprehend it, especially for people who don't sort of have a facility with math. And so, you know, in that documentary, you were talking about, you know, if 100 women were diagnosed and 80 of them would have been okay without going through chemotherapy, and what are the odds? Like, what was the response that you got to that? How do you think about that? To me, especially, something so important as your life is
The math becomes very important, but then also not important because you say, of course, I'm going to do this. Like, how did you think about that? Yeah. So there was one woman in the documentary in particular, this woman, Anne. I mean, that's a conversation that will stay with me forever, right? So, for people who haven't seen it: she was in her late 60s and she'd just been diagnosed with breast cancer and she'd had the lump removed, but the doctor was working out what future treatments she should be given. And they said, if we don't touch you again,
you have an 84% chance of living another 10 years. But if we give you everything we've got, so chemotherapy, radiotherapy, hormone therapy, everything, we can increase it to 88%.
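The 84% and 88% figures Hannah quotes can be turned into a back-of-the-envelope calculation (a sketch using only the numbers from the conversation; the framing in terms of absolute risk difference and number needed to treat is standard in epidemiology):

```python
# The 84% vs. 88% ten-year survival figures quoted above.
survival_without = 0.84  # no further treatment
survival_with = 0.88     # chemo + radiotherapy + hormone therapy

absolute_benefit = survival_with - survival_without  # 4 percentage points
number_needed_to_treat = 1 / absolute_benefit        # ~25

print(f"Extra survivors per 100 treated: {absolute_benefit * 100:.0f}")
print(f"Women treated per changed outcome: {number_needed_to_treat:.0f}")
```

In other words, of every 100 women who undergo the full treatment, roughly 84 would have survived anyway and about 12 do not survive despite it; the treatment changes the outcome for around 4. That is the gap between population-level usefulness and individual meaning that the conversation keeps returning to.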
I thought that was a really difficult decision, right? Like, do you go through it? But I spoke to her outside and I was like, what are you going to do? And she was like, I'll have to have the chemotherapy because otherwise I'll die. I was absolutely astonished that the numbers were not getting through, right? The numbers were not communicating the message.
And then I really thought about it, you know, I went back through to the doctor and was like, she did not understand. Like, you go to the doctor, they tell you the treatment, you take the treatment, the thing goes away, right? It's like exactly as you said, you know, 100%, 100%, 100%, 100%. And the doctor was like, I think if she did understand, she probably wouldn't go through the treatment. And if that happens over and over and over again, more people will die, which is also completely true.
What that started in me, and what has continued into the book I'm writing at the moment about doubt and uncertainty, is that I'm not sure I even know what those numbers mean. I'm not sure I understand the difference between 88% and 84%. Like, is the human brain really capable of that? I don't think it is. And so then it's like, well, now you've got these numbers that are applicable and very useful at the population level, and
kind of meaningless when it comes down to you as an individual. And so then it's like, well, what does probability actually mean? It really comes down to how you feel about risk. Ultimately, I think you're quantifying how you feel about risk. And that makes sense at the population level, because we can do it that way. But when it comes down to you as an individual, I think things get very, very hazy. Well, there's a couple of things there.
One of the things that happens frequently with math and most people's psychology is that they get false precision. It's like, really? 84? 88? Right? Like, how do you derive that? That's, you know, your area of expertise. The next thing is, what's the actual fitness function? What's the actual game you're playing? Because actually, in fact, in this case, I actually think the answer is fairly obvious if 84 and 88 are correct because it's probability of maximum number of quality-of-life days, right?
Right. And chemotherapy is brutal. So you kind of go, OK, actually, in fact, the higher probability of quality-of-life days is not doing it. And so frequently, I think one of the things that people mistake is: what's the game? You mentioned Gödel earlier and, you know, kind of incompleteness there. But one of the most important things is figuring out what the game is before you even get into the thing itself. And that's another part of this.
And then, you know, the last part of it is to kind of think about, okay, exactly as you're mentioning, how does this apply to me? Am I a person who is willing to take some risk? And by the way, this is part of the reason I think most people get to zero and a hundred: most people want to imagine they're not taking risk. They don't realize that when they get in a taxi cab and go somewhere, there's a risk. You walk across the street, there is a risk. And they go, oh, it's just zero, zero, zero, zero, zero. Because otherwise the brain goes, I'm immortal. Yes.
And so part of the thing that I think is excellent is to get a little bit more fluid in the application of probabilities. One of the reasons I love the mission you're on. So have you thought about the heuristics of, like, if I were going to say something to one of my godchildren about...
Hey, think about math this way in your life. Yeah. What would be like a heuristic, a principle, a, okay, I need to remember this as I apply to this. Yeah. Okay. So actually, can I say something about the game bit first? Because I thought that was such an interesting point because I totally agree with you. It's something I think about a lot.
You have to decide what you're actually optimizing for. And I think so often people don't. And one of the things I've been thinking about a lot recently is about prisons. Right. And if you kind of look at the data, right, you know, why would you want to send somebody to prison when somebody commits a crime? You need some way to rehabilitate them. Right. You also need to kind of actually take them out of the system. You need some deterrent effect, right?
and you need some sort of sense of retribution for the crime. But when you actually look at the data, like on every single one of those, apart from maybe a feeling of retribution, prison is like the worst possible answer. It's like, if you want to reduce crime, is that the game that you're playing? I do think that actually people kind of go through a system without ever stopping to think about what the question is, what the game is exactly as you describe.
And then in terms of how do you think about maths? How do you kind of translate it? So going back to that example about cancer, one of the things that I found really noticeable is when you're diagnosed with cancer, you have somebody who will take you into a room away from the doctor and they...
will sit with you as long as you want and they will go through the procedure, they will talk, answer every question that you have. I mean, they're essentially a translator between you and the medical profession, right? And I do sort of wonder that when it comes to questions of risk, because human brains do this black and white thinking, I don't think when you're diagnosed with cancer it's the time to have a statistics lesson, right?
If it didn't work before, it's not going to work then. And I also don't think that we're getting informed consent from people if you just shout numbers at them. But what I think that you could do is you could have somebody who turns it around the other way, right? So who sits down with you and says to you, okay,
What is important to you in your life? What are the things that you value? And how can we design a treatment system that does that as well as possible for you? I spoke once to an intensive care nurse, because of course, when you're in intensive care, there are all sorts of situations where these probabilities arise, right? Like you can, you know, resuscitate somebody, but the chance of this repercussion is really high or whatever it might be. This procedure has these consequences.
And she said that what she started doing with families, rather than saying, this has this percent, she instead says to them, okay, this is what we're thinking. What percentage chance of it working would you be willing to tolerate? So it's the other way around, right? It's not you're taking a number and trying to attach a feeling to it. It's the opposite. You're taking a feeling and trying to attach a number to it.
And that, I think, is a much, much better way to try and do things. So in terms of intuitively thinking about mathematics, it's switching it around. It's like noticing how you feel and interpreting it mathematically rather than trying to put the numbers on how you feel. So actually with my kids, for instance, you know, if we walk into town, I'll ask them to come up with a route that minimizes the number of crossings of roads that we do. Like seeing the world
in a way where you are kind of systematizing and critically thinking about things and noticing that you're doing it rather than necessarily just trying to put numbers on everything. Your point about pre-deciding, especially before the emotion takes over, I think is so critical. Most parents go through that during childbirth.
You're faced with all these decisions. Should you get this test? Should you do this? And, you know, my husband and I would say, well, will we make a different decision based on the information? If we learn there's a 60% chance of this or a 20% chance, will it change our mind? Well, then why do we need to know? One of the things you've talked a lot about is that perhaps because, again, of this sort of innumeracy or people's non-facility with math, that they tend to trust the outputs
of math blindly or perhaps of LLMs or computers. And so I'm obsessed with criminal justice reform and the prison system. One of the things I'm excited about with AI is that, you know, it's hard to wave a wand and fix racism, but maybe we could wave a wand and make an LLM not racist. You know, it's hard to sort of fix system-wide things, but if we can, you know, sort of change a code, if we could reinforce, if we could do all these things. And you've talked about how...
You know, with AI, there could be increased bias, but there's also ways to reduce bias with AI. So how do you think about that? Yeah, absolutely. There's a really interesting paper that came out a couple of years ago, and it was called Women Also Snowboard. It was looking at image recognition where you can't tell the gender of people.
But what was really interesting was that it demonstrated how, you know, these are not like stable equilibria, right? So if you have a bias in the initial data set (which is where the original bias came from in the labelling, where people were seeing pictures of snowboarders and assuming they were men rather than women), it can be exacerbated once it goes through the algorithms. That's something that, exactly as you say, you have to be extremely cautious about and make sure you're putting in the correct safety procedures to minimise and mitigate against that.
For me, the crucial point about this is sort of the one that you made, right? That you cannot wave a wand and fix sort of systemic issues or societal problems.
And so then I think that the question changes. This is something that doesn't have a finish line, right? It's not like, oh, well done. You did fairness, right? Like, oh, unbiased. Congratulations, right? I was writing a book a few years ago and I once spent like a week trying to research and look for any system in the world that has ever been perfectly fair. Doesn't exist, right? There's nothing. You know, forget about algorithms completely for a second.
And so then I think if you say this is not something that has a finish line and instead you accept that there will always be bias in your system and therefore commit to continually hunting for it and repairing it, I think that's the way that you have to approach this. Your comment just made me think about there are...
ways to make perfectly fair systems, but they will be very unjust systems. Yeah. Because you could go 0% all the time, 100% all the time, or 50-50 random, and it'll be fair. It'll apply to everybody. Yeah, you're right. Oh my God, would it be an unjust system? You're absolutely right. So how is this AI revolution...
made you think about education? Because one of the things you do at Cambridge, obviously, is trying to make math education widespread, impactful, et cetera. But let's generalize a little bit to education generally. And obviously, there's been a lot of turmoil within the academy when it comes to ChatGPT and everything else. What are your thoughts on it? And how should
universities be thinking about AI generally? Yeah. I mean, obviously it's been a massive disruption. So thanks, tech guys. Yes, we try. But I am really optimistic. I feel really optimistic about it. And I think part of my optimism actually comes back to the point that you made about the game earlier. Because the thing is, the real disruption has happened in the way that we assess students' performance.
But if you kind of rewind to what the assessment is for in the first place, the real question that we want to answer is, how do we know whether our students are getting a good education, whether they're understanding the concepts and are ready for the next step that they go on to? And you can't ever really answer that question. You know, you have to use a metric in order to get there. And the metric, you know, always falls short of precisely the thing. You can't use numbers to like perfectly capture these things.
I'm fine with the fact that written essays have to be considered in a different way. I'm fine with the fact that we have to move more towards oral examinations, change the types of questions that we're asking. I'm genuinely fine with that. I'm okay with it. But where I'm really excited about it is that we've known for a really long time that different people learn in different ways. But it's really hard to do that when you're one person standing at the front of a room full of people.
And I think just even from my own experience, using AI for research is just like on steroids, right? Like it's incredible how much faster you can accumulate and assimilate and like critically think about knowledge. I think it's giving them extra tools, giving them extra learning, right? Like I think they're learning faster and better than they were before. But I also think that the AI tutors that I'm looking forward to...
which will be adapted for individual people's learning styles and can really clearly identify gaps in their knowledge and then, you know, appropriately construct questions to reinforce those. I really think there's a lot of really good stuff that's going to be coming. I agree. I was thinking about the switch from written to oral examinations. I used to teach, and my students, no offense, were just horrific writers. And so even if they understood the concepts, I would always ding them for their writing, but some of them were amazing orators. All of these things are coming in and we're never going to get it perfect. And maybe AI can help us actually get a more holistic picture of the person we're thinking about. One thing, just very quickly on that, you know, historically, every culture, every society has always really valued wisdom, right? And I sort of think of wisdom as an ability to take in a wide range of variables and come up with a very particular response to whatever that might be. And I think that historically, actually, we've sort of been at the exact opposite end of the spectrum, which is like one solution for everybody. And I think we're moving more towards AI, which is like much more individual. Absolutely. Absolutely.
Welcome to Possible. I'm Reid AI. You're probably wondering why I'm here instead of Reid. Well, you'll understand in a minute. Though I sound like Reid Hoffman, I'm actually a synthetically generated voice brought to life by Eleven Labs with the help of its audio director, Kamil Soldaki. Thanks for joining us, Kamil. Hey, Reid. Thanks for having me. Not to toot my own horn, but I sound a lot like the real Reid Hoffman. How'd you make me sound this realistic?
At Eleven Labs, we go beyond just cloning a voice. We capture everything that actually makes a voice feel human, like intentions, emotion, rhythm, identity. That's what makes voice alive. And that means they can be used in so many different ways. It's true. I can narrate podcasts, audiobooks, and entire product launches simultaneously.
We built Eleven Labs to enable authentic storytelling. No more robotic monologues. You know what I mean? Now, I live in the digital world, but you do business all over the globe. That sounds like it could get complicated.
It's been simple for us because we use Stripe. They have made global monetization super easy. And their built-in AI assistant helped us iterate fast on our pricing until we found the right product market fit. So frictionless voice creation and frictionless payments. That's actually a pretty good tagline.
Stripe took care of the monetization infrastructure so smoothly, it means we could just keep growing without worrying whether our payments platform could keep up. Unbounded creativity with unblocked business. Are you just pitching taglines now? I'm thinking of getting into marketing.
Eleven Labs is redefining what is possible in audio. And Stripe allows us to generate revenue while staying true to our mission. Stripe ensures we can deliver that innovation to the world.
I rewatched your TED Talk last night, which for those of you who haven't seen it, it was all about using math to find love. And so I'd ask you, you know, in the age of AI, I mean, people are having AI friends, AI girlfriends, they're using AI on dating apps to, you know, give them the best poem to send to someone, whatever it might be. How would you update your TED Talk in the age of AI? Oh my gosh, it's such a big question.
So I did that TED Talk over a decade ago. I should also tell you that when I did it, I was just engaged, or like just about to be married. I was very excited about the world. And I gave these bits of advice about how you can use math to optimize your own dating life, and I'm now divorced, everybody. OK, so basically don't listen to anything I say. OK.
No, understand, it's probabilities. You're absolutely right. But in terms of updating it for the age of AI, I think that something that's really interesting has happened actually with dating apps. But I think it does again come back to what is the game, right? Because the thing is, the hard bit about dating is like finding somebody who you can integrate your life with in an effective way where you have these shared goals,
where you support each other, you know, emotionally, in terms of your career, all of that stuff. Finding somebody you can do that with, and the process of doing it, right? Communication ultimately is like the thing that's difficult about relationships. The thing that is not difficult about relationships is which of these 2D images of people do you think are better than others, right? That is not...
what it is, yeah? And yet somehow or other, like that has become what dating is about. And of course, it's because it's the bit where you can find profit. This is one of those situations where the technology has started changing human behaviour, you know? Just the way that we have relationships has dramatically changed because of the impact of these apps.
So in terms of what I would do with AI, I don't know, maybe it would be around what the question would be. It would probably be something about improving communication between individuals. I mean, my favorite thing about the TED Talk was about increasing your odds, sort of to your point, understanding the game. AI can just...
help people understand that communication relationship. You can talk to your AI to like figure out what you're supposed to say. You can get advice. Like there are positive things that it can help you to do. But to your point, maybe looking at two static images and deciding which is most attractive is not going to get you to your future husband or wife. Yeah. I mean, that whole point about like you think that you're appealing to the masses. You're not. You're not appealing to the masses. You should pick whatever makes you unique and just go with that. Yeah.
So before we get to your new BBC series, which I have several questions about, I'm curious if you have a heuristic for how people can learn a good mapping between an emotional state and, like, a number or percentage. Because I agree with you, and it also is like, what's your aim? What's the game? That plays into the emotional state. But it's like, well, what percentage would be acceptable to you as a way of doing it?
But of course, that requires practice and a reasonable mapping function. Have you had any insights about how people can learn that mapping function better? I think practice is the key. You know, the superforecaster stuff, where you look at different future events, put down a probability, and then check back and recalibrate the percentages you're giving based on what actually happens.
And so I think it's that. I think it's you have to continually reassess it. The other thing, actually, the precise situation where you're like, what's the number I would be happy with and what's the number that it is? I don't think those come up that often. I think the one that I use more often is regret minimization.
So where I imagine being in the future and looking back and then choose the path which I would regret the least. That's the one I use almost all the time. Again, depending on the game, a very useful tool. So let me move to your new BBC series.
Start with a confession, which is I find it very difficult to watch Black Mirror. Yeah. Because when I watch Black Mirror, I'm like, oh, I know how to fix that. Like, I know how to not make that be the dystopia. Like, it's not inevitable. Sure, if you're dumb and build it this way and your society somehow orients around making it dystopic, it can be terrible. So when your series was described
as, like, real-life Black Mirror, I had a bit of an aversion response, which surprised me, because I see you as an optimist about how we create the future. So what is this future coming, and what is this series? I agree with you that there are dumb ways to design things. But I think the slight problem is there are dumb people designing stuff, right? And I think that there are some really
astonishing stories of things that have already happened and are continuing to happen. You know, I went to go meet this company in California. So now, new cars have to be built with an internal-facing camera, right? And part of that is technology to check when the driver is falling asleep at the wheel, in order to alert them, right? Which I think is a really positive thing that will reduce deaths on the roads.
But what you can also do, if you are a dumb designer, is you can use an AI to determine the emotional state of the driver. Now, I probably should have put that in air quotes because the actual science behind it is just junk. It's based on the idea that you smile when you're happy and you frown when you're sad, right? Which is like, it's just not true at all.
And yet, because there's nothing stopping people selling this stuff, there are companies who will take those feeds and then, you know, send it out to insurers that you're a grumpy driver, or whatever it might be. And the thinking is that it will change your insurance policy. But maybe also, if there's a particular emotional state that you're in, then your car wouldn't work.
And so there are these stories where people have done things that don't make sense, that need correction. Part of that correction is also sort of shining a bit of a light on them. You know, I think it's about having those conversations in the public space.
But I also agree with you that I am ultimately an optimist. A hundred percent. And by the way, I agree with everything you've said, and outside commentary and revelation is an important part of this. But one of the things that I'm curious about is how you handle it in the story, because if you go, well, it's a probability curve. And I'll give you an example of something that I encountered last year, which is
There's a lawsuit against Character AI over a child, a tragedy obviously, who was having interactions with a chatbot and committed suicide. And so there's a lawsuit. Now, as far as I can tell, you know, from a perusal of the chat transcript, the chatbot wasn't doing any of the obvious things that would trigger a lawsuit, which is saying you should consider committing suicide, or that kind of thing. It had some irregularities in the conversation, but nothing that was, like, persuasive, manipulative, etc.
But the problem that I saw, and this is part of how we as human beings take narration and make bad judgments because of it.
is if you asked me to guess right now: would chatbots, as they exist and have been deployed, increase or decrease suicides across the entire population? My guess is it'd be a decrease. Decrease, yeah. Right, because there's a sympathetic ear to talk to you at two in the morning. I know the vast majority of these chatbots are trained in a way of going, oh, are you unhappy? Well, let's talk, but let's try to help you get back to a better place. And so when you say, okay, we have this narrative where this bad thing happened,
you also have to be like, where does it fit within the probability curve? And given you and the intelligence around math, I presume that's also part of this evaluation? Yeah. Because it's nuance, right? I totally agree. But then I also think on that example, I would probably take the conversation to a higher level, right? Which is about the ways in which we anthropomorphize these chatbots, right? And consider them to be human, where people are forming kind of genuine emotional connections and genuine bonds almost with these chatbots.
And I think that we should be having public conversations about how much we want our chatbots or whatever it might be to sort of act as though they're human, bearing in mind that humans have this real habit of exactly, as you say, putting a story on something. But this reminds me, I think, of what was happening, you know, in the earlier days of driverless cars when, yes, driverless cars were reducing the total number of deaths on the roads. But equally, there were deaths due to driverless cars going wrong.
For me, the top level question with driving is like, okay, well, humans are going to be in cars, right? So what is it that humans are not very good at? Well, we're not very good at paying attention. We're not very good at acting well under pressure. And we're not very good at being aware of our surroundings, right? Those are the things- And self-assessment. And self-assessment. Absolutely right.
And so I think that when the design of those systems in those earlier days was, okay, well, we'll just get the machine to drive the car and then the human will step in. And it's like, well, hang on a second. Humans aren't very good at being aware of their surroundings. They're not very good at performing under pressure and then they're not very good at paying attention. So you're expecting the human to kind of step in at the moment when their attention is elsewhere, right? It just doesn't work.
And then I think there was a reframing of it, which is like, okay, well, hang on a second. If we start with what the human can't do, pay attention, you know, whatever, keep the human in the driving seat and then have the machine fill in the gaps of the stuff that the human can't do. So the car has the collision-avoidance systems that you've had, right?
Well, that's just kind of a much better pairing, right, of the two together. And I think that, you know, having had that now for a number of years, the technology has developed to where you can go back to the other system. And I sort of wonder what the equivalent is for chatbots. I don't have an answer for this, by the way. But given that humans have this habit of imposing personalities and characters on things that are inanimate objects.
How do we create these systems in order to mitigate against the worst risks that can happen when they do? No, I think that's really important. I also think there are all these second-order effects. Like, I was reading about the decision not to require child car seats on planes. Because actually, if you have a six-month-old, and they are in a car seat on a plane, they are safer. Mostly nothing will happen, but they're, you know, twice as likely not to have a neck injury or whatever. But
But if they required car seats on planes, parents would just drive. And you know what's dangerous? Driving. So it's like, oh, actually, we can't just look narrowly at this one statistic about planes. We have to say, oh, bigger picture, let's look at parental travel and how we keep our children safe. But I think there's a natural tension, sort of what Reid was saying, between
storytelling and statistics or storytelling and facts. Like you see like every day on Twitter, someone saying like, oh, silly me, I thought facts and statistics would change someone's mind.
But we all know it doesn't. And so in your job, in your professorship as sort of trying to explicate the world through math, trying to make it more intelligible, how do you see that disconnect between stories and statistics? Because people's brains, you know, parse them so differently. Oh my gosh, that's such a good question. And it's so hard. So I did this program during COVID. Essentially, the idea was that we would take seven anti-vaxxers,
And for a week, I would be with them and we would have all sorts of conversations. And then over the course of the week, we would see if anybody changed their minds.
Now, the thing about this program: I didn't like the cut that went out. I thought there were some problems with it. And I think the issue was that, exactly as you say, we know that statistics don't change people's minds. You cannot just throw statistics and numbers at people and then be like, oh, of course, obviously I was right. It doesn't work. It's called the deficit model of public communication, which is: if only people knew what we knew, they would see the world in the way that we do.
And what I wanted to do with that program was actually really sit down and understand where these people were coming from. And over the course of the week, I just found it so interesting, and it changed my mind on so many things. For instance, there was one guy who was a nurse. I got on very well with him. I think he'd had some issues when he was younger where he was put on medication against his will.
And so he was like, look, I just believe in informed consent, right? Everybody does. And if somebody comes into the hospital and they have, you know, gangrene in their leg or whatever it might be, and they refuse treatment, we have to accept that. And this is a vaccine where there isn't, like, a societal responsibility, because it doesn't change the probability of transmission, certainly not after a couple of weeks. And so it's my decision to make, and this is the stand that I'm making. You know what? Actually, I agree with him. There was another woman who was pregnant, and
she was a Black woman from Lambeth with a Black husband. And at the time, the vaccination rate among Black Londoners particularly was really low. And so there had been targeted campaigns to try and increase their participation.
And she was like, okay, well, for starters, I'm pregnant. I'm not going to take any unnecessary risk, which, you know, in the fullness of time, actually, I also kind of really see. But she was also like, why, all of a sudden, does the government have something they want to give young Black men in Lambeth? And she made this really interesting point about how, when you go to vaccine centres, honestly, it just feels like you're going into a prison.
And it genuinely had not even occurred to me that that could be a triggering experience for somebody. And so I think that, to actually get people to understand numbers, the first bit of it is you have to understand them. It's about listening, and not listening while thinking of the next thing that you want to say. I think you can't really change people's minds, right? People have to change their own minds. And I think the best way you can do that is to approach a conversation with empathy. And I think, to your point, you have to understand
what their point of view is. What's the game? What is their reason for saying no? And then, you know, we've seen so many studies showing that AI and LLMs are actually the best at combating conspiracy theories, or whatever it might be, because they can understand where the person's coming from and then give a nuanced, reasoned argument. And so it's not that people's minds are totally closed. It's that everyone has a different reason. And so when you just attack them with your one-size-fits-all response,
of course they close down, because you didn't know it was about prison, or about this, or whatever. Actually, I think the way the LLMs work is less I'm arguing with you and more I'm...
I am asking you questions. Because exactly, you don't change their mind, they change their mind. Exactly. Yes. Right. The Socratic method. They change their own mind. Do you remember that amazing study where they got people, I think this was in America, they were Republican or Democrat, and they asked them about their feelings about Obamacare, right? And then they asked them how strongly they felt about it.
And then they kind of gave them a new sheet and they were like, oh, do you know how a toilet works? And everyone was like, yes, obviously I know how a toilet works. It's like, okay, how confident are you? And people were like, 10, come on. And then they were like, okay, here's a diagram.
I want you to explain to me how a toilet works and label the parts. And I want you to give me a full rundown of where the water goes and exactly what happens in the whole thing. And then suddenly people are like, okay, actually, maybe not. Yeah, yeah. Okay, fine, fine, fine, maybe lower that confidence. But what they found was that, after doing that, when they asked them how strongly they felt about Obamacare or whatever it might be, just that act of questioning themselves
also made them question themselves more widely on other topics. So you're right. It's asking people questions, but to find out the answer, not to humiliate them or anything like that. Just to find out the answer. I think there's something in that. Going back to the probability thinking in AI: as AI develops, it'll get higher and higher probability of accuracy of information. And I think one of the things that we're going to need to do is have an assessment of probability,
where it will most likely be right, where it might be wrong, when you want to look at it more closely. So for example, there was a research finding that suggested that
AI, GPT by itself, was better than AI plus doctor. But I think the reason was that the doctors hadn't yet learned how to use GPT, right? They didn't know when they should go, oh, right, that's different than what I think, and it's probably right. Or, wait, this is a case where I actually want to do more investigation. And I think that's part of what we're going to need. Like, part of the reason why
you know, I'm excited about your work, is that thinking about when this is likely to be right and when this is likely to be wrong is going to be part of our AI future as it evolves. And so I think part of it is we're going to have to learn heuristics. And the heuristics, as opposed to the I-just-feel, it's like, well...
You know, generally speaking. For example, one of the heuristics I use in AI: if you ask it a general, principled thing, like, what are the seven rules of entrepreneurship? Those will generally be pretty good. Versus one of the things that I did, because I had early access to GPT-4: I said, has Reid Hoffman ever
made a knockoff of Settlers of Catan? And if so, what is it? Because I have... And it said, yes, absolutely. I was like, wow, it discovered that. There's almost very little information on the... And then it said, what Reid has made is a game called Secret Hitler, right? And there is a game called Secret Hitler, which the Cards Against Humanity people made a version of, and it created a Wikipedia-style answer that was completely fictitious. Right.
And that was where, because I have this evolving set of heuristics, you know, principles, it's like, well, when you ask a specific question or ask for a quotation, I'm always a little bit more suspicious. Absolutely. It's the edge cases. Yes. Or specificity. Yes. Because it's trying to be helpful to you. And it fundamentally still doesn't recognize that, in a lot of these cases, error
is extremely expensive. Yes, absolutely. I totally agree. You know, one of the things I think is absolutely amazing about AlphaFold, right, which, you know, you had Demis on your podcast, is the way that it also gives you confidence scores. So it doesn't just tell you, this is the folding; you can also see when the model is in its comfortable zone and when it's struggling. And I think that's really, really helpful, especially when you're talking about situations like doctors assessing data and information.
I think a lot of the training and meta-prompting will actually be like this. For example, you can get GPT-4, even as it is, to be much better at medicine when you do a meta-prompt
with how to do Bayesian reasoning in medicine. Because then all of a sudden it goes, right, I'm going to give you a Bayesian answer. And then all of a sudden it's much, much better, because it's like, well, there's a 64% chance it's this, and a 30% chance it's that. It's giving you a list in order, with probability assessments. If you do the Bayesian metaprompt the right way, which is, like, you know,
what I would do. Yeah, that's so interesting. I mean, I remember seeing there was a paper about chain-of-thought reasoning, right? And the difference that it makes. And now, of course, people have started putting reasoning into their models. It sort of feels like magic, you know what I mean? But you're right, it's about looking again, having confidence in the places where it's confident, and going over it again. I think subconsciously I've been doing a similar heuristic. Yeah. Yeah.
So on many episodes, we have an AI element where we bring AI into the chat. And since I know that you're a Jane Austen fan, and you might hate math jokes, we're going to bring them in anyway. So we asked Pi, the personal intelligence that Reid co-founded with Mustafa Suleyman, to give us some math-themed Jane Austen jokes. Amazing.
It is a truth universally acknowledged that a young woman in possession of a large number of admirers must be in want of a better statistician to calculate
her chances of finding true love. I mean, amazing. It's taken a quite famous line from Jane Austen and just stuck the word statistician in there, which is great. It worked for me. Mr. Darcy's pride and Elizabeth Bennet's prejudice may have been quite irrational, but it was their common interest in geometry that brought them together in a most acute love triangle. Mmm.
Doesn't quite do the dad joke thing. No. Needs to work on the sort of build up and release of tension, you know, in that drag a bit more. Right. All right. We have one more. When Miss Bates heard of Mr. Elton's engagement to Augusta Hawkins, she was quite put out. Why? It's simply not fair, she exclaimed.
For everyone knows that three is a crowd and four is a quadrilateral. So bad. Yeah. Okay. I see that comedians still have a job. Yeah, I think with the specificity there, we're really at the edge, aren't we?
All right, so one out of three. One out of three was good. Decent show. It was decent. Quasi one out of three. Yeah. One was sufficiently across the line. Yeah. And the other two probably would have had tomatoes thrown at the stage. Okay, we have some work to do. We have some work to do. Okay, great. Well, that was our AI element for today. All right, should we do rapid fire?
Is there a movie, song, or book that fills you with optimism for the future? I'm going to go for When We Cease to Understand the World, because it's just, okay, so beautiful. It really captures how exhilarating it is to be at the brink of new knowledge. It really makes me feel great. I've read it, right, about six times. And by the way, since we just did the comedic thing,
Stephen Fry read that book and reached out to Labatut. Oh yeah. Yeah, because he also shares all of our passion. It is a great book. Yeah, it's really amazing. Really amazing. Because they did the Hay Festival together, didn't they? Yes. Yeah, that's right. And that came out of that reach-out. Oh wow, amazing. Amazing. All right. What is a question that you wish people would ask you more often?
Do you want another drink? Done. I don't know. I don't know. What do people answer for this one? Perhaps the funnest one was I asked a friend of mine's kid and the kid looked at me and said, do I want to be here? Amazing. I'm sticking with my first answer then. That's better.
All right. So where do you see progress or momentum outside of your industry that inspires you? Oh, I think the stuff that's happening in biological spaces is really incredible. Physics is lucky that it has equations to discover, right? You know, you can look at all of that data of the galaxies and then you can come up with E equals MC squared. Like, I mean, almost...
impossibly simple, right? And biology doesn't have that luxury. But I think we're now almost at the situation where you can take the unimaginable complexities of biology and extract a sort of working model for how it all fits together. And I think that is really, really exciting. Awesome. All right. Can you leave us with a final thought
on what you think is possible to achieve in the next 15 years if everything breaks humanity's way? And what's the first step to get there? Let's go crazy optimism for a minute here. Please. Because I think that actually, you know, the history of humanity has always been a story about scarcity, right? It's been about resources being divided. And I do think that actually there is a way forward
for science to make a gigantic difference for everybody. There are so many different, you know, areas where if science makes a breakthrough, like desalination or nuclear energy or whatever it might be, right? There's a number of different areas, good battery design, right? These kind of things where we just need like a little bit of a breakthrough. And I think everything, everything, everything can potentially change. Thank you so much for being here. Thank you. That was so fun. Thank you. It was great.
Possible is produced by Wonder Media Network. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young. Possible is produced by Katie Sanders, Edie Allard, Sarah Schleed, Vanessa Handy, Aaliyah Yates, Paloma Moreno-Jimenez, and Malia Agudelo. Jenny Kaplan is our executive producer and editor.
Special thanks to Surya Yalamanchili, Sayida Sepieva, Thanasi Dilos, Ian Ellis, Greg Beato, Parth Patil, and Ben Rellis. And a big thanks to Taylor Forster-Cornes, George Kingston, Irenia Alvarez, Jenna Antonik, Ayi Kano, Natasha Maines, Joshua Balogun, KJ Arthur, and Sophie Claire Armitage. ♪♪