
#87 Hannah Fry: The Role of Algorithms

2020/7/7

The Knowledge Project with Shane Parrish

People

Hannah Fry

Shane Parrish
Founder and CEO, focused on cybersecurity, investing, and knowledge sharing.
Topics
Hannah Fry: Mathematics plays a vital role in modern society, but its applications are often invisible. Schools should improve the way maths is taught by showing students how it applies to real life, to raise their interest in the subject. The use of algorithms needs to account for their impact on human society, avoiding algorithmic bias and harm to human autonomy. In medicine, algorithms must be applied with caution to avoid overdiagnosis. The question of how open algorithms should be needs further discussion: transparency must be ensured without stifling innovation. In personal relationships, maths can be applied to dating, marriage, and more, helping people make better decisions. Shane Parrish: Explores the role algorithms play in modern society, particularly their application to relationships and decision-making. Raises the potential risks of algorithms, such as bias and effects on human autonomy. Opens a discussion about algorithmic openness and transparency, and explores how to strike a balance between algorithms and human decision-making.


Chapters
Hannah Fry discusses her early interest in mathematics, sparked by her mother's intervention during a summer holiday, which significantly improved her skills and enjoyment of the subject.

Transcript


I think one of the big complaints that you get from school kids is like, well, I'm never going to use this stuff. What's the point of it? It doesn't apply anywhere. And I think really showing just how dramatically important maths is to virtually every aspect of our modern world. I think that that's something that can really make the subject come alive. [Music]

Hello and welcome. I'm Shane Parrish and you're listening to The Knowledge Project, a podcast dedicated to mastering the best of what other people have already figured out. This podcast and our website, fs.blog, help you better understand yourself and the world around you by exploring the methods, ideas, and lessons learned from others.

If you enjoy this podcast, we've created a premium version that brings you even more. You'll get ad-free versions of the show (like you won't hear this), early access to episodes (you would have heard this one last week), transcripts, and so much more. If you want to learn more now, head on over to fs.blog or check out the show notes for a link.

Today I'm talking with the incredible Hannah Fry, a mathematician and the author of Hello World and The Mathematics of Love. We talk math, how schools can promote better engagement, human behavior, how math can help you date, and we explore what it means to be human in the age of algorithms. It's time to listen and learn. [Music]

The IKEA Business Network is now open for small businesses and entrepreneurs. Join for free today to get access to interior design services to help you make the most of your workspace, employee well-being benefits to help you and your people grow, and amazing discounts on travel, insurance, and IKEA purchases, deliveries, and more. Take your small business to the next level when you sign up for the IKEA Business Network for free today by searching IKEA Business Network.

Hannah, I'm so happy to have you on the show. Oh, well, I'm very excited that you invited me. Thanks. Thanks for having me on, Shane. What got you interested in maths? I like how you said maths there, for starters. Thank you for anglicizing it. I appreciate that. I think partly I was born that way.

So, OK, actually what happened was when I was about 11 years old, my mum, I think she just didn't know what to do with us over one summer holiday. So she bought me this maths textbook and she made me sit down every day and do a page of this textbook before I was allowed to go out to the garden to play. And then when I went back to school that September after the summer, I was just so much better at the subject. I just understood everything. I'd seen everything before and I was just really well practiced at it.

And I think that it's inevitable that if you're good at something, you just find it all the more enjoyable. And the more enjoyable you find something, the less it feels like hard work. So I think that's it really. Before then, I mean, I didn't dislike it at all, but I wouldn't have said it was my thing. But that was really a stark change. After that, it became my thing. And then, you know, the more and more I got into it, the more it became almost part of my identity. Yeah.

Well, I mean, math is such a tricky subject for students. They seem to have this very love-hate relationship with it, with most people hating it. What are some of the things that schools could do to promote better engagement with students over math? So it's a tough thing because, on the one hand, if you're ever going to be able to reach the most beautiful elements of the subject, if you're ever really going to be able to properly put it to use,

you can't have your working memory being swamped by remembering all of these rules and these really fundamental basics of the subject. So it's slightly unfortunate that that inevitably means that when you're starting out, when you're in the early stages, it has to be dominated by essentially learning the basics of the subject. And that's difficult, and it's not particularly inspiring, or at least, if it's taught in a very straight fashion, it's not particularly inspiring. So in terms of what schools can do, I mean, I think

for me, I've really seen a difference when teachers really put in the effort to demonstrate just how useful this stuff is. I think one of the big complaints that you get from school kids is like, well, I'm never going to use this stuff. What's the point of it? It doesn't apply anywhere. And I think really showing just how dramatically important maths is to virtually every aspect of our modern world, I think that's something that can really make the subject come alive. - Do we see that sort of manifesting itself now with kids' attitudes, because they're surrounded by algorithms and machines, and does that change how they perceive math? - Well, yeah, but I think that unfortunately the maths is invisible, right? Because, I mean, for this stuff to work, for a mobile phone to work,

it has to be, I mean, the amount of maths involved in getting your mobile phone to work or, you know, me speaking to you now, however many thousand miles apart we are, the amount of maths involved is phenomenal. I mean, it's easily PhD-level stuff. But for this to work effectively, it has to be invisible. It has to be hidden completely behind the scenes.

You as the user can't really be aware that any of it is there. So even though, as you say, algorithms are dominating more and more of the way that we're communicating with each other, how we're accessing information, what we're watching, who we're dating, everything, even so, I think the maths is so behind the scenes that I don't think it's necessarily clear that it's driving so much of the change.

As you were saying that, I was sort of thinking of a Formula One car. You know, the driver gets all the attention, but there's this huge team of engineers behind them whose names we don't know. We don't know who they are or what they do. That's a perfect analogy. It's a perfect analogy. I always think, so I'm a big fan, actually, of Formula One. And the reason why I like it, if I'm honest with you, is because I think of it as a giant maths competition, just with, you know, a bit of glamour on top. Yeah.

I have this idea where they should do a driverless version of the cars too, because you have this closed track, right? So it'd be super easy to do an autonomous version.

And then the engineers are actually competing. There's no human element. And then you could celebrate the engineers. And I think by celebrating the engineering and the people behind the scenes, you get kids more interested in that work. Oh, see, I don't know if I agree with you actually. Oh, pushback. Yeah, pushback. I'm sorry. So early on. So, okay. So partly there are examples of that already. There's a, I think it's called RoboRace, which is

the fastest autonomous vehicles in the world. Different teams build the cars and it's like robot wars, right? But on a track. And it's all very fun. It's all very interesting. But for me, I think that part of the problem with why maths communication is difficult

is that really we care a lot about stories, and we care a lot about stories of people. And I think that in many ways, the thing that makes Formula One or other racing so fascinating to watch

is that you have a person sitting in that gigantic, you know, engineered machine, with so much science and technology going into it, a person who cares so much about what happens in that race. You know, you live the whole emotional roller coaster with them as the series progresses. And I think if you take that out of the situation, then actually I think it dehumanizes it and makes it less interesting in a way.

That's really interesting. So how do we make a better story around maths then? So I think it's that. For me, it's humanizing it. I think that really is it for me. You know, certainly in Britain, and I think in the States too, there's this massive book called Fermat's Last Theorem. Massive in terms of its sales rather than physically big. It was written by Simon Singh.

And, you know, I read it when I was maybe 16 years old. And it was one of the things that really

solidified the idea that I wanted to be a mathematician. And in it, it's just a long story of, you know, hardcore maths throughout the centuries. But what he did was he anchored all of the stories to the people that were involved. And it's exactly like your race car driver, right? You care so much about the characters who are involved in this history of maths. Galois is a great example of a character whose story Simon Singh tells in the book.

So he was French. He was about 19 years old. I'm sure someone who knows the facts better than me will contact me and correct me. But he was about 19 or 20, and he'd been having an affair with a very important person in French society, a woman who was older than he was. And her husband had found out about this affair

and had challenged him to a duel. Now, of course, in France, this is like, I'm going to guess, 1700s, 1800s. In France at that time, if someone challenges you to a duel, you do not back out, you go to the duel. Except, unfortunately, Galois had been working on this incredibly important theory of mathematics, now known as Galois theory, and hadn't quite finished the maths.

And so he knew that at sunset, he had to go off and fight this duel and probably be killed. And he was desperate all the way into the night, drinking and hunched over his quill and his paper, desperately trying to write down as much maths as he could. And the papers that he left

on his desk as he went off to his duel are just incredible. You can see photos of them, see images of them. They still exist. And it's loads and loads of equations, loads and loads of scribbling. And then every now and then he's like, oh my goodness, what's happening? This lady, why did I do this? I'm going to my death. And, you know, he's desperately trying to finish everything.

And I think for me, that's what makes the maths come to life. Because when you realize how important this stuff is to people, that they know they're going to their death and still the only thing they want to do is finish their maths, I think that's the stuff that makes it come alive. That's a great story. I hadn't heard that one before. It is, isn't it?

Yeah, it sort of like pulls you in. What does it mean to you to be human in an age of algorithms and machines? Wow, goodness.

I mean, I could write, and have written, an entire book on the subject. Exactly. So I think that actually that whole idea of humanizing maths, I think it sort of works both ways, actually. I think that you need to humanize maths to make people want to find out more about it.

But I also think that the maths itself needs to be humanised if it's to properly fit in with our society. Because I think this is something that's happened a lot, actually, in the last decade, certainly. I think that people have got very, very excited about data and about what data can tell us about ourselves. And I think that people have sort of rushed ahead and maybe not always thought very carefully about

about what happens when you build an algorithm, when you build something based on data and just expect humans to fit in around it. And I think that that actually has had quite, you know,

catastrophic consequences. So among the most famous examples of this, there's Cathy O'Neil's book, Weapons of Math Destruction, which I think homed in on one aspect of this really brilliantly, which is, you know, the bias that comes out when you don't think very carefully about taking this algorithm and planting it in the middle of society and expecting everyone to just

fit in around it. You know, the sort of gender bias that we've seen, the racial bias, all of that stuff. I think that's very well documented and quite well known and understood.

But I think there are slightly more subtle things as well. So the example that makes this a really personal story for me, and the reason I guess why I started thinking about this very seriously, and the reason why I wrote a book about it, is because of something that happened to me where I think I made that same mistake, where I got such tunnel vision about the maths that I didn't think about

what it meant when you put it in the human world. So this is back in, as soon as I finished my PhD, back in 2011, the first project really that I did was a collaboration with the Metropolitan Police in London.

So in 2011, we had these terrible riots across the country that started off as protests against police brutality, but they evolved into something else. And a lot of looting. There was a lot of social unrest, really.

And the police had been, I think, slightly stunned by how quickly this had taken hold. I mean, for four days, really, the city was on lockdown. London certainly was on lockdown. So we'd been working in collaboration with the police just to see if there had been anything they could have done earlier, just to calm things down, I guess, to see if there were signatures or patterns in the data

that would have given them a better grasp on how things were about to spread. So, okay, we wrote up this paper, and the academic community were really happy with it, whatever. And a couple of years later, I went off to this big conference in Berlin and gave a talk. There were like 1,500 people at this talk. And I was standing on stage giving a talk about this paper.

And I think I was a bit naive, really. I think I was a bit foolish at the time. Because when you're a mathematician, there's no Hippocratic oath for mathematicians. You don't have to worry about the ethics of, I don't know, fluid particles when you're running equations on them.

And so I was standing on stage and I was presenting this paper and I was giving this very enthusiastic presentation. I was essentially saying how great it was that now with data and algorithms, we were in a world where we could help the police to control an entire city's worth of people. That was essentially what I was saying. And it just hadn't occurred to me that, you know, if there is one city in the entire world where people are probably not going to be that keen on that idea, it's going to be Berlin.

So I just didn't think it through. Anyway, so as a result, the Q&A of this session, I mean, they destroyed me. And quite rightly so, they destroyed me.

They didn't destroy the maths, they just destroyed the... They didn't destroy the maths, they just destroyed, yeah. It was like heckling and everything. It was amazing. It was amazing. I think for me that was just this really, really important moment, because it just hadn't quite twigged with me. I know that it makes me sound really naive, but

it hadn't quite twigged in my mind that you can't just build an algorithm, put it on a shelf, and decide whether you think it's good or bad completely in isolation. You have to think about how the algorithm actually integrates with the world that you're embedding it in. And I think that that's a mistake that sounds like it's really obvious, but actually I've seen lots and lots of people make that mistake repeatedly over the last few years and continue to make it. Can you give me examples of what comes to mind when you say that? Just as a silly example,

a kind of more trivial example: I think the way that some satnavs used to be designed, this is less true now, was that you would just type in your destination and it would tell you where to go, and off you went, right? You could, if you wanted to, go in and interrogate the interface and find out exactly where the thing was sending you. But mostly, you'd put in the address and it would just tell you where to go.

And that is an example, I think, of not thinking clearly about the interface between the human and the machine, because there are all sorts of stories about people just blindly following their sat-nav.

They didn't look at the map, off they went, didn't realise the sat-nav was essentially telling them to drive out into the ocean.

And amazingly, amazingly, you'd think, okay, fine, right? You know, you get to the edge of the ocean and you're like, well, no, it's obviously asking me to drive into the ocean, I'm not going to. They didn't have that moment. They carried on driving. They really trusted the machine and thought, oh well, it'll bring us to a path eventually. And eventually they had to abandon their vehicle, I think like 300 metres out into the ocean. This is amazing. Half an hour later, as the tide came in, a ferry sailed past their abandoned car.

That's crazy. It sort of calls to mind, though, what role do algorithms play then in abdicating thinking and authority? Well, no, that's it. That's it. So I think the shift

in design that we've seen recently, and this is only very recently, is where you type in the address now. So I'm thinking in terms of Google Maps and Waze, certainly, and perhaps others. You type in the address and then up pops a map which gives you three options, right? So it's not saying, I've made the decision for you, off you go. It's saying, here are the calculations I've made. Now it's down to you.

But it's giving you that, I guess, just that last step where you can overrule it, where you can kind of sanity check it, if you like. And maybe I'm giving them a bit too much credit, they did drive out into the ocean, but I sort of think if these tourists had been seeing a map showing that they were going into the ocean, maybe they wouldn't have done it.

How does that work as algorithms become more and more prevalent? Is that the goal then? I'm thinking about the integration between algorithms and medicine, where you're scanning images. Is it always a human overruling? Are there edge cases? How do you think about that? Yeah, so that I think is an incredibly important,

incredibly tough example. So, okay, the first algorithms that came through, the machine learning algorithms, were designed to just tell you whether there were cancerous cells within an image or not, right? Yes or no. And that's all very well. That's, you know, that's good. And they proved that they could perform well at that.

But they were problematic. There were examples where, you know, they'd go into a hospital, they'd been performing incredibly well on a certain set of images, and then suddenly they're performing incredibly badly. And these algorithms are so sensitive that they were picking up on things like the type of scanner that was used, which was making a difference to the decision process of the algorithm. Actually, the best example of that is there was a skin cancer diagnosis algorithm

that was picking up on lesions on people's skin; the training set was photographs taken by dermatologists. And it turned out that the algorithm wasn't really looking at the lesion itself at all. It was deciding whether or not it was cancerous based on whether there was a ruler photographed next to it or not. That kind of stuff. This stuff makes stupid mistakes.

So I think that was sort of phase one of these algorithms within medicine. I think phase two is about making them much more able to be interrogated. So, for instance, DeepMind, who I spent a long time working with on public outreach projects, one of their big systems, rather than just having an algorithm that tells you what the answer is, has two separate AIs, right, two separate agents.

One of them highlights areas of interest within the image itself. And then the second algorithm goes in and labels them. It's just kind of opening out the box a little bit more, so that it's possible for a pathologist or a radiologist to interrogate that image. So, okay, I think that's stage two, right? And that's the difference between old-type satnavs and new-type satnavs.
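To make that two-agent idea concrete, here is a minimal sketch of that kind of two-stage pipeline. Everything in it is an assumption for illustration (the function names, the region format, the confidence cutoff); it is not DeepMind's actual system, just the shape of the design: one model proposes regions of interest, a second labels them, and both outputs stay visible so a clinician can interrogate them.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# A region of interest: a bounding box plus the detector's confidence.
@dataclass
class Region:
    box: Tuple[int, int, int, int]   # (x, y, width, height)
    confidence: float

def two_stage_review(
    image,
    detector: Callable[[object], List[Region]],    # agent 1: highlight areas
    classifier: Callable[[object, Region], str],   # agent 2: label them
) -> List[Tuple[Region, str]]:
    """Return (region, label) pairs rather than one opaque yes/no,
    so a pathologist or radiologist can inspect what was flagged."""
    findings = []
    for region in detector(image):
        if region.confidence < 0.5:   # arbitrary illustrative threshold
            continue
        findings.append((region, classifier(image, region)))
    return findings
```

The point of the structure is transparency: the intermediate regions are part of the output, not hidden inside a single end-to-end answer.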

But I think that there's a stage three in medicine that we're only just beginning to go into, which is, I think, an even harder one, which is that most cancerous cells in people's bodies are actually nothing to worry about. That sounds like a mad idea, but there was a study a few years ago. You have to forgive me slightly because I don't have all the numbers on the tip of my tongue, but a group of scientists performed autopsies on people who had died from a whole host of different causes. So everything from heart attacks to

car crashes, all these different kinds of things. And they looked deliberately to see whether they had cancerous cells in the body. And even though none of these patients had died from cancer, a huge percentage of them had cancerous cells within their body. And the reason for this, it's not that they all had really serious cancer that needed to be detected and treated.

Actually, this happens a lot, right? If you have breast cancer, for example, it's not a case of you don't have cancer or you do have cancer. There's a whole spectrum in between. In between totally fine and really, really nasty cancerous cells, there are tumours that may turn out to be something bad, or the body may just deal with them, or they may just stay there untouched

for essentially all of your life and be nothing to worry about. And the real danger of relying too much

on algorithms to detect those cancerous cells is that if you are too good at detecting them, you're not just good at detecting the ones that then go on to be a problem. You're also going to be good at detecting the ones that are nothing to worry about, and hence potentially causing huge numbers of people to have very serious and very invasive procedures, like double mastectomies, for instance,

life-changing treatments, right, that actually they never needed to have. And that, I think, is another thing about that boundary between how much we trust our machines that is not resolved yet, and a tricky one for the next few years, I think. That's fascinating. I hadn't really thought of it in that way before, but I like the way you put it. I think one of the interesting things going into the future is also going to be, if algorithms are involved in a decision,

Is there an obligation to make them open source? And then that would be sort of like stage one where you can critique and see the actual algorithm working. But stage two would be maybe it's a machine learning algorithm. And then each iteration that it runs is actually slightly different. Like, do we have to keep a copy of each algorithm? And would we be able to detect like how it actually worked? I know, right? I know. It's so hard.

It's so hard. It's so hard because I think it's very easy, you know, it's very easy to say there are definitely problems with algorithms that are not open source. It's very easy to say there are huge problems with transparency, but finding the way around it, finding the solutions is a lot harder. Yeah.

It's a lot harder. I mean, I sort of am of the opinion that open-sourcing algorithms, at least the ones that are proprietary, the ones that have some sort of intellectual property attached to them, is both too much and too little. So what I mean by that is, I think it's too little, because if you publish the code, if you publish the source code of something,

the level of technical knowledge and time, actually, that it would take to interrogate that as an outsider, enough that you have a really good understanding of how it works, enough to be able to sanity check it, if you like, is just vast. And I just don't think it's realistic that you can ask the community at large to take on that load.

But then simultaneously, I think it's too much, because by releasing and making everything open source, I think that you are going to stifle innovation, right? Because part of the reason why we've seen such acceleration of these ideas is because it's possible to make them commercially viable. And if you publish things as open source, then you risk slowing down innovation, which is a problem.

I don't think you'd want to do either. So the workaround, you know, okay, what do you do instead? Because I think that everybody sort of agrees that transparency is really important here, particularly when it comes to the more scientific end of algorithms. I mean, to be totally blunt, I think that unless you're doing science openly, you're not doing science. But yeah, I mean, it's really hard. So some of the suggestions have been,

and I think this is one that I broadly support, some of the suggestions have been to copy the pharmaceutical industry's model, where you have a separate board, like the FDA, who have the ability to really interrogate these algorithms properly and can give a sort of rubber stamp of approval as to whether they are appropriate to be used or not.

But that's different from just open source because, I mean, a sort of FDA-style body would be able to go in and stress test them and test them for robustness and check them for bias and all of those types of things instead. But, I mean, there's no silver bullet for addressing some of the many problems that algorithms raise. Do you think, like, we would rather...

In general, when do we want algorithms making decisions and when do we want humans making those decisions? - Well, so there are certainly some occasions where actually the further away humans are from the decision, the better. Humans are not very good at making decisions at all. We're not very good at being consistent. We're not very good at being clear. With nuclear power stations, for instance, as much as possible, you want to leave that to the algorithms. You want to leave that to the machines.

Likewise, in flying airplanes, I think you want to leave that to autopilot as much as you possibly can. In fact, there's that really nice joke: to fly a plane, you need three things, a computer, a human, and a dog.

And the computer is there to fly the plane. The human is there to feed the dog and the dog is there to bite the human if ever it touches the computer. Which I think is like nice. There's definitely some situations where you want the humans as far away from it as possible. But I also think that actually these machines, especially the ones that are getting much more involved in more social decisions, they really are capable of making quite catastrophic mistakes.

And I think that if you take the human out of the decision, even if on average you might have a slightly better, more consistent framework, if you take the human out of that decision process altogether, then I think that you risk real disasters.

We've certainly seen plenty of those in the judicial system, you know, where algorithms have made decisions, judges have followed them blindly, and it's been really the wrong thing. Just to give you an example, there's a young man called Christopher Drew Brooks. This was actually a few years ago, but he was 19 years old, from Virginia, and he was arrested for the statutory rape of a 14-year-old girl. So they had been having a consensual relationship,

but she was underage, which is illegal, and so he was convicted. But during his trial, an algorithm assessed his chance of going on to commit another crime in future. These are the sort of very controversial algorithms, yeah, exactly, that have actually been around for quite a long time. And this algorithm, it went through all of his data, and it determined that because he was a very young man, he was only 19 years old, and he was already committing sexual offences,

then he had a long life ahead of him, and the chances of him committing another one in that long life were high. So it said that he was high risk, and it recommended that he be given 18 months' jail time. Which, I mean, I think you can argue one way or the other, depending on your view. But I think what this case really does is it highlights just how illogical these algorithms can sometimes be. Because in that particular case,

if instead the young man had been, I think, 36 years old, that would have been enough. This algorithm had put so much weight on his age that if he'd been 36, it would have been enough to tip the balance, even though that would have put him at 22 years older than the girl, which I think surely by any possible metric makes the crime much worse.
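As a toy illustration of how a score can invert on age like that: the weights below are completely made up (this is not the actual risk-assessment tool from the case), but they show how a single heavily weighted age term can flip a classification on its own.

```python
# Purely hypothetical weights, only to illustrate the age effect
# described above; this is NOT the real risk-assessment algorithm.
def risk_band(age: int, prior_sexual_offence: bool) -> str:
    score = 3.0 if prior_sexual_offence else 0.0
    score -= 0.1 * age            # youth alone drives the score up
    return "high risk" if score > 0 else "low risk"

print(risk_band(19, True))   # high risk -> longer recommended sentence
print(risk_band(36, True))   # low risk, despite a 22-year age gap
```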

But that would have been enough just to tip the balance and for the algorithm to believe that he was low risk and to recommend that he escaped jail entirely, which I think is just an extraordinary example of how wrong these decisions can go if you hand them over to the algorithm. But I think for me, the scary thing about that story is that the judge was still in the loop, right? The judge was still in the loop of that decision making process.

And I think that you would hope in that kind of situation that they would notice that the algorithm had made this terrible mistake and step in and overrule it. Well, it turns out that, you know, those Japanese tourists we were talking about earlier, I think the judges are a lot more like them than we might want them to be. Because in that case, and lots of other cases like it, actually, the judge just sort of blindly followed what the algorithm had to say and increased the jail sentence of this individual.

So, I mean, you've got to be really careful, right? You've got to be careful about putting too much faith in the algorithm. But just on the flip side of that judges example, I also don't agree with the people who say, well, let's get rid of these things altogether in the judicial system. Because I think there is a reason for them being there, which is that humans are terrible decision makers, right? There's so much luck involved in the judicial system. There are studies that show that if you take the same case to different judges, you get a different response.

But even if you take the same case to the same judge and just on a different day, you get different responses. Or judges who have daughters tend to be much stricter in cases that involve violence against women. Or my favourite one, actually, is that judges tend to be a lot stricter in towns where the local sports team has lost recently, which kind of shows you what you're dealing with, right? Like there's just so much inconsistency and luck.

that's involved in the judicial system. And I think if you do it right and carefully, I think there is a place for algorithms to support those decisions being made. Do you think in a way we get to absolve ourselves of responsibility if we defer to an algorithm? So if you're a judge and you defer to an algorithm,

it's not like you're going to be fired for deferring to the algorithm that everybody agreed was supposed to inform or make the decision. Exactly that. Especially if people vote you in. And here's a way that you can absolve yourself of responsibility. I completely agree. I think all of us do it. All of us do it. And that's the problem: this is a really, really easy thing to happen. It's very easy for us to just...

I don't know, take a cognitive shortcut and do what the machine tells us to do. Which is why you have to be so careful about thinking about this interface, thinking about the kind of mistakes that people are going to make, and how you mitigate against them by designing stuff to prevent that from happening. Can you talk to me a little bit about what we can learn about making better decisions from maths?

I'm going to use a pertinent example. I think the example of what's going on right now with the pandemic is a really tragic and chilling example of how important maths can be when it comes to making clear decisions.

Because I think that this is just one situation where, in many ways, maths is really the biggest weapon that we have on our side. You know, we don't have pharmaceutical interventions yet. We don't have a vaccine yet. And all we have really is the data and the numbers. This is March 18th. Yeah, just for people listening, of 2020. Yeah, exactly. So we're still at the stage where things are ramping up. I mean, you know,

who knows how bad it's going to get from here. But certainly in the last month,

I mean, they're the first ones, really. The epidemiologists and the mathematical modellers are the ones who've been sort of raising the alarm and driving the decision making and driving the strategy and driving government policies. You know, because at the moment, if you looked only at the numbers of where we are, I think there have been maybe 150 deaths or so in the UK. I haven't got the exact numbers at my fingertips, but something of that order, right? Around 100 deaths

in the UK, which, you know, it's every single one of those is a real tragedy. But it's not a huge, huge, huge number. But the reason why we know that that's a bad, why we're in a bad situation, and the reason why we know we need to take these extreme measures to essentially shut down our borders, to shut down our country, is because the maths is telling us what is coming next. We don't have a crystal ball to look into the future. But really, maths is the only thing that's there guiding us.

It's really fascinating to me. Can you talk to me a little bit more about the pandemic and sort of like how you think about it through the lens of math? Yeah, technically. So I actually...

Um, in 2018, I did a big project with the BBC, because we knew that a pandemic was coming. So we teamed up with some epidemiologists from the London School of Hygiene & Tropical Medicine and the University of Cambridge to collect the best possible data, so that we could be prepared for when something like this did happen. The big problem at that point, so this is only, you know, a couple of years ago.

The big problem was that if you want to know how an epidemic or flu-like virus will spread through a population, then you need to have really good data on how far people travel and how often people come into contact with one another. And crucially, who they come into contact with, the different age groups, the settings they come into contact with other people and so on.
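As a rough sketch of why that contact data matters: compartmental epidemic models typically weight transmission by who mixes with whom. The toy model below is my illustration, not the models she describes; the contact matrix, rates, and group sizes are all invented.

```python
import numpy as np

# Toy age-structured SIR model with two groups (young, old).
# C[i][j] = average daily contacts a person in group i has with
# group j -- exactly the kind of quantity a contact survey measures.
# Every number here is made up for illustration.
C = np.array([[8.0, 3.0],
              [3.0, 4.0]])
N = np.array([0.6, 0.4])       # population fraction in each group
beta = 0.05                    # transmission probability per contact
gamma = 1 / 7                  # recovery rate (1 / infectious period)

I = np.array([1e-4, 1e-4])     # tiny seed of infection
S = N - I                      # everyone else starts susceptible
R = np.zeros(2)

for day in range(120):
    # Force of infection on each group: contacts weighted by how
    # prevalent infection is in the groups they actually meet.
    lam = beta * C @ (I / N)
    new_inf = lam * S
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    if day % 20 == 0:
        print(f"day {day:3d}: share currently infected = {I.sum():.4f}")
```

Change the contact matrix and the whole trajectory changes, which is why a 2006 paper survey was such a weak foundation for these models.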

And up until a couple of years ago, it sounds mad to say it, given that everyone's carrying mobile phones, but up until a couple of years ago, the best possible data that we had, within the UK at least,

for how people did that, how people moved and how people mixed with one another was a paper survey from 2006 where a thousand people said, oh yeah, I reckon I did this. I reckon I went about that far. I reckon I came into contact with these people.

So what we did with the help of the BBC, because they have such amazing reach, is we created this mobile app that would essentially track people. People would volunteer and sign up by watching the programme and so on, and let us track them around for 24 hours and track who they came into contact with, and also give us loads of things about their demographics and their age and so on and so on. Now, two years later, or less than two years later,

we have this incredibly detailed data set that's feeding right into the models that our government is using, making an enormous difference in terms of the accuracy of how well we can predict things. And I just think it's the most pertinent and chilling example I've ever been part of, which just demonstrates how important the maths is if you're going to

try and win a war with nature, essentially. It seemed to me, I mean, there were two different types of people, just to broadly generalize, going into this pandemic. There were people who understood nonlinear and exponential functions, and people who maybe had a harder time with that. And the people who did seem to understand or grasp those concepts better seemed to take it a lot more seriously than the people that didn't. And I would love to find a way to

and help people think better in terms of exponentiality. Yeah, of course. I mean, part of the problem is that the word exponential just gets thrown around. Like, you know, people say, oh, this project's exponentially more difficult or, you know, exponentially more dangerous. And it's like, well, no, it's not. That's not what the word means. And it is really counterintuitive because the thing about exponential growth, it doesn't just mean

big, it doesn't just mean lots, it means something very specific. It means that it's where something is changing by a fixed fraction in a fixed period. So this virus, for instance, is doubling every five days. So doubling, fixed fraction, every five days is a fixed period.

And I think that it's just, yeah, I mean, it's just not something that's intuitive at all. There's the really classic example of the rice on the chessboard. It's a classic story about an Indian king who was really impressed with the game of chess when it was shown to him. And so he said, OK, I'll tell you what.

I'll give you a grain of rice for the first square and then we'll double the grains of rice every subsequent square, right? Which sounds like, oh, that's not very much. If you're at the beginning and it's like one grain and two grains and then four grains, it's like, okay, this is not gonna cost me very much.

The thing is that by the end of the chessboard, you need a lot of rice. Essentially, you need 18 quintillion grains of rice, which is, essentially, I worked this out: if you take Liverpool, the area of Liverpool, which I know for American listeners isn't easy to imagine, but it's essentially like a whole city. It's an area that size, stacked three kilometres high with rice. That's how much rice it is.
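The arithmetic she's describing is easy to check, and the same "fixed fraction per fixed period" definition covers the virus example; the starting case count below is purely illustrative, not real data.

```python
# Rice on the chessboard: 1 grain on the first square, doubling on
# each of the 64 squares.
total_grains = sum(2**square for square in range(64))   # = 2**64 - 1
print(f"{total_grains:,}")   # 18,446,744,073,709,551,615 (~18 quintillion)

# Exponential growth by her definition: a fixed fraction (doubling)
# in a fixed period (every 5 days). Starting value is invented.
cases = 100
for day in range(0, 31, 5):
    print(f"day {day:2d}: {cases:,}")
    cases *= 2                # one doubling per 5-day period
```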

It's like, I mean, exponential growth is just beyond imagining. It's just completely counterintuitive. One of the stories I loved in your book, switching gears a little here to Hello World, is the story of Kasparov playing Deep Blue. Everybody's told that story, but you had a unique angle to it that I hadn't heard anywhere else,

which is that the machine was also playing with Kasparov. Yeah. So this goes exactly back to what I was saying earlier, that it's not just about building a machine. It's about thinking about how that machine fits in with humans and fits in with human weaknesses. Because the thing is that Kasparov,

I mean, he was an incredible player. When I was researching my book, I spoke to lots of different chess grandmasters, and one of them described him like a tornado. When he would walk into the room, he would essentially pin people to the sides of the room. They would kind of clear a path for him, because he was just so respected.

And what he used to do, he had this trick. If he was playing you, he would take off his watch and he would place it down on the table next to him and then carry on playing. And then when he decided that he'd had enough of toying with you, he would pick up his watch and he would put it back on, as if to say, that's time now, I'm done.

I'm not playing you anymore. And essentially everyone in the room knew that was your cue to resign the game. Which is just so intimidating and just really terrifying. The thing is that those tricks that Kasparov had, I mean, they're not going to work on a machine, right? You've got the IBM guy sitting in the seat, but he's not the one making the moves. He's not the one playing. So it's not going to affect him at all. So none of that stuff worked in Kasparov's favour.

And yet the other way around, the IBM machine could still use tricks on him.

So there are a few reports that the IBM team deliberately coded their machine so that, the way it worked, it would sort of search for solutions, and how long that search took determined how quickly the answer came back. But they deliberately coded it so that sometimes, in certain positions, the machine might find the answer very quickly.

But rather than just come back with the response, they added in a random amount of time where it looked like the machine was just ticking over, thinking very carefully about what the move was, when in reality it was just sitting there in a sort of holding pattern.
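Mechanically, the trick she describes is very simple. A sketch of the idea (the names and the delay range are assumptions for illustration, not details of Deep Blue itself):

```python
import random
import time

def play_move(search, position):
    """Run the engine's search, then pad the response with a random
    delay so the opponent can't read anything into the clock."""
    start = time.monotonic()
    move = search(position)               # may return almost instantly
    elapsed = time.monotonic() - start
    target = random.uniform(5.0, 60.0)    # invented "thinking" range
    if elapsed < target:
        time.sleep(target - elapsed)      # sit in a holding pattern
    return move
```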

And Kasparov himself, in his latest book and in several interviews, has said that he was sitting there trying to second-guess what the machine was doing at all times. So he was trying to work out why this machine was stuck grinding through very difficult calculations, and essentially got psyched out by the machine. Because I think all of the chess grandmasters are pretty much uniformly in agreement that at that moment in time, when the machine beat Kasparov, Kasparov was still the better player.

But it was the fact that he was a human. It was the fact that he had those human failings that meant that he was outsmarted by the machine. That's such an amazing and incredible story. Thanks for sharing that. Your first book, The Mathematics of Love, explained the math underlying human relationships. How can applying math concepts to romantic situations be helpful to people? Wow.

So that book was sort of a kind of private joke that got terribly out of hand, where, you know, when I was in the dating game, or designing the table plan for my wedding, or any of those things, I mean, I just generally apply maths to everything and would just try and calculate as much as possible, trying to game it as much as possible.

And so in the end, I wrote these up into a book, and it's all very tongue in cheek. But the thing is, while I totally believe that you cannot write down an equation for real romance, you can't write down an equation for that spark of delight that you get when you meet someone and you know you really like them,

there's no real maths in that, but there's still loads of maths in lots of aspects of your love life, right? So there's maths in, you know, how many people you date before you decide to settle down. There's maths in the data of what photographs work well on online dating apps or websites.

There's loads of maths in designing your table plan for your wedding, to make sure that people that don't like each other don't have to sit together. Incidentally, my code's available if anyone wants it. And there's even, actually my favourite, favourite one, there's even maths in the dynamics of arguments between couples in long-term relationships.

So there's lots of little places that you can find a place to kind of latch on and use the math. How many people should we date before we settle down? This is the one that got me the most in trouble.

So, okay. So here's the problem, right? What you don't want to do, I guess, in an ideal world, is latch onto and settle down with the very, very first person who shows you any interest at all. Because actually they might not be that well suited to you, and if you hold out a little bit longer, maybe you'll find someone who's better suited to you. But equally, you don't want to wait forever and ever and ever, because you may end up missing the person who was right for you, right?

Turning them down because you think someone better's around the corner, and then finding out that actually they were always the right person.

So what you could do is you can set this up as though it's a mathematical problem. So you've got a number of opportunities lined up in a row, sort of chronologically lined up. And your task is you want to stop at the perfect time. You want to stop at the moment that you're with your perfect partner. So it's essentially a problem in what's called optimal stopping theory. So the rules are that once you reject someone, you can't go back and say, actually, I wanted you after all, because people don't tend to like that.

And the other rule is that once you decide that you've settled down, you can't look ahead to see who you could have had going on later in life. So if you frame it like that with those assumptions, then it turns out that the mathematically best strategy is,

is if you spend the first 37% of your dating life just having a nice time and playing the field. So it's one over e, right? So 37%. Yeah, spend the first 37% of your dating life just playing the field, having a nice time, getting to know people, but not taking anything too seriously. And then after that period has passed, you settle down with the next person who comes along who is better than everyone you've seen before.
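This is the classic optimal stopping result (often told as the "secretary problem"), and it's easy to check by simulation. A minimal sketch, under the simplifying rules she lists (no going back, and you only ever learn how candidates compare to each other):

```python
import math
import random

def picks_best(n: int) -> bool:
    """One simulated dating life: look at the first n/e candidates
    without committing, then settle for the first one who is better
    than everyone seen so far."""
    ranks = random.sample(range(n), n)   # 0 is the best possible partner
    k = int(n / math.e)                  # the ~37% "look" phase
    benchmark = min(ranks[:k])           # best candidate seen while looking
    for r in ranks[k:]:
        if r < benchmark:
            return r == 0                # stopped here: was it the best?
    return ranks[-1] == 0                # never stopped: stuck with the last

trials, n = 100_000, 100
wins = sum(picks_best(n) for _ in range(trials))
print(f"found the best partner {wins / trials:.0%} of the time")  # ~37%
```

Both the 37% cutoff and the roughly 37% success rate fall out of the same 1/e.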

So, yeah, that's what the math says. But I should tell you, right, I should tell you that there's quite a lot of risks involved in this. So is that what you tell your husband? You're the best after the 37%? Yeah.

Yeah. Yeah. Yeah. Marginally better. Yeah. That's it. Exactly. How can we use, I don't want to say argue better, but I'll use your language. Like how can we use math to argue better in our relationship? Oh, this is my favorite, favorite one. So this is, this is some work that was done by the psychologist, John Gottman. He's done some amazing work with couples in long-term relationships and

And he's worked out a way, what he essentially does is he gets couples in a room together, he videotapes them, and he gets them to effectively have an argument with one another, right? So officially, they say that they ask them to have a conversation about the most contentious issue in their relationship. But basically, they lock a couple in a room and make them have an argument.

But what they've done is they've worked out a way to score everything that happens during that conversation. So every time that someone's positive, they get a positive score. Every time someone laughs, or gives way to their partner, they get a positive score. And even gestures count, right? So if you roll your eyes, you get a negative score. If you stonewall your partner, you get a negative score, that kind of thing. Anyway, the thing that's kind of neat about this is that it then means that you can look at a graph of how an argument evolves over time.

So the really nice thing about this is that John Gottman then teamed up with a mathematician called James Murray, who came up with a set of equations for how these arguments ebb and flow, the dynamics of these arguments, essentially. And hidden inside those equations, there's something called the negativity threshold. So essentially, this is how annoying someone has to be before they provoke an extreme response in their partner, right?
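A minimal sketch of the flavour of those equations, with every number invented for illustration (the real model was fitted to the scored conversations): each partner's next score is their own inertia plus an "influence" of the other's last score, and the negativity threshold lives inside that influence function.

```python
# Invented parameters throughout; this only shows where the
# negativity threshold sits in this style of coupled equations.
def influence(partner: float, neg_threshold: float) -> float:
    """How your partner's last turn shifts your next one. Below the
    negativity threshold, their negativity stops being ignored and
    provokes a strong response."""
    if partner >= 0:
        return 0.2 * partner          # positivity pulls you up a little
    if partner > neg_threshold:
        return 0.0                    # mild negativity: let it go
    return 0.8 * partner              # threshold crossed: strong reaction

w, h, threshold = 1.0, -1.0, -0.5     # invented starting moods
for turn in range(6):
    w = 0.2 + 0.5 * w + influence(h, threshold)   # inertia + influence
    h = 0.2 + 0.5 * h + influence(w, threshold)
    print(f"turn {turn}: w={w:+.2f}, h={h:+.2f}")
```

The threshold is the knob the data turned out to care about: how negative a turn has to be before it triggers a reaction at all.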

So my guess would have been, I mean, they've got the data on, you know, hundreds, if not thousands of couples here. My guess always would have been, all right, negativity threshold. Surely the people who've got the best chance at long-term success, the people who end up staying together, surely those are going to be the ones where they've got a really high negativity threshold. That would, that would have always been my guess.

You know, like the couples where you're leaving room for the other person to be themselves. You're not picking on every single little thing, and you're kind of compromising. That would have been my guess. Turns out, though, when you actually look at the data, the exact opposite is true. The people who have the best chance at long-term success are actually the people who've got really low negativity thresholds.

So instead, they're the people where if something annoys them, they speak up about it really quickly, immediately, essentially, and address that situation right there and then. But they do it in a way where the problem is dealt with, and then actually you go back to normality. So it's couples who are continually repairing and resolving very, very tiny issues in their relationship,

because otherwise you risk bottling things up and not saying anything, and then one day coming home and being totally angry about a towel that's left on the floor or something, and it being totally at odds with the incident itself. You know, bottling things up and then exploding. Yeah.

Yeah, I think that's really fascinating, right? Because if you look at what it takes to bring things up in a relationship when they happen, or pretty close to the time they happen, it means you have a lot of security and comfort. And you know that bringing this hard thing up might make somebody angry or hurt them, but it's not going to be the end of the relationship. And then not letting it fester actually makes the relationship stronger long term.

Exactly, exactly. Now, of course, the language that you use is really important as well, right? So you can't just launch it and be a nightmare about it. But I think that's, I really like, I love those stories, I love those stories where there's something about humans that is just written completely in the numbers. I think that's really wonderful. Hannah, this has been an amazing conversation. I want to thank you for your time. Oh, thank you. Thank you very much.

Hey, one more thing before we say goodbye. The Knowledge Project is produced by the team at Farnam Street. I want to make this the best podcast you listen to, and I'd love to get your feedback. If you have comments, ideas for future shows or topics, or just feedback in general, you can email me at shane at fs.blog or follow me on Twitter at shaneaparrish. You can learn more about the show and find past episodes at fs.blog slash podcast.

If you want a transcript of this episode, go to fs.blog slash tribe and join our learning community. If you found this episode valuable, share it online with the hashtag the knowledge project or leave a review. Until the next episode.