Welcome to the LSE Events Podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences. Good evening, everyone, and thank you for joining us today for this very exciting public lecture. My name is Teng-Yao Wong, and I'm a professor of statistics in the stats department here. It is my absolute honor to chair this event and welcome Professor Sir David Spiegelhalter at LSE today.
Professor Spiegelhalter is Emeritus Professor of Statistics in the Statistical Laboratory, University of Cambridge. He is one of the most influential statisticians of our time, known not only for his groundbreaking contributions to the statistical community, but also for his dedication to enhancing public understanding of statistics. He was previously the chair of the Winton Centre for Risk and Evidence Communication at the University of Cambridge,
where his work has focused on how statistical evidence can be effectively communicated to the public and policymakers. Over his illustrious career, Professor Spiegelhalter has made significant advancements in Bayesian statistics. His research encompasses the development of novel statistical methodologies and their applications in public health.
He led the development of the groundbreaking WinBUGS software, which for the first time made complex Bayesian methods accessible to researchers from statistics and beyond. His research has been very highly cited by researchers from a broad range of disciplines. And beyond academia, Professor Spiegelhalter has been a tireless advocate for statistical literacy.
Many of you here may know him for his contributions to BBC programmes and TED talks, which have inspired millions to think critically about numbers and probability. His extraordinary contributions have earned him numerous awards. He was elected as a Fellow of the Royal Society in 2007.
He received a knighthood for services to statistics in 2014 and received a gold medal from the Royal Statistical Society in 2020. We're also very thrilled to highlight Professor Spiegelhalter's latest book, which he's going to speak about today: The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk and Luck, published last year.
This insightful book explores how we can better understand and embrace uncertainty in this very ever-changing world. You will probably have seen the booth outside the lecture theatre. After the lecture, there will be a book selling and signing session, an opportunity that you do not want to miss. Before we start, I have a few housekeeping notes. Please make sure that your mobile phones are switched to silent mode. LAUGHTER
I should have said this earlier. So we can all enjoy the lecture without any interruptions. Also, I'm glad to say that today's event is live streamed and will also be recorded. So you can watch it later to revisit Professor Spiegelhalter's insights and share them with your family and friends.
Now, without further ado, would you please join me in welcoming Professor Sir David Spiegelhalter for his lecture on The Art of Uncertainty: Living with Chance, Ignorance, Risk and Luck. Thank you. Thank you.
Thanks. Thanks very much indeed for the kind introduction and thank you all for coming along this evening. It's great to see so many bums on seats. So I'm really excited about today. Okay, so that's what I'm going to be talking about. That's me. I'm emeritus, which means I'm an old, retired man,
and a non-executive director of the UK Statistics Authority, so I do something useful with my time. Okay, so just a little bit of background. I used to do statistical methodology. I used to do the sort of difficult stuff
that people in the department at the LSE do now, and stuff on AI in the 1980s, and Bayesian methods and Bayesian computation. But I was fortunate enough in 2007 to get funded by a friendly billionaire. This is my career advice to everybody. Find a friendly billionaire.
And ask him for lots of money. And it's great. And then you can do what you feel like. So, you know, just general advice. Just find one. And I was philanthropically funded as the Winton Professor for the Public Understanding of Risk. So then I could go into communication. So there's me with Hannah Fry doing BBC documentaries and other documentaries; I jumped out of a plane and that sort of thing. I did Wipeout.
It's me jumping over the big red balls in Argentina, falling in the mud. So I did that. And I even got on Desert Island Discs, which was, those two were my life's ambitions to be in Wipeout and be on Desert Island Discs. So I've done everything I ever would love to do. And I don't have to do anything anymore. OK. And I've written some books.
I used to write unpopular popular books, but now I write fairly popular popular books, which is great. And that's the latest one, which is available from all good booksellers and outside. And those are the chapters. And actually, it's slightly indulgent. I'll tell you a good thing: write a really popular book, and then you get asked to do another one, and you can do what you feel like. That's what happened to me. So this is a really indulgent book. I've put in everything I'm actually interested in,
All the stuff I've been pondering about and arguing about for 50 years, and there it all is. And I'm just going to do a few bits this evening. I'm going to do stuff on luck, a bit on coincidences, a bit on confidence in models, and a bit on deep uncertainty,
and so on. Oh, and putting your ignorance into numbers, because I'm going to give you a little quiz later on. So it's a random range of topics that I find interesting. So uncertainty. The first thing I talk about in the book is what is uncertainty? And people say, well, it's not being certain. So what does certain mean?
Well, then you have to start thinking about truth and everything like that. But I like this definition: the conscious awareness of ignorance.
It's when you know you don't know, when you own up to the fact that you don't know something about either what will happen, or what's going on at the moment, or why something happened. But I do restrict it to things that in principle you could actually find out the answer to. So when I say I'm uncertain what to eat tonight in this restaurant, that's not what I mean.
Or I'm uncertain what the best Beatles records were, or, you know, I'm uncertain what to wear. No, those are not what I'm talking about, because there's no correct answer. I'm talking about things where there actually is an answer. And you may, of course, be ignorant about the future, the present, the past, or why things happened. And so I like to illustrate that with a coin. So here's a coin. What's the probability this is going to be heads when I flip it?
Okay. What's the probability this is heads? Zero or one? No, come on. Zero or one?
Nah. Now he says zero or one. This is typical, typical of the sort of outdated view that professors at LSE have. This is due to, you know, an outdated, what's called frequentist, view of probability, which says that once it's happened, you can't give it a probability. Now that is completely wrong. Of course you can give this a probability. And what probability might you give it? Your probability?
Yeah, 50%. It's not my probability. Actually, the point is that what that shows is that probability is a measurement of uncertainty and that uncertainty is a relationship between you and the outside world. It's not a property of the object. It's a property of you and your relationship to that object. And actually, what I should do is actually show you the coin. Have a look at the coin.
Turn it over. Yeah, very nice. It's a lovely two pound coin, but the problem is it's strictly illegal because it's got two heads. Have a look at it afterwards. It's absolutely beautifully made. It's got two heads. So that initial, when you all said, oh, 50-50 before I flipped it, you were wrong. You were wrong. You were naive enough to trust me. What that shows is that any probability you give in any circumstances, any number,
It's a construction. It's a subjective construction based on assumptions about how the world works. And you may be right. And so, again, it's an expression of your uncertainty about the world. This is quite a powerful idea.
And what it means is that that sort of statement that is either zero or one is completely idiotic, but that's what you expect. Okay, so the book is about deconstructing uncertainty in our lives as people.
And it's how to, I love this, I made this up. Well, I didn't make up the first bit, I stole it. But this is: how to think slowly about not knowing. If you think of Thinking, Fast and Slow, we're ignorant about lots of stuff and often we use our gut feelings. This is about slowing down and not just using our gut feelings about our ignorance.
I'd like to give some examples of how important that is, but also how important it is, when we are thinking about our uncertainty, to start thinking about magnitudes. How uncertain are we? And there's a nice story in the book, the Bay of Pigs. Now, some of you, well, actually you'd have to be pretty old to
remember this, because it was in 1961. In 1959, you know, the Cuban revolution: Fidel Castro and the revolutionary communists took over Cuba.
And almost immediately, the CIA started plotting the overthrow of Castro with Cuban exiles. And they had a plan to land 1,500 Cuban exiles at the Bay of Pigs on the south coast of Cuba, which I've been to. It's a very nice place. And...
This was secret, though. The US Army didn't know about it. It was a complete CIA operation. And then Kennedy, when he became president in 1961, found out about it. And he commissioned a review by the intelligence people, by the Joint Chiefs of Staff, about whether this was a good idea or not. And they thought the chances of success were about 30-70: in other words, a 70% chance of failure is what they thought.
Unfortunately, by the time that report got to Kennedy, that 70% chance of failure had been replaced by the phrase "a fair chance of success". Now I don't know how you would interpret the phrase "a fair chance of success", but it doesn't sound like such a bad idea. And I'm sure this wasn't the only reason he approved of it, but he approved it and it was a total fiasco.
All the invaders were either killed or captured. There's Castro himself jumping off a tank in the defence of Cuba, and it was a complete mess. And incidentally, you know, it pushed Cuba even closer into the Soviet sphere. And the next year was the Cuban Missile Crisis, the closest we've been to nuclear war. So it was a total disaster. Now, what that shows, I think, to me is that just using words like "a fair chance" for important issues is really, really dangerous.
And since the Bay of Pigs, people have realised that, because words are very easily misunderstood. So it's best to at least roughly define the words. And if you now look at climate change, for example, the IPCC reports, when they say something's likely, they actually, oh, I've forgotten this, I think it means between 55% and 70% probability.
I think that's their definition of likely. So different organisations use different ways of coding between words and numbers. But I'm going to talk about the UK intelligence services. If they say something is likely, it means between 55% and 75% probability. That's MI5, MI6 and the other intelligence operations. So that's the official UK probability yardstick. And it's public. It's public.
I gave a talk at MI5 and they gave me a mug. This is an MI5 mug; I keep this very carefully. And what it says is that if your coffee's down here, you're highly likely to need a refill, because highly likely means between 80 and 90%. And there's likely, between 55 and 75%. So if you thought that there was a 30% chance of success,
You would officially call that unlikely. Now, isn't that a huge advance from what happened to the Bay of Pigs? And that is mandated so that when in the intelligence services, people use words like likely, this is what they have to mean. I think it's an enormous advance there.
Okay, so here's an example of where these ideas were used in 2011. Here's President Obama sitting around the table deciding what to do because intelligence services thought that Osama bin Laden was in the compound at Abbottabad. And he had to decide whether to send in the SEALs to, well, actually to kill him. And so...
What he did, one of the important things, he then got multiple teams looking at the evidence, judging how likely it was that Osama bin Laden was in the compound. Multiple independent teams. And this is actually a quote from President Obama afterwards, when he was interviewed about this. So...
Some thought it was only a 30 to 40 percent chance, others thought it was as high as 80 to 90 percent. So these are the gung-ho enthusiasts, and these are the miserable people, otherwise possibly known as a red team, and we'll come back to that later, the idea of red teaming, where you deliberately have people who are rather pessimistic, who try to find all the flaws, all the things that might go wrong. And after a long discussion, Obama then said this is basically 50-50. He decided to go for it, and it was the correct decision: he approved the raid.
So one of the questions that people have asked since then is should all the different intelligence groups have got together beforehand to come up with a single number to give the decision maker? So who thinks they should have got together? Who thinks it's better that actually he received these multiple different opinions? Yes, exactly.
But some people have written about it saying, oh, they should have all got together. No, the decision maker should know when there are differences of opinion. He's the one who makes the decision. He should know about those disagreements. So I think this is actually a really good example of how you might go about making really difficult decisions. And we'll come back to an example like this later. So the point about these probabilities, of course, is that they're not based on analysis. These are pure judgments, pure judgments.
Just like in superforecasting competitions, when people make judgments about what's going to happen in the future, they're not doing calculations; these are probability assessments. And I just want to now prove to you, show you, that these, what we call subjective probabilities, pure numbers that come out of your mind, are measurable in terms of their quality. So I'm going to now get you to quantify your ignorance.
Pretty ignorant looking lot. So I'm going to ask you a string of questions and you say whether you prefer, you don't say it out loud, A or B as the answer. No cheating, no Wikipedia, no phones.
Then think of how confident you are. So are you 10 out of 10 sure that it's A? Or are you 7 out of 10 sure or only 5 out of 10 sure? And then you give your confidence a number 5 to 10. I'm going to give you the answer and then you score yourself according to this rule. Now this is a really, really important rule. This is so non-arbitrary. So if you say I'm 10 out of 10 that the answer is A and you're right, you get 25 points. If you're wrong, you lose 75 points.
It's completely asymmetric, it's really vicious, punishing overconfidence. If you say 5, you can just stay on 0. If you say 8 out of 10 and you're right, you get 21, wrong, you lose 39. And I'm going to leave that up because I'm going to ask you at the end, what's the formula behind that rule?
What's the pattern of those numbers? So just have a look at that. And now here's the first question, which is higher, the Eiffel Tower or the Shard? Now you can do this on paper, but you're also clever. I'm sure you can just do this calculation in your head. No phones. Which is higher, the Eiffel Tower or the Shard? So make a judgment. Do you think it's A or B?
Which one, if you had to answer, guess A or B. You've got to all do this. And then you've got to say how confident you are. Are you 10 out of 10 sure? 6 out of 10? 7 out of 10? Right. Have you all made a judgment? Okay. Have you made a judgment? And you've got to put a confidence on it. What's the answer? The Eiffel Tower. Only just... Only just... Now... Did anyone...
Did anyone put 10 out of 10 on the shard? Yes! Right, you have got off to a really bad start. And it's exactly to prevent this sort of behaviour that I'm doing this quiz. Sorry, you really bad start. Okay, who's older, the Prince of Wales, William, or the Princess of Wales, Kate? Who's older?
A or B? And how confident are you in your answer? And those people who were 10 out of 10 sure and were wrong, I hope you're learning your lesson. Okay. Have you made your judgment? And the answer is? Yeah, she's six months older. Okay. All right. Which was founded first, LSE or Imperial College? Shh, shh, shh, shh.
Which was founded first, LSE or Imperial College? Okay. Make your own judgment, A or B. Your confidence. And the answer is? A. 1895 against 1908. According to Wikipedia. I'm going by Wikipedia. Okay. How are you getting on? Two more, two more. Which is larger, Belgium or Switzerland? In terms of land area. Forget the mountains and stuff.
Just land area, which is larger, Belgium or Switzerland. No, it's all Wikipedia, so that you could just do it in a moment with Wikipedia. But you're not allowed to. Right, have you done your judgment? And the answer is? Yeah, B. Okay. Might have been quite easy. Right, finally, what years did Mick Jagger study at LSE? 1960 to 62, or 61 to 63? Now, you've got to work out, you know, come on, where...
When did the Rolling Stones start making their records? How old is he now? You can do some calculations, so you should be able to work it out, maybe. All right. And the answer is 61 to 63. And he only left, he only left, he wouldn't leave until he actually had a recording contract, and then he left. Okay, so how did you get on? Can I ask, who got positive points?
Not bad, not bad. Did anyone get quite a lot of points? Anyone get more than 80? Whoa. Anyone get more than 80? That's really good. Okay. So people who got a lot of points are pretty good. Either they were lucky or they know quite a lot. There are people who were quite cautious and stayed around 5s, 6s and 7s and so stayed around 0. Yes. Okay. So these are good. So you've got groups of people who actually know what they know, and you've got groups of people who know what they don't know,
which is really good. Now there's a third sort of person who got rather a low negative score. Yay! Who got less than minus 100? Oh yeah, great, very good. Okay so people who got a really low negative score think they know something and don't. These are not people you want as your financial advisors.
Now the point about this, I mean it's a bit of a joke, this may appear as a joke, this is deadly serious. These sorts of quizzes are used for intelligence analysts and so on to train them not to be overconfident. Not to say this is extremely likely when it's not.
So it was developed, actually the scoring rule was developed, in weather forecasting by Glenn Brier in 1950, to assess probabilistic weather forecasts for the probability of rain. And it's also the scoring rule that's used in superforecasting competitions, when people are judging probabilities for future events. What's the pattern of the numbers? Yeah, yes. Did you know that or did you work it out? Okay, so anyway, what he said: squared error loss.
Look, the most you can get is 25. So look at what you lose as you start making mistakes. There you lose nothing. There you lose 1. There you lose 4. There you lose 9, 16, 25, 36, 49. So you lose the square of the error.
So it's a standard squared error loss, which is used widely anyway. And that's got particular theoretical properties that encourages you to be honest. Because if this was a linear scoring rule, it just was a straight line, it would actually encourage you to exaggerate your confidence. So this has got a real mathematical meaning behind it. So, okay, luck.
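For anyone who wants to check the arithmetic, here is a minimal sketch in Python of that quadratic scoring rule. The formula is inferred from the numbers quoted in the talk (25 for a fully confident right answer, minus 75 for a fully confident wrong one), so treat it as an illustration rather than the official quiz rules.

```python
# A minimal sketch of the quadratic (Brier-type) scoring rule described above.
# With confidence c out of 10 in your chosen answer, the score appears to be
#   25 - 100 * (1 - p)^2 if you are right, and 25 - 100 * p^2 if you are wrong,
# where p = c / 10.

def quiz_score(confidence: int, correct: bool) -> float:
    """Score one answer, given confidence (5..10) in the option you chose."""
    p = confidence / 10                    # probability you gave your answer
    error = (1 - p) if correct else p      # how far you were from the truth
    return 25 - 100 * error ** 2           # squared-error loss, so 5/10 scores 0

# Reproduce the numbers mentioned in the talk:
print(quiz_score(10, True), quiz_score(10, False))   # 25.0 -75.0
print(quiz_score(8, True), quiz_score(8, False))     # 21.0 -39.0
print(quiz_score(5, True), quiz_score(5, False))     # 0.0 0.0
```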
Some of you may have been lucky. You just guessed, you said 10 out of 10 on something, and you got it right. So you may have been lucky. I like this definition, the operation of chance taken personally. So the idea is that luck are things, you know, when you're lucky or unlucky, they're unpredictable events, they're not in your control, and yet they have an impact, either good or bad.
So it's stuff that just happens to you that wasn't your fault and you didn't do anything to deserve it. I use an example: my grandfather, old Cecil Spiegelhalter, in the First World War. Sergeant Spiegelhalter he was then. And he ended up, his job in early 1918, as brigade gas officer for the Passchendaele sector in January 1918, which is just about the worst job you could ever have,
possibly, because he had to walk around the trench system inspecting the gas facilities. In order to get that job, which I reckon had a life expectancy of a couple of weeks, he had to do an exam. Imagine doing an exam for a lethal job. That's the sort of terrain we're talking about.
Anyway, he had a diary, which we've still got, probably illegal to keep that diary, in which he just describes his work: "Desolation, narrow escape on return journey, lucky to get through in time" and so on. And he only lasted three weeks in the job. And then on January 29th, 1918, he was coming home from Eagle Trench and he got blown up. Now, you may be able to guess that he survived.
Otherwise, somebody else would be walking up and down here entertaining you. And he was put back into the casualty clearing station and he was declared B2 and he avoided the front lines. He was only in the rear from then on for the rest of the war, which is incredibly lucky because his battalion then was moved to the Somme just in time to meet the 1918 offensive of a million German soldiers and they had to go over the top twice.
and he was, by then he was a second lieutenant, and he would have been first up the ladder blowing his little whistle and he wouldn't be there anymore and I would not be here. So, you know, was he lucky or unlucky? Well, you know, one way to think about this is that in some ways he was lucky and other ways he was unlucky because he was unlucky just to be there in the first place.
Then he was lucky to survive. So philosophers have, I think, produced a very useful deconstruction of luck. And the first type, which is the most important, is what they call constitutive luck. And it's the luck of just who you were born as. So I would say that we're all pretty lucky to have been born at this time and place in history. Okay?
compared with some absolutely ghastly times and places in history when we could have been born. I mean, it sounds a bit daft when you could have been born because it wouldn't be you because you are only here by a ridiculous set of lucky circumstances.
And in fact, in researching this, it made me think about, why am I here? My grandfather survived, but all these other events, my mother escaped Shanghai in 1936 under shell fire and all these sort of things. And then, oh dear, there are kids in the audience, nevermind. Don't tell anyone. Then I started thinking about my own conception. Now,
Have you ever thought about your own conception? I mean, it's not just teenagers who go, "Ugh!" at the thought of their parents having sex. I mean, it's really difficult. I had to wait till my parents had died before I could think about it.
And then I realized I was conceived in November 1952 in North Devon and I checked the weather records. Oh my God, it was a freezing cold snap in the whole country. And that's it, there was snow and ice, that sort of thing. So it was pretty well an unheated house. So what do they do? And here I am. So I could so easily not have been. But I had fantastic constitutive luck, I think, in terms of where I was born and when I was born, who I was born to.
My genes and everything. But then you've got circumstantial luck, which is being at the right place at the right time, or the wrong place at the wrong time. So my grandfather had the bad constitutive luck of being born just in time for the First World War, and the terrible circumstantial luck of wandering around the trench system in early 1918, when the German artillery would have targeted the crossroads and the roads. Then you've got outcome luck.
How it just happened to work out for you at that time, through nothing that's in your control. And the point is that the shell landed just far enough away that he was only blown up and not actually killed. So this is quite useful. He survived until he was 81. So this...
I've got a really good example of this from my friend. This is a plane crash, when a Dakota flew into Saddleworth Moor in 1949 and nearly everyone on board was killed. It flew into the mist. Except Stephen Evans. You may know Stephen Evans. He's a statistician from the London School of Hygiene. He was on that plane as a young boy and he didn't die.
So what was his luck? The point about it is that he had terrible circumstantial luck of just happening to be on that plane. Just, you know, through no fault of his own, he was on that plane that flew into the mountainside. He had incredibly good outcome luck in surviving when nearly everybody on the plane was killed. So what about his constitutive luck? His constitutive luck was the fact that his father had been in the RAF and knew that the safest place for a plane was right at the back.
So he always made his family sit right at the back of the plane. Unfortunately, actually, his small baby brother was killed. But that's the only reason Stephen is alive now is because his father made him sit at the back of the plane. So you just realise your life can be so influenced by things that are totally outside your control. And this is, I think, a useful deconstruction of those things. So maybe think about that, how it affects your own life. Because constitutive luck...
actually determines, you know, who you're born as, which we know is an enormous influence. I'd love to think that, oh, I'm Professor Sir David Spiegelhalter because of my hard work and blah blah blah. No. You know, huge amounts of it are just because of who I was born as.
You know, my genes, my upbringing, in my day, you know, my parents didn't have any money, but free education, free healthcare, free university, almost being able to choose what job you want when you come out, final salary pension. I mean, really, you know, my generation just ate all the pies. You know, tough luck on everybody else. So basically, constitutive luck means that so much of your life is determined just by who you're born as. So the lesson from that is you've got to make the best of the hand you've been dealt.
And I quite like that analogy that you just should make the best of the hand you've been dealt because you might think that, you know, we're all unique, you know, we are biologically unique, nobody's ever like us at all. You are almost certainly unique, just as every shuffle is almost certainly unique.
Because I'm shuffling the cards now, and I would say that nobody in the entire history of humanity has ever done that shuffle before, ever produced that order of cards. And I'm going to shuffle them again, and I can say I'm unbelievably confident that nobody in the entire history of humanity has ever done that shuffle before. I can't be logically certain. You know, it could be true. It's unbelievably unlikely that anyone's ever done that before.
And it does seem that every shuffle that's ever done is different. Now why can I be so confident of that? The point is, first of all, you have to think about how you'd go about thinking about that: think about how many shuffles there are. So how many shuffles are there? Well, when you think of a shuffle of cards, the first card could be any one of 52, and the second could be any one of 51 cards.
The next one could be any one of 50, and so on. We can multiply all that together, and we get what's called 52 factorial. And that's how many different shuffles there are. Now, you could try doing that on your calculator, and very soon the numbers will go off your screen, and after a while it'll explode. Don't do it. Don't try it at home, because it's a very big number.
It's that big number. Oh, no, it's a bit bigger than that. Oh, yeah, yeah, no, that, that, that, yeah. No, actually, it's a bit bigger than that as well. And, oh, I forgot some zeros. It's that big: 10 to the 68, roughly, 10 to the 68. The number of shuffles is about the same as the number of atoms in our galaxy.
So you can work out, I did something, you know: if you thought that every human that ever lived did nothing but shuffle cards their entire life, the probability that two of them would ever have done the same shuffle is about 10 to the minus 19, which is absolutely minute. It's about the chance that you could correctly guess two mobile phone numbers. And that's the chance that two people ever in the history of humanity have produced the same shuffle, assuming it's all done randomly, right?
And that's with the ridiculous assumption that that's all anyone's ever done in their lives. So I can be really confident that not only is every shuffle I can do different, but no two shuffles have ever been the same.
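If you want to see roughly how that sort of calculation goes, here is a little sketch of the arithmetic. The figures for how many people have ever lived and how often they might shuffle are illustrative assumptions for the sake of the example, not the speaker's own numbers, and the final answer is very sensitive to them.

```python
# A rough sketch of the shuffle arithmetic. The population and shuffling-rate
# figures below are illustrative assumptions, not the speaker's own numbers.
import math

n_orderings = math.factorial(52)
print(f"52! = {n_orderings:.2e}")        # about 8.07e+67 possible orderings

# Birthday-problem style bound: with n shuffles ever performed in total, the
# chance that any two of them match is at most n * (n - 1) / (2 * 52!).
humans_ever = 1e11       # assumed: roughly 100 billion people have ever lived
shuffles_each = 1e9      # assumed: shuffling every couple of seconds, lifelong
n = humans_ever * shuffles_each
p_any_match = n * (n - 1) / (2 * n_orderings)
print(f"P(any two shuffles ever match) <= {p_any_match:.0e}")
```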
Really difficult to grasp, because that's an example of a probability that is much smaller than we think it is. Now we'll come on to some that may be the complete opposite, ones that are more common than we think. So let me just show a little bit of video. You may have seen this. This is Derren Brown, an illusionist. And this was from a programme called The System from a few years ago. I'd just like you to watch it,
but I want you to watch this and try and work out how it can be possible because the key to understanding this is the key to understanding the system. He says he's going to flip a coin ten heads in a row. One, that's heads. Two, that's heads. No, there's no cuts in the film. Three, heads. Heads, that is four. Heads, five. Six. Four again, and I'll stop. Six.
Seven, three more, eight, nine, yes, last one, ten. Ten heads in a row. Thank you very much indeed. Okay, so the question is... I'll show you later on. Hang on. How did he do it? It really is, that was a single take. How did he do it? Sorry? Yeah...
He would have been pretty lucky. It's a one in a thousand chance of doing it: it's a half, times a half, times a half, ten times over. So it's about a one in a thousand chance. Ah, oh, how skeptical you are. Yeah, no, he is. He actually is flipping it. Yeah. Sorry, it's not two-headed. You're flipping a coin? Sorry? Yes.
Ah, some people say, yeah, but he's not flipping it into a bouncy pot like that. I know Persi Diaconis, a professor of probability at Stanford; he can flip a coin and make it come up whatever he wants, because he's a trained magician. But that's not the same when you're throwing it onto a hard surface, because then it does that sort of chaotic bounce. Yes? Yeah, yeah. We've heard that. Yeah. He flipped for nine hours.
And they reveal it later on in the program showing both, oh no, I'm sorry. So he did it for nine hours of flipping before he did that, which just shows, you notice how he was really good and he made it look as though it was the first time. He was really cool, really professional. Now he had been flipping for nine hours. Imagine when he got to eight, how he must have been going, oh, like this. And he kept so cool, like he knew it was going to happen. So,
He flipped 10 heads in a row. Was he lucky or unlucky? Lucky or unlucky to take nine hours to do it? Who thinks he was lucky to only take nine hours?
Who thinks he was unlucky to take as long as nine hours? Okay, so slightly in favour of the unlucky. So let's see what we can do. Let's do some sums. Let's do some maths. So he flipped for nine hours to get 10 heads in a row. As I said, the probability of doing that is one in 1,000, roughly. So how long does it take for a one in 1,000 event to occur? On average, you'd expect it to take 1,000 attempts. If you try 1,000 times, you expect it to happen once. Now, it's not guaranteed;
actually there's about a 63% chance it will happen within 1,000 attempts. But he took around 1,600 attempts. I've calculated that by timing how long each attempt took him and knowing it took nine hours, so I reckon he took around 1,600. So I think he was unlucky to take so long. On average you'd expect someone to take 1,000 attempts; he took longer. Now my colleague did it, James Grime, and he only took an hour.
And it's there. The whole thing is on YouTube. It's the most tedious YouTube video you could ever imagine. It's just him flipping away. But at least it's only an hour long. Because when he was filming, you didn't know it was only going to take an hour to do it. But he'd done it. You can find it on YouTube. I don't think it's got many views all the way through.
But it's there as evidence. So he only took an hour. Okay, so the point is we can do some sums. What we're talking about now is an event, flipping 10 heads in a row, and the event's got a probability of one in a thousand of happening. So the average time it takes is a thousand attempts. But actually we can work out the whole distribution; this is what's known as a geometric distribution, which is the waiting time
before such an event happens. Now the probability it happens immediately is 1 in 1,000; that's up there. That's the probability it happens first time. The probability it happens at the second attempt is 1 in 1,000, times 1 minus 1 over 1,000, which is the probability it didn't happen the first time.
So it's slightly reduced, and so we can get the probability of taking any particular length of time; it gets lower and lower and lower, and the probability you're going to take 3,000 attempts is very low indeed. So that's the shape of the distribution, called a geometric distribution, which has got an average of 1,000 and a mode at one: the most likely time to do it is immediately. Derren Brown took that long, James Grime took that long. So he was unlucky; he's in the tail of the distribution.
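As a rough sketch, the sums behind that look something like this. The attempt count of around 1,600 is the speaker's own rough estimate for Derren Brown, used here purely for illustration.

```python
# A sketch of the geometric-distribution sums behind "was he lucky or unlucky?".
# p is the chance of ten heads in a row in one attempt; 1,600 is the rough
# estimate of how many attempts Derren Brown needed.
p = 0.5 ** 10                              # about 1 in 1,024

mean_attempts = 1 / p                      # expected waiting time, ~1,024 attempts
p_within_1000 = 1 - (1 - p) ** 1000        # ~0.62: chance of succeeding within 1,000 tries
p_beyond_1600 = (1 - p) ** 1600            # ~0.2: chance of still waiting after 1,600 tries

print(f"mean attempts        : {mean_attempts:.0f}")
print(f"P(success in <=1000) : {p_within_1000:.2f}")
print(f"P(needs > 1600)      : {p_beyond_1600:.2f}")
```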
So sometimes we can put a probability on how lucky somebody is. And we'll come back to some more examples in a minute. So what I'm going to do now is, since there are some sort of more academic-y people in the audience, I'm going to do a little bit of slightly more technical stuff.
Because when we do statistics, people at school will be doing statistics and doing estimates and producing probabilities and maybe estimating probabilities and producing confidence intervals or something like that. And I'm just going to say that I've been doing that stuff all my life. I've been teaching it all my life. And now I think, actually, it's just wrong. It's what you can do when you're old. So I wouldn't say it's wrong. It's inadequate.
Because every one of those assessments, every one of those intervals we calculate is based on assumptions. And those assumptions may be wrong, like me flipping a coin and being a two-headed coin. So it's based on a model. It's based on a model for what we understand.
and we know that all models are wrong. So every calculated probability we come up with, every confidence interval is wrong. It's too confident. It should always be wider because it's dependent on assumptions which may not be true. In fact, they're certainly not true. So in lots of different areas,
People have gone beyond the standard statistical modelling that we know and love and teach, which I spent my life teaching. People go beyond it: they do an analysis and then say, yeah, but it's not very good, you know, we haven't got much confidence in our analysis, or, we have got real confidence in our analysis. And it's very liberating. Those of you who do data analysis, it's really liberating to be able to say, yeah, I've done my best, but it's not very good.
And loads of different areas have independently, without speaking to each other, developed their own confidence scales to do with the quality of the evidence, the quality of the models. In intelligence services, people will give probabilities, but they'll also express their confidence in their whole analysis, in their understanding. For health interventions there's the GRADE scale, a star rating for how confident people are about their assessments. In climate modelling, they do probabilities and confidence.
And in UK policy analysis in COVID, every COVID judgment that was made was accompanied by an assessment of confidence, from low, through moderate, to high, in the evidence and their understanding of the process. This is a qualitative scale. And we did it ourselves when we were working on the Infected Blood Inquiry a couple of years ago, where as a team we were asked to investigate how many people
got infected, and died, because of getting infected blood products between 1970 and 1991. This is a massive scandal, and we were asked to do the analysis of it. And for some areas, like HIV infections, you could count them. We didn't know the names, but there are registries. So we could tell, you know, pretty accurately, with high confidence,
how many people had got infected: about 1,250 people got infected with HIV through receiving blood products, mainly people with haemophilia. Terrible. Some just from blood transfusions, but mainly haemophilia. But for things like hepatitis C, we don't know. We haven't got a registry. We don't know how many people. There have been people who got infected with hepatitis C from blood transfusions in the 1980s who still don't know.
So if you know somebody who's older and starts having some problems, quite undiagnosed, with liver issues, get them tested for hepatitis C. They might have got blood when they were delivering a baby or something like that.
So many will not be diagnosed; it's very difficult. So we made an estimate that about 27,000 people had got infected with hepatitis C. We gave quite a large uncertainty interval, 21,000 to 39,000. But then we said, what's the confidence that the available evidence can answer the question? Only moderate. In other words, we did all our analysis and then said, yeah, but it's not that good.
It's all right. We've done our best to build a very complex statistical model, but loads of assumptions have gone into it, subjective judgments and so on. And it is enormously liberating to be able to do that. It means you can go beyond just what you can calculate; you can actually express your deeper uncertainty about what you understand. For hepatitis B, we were asked how many people got infected, and we refused to answer: we had such low confidence in the evidence that we said we weren't even going to try to put a number on it.
Really liberating, because this is what's known as mandated science. When you get told to do an analysis and produce a number, even if there's no good data there. So if you ever get involved with mandated science, sometimes refuse to answer. So what this illustrates is that when we do an analysis and come out with our nice little interval, don't believe it.
And this was illustrated beautifully during COVID. Remember the R number in COVID? We got told every week or so, oh, the R number is 1.2. That means that on average, somebody who's infected is going to infect 1.2 people. And so the pandemic is still increasing, because that number is greater than one. And there were multiple teams. There were eight different teams with 12 different models estimating R.
And every week they'd get together in a meeting and decide what they thought about it. And this is the sort of thing you get: 12 different models. These are the estimates for one week. They don't even overlap most of the time. So here's a really confident one. This is pretty confident. And it doesn't even overlap with this one. And they're all estimating the same quantity
from essentially the same data, but using completely different model structures. There are deterministic ones, there are agent-based models, differential equations; everybody's using different approaches. So what do you do? What they do is all get together, talk about it, and then produce a composite answer, which is sort of in the middle and wider than any of them individually, which I think is quite a good approach. But what that shows is how dangerous it would have been if there had been only one team doing this.
Because if there had just been one team: "Well, we're absolutely confident it's at this point." No, we can't be that confident. So this is of vital importance, which again the Osama bin Laden example shows: the vital importance of getting multiple perspectives. In particular, all the intervals are too narrow, as they assume the truth of their own analysis.
So they produce a pooled estimate. But the really good thing is they published all these results every week. You can get all this stuff online. Isn't that open and generous? A really good example of science, showing that science is a collaborative activity. And of course they do the same in climate change as well. They pool the results from all the models rather than believing any single model.
Okay, the point about this also, remember I mentioned this idea of red teams? So a red team is when you have got multiple teams investigating the same problem, the red team are the miserable people. You don't want them around. They're always looking for what can go wrong, they're pessimists, they go, no, no, no, no, no, no.
And the Ministry of Defence are the people, look, they have a whole red teaming handbook. They really push the idea of a red team to stress-test, to uncover hidden biases, challenge assumptions and beliefs, identify flaws in logic. Now these are people who really cause trouble. They disrupt the groupthink and the sort of cosiness of a decision-making body. So it's a really powerful idea. The Ministry of Defence are the people who promote this really strongly.
Even this, putting confidence in models, doesn't go deep enough. And now this is going into what we call some deep uncertainty. Now imagine, who's this guy, Trump? He started last week, started with a bang. Imagine if you were the government or a business or anything like that, trying to plan for the next four years about the U.S.
I mean, talk about deep uncertainty. It'd be really difficult because you just don't know what's going to happen. The unpredictability is massive.
So we're getting on to Rumsfeld's known unknowns. Sometimes we can do, we could specify a particular thing. Maybe for the Ukrainian war, we could list the possibilities, maybe. Maybe start putting a probability on each of them. Possibly, pretty difficult. But in some areas, we don't even know what the possibilities are. We don't even know the questions that are going to come up. The real unknown unknowns.
So, I mean, the point about a good red team is it should be trying to bring the unknown unknowns into the known unknowns. It should be imagining all the things that could happen, you know, with the list of all the possibilities. So basically, we're in the situation where sometimes we can't even think about what might happen.
This is a well-defined list. This is when we haven't got a clue what might happen; we can't even list what might happen. And then: can we put probabilities on things? Sometimes we might be able to put numbers on them, and sometimes we haven't got a clue what the numbers are anyway. And we can produce, I think it's quite a nice idea, I stole this from somebody else, a nice little sort of two-way table here, where in this corner, this is the nice stuff, this is the stuff we teach students.
When we know the possibilities and we can put numbers on it. Risk analysis. Then down here, this is where we still know the things, but people aren't very happy about their probabilities. They can give rough probabilities, maybe just a list of possibilities. They might be able to rank things. They can do stuff roughly, but they kind of understand what might, they know what might happen. This, we don't know what might happen. We can't even think of what might happen and we haven't got a clue what the numbers are anyway.
So that's the deep uncertainty. And that's where we're left with just trying to produce some scenarios, narratives, imaginative scenarios. And the response to that, of course, is resilient decision making. Decision making that should stand up to whatever happens, because we know that whatever happens is something we never thought of. And you've got to be really humble. You've got to admit you don't know. And that's quite difficult to work in that area.
Now this one you might think is a bit odd. This is where you don't know the list of what might happen and yet you're willing to put probabilities on. I think, what? That seems really bizarre. Now the people who do that are the Bank of England. The Bank of England do that.
Oh, sorry, decisions under deep uncertainty, I forgot about this. Yeah, if you've got deep uncertainty, this is the bottom corner. Sorry, I jumped ahead. This is the bottom corner, unknown unknowns. Oh, yeah, yeah, yeah, exactly. So the Ministry of Defence, to try to deal with the unknown unknowns, employ science fiction writers.
Isn't that cool? It's a really good book. They're great short stories. You can download this. They're really good short stories. Extraordinary stuff about drones taking over London. It's brilliant. So just to try to think about all these things that might happen. They will never think of what actually is going to happen with AI.
We will never be able to do it, but we can try to think about it. We need humility, make decisions that are resilient to things we may not have thought of. Sorry, I said all that. I forgot I had a slide for it. Okay, this corner is the odd one. We don't know what's going to... We don't know what might happen, and yet we want to put probabilities on it. Now, this is what the Bank of England do, or at least used to,
because I think they've abolished fan charts now, which I think is completely stupid. They produced fan charts. So this was a prediction of what GDP growth might be over the three years ahead, from the end of 2023. And this band, this is a 30% probability. This is based on a mixture of modelling and judgment. This is a 60% probability. That's a 90% probability. And that's all they do. The final 10% is unassigned.
They say something else might happen. 10% probability that something else might happen. And we haven't got a clue, it could be anywhere. And just as well they do, because that's what happened in 2020. This was their band and that's what happened. They weren't wrong. They weren't wrong. Because they said, sorry, there's a 10% probability that something else will happen and we're not saying what it could be. So they weren't wrong when that happened after that projection.
So I think that's the important thing. They actually do think about this, that we can give a probability to something else, none of the above, and then build that into our decision-making by being resilient. I think that's an enormous insight. Okay, so... Oh God, I'd better finish.
Oh, yeah. Oh, God. Yeah, nearly finishing, nearly finishing. I'll go through this very quickly. Sorry. Communication in crises, just because this is quite an important topic with COVID.
This is where there's a lot of uncertainty and yet we need to communicate. And John Krebs, head of the Food Standards Agency, recommended saying: you've got to say what we know, what we don't know, and be honest about that, what we're doing to find out, and what people can do in the meantime to be on the safe side. But the final one: that advice may change. Now that, what's called provisionality, is something that politicians are incapable of doing. They're totally incapable of saying, well, we're going to do this, but it might not work, and then we'll change and do something else.
Because then we'll find out more. So we advise doing this, but, you know, we'll come back to you and our advice may change. They never said that in COVID, which is why people were still wiping surfaces a year later, when it was a complete waste of time. And they knew it was a waste of time after about six weeks. And yet everyone was still wiping surfaces. Absolutely pointless. And so that guidance just got stuck.
Because otherwise they'd be accused of doing a U-turn, because they didn't admit any provisionality in what they were saying. Okay, so coincidences. Now, I've got to race through this. Why do they happen so often? And why do they never happen to me? Except once: I got six double-yolked eggs. There we are. So that was just an example: I got six double-yolked eggs, and one in a thousand eggs are double-yolked.
So is this a one in a million, million, million event? One in a thousand, six times over. Okay, I'll let you think about that. It seems a bit amazing. I'm about to make an incredible omelette; you know, a bit of white wine there. So, birthday coincidences happen a lot. Let's just do it, I think we've got time; I'll just go briefly through coincidences. Could you just turn to your neighbours on both sides and say your birthday, and say the last two digits of your phone number?
So just tell your neighbours on each side your birthday and the last two digits of your mobile phone number. Now. Okay. Stop.
All right, okay, have you done that? Have you done that? Now, there should be nearly 300 people in the audience. So I would expect two people are sitting next to each other who aren't twins, who've got the same birthday. Is there anybody who's got the same birthday as their neighbour? Have you? Not twins? Did you know? You knew already. Okay, right. That's right, it's quite... Is there anybody for whom it was a surprise? Yes.
Yeah? It was a surprise? Oh cool. Okay, so one pair who knew and one pair who didn't; that's just what I expected, because there's a one in 365 chance and about 300 opportunities for it to happen, so that's about one. Now, mobile numbers: how many people are sitting next to somebody with the same last two digits of their mobile phone number? Oh come on, must be somebody. Sorry? No, no, it's got to be the same order. Yeah.
Got to be the same order. Come on. Really? Have you got two there? Two. Last two digits. What were your two digits? Anybody else? Am I missing anybody here? Are they the only pair? Oh. Oh, I was a bit unlucky there. It should be... it's a Poisson distribution with a mean of about 2.9, so I would expect between two and three pairs. That's a bit disappointing. One. That's a bit of bad luck. So I was unlucky. I was unlucky. That's how it happens.
So sometimes you can judge these things. Now, with the birthday coincidence, I haven't got time to do the trick, but with 23 people, you know, a couple of rows here, there's a 50% chance that two of them have got the same birthday. With 35 people, an 81% chance that two share a birthday. With 80 people, it's pretty certain that two have got the same birthday. Oh, and this is the example: the Women's World Cup 2023, because there are 23 people in a World Cup squad.
And there were 32 squads, and 17 squads had at least one pair with a matching birthday. So more than half the squads in the Women's World Cup had a pair of players with a matching birthday. And three teams had two pairs, and Morocco and Nigeria had three pairs. I like the Nigeria one, because here's Glory and Christy, who were both born on Christmas Day. Isn't that lovely?
Okay, so matches. Now, the last two digits of phone numbers, this is quite a good one you can use. I won't do the sums now, but if there are 23 people, there's a 94% chance. The easy one to remember is that if there are 20 people, there's an 87% chance that two of them have got the same last two digits of their phone number.
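For anyone who wants to check those figures, here is a quick sketch of the standard birthday-problem product, which reproduces the percentages quoted in the talk.

```python
# A quick sketch of the birthday-problem arithmetic: the chance that at least
# two of n people share one of d equally likely values (d = 365 for birthdays,
# d = 100 for the last two digits of a phone number).
def p_shared(n: int, d: int) -> float:
    p_all_different = 1.0
    for i in range(n):
        p_all_different *= (d - i) / d
    return 1 - p_all_different

print(f"{p_shared(23, 365):.0%}")   # ~51%: two of 23 people share a birthday
print(f"{p_shared(35, 365):.0%}")   # ~81%
print(f"{p_shared(80, 365):.2%}")   # ~99.98%: with 80 people it's pretty certain
print(f"{p_shared(23, 100):.0%}")   # ~94%: same last two phone digits among 23
print(f"{p_shared(20, 100):.0%}")   # ~87%: and among 20
```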
I've not got time to demonstrate it now, but I know that in those first two rows it's almost certain that two people have got the same digits. But I don't want to hold things up too much. So you can kind of...
You know, you should be able to make money by betting people on that. Okay, so here's another one, I'm just finishing off now. Now, here's an interesting thing: there's a game called Treize, which people used to play in the 1700s, where each player would have one suit. So someone had the clubs, and someone had all the diamonds, and you put them down one at a time, clump, clump, clump, and then you'd shout snap if you both turned over the five, or both turned over the six.
Now, if two people were playing that, which would you bet on? That there would be a match or there wouldn't be a match? Which one would you think would be the most likely thing to happen? 13 cards, you're turning them over one at a time. What's the probability there's a match? Who thinks it's more likely than not there will be a match? Who thinks it's more likely than not there won't be a match? Yeah, it's 63% chance there'll be a match.
So nearly two-thirds chance. So if you play that game, always bet on there being a match. Now, the curious thing is that it doesn't depend on how many cards you've got. If you had a whole pack of cards each and you got a match if the number matched exactly, it'd still be 63%. If you only had five cards, it'd be 63%. It doesn't depend on how many cards there are, the probability of a match.
which I'm not going to try to demonstrate now. It's 1 minus 1 over e, for those who are interested in that. But it is quite extraordinary. Euler worked on this problem. It's only approximate at 5 cards, but it gets closer and closer to 63%, 1 minus 1 over e. So it's a game that's been around since the 1700s, and it's still a bit, you know, unintuitive to people that it doesn't depend on how many cards there are.
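And for the curious, here is a sketch of that matching-game sum, using the standard derangement formula, showing how quickly the answer settles down to 1 minus 1 over e whatever the number of cards.

```python
# A sketch of the matching-game (Treize) calculation: the chance of at least one
# "snap" when two shuffled sequences of n cards are turned over together.
# P(no match) is the derangement fraction, which tends to 1/e as n grows.
import math

def p_at_least_one_match(n: int) -> float:
    p_no_match = sum((-1) ** k / math.factorial(k) for k in range(n + 1))
    return 1 - p_no_match

for n in (5, 13, 52):
    print(n, f"{p_at_least_one_match(n):.4f}")   # all very close to 1 - 1/e = 0.6321...
```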
Okay, so I'm going to stop now. How did I get six double-yolked eggs? Pretty easy. I bought a box of double-yolked eggs. Again, it just shows: don't trust. You know, don't believe these numbers. One in a million, million, million. What a load of old nonsense. And, yeah, you can get them any time. Look, three pounds eighty from Waitrose, only two pounds from Tesco's for your double-yolked eggs. So,
This is the trick that you want to do, is buy a box of double-yolked eggs and then go home and switch them for the eggs in your family's egg box. So then when they start breaking them, they'll all be double-yolked and they'll go, oh! So it's a good trick, a good trick to play on everybody. So good luck and thank you very much. APPLAUSE Would you like to come over? Oh, yeah.
Thank you, David, for the very, very engaging lecture. So now we have some time to get questions from the audience. Anyone who wants to ask any question, please raise your hand. I see the gentleman over there has the first hand up.
Thank you. So my name is Jay and I'm an undergrad at LSE. So thank you for your time today, Professor. I wanted to ask about your unassigned probability, the 10% thing on Bank of England. So don't you think this implies epistemological kind of irresponsibility here? Because you're just admitting, you're just saying, oh, I actually don't know anything about this part.
Hence, I feel like you're almost denying the whole rational agent in economics. Thank you. Yeah, well, you're believing in this rational agent theory. I mean, it's a delusion, a complete delusion, that we can be rational, that we know these things. This rational agent theory, it is just a theory, because it depends on being able to specify all the outcomes and put probabilities on them. You can't do either.
You usually don't know all the things that might happen, and you certainly don't know the probabilities, because they don't exist anyway. I haven't got into that, but I don't think probabilities exist anyway. They're just constructed on the basis of assumptions and judgments. And so it is, yeah, a complete denial of rational agent theory and everything, because I just don't think it's sensible. It's a theory; it's not good to think that it applies in practice in any sense at all, because it's making assumptions that are just unrealistic.
So I think it's much more realistic to have the humility to admit we can't say everything that might happen, and just to give a little smidgen, or maybe quite a big smidgen, of probability to something else that we haven't thought of yet.
It seems to me a very reasonable thing to do, and I'm rather pleased that the Bank of England do it. You could just think of it as an unmodelled tail of the distribution. In the book I talk about how you actually get the same results if you just assume a Pareto distribution for the tails. But, you know, you don't even have to do that amount of modelling. You just say "or something else".
So I think it is the counter to rational agent theory, and saying that it is just unrealistic. But I don't go as far as the sort of radical uncertainty book of Mervyn King and John Kay, who kind of want to throw the whole thing out. I just want to be in this intermediate area between quantification and giving up quantification, because I think you should always try to quantify as far as you can,
But don't pretend you can actually do it. Thank you. I think there was a hand up from here, from the lady here. Yeah.
Hello, so I'm Radhika. I'm from the postgraduate course here. So I have a question which is not exactly academic, but as a statistician, what are your views about Murphy's Law? About Murphy's Law? Yeah. Oh, that's brilliant. Yes. Yeah. So Murphy's Law is that anything that can go wrong will go wrong. And I think that actually it's a red team mindset.
It's a red team mindset. It's anticipating the fact that anything can go wrong. That's almost the definition of a red team. That always just thinks of what the problems are and what could go wrong and plans, at least within some reason, for those eventualities, builds in a resilience to things that could go wrong. I don't think it's sensible to have it as a complete philosophy of life because it's not true.
It's just not true. But as a planning procedure, as a way, as a sort of mentality, certainly among one group of people, I think it's very reasonable indeed. But I don't think it's true. Because it's like luck. I mean, luck doesn't exist as an outside force, as a thing. It's a description we give afterwards to stuff that happened. You know, it's not some outside force. Thank you. Any other questions? There's one back there.
Thank you very much, Professor. I graduated last year from LSE in statistics masters. So this is not statistical, but it is more about advice, right? And it also resonates, a lot of your talk resonated with some snippets of Nassim Nicholas Taleb, who also has some work in this space. But as a young person, I come across elderly wisdom.
and I come across facts and statistics. That elderly wisdom has survived maybe centuries, millennia, but there are statistics that can overnight completely destroy that wisdom, or at least seemingly so. So how do you advise me to approach it when I'm caught between these two? Oh, oh.
Oh, that's outside my pay grade, I think. Yeah, I mean, what do you do? I mean, these are competing sources of knowledge, I think. And in a way, I don't like the idea of choosing between them. I actually think the elderly wisdom is, well, you can think of it as almost the fourth Rumsfeld quadrant that isn't mentioned: the unknown knowns.
It's things that people might actually know but almost don't realize it, the knowledge they hold as tacit knowledge. It's non-analytic, but it can actually be right, even though they perhaps don't even know they've got that knowledge or where it came from or anything like that. So I think that obviously is a source of understanding that could be very valuable, particularly because all statistical models are wrong. They're full of assumptions and judgments.
We're not rational agents. And so I think it's a good counter to this urge to quantify and to analyze, through which we can delude ourselves that we actually understand things. So it's a difficult thing. I think it is quite good to keep both of these in mind all the time, in a sense, because they shouldn't just be seen as either/or as sources of knowledge. I think that's my attitude.
- Thank you. We have a few questions from online. Let me just read them for you. So the first question is, the first guessing game with A or B as the answer could be a competition where you want to be the best.
Does this encourage risky and overconfident estimates? It shouldn't. The scoring rule is such that it should penalize people who exaggerate their confidence. That's why a squared-error score is used to teach this: you can prove that your expected score is maximized by being honest.
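As a rough sketch of that claim, here is a generic quadratic (Brier-style) score in Python; the exact rule used in the guessing game isn't specified here, so the scoring formula and the example probabilities are illustrative assumptions.

```python
def expected_quadratic_score(reported_q, true_p):
    """Expected score under a quadratic (Brier-style) rule: the score for
    reporting probability q is 1 - (q - outcome)**2, where the outcome is 1
    with probability true_p and 0 otherwise."""
    return true_p * (1 - (reported_q - 1) ** 2) + (1 - true_p) * (1 - reported_q ** 2)

true_p = 0.7  # your honest belief that the answer is A
for q in (0.5, 0.7, 0.9, 1.0):
    print(q, round(expected_quadratic_score(q, true_p), 3))
# The expected score peaks at q = 0.7; exaggerating to 0.9 or 1.0 lowers it.
```

That is the sense in which a quadratic rule is "proper": on average you cannot do better than reporting what you actually believe.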
I see. But I guess the point here is that if you want to be the best in the whole crowd, then somehow... Yeah. It depends. Again, if you're just trying to cross a threshold, in other words, you've got some... The loss function says that, well, it's pointless unless I'm the top. Then it might be worth taking a gamble and saying 10 out of 10, even though you're either going to fail miserably or you might succeed. But that's not...
That's a particular loss function. If you're trying to maximize your expected score, then you should be trying to be honest and not be overconfident. Right. We have another question here. So following the coin flipping example, what do you think is the best way of explaining that repeated rare events will eventually happen?
Oh, that's a good point. Well, yeah, again, I was talking about that before. Now when I talk about rare events, I both talk about the rare event and then also say how likely such an event would be in a bigger context.
So I do this stuff in the book: a few years ago there were three major plane crashes in eight days. Now that's a very rare event, but you can work out that, you know, with a moving eight-day window there's a very good chance that that'll happen at some point over ten years.
And similarly, I gave evidence at the inquiry into Lucy Letby last week, and the probability of the number of deaths that were observed at the Countess of Chester Hospital in 2015, compared with the previous years, was about 0.008. That is a rare event there, but there are 150 neonatal units, so such an event you'd expect to happen somewhere every year. I see. I guess the point here is that rare events are not all the same.
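A back-of-envelope check of that point, assuming (simplistically) that each of the 150 units independently has the same 0.008 chance of such an extreme year:

```python
# How often would a roughly 1-in-125 cluster (p ≈ 0.008) show up somewhere
# among ~150 neonatal units in a given year, assuming independence?
p_per_unit = 0.008
n_units = 150

expected_per_year = n_units * p_per_unit          # ≈ 1.2 such events a year
p_at_least_one = 1 - (1 - p_per_unit) ** n_units  # ≈ 0.70

print(round(expected_per_year, 2), round(p_at_least_one, 2))
```

So an event that looks extremely unusual at one hospital is, across the whole country, close to an annual occurrence.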
It depends on how rare they are. Yeah, rare events happen frequently. Good. Let's take some more... It sounds a little bit paradoxical, but it's true. I mean, winning a lottery is an incredibly rare event for the individual, but it happens all the time. Let's take a few more questions from the audience. I see a young gentleman over there. Am I the young gentleman? What sort of... As you've been...
going around talking, and also you've published your book, what sort of negative reactions have you had that have been interesting, and where have they come from? Oh, I see. Oh, with the book. Oh, negative reactions... Well, it's interesting. Some people just say it's too technical. Others say it's not technical enough. It's trying to, you know, satisfy everybody and in a way not satisfying anybody, you know, by putting in a bit of maths. But some people say, oh, I want more. So it never quite satisfies anybody because some people...
I got some comments on Amazon saying there's not enough detailed work about how you do scenario analysis within an organization. Well, I mentioned it, but it's not a risk management book. That's not what it's for. It's not an organizational management book. And the other thing is, I think a lot of people probably might give up,
because, you know, the chapters on luck and coincidence are fairly light stuff, as I could show you there, and then it does get a bit heavier when I start talking about statistical modelling and multiple analyses and confidence intervals being too narrow. So in a sense, you know, trying to write something to please everybody, it's likely that it won't please anybody. But...
Sorry, can I just follow up? That's interesting. I was rather thinking about people within the statistics world who took a different view philosophically. Oh, well, I mean, the point is that the book is an absolute polemic for subjective Bayesian stuff, with the main claim in it, of course, that probability doesn't exist. We just made it up.
Which I sincerely believe, and I've just got a paper in Nature saying this as well: probability doesn't exist. Possibly at a quantum level, but I'm not even convinced of that, and a lot of people aren't even convinced of that, that it exists as an objective property of the outside world. So I don't believe probability is out there. I think it's just an expression of our own uncertainty. All numerical probabilities are constructs,
and I don't believe they're even estimating a true underlying probability. That's quite an extreme view, which I was indoctrinated into 50 years ago, which I never shifted from, and which the book is actually a sort of ill-disguised polemic
to argue that probability doesn't exist, and that what we teach at universities and schools is a complete delusion. But never mind, it's quite useful. What I do argue is that it's very useful to pretend it exists, to pretend there is such a thing as chance and probabilities. It's really useful, but don't believe they're actually there in the outside world.
I mean, the paradox is, the situations where I'm almost willing to admit there are probability distributions, like gas molecules bouncing around and lottery balls bouncing around all over the place, I do think produce, and it's testable, a uniform probability distribution over the numbers. The paradox is that that is a completely deterministic situation.
There's nothing stochastic there at all. The lottery balls are just bouncing around following Newtonian mechanics; it's just immensely complicated, so you end up with a uniform probability distribution. So the irony is that one of the only areas where I'm willing to admit probabilities are actually there is completely deterministic, and it's only an expression of our ignorance. Yeah, we have time for a few more questions.
But that's quite an extreme view. Yeah, one over here. Sports betting. There's a whole section on sports betting.
whole section on how you work out probabilities for goal scores in football, because goals tend to follow a Poisson distribution. If you can model the expected number of goals in a match, using the attack strength and the defence weakness of each team, you can produce a probability distribution for the number of goals. What I describe in the book is the level of sports betting about
15 years ago, when the models were quite simple and people published them. They were found to work and you could make good money out of them, whereupon all the people who worked in that area started working in betting companies. My predecessor as professor of statistics in Cambridge left to join a sports betting company. So the point is that about 15 years ago, for the last...
15, 10 years or so, people have been able to make good money from fairly basic models for football and horse racing. It's getting a lot more difficult now because, at the bookies and the betting exchanges, there are so many people doing such good analytics that it's really quite difficult to beat the odds.
For a good 10 years, people made a lot of money by statistical modeling. But now it's got a lot more difficult because people are doing really sophisticated modeling for horse racing and football in particular. So I wouldn't advise it as a career now. In the past, I would have said it's a really good career. Not anymore, I don't think, yet.
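A minimal sketch of the kind of Poisson goals model he describes; the attack and defence strengths, the home-advantage factor, and the team figures here are invented purely for illustration, not taken from the book.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k goals when the expected number is lam."""
    return lam ** k * exp(-lam) / factorial(k)

def match_probabilities(home_attack, home_defence, away_attack, away_defence,
                        home_advantage=1.3, max_goals=10):
    """Expected goals for each side come from attack strength times the
    opponent's defence weakness; score probabilities then follow from two
    independent Poisson distributions."""
    lam_home = home_attack * away_defence * home_advantage
    lam_away = away_attack * home_defence
    home_win = draw = away_win = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, lam_home) * poisson_pmf(a, lam_away)
            if h > a:
                home_win += p
            elif h == a:
                draw += p
            else:
                away_win += p
    return home_win, draw, away_win

# Made-up strengths: a stronger home side against a leakier visiting defence.
print(match_probabilities(home_attack=1.4, home_defence=0.9,
                          away_attack=1.1, away_defence=1.2))
```

This is roughly the level of modelling he says was enough to beat the bookmakers a decade or so ago; the point of the answer is that the market has since absorbed far more sophisticated versions of it.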
Thank you. I think we had some very good questions. Unfortunately, we've run out of time. So let's thank Professor Sir David Spiegelhalter again for a very nice talk. Thank you for listening. You can subscribe to the LSE Events podcast on your favourite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk forward slash events to find out what's on next. We hope you join us at another LSE event soon.