
Proof, truth, and infectious disease

2025/7/3

The Lancet Voice

People
Adam Kucharski
Topics
Adam Kucharski: I find situations where the rules aren't clear particularly fascinating. Unlike physics, in areas like behaviour, health and disease we lack certainty. Yet we have large amounts of data and underlying dynamics, and mathematics can help us formalise those dynamics and compare them with data. I think infectious disease is a really important field where you can have real impact, and we are developing the tools to tackle these problems. During my PhD I focused on flu, and realised that because the virus evolves, the number of possible infection combinations becomes enormous, which makes analysis extremely difficult. So I started doing my own studies, collecting data, and thinking about the interaction between behaviour and infection. I think we have a duty to explain our work to the public, especially in the current era. Talking with children keeps you grounded in what the public are actually thinking and reveals the questions they really care about. It's important to remember that for adults the pandemic was a huge event, but for the younger generation that's not the case, so we need to bear that in mind before assuming certain concepts are obvious.


Chapters
Professor Adam Kucharski's journey from mathematics to infectious disease modeling is discussed. He explains the overlap between mathematics and epidemiology, highlighting the use of mathematics to formalize and compare observations in complex systems like behavior and health.
  • Professor Kucharski's background in mathematics and applied maths.
  • His interest in systems with unknown rules.
  • Transition to infectious disease modeling and outbreak analysis.
  • Importance of public engagement in research.

Transcript


Hello and welcome to The Lancet Voice. It's July 2025 and I'm your host, Gavin Cleghorn, here as ever with my co-host, Jessamy Bagenal. Today, we're excited to be joined by Professor Adam Kucharski, a leading expert in mathematical epidemiology and infectious disease modelling, whose new book, Proof, is out now.

We're going to chat about the challenges of communicating uncertainty in science, and we'll also discuss the evolving landscape of public trust, the influence of social media and AI, and what all of these changes mean for scientific engagement and policy. We hope you enjoy this conversation with Adam Kucharski.

Professor Adam Kucharski, thanks so much for joining us on The Lancet Voice today. Thank you for having me. I wanted to kick off by talking a little bit about your background. You started out as a mathematician and then you ended up working in infectious diseases and epidemiology, and I'm curious as to how those two things overlap. In the obvious way, obviously, because they both use numbers very broadly, but I'm sure there are more myriad interactions between the two.

Yeah, so I started off, I think like most people interested in science and interested in maths, by pursuing maths at university. Because I was at Warwick, you could do quite a lot of your degree outside the core pure maths and theory, and I ended up doing more mathematical biology and a little bit of theoretical neuroscience, systems biology and so on.

I think I became particularly interested in these situations where we don't necessarily know exactly what all the rules are. So in physics, you know, in theory, you can often derive a lot of knowledge of these systems. Whereas if you think about things like behaviour or health or disease, there are all these really important problems. And actually, we don't have these kind of laws in the same way. We don't have that certainty.

But you have increasing amounts of data, and often these underlying dynamics which you can describe, and mathematics is often just a way of formalising in our heads what we think might be happening and thinking about ways of comparing it with data. So particularly going into my PhD, which was applied maths, but really applied to a lot of evolution and immunology type questions, and then going more into outbreak analysis, I realised that as a field it was somewhere where, if you think about patterns, if you think about underlying processes and interactions,

there are some really important problems that can have a lot of impact, and you really have a toolkit that's starting to open up to tackle them.

That's really interesting. And then how did you make the leap into infectious diseases? When I came to do a PhD, I did it at Cambridge with Julia Gog, and that was really much more around flu and how we can use mathematical approaches to understand it. With flu in particular, you have this issue that, because it evolves, the number of combinations of infection you could have had over a lifetime is just enormous.

If you have the potential for 30 viruses to circulate, it's two to the power of 30, so you're getting upwards of a billion combinations. So as a mathematical and statistical problem it became very difficult to analyze simply, and for a PhD in applied maths that was a really good topic.
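As a quick illustration of that combinatorics (my sketch, not from the episode): if each of n co-circulating strains has either been encountered by a host or not, the number of possible infection histories is 2^n, which already passes a billion at n = 30.

```python
# Minimal sketch of the infection-history count mentioned above.
# Assumes each of n strains is simply "seen" or "not seen" by a host.
def infection_histories(n_strains: int) -> int:
    return 2 ** n_strains

for n in (10, 20, 30):
    print(f"{n} strains -> {infection_histories(n):,} possible histories")
# 30 strains -> 1,073,741,824 possible histories (about a billion),
# which is why strain-by-strain analysis quickly becomes intractable.
```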

But as I went on I became more interested in applications on the policy side and applications for behaviour. And, as with anyone who works with data, if you have frustrations about what's out there, you go and set up your own studies to collect things. So I started working more on going out and collecting antibody datasets and behavioural datasets, thinking about interactions between behaviour and infection, and running public engagement studies, thinking about how we can engage with audiences like schools, for example, where

there's a lot of value in understanding behaviour and infection in those settings, but you also want to embed that within something where the children and teachers are going to get something out of it as well. You've always been quite interested in that sort of public engagement work, right?

A lot of our funding is from public sources, so I think we have a duty to go out there and explain what we're doing, especially in the modern era. I think I've also got a lot out of it. It's sometimes quite humbling to go and talk to a bunch of school kids: some of the things that you think are very important they're not interested in, and some of the things they might be very interested in, in terms of disease and threats, might not have been something you'd thought of. So I've actually found over time it's probably made me a better researcher. It makes you more grounded, actually just talking to people about

the sort of questions they have and how they interact with your field, rather than, and it's a cliché, sitting in an ivory tower and assuming you have some sense of what the public are thinking. What are the kids most interested in? It's changed a lot over the years. One thing I found striking when I started doing a lot more of this, about 10 to 15 years ago, was first of all just the half-life on a pandemic threat, even something like the 2009 pandemic.

Within a few years, that's just meaningless as a threat. Things like measles, very little awareness of that. I think probably more now because of the situation we're in. It was also interesting coming from pre-COVID to post-COVID. So we had a lot of materials that really introduced these things from quite basic principles of this is a virus, this is what transmission means.

And we thought actually that the level of, I guess, understanding or even boredom might be much further advanced post-COVID, because people have just seen so many of these things. But actually, particularly for children who were slightly younger at the start of the pandemic, that's not necessarily the case; a lot of that awareness drops off quite quickly. I think that's quite important to bear in mind:

For a lot of adults, this is just an enormous event in our memory, but we do have a generation very quickly coming up where it's not the same event for them. And I think that's very useful to bear in mind when we're sort of assuming certain concepts are obvious, for example. Your new book has just come out, Proof. Congratulations. Reading through it, I was interested in how much historical content there was in the book.

And I guess one question I had from reading through it was, are the sort of methods of coming to know things in modern times fundamentally different to historically? And I mean that in the sense of things like social media or things like AI, you know, modern technological inventions, have they sort of changed the way that we come to understand things? I think there are a few, a few similarities and a few key differences. I think one of the reasons I like digging into the history of these ideas is because

there are so many methods we now have. If you take even just the idea that you'd run a clinical trial for something, it now seems like such a natural way of testing whether something works or not, and in medicine the first real modern randomized clinical trial was in 1947. So why did it take that long to get to that point? And why were these ideas, across different cultures, used in some places but not others? I think that's really useful to explore. I think going into the modern era, though,

one is that social media has changed our interaction with information. With particularly harmful information, the timescales you're dealing with are much, much faster: something can really travel from the fringes of the internet, and we've seen that even in the US with some of the claims about votes being rigged and that sort of thing. Something that's on a random internet site which, in the past, would have reached a handful of people and taken a very long time to get through; within hours, you've suddenly got that scale of problem.

I think also the emergence of AI, on the one hand, is delivering some remarkable scientific insights. A lot of what we've seen, particularly around things like structural biology,

has just opened up enormous avenues, even in terms of decision-making and our understanding of strategy in games, with a lot of the work that's been done on things like poker.

And talking to scientists in those fields, I think there's almost a little bit of sadness that a lot of this ability to understand the biology, which you might have had and which might have got you into the field, we're now relinquishing. But then there's also just the awareness that biology is complex and that you can't always have a very neat summary that explains everything. And if you've got tools that can make extremely valuable predictions, that's really helpful.

But of course, alongside that, you've also got elements like trust playing in. If you've got a self-driving car that's something of a black box, the ability to trust the decisions it makes is very different to perhaps what we had in science in the past, where amateurs could pick things up and play around with them. None of us

has the resources to train our own AI models to the standard of a lot of the ones that are currently being used. And so there is that element of trust, and of relationships with companies and technology, in almost outsourcing that understanding. I think the idea of trust is a very interesting one, isn't it?

And I guess it feels to me that, in the modern era, social media has eroded trust a little, in the sense that, just as you were talking about, if something's written on a little website somewhere, you can go and find it. And as long as it backs up a belief you already had, you can feel that belief being confirmed by this bit of evidence you've found, no matter where it is on the internet. This is a broad question, but how do we come to trust things in the modern era? And how do you feel about trust in the scientific process at this point?

Yeah, that's a really good question. It's a big one. I think even just the dynamics of social media, I think, expose you a lot more to those extremes. I think there's some idea that you never see an opposing viewpoint, but the evidence doesn't suggest that's the case, that actually people do see opposing viewpoints, but they've seen the most extreme version of them. So almost that drives this lack of engagement because you're not engaging with the perhaps more moderate versions of that argument that you could then

bridge and actually make some progress with.

I think in terms of trust in the scientific process, I mean, we still see now in surveys relatively high trust in science and medicine as industries. I think we need to bear that data in mind, even against that wider landscape. And remember that sometimes the loudest and most confident falsehoods come from a relatively small group of people. Even if you look at content on YouTube, for example, about 1% of people consume about 80% of the most

extreme falsehoods on those kinds of platforms. And so one of my own personal maxims is just: never read the comment section. I recently gave a talk that covered conspiracy theories, and very quickly you get people launching into a lot of defence of them. But I find that interesting, actually: why do people believe that? During the COVID pandemic, and during a lot of other health issues, it's important to understand that. And when we talk about trust, it's not just that you've got

a group of people who don't trust and a group who do; often you have some kind of spectrum. And I know when COVID vaccines emerged, for example, my wife was pregnant, and a lot of pregnant women weren't in the trials. And there was that question between us of: well, we've got this risk of a COVID wave, we've got uncertainties around the vaccine. And on balance we decided it was much better to get the vaccine, based on the emerging real-world evidence. Other people I talked to had some scepticism about pharma companies, about other things that they'd heard.

And it wasn't that they were deep into conspiracy theories, but on that gradient of trust they were just in a slightly different place. I think in some cases, and there's been some emerging work in this area, things like overstated certainty can often undermine trust, because actually people can handle a bit of that weight of evidence if it's communicated effectively and they really understand where they're coming from. No, I think it's a really interesting conversation. And I suppose,

Relating it to now and where we currently find ourselves geopolitically, when you read the historical parts of your book, it's very clear that actually there are very few people that were engaged in these conversations of scientific advancement, very few. It was an elite, generally male group of Europeans who were discussing important things and changing the landscape of science and medicine. And now it's in a much, much broader conversation

but we are somehow failing. And I was interested by what you were saying earlier about public engagement. It feels to me like we're not always looking at and talking about the things that are most relevant to people in their day-to-day lives, particularly in health, thinking of health and medicine, and how they access and use healthcare.

So we have a broader conversation, but we're often talking about topics which are much more distant from people's day-to-day lives. And I wondered what your reflections were on that, and on what that process has been historically: that we've come from a very narrow group of people to a much broader group of people, but somehow it's caused

many more problems. And that diversity of opinion is something that we haven't quite, and I say yet, because I think we will find a way of managing it, we haven't yet found a solution for managing that diversity of opinion, or that information ecosystem. I think that's a really, really good question. It's a big question, and one that, I mean, even if we just take

that wider access to information and access to expertise. Something I've thought about quite a lot in outbreaks, and certainly in recent years, is that there are a lot of people who've added a lot of value who are not PhDs in infectious disease or medics in that specific area. And I've thought a lot about why. In some cases, data journalists, for example, spring to mind: they did fantastic work in documenting and communicating understanding, often much better than we did.

And I think a lot of it was the intention of where they were coming from: that they were there to be useful and to try and push things forward, rather than having an agenda. That's something that I've noticed,

particularly when you get people very actively pushing falsehoods: there's often a desire to have that linked into reputation, you know, they're someone who is an independent thinker, they go against the grain. And I think once they are doubling down on that, it becomes much harder to have a constructive conversation, versus people who are much more interested in being useful, putting hypotheses out there and being able to test them. And I think similarly, with wider interested audiences,

One of the things you very quickly realize if you get involved in public engagement or even if you just try and talk to more diverse groups is you're wrong about a lot of things or you're wrong in your perception of how people feel about things or think things are important. And I think there's always a decision of what do you do next if you're wrong. And there's always a temptation to...

you know, put up the barriers and dig in, rather than think about how you transition to a better place. And I think that balance hasn't always been right. I mean, part of me would like to be optimistic: the access to information,

and potentially the access to expertise, is much bigger than it has been historically. But somewhat counterintuitively, we're ending up in a place where the interpretation of that information has actually become worse in a lot of ways. Yeah, it's so interesting, isn't it? And I wondered whether, through your thinking and writing of the book, you had any sort of recommendations. What should we be doing at The Lancet, for example? I know that's probably

too much to answer just on the spot. I think one thing that struck me, digging into these questions of falsehoods and misinterpretation and misinformation, is, first of all, that there are two elements to the problem. It's much like when you design a trial: you want to guard against thinking something is true when it's not, but you also want to guard against thinking something is false when it's not. And similarly, we have a lot of focus on people believing falsehoods,

but there's also this risk that people won't believe things that are true. And there's been some nice work, even in the past few months, looking at the effects of news consumption and that there is an effect of falsehood belief

But there's also just this disengagement from things that are true. And the majority of content that people interact with is generally from credible sources. That's something we saw even in vaccine-related content on Facebook; there's some nice work looking at this. Of that content, particularly during the vaccine rollout for COVID, about 0.3% was flagged by fact-checkers. And that didn't mean the rest was fine, because one of the most

widely shared headlines said: "A doctor dies two weeks after getting his COVID vaccine; CDC investigating." That was a factually accurate statement; those things had all happened. But it was being shared with the insinuation that we could infer something about the safety or effectiveness of the vaccine, which we couldn't from that particular report. And that particular headline actually reached about seven times more people than all of the fact-checked content combined.

So there's something which is factually accurate but has that potential for misinterpretation, which is much more impactful. So if we're talking about actions, we need to be thinking about where the problem actually is, in terms of the impact these things are having on people, and also about those balances: is it that people are very deeply engaging with things that are false, or is it that we're just seeing this erosion of trust, a

sort of erosion of the sense that anything could be true? And I think it then also plays into things like the role of institutions. And some of the people I talk to who work a lot on conspiracy theories have made a very good point that these people often aren't lazy or poorly read on things; they put a lot of effort in. I think it's worth making the effort to understand why people get to these places; it's not just random. There's a whole bunch of factors that have led them there.

We had quite a similar one actually during the pandemic. One of the most read articles on The Lancet in 2020 was an editorial that Richard Horton, our editor-in-chief, wrote. And the title of the editorial was COVID-19 is not a pandemic. Now, if you read the article, Richard is discussing how COVID-19 is a syndemic. It affects lots of different systems in a pandemic style at the same time that are all interacting with each other.

But hundreds of thousands of people saw the headline, "COVID-19 is not a pandemic, says Richard Horton, editor-in-chief of The Lancet", and just ran with it. And I was sort of, I mean, amazed isn't the right word, but you know what I mean? It was like they were taking our scientific credibility, reading just the headline and going: fine, it's not a pandemic, The Lancet says so. I find it interesting that even now

people will do things like that, or they'll, you know, send articles from early 2020, when it was just a very different epidemiological situation, a very different combination of evidence and how you should interpret it. I think it's also just the volume, and that's something that, for scientists walking into those debates, is sometimes easy not to appreciate. Whenever I've ended up interacting with people who very strongly believe things that just

aren't evidenced about some aspect of health, they will have 20 or 30 peer-reviewed papers in good journals to hand. The sum of those papers doesn't support the point that they're making, but that's a lot of work to then untangle. If I'd gone into those conversations assuming it's all just wrong, it's nonsense, it's going to be really easy to debunk, then ultimately, particularly in that very performative online space, and we have to remember that social media is a stage, not a nice conversation in a cafe, often

I would lose that debate very quickly if I went in with that kind of perception of how they were approaching the evidence that they were trying to present in support of their argument. It is difficult to unpick, isn't it? I was really interested, actually, in your book where you talked about the sort of nuanced understanding of misinformation, how, like you said, people had 20 or 30 peer-reviewed articles ready to go, but at some point...

There was almost like a single table leg holding up all of their beliefs: some complete misunderstanding of a particular paradigm. And that was kind of where they were. They were taking all this true information, but there was some underpinning of it that was false or misunderstood. And it's so much work for someone trying to communicate science to work back to where that false understanding might be.

One thing that can be very helpful, I find, in communication is not just trying to push back on the avalanche, but also explain the tactics that are being used. And actually, I realized when I dug more into these rhetorical techniques and some of these kind of flooding techniques that

I realised I'd been falling for a lot of them. You know, you stay up all night because someone's flooded you with a million questions and you can't respond to all of them, and then: oh, I look bad, I look like I don't know what I'm talking about. Actually, it can be far more successful to very succinctly call out what they're doing. Particularly if you say something and loads of people come after you with a million claims, you can say: well, I'm going to go back to my original point, this is what I've said and this is why I've said it.

Because even if you're not going to convince that one person, perhaps at the extreme of the distribution, you've got a lot of onlookers for whom you're presenting useful, balanced information, and you're not getting dragged into some online argument, which is often exactly what they're looking for. Did you get dragged into many online arguments during COVID? I'd like to think I avoided many of them.

I think, particularly as the pandemic went on, everyone was just so tired and frustrated. I'm sure there were some things where I was sharper than I would have liked to be. But I actually ended up just logging off social media a lot of the time, because I could see the damage it was doing to your wellbeing, to interactions with family, interactions with everybody. You can very easily live your life distracted and tired and frustrated. And I think,

in a way, because when the pandemic came I'd just written another book about information spreading online and all the unhealthy processes, I was a little bit more guarded. But it's just very easy to get pulled in. I think it's very hard to have a healthy relationship with social media; it takes a lot of work. And I've actually found, particularly in the last couple of years, that I've almost become a bit of an advice person for friends who've got into

difficult situations on social media and don't really know what to do about it, and they're dealing with the stress of it. And actually, even just saying the

thing of: you know, if someone's commented on your post and it's not a big problem, almost nobody else is going to see it unless you've seen it, and it will go away. Those kinds of things are very hard to see in the moment. I mean, at the moment I'm in that phase where I'm just trying to think of, you know, the positives of the future. Because all of this is very new. You know, it's 15 years, really, say. Yeah.

And we've had some huge things that have happened in those 15 years. And the pandemic was a massive learning experience for us all. We're going through what seems to be another transformative stage now, you know, on the sort of global front. And what are the learnings that we can take from this?

for us in the health and science community, positively, as to where we need to be moving towards? Because there are positive lessons here. And I think the things that we're talking about, the fact that lies spread faster than truth, it's a much broader problem. The reason those lies spread faster than truth is often because they're speaking to something which is very real, to a feeling, or

an inequality and injustice that people are feeling in a very deep manner, and that we are somehow missing with our sort of factual or truth response. I mean, I accept as well that there are algorithms, there are, you know, many other factors at play, but there is often some core there. And I'd love to hear your reflections, Adam, on, you know, we're in

A very interesting space, it seems to me, in history. And I'd love to hear from you, what are the positive things that we can take forward and do? And where do you see this all going in the next five years? I mean, maybe first on the positives. I think that...

That ability to make connections across fields. I mean, I've had collaborations that have come off social media, in the good old days when there were fewer angry people and a bit more science. And even for early-career researchers, the ability to just put a preprint and a summary out there and get thousands of eyeballs on it; especially in the era of climate change, you don't want to be going to conferences every month, and you can mimic a lot of that. And I think there are a lot of

potentially really good links that can be made. I think also there's that ability to translate across fields in different ways. Even in COVID, for example, weirdly, my most shared post was an extremely mathematical post about the alpha variant, basically making the point that transmission will trump severity: if you have something that's 50% more transmissible, you've got a bigger problem than with something 50% more severe, because greater severity just scales the numbers up, whereas greater transmissibility gives you an exponential effect.
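To make the "transmission trumps severity" point concrete, here is a minimal sketch (my illustration, not the original post) using a toy branching process with assumed numbers: a reproduction number R per generation and a fixed fatality risk per case.

```python
# Toy comparison: 50% more transmissible vs 50% more severe.
# Parameters are illustrative assumptions, not fitted to any real variant.
R_BASE = 1.1       # baseline reproduction number per generation
CFR_BASE = 0.01    # baseline fatality risk per case
GENERATIONS = 10   # transmission generations considered

def expected_deaths(r: float, cfr: float, generations: int) -> float:
    """Expected deaths from one seed case in a simple branching process:
    total cases over the generations, multiplied by the fatality risk."""
    total_cases = sum(r ** g for g in range(1, generations + 1))
    return total_cases * cfr

baseline = expected_deaths(R_BASE, CFR_BASE, GENERATIONS)
more_severe = expected_deaths(R_BASE, CFR_BASE * 1.5, GENERATIONS)         # scales linearly
more_transmissible = expected_deaths(R_BASE * 1.5, CFR_BASE, GENERATIONS)  # compounds each generation

print(f"baseline:               {baseline:.2f}")
print(f"50% more severe:        {more_severe:.2f}")
print(f"50% more transmissible: {more_transmissible:.2f}")
```

Under these assumed numbers, the more severe variant produces 1.5 times the baseline deaths, while the more transmissible one produces more than ten times as many, which is the compounding effect being described.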

That post took off on its own, but then you had graphic artists reworking it for Instagram and getting many, many more shares, and then people reworking it for other platforms and putting the idea elsewhere. And that's something that just couldn't have happened in the past: I probably would have written a letter to a journal, and it might eventually, in a few weeks, have got somewhere.

Now, I think there are all these glimmers of things that can be really powerful and promising. Even just with bits of technology and AI coming through, we had some students who put together little interactive things with the help of some coding and some AI, and they were able to explore ideas and iterate much, much further. A lot of the barriers to entry have come down, whereas previously there would have been money and technical barriers. I've even found, working with collaborators around the world who are building out their data science teams,

I was traveling a bit before Christmas, and they were saying: we'd love to put up some dashboards, but we don't have the resources to get them in. I said, look, let's just sit down for an afternoon with some AI co-pilots; you can prototype, get over that hurdle, and then start working on it. But as you said, there are those downsides, and I think about it a little bit in terms of how we might approach health threats. If you have harmful content, and you want to keep the benefits and get rid of the harm, there are a few options you might have. If you think about an outbreak,

one is that you just chase down the bad thing. We do that if we have an outbreak of a new disease: you somehow work out where the cases are and you treat them or isolate them. Another option is that you change the nature of interactions. We saw that very dramatically in COVID: we didn't know where the infections were, and a lot of countries' responses were essentially to fragment the network and make it harder for the infection to take off. We saw it even in the financial crisis in 2008, with some of the responses around things like ring-fencing; that was basically stopping the

risk spreading from the investment side to the deposit side of banks. But again, that's something that's quite disruptive. There are a lot of aspects of online interactions where it would probably be healthy to slow things down or add some more friction, but companies are understandably very reluctant to do that, given that a lot of their money is in these things taking off. But the third thing, of course, and the thing that we often prefer in health, is that

you have things like vaccines, which avoid the need to interfere with people's interactions and the need to chase things down in real time. And I think it's partly about that explanation, that awareness of how you might fall for these things and for this harmful information. But, as you said, it's also those deeper social reasons why people might be mistrusting. To some extent, just saying we're going to chase down the bad things when they emerge is a bit of a sticking plaster if there are actually

underlying reasons why there's a lot of susceptibility to these particular ideas and beliefs. You mentioned AI there, and one thing that occurred to me while I was reading your book and about your work during COVID: if the large language models that we have available now had been available during COVID, five years ago, what do you think might have played out a little bit differently?

That's a really good question. It's something we're working on a lot at the moment, actually, especially given some of the signals we're seeing with some pathogens around the world.

I spent quite a lot of time almost trying to write down every task I did during COVID and during past outbreaks, and why each one takes so long. We did a meeting earlier this year, actually, bringing together a lot of analytics groups, and wrote down the common tasks you'd want to do early in an epidemic. On average, people estimated each one would take them about a working day. So that's a real margin we can target

with something like AI. Then: which bits could AI be good at? There will be some bits AI won't be good at, and bits we don't want to let AI near. If you're making policy on something, you don't just want to have a fully generated code base and let it go.

But there are elements. One of my colleagues, Billy Cawley, has been doing some nice work looking particularly at things like narrative reports, where you might have a big outbreak document saying so-and-so went here on this day and did this thing. Can we convert that into a nice table that we can easily summarize? These are the things you just sink time into, even extracting bits of information from the literature in a very rapid way: I want an incubation period, or I want to know a certain delay in an outbreak. Can we just streamline a lot of that?
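As a rough illustration of that narrative-report-to-table idea, here is a minimal sketch. The `call_llm` helper and the field names are hypothetical placeholders for whatever model and schema a team actually uses; the point is only the shape of the workflow: ask for strict JSON, then parse it into rows you can tabulate.

```python
import json

# Hypothetical helper: wrap whatever LLM client or local model is available.
# It takes a prompt string and returns the model's text response.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

# Illustrative schema: one row per case event mentioned in the narrative.
FIELDS = ["case_id", "date", "location", "event"]

def narrative_to_rows(report_text: str) -> list[dict]:
    """Ask the model to turn a free-text outbreak narrative into a JSON
    list of records with the fields above, then parse that JSON."""
    prompt = (
        "Extract every case movement or event from the outbreak report below "
        f"as a JSON list of objects with keys {FIELDS}. "
        "Return JSON only, with no commentary.\n\n" + report_text
    )
    return json.loads(call_llm(prompt))

# Example usage (once call_llm is wired up to a real model):
# rows = narrative_to_rows("Case 3 travelled to the market on 12 May and ...")
# for row in rows:
#     print(row["case_id"], row["date"], row["location"], row["event"])
```

In practice you would also want to check the extracted rows against the original text before they go anywhere near a line list, in the spirit of the point above about not handing fully generated outputs straight to policy.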

And then I think also, from a learner's point of view, one of the big problems we had during COVID, and for a lot of outbreaks, is that a lot of analysis is really bottlenecked:

it becomes very centralized. What has historically happened is that your data goes to some international group, that group does the analysis, usually amongst a small bunch of people, and then it gets communicated back. And that doesn't scale; you can't suddenly have a hundred countries wanting that. It's also not really sustainable or equitable: you'd much rather have those tools being used within countries. And I think AI has a lot of promise in just helping people

with training requests, or helping them understand how to use these tools, or with "I've got this and I want to change it to this", getting people over a hurdle that previously created a bottleneck around a small number of experts you were relying on. So I think for me there's a lot of optimism that, aside from the fact that you've got a workforce which is extremely depleted and tired, a lot more of that analysis could be done globally by growing teams rather than having to be outsourced.

We talk quite a lot about communicating certainty, I suppose. And, as you talk about in your book, there isn't really such a thing as complete certainty. But what struck me a lot during COVID, with the comms situation, was how difficult it was to communicate certainty to the public. You know, the whole "follow the science" thing, and

saying this is what the science believes, when the science was completely shifting. I thought it was interesting that you mentioned in your book travelling to work in early 2020, which at the time involved a lot of hand sanitizer and no masking, which obviously we now realise was the wrong way to approach avoiding COVID infection. So I wonder if you've got any reflections on communicating uncertainty to the general public?

Yeah, I think that was an extremely important question during the pandemic. There were a lot of times where we could all point to communication that got it wrong. Even around the debate on airborne transmission, there was a lot of certainty: this is a fact, this is not airborne. And of course that wasn't a fact; there was, even at the time, uncertainty. And now we have more evidence, although again it's context-specific, around different modes of transmission.

I think there were some emerging examples of countries that communicated that much better to the public, particularly in terms of changes of direction. Denmark was one that sprang to mind. Singapore, I thought, over time, particularly around reopening, was explaining: we're doing this based on this evidence, and that evidence might change,

rather than: the science tells us this is the correct thing, this is definitely the correct thing, and then three days later the science has apparently completely changed. It hasn't; it's just the policy that's moved. I think also that separation between what we're seeing, what our views are on how we intervene, and then the policy decisions on intervention, is quite important. We see this for disease and we see this for climate change:

you might have a lot of people who agree on the basics about a disease. You agree on the transmission routes, you agree on the severity, you agree on broadly the impacts it will have on society. You might even agree on the effect different interventions will have, how good certain non-pharmaceutical interventions are, how good certain border measures, testing and so on.

But you might strongly disagree on what you do about it, because then you've got values, you've got all of these social and moral aspects as it gets into the policy space as well. And I think it's really important to understand which level we're on. What often got blurred during COVID, and in other situations, is that people talk about one level as if it's the other. They'll talk about something which is perhaps a difference in values around implementing something and present it as a debate around the science or the evidence. Or someone might

be disagreeing on whether an intervention works or not and present it as a disagreement on policy. In that case, it's more useful to get back into the science and at least agree at that level before you move on to the next. I think we saw a lot of blurring between those. And that's one thing I found researching the book: it's not a new thing. Even in Austin Bradford Hill's work on smoking,

he was quite focused on having that separation: you can establish the evidence that smoking causes cancer, but the policy of what you do about that is a separate issue. It's not, in his view, for scientists to dictate the policy based on the evidence. I saw a lot of discussions that could have been much more constructive, I think, if we'd been focusing on which bit we were actually talking about.

I guess I'm interested then to know what your reflections are on how science interacts with policy post-pandemic, and what role you think it has to play in government and the formation of policy? I think that's a great question. It's a very big one. I think there are a lot of areas where science has played its normal role and can continue going forward in informing policy. I think probably one of the challenges during COVID was particularly

where the policy space was extremely narrow, and that almost puts pressure on the science, because you're constrained within it. The classic example is very early on, with a lot of the scenarios that were being looked at for the UK: a lot of those early scenarios were around three months of interventions, and if that's your constraint, you cannot come up with a scenario that doesn't, sooner or later, have a massive epidemic. And so then you get that mix of:

is it the science saying you're going to have a big epidemic, or is it the policy constraint that's been put on the range of things being looked at? And so I think one of the really useful things that could come out of this is keeping the question attached to the analysis. We often saw during COVID that it was: here are some scenarios, or here's some

claim about something, and actually that was often in response to a very specific what-if question. Having those two things linked, this is my question and this is how the evidence feeds in, can be very helpful for understanding how science is informing policy, because then,

if things are public, people can at least ask: is that the right question? I think what was also very helpful as the pandemic went on was faster, more transparent release of evidence. That was particularly helpful with these very dramatic measures, where, as we got into the pandemic, you'd see the evidence released almost a day or two later and people could very quickly critique it. I found it very hard early on, and I know a lot of colleagues did too: you essentially had evidence, you knew evidence,

and it wasn't public. And then, even with media interactions, it's very hard to say things where there's no public record of them, because then it's a "believe me, I'm a scientist" type of interaction. I much preferred being able to do media where you could walk people through it and say: look, this is the emerging evidence about a variant, here, you can read it, this is what you're seeing and this is why you're seeing it. It created a much healthier interaction around communicating that uncertainty. I mean, if we had to write a new definition of proof, what would it be?

I think one of the things I found as I was going through the book was a little bit this tension between the academic desire for certainty and then just the pragmatic real world approach to how we use things. And even if you look back historically, there was this movement of pragmatists in the 19th century. And they say proof is what works, which I think is perhaps the extreme version of that.

But I think it is important, in this world of uncertainty and, importantly, a world where you have to make decisions, to think about where you set the bar. And that bar may not be the same for every action. I think one of the dangers sometimes with requiring a very high level of confidence is that, by default, you're going to prefer inaction.

And inaction is a decision. And so I think sometimes we get into a situation of defaulting to inaction, rather than actually weighing up what we're dealing with, what the uncertainties are, where we're happy to take that risk, and even taking a risk to accumulate more knowledge that we can then build on. I mean, that's almost the whole idea of a clinical trial in an emergency:

we're taking on some risk now to build knowledge that can prevent risk in future. And I think that's a really important element to think about, rather than this kind of search for purity, which, particularly under pressure, won't necessarily give us the outcome we want.

All right. Well, Professor Adam Kucharski, thanks so much for joining us on the podcast. Congratulations on your new book, which I believe is out already. It is, yes. Yeah. Fantastic. So people will be able to find it wherever they get their books. And thank you again for joining us. It's been really fascinating. Yeah, thank you. Great to chat. Well, thanks so much for joining us today at The Lancet Voice. If you've enjoyed the podcast, please leave us a review on the podcast platform you generally use, and we'll see you again next time. Take care.