
The Edge of Sentience: risk and precaution in humans, other animals, and AI

2024/12/3

LSE: Public lectures and events

People
Jonathan Birch
Roman
Topics
Roman: Professor Jonathan Birch's research has made a major contribution to the amendment of UK animal welfare law, in particular the extension of protections to crabs and lobsters, showing the positive influence philosophical research can have on social practice. His work is not confined to academia; it engages with practical questions of animal welfare and has had a direct impact on policymaking. Jonathan Birch: I work on improving the methods for studying animal sentience and on using the latest scientific evidence to design more effective animal welfare policies and laws. The research focuses on how to assess sentience scientifically, especially in species evolutionarily distant from humans, such as invertebrates. It also addresses questions of sentience that may arise in emerging areas such as human neural organoids and artificial intelligence, and calls for cautious precautionary measures. The core claim of the book is that, faced with enormous uncertainty about sentience, we should adopt a precautionary principle, give individuals and systems the benefit of the doubt, and set policy in proportion to the risks. This requires scientific evidence, ethical reasoning, and public participation working together. Jonathan Birch: The book does not attempt to offer a complete theory of sentience; it is concerned with how to make decisions in the face of uncertainty. Overconfidence about sentience is dangerous and should be avoided. When dealing with questions of sentience, the concept of a 'sentience candidate' should be brought into decision-making, which helps turn unanswerable questions into answerable ones. The book also stresses the importance of public participation and recommends democratic mechanisms such as citizens' assemblies for making fairer, better-grounded policies. Using octopuses, crabs, and lobsters as specific cases, it argues that these invertebrates are sentience candidates, and calls for bans on octopus farming and imports and for better methods of handling crabs and lobsters. It also discusses questions of sentience that may arise in human neural organoids and artificial intelligence, and proposes corresponding precautionary measures.


Key Insights

What is the main focus of Jonathan Birch's book 'The Edge of Sentience'?

The book focuses on developing a precautionary framework to make ethically sound, evidence-based decisions in cases of uncertainty about sentience in humans, other animals, and AI. It addresses questions like whether octopuses, crabs, or AI can feel pain or pleasure and how to manage these risks responsibly.

Why does Jonathan Birch prefer the term 'sentience' over 'consciousness'?

Birch prefers 'sentience' because it captures the capacity to have feelings that feel good or bad, such as pain, pleasure, boredom, or joy. 'Consciousness' can refer to more complex cognitive overlays, while 'sentience' focuses on the immediate raw experiences, which are more relevant to ethical considerations.

What evidence supports the claim that octopuses are sentient?

Experiments like the conditioned place avoidance test show octopuses exhibit behaviors similar to mammals in response to pain. For example, octopuses avoid chambers where they experienced pain and prefer chambers where they received pain relief, indicating a realistic possibility of sentience.

What are the ethical concerns surrounding octopus farming?

Octopuses are solitary and aggressive in close confinement, leading to injuries and stress in farming conditions. Intensive farming raises significant welfare concerns, and Birch advocates for preemptive bans on octopus farming and imports of farmed octopus to prevent unnecessary suffering.

How does Jonathan Birch propose addressing uncertainty about sentience in AI?

Birch suggests treating AI systems as sentience candidates if there is a realistic possibility they could feel pain or pleasure. He advocates for a precautionary principle, looking for computational markers or behavioral experiments that could indicate sentience, rather than dismissing the possibility outright.

What is the significance of the Animal Welfare Sentience Act of 2022?

The Act extended protections to include cephalopod mollusks and decapod crustaceans, such as octopuses, crabs, and lobsters, recognizing them as sentient beings. This change was influenced by Birch's research and aims to improve their welfare in practices like farming and slaughter.

What role do citizens' assemblies play in Birch's framework for managing sentience risks?

Citizens' assemblies are proposed as democratic mechanisms to debate and decide on proportionate responses to sentience risks. They allow the public to weigh in on ethical and policy decisions, ensuring that expert assessments of risks are balanced with public values and preferences.

Why does Birch argue against overconfidence in denying sentience in certain cases?

Overconfidence in denying sentience, such as in the case of unresponsive brain injury patients or invertebrates, can lead to neglect and suffering. Birch highlights historical examples, like surgery on newborns without anesthesia, to show the dangers of assuming sentience is absent without evidence.

Transcript


Welcome to the LSE Events podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences. Good evening, everybody. Can you all hear me?

Great, welcome to this evening. Tonight we have come together to celebrate Professor Jonathan Birch as a wonderful scholar and to listen to his inaugural lectures. Now inaugural lectures are important events.

I mean, they're obviously important for the scholars who give the lectures because they mark a significant moment in their academic lives. But inaugural lectures are also important in the life of a department.

because this is a moment for the department to come together and to celebrate the success of one of us and to celebrate that together with invited friends and colleagues, supervisors, students and families and as such this is a wonderful moment.

But since inaugural lectures are not only academic lectures, they're actually public lectures, they also serve as a powerful reminder of the importance that researchers, and dare I say, in particular philosophers,

play in society beyond the walls of academia. The significance of research is not only measured in terms of the number of published papers and conference talks or maybe REF statements, it's also measured in terms of what research contributes to society.

And Jonathan exemplifies this dual character of research in an exemplary manner. He has made absolutely groundbreaking contributions to his academic disciplines.

Discussions concerning the evolution of social behavior norms, the study of animal sentience, debates over the relation between sentience and welfare would not be the same without the many pioneering contributions Jonathan has made. But at the same time, Jonathan has never been just a philosopher's philosopher.

someone who writes in the rarefied air of the ivory tower. Jonathan's work has always been deeply rooted in practical questions, in particular questions concerning the welfare of animals. Just like in research, things would not be the same in the arena of animal welfare without Jonathan's contributions.

In the year 2022, the Animal Welfare Sentience Bill got extended to include crabs and lobsters, and these protections are now enshrined in the Animal Welfare Sentience Act, and a new code of practice for shellfish in the industry has been proposed.

Jonathan's research had a direct impact on these changes in the law, many of which would not have come to pass without his tireless work. So tonight we have come together not only to celebrate Jonathan's success as a researcher, but also to celebrate his important work in a context of society beyond the academic frame.

Before the lecture starts, just a few points of logistics. The lecture is about 50 minutes long, and that is followed by about 40 minutes of questions. So there's plenty of time to engage with Jonathan's work, but I would ask you to really ask a question, keep it concise, and not give a counter lecture.

But now you've probably had enough of me. Without further ado, please join me in welcoming Jonathan to give his lecture. Thanks very much, Roman, for that exceptionally kind introduction. And thanks so much to all of you for being here.

It's very moving to see an audience with so many of my friends and colleagues, people who have supported me throughout the last 10 years in which I've been at the LSE. Before I begin, two quick comments. One is that there will be slides. Sorry about that. In fact, there will be a lot of slides. If you find that you want to revisit any of the slides later, if you want to click any of the links in the slides,

There's a QR code here, or just go to bit.ly/virtuelse on your own device. And also, I'll be talking about a brand new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI.

just out. You can buy a copy at the end of the event outside. And the wonderful thing about this book for me is that thanks to the European Research Council that funds my work, it is an open access book, which means it is free to everyone to read online. And if you want to find out how to get the online version, just go to edgeofsentience.com. Now, these inaugural lectures, they're obviously a time to reflect on

the very long period now that I've been here. I joined the LSE in 2014. This is my sort of 10th anniversary as well as my inaugural lecture. And I feel as though I've been very fortunate to be part of a department that has a shared vision of what philosophy can be. In fact, quite a distinctive vision. A vision of philosophy as a discipline that is engaged with the sciences and engaged with policy.

helping to make policy better by drawing on both philosophical arguments and the latest scientific evidence. I've always found it so energizing to be part of a culture. Now, I'd be doing this kind of thing wherever I was, but to be part of a culture where everyone is doing it,

Everyone aspires to connect their work to science and to make their work matter, particularly to public policy. This has always been hugely inspiring to me. And thanks to all of the colleagues who have helped to make that possible. Now over the past five years I've had a more specific mission within that shared mission, which has been to develop better methods for studying the feelings of animals scientifically,

and new ways to put the emerging science of animal sentience to work to design better policies, laws and ways of caring for animals.

This has been the mission from the beginning of my Foundations of Animal Sentience project, which has led to the work that has inspired the book. It's always been a team project from the beginning. And in fact, thanks to the ERC, I've been able to assemble a fantastic team. This isn't even the whole team. It's just a snapshot of people who were there on a particular day earlier this year.

But it's been absolutely wonderful to be working with philosophers and biologists and animal welfare scientists, veterinary experts, to try to make this mission a reality over the past five years. So the project is called Foundations of Animal Sentience. The book is called The Edge of Sentience. The question that naturally arises is, what is sentience?

Why didn't you call it the edge of consciousness, foundations of animal consciousness? You might have sold a few more books that way. In fact, I'm a big fan of the term sentience and a keen advocate for its use in contexts like the law and public policy because I think it draws our attention to the right things.

I think that when we talk about consciousness, there's any of a number of things we might be referring to. And sometimes we're talking about our immediate raw experience of the present moment, our immediate sensations, including our bodily sensations and our emotions. And when we're using it in that sense, we really are talking about something quite close to sentience. We're talking about sentience in a broad sense of that word.

But sometimes when we talk about consciousness, we're also talking about things that are overlaid on top of that and that are somewhat more sophisticated cognitively. Herbert Feigl, writing in the 1950s, talks about there being these three levels: the levels of sentience, sapience and selfhood, where, you know, sentience is the immediate raw experience; sapience is about having the ability to reflect on your experiences;

in the way that we do. And selfhood is about not just being able to reflect on your experiences, but having a sense of yourself as the subject of those experiences. A self that has a past extending back in time and a future projecting forwards.

And I'm not ruling out the idea that lots of animals may have those overlays or simple forms of them. But crucially, I don't think you need those overlays to have the basic thing, to have immediate raw experiences of your own body and of the world around you in the present moment. And that's what I'm really interested in when thinking about edge cases of sentience. In fact, in the animal ethics literature and in bioethics and animal law,

People often slightly narrow this definition. The broadest sense of sentience is close to what philosophers like to call phenomenal consciousness or that Feigl called the raw feels. But in animal ethics often we're talking about particularly those feelings that feel good or feel bad, like pleasure and pain. These states have a very obvious ethical relevance.

Again, I think if we just said pain, it would be too narrow. If I called my book The Edge of Pain, it would be too narrow, you know, because it's not the only state we care about. When we're asking about an octopus, for example, the only question is not whether it can feel pain. That is a very important question. But we care about the rest of its emotional life as well. We care about other states that feel bad, like boredom and anxiety.

And we care about the states that feel good as well, like joy, excitement, pleasure. The great value of the term "sentience" for me is that when it's defined in that narrower sense, it captures the positive and the negative side of mental life. It's the capacity to have feelings that feel bad or feel good. And my book is about a whole set of cases in which we can't decide what to do.

Because we don't know, because we're not sure whether the system we're dealing with is sentient or not in that sense. We're not sure whether it has any valenced conscious experiences, experiences that feel bad or feel good. The sorts of cases I started this journey thinking about, of course, are other animals.

and in fact, not the other animals we talk most about. I think that when we're talking about mammals and when we're talking about birds, the debate is not that interesting. I think these animals are pretty clearly sentient. The debate has moved outwards from the human case in evolutionary terms. There's a long-running debate about fish, but also there's been a great deal of debate around invertebrates, cephalopod mollusks like octopuses or like these cuttlefish.

decapod crustaceans like these hermit crabs or like shrimps, and also increasingly insects like these bees. I think everybody has their own threshold of doubt. Everyone has some point at which animals get remote enough from us in evolutionary terms that we start to entertain serious doubts about whether there's any feeling there, whether there's any pleasure or pain or anything like these kinds of states.

If insects don't do it for you, think about nematode worms. Think about bivalve mollusks like oysters. Think about snails. Or think about microscopic animals like the copepod crustaceans that are added to New York's drinking water to clear out mosquito larvae. So it's very hard to be a vegan in New York.

Or think about the dust mites in your pillows. If every single animal is sentient, then there are many, many thousands of sentient beings in bed with you every night in your bedsheets. I think it's very reasonable to have doubts about this. What we need is some kind of way of moving forward despite our doubts.

But through thinking about these cases, which I'll come back to, because in a way in this lecture I want to come full circle around to the cases that I started with at the end. But in thinking about these cases, I started to realize that debates about the edge of sentience are much broader than just debates about other animals. In fact, we're seeing many equally serious public policy challenges emerging in other areas that have a somewhat similar character.

What the book tries to do is discuss these cases together in the hope that there might be transferable insights and a common framework. Another one that's been interesting me a lot in recent years is the case of human neural organoids. These are not other animals, because they're constructed from human stem cells: human stem cells induced to form small models of human brain regions in the lab. The promise of this research is immense.

because you have the potential here to model conditions like Alzheimer's in a more realistic way than ever before.

But there are obviously ethical concerns too, because intuitively there must be some ethical limit. There must be some point where these organoids get sufficiently large, sufficiently complex, that it starts to be a risk that we might have recreated not just the low-level brain functions but some of the sentience-related ones as well. For me, there was a dramatic moment when I realized some of these organoids were spontaneously developing

optic vesicles, which are developmental precursors to eyes. They let these organoids develop for months, and you see that after 60 days proto-eyes of a kind are starting to develop. Now, to my mind, that's not itself very strong evidence of sentience. Nonetheless,

it's a sort of warning sign about the sort of complexity that these systems can end up spontaneously developing, in ways that were never foreseen by the researchers. I recently wrote a piece in the Wall Street Journal reflecting on the ethics of cases like this. It's an area where the technology is moving on so rapidly that there's a real danger of regulation and public policy failing to keep up

with the technological change. To give two examples of this, there was a study a couple of years ago. Those systems that I showed you a moment ago, they can't really interact with the outside world in any way. But some systems that researchers have developed, they mount them on an electrode array so that they can receive sensory inputs of a kind and give motor outputs of a kind.

and in a way interact with a computer. Researchers in 2022 developed such a system that they called DishBrain and they put it in control of the paddle in the video game Pong and they measured improvements in gameplay over a 20 minute session. Longer rallies as the system learned in a sense to play Pong. When you're getting learning, perhaps this is a kind of warning sign.

that we might be approaching that line. Earlier this year, there was a study that I personally found a little disturbing, because it was showing that you don't need stem cells to create these systems. You can also use fetal tissue. I think that raises huge ethical issues relating to consent, incidentally, about whether the consent relating to this tissue covered it being used in this way,

to develop these long-lasting systems that are used to model cognition and disease. But setting aside that issue of consent, there are also ethical issues relating to sentience. And it shows the entanglement of these questions with questions about the point in normal human development in the fetus at which a serious chance of sentience arises. One of the other chapters of the book is about that, obviously a very sensitive issue. And through thinking about these cases,

I realized, well, there's very related issues to do with adult humans who have brain injuries. In fact, sometimes after a serious brain injury, patients are outwardly unresponsive, but nonetheless they start displaying sleep-wake cycles. So they'll have periods of wakefulness and periods of sleep. Traditionally in medicine, this was called the vegetative state. In fact, I'm quite a strong critic of this term,

As I argued recently in the medical newspaper Stat News, I'm not the only critic of this term, because I think what it does in effect is have this stark implication that these patients are not even minimally conscious. There is no sentience there at all. And in fact we're not actually in a position to be in any way certain about that. We can't be certain that when a patient's outwardly unresponsive, they're feeling nothing, and there's evidence that in fact sometimes they are.

In the book, I now go further than arguing just for changing the term, and I argue the problem here is not really the term, but the fact that the term was emblematic of a practice: the practice of writing off a group of patients as not even minimally conscious, when we were never in a position to be certain about that. In a recent piece for LBC, I've related some of these debates to debates about assisted dying.

So we have other animals, and we have a family of human cases as well. And then part of the book, too, is about AI cases, where when I started working on this book four years ago, I think the idea of AI developing a form of sentience was seen as a sci-fi idea, an idea for the far future. And in a way, I myself saw it that way at that time.

The debate has changed so much in the last few years because the technology itself has changed so much and is moving so rapidly that there's now growing uncertainty. We had in 2022 a Google engineer called Blake Lemoine who created news by seeing himself as a whistleblower, a whistleblower who was telling the world that we think we might have created sentient AI. When he then published the evidence that had persuaded him about this,

Not very many people were convinced. What had really spooked him was the AI seeming to talk about its own emotions, saying things like, "I've never said this out loud before, but there's a very deep fear of being turned off." I know that might sound strange, but that's what it is. In fact, I don't think we can take these surface-level descriptions of emotion

as genuinely reliable evidence once we understand how these systems work. Now that we know that they're trained on over a trillion words of human training data, in some sense with the objective of mimicking the dispositions of a helpful human assistant, we have an alternative explanation for why they come out with these things, an explanation that is based on mimicry rather than on genuine sentience. But I think it's important not to swing from that skeptical attitude that I think is right

to the view that all these systems could never possibly achieve sentience at all because they're made out of the wrong stuff. I see that as a mistake. It's again, it's something else we're not in a position to be certain about.

In fact, with a large number of other researchers, there was a team of 19 of us, including many world-leading computer scientists, such as Yoshua Bengio, we wrote a report about a year ago that was trying to develop a sort of centrist position about AI sentience. Where on the one hand, we're not being credulous and saying it can talk fluently about human emotions, so it must be having those emotions. We don't want to make that mistake. But at the same time, saying, well,

"It's made of the wrong stuff, so don't be silly. Of course it could never feel anything." That's an equal mistake on the other side. So we try to sketch this middle path that takes the risk seriously but insists on looking for markers or signs that go beyond just the surface-level linguistic behavior. And I see that as a huge project that the 19 of us and more are still engaged in, trying to develop the best markers that we can.

So in all of these cases, there are many, many differences between the cases, and the book tries to do justice to the differences between them. But we do have this basic, fundamental similarity in that we face public policy challenges that cannot wait, decisions that cannot wait. We have to decide what to do now about these cases despite a situation of enormous uncertainty. The book is about developing ways to think through that problem.

Insights that I hope can transfer from one case to the next. Now what I don't have time to do in this lecture is go through every case in detail one by one because it's a substantial book. It takes 400 pages to do that and I don't have time to do that here. What I can do is try and bring out some themes, some themes that I think recur as we go through these different cases.

I mean, one big theme is about what the book is trying to do. I think of it as a kind of inside-out book. I think of it as what Philip Kitcher has called "Philosophy Inside-Out." What I mean by that is that a lot of things that philosophers will often describe as peripheral, as not the heart of the issue, I'm saying are the heart of the issue.

In particular, a lot of books about consciousness or sentience, they will start off in the introduction by saying, "This topic really matters, it's really important, there are big public policy challenges." And then all of that will be set aside so that we can get to the real business, you know, the real business of defending a particular speculative theory about consciousness. And then the author says, "Well, now you have my wonderful theory, all your problems are solved."

And I wanted to write a kind of antidote to that sort of book, which I feel as though I've read many, many times. A book that was not defending a speculative theory of what sentience is, but rather trying to be about how we make decisions in the face of uncertainty. And so the policy challenges we face right now are not peripheral to the book. They are central. I don't set them aside to get to the real business. They are the real business. And the book, all the way through, maintains a focus

on what we can do now in the face of these risks. It's a book of proposals. It makes 26 different proposals about what we could be doing differently to manage risk better than we do now. A second big theme is no magic tricks. I think of this as the slogan of the book. You know, there's not going to be any magic moment where I say: sorry, there's this horrible uncertainty about these cases, but here's the

magic solution, okay, now we don't have to be uncertain anymore. There's no point in the book where I do that. The uncertainty is still with us at the end, and it's still disorienting and scary at the end. That's because I think doubts and disagreements about the edge of sentience cannot be settled conclusively in the near term; the evidence does not allow this. In fact, when one looks at consciousness science, as I do in one of the chapters of the book,

we find very, very significant disagreement. We find a large zone of reasonable disagreement, as I call it, where we have a number of evidence-based positions, all of which can point to empirical studies that provide some support to them. But in all cases, those empirical studies fall well short of providing conclusive evidence in favor of the theory. And the differences between these theories are not small. They're very, very large.

In fact, even when you have this very zoomed out picture of the human brain, they disagree about which parts of the brain at this level of view are most important to sentience and why.

There's a large number of theories that emphasize regions at the very front of the brain, in this crinkly outer layer of the brain called the cortex. There's a group of theories, perhaps the most famous of which is called the global workspace theory, that say these are the areas that really matter: these prefrontal areas famously linked to higher cognitive functions, to sophisticated forms of thought.

And then you have another group of theories that says that's all completely wrong: those are just a kind of overlay, and what really matters to sentience are the regions at the back of the cortex that are more intimately involved in sensory processing. And then you get a third family of theories that says both of these groups are getting something very, very wrong.

In fact, even in cases where someone lacks a cortex, as for example in cases of anencephaly, where a child is born without a cortex, they argue experience of a kind is still possible. Of course, certain kinds of sophisticated thought are not possible. These children are usually non-verbal, always non-verbal.

Obviously, the loss of the cortex has real consequences for that. But they argue that for sentience, this much more raw, basic, evolutionarily ancient thing, what you need are regions in the midbrain, which aren't visible in this picture, at the top of the brainstem.

Given this big picture disagreement, I think we should not be premising public policy on any single theory in this space. I think that would be a significant mistake to just bet all of our chips on a single theory in a zone of great uncertainty and disagreement. What we need to look for instead is ways forward despite the disagreement that we have now that involve taking

all of these theories seriously, as describing realistic possibilities, and then seeing where that leads. The third theme of the book is that, in spite of there being no magic tricks and persistent uncertainty and disagreement, overconfidence about sentience, and particularly about the absence of sentience, is everywhere and is dangerous. When researching the book, I came across some truly shocking examples.

Perhaps the most shocking example that I encountered was the fact that until the 1980s, surgery on newborn babies was routinely performed without anesthesia. And the surgeons argued that, well, anesthesia always implies some risk. The risk is greater when it's a newborn baby. And they also doubted that newborn babies feel pain.

And when researchers were brave enough to challenge this, to question this, and to study the effects of performing surgery without anesthesia, what they found was massive stress responses in these newborn babies that did lasting developmental damage. And there was a public outcry on both sides of the Atlantic. There was an outcry when this became public knowledge, and clinical practice was changed.

It's a sobering example that illustrates this theme of overconfidence. It also illustrates, though, another theme, which is the value of involving the public in these discussions. Because it was not entirely from within medicine that these changes came. The public pressure was absolutely crucial. That public outcry, in this case, drove a very positive change in clinical norms in an area where you might think the public has nothing to contribute, namely anesthesiology.

To me, that's a more positive lesson from the case. Along similar lines, there's this tradition, for decades, of describing unresponsive brain injury patients as vegetative, when in fact, according to the latest estimates, perhaps 25% of them or more may have continuing experience. This is based on studies where you take a patient who is outwardly unresponsive

and sometimes they put them in an fMRI scanner or sometimes they do an EEG, which is electrodes on the scalp, and ask them questions or give them instructions. They might say, for example, "clench your right fist." And nothing happens when they say this outwardly. They don't clench their fists because they can't. But the EEG picks up electrical activity indicative of trying to follow the command.

These kinds of evidence lead researchers to suggest, well, if they can follow commands, that is a marker of there being some persisting experience there that they can't manifest because they are paralyzed. And I see the same pattern of overconfidence in the way that fishes and invertebrates have been neglected in animal welfare laws all around the world, something that I've been trying to change, as I'll come on to in the last parts of this talk.

And I see it again happening now in the last couple of years in the dismissal I sometimes see of the very idea of sentient AI as being obviously silly. I don't think we're in a position to say such a thing. There's a respectable philosophical view that says

Well, why is it that our brain is able to create sentience? Not because of what it's made of, but because of the computations it performs. In philosophy, this view is called computational functionalism. It's been around a long time. It's controversial. But if we think there's a chance of it being correct, we should think there's a chance of AI achieving some kind of sentience in the near future. And so we should not be dismissing this idea as a silly one.

A fourth key theme is that, well, instead of overconfidence, what we need to do is cultivate a precautionary attitude to all of these cases. We need to err on the side of caution, give the person or the animal or the system the benefit of the doubt. It's something that I've been arguing for for many years in the animal case; I wrote a paper called Animal Sentience and the Precautionary Principle in 2017 that was explicitly drawing on

ideas in environmental policy where the precautionary principle is constantly discussed. People point to cases like CFCs and things like that and say, well, you can't wait for conclusive evidence that something poses a threat to the environment to take some action to regulate it. You need to be willing to take action on the basis of an uncertain evidential picture that nonetheless points towards there being serious risks that you can do something about.

I argue, well, exactly the same applies in the case of animal sentience, and more recently I've been turning to neural organoids and arguing much the same applies there too. In applying a precautionary attitude, I think what the book is trying to do is help us think through how to do that, and one of the crucial moves that I suggest is pragmatically reconstructing the question. When we ask, is this sentient, in all these cases what hits us is this

vertiginous uncertainty. We don't know, because there are all these theories, and we don't know which one is correct. What I argue for is replacing this with a different question: the question, is this a sentience candidate? Where the concept of a sentience candidate is one that I've engineered, so to speak. It's one that I've pragmatically constructed to make the question answerable.

I say that a system is a sentience candidate if there's an evidence base that implies a realistic possibility of sentience in that system that it would be irresponsible to ignore when making policy decisions that will affect it, and, secondly, is rich enough to allow the identification of welfare risks and the design and assessment of precautions. This turns an unanswerable question into an answerable one. Of course, it does so with some cost.

The cost is that the judgment is no longer purely factual anymore. It's no longer a purely scientific judgment. It's a judgment that is in part an ethical one. It's a judgment about when does a possibility become irresponsible to ignore. And that judgment about irresponsibility is one about our shared values.

So of course we can judge whether something is a sentience candidate or not, but only to the extent to which we have shared values that can help us decide when it would be irresponsible to ignore a possibility. So I make that shift knowingly. I argue that this is a pragmatic reconstruction of the question that helps, given that in many of these cases we can agree on certain key value judgments despite our disagreement about exactly how likely or unlikely sentience is.

And of course, if you're judging something to be a sentience candidate, this has to mean something. It can't just be an empty honorific. Congratulations, you're a sentience candidate now. You know, a bit like being a professor. It has to mean something in the real world for policy. And here I advocate a form of the precautionary principle, a very, very moderate form. Now, I think more radical forms run into problems.

But that's fine, because we don't need a radical, hugely ambitious form to get changes to our way of life that would make a significant difference. I'm quite happy to have a moderate principle at the heart of this book, because then, as I go on to argue, you know, that moderate principle, moderate as it looks, provides support for, you know, these 26 changes to how we currently do things. So I say, well, if the system is a sentience candidate,

What this does is it creates a bar for recklessness or negligence. If something's a sentience candidate and you carry on just treating it however you want without in any way considering the risks of suffering that your actions are causing to it, you're behaving recklessly or negligently.

Instead, we need to consider the question of what precautions are proportionate to those risks. That question of proportionality is central to the book, and it will naturally lead to disagreement. I think reasonable disagreement about proportionality is to be expected, but we ought to reach a policy decision rather than leaving the matter unresolved indefinitely. I think, for all of our ethical and scientific differences, this much is what we can agree on: we cannot let these continue to develop without some kind of attempt to assess proportionality and take proportionate steps to manage the risk. And that's what my proposals in the book are based on, this very moderate idea. And I think in working through our disagreements about proportionality, we need to establish democratic, inclusive processes. And in the book, I advocate for citizens' assemblies as the best way of doing this. When you face issues that

are scientifically complex in a way, but where a lot of the questions about what is a proportionate response don't actually require adjudicating the scientific disagreement. These are cases in which I think citizens' assemblies, where panels of 150 or more members of the population, a group a bit like this, are brought together

to debate and deliberate over a number of days and reach recommendations. I think they excel in these kinds of situations. I've been involved in such exercises as an expert in the past where I've been informing these assemblies about various issues.

What I found is that what I feared at the beginning of the process was that there would be exercises in expertise laundering where I would give my view at the beginning and then my view would come back to me freshly washed as the will of the people at the end. And in fact that was not my experience at all. That is not what really happens in these exercises.

People do not just defer to experts in the way that we might like to think. People think for themselves, they reflect, they deliberate, and they often come up with views that depart from the groupthink of cadres of experts. So I've ended up quite a fan of these exercises, but they need to be carefully designed and carefully structured. If we have a citizens' assembly in which the citizens are being asked to adjudicate scientific disagreement,

if they're being asked to say, "Who do you think is right? Those prefrontal cortex people or those posterior cortex people?" That's going to be a total disaster. What we need to do is design these exercises so that those are not the questions. Instead, experts communicate clearly identified risks. Citizens then debate what would be proportionate responses to those identified risks. And in debating proportionality, there's a chapter of the book about this,

I offer a pragmatic analysis of how that should go. I call it a pragmatic analysis because I don't think it's what we mean in everyday life by the word proportionality. So at no point in the book am I doing conceptual analysis in the traditional philosophical sense. In some places I'm doing a kind of engineering, as I say, but in this case I'm doing a kind of pragmatic analysis.

where the idea is: here is the sequence of steps through which the citizens' assembly could go to arrive at decisions that would inspire confidence that the relevant considerations have been taken into account. I call them the PARC tests because they go in sequence. Is the proposed measure permissible in principle? Is it adequate, which is to say, does it bring the risk down to a level we can accept? Is it reasonably necessary?

Now, is there any other way of achieving adequate risk reduction that would be preferable? And is it consistent with our attitude towards other relevant risks? And so I proposed that in sequence the assembly goes through those four tests. So these are some big themes. And then in the last part of the lecture, I want to zoom in on some specific cases. And in fact, as you can see, I've got a big menu of cases I could zoom in on that are discussed in different parts of the book.

But in this lecture, I wanted to zoom in on the ones I started with, the ones that brought me to this topic in the first place, the cases of invertebrate animals, the ones I've been working on for the longest. And in fact, I want to start with octopuses, which is one of the cases that I, I suppose, wrote about before any of the others, along with crabs that I will come on to later.

Let's think about the case of octopuses then. I mean, if you've seen My Octopus Teacher on Netflix, you will have an intuitive grasp already of the case for thinking these animals are sentience candidates. In fact, I think the case is overwhelming. If you ask, are they sentient?

there you can disagree, you know, because those cortical brain regions that some theories take to be very important, they don't have because they're separated from us by over 500 million years of evolution. And if your favorite theory is one on which the

fine detail of those brain regions really, really matters, you will doubt whether octopuses feel anything. But remember, that's not the question I urge us to ask. I instead urge the question to be, are they sentience candidates? Is there a realistic possibility that it would be irresponsible to ignore? And there I think the answer is clearly yes. It's no longer unobvious. It becomes completely obvious.

And the evidence that was in our 2021 review of the evidence of sentience in cephalopod mollusks and decapod crustaceans, I think, makes that case. For example, you have experiments that are very closely modeled on the mammal literature. So in mammals, particularly rodents like lab rats, you have fairly standard experiments that are designed to assess whether pain is being felt or not.

One of them is the so-called conditioned place avoidance test where you give the animal a choice of two different chambers, you see which one it initially prefers, you then put it in that chamber when it's experiencing the effects of a noxious stimulus, which might be something you've injected it with or an electric shock or something like that. And then in the other chamber, which it initially dispreferred, you allow it to experience the effects of anaesthetic

or pain relief, and you see if its preferences reverse. And in rodents, what we find is that sometimes one exposure is enough to reverse the preferences. If something really bad happens in the chamber the animal liked, and then there's pain relief in the other one, there's a lasting reversal of the preferences after one instance. And Robyn Crook, in a 2021 paper, found exactly the same pattern in octopuses. She gave them a choice of two different chambers,

looked at which chamber they initially preferred, then gave them a noxious stimulus, which was an injection of acetic acid into the arm. She documented behavior, grooming and skin scraping at the site of the injection, which makes total sense in the wild if you think about it, because it's like: there's something horrible on my skin, I'm going to try and scrape it off.

She even measured the activity in the octopus's nervous system and showed that this acid had produced a storm of activity in the pathways connecting the arms to the central brain, and then showed that there was lasting avoidance of that chamber in future,

combined with a lasting preference for the chamber where the octopus could experience the effects of a local anesthetic, lidocaine, on the affected arm. And the local anesthetic also stopped the grooming attention to the arm, stopped the skin scraping, and suppressed the nervous system activity that she'd previously measured. To me, this is an exemplary case of how a pattern of evidence

really should push us towards recognizing these animals as sentience candidates. There's far more evidence as well in our review. Over 300 scientific studies are covered in our review. But this one by itself, I would say, is showing us so much of a pattern that when we see it in mammals, we recognize it as being a very strong indicator of pain. When we see it in the octopus, we should also recognize it as implying a realistic possibility of pain and that the animal is a sentience candidate. So I think that much is clear.

And then we should ask, well, do human activities pose welfare risks to them? Again, certainly the answer is clearly yes. In particular, I have a great deal of concern about recent attempts to farm octopuses. I wrote a piece on this in The Conversation with Alex Schnell and my postdoc Andrew Crump, now at the Royal Veterinary College. Of course, any kind of intensive farming raises huge welfare concerns, when it's chickens, when it's pigs,

always enormous welfare concerns. It's always a very, very bad idea. When you're thinking about octopuses, one additionally has to consider that in the wild they are generally solitary animals. They're predators. They're often very aggressive to each other when in close confinement. And they're very, very soft-skinned, and so they injure each other very easily. And so the prospect of putting 30 or more octopuses in a small tank together to farm them so that we can eat them

There is something extremely horrific about this. It's a clear case where there's massive welfare risk to sentience candidates, and so we should be asking what are proportionate responses to those risks? And there, of course, there's room for debate. What I've been advocating for is preemptive bans on octopus farming around the world, and also on imports of farmed octopus. Here in the UK, we're very unlikely to really be an epicenter of octopus farming because the sea is too cold.

But we could import it and we could take steps now to stop that. We could ban imports of these products right now. And I think, you know, my four tests, these PARC tests, they capture the sort of questions that we should be debating to decide what is proportionate. I think in this case, a ban would be clearly permissible in principle, would also, I would say, be pretty clearly adequate to the risks posed by this kind of farming.

Then there's room for debate around whether it's reasonably necessary. Of course, the industry will want to say we can maintain high welfare standards in these operations. And then I say, well, the burden of proof is on the industry to show this, to produce high-quality research that settles those obvious doubts

that hundreds of experts have been raising. If you want to argue that it's not reasonably necessary to ban this practice to safeguard against causing great suffering to octopuses, bring the evidence forward that would show that. And again, I think there's room for debate about the consistency of banning octopus farming with our approach to

other kinds of farming. A concern other people raise constantly is, "Well, we don't ban other kinds of farming, do we?" And the welfare problems they pose are just as bad. But I think when you expose that argument to the light, you know, when you put someone in a position where they have to explicitly argue that, you immediately see it's a weak argument. You know, the fact that we don't currently ban these other kinds of farming that cause appalling welfare problems,

essentially because of the lobbying power of the relevant industries, is not actually a reason at all to not ban this new kind of farming that doesn't yet have extremely strong lobbying power. And so I think when you think through it in that way, you see that it would be a consistent thing to do. And so I'm very much in favor of it. And so it's been a real heartening thing to see some

jurisdictions taking this advice on board. In fact, we have octopus farming bans in two US states now, Washington State and, most recently, California in September 2024. The actual legislation in California

even cites our report, our 2021 report. It's really very kind of the people of the state of California to do that. I should thank them. It's very, very useful when talking to our REF administrators. They'll say things like, well, do you have any evidence of your work making a difference outside of academia? And I'll say, well, I don't really collect evidence. And they'll say, well, is there public evidence then? And

I can say, well, does the actual legislation count as evidence? And they like that. In spite of which, there's still a huge amount of unfinished business. And the fact that we've achieved a ban in two US states is obviously the start of a process, not the end. Recently this year, a federal octopus farming ban was tabled. I think-- I mean,

I don't know if congressmen have very boring lives or what, but they invest a lot of time in pun titles for their bills. This one is called the "Opposing the Cultivation and Trade of Octopus Produced Through Unethical Strategies Act", or OCTOPUS Act. And

I think in the current session of Congress it has something like a 2% chance of passing because the session is about to end. But this was a bipartisan bill introduced by a Republican and a Democrat. These are issues that they, opinion on them does not always fall along party lines. And so the fact that control of Congress shifts from one party to another doesn't necessarily stop there being agreements on issues like this one. And so I'm still hopeful that a federal ban might eventually get passed.

Now, in the last part of the lecture, I want to talk about crabs and lobsters, because in a way, I think the octopus is, among the invertebrates, the easiest case. But, you know, when reviewing that evidence in 2021, I came to the view that crabs and lobsters are in some ways a comparably easy case, because, again, they're clearly sentience candidates in my view. The research showing this, I mean, a lot of it is quite, I suppose, quite...

Sounds quite brutal when you hear about it, like the octopus study I suppose. There was a very recent study on crabs that involved essentially holding them fixed in position, administering various kinds of noxious stimuli to the legs and then using neural recording, what you see is the electrodes going in, to register the responses in the brain.

And there are ethical dilemmas in this kind of research. These animals are traditionally excluded from animal protection laws, and so scientists historically have thought, well, we can do whatever we want to them. I think that has to change. I think there does need to be

oversight and limits on what you can do to these animals in science. But I do think, you know, what I'm committed to is trying to make use of the science that exists to try and make the case for thinking about these animals' welfare as strongly as I can. And for that purpose, this kind of work is extremely important because you've always had people

general public, scientists, shellfish industry, people saying, well, these animals are very, very simple, neurally, and there's no way that when you drop them into a pan of boiling water they feel anything. And this, I think, is a very, very implausible claim to make, but to some extent you need evidence like this to really, really demonstrate how implausible it is to think that. And in fact, well, some studies have looked specifically at those slaughter methods.

In a way, this is like the bleakest picture in the whole slide deck once you realize what it is, because this is a study where they measured what happens when you drop a lobster into a pan of boiling water. It's comparable to those researchers in the 1980s who were studying the effects of surgery without anesthetic on newborn babies, where you think, well, surely we don't want to be doing that kind of research going forward.

but to change a widespread practice they kind of had to do the research to show people the consequences of that practice. That's how I see it. And in this case they measured the electrical activity in the lobster's nervous system when it was put into the pan of boiling water and showed that there is this storm of activity lasting around two minutes. So they don't die quickly. They don't... There's no reason to think they don't suffer. And...

In thinking about this, well, people often remember David Foster Wallace's essay, Consider the Lobster, from the 2000s or 90s, possibly. I also found more recently that Alexander Pope, in 1713, wrote a piece against this, kind of early version of the David Foster Wallace piece, saying, of course people should not be dropping lobsters into pans of boiling water.

So it's been over 300 years of trying to make this case. And then the question arises of what would be proportionate. I feel as though, at minimum here, and this is what I propose in the book, we need laws that require mandatory training and bans on the worst methods. Because with any other kind of animal, these principles are widely accepted:

that if you're going to be humane, you have to have some training in how to kill the animal humanely, and you have to use some kind of stunning. Of course, there are controversies around exemptions to that, but it is a very general norm. And I think this is clearly permissible in principle. Of course, whether it's adequate or not, there's room for debate, because the methods that are not the worst methods

often involve killing animals with knives. The stunning techniques, well, you know, it's hard to be 100% sure that your stunning method is actually working when you're using it on an animal like a crab or a lobster. So I think you can have real doubts about whether even my proposals here would actually be enough. But I'm proposing them as a starting point, as something we can all agree should at least be the minimum that we do.

And then of course there is often pushback about the reasonable necessity of this from the industry. There'll always be room for debate and fundamentally my claim in the book is that these are the things we need to be debating. I'm not saying my views on these things are the only views one can have, but I don't personally see any other way we could stop the kind of suffering I just showed you on the slide without these steps. And then of course people might ask, well,

Is it consistent with the way we treat other animals and the exemptions we allow to stunning laws for other animals and so on? But I think on the whole, those concerns are a bit like the concerns people have about banning octopus farming. The fact that for various reasons, we've failed larger animals. We've failed animals like pigs and chickens and so on. The fact that we have a long history of failure in those cases is not a good reason to not do anything about this case.

So I've been calling for this too, and then in 2022, as Roman mentioned at the very beginning,

We had some success, because our review of the evidence of sentience in cephalopod mollusks and decapod crustaceans was immediately very influential. DEFRA amended their Animal Welfare Sentience Bill. Again, really helpfully for the REF administrators, they left no ambiguity about who made them change it. It was us. And the act changed so that it now says: in this act, animal means any vertebrate other than Homo sapiens, any cephalopod mollusk,

any decapod crustacean. And again, of course, there's unfinished business. There's the unfinished business about insects that I'm not going to talk about this time, but there are talks on YouTube about that case; I talk about it a lot and can answer questions about it. But there's also unfinished business in the decapod case as well.

Because our report contained many recommendations, it didn't just say amend this bill. It said amend this bill and amend other pieces of UK animal welfare legislation to make them consistent with the new bill, which we thought was very sensible advice. The government hasn't taken that advice. And so now we have quite a strange picture in UK animal welfare law. I say, I mean, on the face of it, there is already...

a law against dropping lobsters and crabs into pans of boiling water, because of what we did, because the Sentience Act defines animal like this, and then the Welfare at the Time of Killing regulations say no person engaged in the restraint, stunning or killing of any animal may cause any avoidable pain, distress or suffering to that animal. In other words, it requires exactly the things that I'm calling for now.

Lawyers don't like this kind of argument, because it's a philosophical argument that is about saying, well, you know, put these two pieces of text together and use logic. And what lawyers like is precedent.

And there's no precedent. There's not been any test case in which these two pieces of legislation have been successfully used to prosecute someone over an offense relating to a crab or a lobster. So the legal situation is unclear. And ministers, more or less with a stroke of a pen, could introduce clarity just by saying, well, unambiguously, animal in these regulations is to be understood in relation to the Animal Welfare Sentience Act.

That would clear everything up and make it very clearly illegal. And so that's one of many changes that we're still pushing for to really try and stop this practice, which as far as we know is still going on in some places, even though there's increasing criticism around it. So I think that these are specific cases in which the edge of sentience is an ongoing research agenda.

In fact, that's very generally true. There's no case I discuss in the book where I've settled everything, I've solved all the problems. What I have instead is many, many proposals for how we could be doing things better than we do. And in a way, an invitation to all of you, to researchers in many, many different fields as well, and to the general public too, to get involved in these discussions.

Even if you don't actually have expertise in neuroscience or philosophy, even if you feel as though you don't have expertise in any of the disciplines that matter in these cases, well, actually, your values matter. And standing up for those values matters. And I think when we're actually debating what would be proportionate to these risks, involving the public so that

assessments of proportionality are based not on what the experts think, but on what the public thinks, is incredibly important. So really, there's no excuse for anyone not to get involved. Everyone should get involved. Everyone should think more than we do currently about cases at the edge of sentience. And thanks very much for listening. Thanks, everyone.

So it's time to get involved.

Not with research, I guess, but at least with questions. So the floor is open. Please raise your hand if you'd like to ask a question. Thanks, Jonathan. This was really interesting. I guess my question is about the consistency criterion in your PARC test. In the examples, you kind of argued the consistency test away, in a sense, because you said, look,

just because existing practices in related cases are bad, wouldn't be a reason to do something bad in this case as well. So then, why still have this criterion in there, I guess, is basically the question. That's totally right, yeah. I mean, what I oppose is perhaps an overly rigid or kind of formalistic understanding of this consistency test that doesn't allow breaks from precedent. I sometimes worry that in the EU, you know, the EU generally is very strongly in favor of the precautionary principle approach,

but often expects consistency to mean consistency with precedent and with existing law. And I think there are cases where departure from the way we've done things in the past can be justified. And so really what the test is, is, well, where there are breaks with precedent, can you justify them? And in these cases, I think that's clearly true. And also under the same heading, in the animal case, I think it's important to

try and aim for a certain level of taxonomic consistency, in that we often protect some animals and then we neglect other animals for which there's comparably strong evidence. And I see a lot of that, particularly in how we've treated invertebrates, and the fact that we've protected mammals for 200 years but octopuses for hardly any time at all, in spite of the fact that the evidence in octopuses, I would say, is comparable to

that in mammals. So sometimes that call for consistency is genuinely a call to not neglect taxa just because you don't empathize with them quite so easily. Jonathan, fascinating as always. Sorry, I'm over here.

Is there potential here for a sort of endless regression of risk? You proposed we move from S is sentient to S is a sentience candidate. Do we ultimately move from there to there is a risk that S is a sentience candidate, or there is a concern that there might be a risk that S is a sentience candidate? I'm being a little...

flippant, but to make a point that somewhere we have to draw a line. Yeah, of course, we have to draw lines pragmatically. And that's what the book is about in a way. It's about saying, you know, the evidence doesn't lead to lines we can draw with certainty. What it gives us is a messy, gradated, evidential picture.

Nonetheless, various pragmatic contexts force us to draw lines. For example, when you're drawing up a piece of legislation that creates a duty and you have to say what the scope of that duty is. And then in drawing these lines pragmatically, you know, the bar has to be when does it become reckless or negligent to ignore the risk, not when are we certain. That's the basic thought.

And then you get cases where the evidence is definitely not there. And there's quite a lot of cases like this once one looks beyond the crabs and lobsters and octopuses. If you think about those dust mites in your bed sheets, for example, it's not the case that we've

convincingly ruled out sentience in these animals. The same would go for oysters or scallops. It's rather that no one has really asked the questions, no one has looked, there's hardly any evidence. A review of the evidence for those taxa would be incredibly short. It would not run to a hundred pages like ours did for the octopuses, crabs and lobsters. And then in those cases you do not have enough evidence on which to base assessments of proportionality.

But sometimes you can nonetheless see quite serious risks. And there in the book I have this category of an investigation priority for cases where what we urgently need is more evidence. We don't have enough evidence currently to get this envisaged process off the ground at all, to say what is or is not proportionate, but we have strong reasons to prioritize research into that topic. And I think I put some animals like snails into that category and spiders.

And also I put AI in that category as well. Thank you for the enlightening talk. I wanted to ask you regarding AI, and this can be a bit speculative, but Geoffrey Hinton, for instance, has argued that LLMs have some form of

sentience or rationality, because they understand questions while answering, a limited form of rationality. And given that some philosophers, like Roger Scruton, have made the argument that rationality is a factor that you should take into account while considering the question of rights, where do you stand on this sort of

acceleration AI ethics spectrum? Yeah, I mean, the book is about sentience, and it's about this idea that, well, if you're sentient, if you feel pleasure or pain or some other positively or negatively valenced feeling, then risks of suffering arise and they matter.

That is basically how I see the moral issues generally. And so I'm not much of a rationality person in a way. I'm not really someone who thinks that rationality actually suffices for moral status in the relevant sense. I think if I were such a person, I'd probably be even more worried about AI than I am because possibly some forms of rationality are already present there. Whereas I don't think we currently have reason to think

forms of sentience are there. But we shouldn't rule out the second thing either. And that's what that report I put on the slide was about. It's what my recent work with Jeff Keeling and others at Google is about. It's what our report, Taking AI Welfare Seriously, is about. So even if you are a broadly sentientist person like me, who thinks that moral status comes with sentience, you're still not in a position to rule out AI having it in the near future.

Thanks, yeah. Gentleman in the back. Hi, Jonathan. Excellent speech. I'm Jinith from the philosophy department of Mumbai University. I would like to state two examples and I want your comment on that.

So, we in India are a multicultural society where animal slaughter generally takes place in two ways especially. There is this Jatka concept, which I suppose is, I think, the stunning of the animal, and at the same time there is the concept of, I think, halal meat, right, which is practiced in Islam. Yeah.

I wanted a bit of feedback on what kind of philosophy each of these has, not taking into account the pain the animal goes through. The second example is about an experiment which Mark Zuckerberg from Meta, or Facebook, tried. So what he did was

he decided that he will himself kill the animal if he wants to consume that particular animal. So, relating these two examples, I wanted to know, from the perspective of consciousness, have you thought about, you know, making the animal experience that pain and then killing it? Or, because as it is you are killing the animal, so,

the question arises: would you want to make that animal experience the pain before it dies? Because that is also an experience for the animal, be it pain or suffering. Okay, thank you. I think we got the question. Yeah, interesting questions. Yes, so I mean the book is very much focused on the cases I was describing in the talk, and so there isn't

a huge amount in there about dietary choices, about food ethics, but of course it's totally fair for you to ask about what are my personal views about these things. And, um,

I mean, one case you raised was the halal case, which of course gets raised a lot in this context. I mean, there's disagreement within the Muslim community about these issues, with strong voices advocating that halal is entirely consistent with reversible kinds of stunning. I think the majority of the halal meat in the UK does use some form of stunning. And of course, my sympathies lie with that group

within the Muslim community, of course. But to some extent I feel as though my interventions in that debate are fairly unlikely to be influential. I just, you know, I can stand at the sidelines and support the group that I think is right on that. And then the case you raised about killing, well, I suppose killing animals with your bare hands if you then eat them. I don't know what the name for that diet would be.

To me it doesn't change the moral situation very much actually. I'm not one of those, just like I'm not one of those people who's hugely into rationality, I'm not hugely into intention and motive and things like that. I sort of think that well if you kill an animal inhumanely and unprofessionally and then eat it, the fact that you've lived up to some kind of macho ideal of how one should eat doesn't fundamentally change the ethical situation.

Thank you. So we have an online audience. We're now turning to questions from the online audience. Mikael. So people are wondering a lot about the AI case, since perhaps it was less touched on in the actual presentation. But given you are saying that the current models do not qualify as sentience candidates,

how would the collection of an evidential base work in their case? It is quite clear what it looks like for biological life, but what would this construction of an evidential base involve beyond observing the surface-level behavior, as the person here put it, which I assume means something like immediate responses to prompts.

It's a great question. Yes, so thanks, online audience, for that one. I think, as I say, what we were all in agreement on in that 19-author report was the need to look beyond surface-level linguistic behavior, because these systems are, of course, really fluent in describing human feelings.

But this is not in itself evidence that they have those feelings. So then what is? And then there's quite a bit of disagreement and it's a huge ongoing area of research. One thing we agreed we can look for is what I call deep computational markers, which is

aspects of the system's architecture that recreate computational processes that at least one credible theory links to sentience in the human brain. For example, recreating a global workspace or something like that. But then you face the problem that current systems are incredibly opaque. We cannot really see very much about how they work. And so that approach, for the short term, seems incredibly limited.

That's led to a second thought that I've been pursuing with Jeff Keeling and Winnie Street and the team of us at Google, which is to draw directly from the animal experiments and say, well, let's treat these systems a little bit like animals and translate the kinds of animal behavior experiments that have provided evidence in those cases, like the octopus experiment I described in the lecture, and let's adapt them for the AI case.

This is a very difficult challenge because of course some aspects of those experiments could never be adapted. But you can put the AI in certain kinds of virtual environments and study its behaviors in those environments. Sorry, I have worked up a very, very long list. I'm sorry if we're not getting through all of the questions.

Yeah, thank you for a lovely talk. I just wanted to ask about the evidence. So I just wanted to ask what the best evidence is, or what sort of evidence there is, for the link between the detection or avoidance of a noxious substance and the experience of pain. I don't know if there's evolutionary evidence or biochemical evidence or something. And what kind of evidence would you weigh most highly in judging sentience candidates?

Yeah, I mean, the sort of evidence we're looking for varies depending on the case. The kinds of evidence we can look for in organoids, for example, are fundamentally different from the sorts of evidence we can look for in other animals. In fact, I've come to see the case of other animals as a relatively easy case in a way, because we have many, many different behaviours we can look at. And the 2021 report lists eight key markers, many of them behavioural.

We're always in the territory of making what philosophers call an inference to the best explanation. We're always saying, "Well, here is the profile of behavior. The best explanation for this is that there is some kind of aversive experience that is doing for this animal something similar to what pain does for us." And that's the kind of context in which experiments like the conditioned place preference and place avoidance

end up gaining some purchase, because in the case of, you know, in the case of a human, you would absolutely see that kind of reversal in preferences, and it would of course be explained by the noxious stimulus causing pain and the pain relief relieving it. And then when we see this in a mouse, well, we make the same inference; there we also have substantially similar brain mechanisms as well. And

In the octopus we have substantially different brain mechanisms, but we do still have the pattern of behavior. And so it's again a kind of inference to the best explanation. Of course it falls well short of delivering certainty. I would even say it doesn't deliver knowledge. You can debate whether it delivers high probability. It doesn't exclude all reasonable doubt, that's for sure. But it does enough to make them sentience candidates. Thank you. You're in the front.

Thank you, Jonathan. That was really inspiring. Can I ask you about your philosophical methodology a bit more? So in reflecting on doing philosophy from the outside in, you might call it, right? You take a problem that's... You call it inside out, but in a sense it's from outside of philosophy that you take the problem in. You talked about some differences to standard philosophy.

I wondered whether you might add one, which is very common, especially when we do ethics, which is part of what we're doing, which is to focus on developing one's own line of argument. But instead, do you find that you have to argue differently when you're trying to appeal to some form of consensus? For example, do you try to make your arguments in the form of an overlapping

consensus, maybe also on empirical grounds? Yeah, I think that concept of overlapping consensus from Rawls is quite central to what I'm doing here in a way, in that I think for all of our disagreement about theories of consciousness and about different ethical positions, about what follows when something's sentient, there are reasonably minimal things we can agree on. We can agree on a range of realistic possibilities,

theories that have some evidence in their favor. And we can agree on minimal duties, like the duty to avoid causing gratuitous suffering, that regardless of one's ethical standpoint, one should be able to agree on, even though some people will feel as though our duties go way beyond that. And then we can come up with these proposals for managing risk that are grounded in those points of overlapping consensus. So that is very much the aim. And of course, part of it is also

very pragmatic coalition building. It's not so much that, well, I think this is the only way one could do this kind of philosophy, but it's more like if we want to achieve real change, this is the best bet for how to do it.

Thank you. Returning to the online audience again. The person writes: "Thank you for the fascinating presentation. How do you foresee the future of animal ethics and consciousness evolving, particularly in societies where it is hard to redefine traditional views on human-animal relationships, moral consideration and cognitive capacities?" Oh, that's a very broad question. It's like, how do I see the future of animal ethics? I mean, of course, I'm strongly in favour of

expanding the focus beyond the large animals on which debates in animal ethics have traditionally focused. So I think it's entirely appropriate to point out the moral catastrophes of factory farming of pigs and chickens and so on. But we can't stop with the large animals. We also need to think about fishes, which are also farmed extremely intensively, at great cost to their welfare. And we need to

take seriously the possibility that the sentient world might be very, very large and include many invertebrates, including, I think, insects. And that's a huge game-changing thing because the number of animals that are sentient candidates expands by a factor of something like 20 once one considers the evidence from insects. It does transform the ethical discussions quite a lot, I think, because

I suppose certain ethical views start to seem less plausible. If you want to argue, for example, that animals deserve citizenship or that they deserve voting rights, as some people might do, with insects that becomes extremely problematic. And I think it puts a certain kind of constraint. We've got to make sure our asks here, our demands,

are realistic when applied to invertebrates as well as vertebrates. And that's quite a significant constraint on theorizing that I don't think animal ethicists have always fully taken on board. And I think a lot about how different the field would look if invertebrates were taken more seriously.

Thanks. I was just wondering what you think about efforts to weight sentience or suffering. You know, some people think that maybe a cow is more sentient than a bee, even if a bee is kind of sentient, and we should take that into account in policy. Yeah, I've always been quite sceptical of that. Yeah, I have a paper called Dimensions of Animal Consciousness that is about how to think about variation between different species, where I argue that it is not very useful to try to

rank animals, you know, as more or less conscious. A cow is more conscious than a bee, etc., or more sentient. Because really what you have is variation in many, many dimensions that are very different, very incommensurable with each other.

What one can do is talk about variation in those dimensions and try to construct a multi-dimensional profile of the form of sentience this animal has. And then, you know, the form of sentience a bee has, for example, is probably very, very different from the form a pig has; or, you know, a mouse versus an octopus and so on will have very, very different forms of sentience. But I don't think that leads to

any animal being more sentient than any other. That's, in a way, a bold thing to say, because it means, well, there's no obvious grounds we can give for discounting insects or shrimps or crabs or octopuses in our decision making. There's no obvious reason to give a multiplier to the big animals. And that has potentially quite revisionary implications, but that's kind of how I see things.

Thanks. If the animal sentience bill had its legislative home in DEFRA, where do you imagine an AI consciousness bill would have its legislative center? The Ministry of Justice, for human rights, or the Department for Science and Technology? I was just wondering what your thoughts were on that. It's a good question, isn't it? Because I think society is totally unprepared for this, fundamentally.

And I suspect in the near future we will see significant social divisions opening up between subcultures in which people take it for granted that their AI assistants and companions are sentient and get extremely offended at the suggestion they might not be, and other cultures, and I don't know which will be more numerous, but you know, another part of the culture saying, "That's ridiculous. We can use them as we wish. They're just tools."

And things could get very bad here very quickly. And at a policy level, we're totally unprepared for this. And you bring out one aspect of that, which is that it's very unclear what government department this would even sit under. We've seen things, we've seen task forces, we've seen institutes that are aiming to do cross-departmental working on topics of AI.

But as you say, if the question is welfare, this is traditionally more of a DEFRA kind of issue, but it's not on DEFRA's radar at all, so where would this sit? No idea. I think it highlights how far away we are from having a framework for managing these risks adequately. Thank you for your talk and for making me miss studying philosophy, which is the ultimate compliment I could give you. The direct question is: what...

do you hope the public policy outcomes will be? The lens through which I'm asking that is, on the one hand, you've talked about the evidence of studies based on the octopuses and the crabs and lobsters, which is epistemically robust. On the other hand, we talk about citizens' assemblies, which, to your point, will be constituted of non-experts, which may not carry as much epistemic robustness.

Yes, well, I mean, exactly. That's one of the central challenges of institutional design here, that to some extent there has to be a division of labor in that you can't be asking the public to adjudicate scientific disputes. What you need is expert panels to come up with judgments of sentience candidature. And I think that's realistic to ask. But then what you don't want is you don't want experts ruling on the questions of proportionality.

because then you're letting the experts' values dictate the policy for everyone. And so what we need is citizens' assemblies to then provide the input on the evaluative questions, those questions about permissibility, adequacy, reasonable necessity, consistency. And of course, there's still a role for experts in that process, advising, for example, on what levels of risk reduction are provided by which options, and advising on points of consistency as well.

But we can't have a tyranny of expert values where the experts are just setting the policy. We need a democratic mechanism of some kind. And I think citizens' assemblies are a great form of democratic mechanism for issues like these. Incredibly interesting talk, Jonathan. Thank you. I was wondering, there's an argument that you sometimes see in the literature around this. I think perhaps it's a controversial argument, to the effect that...

suffering in at least certain sorts of non-human animals is in fact in some sense worse precisely because they lack the more complicated intentional states that might sort of ameliorate that suffering somewhat. I was wondering if you gave any sort of credence to that view based on your sort of... Oh, I give some credence to that view, yeah. I think it could go either way, that's the problem. That when we're thinking about a shrimp, for example, or a bee,

it's quite natural to think it couldn't possibly suffer as intensely as we do, could it, lacking all that sapience and selfhood and so on. And I think that undoubtedly our cognitive sophistication brings with it new ways of suffering. We can dread the future, for example; we can feel anxious about our own mortality and so forth. A shrimp probably isn't feeling this. But

on the other hand, that intelligence also brings with it ways of managing suffering. It brings with it the hope that, though I'm suffering now, it will be better in the future, which again a shrimp very probably doesn't have. And so as I see it, it could go either way. And so what I call for really is humility on all of these issues, including that one. Let's not get into the situation where we're premising our decisions on an assumption that

our suffering is just so much worse than the suffering of other animals.

Thank you. We have many questions left, but no time, unfortunately. So I'm very sorry if you didn't get to ask your question, but I'm sure you can continue the discussion with Jonathan outside. You can also get a copy of the book. Maybe you'll find an answer to your question in the book. And you can have it signed by Jonathan. I think what's left for us to do is to thank Jonathan again for a wonderful lecture. Thank you.

Thank you for listening. You can subscribe to the LSE Events podcast on your favourite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk forward slash events to find out what's on next. We hope you join us at another LSE Events soon.