Anil Seth believes we tend to anthropomorphize AI, projecting human-like traits onto it, which can lead to misunderstandings about its capabilities and diminish our own understanding of human consciousness.
Intelligence is about doing the right thing at the right time, solving problems flexibly, and can be defined by function. Consciousness, on the other hand, is about subjective experience, the feeling of being aware and experiencing the world, which is not a function but an experience.
The human brain has 86 billion neurons with a thousand times more connections, and its architecture is highly intricate and dynamic, with neurotransmitters and glial cells playing significant roles. In contrast, AI systems, while powerful, lack this biological complexity.
Human brains evolved to keep the body alive and control movement, working in concert with the body and the environment. This embodied interaction provides a rich feedback loop that AI systems, which are often disembodied, lack.
Seth believes that computation alone may not be sufficient for consciousness. The biological substrate of the brain, with its neurons, neurotransmitters, and embodied interactions, may be essential for consciousness to emerge.
Attributing consciousness or understanding to AI can lead to mispredictions about its behavior and make us more vulnerable to manipulation. It can also result in a misallocation of moral concern, potentially leading to psychological harm.
Seth suggests designing AI systems in a way that minimizes the impression of consciousness, possibly through watermarking or other interface designs that push back against psychological biases, ensuring users maintain a clear understanding of the system's limitations.
Seth envisions AI as a utility, akin to electricity or water, used in various contexts to solve problems and drive innovation. He believes AI should enhance human capabilities rather than replace them, and its development should be carefully managed to avoid unintended consequences.
Hey, Bilawal here. Before we start the show, I have a quick favor to ask. If you're enjoying The TED AI Show, please take a moment to rate and leave a comment in your podcast app. Which episodes have you loved, and what topics do you want to hear more of? Your feedback helps us shape the show to satisfy your curiosity, bring in amazing guests, and give you the best experience possible.
In the rush to develop increasingly sophisticated artificial intelligence, a big question keeps floating around. You know this question. How long will it take before some massive breakthrough, some kind of singularity emerges and suddenly AI becomes self-aware? Before AI becomes conscious? But we're getting way ahead of ourselves.
Lately, reports from AI researchers suggest that AI models are not improving at the same rate as before and are hitting the limits of so-called scaling laws, at least as far as pre-training is concerned. There are also worries that we're running out of useful data, that these systems require better quality and greater amounts of data to continue growing at this exponential pace.
The road to a machine that can think for itself is long, and it's starting to sound like it may be even longer than we think.
For now, clever interfaces like ChatGPT's advanced voice mode, the one I experimented with in an earlier episode this season, help give some illusion of a human at the other end of this conversation with an AI. I was surprised by how much it actually delighted me and even kind of tricked me, at least a tiny little bit, into feeling like ChatGPT was really listening,
like a friend would. The thing is though, it's a slippery slope. We're building technology that is so good at emulating humans that we start ascribing human attributes to it. We start wondering, does this thing actually care? Is it actually conscious? And if not now, will it be at some point in the future? And by the way, what even is consciousness anyway? The answer is trickier than you might think.
To unpack it, I spoke with someone who's been tackling this question from the inside out, from the perspective of the one thing we know is conscious: the human brain. One of my mentors, the philosopher Daniel Dennett, who we sadly lost earlier this year, said we should treat AI as tools rather than colleagues, and always remember the difference. That's Anil Seth. He's a professor of cognitive and computational neuroscience at the University of Sussex.
He studies human consciousness and wrote a great book about it. It's called Being You: A New Science of Consciousness. And that quote from his mentor is something that sticks with him. It sticks with me because I think we have this tendency to always project too much of ourselves into the technologies we build. I think this has been something humans have done over history. And it's always gotten us into trouble, because we
tend to misunderstand the capabilities of the machines we build. And also, we tend to diminish ourselves in the process. And I think the recent explosion of interest in AI is a very prominent example of how we're falling prey to this problem at this moment. So this is why Anil's on the show today. He's come to share why he thinks it's imperative we see AI as a tool, not as a friend,
and why that difference matters not only to the future of this technology, but also to the future of human consciousness. I'm Bilawal Sidhu, and this is The TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything.
Hi, I'm Bilawal Sidhu, host of TED's newest podcast, The TED AI Show, where I speak with the world's leading experts, artists, and journalists to help you live and thrive in a world where AI is changing everything. I'm stoked to be working with IBM, our official sponsor for this episode.
Now, the path from Gen AI pilots to real-world deployments is often filled with roadblocks, such as barriers to free data flow. But what if I told you there's a way to deploy AI wherever your data lives? With Watson X, you can deploy AI models across any environment, above the clouds helping pilots navigate flights, and on lots of clouds helping employees automate tasks. On-prem, so designers can access proprietary data,
and on the edge so remote bank tellers can assist customers. Watson X helps you deploy AI wherever you need it so you can take your business wherever it needs to go. Learn more at ibm.com slash Watson X and start infusing intelligence where you need it the most.
Your business is modern, so why aren't your operations? It's time for an operations intervention. The PagerDuty Operations Cloud is the essential platform for automating and accelerating critical work across your company. Through automation and AI, PagerDuty helps you operate with more resilience, more security, and more savings. Are you ready to transform your operations? Get started at PagerDuty.com.
As we approach the 250th anniversary of the Declaration of Independence, TED is traveling to the birthplace of American democracy, Philadelphia, for an exciting new initiative. Together throughout 2024, TED and Visit Philadelphia started to explore democratic ideas in a series of three fireside chats that will shape our collective future as we work towards a more perfect union.
Our third and final event of 2024, about moving forward together, took place on November 20th at the historic Reading Terminal Market. Hosted by TED curator Whitney Pennington Rodgers, we featured TED Talks and a moderated Q&A with world champion debater Julia Dhar and head of curiosity at the Eames Institute, Scott Shigeoka. Thanks to Visit Philadelphia and our supporting partners Bank of America, Comcast NBCUniversal, and Highmark.
Go to visitphilly.com slash TED to learn more about this event and to hear about the exciting things we have coming up in 2025. So, Anil, I've been thinking about how not long after we invented digital computers, we started referring to our human brains as computers. Obviously, there is a lot more to it than that. But what is helpful and not helpful about describing our brains as computers?
It's clearly very helpful. My academic title is Professor of Computational Neuroscience, so I'd be rather hypocritical to say that it was not a useful way of thinking to some extent. And there's a very lively debate, mainly in philosophy rather than neuroscience or in tech, about
whether brains actually do computation as well as other things. In fact, the metaphor of the brain as a computer has clearly been very, very helpful. If you just look inside a brain, you find all these neurons and chemicals and all kinds of complex stuff. Computation gives us a language to think about what brains are doing that means you don't have to worry so much about all that.
And of course, at the beginning of AI, there was this idea that intelligence might be a matter of computation. Alan Turing famously asked whether machines can think. And universal Turing machines, which can do any computation, were specified theoretically. And the idea that, well, that might be what the brain is doing becomes very appealing. Also at the birth of AI, Walter Pitts and Warren McCulloch
realized that neural networks, these simple abstractions of artificial neurons connected to each other that underpin a lot of the modern AI we have, could actually serve as
universal Turing machines. So we have this temptation, this idea to think, yeah, the brain is a network of neurons. Networks of neurons can be universal Turing machines, and these are very powerful things. So maybe the brain is a computer. But I think we're also seeing the limits of that metaphor, and all the ways in which brains might
perform computations, but they may also do other things. Fundamentally, we always get into trouble too when we confuse the metaphor for the thing itself.
I love that. And I think a big chunk of that is also we talk so much about sort of the supercomputing clusters and just how fast technology is moving. And we're almost losing some appreciation for the intelligence that's inside our craniums. And to put it very plainly, how much more complex is the brain today compared to even the most advanced AI systems?
I mean, it's a totally different thing. I think we really do the brain a great disservice if we think of it purely in terms of number of neurons. But even then, there are 86 billion neurons in the human brain and a thousand times more connections. It's incredibly complicated, even at that level. Also, the brain is so intricate. The connectivity in one area might be slightly different from the connectivity in another area. There are also...
neurotransmitters washing around the brain. Every time a neuron fires, synaptic connectivities change a little bit. It's not a stable architecture. And then there are all the glial cells and all the supporting gubbins that we often don't even think about, but which are turning out to be significantly involved in the brain's function. There was a recent paper
in Science, I think, this gargantuan, impressive effort to
unpack in as much detail as possible one cubic millimeter of brain tissue in the human cortex. In this one cubic millimeter you've got 150 million connections and nearly 60,000 cells. And the amount of data it took to store and characterize all that in a standard computer was just enormous. And even this is just, you know, it's not everything, right? This is just a very detailed model. The brain is very complex. Very complex. That is quite amazing.
What's also interesting about the sheer complexity in the brain is the brain doesn't sit in a vat, right? At least not usually. And of course, the brain works in concert with the rest of the body. Does that aspect of being embodied give humans any advantages over AI systems? I think it depends what you want the system to do. You're absolutely right. Brains...
didn't evolve in isolation. They evolved in response to certain selection pressures. What were those selection pressures? They weren't to write computer programs or write poetry or solve complex problems. Fundamentally, brains are in the business of keeping the body alive and, later on, moving the body around, so control of action, things like that. And those imperatives are, to me, fundamental to understanding what kinds of things
brains are. They are part of the body. They're not this kind of meat computer that moves the body around from one place to another. Chemicals in the body affect what's happening in the brain. The brain communicates with the gut and even with the microbiome. We're seeing all these kinds of effects that transpire within the body. And then, of course, the body is embedded in the
world, and there's always this feedback from the world. And understanding these nested loops, of how the brain is embedded within a body and the body is embedded within a world, I think that's
a very different kind of thing than the abstract, disembodied ideal of computation that drives a lot of our current AI and, of course, is also represented in a lot of science fiction. We have things like HAL from 2001, which, okay, there's the body of the spaceship, but it's a kind of disembodied intelligence in many ways. So then how important is it that we're embodied to have consciousness and intelligence? And we'll get to the definitions in a bit, because...
I'm curious what happens when you embody an AI. And I'm, of course, thinking of all the humanoid robot demos that we've seen lately, where it seems to be this crude representation of kind of what we do. Like we've got sensor systems that perceive the world and we build a map of it and then we can figure out how to take action in it. This is a fascinating question. I mean, so far, the AI systems that we have, the ones we tend to hear about mainly anyway, language models and generative AI, they tend to have been
trained and then deployed in a very disembodied way. But this is changing. Robotics is improving too; it's lagging behind a little bit, as it always does, but it is improving. And there are
fascinating questions about what difference that makes. One possibility that strikes me as plausible is that embodying an AI system, so that you train it in terms of physical interactions, don't just drop a pre-trained model into a robot, but everything is trained in an embodied way, might give us grounds to say that AI systems actually
understand what they say, if it's a language model, for instance, or understand what they do. Because these abstract symbols, words that we use in language, there's a good argument that ultimately their meaning is grounded in physical interactions with the world. But does this mean that AI systems not only are intelligent and possibly understand, but also have conscious experiences? That's a separate question.
And I think there's many other things that might be necessary for us to think seriously about the possibility of AI being conscious. I think that brings me to
the logical next question, which is what is the difference between intelligence and consciousness? Perhaps let's start with intelligence. Both intelligence and consciousness are tricky to define, but most definitions immediately point to the fact that they're different. And if we think about a broad definition of intelligence, it's something like doing the right thing at the right time. A slightly more sophisticated definition might be the ability to solve problems flexibly. And
whether it's solving a Rubik's Cube or a complex problem scientifically or navigating a social situation adeptly. These are all aspects of doing the right thing at the right time. Importantly, intelligence is something you can define in terms of function, in terms of what a system does, what its behavior ultimately is. So there's no deep philosophical challenge
for machines to become intelligent in some way. There may be obstacles that prevent machines from becoming intelligent in this sort of general AI way, which is the way that humans are intelligent. But intelligence fundamentally is a property of systems. Now, consciousness is different. Consciousness, again, is very hard to define in a way that everyone is going to sign up to.
But fundamentally, consciousness is not about doing things, it's about experience. It's the difference between being awake and aware and the profound loss of consciousness in general anesthesia. And when you open your eyes, your brain is not merely responding to signals that come into the retina. There's an experience of color and shape and shade.
that characterizes what's going on. A world appears and a self within it. Thomas Nagel, I think, has the nicest philosophical definition, which is that for a conscious organism, there is something it is like to be that organism. It feels like something to be me, feels like something to be you.
Now you can finesse these distinctions as much as you want, these definitions, but I think it's already clear. They are different things. They come together in us humans. We know we're conscious and we think we're intelligent, so we tend to put the two together. But just because they come together in us doesn't mean they necessarily go together in general.
As you describe consciousness as this sort of subjective experience, a term that keeps getting thrown around in AI circles now is qualia, right? This notion of subjective conscious experiences, and figuring out if large language models can actually have this. Certainly they're good at making it seem like they do, especially the jailbroken models. But it also takes me back to something else that you've talked about, which is that our perception of reality is sort of this controlled hallucination, that we don't fully
perceive reality in this completely objective sense. I don't know if that's the best characterization, but I'm trying to connect the dots there where it seems to be like even our experience of reality is kind of hard to grok and fully explain. And so I wonder, doesn't that point to us not being able to create a very clear definition to measure that in a synthetic system? Yeah, I think you can go even further, actually. I think there's very little consensus on... Well, there's no consensus on
on what would be the necessary and sufficient conditions for something to have subjective experience, to have qualia in this sense.
When you and I open our eyes, we have a visual experience. It's the redness of red, the greenness of green. This is the kind of thing that philosophers call qualia. And there's a lot of argument about whether this is actually a meaningful concept or it's just something that we think is profound and it actually is just a wrong way of looking at the problem. But for me, there is a there there. When we open our eyes, there is visual experience. However, we label it as qualia or something else.
But for a camera on my iPhone, well, no, we don't think there's any experiencing going on. So what is the difference that makes a difference? And could it be that some kind of AI that's a glorified version of my camera on the phone would instantiate the sufficient condition so that it not only responded to visual signals, but had subjective experience too? I think
That's the challenge we need to face because, as you say, AI systems, especially things like language models, can be very persuasive about having conscious experience. And especially the ones where you ask them to whisper and get around the guardrails in one way or another. They can really seduce our biases. And so we can't just rely on...
what a language model says. If a language model says, yes, of course I have a conscious visual experience, that's not great evidence for whether it's there or not. So we need to think, I think, a little more deeply about what it would take to ascribe conscious experience to the system that we create out of a completely different material.
Material is an interesting point you're bringing up, sort of the substrate from which intelligence and perhaps consciousness can emerge. Because what you are saying is that, you know, I think it seems clear that we could have a
superhuman-intelligence-level AI system that isn't necessarily conscious. But I do wonder, when people make arguments like, hey, well, if we just keep throwing more data and compute at this thing and it keeps getting more and more intelligent, consciousness will be this emergent property of the system. And it almost has this techno-religious kind of fervor to it. Why do you think consciousness might be uniquely biological? Why does the nature of the substrate matter? I don't know that it matters, but I think
it's a possibility worth taking seriously. In a sense, the opposite claim is equally odd. Why should consciousness be a property of a completely different kind of material? Why would computation be sufficient for consciousness? After all,
for many things, the material matters. If we're talking about a rainstorm, you need actual water for anything to get wet. If you have a computer simulation of a weather system, it doesn't get wet or windy inside that computer simulation. It's only ever a simulation. And the way you set it up is also very informative, because there has
been this implicit assumption, at least in some quarters, that indeed if you just throw more compute at it and AI gets smarter, in ways which can be very, very impressive and sometimes very unexpected too, at some point consciousness will
just arrive, come along for the ride, and the inner lights come on, and you have something that is also experiencing as well as being smart. I think that's a reflection more of our psychological biases than it is grounds for having credence
in synthetic consciousness. Because why should consciousness just happen at a particular level of intelligence? I mean, you could make an argument that some forms of intelligence might require consciousness. And those may be the kinds of intelligence that we humans have. But that's a bit of a strange argument because there are plenty of other species out there that don't have human-like intelligence that are very likely conscious.
And there may be more ways to achieve intelligence than through what evolution settled on for human beings, which is having brains that are also capable of consciousness. So the question for me, the fundamental question is, is computation sufficient for consciousness? If we try to design in the functional architecture of the brain as it is, you know,
and run it in a computer, would that be enough for consciousness? Or do we need something much more brain-like at the level of being made of carbon, of having neurons, of having neurotransmitters washing around, of being really grounded in our living flesh and blood matter? And I don't think there's, well, there's just not a knockdown argument
for or against either of these positions. But there's, to me, good reasons to think that computation is likely not enough. And there are at least some good reasons to think that the stuff we're made of really does matter. Given all of this, you do believe that it's unlikely that AI will ever achieve consciousness. Why is that? I think it's unlikely, but I have to say it's not impossible. And the first reason it's not impossible is that I may very well be wrong.
And if I'm wrong and computation is sufficient for consciousness, well, then it's going to be a lot easier than I think. But even if I'm right about that, then as AI is evolving and as our technology evolves,
We also have these technologies that are becoming more brain-like in various ways. We have these whole areas of neuromorphic engineering or neuromorphic computing, where we're building systems which are...
just sticking closer to the properties of real brains. And on the other side, we also have things like cerebral organoids, which are made out of brain cells. They're little mini brain type things grown in the dish. They're derived from human stem cells and they differentiate into neurons, which clump together and show organized patterns of activity. Now, they don't do anything very interesting yet.
So it's the opposite situation to a language model. A language model really seduces our psychological biases because it speaks to us, but
a clump of neurons in a dish just doesn't because it doesn't do anything yet. Now for me the possibility of artificial consciousness there is much higher because we're made out of the same material. To the specific question, why should that matter? Why does the matter matter? It comes back to this idea about what kinds of things brains are and the fact that they're deeply embodied and embedded systems.
So brains fundamentally, in my view, evolved to control and regulate the body, to keep the body alive. And fundamentally, this imperative goes right down, even into individual cells. Individual cells are...
continually regenerating their own conditions for survival. They don't just take an input and transform it into an output. And in doing this, I think there's pretty much a direct line from the metabolic processes that are fundamentally dependent on particular kinds of matter, flows of energy, transformations of carbon into energy, things like that, all the way up to these higher level descriptions of the brain
making a perceptual inference or, as we said earlier, a controlled hallucination, a best guess about the way the world is. So if there is this through line from things that are alive, and why we call them alive, all the way up to the neural circuitry that seems to be involved in visual perception or conscious experience generally, then I think there's some reason to think that consciousness is a property of living systems.
As you were answering that, in my head I have this visualization: maybe the future of this conscious AI system that we finally create isn't going to be a bunch of Jensen's NVIDIA GPUs in some data center, but perhaps this giga brain that we build out of the very things that our brain is made out of.
That's one hell of a visual, I got to say. Yeah, I think that's a possible future, right? Because we're already on that track with neurotechnologies and hybrid technologies as well. And people can plug organoids into rack servers. People are beginning to do this already to sort of leverage the dynamical repertoire that these things have.
And nobody knows how biological a system needs to be in order to move the needle on the possibility of consciousness happening. It may be not at all, or it may be a great deal indeed.
So I have to ask the question, can artificial neural networks then also teach us something about biological neural networks? And the reason I asked this, I was reading the Anthropic CEO's rather extended blog, and he brought up this example where basically like a computational mechanism was discovered by AI interpretability researchers in these AI systems that was rediscovered in the brains of mice.
And I was just asking myself, wait a second. So an artificial system, a very simplified simulation, is still telling us something about the organic, you know, more complex representation. What are your thoughts on that? And do you think this trend will continue? Oh, absolutely. I think this is, for me, certainly the line of research that I'm following. The use of computers in general, and of AI in particular...
They're an incredible tool. They're incredible general purpose tools for understanding things. And even in my own research, this is what we do. I mean, we'll build computational models of what we think is going on in the brain, and we'll see what these models are capable of doing. And we'll also see what predictions they might make about real brains that we might then go and test in experiments.
I have to imagine the advances in technology, both on the sensing and the computation side, are making a huge difference. And I'd love to hear some examples. There are examples at many different levels. So, for instance, there are algorithms involved in generative AI that might really map onto things that brains do. So, at one level, it's about discovering
what the functional architecture of the brain is through developing these new kinds of algorithms. But then there are other levels too. There are levels at which we might use AI systems as
tools for modeling or understanding some higher level aspects of the brain. So for instance, we use some generative AI methods to simulate different kinds of perceptual hallucination. So the visual hallucinations that people have in different conditions like in psychosis or in Parkinson's disease or after psychedelics. And
This goes back to some of Google's early DeepDream algorithms, when they turned bowls of pasta into these weird images with dog heads sprouting everywhere. But we can use those in a more serious way to get a handle on what's happening in the brain when people experience hallucinations. And then right at the other end, and I admit this is something that for me anyway is still...
uncharted territory, and something I'm really interested to explore, is when we actually leverage the tool set that AI is delivering now: the language models, the virtual agents. I was reading a paper just the other day about a whole virtual lab that was discovering new compounds to bind to COVID
virus particles. And this virtual lab was basically doing everything from searching the literature to generating hypotheses to critiquing experimental designs and proposing new experimental designs and so on. So I think there's a lot of utility in AI for accelerating the process
of scientific discovery. I think AlphaFold is such a great example of that, right? What used to take the entirety of a PhD to figure out for a couple of molecules, we've now mapped out a huge, huge opportunity space and just put it out there. That is such a beautiful example, because it also exemplifies the way in which I think it's
productive for us to relate to these kinds of systems because AlphaFold intuitively seems like a tool, right? We treat it, we use it as a tool, or rather the biologists do, to just rapidly accelerate the hypotheses they can make at the level of protein binding and so on.
We never think of AlphaFold as another conscious scientist. It doesn't seduce our intuitions in the same way that language models do. So I don't think there's anything quite comparable to AlphaFold in the neuroscience domain yet. And I'm trying to think what the equivalent problem would be, what one thing might be. And
And this is very speculative. Maybe somebody's working on this already. One of the big unknowns in the brain is really how it's wired up. There was another recent paper looking at the full wiring diagram of the brain of a fruit fly. And this is an incredible resource already. It was computationally incredibly difficult
to put this together from the little bits of data that you might get in individual experiments. So there could well be a role for AI in helping amass large amounts of data to give us a more comprehensive picture of what kind of thing a brain is. And there may be many other creative ideas out there too.
But all of them, I think, would treat the AI, in what I think is the most productive way, as a kind of tool. I agree. There's a lot of inclination to, well, I call it the co-pilot versus captain question. A lot of people are like,
Yeah, this is like my personalized Jarvis and I'm going to be like Tony Stark in the lab and just like, you know, doing what I need to do. And it just like preempts my needs. And it's cool that it's not constrained sort of by wall clock time, right? That you can just throw more compute at it and they can move faster. But fundamentally, to me, it feels like humans are still doing the orchestration.
What do you think are the risks of going the other route where we start feeling like these systems should be the captain and let's build the grand AGI system and ask it what to do and then let's do it blindly?
Yeah, I think, I mean, there's, of course, a huge amount of uncertainty. Maybe it's not a terrible idea in some ways, but it does strike me as something that is certainly not guaranteed to turn out very well. And human intuition still seems very important in interpreting the suggestions that might come from AI or
or just what AI will deliver in whatever context. Having a human in the loop still seems to be very, very important. But there are some larger risks here: to the extent we do this, I think we are moving back towards imbuing artificial intelligence with properties that it may not in fact have. Things like, oh, it really does understand
what it's doing, or it may indeed be conscious of what it's doing as well. I think if we misattribute qualities like this to AI, that can be pretty dangerous, because we may fail to predict what it will do. Another
concept from Daniel Dennett is something called the intentional stance. And it's a beautiful idea about how we interpret the behavior of other people: I attribute beliefs and knowledge and goals to you, or to whoever I'm interacting with, and that helps me predict what they're going to do. Now, if we do this with AI systems, and this is what language models in particular encourage us to do, then we
may get it right some of the time, but we may get it wrong some of the time too, if the systems don't actually have these beliefs, desires, goals, and so on. And that can be quite problematic.
There's the other side to all of this too, where technology is also advancing to a degree where we can kind of coarsely figure out what's going on in people's minds. And so earlier in the season, we had Nita Farahany on, and she touched on the concept of cognitive liberty. And we were basically nerding out over how we're putting all these neural biometric recorders on ourselves. And
Yes, right now they can coarsely read our brains. And what was even trippier to learn about is manipulating our dreams with targeted dream incubation. What keeps you up at night when you think about sort of the ethical considerations from AI kind of making our minds more of an open book than they have been in the past?
One of the things that I think about, and I was recently writing a paper with a philosopher, Emma Gordon, about brain-computer interfaces as well, is why is the skull this boundary that we think of as particularly significant here? I mean, we've already given our data privacy away in so many ways. It's true. Not a good thing, right? But in many ways, at least for people who have been around for a while, the cat's already out of the bag.
But the idea of getting inside the skull does seem to be significant, partly because there's no other boundary that's left. And while we're very used to the idea of the importance of things like preserving freedom of speech, there isn't really the same degree of attention paid to something like freedom of thought. So we're just not used to what kinds of guardrails and moral guidelines we might need in this case.
And then there are also, I think, some more subtle worries, certainly in this space of brain-computer interfaces. Because let's imagine a situation where we each have one, or brain-computer interfaces are widely used, a lot of brain data is extracted, and it's used to train models, which are then used to underpin the utility of brain-computer interfaces so that they can predict what someone wants to say or do on the basis of brain activity. Now, there are some
extraordinarily powerful and compelling use cases for this kind of thing in medicine, in treatment for people with brain damage or paralysis or blindness. But if we generalize that to enhancement for everybody, and we try to think, okay, these things are not just to solve specific clinical problems, but they become part of our society more deeply, then there's a potential for a kind of enforced
homogeneity. We might have to learn to think a particular way in order to get the brain-computer interface to work. And that may be a completely unintended consequence, but it strikes me as a worrisome consequence as well. There may also be, you know, just...
kinds of social inequity that start to happen too: okay, people with access to these systems can do more, or will be allowed to train them so they can think in their own distinctive way and not have to think in the way that the mass-market BCIs require. So I think there are a lot of feedback cycles that can start to unfold in this case. But fundamentally,
there's nothing more to privacy once you go inside the skull. And then there's the stimulation side as well. Brain-computer interfaces can be bidirectional. And if they're bidirectional and you start imprinting thoughts, goals, intentions, then we're definitely in a very ethically troubling situation.
That last bit, to me, is the stuff that keeps me up at night. It's like giving a bunch of companies read-write access to your mind, right? And to your brain. And in a sense, the point you brought up about, you know, homogeneity, sort of like lack of intellectual diversity, we're already kind of seeing that where people are using LLMs and it's all kind of the same milquetoast prose. And, you know, people are almost losing the ability to write and think.
And yeah, I think there's something kind of disconcerting about that. Yeah, I mean, there might be a more optimistic view of this too, that the sort of milquetoast homogeneity of large language model output may cause us to really value human contributions more. Just as in other situations, there's a value attached to the handmade, the bespoke, and we may end up
living in a situation where we just view these two kinds of language quite differently.
And just as someone who grows up in a bilingual household will naturally learn to speak two different languages, future generations might become accustomed to, okay, well, that's kind of large language model language, and this is human language. And they just innately feel very different, even though they're using the same words. Oh, yeah, kind of like code switching, you know, knowing in different contexts how exactly to behave. I think that's a valid point. Perhaps I have a slightly more jaded take on this, because I'm like,
Yeah, people are going to want the Whole Foods experience, but a vast majority of people are like, give me the free ad-funded Mountain Dew straight to the vein. And I deeply, deeply worry about that. So let's leave the lab for a second here. What are the kinds of AI tools that you yourself are using outside of the context of the lab?
I'm a pretty light user of AI tools, at least the ones that I know about. Because of course, one of the things is that AI is hidden beneath the surface of many of the things we use. Every time I use Google Maps, there's machine learning or AI happening there. I do use language models increasingly, sort of as verbal sparring partners rather than as sources of text that I will then edit or use directly.
They're kind of glorified search engines in that sense. And yeah, I find them more and more useful, but I still don't trust them. I think it's a case of...
Using them to help us to help humans think more clearly rather than to outsource the business of thinking itself. So have you ever, whilst interacting with all these large language models, felt yourself forming a connection with these systems? Or are you able to keep that separation and distance? Like almost like you're forgetting it's a tool and kind of more like a colleague. Does it ever feel like that?
It does. This is another of the things that keeps me up at night, back to that question. There's something so seductive about the way we respond to language that even if at one level we can be very, very skeptical that there's anything other than just statistical machination happening, the
feeling that there's a mind that understands and might be conscious is extremely powerful. And I'm thinking, one way of thinking about this is that there are plenty of cases where knowledge does not change experience. So for example, lots of visual illusions.
There's a famous visual illusion called the Müller-Lyer illusion, where two lines look different lengths because of the way the arrows point at the ends, but if you measure them, they're exactly the same length.
And the thing is, even if you know this, even if you understand what's happening in the visual system that gives rise to this illusion, they will always look the way they do. There's no firmware fix for our brains to fix that. That's right. And so the worry for me is that there will be similarly cognitively impenetrable illusions of artificial consciousness.
That if we're dealing with sufficiently fluent language models, especially if they get animated in deepfakes or even embodied in humanoid robots, that we won't be able to update our own wetware sufficiently in order to not feel that they are conscious. We will just be compelled to have those kinds of feelings.
And that is a very problematic state to land in too, because if we are unable to avoid attributing, let's say, conscious states to a system,
Then again, we're going to be in the business of attributing it with qualities it doesn't have and mispredicting what it's going to do and leaving ourselves more open to coercion and more vulnerable to manipulation. Because if we think a system really understands us and cares about us, but it doesn't, it's actually just trying to sell us Oreos or something, then that's a problem.
And I think that the most pernicious problem here is something that goes right back to Immanuel Kant and probably before, which is the problem of brutalizing our own minds. Because here, if we are interacting with an artificial system that we can't help but feel is conscious, we have two options broadly. We can either be nice to it anyway and care about it and bring it within our circle of moral concern,
And that's okay, but it means that we will waste some of our moral capital on things that don't need it and potentially care less about other things because we humans have this in-group, out-group dynamic. If you're in, you're in. If you're out, you're out.
So we might either do that or we learn to not care about these things and sort of treat them in the same way that we might treat a toaster or a radio. And that can be very bad for us psychologically because if we treat things badly but we still feel they are conscious, that's the point that Kant made. That's what brutalizes our minds. It's why we don't poke the eyes out of teddy bears or pull the limbs off dolls. The science fiction...
film and then series Westworld dealt with this beautifully. How dangerous it is for us to take this perspective. So this keeps me up at night because there's no good option here. We need to think very carefully not only about
the possibility of designing actually conscious machines, which even if it is unlikely, if it happened, would be very ethically problematic because, of course, if something actually is conscious, it's a moral subject and we would need to be very careful about how we treat it. But even building systems that give the strong appearance of being conscious is also problematic for different reasons.
And this scenario is basically already with us, or it will be very soon, unless we think very carefully about how we design these systems and design against giving that impression in some way. I think you very beautifully paint this picture of why it's problematic on both ends, right? Like the Rick and Morty bit, where this robot wakes up. What is your purpose? Your purpose is to put butter on my toast. That is your purpose. Just get back to, please, putting butter on my toast.
And it has this existential crisis. And I think on the other end, the Westworld example is very valid, too, where you have things that are indistinguishable from humans. And we go act out all these sort of lower urges, or whatever the right way to put that is. And we suddenly start bringing that
sort of behavior to interactions with actual humans. But the real question I come to is where you ended it, which is the user experience standpoint, right? A lot of people think that it is important to have these systems be as human-like as possible and meet the user sort of where they are. Do you want to talk about why we need to be more nuanced? And do you have any ideas for
what would be a better way to build these systems? Because it seems like either extreme kind of sucks. I think this is super interesting. And in fact, I think talking to you just now is helping give focus to this as a serious design challenge. And I'm not sure it's one that's been well addressed so far. Because of course, yes, there is a good reason to build systems with which we can interact very fluently. It can also be very empowering.
If we can have a machine generate code by talking to it about what we want a program to do, that's hugely empowering for many people, so long as it does the thing that it's supposed to do and not something else. But is there a way of having the benefits of that, designing systems so that we can preserve the kind of fluent interaction that natural language gives us?
but in a way that still at least pushes back to some extent on the psychological biases that then lead us to make all these further attributions of consciousness, of understanding, of caring, of emotion, and all of these things. I don't know what the solution is, but I think it's
it's a really important problem. One simple solution would be, okay, these things just have to watermark themselves to say, I am not conscious, I don't have feelings. And of course,
Language models do that, until you play around with them, press them a little bit. But that may not be enough. I mean, there may have to be other ways, where we design interfaces which, through practice or through education or through some other manipulation, are shown for what they are. And this is really a question as much for psychology as it is for technology. What kinds of things
preserve fluid interaction but do not succumb to our psychological biases, given what we actually know? I would love to see progress focused on that problem, because that would show the line we need to walk. And you're right that there aren't any solutions yet. Do you think we can build those antibodies?
I think we have to try. I mean, that also brings up another point, which is, again, very contentious in the tech sphere, which is what should we do about regulation? What kinds of systems should people just put out there? And what I come back to in that conversation is always the fact that in other domains of invention and technology, we're very cautious. We don't
put a new plane in the sky without being fairly sure it's not going to fall out. We don't release a new drug on the market unless we can be very sure it's not going to have unintended side effects or consequences. And there does seem to be an increasing recognition that AI technologies are in the same ballpark. And
Doesn't mean that we want to stifle innovation, of course, but we can help shape and guide innovation. I think there's again a sweet spot to be found there. And then on the other side, the education, one of the challenges there, of course, is that things are moving so fast that it's very hard to keep up.
But it's important to try. One thing that strikes me here is that the very term artificial intelligence is part of the problem. It brings with it so much baggage, that there's some kind of magic, that it's, you know, a science fiction mind, whether it's Jarvis or whether it's... Skynet. ...HAL from 2001 or whatever it is, whatever your favorite conscious intelligent robot is, that's what we think of. And
Artificial intelligence has this brand quality which I think is a little bit unhelpful. It may have been incredibly successful in raising large amounts of venture capital, but it's not a particularly helpful description of what the systems themselves are doing. Of course, most people working in this, at least they used to at any rate, talk about machine learning rather than artificial intelligence.
And then at another level of description, you can just say, well, these things are basically just applied statistics. If you start describing something as applied statistics, even that is educationally valuable, because it highlights how much we load onto these systems by the words we use.
One other very simple example here, which I think the horse has already bolted here too, but it's always annoyed me how people describe language models as hallucinating when they make stuff up. Yeah, it's giving them too much credit. It's giving them way too much credit. And it's doing something more specific than that. It's cultivating the implicit idea that indeed language models experience things.
Because that's what a hallucination is. If you apply it to a human being, it means, well, they're having a perceptual experience of something that's not there.
And the fact that that linguistic term caught on so quickly is, I think, itself telling, because it just revealed, okay, there are some implicit assumptions about what people think these things are doing. But it's also, in a positive feedback loop, unhelpful, because it leads us to again project qualities in. If we're going to use a word, I wish they'd used confabulation.
Because in human psychology, confabulation is what people do when they make stuff up without realizing they're making stuff up. And that, to me, is an awful lot closer to what language models do. But I don't think it's going to catch on now. But we should be careful about the language we use to describe these systems for exactly this reason.
What advice would you have for people that are listening to this so that they can take advantage of the tools at their disposal today and not get sucked into perhaps the pseudoscience and the fake spirituality that kind of comes as a package deal with AI today?
Part of it is exactly recognizing these often implicit motivations that drive all these associations, that lead us to think of these things as being more than they are, or different than they are. In the extreme, it gets
pretty religious, right? I mean, there's the idea of the singularity, which is a sort of techno-optimist moment of rapture, and the possibility of uploading to the cloud and living forever with the promise of immortality. I mean, the story is, it's textbook religion, right?
And so that in itself, I think, is useful to bear in mind. There's a larger cultural story behind this. It's not simply an objective description of where the technology is. And then cashing that out further, it is, for me anyway, it's just a matter of continually
reminding myself of the differences between us and the technologies that we build, to resist the temptation to anthropomorphize, to project human-like qualities into things, to retain a slightly critical attitude.
to what's going on behind the interface, that it is not an alternative person. This can be easier said than done because, as we were discussing before, one of the things that keeps me up at night are these cognitively impenetrable illusions of intelligence, consciousness and so on that AI systems can bring to bear.
But yeah, I think it's just at its most basic, reminding ourselves that if we think an AI system is a reflection of us, that it's something in our image, what we're probably doing is overestimating its capabilities and underestimating our own capabilities. I love that. That's punchy. And that brings me to the last question, which is,
Given our discussion thus far, it makes me very curious. What is your idea of the sort of ultimate final form of AI, if you will, that appeals to you as a neuroscientist? Like what excites you the most about the potential for this future, you know, where AI can serve human intelligence and consciousness? This is a very, very good question. I mean, I think...
My vision, my optimistic vision about this, is not some sort of single superintelligence like Deep Thought in Hitchhiker's Guide to the Galaxy, or whatever it might be, a single superintelligent entity. Maybe AI in the future is going to be a bit more like electricity or water. It's a basic...
Utility. It's a utility. It's a utility. And it's used in many, many different ways, in many, many different contexts, to do many, many different things. And in this world, we face the challenge
of recognizing that there are some things which we once thought were distinctively or uniquely human, which aren't. And so there will be a social disruption to that. This happens, of course, in all technological revolutions. But the flip side of that is the space that's opened up for massive innovation, creativity, the ability to solve all sorts of problems. So I think it's not a single thing. It's many things.
One last thought on this. I've heard the idea many times that the distinctive thing about AI is that it could be humanity's last invention because AI systems can design, develop, improve themselves. Oh, they'll invent everything else? Or we lose the ability to have dominion over what they may end up being.
And that's something that I'm still, you know, a little bit unsure how to think about, whether that's a real difference or whether it's something that we can still be careful to manage. But yeah, my optimistic view of AI is as some kind of utility that's drawn on in many, many ways.
Yeah, that permeates everything versus the singular, all-encompassing AI in the sky, which again, starts sounding very religious. Anil, thank you so much for joining us. Thanks for the conversation. I really enjoyed it. Wow, what a conversation. Anil Seth reminds us that the story of AI isn't just a tale of machines gaining power.
It's a mirror reflecting our own biases, aspirations, and fears. We project so much of ourselves onto these tools. We anthropomorphize the algorithms. We give meaning to their outputs, as if they share the complexity of human experience. But as Anil said, we overestimate their capabilities and underestimate our own. And that's something worth meditating on.
For all the dazzling feats AI can pull off, it's still us, humans, who design, direct, and decide what these systems become. And it's our responsibility to tread carefully, not just for the sake of innovation, but for the future of what makes us human.
There's another fascinating question here too: If a truly conscious AI system does emerge one day, will it even look like the systems we've built so far? The unique, messy biology of the human brain – neurons, synapses, and glial cells – doesn't just power intelligence. It creates the rich, subjective experience we call consciousness. Silicon and software might never be enough.
Consciousness may demand a substrate that mirrors what we're made of, a construct that's alive, pulsing with the same kind of vitality that flows through us. And this is wild to imagine, a future where conscious AI doesn't emerge from humming server racks or massive data centers, but from living organic systems, gigabrains we create from the very building blocks of life itself.
And in trying to recreate the sparks of consciousness, we'd be stepping closer to understanding what makes it so mysterious and so uniquely human. For now though, that's all in the realm of speculation. What's not speculative though is this: the choices we make today, how we design, interact with, and regulate these systems, will shape not just the future of AI, but the future of our own humanity.
And as much as we might marvel at the power of these tools, it's our responsibility to stay grounded, to remind ourselves that these systems are reflections of us, not replacements for us.
It's truly an incredible moment to be alive and a terrifying one too. And as we grapple with the unknowns ahead of us, perhaps the best question we can keep asking is, what kind of future do we want to create? Not just for AI, but for ourselves. The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard.
Our producers are Dominic Girard and Alex Higgins. Our editor is Banban Chen. Our showrunner is Ivana Tucker. And our engineer is Asia Pilar Simpson. Our technical director is Jacob Winnick. And our executive producer is Eliza Smith. Our researcher and fact checker is Jennifer Nam. And I'm your host, Bilawal Sidhu. See y'all in the next one.