
AI Is Nothing Like a Brain, and That’s OK

2025/5/20

Quanta Science Podcast

People
Samir Patel
Yasemin Saplakoglu
Topics
Yasemin Saplakoglu: Artificial neural networks are deeply inspired by the human brain, but the two differ fundamentally in complexity and diversity. The brain has tens of billions of neurons of different types, plus glial cells, and it handles many tasks at once through complex chemical processes and webs of information. Even so, studying the similarities and differences between AI and the human brain is worthwhile: it can help us improve AI systems and deepen our understanding of how the brain works. I think exploring the different ways AI and the human brain are built will be fascinating. Samir Patel: We wanted to ask whether it's accurate, useful, or troublesome to think of AI in terms of human thought. I found the history of, and relationship between, artificial neural networks and actual neural networks to be remarkably complex, and it will only get more complicated over time. AI may develop in a completely different direction and arrive at a completely different definition of thought or intelligence.


Transcript


Deep down, I know it's not thinking exactly, this chatbot I'm testing out, but it's a pretty convincing imitation. I know, or I think I know, that the large language model behind it is really just crunching massive amounts of data to predict the next word to say and ultimately tell me what I want to hear or what I asked for. But wait a second, isn't that kind of what my brain is doing all the time too?

Welcome to the Quanta podcast, where we explore the frontiers of science and math. I'm Samir Patel, editor-in-chief of Quanta magazine. For the inaugural episode of our new podcast, we're going to be talking about AIs and human brains and exploring how different and maybe a little alike they actually are.

The language of human thought, intelligence, learning, neurons, even hallucination, is built into the way we talk about AI today. So when we were planning a recent special issue about how AI is changing math and the sciences, we wanted to ask whether it's accurate or useful or troublesome to think of AI, and language models in particular, in these terms.

I'm here with the author of one of the pieces from this package, which compared artificial neural networks with networks made of actual neurons. It's Quanta staff writer Yasemin Saplakoglu. Welcome, Yasemin, to the show. Thanks for having me and letting me nerd out about this. I'm excited to nerd out about this. So one of the questions we like to ask going into these interviews is, what's the big question here? Where are we going to end up at the end of our conversation?

So the big question: you have these incredible AI systems and you have this incredible brain. How similar are they? How different are they? And why are we asking this question in the first place? Is there any use to it? Right.

Now, I think part of the reason that we refer to the systems behind AI as neural networks is because they were inspired by the brain, by biology. So what's the historical connection between neuroscience and the way that we started to program computers?

Artificial neural networks are deeply, deeply inspired by the human brain. And this isn't surprising, because we've long been enchanted by the brain. You know, every piece of technology, we've always compared to it. It's what we know. One of my sources put it in a really good way, saying it's the most complex piece of active matter in the known universe. That kind of blew my mind, by the way. Like, known universe: literally the most complex system or, like, chunk of stuff in the universe is the several pounds of weird matter that's inside each of our heads.

Right. It's the meat in our heads. That sort of blows my mind. And even when we're doing the, oh, you know, studying astrophysics makes us feel so small thing, like, that is a fact that makes me, makes humans, feel special somehow. It's unusual. It's weird. Right. And the brain is actually what is probing all of these questions. It's helping us to ask all of these existential questions in the first place. You know, it's our consciousness. It's our intelligence. It's our ability to be curious. And it's all coming from this raw material that we're carrying around in our skulls.

Right. So let's go back a bit to this idea of the brain inspiring computers. It feels like, whether it's science fiction or something like that, recreating the way that we think in a digital space has long been a goal or a story to tell. But actually, some of our first neural networks were indeed inspired by at least some kind of abstraction of the way our actual brains work. Yeah, even the earliest artificial neural networks were deeply inspired by insights we had from neuroscience. And so one of the earlier neural networks was called the perceptron.

Okay, like that's better than science fiction. It sounds like it comes from Flash Gordon or something. The Perceptron. Right. Sounds crazy, doesn't it? And it wasn't particularly advanced. It could read a simple code, a series of holes that are punched into a card, and it could predict whether the next hole was going to appear on the left or on the right. And it was a binary code, so it would say zero or one. But it was one of the first really amazing machines because it could learn. When we say it could learn, what does that mean?

So I'll give a brief overview of an artificial neural network. It's a bunch of interconnected nodes that work together to process some sort of information. Right. And the nodes essentially are like digital versions of our neurons.

They either turn on, or they turn off, or they don't react at all, or whatever it is. Right. And learning in this case is adjusting the connections between those nodes to have a certain weight, which influences how each node communicates with the next ones. And this is what's happening in our brains all the time. We have an input that's going into a neuron, and that neuron, based on its other connections, is deciding how strongly that input is going to influence what happens next in the brain.
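To make that concrete, here's a minimal sketch in Python (our illustration, not code from the episode) of a single artificial node: it takes a weighted sum of its inputs and "fires" only if that sum clears a threshold.

```python
def artificial_neuron(inputs, weights, threshold=0.5):
    """One node: weighted sum of inputs; 'fires' (returns 1) past a threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# The same inputs give different outputs as the weights change --
# "learning" is exactly this adjustment of the weights.
print(artificial_neuron([1, 1], [0.1, 0.2]))  # 0: weak connections, no firing
print(artificial_neuron([1, 1], [0.4, 0.3]))  # 1: strengthened connections fire
```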

Exactly. A big way that the brain learns is through synaptic plasticity. So the neurons that are talking more and more are getting stronger connections, and the ones that aren't are getting weaker connections. This is sometimes summarized as "neurons that fire together, wire together." And this idea was basically taken into the earliest forms of the artificial neural networks and used as what are known as weights.
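That rule is just as compact in code. Here's a toy Hebbian-style update (again our sketch, with made-up numbers): a connection gets stronger whenever the neurons on both ends are active at the same time.

```python
def hebbian_update(weight, pre_active, post_active, learning_rate=0.1):
    """Strengthen a connection only when both neurons fire together."""
    if pre_active and post_active:
        weight += learning_rate
    return weight

w = 0.2
for _ in range(5):                 # five co-activations later...
    w = hebbian_update(w, True, True)
print(round(w, 1))                 # ...they've "wired together": 0.7
```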

So even going back to the perceptron, when was that, like the 50s? Yes. Okay, so 1950s. They're already thinking, okay, let's make a digital version of the brain and let's have these nodes that can turn on or off.

And let's see if we can build some of that synaptic plasticity into the system, so that whatever we put in, we can tweak it as it goes along. That's what the learning part of it is, right? Exactly. Yes. If that was happening in the '50s, why are we just now getting powerful neural networks? So...

The perceptron was really amazing, and people were super excited about it. But at the time, artificial learning didn't really take off. You know, computing power was really limited and there wasn't enough funding. So a lot of people in the field basically just went back to programming, and artificial intelligence kind of got neglected for a while. And part of the advances that led to today's really powerful neural networks also came in neuroscience in the decades that followed. We had to know an awful lot more about the brain to be able to create a model of it in a digital space. Right. It was advances both in neuroscience and in the computer science fields that really led to today's incredible artificial neural networks.

So do they still have some kind of brain-like structure to them? Yeah. So what distinguishes today's amazing neural networks from the very early ones, like the perceptron, is that they have many, many more layers of these interconnected nodes, so they can do a ton more and they can do it in a more complex way.
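For a rough sense of what "many, many more layers" means, here's a hypothetical sketch of a deep network's forward pass: the same weighted-sum nodes as above, just stacked, with each layer's outputs feeding the next layer's inputs. (Real systems add far more machinery; this assumes plain dense layers and a simple ReLU nonlinearity.)

```python
import random

def layer(inputs, weights):
    """One layer: each node takes a weighted sum of all inputs (ReLU applied)."""
    return [max(0.0, sum(x * w for x, w in zip(inputs, node_weights)))
            for node_weights in weights]

def forward(x, network):
    for weights in network:  # stacking layers is what makes a network "deep"
        x = layer(x, weights)
    return x

random.seed(0)
# Three stacked layers of 4 nodes each, over a 4-value input.
net = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
       for _ in range(3)]
print(forward([1.0, 0.5, -0.3, 0.8], net))
```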

Okay, so we've gone through the ways in which it's similar, right? The nodes in a neural network, which we think of as neurons, they fire or they don't, just like our neurons. We can train them. But there's a limit to this comparison, right? There's a but. Yeah, like what's the big but here? Right. If you squint your eyes, sure, they're similar, and that makes sense, because artificial neural networks were deeply inspired by all of this neuroscience and the most complex machine that we know, which is the brain. But when you dive in deeper, they're really not similar at all, because the brain is just so complex and filled with such diversity. And none of that is really represented in the artificial neural networks that we have today.

When you say diversity, what kind of diversity are we talking about? First of all, there's 86 billion neurons in the brain. 86 billion with a B? 86 billion with a B. Okay. But they're not the only cells that exist in the brain. You have billions and billions of glia that are also doing a bunch of different tasks. And even if you zoom into the neurons themselves, neurons aren't all the same. You know, they look different, they behave differently, they're connected to each other in different ways. There's just complexity and diversity at any scale that you look at. Right. It's hard to take millions of years of evolution, an entire lifetime of learning, and all this complexity that builds up in the brain over time and simulate that digitally with nodes or neurons that are all the same. Exactly.

What we're saying is that the brain is just orders of magnitude more complex in its architecture than what we're able to do with a neural network. And it goes beyond even its architecture, to its chemistry. The neurons are communicating through this vast network of different types of molecules that they're throwing at each other.

I mean, if you think about it, at any given moment, the brain is doing so much. It's helping to regulate your body temperature. It's helping you think about what you had for breakfast or what you're going to have for lunch. It's pulling up a memory. It's helping you smell. It's helping you see. It's helping me talk to you right now. And this is all happening simultaneously, through all of this chemistry and complexity and webs of information flow.

The idea that you're talking about, which is that a neural network is an extremely simplified version of what's happening in the brain, isn't a huge surprise to me. But what surprised me when I read your story was that there is actual utility to the comparison between a brain and an AI. So how are computer scientists thinking about using some of this complexity from actual neurons to make their AIs better?

So there are so many different efforts that are currently going on in this area. Some of the efforts are around, for example, figuring out ways to bring some of this diversity that the brain has into the artificial neural networks, and also figuring out ways to use neuromodulators, which are a subset of neurotransmitters that the brain sends to communicate. And if they can figure out ways to include the information that neuromodulators carry in the brain, then they might be able to create artificial intelligence systems that can better reason, or have a little bit more of an ability to have knowledge.

So there are groups that are actively working on this kind of stuff? Yes, there are groups that are working on this stuff. And then there's other groups that are trying to combine the two fields by figuring out whether they can attach biological neurons to artificial components.

Which sounds crazy. Okay. We're getting a little cyborg-y here. But actually incorporating live neurons into a digital network. Right. So they think that maybe if they could just use the biological neurons themselves, instead of trying to abstract them into some sort of digital form, then they might be able to improve these systems, whether it is to help them reason better or to make them more energy efficient, which is something that the brain is really good at.

I'm guessing that research is still in the early stages. Yeah. Because I feel like if we had AIs that were powered by actual living neurons, we would know about it. Yes. This is totally, totally early stages. Yeah. That's very interesting. But let's take it from the other side then. In what ways are neuroscientists and our understanding of the brain benefiting from the way AIs are working?

It's kind of funny, because for decades and decades, neuroscience was the field that was influencing computer science with all of these developments. But now the pendulum has kind of swung the other way, where many neuroscientists are actually looking to AI to help them understand biology and the human brain itself. How can an AI that we know is a grand simplification of a real, actual neural network, how can that tell us more about the brain itself? Artificial neural networks are more similar to parts of the human brain, as a model, than any model we've had in the past. So neuroscientists are looking at artificial neural networks, such as those that are involved in language processing or image processing, to see if they can learn something about how the brain does it.

This is interesting, because obviously if we want to know precisely how a brain reacts when it sees a picture of a cat and recognizes it as a cat, we can't open up that neural network to understand what's happening within it. But if we have a simplified version of it that's operating on what we think is a similar architecture, an AI image-recognition program, we can crack that open as much as we want and try to see what's going on inside of it. Exactly. And there are still problems, of course, with understanding exactly what's happening within neural networks, as there are with the brain.

There's still so much unknown. But even given that, artificial neural networks we can still kind of probe in this way, and they're still able to give us insights that we can't get from looking at our own brains. Yeah. I think my big takeaway from your story, which is excellent, by the way, was just how complex both the history and the relationship between actual neural networks and artificial neural networks are, and that it's going to actually get more complicated as we keep going. Right. So that eventually we could see a world where AIs are actually built on something that is much more like the brain than we would have anticipated, or they could evolve in a completely different direction and have a completely different definition of what constitutes thought or intelligence. Totally. Totally.

One of my sources said something really great. Thomas Nassolaris, a neuroscientist at the University of Minnesota, said, "It will never not be interesting to compare humans to AI because here we have a second form of intelligence."

Who's to say that we need to build that intelligence in the exact same way that evolution has built our intelligence? You know, there are different ways to do it. And probing and looking into what those different ways are, I think, is going to be super, super interesting to see in the coming years. Yasemin, before you go, we would like to get a recommendation. You could recommend a book or music or an article, whatever. What's exciting your imagination these days?

I recently read an article in The New York Times called "She Is in Love With ChatGPT," by Kashmir Hill. It was about people who are actually falling in love with chatbots like ChatGPT because, you know, they've gotten so realistic and almost human-like in their answers. Especially as I was reporting this story, I found it to be, you know, a fascinating dive into a new kind of era that we're in.

That's amazing. Thank you, Yasemin. Really glad you joined us. We would love for everyone who's listening to check out Yasemin's article. What's the title of the article? "AI Is Nothing Like a Brain, and That's OK." And it's part of our recent package that we dropped on the site called "Science, Promise, and Peril in the Age of AI." There's some other great stories in there. There's some great explainers, a glossary. You should definitely check it out when you get a chance.

Also on Quanta this week, you'll see stories about the phenomenon of chirality, in which molecules differ from their own mirror images, and a Q&A with physicist Sidney Nagel, who delights in solving the mysteries of everyday life. We'll leave you today with the musical project of Dr. Simone Sun, a neuroendocrinologist at Cold Spring Harbor Lab.

In addition to research, they also make electronic music using their experimental data from neurons as raw material. This is from a track called Analyze, which can be found on their website, simonesun.com. The Quanta Podcast is a podcast from Quanta Magazine, an editorially independent publication supported by the Simons Foundation. I'm Quanta's Editor-in-Chief, Samir Patel.

Funding decisions by the Simons Foundation have no influence on the selection of topics, guests, or other editorial decisions in this podcast or in Quanta Magazine. The Quanta Podcast is produced in partnership with PRX Productions. The production team is Ali Budner, Deborah J. Balthazar, Genevieve Sponsler, and Tommy Bazarian. The executive producer of PRX Productions is Jocelyn Gonzalez.

From Quanta Magazine, Simon France and I provide editorial guidance, with support from Matt Karlstrom, Samuel Velasco, Simone Barr, and Michael Kenyongolo. Our theme music is from APM Music. If you have any questions or comments for us, please email us at quanta@simonsfoundation.org. Thanks for listening. From PRX.