Artificial intelligence has been around at least since the early 1950s, when researchers at the University of Manchester made a machine that could play checkers and a machine that could play chess. But at the end of 2022, 70 years later, OpenAI casually released an early demo of ChatGPT. And suddenly, AI was all anyone could talk about.
Even if you were steeped in the tech industry, it seemed to explode out of nowhere like a sci-fi story come to life. High schoolers had a whole new method for cheating on homework. Coders had a new automated tool that could debug code across dozens of programming languages. Users watched in amazement as the blinking cursors unspooled entire stories or screenplays in a matter of seconds.
And ever since that first release, AI has stayed at the top of the news cycle. How to use AI for family time. BBC panic and possibility. What workers learned about AI in 2023. Google pitches media outlets on AI that could help produce news. AI-generated garbage is polluting our culture. Mainstream media covered the emerging capabilities of AI and the uncanny improvements. But even more than that, it covered the reactions.
Highly prominent tech leaders and scholars, even celebrities, had all sorts of grandiose statements to make about AI's future. And with all of these predictions hitting social media, we started to hear from two polarized camps.
There are the techno-optimists, who claim that AI is going to solve major global issues and usher in a world of radical abundance. Marc Andreessen, author of the Techno-Optimist Manifesto, wrote that AI is quite possibly the most important and best thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.
For the so-called techno-optimists, soon enough we'll all be living in a world with boundless creativity and invention. And then, there are the pessimists. The people who believe we're going to create a super intelligence, which could take off without us and pursue arbitrary goals that could destroy human civilization. These folks toss around terms like P-Doom, the probability that AI will annihilate everyone on the planet.
Some of them self-identify as Doomers, like Eliezer Yudkowsky, who proposes that a super-intelligent AI could commandeer weapon systems against humans if it comes to see us as a threat. The pessimists are picturing dystopia, urging us to slow down or stop development altogether, roadmapping catastrophic outcomes. I want to show you something, so please bear with me.
Hey, ChatGPT. My producer Sarah and her dog are on one side of the river. They want to cross the river. There is a boat available. What steps do they need to follow? Write the solution in bullet points and be as concise as possible.
Sarah gets into the boat and rows to the other side of the river. What about the dog? Sarah leaves the boat on the other side and walks back to the original side. Sarah puts her dog in the boat and rows with the dog to the other side of the river. What? Why not do that? Sarah and her dog disembark from the boat on the other side. Wow. Okay. This is the technology that has the power to save or destroy humanity? Sometimes it seems like all the hype around AI is just that. Hype.
I'm Bilawal Sidhu, and this is the TED AI Show, where we figure out how to live and thrive in a world where AI is changing everything. Want a website with unmatched power, speed,
and control? Try Bluehost Cloud, the new web hosting plan from Bluehost. Built for WordPress creators by WordPress experts. With 100% uptime, incredible load times, and 24-7 WordPress priority support, your sites will be lightning fast with global reach. And with Bluehost Cloud, your sites can handle surges in traffic no matter how big. Plus, you automatically get daily backups and world-class security. Get started now at bluehost.com.
When it comes to hype, Gary Marcus is one of the main people telling us to tone it down.
Gary self-identifies as an AI skeptic, and that's really what he's known for in the AI industry. He doesn't think that AI is going to save us or make us go extinct. And he doesn't think that the large language models that have captivated the public imagination over the past few years are anywhere close to intelligent. Nor does he think that they will be if we continue down this training path.
But he does think that the hype around generative AI has real consequences, and that there are concrete harms we should take into account when we think about what kind of AI future we want.
Gary, who was a professor of psychology and neuroscience at NYU, is known for his sharp critiques of generative AI models. And even though a number of tech people portray him as someone who just hates AI, he actually co-founded his own machine learning company, which was acquired by Uber in 2016. So just because he's skeptical of the current path AI is on doesn't mean he thinks we should avoid building advanced AI systems altogether.
His latest book will be out this fall; it's titled Taming Silicon Valley: How We Can Ensure That AI Works for Us. So Gary, there has been so much hype about AI. And it's funny, because when you go talk to, you know, let's say non-technical knowledge workers, they have this perception of what AI can do when they read the news or they go on Twitter, and then they go use the tools and they're like, holy crap, this kind of sucks, right?
And I think it's almost frowned upon to discuss the limitations of the technology. Oh, I'm hated. Like, I think people have me on their dartboards and stuff. You are often portrayed as someone who's an AI naysayer, but you're actually someone who's devoted your life to AI. So I want to go back to the beginning and ask you, when you were first getting started, what was it about AI that excited you? I mean, I've been interested in it since I was a kid. I learned to program on a paper computer when I was eight years old.
And I very quickly started thinking like, how do we get it to play games? I got interested in game AI. And something not everyone will know about me is I didn't finish high school. And the reason I was able to not finish high school is because I had written a Latin-to-English translator on a Commodore 64. And that allowed me to skip the last couple of years of high school. I was already thinking about AI in my spare time. It wasn't a school project. I was just always passionate about it. I've
had multiple cycles, I should say, also of disillusionment about AI. So, you know, it's possible to love something and be disappointed in it. I think a lot of parents of teenagers probably feel that way. I kind of feel like AI is like a recalcitrant teenager right now. Like, you want the best for it, but you can see right now it's struggling with its decision making and its reasoning. And, you know, I really do want AI to succeed. And I'm glad that you picked up on that because a lot of people kind of misrepresent me as, you know, just
as you say, an AI naysayer. I am a generative AI naysayer. I just don't think it's going to work. I think there are a lot of risks associated with it, and I don't like the business decisions that people are making. And for that matter, I don't like the decisions the government is making by not making decisions. So there's a lot of things I don't like right now. But I am...
at some level, almost an accelerationist. There's so many problems that I don't think we can solve on our own or even with calculators, even with supercomputers. And so like, I would love for AI to come to the rescue in a lot of things. And so I believe in the vision. And when I hear Sam Altman say like, AI is going to do all this stuff,
I'm with him. It's only when he says, well, we're on a trajectory right now to AGI that I kind of roll my eyes and I'm like, no, are you kidding me? There's so many problems we need to solve before we have an AI that is sophisticated enough to behave as an actual scientist.
I mean, so you introduced a couple of interesting terms, obviously AGI, which is a hotly debated term, but I think you had a very pithy definition, at least a working definition. What the heck is AGI? This thing that seemingly every major AI lab is moving towards. How do you define it? AGI is supposed to be artificial general intelligence.
And what they were thinking about is basically intelligence with the flexibility and generality of human intelligence. And part of the contrast there is AI has had a long history of building solutions that are very narrow and tightly fit to particular problems. So like a chess computer that can play chess and do nothing else is a classic example of narrow AI. And I think a lot of us in the field for a long time have been like, that's not really what we mean by intelligence. It's useful. It's more like a calculator that's built for that purpose.
And then ChatGPT came along and it kind of disrupted the usual ways that we talk about this. So in some ways,
it's not a narrow AI. You can talk to it about anything, but it's not really what we meant by artificial general intelligence. It's not actually that smart. You know, whether or not it gets something right depends very heavily on what exactly is in the training set and how close your problem is to it. And you can't really count on it, because you never know when your particular problem is going to be subtly different from one that it was trained on. That's not really intelligence. So you have the generality, but...
it's almost like an approximation of intelligence, not really intelligence. And I think it's further away than most people say. If you ask me when AGI is gonna come, I'll say very conservatively, it could come in 2030 because like everybody's working on it. There's billions of dollars being spent and what do I know? But if you ask me like from my perspective as a cognitive scientist,
What do we have solved and what don't we have solved? Where have we made progress and not? We've made tremendous progress on mimicry and very little progress on planning, on reasoning. Those are problems we've been thinking about in the field of AI for 75 years, and progress on those has really been limited.
So, Gary, you say progress has been limited, but what about achievements like AlphaGo? I mean, that program's ability to strategize in games like Go or chess is far superior to humans. Like these systems are making moves that humans would never even think of making.
Chess is exponential, Go is exponential. Reasoning and planning in open-ended real-world problems, where you can't crush it by having infinite simulated data the way you can in Go, we have not made progress on those problems. We have this approximator that's 80% correct. 80% correct is great for some problems, but if you want to do medicine, if you want to guide domestic robots, things like that,
80% isn't really good enough. What happened with driverless cars is we got to 80% correct real fast. And then there have been subtle, tiny improvements every year or whatever, but we're not anywhere close to a general-purpose, level five self-driving car, one you could plop down in a city it's never seen and have it drive as well as an Uber driver dropped in that city. We're just not close to it. So, you know, why has driving been so hard? Even though we have, you know,
millions, or I guess now billions, of training hours. And it might be because there's this immense periphery of outliers, which is really the issue for a lot of these things: cases that were just not in anybody's training set. And my favorite example of this is the Tesla where somebody pressed Summon. You probably saw this video, and it ran straight into a jet. You knew where I was going. Right, the worst Summon example of all time.
No human would have made that mistake. Now, there's a separate question: eventually driverless cars might still make mistakes that humans don't, but be safer. But for now, they're not safer than humans. And they do make these bizarre errors that tell you something about their operation and tell you something about why the problem's been hard to solve. The reason the problem's been hard to solve is there are outlier cases. There's a fake-it-till-you-make-it culture in Silicon Valley, I think. And the reality is, I don't think we're close to AGI. I'm not so sure we're even close to driverless cars.
It is interesting though, you're talking about mimicry, right? These systems are very good at it, whether it's mimicking how we write text or produce images.
That mimicry aspect gives you the illusion of understanding, right? That's where it gets a little insidious and people start trusting these systems more than they should. They make really bizarre errors. I gave the example of Galactica predicting or saying that Elon Musk died in a car accident. And the exact sentence was something like, on March 18th of 2018, Elon Musk was involved in a fatal car collision. Well, in a...
classical AI system, if you assert something, you look it up in a database or you can look it up in a database to see if it's true. And you can't do that in these LLMs. That's just not what they do. They don't even know that they're asserting anything. There's an enormous amount of data that indicates that Elon Musk is in fact still alive, right? So the notion that he died in 2018 is an absolute non-starter. The evidence is
heavily against that. And any
credible artificial general intelligence should be able to, among other things, accumulate available evidence. Like if you can't do that, what are you doing? So what happened in that particular example is on March 18th, you know, some people died in car accidents and some of them own Tesla, but it doesn't, you know, distinguish between Elon Musk owning Tesla, the company versus an individual owning a Tesla car and so forth. So you get the approximate feel of language, but without the ability to do any fact checking whatsoever.
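To make that contrast concrete, here is a minimal sketch in Python. The fact store and the claim format are invented for illustration; the point is only that a classical, symbolic pipeline has a step where an asserted claim gets checked against known facts, and a pure language model has no such step.

```python
# A toy "classical" fact check: assertions are (subject, predicate, value)
# triples checked against a small knowledge base before being accepted.
# Everything here is a hypothetical illustration, not any real system's API.

KNOWLEDGE_BASE = {
    ("Elon Musk", "status", "alive"),
    ("Elon Musk", "role", "CEO of Tesla"),
}

def check_assertion(subject: str, predicate: str, value: str) -> str:
    """Return whether a claimed triple is supported, contradicted, or unknown."""
    if (subject, predicate, value) in KNOWLEDGE_BASE:
        return "supported"
    # Contradicted if the knowledge base holds a different value for the same predicate.
    if any(s == subject and p == predicate for (s, p, _) in KNOWLEDGE_BASE):
        return "contradicted"
    return "unknown"

# The Galactica-style claim fails the lookup; an LLM never performs this step.
print(check_assertion("Elon Musk", "status", "died in a 2018 car collision"))  # -> contradicted
```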
I want to just double-click on this language part, right? When we put together language as humans, it's common for us to perhaps misspeak. We say the wrong word. If we don't have all the information about something, we'll probably say inaccurate stuff. Certainly, social media is littered with inaccuracies, either intentional or unintentional. And so in your understanding, how is this different from what happens when these generative AI models hallucinate?
And I know that's a term that you have perhaps popularized as well. Popularized with regret.
The problem with it, before I get to the main part of your question, is that it implies a kind of intentionality or implies a humanity. It's a little bit too anthropomorphic. You have to think about these things like a cognitive psychologist does. And the first rule of cognitive psychology is any given behavior can be caused in different ways by different internal machinery, different internal mechanisms. So, you know, humans make mistakes all the time and say things that are untrue all the time. One of the reasons they say things that are untrue is that they're lying. Yeah.
humans lie all the time. They lie to fool other people. They lie to fool themselves and so forth. That's never the explanation for an LLM hallucination. They have no intent. So, you know, right from the jump, you know that the explanation is different. I mean, in fact, whether it says something that's true or false, it's using the same mechanism. It's
hopping through the clusters. And sometimes it hops and lands in a good place because it has pretty good data about the world, indirectly through language and maybe through vision. So sometimes it lands on the right place, sometimes the wrong place. It's the same mechanism. If you lie, you're actually doing some second-order processing, thinking, will anybody catch me? What's my body language? Or whatever. So those are very different mechanisms. There are other reasons people make mistakes too. Like their memories are bad.
And you could argue there's some of that going on in an LLM with the lossiness and so forth. So there might be a little bit of overlap. Sometimes we try to reconstruct something we know we don't really have right, and we make our best guess. Making your best guess is also a little bit different. So they're just not the same underlying mechanisms for hallucination. You have to look at the underlying causal mechanism. And it just happens that the reason that LLMs are so prone to this is they cannot do fact-checking. That's just not what they do.
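A deliberately tiny sketch of the "same mechanism" point, with made-up counts: a next-word sampler only sees how often continuations followed a prefix in its training data, so nothing in the computation encodes whether the sampled continuation is true.

```python
import random

# Invented continuation counts after a prefix like "Elon Musk was involved in a fatal car ..."
# The numbers are placeholders; note there is no field for whether a continuation is true.
next_word_counts = {"collision": 40, "crash": 30, "chase": 7}

def sample_continuation(counts: dict[str, int]) -> str:
    """Sample a continuation in proportion to training-data frequency only."""
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Whether the resulting sentence is true or false, the generation step is identical:
# frequency-weighted sampling, with no fact-checking anywhere in the loop.
print(sample_continuation(next_word_counts))
```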
So, yeah, like strolling through latent space may not be the equivalent of like AGI. Certainly it's convenient at times, but like you said, it's 80% as good. It might even be useful. Like, I mean, it might be part of AGI. I don't want to say that like we should just throw all this stuff away altogether, but it is not AGI. Like it could be a subset of AGI.
I always like to think of these as sets of tools. So if somebody came to me and said, I have a power screwdriver, I can build houses now, I'd say, it's great that you have a power screwdriver, this is a fabulous invention, but you still need a hammer and a plane and a level and, you know, blueprints and so forth. You need a federation of approaches that in concert will solve this grander problem. Exactly. I like the word, in fact, funny you said that, I like the word orchestration. If you look at neuroscience, the
brain imaging literature, the one thing I think it really tells us, like, I think it was very expensive and didn't tell us that much, but the one thing it really told us is you can take somebody, you teach them a new task, you put them in a brain scanner,
and they will, online, on the spot, figure out a new way of orchestrating the pieces of the brain that they already have in order to solve that new task. And then, in fact, what's even more amazing, you take another 20 people and they'll probably mostly do the same thing. So there's some systematic stuff that the human brain does to orchestrate its capacities. It's amazing. And we don't know how the brain does that. I don't know if AI has to do the same thing, but for humans, that orchestration is
very fundamental to what artificial general, or sorry, natural general intelligence is about: putting together the right tools for the right job at the right moment. To bring things back to a TED Talk from this year, Mustafa Suleyman of Microsoft has said that AI hallucinations can and will be cured by as early as next year. And many like him in the industry, you know, who are sort of the leaders of the current AI wave, have been saying this too. What are they getting wrong?
So I don't know if they actually believe it or not. I think if they believe it, they're naive. What they're missing is these systems inherently don't do fact-checking. They don't do validation. I like your phrase, they stroll through latent space. And that's just not a solution to the hallucination problem. I mean, imagine you had a newspaper and you had a bunch of journalists, some of whom were actually on acid, and they filed their stories online.
what are you going to do? Like have them write more stories? No, you need a fact checker. You need someone whose skill is to like trace things down. That's just not what these systems do. Like if you actually understand the technical side of it, it just seems like an absurd claim.
So, yeah, clearly you don't think more data and more compute is the solution. I don't think it's the answer. There's always going to be outliers, and it's just not the right technology for it. So I think my most important work, which was wildly underappreciated at the time, was in 1998. I talked about what I called a training space problem. I looked at earlier neural networks.
and showed that they didn't generalize, essentially, beyond the distribution. So now it's become a generally known truth that there's a problem with distribution shifts. And that's why the driverless car thing, like end-to-end deep learning for driverless cars, doesn't work: because the distribution shifts. You have a distribution of data
that you're trained on, and now some of the test is outside that distribution, like when someone hits Summon and there's an airplane there. So you wind up going out of the distribution. That is a problem for this class of models that people have been focusing on for a quarter century. It was a problem in 1998. It has not been solved. There's no conceptual idea out there where somebody can say, well, I can explain to you why this has been going on and here's how I'm going to fix it.
Someday someone will do that, in fact, I think. But right now it's just like mysticism. I'll add more data and it will go away. And then there's one other way to look at the question, which is to look at the actual data.
So everybody's like, it's an exponential. I'm like, why don't you plot the data? We don't have all the data we need to do this perfectly, but what you should do is plot GPT-1 to GPT-2 to GPT-3 to GPT-4 (we don't really have GPT-5), fit the curve, and tell me what the exponent is, okay? And the problem is, whatever curve fitting you do for that, you should have seen progress from GPT-4, which is now 13, 14 months ago, to now.
And we don't, right? Everything has topped out at GPT-4. If you do the statistical procedures you should do to plot curves, you have to say that it has slowed down. We should be further ahead right now if you believe the earlier trend. That earlier trend is just not holding anymore.
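Gary's "plot the data" suggestion is a procedure you can actually run, and here is a hedged sketch of it. The release dates are approximate and the capability scores below are placeholders, not real benchmark numbers; the conclusion depends entirely on what metric you plug in.

```python
import numpy as np

# Approximate release years for GPT-1 through GPT-4, with placeholder
# capability scores standing in for whatever benchmark you trust.
years  = np.array([2018.5, 2019.1, 2020.4, 2023.2])
scores = np.array([10.0, 25.0, 55.0, 85.0])  # invented numbers for illustration

# Fit log(score) = a * year + b, i.e. an exponential trend in the score.
a, b = np.polyfit(years, np.log(scores), 1)

def trend_prediction(year: float) -> float:
    """Score the fitted exponential trend would predict for a given year."""
    return float(np.exp(a * year + b))

# If the exponential were still holding, a model released in mid-2024 should
# land well above the GPT-4 point; compare that prediction to what actually shipped.
print(f"fitted growth rate (log-score per year): {a:.2f}")
print(f"trend prediction for mid-2024: {trend_prediction(2024.5):.1f}")
```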
And I think to your point, a bunch of leaders like Demis Hassabis have also come out on the record saying, yeah, maybe just scaling data and compute may not be the answer; we may need some fundamental breakthroughs. Which was the point, by the way, of my most maligned paper, Deep Learning Is Hitting a Wall. We see these empirical generalizations, the so-called scaling laws, but they're not laws. They're not laws like physics, right? You cannot say, optionally, I won't follow gravity today, right? But the scaling laws
were just generalizations for a period of time, which would be like saying, you know, the stock market rose for the last six years. That doesn't mean it's going to rise again next year. But still, there's a lot of conversation about AI risk, right? AI has certainly entered the public consciousness. And when we start thinking about AI risk, we hear all about the risks from AI working too well, right? This is the "AGI has been achieved internally" scenario, right?
But from what I've heard you say, it sounds like you believe the bigger risk actually comes from the gap between AI appearing to have human cognition when it actually doesn't have it. So in your view, what are those concrete harms that we need to be most concerned about? A lot of the discussion is about risks of super smart AGI that may or may not materialize and certainly I think needs attention. Right.
But there are a lot of risks now. I think the single biggest risk right now of AI, of generative AI, is that it can easily be misused. And it is being misused. So people are using it and also deepfake technologies and things like that to make propaganda. There's also, you know, people are using it for phishing expeditions. Other deepfake tools are being used for non-consensual deepfake porn and so forth.
So there's that whole class of things. And then there's a whole class, I would say, that are more about reliability, which is like some people have been sold a bill of goods about how strong this technology is, and they want to put it everywhere, like in weapons systems and electrical grids and cars. So there are a bunch of problems that come from overtrusting systems that are not very sophisticated. So there are different sets of problems right now. But I think that they are serious, and I think –
it's worth asking oneself every day, because this is a very much evolving dynamic thing, are the net advantages of LLMs
greater than the net costs? And I honestly don't know the answer. Even if it is true, as I have argued, that we're kind of reaching a plateau with what we can do with LLMs, we're not reaching a plateau with our understanding of what we can do with them. So there are still going to be both positive and negative use cases that nobody's discovered, even if there is no capability improvement, let's say, for another five years or whatever.
I think what you're saying is, despite the gap between expectations and reality, humans wielding these models, with all their imperfections, can still do a lot of good and bad, right? That's right. And there might be an advantage to the
malicious actors because the good actors mostly want their systems to be reliable and the bad actors care less. So think about like spam. Spammers send out a million pieces of spam a day and they only need one of them to hit. So there's some asymmetry there that may actually shift things, but that's speculative. I'm not certain of that.
Another one of the ethical debates that has been dominant in this time of large language models is the copyright issue. And the fact that these models are mimicking data that was created by real people, whether it be writers, artists or just people on the Internet. Do you think there's any plausible world in which, you know, like creators are actually remunerated for their data?
It has to be like that. I mean, look at what happened with Napster. For a minute, everybody was like, information wants to be free. I love getting all this free music. And the courts said, no, it doesn't work that way. People have copyrights here. And so we moved to streaming. And streaming is not perfect, but we moved to licensed streaming. That's what we did. We went to licensing and that's what we're going to do here. Like either the courts will force it or Congress will force it or both. There was an amazing letter from the House of Lords I just saw in the UK saying basically that...
If the governments don't act here, we're going to normalize this thing where people are stealing left and right from artists. And that's not good. I have this book coming out, Taming Silicon Valley. One of the things I say at the end is we should actually boycott stuff that isn't properly licensed. Because if we say, oh, it's fine that our friends who are artists get screwed, we're next.
Support for the show comes from LinkedIn. LinkedIn ads allow you to build the right relationships, drive results, and reach your customers in a respectful environment. They are not getting barraged. This is very targeted. You will have direct access to and build relationships with
a billion members, 180 million senior-level executives (everyone's on LinkedIn, it seems), and 10 million C-level executives. You'll be able to drive results with targeting and measurement tools built specifically for B2B. In technology, LinkedIn generated two to five times higher return on ad spend than other social media platforms.
You'll work with a partner who respects the B2B world you operate in. 79% of B2B content marketers said LinkedIn produces the best results for paid media. Start converting your B2B audience into high-quality leads today. We'll even give you a $100 credit on your next campaign. Go to linkedin.com slash TED audio to claim your credit. That's linkedin.com slash TED audio. Terms and conditions apply. LinkedIn, the place to be, to be.
For the people listening to this, right, that are either overly optimistic or perhaps overly skeptical about what's happening in AI, what is the right, measured way to look at the innovations taking place? What advice do you have for people that want to combat this AI hype cycle? How should media be approaching coverage of AI? And how should consumers and decision makers who are making purchase decisions be skeptical about the AI news they consume?
I think the consumers have to realize that it's in the interest of the companies to make AI sound more imminent than it is. And you wind up with things like the Humane pin that are really exciting because they sound like they're going to, you know, do everything but your laundry for you. And then they don't. I think journalists need to get in the habit of asking skeptics, and that includes me, but there are many others out there, you know, does this thing actually work? If Sam Altman says we're on a trajectory to AGI, ask
10 professors and see what they think of it. And, you know, probably nine of them are going to say, that's ridiculous. This is not AGI. We're actually very far away. And that needs to be factored in. Just reporting the press releases from the companies is usually wrong, because the companies, you know, have an interest in stock price or valuation or whatever. They're obviously going to have an optimistic bias. You want to counter that.
I see journalists do that all the time in politics, but I don't see it as much in AI coverage. Skeptics, you know, once in a while they get a line. And I mean, I shouldn't complain. I've had a lot of media lately. But I still think on the whole that there is a very strong bias to report essentially press releases from the companies saying, you know, we're close to solving this thing. And like,
We've been getting that for driverless cars for 10 years, and the cars still aren't here. How many times do we have to see this movie before we realize that promises are cheap? And until something has gone from a demo to actual production that everybody can use, that you've tried out, you should be a little bit skeptical. So it does seem generative AI is here, right? We cannot put the genie back in the bottle, as is perhaps stated in every single panel that I've seen about AI. Yeah.
What kind of restrictions and regulations should we be introducing around deployment? You have a book coming out in September called Taming Silicon Valley. Give us the TLDR. It is a big challenge. I can't do it all in one sentence. One of the arguments of the book is that...
AI is kind of like a hydra right now, especially generative AI; there are new things popping up every day. We need a multi-pronged approach. We need to have pre-deployment testing for something that's going to be for 100 million people. So I don't think we need to restrict research at the moment. But as things get rolled out to large numbers of people, same as you do with medicine or cars or food or anything else,
if you don't have prior practice to say that this is safe, then people should be required to show that it is. And what's crazy about GPT-4 is they wrote this long paper explaining many risks, giving essentially no mitigation strategies for any of them. And they just said, good luck, world.
And they didn't reveal what data they were trained on, which would really help us in trying to mitigate the risk. So it's like, good luck world, we're not going to help you. There's this quote Maureen Dowd got from Greg Brockman some years ago, saying, we're not just going to throw things over the fence and leave them to the world to figure out. But that's exactly what they're doing right now. And we cannot trust Silicon Valley to self-regulate. Look what's happened with social media.
They've just basically fallen down on that job and it's been really bad for teenage girls. It's really been bad for the polarization of society and so forth. So having some pre-deployment testing is important. Having auditing is important. The government has to back up independent researchers and say, yes, we're going to make it easy for you to do the analysis to see if this stuff is safe.
You gave the example of a well-regulated industry, you know, such as aerospace, and obviously airline and air travel is largely safe. But we still have an incident like the Boeing one, right? So is regulation the answer?
Well, I mean, we would have a lot more incidents if we didn't have regulation. Imagine if we didn't have regulation: commercial airline travel would be like it was in the 1940s, when lots of people died, routinely. I don't think regulation is easy. And I think we have to take an iterative approach to regulation, just like we take an iterative approach to everything else. And there are lots of problems with government and, you know,
Part of the reason that I wrote this book is because I don't think government by itself is getting to the right place. And I think we need to have more of the public involved in making the right choices about these things.
You know, it's interesting you mentioned that government by itself won't be the counterbalance to the private sector. And that's interesting because, you know, I think a lot of politicians, if you talk to them, they're like, oh, yeah, you know, I'm going to do my stint here, and then eventually I'd like to go do a stint at an AI company. And you hear about, you know, kind of the revolving door. And so how do you see us combating that? Do you think consumers in aggregate are a strong enough voice to be that counterbalance?
I think consumers need to wake up and be that counterbalance. I think you're exactly right about the revolving door. You know, when I testified in the Senate a year ago, every senator in the room seemed to appreciate the value of having strong AI regulation. But here we are 12 months later, and the EU has done something, but the U.S. Congress has not. So the president actually introduced this executive order that was as strong as he could make it.
But the nature of the Constitution of the United States is that the president can't make law that way. So the things that Biden put forward are basically reporting requirements and things like that, and getting chief AI officers into different parts of the government. All of that is to the good, but we ultimately do need some constraints. I also talked about the need for international collaboration on AI governance. People were warm to that in the Senate, bipartisan, Republicans and Democrats, but nothing has actually happened. And
the public needs to realize that if it goes down that way, we're going to see the kinds of ways in which social media has disrupted society, but they're going to be worse. AI could easily do much more damage to privacy, really escalate cybercrime, possibly disrupt democracy itself. The only way anything is going to happen is if the people stand up, organize, put their money where their mouth is, and say, this is important to us.
A common criticism of regulation also seems to be pitting the U.S. against China, right? Like, hey, if we prematurely constrain innovation in the U.S., well, our adversaries are going to get ahead. What do you have to say about that?
I mean, in many ways, China's actually ahead of us in regulating AI. So for example, they had a copyright decision recently where a trademark character was ripped off by an AI system and the Chinese court said, you can't do that. So artists are actually probably better protected in China than they are in the US right now.
There's something that China does that I think is indefensible, that I don't think we should do, which is they demand that their LLMs toe the party line. I think that's absurd. We should never do that. But in other ways, China actually has some of the same concerns as we do. They don't want to deploy systems that are going to completely destroy the information ecosphere, cause massive amounts of cybercrime, and so forth. What would really change the world is getting to AGI first. And that's a question about research priorities.
I think the United States has historically led innovation. If we stopped pouring all of our money into LLMs just because they're the shiny thing and placed our bets on research more broadly, we could easily win that race. And I hope that we will. So building off what you just said about research priorities, let's bring it full circle. This is pretty technical, but I think it's important for us to get into. Right now, we're on this path of generative AI models that are based on these massive neural networks, which means they're kind of a black
box. Like, we really don't know why they do what they do. This is a huge departure from more rules-based symbolic AI systems, which would allow us to see exactly how a system makes a decision. So from a technical perspective, what is the direction you would like to see the AI industry go in, if not this current paradigm of generative AI? I mean, in my personal opinion, I think the most profitable thing to look at,
and it would take a while, is neuro-symbolic AI that tries to combine the best of the neural networks that are good at learning with the symbolic systems that are good at reasoning and fact-checking. Like, I think this has to happen.
And I think it will happen. And whoever gets there first, I think is going to be at an enormous advantage. I don't think though that it's like a two-day project. Like I think it's really, really hard in part because the neural network side is not interpretable. So if it was interpretable, you could just dump it off to the symbolic system, but it isn't. We don't really know what's inside the black box. And so building the bridge between the black box world and symbolic AI is really hard. And we probably need a whole bunch of innovations there.
But I think in the long run, that's what you want. You don't want just AI that can write first drafts. You want AI that you can trust, that you can count on. And I don't see how to get there unless you can check facts against known databases and things like that. You need an AI that can be consistent with the world, or at least give an explanation for why, rather than just one that dumps something out.
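One common shape for the neuro-symbolic idea Gary describes is generate-then-verify: a learned model proposes candidate answers and a symbolic layer accepts only what it can support from known facts. The sketch below is a hypothetical illustration; the "neural" proposer is mocked, and the fact store and function names are invented.

```python
# Hypothetical generate-then-verify loop: a stand-in "neural" proposer plus a
# symbolic verifier that checks candidates against a known-fact store.

KNOWN_FACTS = {
    "capital_of": {"France": "Paris", "Japan": "Tokyo"},
}

def neural_proposer(question: str) -> list[str]:
    """Mock stand-in for a learned model: returns ranked candidate answers."""
    return ["Lyon", "Paris"]  # imagine these came from an LLM, best guess first

def symbolic_verifier(relation: str, entity: str, candidate: str) -> bool:
    """Accept a candidate only if the fact store explicitly supports it."""
    return KNOWN_FACTS.get(relation, {}).get(entity) == candidate

def answer(relation: str, entity: str) -> str:
    for candidate in neural_proposer(f"{relation}({entity})"):
        if symbolic_verifier(relation, entity, candidate):
            return candidate       # verified answer
    return "I don't know"          # refuse rather than hallucinate

print(answer("capital_of", "France"))  # -> "Paris"
```

The hard part Gary flags, building the bridge when the neural side is not interpretable, is exactly what a toy like this glosses over.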
It's also interesting because, in a sense, the battle lines are drawn, right? People use terms like, oh, you're either an accelerationist or a decelerationist, right? You know, if you're actually an accelerationist, what you should do is divorce yourself from LLMs and say, I want AI to work, but this isn't it. If you want AI to save the world, you should look at
every advance and say, is this actually getting me there or is it wasting my time? In terms of people wanting to pause AI, I don't think we should pause research, but gen AI is causing problems and we at least need to think about that. I think it's good that we have a public conversation comparing the risks and benefits. And if the costs were high enough, like if there's enough cybercrime and disruption of democracy, maybe, you know, we should
reconsider whether this is really worth it. And yeah, it helps people with brainstorming, but helping people with brainstorming, and maybe coders writing a little faster and so forth, might not actually offset a major disruption to democracy. And we should actually think about that. I agree. I mean, there is good that you can do with these models. You gave the Copilot example of sort of assisted coding, where it can do it reasonably well, and getting it to 80% certainly saves
humans a lot of drudgery. And that's exciting. And so overall, I have a techno-optimist bent. But where I struggle with this polarity of, you know, accelerationists versus the decel folks, or just that framing, is that
even if you're a techno-optimist, that doesn't mean you can't be open about and acknowledge the limitations of these models. Yeah. I mean, it's weird for me because I actually think I am a techno-optimist. You heard it here first. But not too many people know about it. Yeah.
It's like we all agree on the North Star, and we should be solution-agnostic about how we get there, and not turn this into a dogmatic holy war about who's pro- or anti-tech. And I think just everything is so divisive today, and the polarity prevents actual nuanced discourse from happening. That's absolutely right. And it goes back to social media, right? It's very hard to even put out a nuanced view on social media.
It actually goes back to the tech industry itself, kind of accidentally creating the conditions that make it hard to have a good conversation. Well, hey, look, I think there is no shortage of hype in the AI space. And occasionally we do need, you know, some cold water thrown on our faces. Gary, I'm so glad you're out there talking about the other side of all of this and just being cautiously optimistic about how we proceed. That's what I'm taking away: that you and me are both cautiously optimistic. I think you're painted as being far too negative.
But honestly, if you like delve into your arguments, not to use the delve word, I totally used that before ChatGPT, I swear. You know, I think you'll find your position is a lot more nuanced. Well, thanks for giving me a chance to explicate it. So is AI going to save us or make us go extinct? Well, according to Gary Marcus, probably neither.
And I have to say, I agree with him. In the AI industry, Gary is a pretty polarizing figure. As for where I come down on all of this, I consider myself cautiously optimistic about artificial intelligence. I don't think AI can solve every single problem facing humanity, but I do think it's one of the most powerful tools at our disposal to impact real-world change. And while I do recognize the flaws and risks of generative AI,
I also tend to believe it opens a lot of exciting and meaningful doors, especially for creatives. I mean, it's a really cool tool that can bolster human talent and honestly automate a lot of tedium we consider to be modern day knowledge work.
Where we run into trouble, as Gary says, is when we start confusing the appearance of thinking with actual thinking. The appearance of reasoning with actual reasoning. It's just not the same process when we open up the hood, and so we're not going to get the same result. These large language models are amazing mimics, but they aren't yet able to reason from first principles, much less explain their reasoning to us. And the risks that emerge in the gap between the appearance of thinking and actual thinking are real.
So, the decades-long pursuit for artificial general intelligence will continue. But it's okay to admit that the current path may not be the only way to get there. And instead, it's important to remember that there are a range of views on which direction AI will go and how we'll get there. Many of them far more nuanced than unbridled accelerationism or hyperbolic doomerism.
We can be excited about the possible futures for AI, but also practice healthy skepticism. And remember that we as the users, the customers of these companies, help decide the AI future we want to see. The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard.
Our producers are Ella Fetter and Sarah McRae. Our editors are Ben Ben-Shang and Alejandra Salazar. Our showrunner is Ivana Tucker, and our associate producer is Ben Montoya. Our engineer is Asia Pilar Simpson. Our technical director is Jacob Winnink, and our executive producer is Eliza Smith. Our fact checker is Dan Kalachi. And I'm your host, Bilawal Sidhu. See y'all in the next one.