From KQED in San Francisco, I'm Scott Shafer in for Mina Kim. Coming up on Forum, Pulitzer Prize-winning author Gary Rivlin has been covering Silicon Valley since the mid-90s, when the technology gold rush began with startups like Netscape and PayPal figuring out how to strike it rich off the emerging Internet.
In his new book titled AI Valley, he traces the rise of artificial intelligence, complete with visionaries, some epic mistakes, and now the race to dominate the field by players like OpenAI, Meta, Google, and of course, Microsoft. Rivlin joins us to discuss how the AI revolution is transforming the Valley. That's next, after this news.
This is Forum. I'm Scott Shafer in for Mina Kim. Well, you know, it might seem like the use of artificial intelligence started with the advent of ChatGPT, the user-friendly platform released by San Francisco-based OpenAI in late 2022. But as Pulitzer Prize-winning author Gary Rivlin describes in his new book, the roots of AI go all the way back to the Eisenhower administration, with many twists and turns and dead ends along the way.
Rivlin's new book, titled AI Valley, describes the stakes in the race by tech giants like Microsoft and Google to cash in on the artificial intelligence revolution and how it's changing the focus and culture of Silicon Valley funders and entrepreneurs. Gary Rivlin, welcome to Forum.
Thanks. Great to be here. Well, let's dig into that timeline a little bit because, as I said, AI has been around for longer than a lot of us realize. Things like autocorrect, transcription services, translation apps, that kind of thing. Break down the history for us. We don't have to necessarily go all the way back to Dwight Eisenhower. But how did we get here? Well, I mean, it is interesting. AI dates back to the 1950s or before, but
It's always been tantalizingly just around the next corner. Like for 70 years, it was just around the corner. It's sort of been almost teasing with human beings. And, you know, they were so optimistic in the early days, in the 50s and 60s. They really thought they were just one little breakthrough away from having AI that could do incredible things. And, of course, it took decades and decades of research, wrong turns, and finally coming up with this sort of
consensus forming around this idea of machine learning, neural networks that don't learn by coding line by line to teach a computer, but a computer learns in the fashion of a human. You say, read these books, read these scientific journals, read these articles, and they learn by consuming and finding patterns. And that's how we end up with chatbots and these neural networks that have
you know, really taken off over the last several years. And you talk about, I think, a 50-year freeze, where there was kind of nothing happening, and you write that when you were covering Silicon Valley in the 1990s, if anyone mentioned AI, you don't remember it. So why was it so dormant?
Well, there was that terrible wrong turn. Someone, actually in the late 1950s, came up with this idea of the neural network.
And he was just ridiculed. The consensus had formed around symbolic logic, what's called rules-based computing, this idea that you're going to have to teach a computer line by line, to imagine
every situation. And so in the 60s, the 70s, the 80s, there was very little done around what we call machine learning. Now, there were a few pioneers come the 80s, 90s, early 2000s working on this, but it really wasn't until the mid-2010s that suddenly folks realized, oh,
the path to AI that's useful, the path to AI that could really do the things that people needed to do, was around machine learning, deep learning, neural networks. One of the first things that made big news, and we didn't call it AI, was when that IBM computer, Deep Blue, beat chess champion Garry Kasparov in 1997. Was that, you know, even if it didn't kind of make it into the public consciousness, was that a light bulb moment, in a sense, for people in the industry?
It was, but really it was the pinnacle of the wrong approach. That was rules-based computing, where we're going to teach this computer to anticipate all these different circumstances, and in fact it did beat the reigning chess champion, who felt humbled by that.
That was almost like the last gasp, the last great moment for rules-based. And most of the important breakthroughs were on the other side, around machine learning, deep learning. And what was the pivotal point there?
You know, I think it's just one of those things where you had folks like Geoffrey Hinton, who's kind of the godfather of machine learning, and a few other brave pioneers who, despite the ridicule, despite the consequences to their careers, said, no, no, this is the right path. That machine learning is the way that AI is going to become productive. And, you know, slowly but surely they won converts. There was this one key moment, I think it was 2012,
where there was a contest: can a computer correctly identify a range of animals based on a picture? And the winner that year, by far the winner that year, was a neural network. And that really was kind of an aha moment for a lot of people. Like, oh, wait a second, these things are outperforming rules-based. You know,
that winner was trained by just, like, okay, look at millions of pictures on the internet, learn what it is, and that way it was able to identify a cat versus a horse versus a tiger, as opposed to this rules-based approach. So that, to me, was maybe the pivotal moment. You know, when you think about the evolution of the internet, I mean, the advent of Netscape was a big deal in the mid-90s, I think maybe 1994, 95. Right.
And that was, you know, that's really when the internet began to explode. In terms of, when you look at the timeline for AI, what are the similarities? What are the differences with what you saw in Silicon Valley with all these startups emerging? Yeah, I mean, to me, OpenAI and the release of ChatGPT was the starter's pistol for this race to cash in on AI. That was in November of 2022. It's
analogous to me to Netscape going public, which was in August of 1995. And that was the starter's pistol for the dot-com race, for the internet really taking off. And for the same reason. I mean, you know, the internet had been around for years before the Netscape browser, but their browser made it easy for regular folks to get on the internet, some of that, you know, image-based,
point and click. Before that, it was kind of like computer nerds and you needed to know code and all this complicated stuff. Yeah, you had to go to your DOS prompt or something like that. And, you know, the same thing with OpenAI releasing ChatGPT, you know, there was
years of advances, but ChatGPT let you talk to the computer. You know, you mentioned Google Translate's been around since 2015. That's AI. Autocomplete. In fact, Google was using AI in the 2000s to fix garbled search queries. There were misspellings, you didn't quite put it right, and they were using AI, but it was always behind the glass. The difference is that in November of 2022, you could converse with it. It was an actual product that you could interact with, as opposed to this thing happening, you know, behind the glass
that you couldn't see: your recommendation engines on Netflix, those kinds of things. We're talking with Gary Rivlin, Pulitzer Prize-winning journalist. His new book is called AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence.
Gary, you know, the first two words in your book, in the preface, are Elon Musk. He and Sam Altman, of course, were two of the co-founders of OpenAI. And his vision for AI, as you describe it, was kind of humanitarian, especially when you compare it to his image right now with DOGE. Talk about what he saw in AI and OpenAI generally, you know, and how it differed from what Sam Altman was thinking.
Right. So, well, they were aligned in 2015. I mean, you know, Musk is really interesting on AI because he was both a leading critic and a leading proponent. So early on, the early 2010s, he was among those investing in this company, DeepMind, which to me was the first great AI startup, the first deep learning AI startup. Google would end up buying it in 2014 for $650 million.
And, you know, Musk was an investor in that. But at the same time, he was kind of a leading critic. I mean, he was talking about laser-eyed robots, you know, taking over humanity. And so the book opens with this scene between Musk and Reid Hoffman in 2015. Reid Hoffman, founder of LinkedIn, a main character in the book, where, you know,
Reid Hoffman had gone to Stanford and essentially was studying AI. He did a couple of internships while at school that were AI-oriented. And he just kind of ignored it; it just wasn't ready. The technology was just juvenile at that point. But at this dinner they had in 2015, Musk and Hoffman, Musk convinced Hoffman that, no, no, this is the moment. Yet at the same time, Musk was leading the charge for not just safety, but security,
saying things that would really scare people around AI. So Hoffman's point of view on Musk, which I think is an interesting one, is, you know, everything will be okay as long as I'm in charge. Hoffman called it, you know, Musk's God complex. But, you know, at that point...
Musk, Sam Altman, others were behind OpenAI. It started off as such an idealistic thing. Google had bought DeepMind. That scared folks. Like, oh, wait, is AI going to be in the hands of just a few giants? So OpenAI was started in 2015 to counter that. Let's not have the profit motive
be behind the development of something as powerful as artificial intelligence. You know, let's make it so it's for the rest of the world. Things changed over the years; we could talk about that. But, you know, kind of the initial vision was actually rather beautiful. Yeah. Well, and when Musk left, I don't know, I think there was a disagreement with Altman.
And there was that pivot. And Altman was able to get Microsoft to invest big time in OpenAI. What impact did Elon Musk's departure have on the trajectory of OpenAI? Right. So in 2018, Musk was feeling really frustrated. He had poured tens of millions of dollars into OpenAI, which at that point was a nonprofit.
And, you know, it was like Google was beating them every which way. And he said, I'm going to stop funding you. You should just sell it to me and make it part of, you know, one of the companies he owns, Tesla, or just kind of another one of his holdings. And, you know, Altman and the team did not want that. So they had a breakup.
And that was the point where they realized, like, wait, we could raise millions, tens of millions as a nonprofit, but not the hundreds of millions, billions we need to really develop AI. So that's the point where Altman and company created a for-profit subsidiary that was still
controlled by a nonprofit board, but the subsidiary could charge, make money, make profits. And that's when they brought in... Reid Hoffman was a venture capitalist behind it, other VCs. They brought in Microsoft. Microsoft in 2019 gave...
a billion dollars, and then $10 billion a few years later. All right, much, much more to talk about with Gary Rivlin, author of AI Valley. We'd love to hear from you: what are your concerns or hopes around generative AI? What questions do you have? Give us a call at 866-733-6786. That's 866-733-6786. You can also find us, of course, on all the social media platforms: Bluesky, Facebook, Instagram, Threads.
We're at KQED Forum. Scott Shafer here this hour, in for Mina Kim. Stick around.
Welcome back to Forum. Scott Shafer here this hour for Mina Kim, talking with Gary Rivlin, journalist and author; his latest book is titled AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence. We'd love to get you involved in the conversation. Give us a call at 866-733-6786, or reach out with your comments and questions at forum@kqed.org, or on Bluesky, Facebook, Instagram, Threads. You know the drill.
What concerns do you have? Do you use AI in some form now? Is it affecting your life in any way? And, you know, if you work in the field, we'd love to get your thoughts, too, on what you're seeing on the inside. Gary, one of the things you describe AI as being is sort of like a parrot that seems to know everything but understands nothing. Is that still true or is that more of a characterization of it, you know, a few years ago?
Yeah, it's not my characterization, the stochastic parrot. There's a linguist from the University of Washington, Emily Bender, who, if she didn't coin it, popularized it. Yeah, it is. I mean, I think it's important that people understand that there's so much potential in AI around education and medicine and science, but we need to understand its limits. I had one computer scientist say, like, AI can understand everything that someone over 20 years old understands, but very little of what someone under five years old understands. It doesn't have common sense. It doesn't really understand everything. It is a parrot. It just finds patterns, mathematical patterns, and generates output that way; it doesn't quite understand. So things like autonomous AI that people talk about, that really scares me. There really has to be a human in the loop, because it doesn't quite understand anything. People talk about reasoning models; it's emulating reasoning. That's the new thing in AI in the last year or so, reasoning models. But it doesn't really have human reasoning.
And so, yes, I think that's one of the great limits of AI right now. But you do write about other personal AI like Pi, which has a form of empathy. Of course, it isn't really empathy, but it sounds like empathy; it seems like empathy to the person who's using it. What do you make of that? Yeah, so...
Pi was created by a company called Inflection, which has since moved away from Pi as a product. But their whole thing was it wouldn't just have IQ, but would have EQ, emotional intelligence. And, you know, I used it. We had something of a family challenge in our life just when this thing was coming out. So instead of doing the journalist thing of trying to get it to jump the fence and misbehave kind of thing, I tried to use it in a genuine way. Like, we were concerned. My son was having surgery, serious surgery. And I talked to it. And, like, it was
sort of amazing. It was a question-and-answer thing. It had a voice, so it could speak to you. And just like a friend would, Pi asked the right questions, perfectly articulated: How's your son taking this? How's school? How are you taking care of yourself? All of that. And so I was impressed. It got nuance. It got humor. I got glimmers of what people talk about when they say AI is a therapist, AI is a confidant.
But in the end, it really didn't mean anything to me. You know, there's a famous quote from Sherry Turkle at MIT, the sociologist: the performance of empathy is not empathy. So it gave me a few things to think about; I'll put that in the plus column. But,
me personally, it didn't really mean anything other than, like, wow, that's impressive. Yeah, but you could see, for somebody who's maybe more isolated, it being very meaningful. Like in that movie Her. Right. So, I mean, we have a loneliness epidemic. It's not just older people, it's a lot of younger people. And so, you know, I think we can't really be snobby, like, oh, why would we ever go to this thing? Well, a lot of people don't have
a network. A lot of people, you know, they come home and they're alone, and they can talk to this thing. I think we're not very far away from a lot of people having serious romantic relationships with these things. Do I understand it? No. I mean, I can't relate to it. But I do think we have to get used to this, that these artificial intelligence products are going to be more and more central to our lives, whether our
personal lives or work lives. All right, let's bring our listeners into the conversation. Again, the number is 866-733-6786. And let's start in San Jose and George, you're first. Welcome to Forum.
Good morning. I'm wondering what the thinking is about the conflict of interest between what the user would like, which is unbiased research, the AI algorithm doing unbiased research, and the commercial interests that could be shading the algorithm. So I'm wondering what the thinking is on how to preserve the unbiased research aspect from the commercial interests. Yeah, Gary?
Yeah, I mean, in a way, that's my greatest fear about AI, that it's in the hands of just a few tech titans, and they're going to put profits ahead of everything else. You know, the bias of these models, another thing that scares me, you know, if it's ever making decisions around, you know, jobs or sentencing of someone who's convicted of a crime, it's like...
It's just a mirror of us. Its biases are our biases. It's being trained on our words, on our books, on our articles. So the same biases that are baked into the training material are going to be reflected by the model. Now, through fine-tuning, the next step in the process, you can minimize that. But I think it's going to be really, really hard to
take out the bias that's in these models because, again, it's part of the training material. So any biases towards men, towards Caucasians, whatever, that's in the training. And so that's going to be a huge, huge challenge.
What about the profit aspect of that? I mean, you know, we know now that because of these algorithms, you and I could search for the same thing and get very different results based on our search history. And some of that is, of course, to sell us stuff. But how does that play out with AI?
Yeah, I'm not quite following what you mean. Well, the profit motive. I mean, obviously, the bigger companies are in this now to make money. So how do they make money? And is it different from the model that, say, Google or Facebook has now?
What we're seeing is, you know, trust and safety a few years ago was front and center at all of these companies. And it's really fallen by the wayside because there's now this race to cash in on AI. And so we're seeing, you know, company after company, whether it's OpenAI, where
you know, a number of employees have left, saying they left because they really feel that safety is taking a backseat to the next product, the next iteration of a product. And so that is a huge, huge concern. Another concern, too, kind of related to this is, you know, there's that
idea in Silicon Valley, oh, let's just get five guys, and it's invariably five guys, in a room to decide this. And, like, you know, that might work for this product or that. But AI is so powerful. It's going to be so central. So
I really think we need a much broader group of people than Silicon Valley, historians, sociologists, people from different parts of the world, people of different ages and races and creeds and all. That's another big worry of mine. But in the race to cash in...
to put out the next model, there's this leapfrogging. Like right now, I think OpenAI is slightly ahead. It had been Google last week, and the month before it was Anthropic with Claude. So with this great race, and of course the Chinese nipping at the heels of the U.S.-based companies, I really do fear that it is all about profit. It is all about winning. Yeah. George, thanks for the call. Let's go now to Walnut Creek. And Katrine, you're next. Welcome.
Hi there. I wanted to ask, where would I go as an average person to invest in AI, if the vision is that this is, you know, the next big thing, sort of like how the Internet was in the 80s? Is there a mutual fund that combines Google, Microsoft, and AI together? And how much would you recommend in terms of investment? Gary? Not my expertise. I'm really nervous right now. But actually, I mean, keep in mind that, you know,
most of AI development is in the hands of startups, private companies. I suppose if you wanted to, you could invest in Google and Microsoft, or Nvidia, the chipmaker; a big portion of those businesses is built on the success of AI. However, in the last few weeks,
none of those would have been very good investments. Yeah, it does change really quickly. And, you know, Google just today was found guilty of violating antitrust law. Meta, there's a trial underway in D.C., I think it is. Yes. So, you know...
And yet, at the same time, you know, sort of the focus of the book is how these giant companies like Microsoft are really running the table. But even those companies are investing in dozens of other startups. What are they looking for other than to gobble them up once they realize their value?
Yeah. So actually, I started this book right at the beginning of 2023, right after ChatGPT started to become a big deal. And I wanted to focus only on the startups. When I covered the Valley in the second half of the 1990s, it was the startups that were most interesting. Who is going to be the next Google? Who's going to be the next Meta? And it turns out the next Google is probably Google, the next Meta is probably Meta. Big tech has this huge advantage. It's very expensive because
nowadays it costs billions of dollars to train and operate these models. Very few startups have access to that kind of money. To me, Microsoft, which has put $15 billion or so into OpenAI, is hedging its bets: maybe our model will be the one, but if not, we own a big piece of OpenAI. And by being an early investor, they had access to its models, whereas competitors didn't. Google has put billions of dollars, Amazon too, into Anthropic.
That happens to be my favorite chatbot these days, Claude. And again, hedging the bet: Google, a company that's been way ahead of everyone around machine learning, they've been investing in machine learning since at least the early 2010s. Okay, we have our models, but
if there's another company, you know, maybe we'll own a big piece of them, or, as you suggest, which I do think is a likely scenario, they'll just be gobbled up by a giant. What do you like about Claude? You know, it has some of that same feel that we were talking about with Pi, that
you could talk to it. I like its personality, if that's the right word to use for a bot. But for writing, for editing, it's just very good at that. There's Perplexity, You.com, a couple other popular ones.
They're not creating the model themselves. They're giving you access to various models, whether it's ChatGPT, whatever. I like that they footnote. As a journalist, when I'm doing research, it's really important with these things that you look at the original source because they have the hallucination problem. They make a lot of mistakes.
But, you know, with Claude, I'll just say, okay, here's a chapter, give me feedback. Is there any repetition? Oh, I'm having trouble with this sentence, this paragraph, this transition; give me five different alternatives. It's just very useful. Over time, I've just come to think that Claude is the best for writing, for editing. Do you ever, when you're doing that, think, like, ooh, am I going to cross the line here, based on the advice? This is really good editing advice. Or is it the same as a human editor? Right. So, you know, I mean, I treat it like a research assistant. I've hired research assistants in the past. And, like, they sometimes make mistakes, and you have to be careful about that. You know, I don't say, write this chapter based on those notes. I could be plagiarizing people. I don't know where they're getting this stuff from. I use it more for feedback and editing. I never quite cut and paste and put it into something I write. But...
There's been this notion that AI is going to replace humans. And ultimately, AI is going to replace a lot of human jobs. But I think in the short and medium term, it's largely going to be humans using AI are going to be much more efficient, much more effective than humans who don't use AI. Yeah.
So, Katrine, thanks for that call. And I just want to emphasize, Gary did not give any investment advice, so don't do anything based on what he said. I'll give out the number again: 866-733-6786. Talking with Gary Rivlin about his new book, AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence.
We got a lot of listener comments. Chris writes, what is the timeline for AI doing most jobs? What will humans do? And who are the people programming AI? It seems like there's not much diversity in the industry. You alluded to that a moment ago, Gary.
Right. So, there's this big prediction of superintelligence, AGI, artificial general intelligence. And a lot of people in the field think within the next year, two years, three years, we'll have this AGI. I'll point out skeptically that almost all of them have raised billions of dollars in funding, and so their investors want them to have a product.
I'm dubious of that. I think we're a little bit further away than that. I also think what counts as AGI is in the eyes of the beholder. These things already, they know everything, or they know a lot about everything. They're amazing, the depth of knowledge across the board that no human can ever attain. Is that AGI? I think what's going to happen is startups are going to start claiming AGI, and
there will be people doubting that we're there yet. But whether it's three years or 10 years, I don't doubt it's coming. And let's just use autonomous vehicles. I mean, you're in San Francisco. You see them all the time. All the time. And so they're coming. Ten or 15 years ago, I would have told you they were five or 10 years away. And obviously, they're not
quite at the center of things the way people anticipated, but they are coming. And, you know, there's eight to 10 million people in this country who make their living as a driver: Uber driver, cab driver, long-haul driver, delivery driver. And those jobs are going to be wiped out. Customer service centers: there are already companies using AI
in their customer service centers. And, you know, there's some things that are just really easy and the AI can handle. And the stuff that's more complicated, they'll just, you know, give to humans so humans can spend more of their time doing the stuff they should be paying attention to and all. And so that's another area where I assume that AI is going to, you know, take...
a lot of jobs. The big surprise, though, is five, 10 years ago, you would have said AI is coming for blue collar. It's coming for the factory floor, the drivers I just mentioned. But ChatGPT in 2023 taught us that, oh, they're coming for knowledge workers first: computer programmers, writers, content creators. Yeah. And doctors are using it, lawyers are using it, legal assistants are being replaced in some cases by AI.
But again, I would look at it as an assistant. Augmented. Exactly right. You're amplifying your intelligence. You're augmenting it. It's a great assistant, a co-pilot. And that's the way to really use it. So let's use medicine. These AI models are
far more accurate at reading mammograms and other imagery. That's not to say, oh, let's have AI do all the diagnostic work. I like the idea that there's an actual medical professional doing that, and your backup is the AI. And I think that's the model for the near term and the medium term, that AI is a great help. If you're a lawyer, you know, I'm not saying anyone should hire one of these models as your lawyer. But for a lawyer, it could help
draft their briefs. It could do some of the research. Again, you really have to check it, because there's a famous case in 2023 where a lawyer in New York, or a pair of lawyers in New York, filed a brief and every one of the cases cited was made up. So use it as a first step. Don't use it as the beginning and end. Yeah, you've got to watch out for those hallucinations, right? Yeah.
All right. We are going to continue our conversation with Gary Rivlin. So much more to talk about. His book is called AI Valley. And we'd love to hear from you. Give us a call: 866-733-6786. Again, 866-733-6786. Or you can find us on all the social media platforms, Bluesky, Facebook, Instagram, Threads. We're at KQED Forum. I'm Scott Shafer, here this hour for Mina Kim.
And we'd love to hear from you, especially if you work in the industry. So give us a ring again, 866-733-6786.
Welcome back to Forum. Scott Shafer here for Mina Kim, talking with Gary Rivlin. His latest book is called AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence.
Trillion. That's a lot, Mark... Gary, really. I'm thinking of Mark Zuckerberg. That's a lot of money. We often get confused. You wish. So, I mean, it is interesting, because this is the year of the billionaire. We talk a lot about billionaires, but nowadays Microsoft is worth around $3 trillion. Apple's worth around $3 trillion. You know, Meta, Google, Amazon, they're over a trillion dollars. And, you know,
what I really wanted to focus in on is that Silicon Valley thing: not to create a nice little business that makes you millions or tens of millions of dollars, but the next Google, the next Meta kind of thing. That's really a trillion-dollar race, because those companies are worth trillions. The idea is that we're going to create a product that has millions, tens of millions, hundreds of millions, maybe billions
of users, and then it'll be worth trillions of dollars. But that is the new number. No one has a net worth yet in the trillions. Mr. Musk is, I guess, roughly halfway there, or at least he was before the last few weeks. But for the companies, for that race to cash in, the mad dash to cash in on artificial intelligence, those that are going for these
consumer models that humans are going to use as a personal agent or some way like that, you know, those are potentially worth trillions of dollars. Yeah. All right. We've got a lot of listener comments, but I want to bring in a caller first before we get to those. Mark in Livermore. Welcome. You're on with Gary Rivlin. Hi, Scott. Thank you for taking my call. You can hear me, right? Yeah. Go right ahead.
Okay, so I work in the semiconductor industry for a company called Renesas here in the Valley. And I have a customer that's doing AI designs. They're called Etched, I guess. And they have a really interesting technology. They have something called e-transformer technology, which in layman's terms means that you can do parallel processing. So in the end, you can process data faster, I guess. That's the term.
That's the key here. The bottom line, yeah. So, okay, well, I'm not an expert on the topic, but I've learned quite a bit now. There are two pieces. One piece is the machine learning, the learning piece of the AI, and the other piece is the execution. So, you know, the algorithm learns the
pattern, or learns the voice, learns the images, whatever, and then it makes a decision. That's the other piece of it. So these guys are competing with the likes of NVIDIA and so on. Very awesome technology. I'm excited about it. I'm in that field. But I'm conflicted. The biggest problem I have
with these technologies is, for example, like they're putting sensors in cars. They're forcing us to use our brain less and less. Can you leave some of the thinking for me, please? Right? I mean, it just... Yeah, and this is a trend, Gary, that's been happening in so many ways. I mean, just who remembers phone numbers anymore, for example? Yeah, I love the term cognitive offloading, this idea... I use the example:
when I drive to my brother's house, I have no idea how to get there. If suddenly my phone died, I wouldn't know north, south, east, west. And increasingly that is going to happen. That is a worry. Initially there was this worry around the schools, like, oh no, we can't let kids use these chatbots because they'll use them for cheating. And that's still a concern. I think it's essential that they use them, so they start getting used to it, because it's going to be at the center of their lives. But to me, the real concern is that
they're going to rely on it so much, they're not going to be able to do basic things. So I do think Mark raises an interesting, important point that, you know, if you rely on these too much, like calculators, you know, before calculators, people...
could do math on paper and pencil. And nowadays, I'm not so sure how many people can do that. So there will be repercussions. But I think an optimist would point out, well, on the other hand, there'll be all these things you can do. Someone could use one of these models, a text-to-video, and make a movie. A single person could make a movie using one of these things. So you'll gain these superpowers
even at the same time you're losing powers. Yeah, Mark, thanks so much for the call. We've got some interesting listener comments here. Carl writes, can you tell us about how AI can be used...
Well, you know, I mean, a powerful tool for good is a powerful tool for bad. So on the one hand, there's a lot of promise in AI creating new vaccines and new treatments, but it could also create a deadly pathogen, you know, that kills a lot of people. You know, an AI that could write your wedding toast,
something, you know, a limerick for your friend on his 50th... Done that. ...can be, you know, used to craft far better scam letters. You know, I mean, that's another concern of mine, that these things are used to manipulate people. They could really...
figure out, based on the data they have, how to really manipulate people, to get them to do something. Somebody overseas, instead of sending one of those letters that has obvious spelling mistakes, like, okay, this is probably a scam, they're going to be much better crafted. These things are at the point where they could compose perfectly constructed paragraphs. You'd have no idea, in any language. I could write something and translate it to Chinese, you know, Hindi, whatever. It doesn't make a difference. So... And it seems like one of the points in the book is that we ought to be thinking about those things more than these apocalyptic kind of things that maybe Elon Musk talked about with regard to robots, you know, controlling the world and setting off nuclear weapons and that kind of thing. Right. So, I mean, I really think...
I mean, Hollywood made great movies with Terminator, but I like to joke, well, a movie about robots that just did what they were supposed to do is not a very good movie, but one where robots try to take over the world, and how do we stop them? That's a good movie. And I really do think that we're spending so much time on the apocalyptic stuff that we're not looking at all the things within line of sight that are scary. AI in the use of warfare, AI in surveillance,
AI is a huge energy hog. You know, it's eating up a lot of energy. Those are the concerns. Those are the things I wish the dialogue was about, rather than, you know, are these machines going to subjugate humanity? Here's a comment from Maria, who writes: what political biases could be at play in how AI is programmed, when books are being banned from schools and science is being defunded and disrespected? What information will be fed to our AI models?
Right. So those on the right would already say that we have woke AI. All the models we've been talking about have been taught not to sound misogynist, racist, anti-Semitic. And so on the other side, you have Elon Musk. He has Grok, that's xAI. He wants an AI with far fewer guardrails. Not to sound cynical, but I'm convinced that we're going to have, like everything else, our siloed AI. You can have the more liberal AI, the more polite AI, you could say. You're going to have the more conservative, right-wing AI. You're going to have AI with no guardrails. So I think it's going to be another thing that compartmentalizes us.
All right. Let's go back to the phones. And Paula in Berkeley, you're next. Welcome. You're on with Gary Rivlin.
Thanks so much. I just wanted to say it's a great conversation, and you were talking a lot about how lawyers and doctors are already using AI. And I heard something recently that really stuck with me, which is that in the future, having a human doctor, or having someone that comes into your home to fix things or provide services for you, is actually going to be an extreme luxury. And so lower-income folks and middle-income folks will have to rely
completely on AI. To get a human, you actually have to pay a premium price. I think that really stuck with me because it really ingrained the importance of having humans stay in jobs, get assisted by AI. I love AI. I use it all the time. But I think it's really important that we think about this as a conjoined future rather than something that AI is going to take the lead on or humans are going to take the lead on. As soon as we start allowing humans to get pushed out completely,
I think the socioeconomic aspects of it, the overlay of it, will be dangerous. Gary, sort of a different version, a different take on the digital divide, right? I mean, that's a great point. I 100% agree with that. But a little bit of the flip side, though: there's roughly a billion people on this planet that have easy access to a medical professional, but we have five billion people or so with a smartphone. So ideally,
that someone should be seeing a medical professional. But on the other hand, if you don't have access, you can show it: like, oh, is this rash my kid has something I should rush them to a clinic for right now? Or, don't worry about it, it's not a big deal. I do think that the flip side here is it could be a bit of an equalizer. It'll give everyone
access to healthcare, as one example, for people who don't have access right now. Another counterexample is AI in education, an AI tutor. I think there's great potential with an AI tutor that knows exactly what you don't understand and could craft the lesson
for this individual. A teacher with 30 kids can't quite do that. There's an amazing study out of Nigeria that showed that teachers working with AI tutors, not an AI tutor alone, but teachers working with AI tutors, in six weeks made on average two years of advancement in students' comprehension. And so that's the kind of thing that I think
can be an equalizer between the developed and the developing world. What about accountability, Gary? I mean, you know, you talk about medical advice; there's also malpractice in the real world with doctors and others. I mean, are you saying, sort of, well, access to some advice that's largely good is worth the risk? Right. So, I mean, right now, to use the medical example, it isn't in widespread use for that very reason. You know, kind of the bigger point, though, is, like, in 2023 through some of 2024, I feel like there was a dialogue about how we use this responsibly. How can we make sure that we get more of the good from this technology than the bad? I mean, every technology cuts both ways. The car is amazing, but there's pollution, and 40,000 or so people die a year in the U.S.
from cars, of course. So, you know, how can we do that? And so we were having this dialogue. The Biden administration, you know, took modest steps that I thought were sensible. And now with the Trump administration, they immediately got rid of those, what I'm calling sensible steps. J.D. Vance was at this
AI conference, the third one. The first two were all about safety. This third one, held in January in Paris, was not about safety; that was definitely a secondary, if not tertiary, issue. And, you know, he just stood up there and said, stop with the hand-wringing. We've got to develop this thing. We're in a race with China. And so, you know, I do fear that
the kind of issues that are being raised here are going to go by the wayside because we're in this race. And, you know, if anyone wants to talk about, wait, maybe you should test these things before releasing them and share the data with the government, people are just going to say, China, and that's going to fall by the wayside. You're listening to Forum. I'm Scott Shafer in for Mina Kim. You raised China, Gary, and
I'm wondering to what extent... you know, Trump, of course, likes to erase whatever the president before him did, whether it's Obama or Biden, and, you know, getting rid of the regulations, which you thought were somewhat helpful. How much of that is erasing his predecessor's record, and how much is it about really winning this race against China?
Yeah, I do think it's about winning the race. I mean, it's in part about China and in part about, you know, the high-profile support of some big-name tech people.
Why have some of the biggest names in tech made this rightward shift and supported Trump? To me, it's because they don't want anything to get in the way of AI development or crypto development. And so this is the kind of free-enterprise Wild West that his supporters in the tech world are demanding. And that's what the Trump administration is delivering.
Another listener comment here, from Ami, who writes: how much of an energy toll does generative AI take? I only use ChatGPT to tighten or streamline my own writing. Last weekend, I used it to create a very simple illustration for a report. Can we assume a certain amount of energy and water consumption based on how long ChatGPT takes to process a prompt? Or is there another way to measure our personal usage in terms of impact on energy consumption? Yeah, no, it's huge. There was a study out that found
a simple query to a chatbot requires roughly 10 times as much energy as a Google query. And some are saying, oh, that was too simplistic. But that gives the general idea that this is exponentially more of a demand. The number of data centers out there is increasing exponentially every year,
and it's eating more and more. We really are looking at a huge problem with the energy consumption and the demand on our, you know, electric grid. There are some who say that, well, maybe AI can come up with a solution, but, you know, so far it's a concern. To me, it's another one of those concerns like we really should be planning for this. You know, the schools need to be planning for this. Business needs to be planning for it. But, you know,
Those who are paying attention to our energy grid really need to be paying attention to this because you can just look a few years out and we'll have like a doubling of the number of data centers we need. And that's going to be a huge, huge problem. Coming up to the end of the hour, but let's see if we can get one more caller in. Noah in Oakland, what's on your mind?
Hi, thanks for taking the call. My comment is every time I hear about AI, I hear about the scary things about it mostly. Like it's going to replace people and cause unemployment and it's being directed towards a profit motive and guardrails are being stripped away. And it just strikes me as a very broken kind of economic model where this tool that can vastly increase productivity
is not deployed to liberate people from mundane tasks. It's not deployed to improve access for people to more products that can improve their lives. It's deployed to enrich someone and kick people out of labor, and in this case, intellectual labor, not mundane tasks, things that are fulfilling and interesting. And so my question for the guest is, you know, how do we confront this? Is there a way to
deploy AI or regulate it in some kind of better way, such that it serves the interests of all of us, of the public, of working-class people? Is it public ownership and a takeover of AI? Is it something else? Yeah, thank you. Thanks, Noah, for the question. Gary? It's such an important question. I mean, this is our system. It's an
entrepreneurial and capitalistic system. And it's these companies that, you know, have investors and they want to see a return on investment. You know, if you look at the universities, they're doing some really interesting stuff where they're trying to talk about exactly what Noah was talking about. Like, you know, how can we guide these things? So it does more good. They really are powerful. They really could do incredible things.
And, you know, another example is, you know, science. Like, you know, you don't necessarily have to have a huge lab. You could use AI. So if you're in the developing world and you don't have money, you know, it could be a really powerful tool for doing research relevant to your reality and not just taking it from the Western world.
But, you know, those are all anecdotal. I'm afraid I don't really know the answer other than I wish we were having this dialogue. I wish we as a country, there's so many other things to talk about, so it's getting drowned out. But I wish we were having the dialogue that the caller is bringing up, that, you know, this really could be, you know, an amazing advancement. It could be far more of a net positive than a net negative. And I'm scared that because we're not really paying attention to, you know,
guiding this thing so it's more positive, it will end up being just more of the same, social media redux, except with a different set of issues. Well, reading the book is a good place to start. It's called AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence. Gary Rivlin, thanks for a really interesting hour. Thank you. I loved it. Thanks. You've been listening to Forum. Thanks to my guest and, of course, to you, our listeners, for all your great questions and comments. I'm Scott Shafer, in for Mina Kim.
Funds for the production of Forum are provided by the John S. and James L. Knight Foundation, the Generosity Foundation, and the Corporation for Public Broadcasting.