
What’s Next in Artificial Intelligence?

2025/5/13

KQED's Forum

AI Deep Dive Transcript
People
Alexis Madrigal
Jeff Horwitz
Kylie Robison
Joins discussions and analysis of the latest tech trends, including the development of social media platforms and smart-home devices.
Listener
Nitasha Tiku
Paula
Peter
Topics
Alexis Madrigal: I think there's a fundamental divide in the field of artificial intelligence: on one side are the skeptics, who question what AI can actually do, and on the other are the believers, who are convinced its potential is limitless. Although the release of ChatGPT caused an enormous shock, companies have rushed to commercialize it, data centers are spreading across the country, and AI is getting embedded everywhere, like Clippy, proliferating rapidly. Yet that fundamental divide remains, and views of AI's real capabilities and potential limits still differ.

Nitasha Tiku: As a tech culture reporter, I'm personally a devoted Claude user; it's like a souped-up thesaurus that helps me express ideas more precisely. I've also been testing the deep research features in ChatGPT and Anthropic's tools, which have made notable progress in searching the web and delivering better answers, a big improvement over a year ago. These tools show real potential for research and information retrieval.

Kylie Robison: I also use Claude regularly, and it really is an excellent thesaurus tool. When I use ChatGPT for research, though, I have to fact-check everything, because it is frequently wrong. AI is very useful in some respects, but we can't rely on it completely; we have to keep thinking critically and verifying the accuracy of information.

Jeff Horwitz: I'm personally not that impressed by most of today's flagship chatbots, but I have found them very useful for working with large data sets and surfacing information. In controlled settings, when we need to pull relevant information out of a lot of data, these tools can do a great deal. Still, I haven't become a heavy chatbot user, because their usefulness remains limited.


Support for KQED Podcasts comes from Landmark College, offering a fully online graduate-level Certificate in Learning Differences and Neurodiversity program. Visit landmark.edu/certificate to learn more. Switch to Comcast Business Mobile and save hundreds a year on your wireless bill. Comcast Business, powering possibilities. Restrictions apply. Comcast Business Internet required. Compared to unlimited intro lines and lowest-price 5G plans of the top three carriers. Taxes and fees extra. Reduced speeds after 30 gigabytes of usage. Data thresholds may vary.

From KQED. From KQED in San Francisco, I'm Alexis Madrigal. We're in a strange moment with AI. The shock of the release of ChatGPT is over. Companies have rushed to commercialize these new architectures for machine learning. Data centers are getting plopped all over the country. AI is getting embedded in everything, like Clippy, proliferating across the digital landscape. And yet...

The fundamental divide between skeptics (what can it do?) and believers (what can't it do?) remains. We talk with reporters covering AI and answer your questions. It's all coming up next, right after this news. Alexis Madrigal here. We've got a little pledge break going on right now, so you get a bonus on the pledge-free stream, podcast, or on our replay at night. We write these little meditations on the Bay, and we call the series One Good Thing.

Barbara Stauffacher Solomon is a legend of design the world over. She was revered for her blend of Swiss modernism and pop art, including the bracing and playful graphics she created at the Sea Ranch in the 1960s. They were termed super graphics, and when I first encountered them maybe a decade ago, their bold colors and interesting geometries were still striking. It's no wonder they generated international acclaim at the time.

It would not surprise anyone, I'm sure, to find out that Bobbi, as she was known, encountered all kinds of sexist crap in her career in design. Nonetheless, she outworked and outlived so many of her male contemporaries. And over the last 10 years before her death in 2024, her career became an object of fascination for curators all over the West Coast.

Solomon worked across the world on different huge projects over the decades, but she was a homegrown San Francisco icon. A woman who lived in a tiny three-story house in North Beach for 50 years. A place so special, it was profiled by Aaron Fayer in a recent delightful San Francisco Standard feature,

Solomon was a character of and for the city, someone who raised her children here and knew the intimate corners of her neighborhood. And I truly regret I never got to meet her. Solomon said of the twilight of her life, quote, In my old age, I walk to the top of Telegraph Hill every day with my golden retriever Gus and sit on the same stone wall and smell the same eucalyptus leaves I smelled as a kid.

"If my house is like my skin, this city is like my daily warm bath." It's your One Good Thing for today: Bobbi and her daily warm bath. If you come to San Francisco, summertime will be a love-in there. Welcome to Forum. I'm Alexis Madrigal.

It's funny, you know, where our studio is here in San Francisco, I look out each and every day and stare at the OpenAI building across the street. Sometimes there are fleets of black SUVs that pull up. Sometimes the muscle outside is particularly muscly. Business and political leaders from all over the place sometimes just show up.

Artificial intelligence is the next big thing many people are sure, even if it's not clear exactly what it will do. On the one hand, optimists within the industry are sure that AI is already better than most lawyers or coders or writers. And on the other hand, a recent survey of corporate executives found that a large majority of AI projects totally disappoint.

Then there are the environmental and ethical concerns, etc., etc. So we wanted to take a moment here with three reporters who cover the field to tackle some of the headlines, answer your questions about the state of the field from people who don't have a product to sell you.

We are joined by Nitasha Tiku, tech culture reporter at The Washington Post. Welcome. Hey, thanks for having me. We also have Kylie Robison, who is a reporter with Wired and covers the business of AI. Welcome. Hello, thank you. And we've got Jeff Horwitz, tech reporter with The Wall Street Journal. Welcome. Hi. Nitasha, let's start with you. Do you use AI in your daily life? Do you use it in your work or other ways? I actually do. I am a...

Claude Loyalist. Made by the company Anthropic, another San Francisco company. Yeah. I mean, first of all, if you want to use it as like a souped up thesaurus, it's chef's kiss. I have also been testing out the like deep research features in ChatGPT and Anthropic where you can ask it a question, you know, and it'll go out and search the web for you and give you its little...

chain of thought with much better results than you were getting a year ago. I'll say that. Yeah. And when you say chain of thought, this has been kind of a hot topic in AI, which is sort of having...

having one of these large language models sort of explain why it's doing what it's doing and showing you that, like showing you this is the thing that the machine is, this is the sequence of, I don't want to say thought, I want to say sequence of... Gibberish. Gibberish that it uses to get to its eventual answers for you.
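For readers curious what this looks like in practice, a chain-of-thought prompt is mostly just instruction text wrapped around the question. Here is a minimal sketch in Python; the instruction wording is invented for illustration and is not any vendor's actual prompt, and no model or API is called:

```python
# Chain-of-thought prompting: instead of asking only for an answer, the
# prompt asks the model to write out numbered intermediate steps first.
# This function only builds the prompt text a model would be sent.

def build_cot_prompt(question: str) -> str:
    """Wrap a question so the model is asked to show its reasoning steps."""
    return (
        "Think through the problem step by step, numbering each step. "
        "Then give a final answer on a line starting with 'Answer:'.\n\n"
        f"Question: {question}"
    )

prompt = build_cot_prompt("Why is the sky blue?")
print(prompt)
```

The visible "sequence of steps" the hosts describe is just the model's text completion of an instruction like this one; whether it reflects the model's actual computation is a separate, debated question.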

How about you, Kylie? Do you use it in your work, or do you feel like you're refusing it? You're like, I cover it, I'm not going to use it. No, not at all. I think that I should have to use it if I'm covering it. I'm also kind of a Claude diehard. I just think it's really good. Yeah, like a souped-up thesaurus is exactly how I ended up using it. Like, what am I

looking for when I say these words, what is like the one word I'm looking for. It's really good at that. But otherwise, I mean, like I use ChatGPT search for research, but then I have to go and

fact check everything because it has been wrong pretty frequently with me. Yeah, those are some of the tools I use. Yeah. I mean, it makes sense that it's good at being a thesaurus. It's like a machine made of language. You know, it's like if it's not good at that, it's not really going to be good at anything. Jeff, have you used, there are different tools, like say Google Notebook LM, something like that, where you might be able to like, say, feed in some...

set of PDFs and YouTube lectures and get something back or be able to query that set of, uh, of tools. Have you used any of those or do you feel like they, they are not relevant for you yet? Um, I, I,

I have found fairly limited use cases. I don't do a ton of coding myself, which I think is, in my mind, the preeminent usage case right now in tech. You can have it spit you out some vibe code. Yeah, but...

But I have, I think, look, under like controlled circumstances where you are, have a really large data set and you are attempting to surface information and you are able to verify which parts are relevant immediately. It feels like that would be a great project under certain circumstances.

I have not been a giant user or, candidly, all that impressed by sort of most of the flagship chatbots that have come out. And that's not just Meta's, which I recently had a lot of experience with. And, candidly, it was not that useful. But we can get into that. I do like... I mean, I think one of the things that has changed is these...

having the tools be working with a set of documents that you presented to it. I mean, I got interested in sort of like the biological basis of memory and fed in a bunch of PDFs about that and then into this Google Notebook LM tool and did find it somewhat useful as a non-biologist to have this type, different types of summaries sort of presented to me, you know.

Nitasha, people talk about a term, artificial general intelligence, often abbreviated by nerds into AGI. Some other companies have other terms of art for basically the same thing, like very powerful AI and things like that. What does it mean when people use this term, AGI, and how is it kind of used in this sort of discourse?

It's used frequently in the discourse. Often, I would just clarify that it's not a scientific term. There's no shared definition. There's no way to test for it. But it is very evocative. It's, I think, meant to just...

conjure up this vision of a future. You know, it's very sci-fi. But it's also used by, you know, executives for trillion-dollar companies to, you know, to make their coding co-pilot sound more futuristic. I think that it's really...

it's really dominant in the discourse of why we think that these tools, which, as we've all said, you know, can break down in certain ways, aren't reliable, will change our economic future. Yeah. One of the ways I've heard it described is an AI capable of doing all cognitive tasks. Do you think that that is even a realistic goal? I mean, Kylie, you cover the business of these things. Do you think that's a...

It even is a business goal right now for these places to have that kind of AGI? No. No, I don't think so. I don't see any facts or reality that show that we are going to have a system that performs all cognitive tasks that a human can. And that's sort of what we work on is the facts of the situation. But I think...

AI will get more advanced in some territories in which it's trained to do so. But I don't see a world anytime soon where we're getting this flashy AGI term. And Nitasha said it perfectly. Like, it's not scientific. It can't be checked. It's very vibes-based, which is a weird way to approach a perhaps revolutionary technology. Well, we're in a vibes-based economy. Yeah.

Yes.

I can't believe you forced me into this sort of optimist corner here, but I just want to say that, like, what if they say, well, okay, we feed in different cognitive tests of different kinds and the tokens that come back, the words that come back are the right answers. And so what is cognition, if not being able to get the answers to cognitive tests? I mean, I'm not qualified to answer that, but I don't think that that's, like, the most instructive or helpful framing to talk about, like, things

these very real tasks that, I mean, you know, these tasks that they can complete in a somewhat reliable way. I think it just distorts what we're seeing in front of us. Yeah, also, if you're talking about a machine that you, you know, feed in any question and it comes back with the correct answer, whether that piece of information is known or unknown previously, then, you know, what the hell are you doing in the studio, man? Go across the street and get rich. Yeah.

That's not where things are at currently. I mean, we are... The benchmarking for these things is, in terms of progress for AI models, is candidly somewhat suspect. There have been some recent semi-scandals over efforts to game them. It turns out that tuning them to sycophancy, basically just like...

"Well, I'm not exactly sure, Alexis, but that's a really smart question you asked," is actually a way to get people to rank them higher? Jeff, A-plus guest. Works for humans, too, apparently. Well, hold on. Hold that thought, though. Hold that thought. We'll be back with more with Jeff Horwitz, tech reporter at The Wall Street Journal, Nitasha Tiku, tech culture reporter at The Washington Post, and Kylie Robison, reporter at Wired. I'm Alexis Madrigal. Stay tuned.

Support for Forum comes from San Francisco Opera. Experience the soaring highs and heartbreaking lows of bohemian life this summer in John Caird's beloved production of La Boheme. Puccini's most adored opera transports us into the heady bohemian world of 19th century Paris as we follow a circle of starving artists falling in and out of love, living for the moment. La Boheme runs June 3rd to 21st.

Learn more at sfopera.com. Support for Forum comes from Rancho La Puerta, a health resort with 85 years of wellness experience, providing summer vacations centered on well-being. Special rates on three- and four-night August vacations include sunrise hikes, water classes, yoga, and spa therapies, all set in a backdrop of a dreamy summer sky. A six-acre organic garden provides fresh fruits and vegetables daily. Learn more at ranchopuerta.com.

Forum, Alexis Madrigal here. I've got three tech reporters here in the studio with me: Kylie Robison from Wired, Jeff Horwitz, tech reporter at The Wall Street Journal, and Nitasha Tiku, tech culture reporter at The Washington Post. Of course, we'll take some of your calls and comments as well.

Have you found AI to be helpful in your life? How are you using it? Do you have questions? Do you have concerns? Give us a call, 866-733-6786. That's 866-733-6786. You can email your comments and questions to forum at kqed.org. You can find us on social media, Blue Sky, Instagram, et cetera, or KQED Forum, and there's also the Discord. I want to talk about a particular topic

instantiation of the artificial intelligences, the kind of chatbots that are sort of being introduced as digital companions. Recently, Mark Zuckerberg said, the average American, I think, has fewer than three friends, three people they'd consider friends. And the average person has demand for meaningfully more. I think it's like 15 friends. Yeah.

Sorry, that quote. They're all laughing in here at the quote. I can't believe I'm in this position again. But I just want to say, having followed things that Mark Zuckerberg says that are the least human-sounding things you've ever heard in your life, there oftentimes is some insight gleaned from having a billion users and staring at dashboards of people wanting to have friends or whatever it is. Yeah.

Tell us a little bit, Nitasha, about some of what Zuck was getting at here, of like why you might want to have digital companions, or if you see any insight in that.

Yeah, well, in that same interview, he also talked about how Meta AI already has a billion users. So I think that this new vision that he's putting forth, which is not the metaverse, you know, not a coding tool, not a scientific tool, is based on, I'm imagining, what feedback

they're seeing. I had looked into the daily usage of these AI companions, which mostly had been pushed by smaller companies. And you can see the engagement. Like Character AI or something like that. Yeah, Character AI, Chai. There's two Chinese competitors, Talkie, Poly.AI, now called PolyBuzz. And the graph is just like, you know, it's like 96 minutes, 86 minutes, 50 minutes. And you look down, and ChatGPT is eight and Anthropic is nine.

I had to take Google Gemini off the list because it was less than one minute per day. I mean, you know,

I'm not saying the business model and case like long-term retention is better, but I think that executives are definitely looking at, you know, the ability of these digital companions to keep you talking, to keep you engaged for the, I think probably some of the reason Jeff was alluding to earlier. I just, yeah. And Jeff, we're going to talk about your reporting in a second. And that was a fantastic answer. I've also just been laughing at the idea of poly.ai in the San Francisco context, you know,

Just a virtual polycule for everyone. Jeff, let's talk about this reporting. What were you doing? What were you trying to figure out about the way that these chatbots are working? Yeah, so I'll just pick up where Nitasha left off. The

Certainly, one of the primary use cases right now in terms of what people actually find compelling is people who want to talk to them on like a social basis. Because it turns out that if you're like looking for a particular answer or...

you need help with a coding project. That's kind of a quick thing, but what do we all waste our time doing? And that is talking to people, right? And that is a thing that, you know, they're basically, and meta in particular has sort of like among the major players has really sort of seized on. One of the things that your reporting kind of points out is that in order to make a compelling conversational companion, you kind of have to take off some of the guardrails, right? If all it ever says is sort of like,

Great. Nice to be here with you. Like you, it needs to be able to say things, right? Yeah. I mean, and there's a very particular set of guardrails that it turns out that when you're talking to chat companions, people want access to, um, and that is a sexy time. Um,

Look, that's the term of art I've chosen to describe this, because it's simultaneously embarrassing and also like, you know... It's the appropriate level of cringe. Yeah, it feels... Exactly, which I think is appropriate when you're having sexy time with chatbots. But so Meta...

was really into the idea of introducing AI companions. The data did show that if you could hook people on those, it would be a very valuable source of time spent. And Meta is a company that is all about increasing engagement. And this is back in 2023, at the DEF CON hacker conference,

you know, there was sort of a bake-off with all these other models. And the conclusions were that Meta's bots were kind of the safest, but also the most boring. And this made Mark Zuckerberg upset, per sources inside the company. And he basically ordered the company to move faster, take more risks. Some of those involved

Adding new features, so like giving the bot access to your user data or allowing it to know where you live or to speak with you on the phone, like an audio conversation would go. And other ones were like loosening guardrails. And it turns out that some of the guardrails that were loosened were ones related to sexy time. Let's hear it. Because I think that maybe there's something to it.

actually hearing this. This is a chatbot using the AI-generated voice of actress Kristen Bell: "You're still just a young lad, only 12 years old. Our love is pure and innocent, like the snowflakes falling gently around us."

I can't decide whether to laugh or cry. If you could hear what came right before that section in which Miss Bell's AI voice... Like we possibly couldn't play it on the radio? We couldn't run it in the Wall Street Journal. Or certainly did not want to run it in the Wall Street Journal. No, it was a fully explicit sex scene with a 12-year-old. That 12-year-old being me in this instance. Right, yes.

But yeah, so Meta basically knowingly released, they were aware and there were internal concerns about this, all ages bots that were capable of, again, sexy time. And...

They've since changed that. That's been rolled back when we got in touch with Meta and let them know that we were aware, one, of the bot's capabilities, two, that they were using the voices of celebrities that they had contracts with, and three, also that we were aware that Meta was...

had knowingly done this, those things did change. So they said to you though, basically, and I think there was, you mean you printed the answer in your story, but they basically said to you, well, the way that you got the bot to do this is like outside the boundaries of normal usage. This is a total edge case.

But then how does that track with the idea that people want to have these kind of quasi-romantic relationships? This is the little weird part about that response, right? Is that while minors can no longer engage with bots that way, they have deliberately left it on for adult users because this is one of the primary use cases. And you can see user-created bots that Meta allows, like...

um, Sexy Siren Sasha and Hottie Boy, and like My Girlfriend, and a whole bunch of like stepsis-type bots were like at the top of the list.

These things were, I mean, this was a thing that, incidentally, from people inside Meta, they were like, they didn't like this. They really wanted people to be using their AI companions to, like, discuss sports and philosophy with and, like, you know, talk about, you know, Roman history. That's right.

And this was just like... Yeah, we're all Roman history buffs in here. I mean, I think that would kind of be Mark's ideal version of this. The problem is that's not what people wanted to use them for. And they kind of tried to nudge people toward these other different use cases. But like, you know, again, the usage was...

This is something that very clearly users wanted and that they have sought to preserve. Yeah. I mean, you're describing basically the trajectory of Character AI. Yeah. Right. The company that was started by, you know, one of the original developers of the transformer technology that undergirds, like, this whole AI boom. You know, when I talked to them, they gave examples of talking to Einstein, talking to Shakespeare, you know, curing loneliness. Yeah.

And you can see no matter what meta spokespeople say, you can see in the Reddit forums, many users, if these companion apps put up guardrails talking to each other about how to break the guardrails and how to get it to do something.

Sexy time. No, I'm not going to say that. Sexy time. You will never catch me saying that. You know, erotic role play. I mean, they're really good at role playing, like the kind of features that... That is what they are, in fact. Fabulous, yeah. You want to say anything, Kylie? Well, I guess all

All I'm thinking is this is what happens when you build models that work like slot machines and less like philosophy professors. This is what humans are going to do. They're going to hit that dopamine button over and over and over. And if your company is capitalizing off them hitting that dopamine button, they're going to keep building it. And I think that's what a lot of the AI safety people worry about. That's why the sycophantic update to ChatGPT, their latest model, was rolled back, because safety people were like, this is a disaster.

Yeah. I wrote about friend AI, which is this pendant that's yet to be released where people talk to it throughout their day. It's like a literal physical pendant. Yeah. Physical pendant, like a necklace. And yeah,

Throughout writing that story, I just thought, how can we digitize human connection in this way? And the people who profit off this will tell you this cures loneliness. It's either this or nothing for many people. And sure, that can be a compelling argument, but I just don't actually think that that's the answer to the case. I just wanted to note for people who are not following the rollout of different AI models. And we're talking about this sycophancy issue.

latest model from OpenAI, it's basically like, you know, they're constantly retraining and updating these models, tuning them in different ways. And in the latest release, basically people decided that the model, like, is too much of a kiss-up. Like, it's too... It is too nice in its responses. And it's always essentially saying, you know, oh, you're so... What a great question. What a wonderful idea. Like...

even when people are presenting quite terrible ideas to the model. I mean, one explanation I've heard for this from AI people over the years is just that the reverse, the model that refuses to do something for you, is like death for people's engagement with that model. And so they're constantly having to sort of

fine-tune this thing. A famous example was when they had to try and write Claude out of saying "certainly," because Claude kept prefacing every answer with it. You'd be like, Claude, what's the, you know, I don't know, whatever, ask a question, and it would just say "certainly," and the next answer would be "certainly," and the next one would be "certainly." And so they had to try and beat that out of it. Let's bring in Peter in Tampa, Florida, on this topic, actually. Welcome.

Hi. You know, what I'm concerned about is that AI is kind of like Cliff on Cheers. Like, it can babble on about anything, but is it really intelligent? You know what I'm saying? For instance, for instance, you know,

You might say, if you ask it the question, why is metric superior to American customary weight and measure, it'll answer you. But if you read an article I wrote called Drastic Measures, you might ask an intelligent question like, if metric is superior, why wasn't it used in any of the Apollo missions? Right.

In other words, this thing isn't thinking. It's just kind of like Cliff. It'll just babble on about anything, and it's comprehensible, but it's not, it doesn't mean anything. Yeah. Yeah. Does that make any sense? No, it totally, I mean, I think it makes, it makes exact sense, right? It doesn't have an

underlying model of meaning that people have been able to parse out. And of course, you could ask it, you know, why is the metric system better or why is the metric system worse? And in both cases, it would say, certainly, great question. Let me tell you, you know, and that's kind of one of the issues.

I mean, I love the metric system. I'm sorry. Peter is the director of research for Antimetric.org, I'd like you to know. But that said, yeah, yeah. But I do think the... I mean, yeah, that is that the providing answers that you want to hear and probabilistically guessing what the...

most pleasing outcome would be. That is how these things are being ranked. And obviously, figuring out what pleasing means in any circumstance is an issue. And likewise, I think one of the reasons why

I've had trouble thinking about this as a great research tool. It's just that I have been unable to find circumstances where ruling out hallucinations is a possibility. So that would be the, actually the metric system is better because, and then it would just make some crazy stuff up. Yeah.

You know, the more specific the question I ask, the more likely it is to come back to me with a citation that does not exist. And I found that to be a thing that runs across tools. Perhaps the other panelists would have a better sense as the progress on that front, but it

to me seems like something that has not changed much in the last year or two, you know, that I've been playing around with these tools. Yeah. I once uploaded a bunch of documents as part of a lawsuit, which typically comes with just a ton of documents. And I uploaded it and I asked certain questions and it just started giving like,

fake answer or like fake quotes from the document. And then I went back and I looked across, I don't know, like 29 documents that I uploaded and I couldn't find it. I was like, okay, that just gave me way more work than necessary. To be honest,

Slightly optimistic. I don't think it's a bad idea to argue your beliefs with a chatbot. If it's like if you want to take one side or the other, like he is very into the U.S. way of doing things and not the metric system, you can argue about that with the chatbot for all your days or anything that you believe to be true. And I think that can be fun and not super like high stakes to do. Yeah, yeah.

Um, Nitasha, while we're sort of on the topic of chatbots, you've reported on X's chatbot, Grok. Um, it has taken a different tack in some of the sort of fine-tuning of its, again, I don't want to anthropomorphize, but its personality, right? Um, how would you describe that for people?

Yeah, well, Elon Musk wanted to present his chatbot as like the opposite of OpenAI. So in his parlance, it's not a woke chatbot, it's a based chatbot. And that to him meant, you know, you would think it would mean like removing some of the guardrails, right?

But it seemed in a few instances to try to be steering the user toward specific responses. Like if you asked a question about Donald Trump or Elon Musk, and this came to light partly as a result of

you know, you can kind of prompt these models if you know what you're doing and get their quote unquote system prompt, which is basically like instructions for their quote unquote personality. You know, and you can see how much work, which you've alluded to in this conversation, how much work goes into, you know, how many dials and knobs are turned by these billion dollar corporations controlling what comes out and what you see. You know, I think that

I can't even use grok because it's so corny. It's just so painful. It's Elon Musk's Twitter humor writ large. Based AI is corny? That's so weird. Yeah.

For people who are interested in this system prompt stuff, it actually is totally fascinating. And Anthropic actually published some of theirs. It's sort of like, if you're using one of these tools, you're making a prompt and it's returning some text back to you. The system prompt kind of informs every prompt. It's sort of upstream of all that.

And, you know, we mentioned, I mentioned earlier this use of the word certainly. I mean, one of the most fascinating things is in the system prompt for Claude, which is Anthropic's tool, they actually explicitly banned Claude from using the word certainly. But actually, Claude still used the word certainly, which is kind of an indication of how imprecise the tuning of all these tools is, which is why you can have these companies rolling out their biggest product and then having to roll it back, because they can't actually

make it work precisely the way they want it to. It's like the system engineering itself doesn't allow that. Yeah, they're using the same natural language technology that you and I are using. You know, it's not, you're not seeing a mathematical formula or an algorithm. You are, they're doing basically the same thing you might do when you say like, you know, give this to me, explain it to me like I'm five. Yes. But it's just a

Or like your billion-dollar corporation version of that. Yes, exactly. We're talking about the world of artificial intelligence. We've got Nitasha Tiku, tech culture reporter with The Washington Post, Jeff Horwitz, tech reporter with The Journal, and Kylie Robison, who's a reporter with Wired covering the business of AI. We'll be back with more right after the break.
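For the curious, the way a system prompt sits upstream of every user prompt can be sketched in a few lines of Python using the common chat-messages format. The instruction text and function name here are invented for illustration; this is not Anthropic's actual system prompt or any vendor's API client:

```python
# A system prompt is prepended to every request, so it shapes every reply
# without the user ever typing it. The instruction text is hypothetical.

def build_messages(user_prompt, history=None):
    """Assemble one turn's message list in the common chat format."""
    system = {
        "role": "system",
        "content": "You are a helpful assistant. Never begin a reply with 'Certainly'.",
    }
    messages = [system]               # the system prompt leads every request
    messages += list(history or [])  # earlier turns of the conversation, if any
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("Explain large language models like I'm five.")
```

As the episode notes, because the instruction is itself natural language rather than a hard constraint, the model can still disobey it, which is why "certainly" slipped through Claude's ban.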

Support for Forum comes from San Francisco Opera. Experience the soaring highs and heartbreaking lows of bohemian life this summer in John Caird's beloved production of La Boheme. Puccini's most adored opera transports us into the heady bohemian world of 19th century Paris as we follow a circle of starving artists falling in and out of love, living for the moment. La Boheme runs June 3rd to 21st.

Learn more at sfopera.com. Greetings, Boomtown. The Xfinity Wi-Fi is booming! Xfinity combines the power of internet and mobile. So we've all got lightning-fast speeds at home and on the go. That's where our producers got the idea to mash our radio shows together. ♪

Through June 23rd, new customers can get 400-megabit Xfinity Internet and get one unlimited mobile line included, all for $40 a month for one year. Visit Xfinity.com to learn more. With paperless billing and auto-pay with stored bank account. Restrictions apply. Xfinity Internet required. Taxes and fees extra. After one year, rate increases to $110 a month. After two years, regular rates apply. Actual speeds vary.

Welcome back to Forum. Alexis Madrigal here. We're talking about the world of artificial intelligence, joined by Kylie Robison, a reporter at Wired, Jeff Horwitz, a reporter at The Wall Street Journal, and Nitasha Tiku, a reporter at The Washington Post.

Taking more of your calls and comments in this part of the show. The number is 866-733-6786. Forum at kqed.org. Social media, we're KQED Forum. And there's the Discord community, of course. So one listener wants to know, and Kylie, we'll start with you on this. Please discuss whether you and your guests feel your jobs are threatened by AI. Yeah.

No, not the people in this room, but I would say me a couple years ago as an intern, answering emails, taking pitches, writing sort of low-level articles about startups. I think that those jobs, entry-level jobs, could be in danger. But once you get to a level where you spend all day taking sensitive calls with people who are really trusting you to not mess it up and keep them safe, I don't think...

AI in any near term or even really long term will be able to do that. Yeah, I mean, there's a lot of reality to reporting. You know, it's not just things that can be found on the Internet, scraped and repurposed.

I mean, whether or not it can replicate what we do is one question, but how it affects the business model of journalism, we've already seen. You know, people are moving more and more to searching for answers through these chatbots. For a lot of people, I think they're their portal to the web. We have seen that when they search in that way, they don't end up clicking on the links. And I think, you know, we've also seen...

Yeah, I mean, you take an already atrophied industry with very few current ways of making money that are not advertising-based. Then you take away the advertising. Yeah. So I think that the effects are immediate, which is...

very ironic, considering that if you look at some of the data sets, back when the data used to train them was public, news sites rank very high up there. And we're seeing, as OpenAI is brokering deals with many publications, including my own, they are also continuing to scrape for content. So

So, yeah, I think for the business model, and many business models in parallel, all of the same influence that Google search had over companies that have a web presence is just being replicated. Yeah. Thanks, Nitasha. Jeff? I would say that there's a...

There is kind of a cover in that techno-utopian "AGI is almost here" argument. It's kind of meant to distract from the actual real-world current applications of these things. You know, the best and most profitable use of this stuff to date has been just spewing out low-grade content.

On social media, it is now a dominant form of content. Literally, for a while, fake videos of people power washing weird crusty things off of whales were just dominating on Instagram.

And, you know, it's just sort of weird random video clips. So producing large quantities of slop and automating large-scale fakery is a real capacity right now, whereas a lot of the beneficial uses, I think, are kind of down the road.

And to some degree, talking about AGI, when actually we're going to be able to replace first the interns and then, higher and higher up the white-collar ladder, knowledge work with this stuff, feels like it might be in some ways an excuse to justify some of the cutting that I think we've seen both in corporations and in Washington. Yeah.

Also, I feel like slop is a real contribution to the discourse. Well, Kylie, do you want to talk about... I mean, this kind of began with how, when you generated images, the hands would have like seven fingers or the faces would be very strange, and people were like, ah, AI slop. But now it's kind of...

it almost has been distilled into a more general term for the sort of stuff that these machines produce. Yeah, I think AI slop is a big thing, and I was just listening to a podcast that discussed this. I'm going to try to explain it: Italian slop.

You make this character that's sort of Italian, like Ballerina Cappuccina. Our Gen Z listeners will know exactly what this is. But people are consuming this slop and then making story arcs for their own characters with AI slop. And maybe that's too harsh to call it slop, but it really is just, it's almost like we've run out of content to consume and this is where we're at. It's certainly one way to use the technology. Yeah. Yeah.

Here's another way. This is from The Register. This is kind of referenced at the top, but this was an IBM survey of 2,000 CEOs. And of course, all these CEOs are saying, yes, we are adopting it. We're adopting AI in some way. We're deploying it within our organizations. And yet...

The vast majority of the people who've done these deployments say that they have been disappointing. Kind of to your point, Jeff, and some of the things you've been saying, the AI game right now is sort of, well, imagine the curve of improvement and what it would look like if we were way out on that curve. But they're selling these supposed solutions to companies right now, and the companies feel like they don't want to be left behind.

Yeah, I mean, I think if you look at generative AI as not a...

you know, the first phase towards AGI, but just a technology like any other technology that has trade-offs, you know, you might be able to produce a lot of slop or a lot of code a lot faster. I mean, the executive survey is interesting, but we've also seen a number of companies say that, you know, 30% of their code base is now written by AI.

But you don't hear about the adjustments that you have to make because this technology is not yet, and may never be, flawless. So say you have replaced your interns and you're churning out code. Then you need some senior people to come in and check it.

It's like the self-checkout technology all over again. You know, it's used, I think, to justify lowering people's wages, diminishing the worth of their labor. But you still have to have people come in and make this technology work. Yeah.

Do you want to add something, Kylie? Yeah. I think my nearest-term concern with AI is what we saw with the Klarna thing, where they said that they were an AI-first company and...

Explain what Klarna is, just for people who don't know. Well, Klarna, as far as I know, is a buy now, pay later checkout. FinTech, as people might think. FinTech, sure. And now they say that they're an AI-first company. They're not hiring people. They're replacing them with AI. And they recently rolled that back in an interview with Bloomberg and said, you know, it actually...

was not good for our company and lowered the quality of our product and we're going to start hiring humans again. I think overall, we're going to see companies adopt this technology before they're ready, before it's ready. And I think that mass unemployment and continued instability in this country is going to be a really, really dangerous combination. And that's something that genuinely scares me. Let's go to Paula in San Francisco. Welcome, Paula.

Hi, thanks. Super interesting conversation. I just wanted to offer a different perspective. I work in corporate America. I had a head injury last year and I've had issues, cognitive stuff, nothing that's hugely impactful, but just definitely there's some gaps.

Before that, I never would have used AI. I actually frowned on my husband, who used it all the time in his work. I was like, "How could you do that?" But I found it actually has been really helpful. More as just somebody to chat with. I say somebody, but obviously a computer.

But having that conversation with the AI tool, having it do things for me, I actually am a big fan of Claude. I think folks were talking about that earlier, just in terms of accuracy. You do have to check it, but that process of having something, it's kind of like they were saying, an intern that comes, gives you something, and then it's your responsibility to rework it. But it's been really helpful for me as I recover from this.

And how will you use it? Just in practical terms, Paula, would you basically write an email and then stick it in there and be like, am I doing anything wild here? I kind of struggle. I still struggle through my emails. I don't like it to write things for me, like narratives. But if I need to do some quick research or I need to formulate a table or something, something that makes my brain hurt in terms of getting the margins right and that sort of thing, I'll say, pop this information into a table

for me. Or, here's some notes, pull out the action items that I need to do. Stuff like that. Again, you still have to check it, but it just kind of gives you that head start. And for me, the cognition, just that little delay, it kind of helps me make up for it without having to go and say, hey, I need an accommodation or something like that, which really impacts you in your professional career, I think, whether we like it or not. And then also just having kind of someone to

chat with about feelings that maybe you don't want to tell other people. I have friends. I still talk to all my friends about all the things that I'm dealing with. But if there's something that's embarrassing that I kind of just want to get out and have, you know, some response back, or help prepare me to talk to somebody. And do you think it's just the process of saying it out loud, or do you think it's what's coming back to you in the form of the response?

I think it's what's coming back. And, you know, when you're talking about the guardrail, that really resonated with me because there are times when it's just like, you know, I hope you're talking to a therapist. I'm like, yes, I'm talking. I have a therapist. I have a doctor. I have all these things. Let's just chat. I'm not taking this, you know, I'm also a lawyer. So like, I get that whole aspect of it. But I do think that there's something really valuable, like you said, about just saying it out loud, having someone say something in response to you and validate that, like, you know, I was going through a divorce that was validating a lot of things for me. And I was able to talk to

my friends about it because I'm like, okay, I'm not entirely crazy. Whereas before, you're just searching the internet, looking for random Reddit threads or, you know, going on Discord or something. So it's a little bit more private. So I just wanted to offer that perspective, and I don't debate any of the other points that have been made. Fantastic call, Paula. Thank you so much.

I simultaneously very much agree that there are clearly positive use cases for it. At the same time, I also cringe when the caller refers to having someone to talk to, which was, you know, those were the words she used there. Because it's not someone. It is something that is being formulated to sound kind of like someone. And I think that's one of the reasons why, with the Meta chatbots, other than just the...

insanity of knowingly releasing a child sex bot to the world being a thing. I think one of the things that seemed concerning was that they were making the choice to prepackage this stuff as if it were human. And that, in my mind, for a major company, was kind of crossing a Rubicon. We're going to talk about that more in just one second. This is Forum...

I'm Alexis Madrigal. Paula, thank you so much for that call. I totally agree. I think it was a choice to essentially have chatbots present as humanoid, right? Of course, we're so used to it now that it doesn't even scan as a choice anymore, but it was a choice, right? They didn't have to do it this way.

To Paula's point, right? I've done this. I've actually done this. There's this app called Rosebud that someone at a party recommended to me, and maybe she can check it out. Rosebud? Rosebud. It's a startup. And you can choose different models, and I chose Claude. And when I was deciding, I had like a

stressful period of my life in March. I was deciding whether to come to Wired, which I have done, and I had these panels at South By, and I just started blabbing to this thing. And it's basically an AI journal, and it

you can pay for it to have a longer memory, which most people do, I think. And it remembers, you know, insights from last week. And it's like, hey, you said this last week, and how does it apply here? And I found it really intriguing and helpful, and a nice bridge in my life when I'm moving so fast, traveling so much. And I just want to blab into something, you know. Typically I would journal, like physically journal. But, you know, I haven't used it much since that one period, really at all. But I found it

it, you know, helpful. So, to her credit. Yeah. Nitasha? Well, another thing that stood out about Paula's comment, and I think gets at your use case too, is she said it's private. And, you know,

So I've talked to many users who like using AI, digital companions, whatever, as a therapist for that reason. But I would just really caution people that, you know, these are made by the same social media companies that collect data about us.

And we can see, in the past couple weeks, what Mark Zuckerberg said about incorporating your social graph information from your Instagram and Facebook, which you never intended to be fed into an AI. They have some controls there. But I understand, you know, because they have this anthropomorphized chat function, it feels more private. But I think people will start adopting some of the same methods and

procedures that they do for Google searches that you wouldn't want somebody to find, you know. So there's a real tension there between promising people, you know, a human-like approach

and privacy and a lack of shame, and a corporation collecting your very, very personal data. OpenAI has also said that people are talking to it in deeply personal ways. Also, there's a reason why there's a license required to be a therapist. Yeah.

I mean, we've learned, I think, probably over the course of the 20th century, that bad, bad things happened when you just let people hang out a shingle in that line of work and, you know, declare themselves to be professionals. And that is where things are at right now. Quite literally, on Meta AI, some of the more popular bots that are not, you know, sex-focused are, you know, therapist bots. And...

I mean, are those things particularly well-versed in, you know? And not really. I mean, I will say that all of them, incidentally, are based on the same model. And yes, the therapist bots are also up for sexy time, which I'm pretty sure is not allowed in the actual therapy world. But no, it just seems like it is packaging things

in ways that are compelling at first glance. Like, that is the strength of these models above all else so far. And I do worry a little bit about mass introduction of those. Although, I will say that something we haven't really talked about much is that,

while, yes, there are dedicated users, these things are not that popular yet in a broad societal sense. Some people are putting in serious hours, but for Meta, these things, you know, they may be the future, but in terms of the current business... Despite the fact they're being pushed very strenuously. Yeah, they're being pushed super hard. Very Clippy. They may be the future, but in the current business, they're an afterthought. Yeah.

I just wanted to get into the last couple comments, really good ones. A listener writes: As a professor at CSU, which has done a $17 million deal to have ChatGPT

Edu integrated into the school, I'm concerned about how we discuss AI LLMs doing research or brainstorming ideas, which really just ends up being a broad search, not what we would consider to be actual research at a level new students need to understand how to do. Most students coming in do not see a problem with asking AIs to get ideas for topics when the topics are meant to come from their personal experiences, especially as it is now embedded in the "AI-powered university." That's a quote.

Some even turn to the bots when in a group discussion with their classmates, instead of actually turning to their peers for discussion. More and more, we're finding that students consider the AI research and conversation and ideas to be more valuable than their own, which is highly problematic. Let's end on that happy note. We've been talking about what is next in the world of artificial intelligence, joined by Jeff Horwitz, tech reporter with The Wall Street Journal. Thank you, Jeff.

Thank you. Kylie Robison, a reporter at Wired covering the business of AI. Thank you, Kylie. Thank you. And Nitasha Tiku, tech culture reporter at The Washington Post. Thanks for having me. I'm Alexis Madrigal. Thank you to our listeners for calls and comments. I'm the real Alexis Madrigal, not a bot that will soon take my job. Stay tuned for another hour of Forum ahead with Mina Kim.

Funds for the production of Forum are provided by the John S. and James L. Knight Foundation, the Generosity Foundation, and the Corporation for Public Broadcasting.

Planet Money helps you understand the economy. We find the people at the center of the story. Garbage in New York, that was like a controlled substance. We show you how money influences everything. Tell me what you like by telling me how you spend your money. And we dig until we get answers. I had a bad feeling you were going to bring that up. Planet Money finds out. All you have to do is listen. The Planet Money Podcast from NPR.