
AI is Becoming More ‘Cheers’ and Less ‘50 First Dates’

2025/6/6

WSJ Tech News Briefing

People
Brad Lightcap
Stephen Rosenbush
Topics
Stephen Rosenbush: I think we're leaving the "50 First Dates" era and entering one in which AI can genuinely remember and understand us. Past AI chatbots were like dealing with a stranger every day, unable to recall a user's preferences or history. But now, by connecting chatbots to the databases and applications that hold our personal data, AI can get to know us better and offer much more personalized service. For example, as a student, the AI won't just remind me of an upcoming physics exam; it will generate personalized quizzes from my notes and course materials. This kind of personalized memory and interaction will make AI feel more like a friend who knows us than a cold tool. The same technology can also be applied to team collaboration: in an emergency room, for instance, AI can help doctors quickly understand a patient's history and work more efficiently.


Transcript


You can Venmo this, or you can Venmo that. Venmo this, or you can Venmo that.

The Venmo MasterCard is issued by The Bancorp Bank pursuant to license by MasterCard International Incorporated. Card may be used everywhere MasterCard is accepted. Venmo purchase restrictions apply. Hey, TNB listeners. Before we get started, heads up: we're going to be asking you one more question at the top of each show this week. Our goal here at Tech News Briefing is to keep you updated with the latest headlines and trends on all things tech. So we want to know more about you, what you like about the show, and what more you'd like to be hearing from us.

Our question this week is what other tech podcasts do you listen to? That is assuming we are not the only one. If you're listening on Spotify, look for our poll under the episode description, or you can send us an email to tnb at wsj.com. Now onto the show.

Welcome to Tech News Briefing. It's Friday, June 6th. I'm Victoria Craig for The Wall Street Journal. Today, two stories on the future of the relationship between humans and AI. First, if you want to be where everybody knows your name, is a chatbot the solution? We'll tell you whether AI can really rival familiarity among friends at the local watering hole. Then we'll hear from OpenAI's chief operating officer on the state of the AI industry, where it is, where it's going, and what we know about a new suite of devices his company is working on. But first, artificial intelligence is entering its Cheers era. You know...

Sometimes you want to go where everybody knows your name.

The long-running 80s sitcom was based on the idea that the more time bartenders and bar regulars spent together, the more they got to know each other, their histories, preferences, quirks, and all. Stephen Rosenbush, head of WSJ Pro's Enterprise Technology Bureau, writes that that kind of sustained memory may soon be replicated in artificial intelligence chatbots. Stephen, if we're entering AI's Cheers era, what era are we leaving behind? Ooh.

We are coming from the place of 50 First Dates, where you have to explain yourself to a total stranger for the first time over and over and over again. That's sort of what it's been like interacting with AI chatbots. Yeah, they don't really remember us, do they? There's a limit to how much information they can recall. As you point out, AI assistants weren't really built to remember us. They're just there to answer our questions and be these

all-knowing things and wait for us to interact with them. So how difficult is it to factor in more memory capabilities for this sort of next generation of AI, if we can think about it that way? I would say that it was difficult and complicated, but not super impossible. There was a lot of infrastructure and tooling that had to be built. The generative AI model was built, and then the chatbot was built on top of that.

And it became a little bit more functional. It acquired the ability to do some reasoning. And now, over the last six to nine months or so, companies have started to build out the protocols that allow these chatbots to remember who we are, understand our preferences, and get to know us in a more significant way. And how does it do that? How does it get to know us if we don't explicitly tell it various points of information?

A lot of this came down to simply connecting the chatbot to all these databases and applications that already have a ton of data about us. Our email, our calendar, all these applications know us essentially really well. So the data was there and a big challenge was connecting the chatbot to the apps and the databases behind them that we use every day.

And there was a turning point in the last few months. Anthropic, the AI company behind the Claude chatbot, created a protocol that made it easier to connect chatbots to things called tools, which can include many different kinds of applications that we use.
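To make the idea concrete, here is a minimal sketch of what such a tool-connection protocol does conceptually: each tool (a calendar, an inbox, a notes app) registers a name, a description, and a callable, and the host dispatches the model's tool calls by name. This is an illustration of the pattern, not Anthropic's actual SDK; all names here are hypothetical.

```python
# Sketch of a tool-connection layer: tools advertise themselves to the
# model, and the host routes the model's calls to the right backend.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[dict], dict]

class ToolHost:
    def __init__(self):
        self._tools = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list:
        # What the model is shown so it can decide which tool to call.
        return [{"name": t.name, "description": t.description}
                for t in self._tools.values()]

    def call(self, name: str, args: dict) -> dict:
        # Dispatch a model-issued tool call to the registered backend.
        return self._tools[name].run(args)

# Example: a calendar "tool" backed by a plain dict.
events = {"2025-06-12": "Physics exam"}
host = ToolHost()
host.register(Tool(
    name="calendar.lookup",
    description="Return the event on a given ISO date, if any.",
    run=lambda args: {"event": events.get(args["date"])},
))

result = host.call("calendar.lookup", {"date": "2025-06-12"})
```

The point of the protocol is the uniform interface: once the calendar, the inbox, and the notes app all speak it, the chatbot can reach any of them without bespoke integration work.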

And I love some of the examples that you give for how we can use this in our day-to-day lives. One example that you give is if I'm a student with a big physics exam, it won't just remind me that I have that exam coming up, but it'll give me personalized quizzes. It will take notes from my professor, photos, even my notes, and cobble that all together and give that to me in one package. Is that right? What are some of the other ways that it can work?

I mean, just to go over that point again, that came from Google's developer conference the other week. The company explained how it was beginning to connect its Gemini models to Google apps, beginning with Gmail and then later this year connecting it to other apps. So as those other apps are connected to Gemini, it should be possible for the chatbot to begin to understand and then remember that

I'm a student. I study physics. I have an exam coming up. And they've gleaned all this insight from looking at my inbox, from looking at my calendar. And then because there's this reasoning ability now,

The chatbot knows that I'm a student, that I have a physics exam coming up next week. It can proactively then suggest, hey, would you like me to quiz you on this? You don't even have to prompt the chatbot to do this. So things become more proactive. And then in a somewhat different area, you can start applying all of these memory functions to teams of people. There's a company in the healthcare space called Abridge that helps doctors write notes of their interactions with patients. They are looking toward eventually being able to provide shared memory for emergency room teams, so that if you're a patient in an ER and there's a shift change, the person who comes on deck doesn't have to work so hard to get to know you. In theory, you can start to make that shared memory much more usable and accessible.
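The two ideas in this exchange, facts gleaned from connected apps and memory shared across people, can be sketched together. This is an illustrative toy, not any vendor's API: the store, the fact format, and the trigger rule are all assumptions.

```python
# Sketch of persistent, shareable chatbot memory: facts about a subject
# accumulate from connected apps, and any assistant (or any new shift)
# that reads the store starts with the same context.
from collections import defaultdict
from typing import Optional

class SharedMemory:
    def __init__(self):
        self._facts = defaultdict(list)  # subject -> list of facts

    def remember(self, subject: str, fact: str) -> None:
        self._facts[subject].append(fact)

    def recall(self, subject: str) -> list:
        return list(self._facts[subject])

memory = SharedMemory()
# Facts gleaned from inbox and calendar rather than typed in by the user:
memory.remember("alice", "studies physics")
memory.remember("alice", "exam on 2025-06-12")

def proactive_suggestion(subject: str) -> Optional[str]:
    # Toy version of the proactive step: if memory contains an upcoming
    # exam, offer a quiz without waiting for a prompt.
    facts = memory.recall(subject)
    if any("exam" in f for f in facts):
        return "Would you like me to quiz you before your exam?"
    return None
```

The ER example is the same mechanism with a different subject: the incoming clinician calls `recall("patient-123")` instead of re-interviewing the patient.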

That was Stephen Rosenbush of WSJ Pro. Coming up, is it possible for personal tech to be even more personal? OpenAI's operating chief thinks so. That conversation after the break. At GMC, ignorance is the furthest thing from bliss. Bliss is research, testing, testing the testing until it results in not just one truck, but a whole lineup.

The 2025 GMC Sierra lineup featuring the Sierra 1500, Heavy Duty, and EV. Because true bliss is removing every shadow from every doubt. We are professional grade. Visit GMC.com to learn more.

A couple weeks ago, we told you how one of Apple's famed design gurus had been working quietly with OpenAI's Sam Altman to dream up the next generation of consumer electronics with an artificial intelligence twist. The details were scant, but WSJ reporter Keach Hagey, who helped break that story, has been trying to coax out more information about what OpenAI-designed devices could look like.

Last week, she sat down with Brad Lightcap, the chief operating officer of OpenAI, at the Journal's Future of Everything event in New York. Here are some highlights from their conversation. You've been releasing some agents, this Operator one, which is a research preview. What has the usage been and uptake, and what are people really using them for? How much usage are you seeing of the agents, both for coding and the other stuff? Yeah, agents are a little all over the map. So Operator specifically is

very early. And for those of you who don't know, Operator is a system that basically allows an AI to use a computer the way that a person would use a computer. So it can click around, it's got a mouse and a keyboard, it can move around, control applications, control the web, and so on. And it is really early for that type of agent, at least, but we think it's going to be a critical part of the future. And the right way to roughly, in my opinion, think about agents is

you're going from an AI system that has what I would call wide and diverse factual knowledge to AI systems that have wide and diverse ability to reason. So they can think really hard about a problem and actually come up with, in some sense, new ideas or new knowledge about a topic. And also can think about how to solve a problem. So critically, in their chain of thought, you can actually see how they start to map out a strategy to go actually execute a task.

So that's kind of the phase we're in now is this transition from what would be purely chatbots and reasoning models into what really are more agentic AI systems that can use tools, solve problems, execute tasks, write code, control computers, and basically string all these capabilities together end-to-end to actually solve fairly complex problems.
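The loop Lightcap describes, reason about a goal, pick a tool, observe the result, repeat until done, can be sketched in a few lines. The "model" below is a stub with hard-coded steps standing in for an LLM's chain of thought; the tools and their behavior are invented for illustration.

```python
# Sketch of an agentic loop: plan -> act -> observe -> repeat.
def search(query: str) -> str:
    # Stand-in tool: pretend to search the web.
    return f"results for {query!r}"

def write_file(text: str) -> str:
    # Stand-in tool: pretend to save a file.
    return f"saved {len(text)} chars"

TOOLS = {"search": search, "write_file": write_file}

def stub_model(goal: str, history: list) -> dict:
    # Stand-in for the LLM's reasoning: first gather information,
    # then save it, then declare the task done.
    if not history:
        return {"tool": "search", "arg": goal}
    if len(history) == 1:
        return {"tool": "write_file", "arg": history[0]}
    return {"done": True}

def run_agent(goal: str) -> list:
    history = []
    while True:
        action = stub_model(goal, history)
        if action.get("done"):
            return history
        # Execute the chosen tool and feed the observation back in.
        observation = TOOLS[action["tool"]](action["arg"])
        history.append(observation)

trace = run_agent("physics notes")
```

In a real system the stub would be replaced by a model call, and the returned `history` is exactly the visible chain of actions the interview mentions: you can watch the strategy unfold step by step.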

So you were part of the deal to buy the rest of Jony Ive's company, io, and bring them aboard OpenAI. And Jony Ive, of course, the famous designer who designed all these beautiful Apple products.

I was just at the Apple Store yesterday getting a new iPhone and the salesman was so excited he preloaded the new action button with ChatGPT. But it's very clear that there still is a button. You are still basically at Apple's mercy. And I'm curious if the future of OpenAI is going to be one where you will need to have a direct relationship.

with your customer, not through some other tech company, but direct. Is that necessary for success? Well, what I would say is I think there's an incredible opportunity to build an entirely new class of devices. And I think every time you get a change in the underlying computing platform, you tend to get corresponding change in the types of devices people use to engage with that platform.

I think for us what it represents is actually an interesting challenge of how do you now start to build AI that really is truly personal. ChatGPT is great, right? But it's, to your point, it's just an app on your phone. Like you have to go in and you've got to input something. There's a lot of inertia to be able to use it and get value from it. But so much of the way that we actually engage with people and do things happens in the real world, right? It happens in places that don't involve us necessarily just staring at a screen.

And so I think a lot of the inspiration for the types of things that we feel like we can build will be the types of systems that are more ambient, right? In some sense, almost feel more personal. They detach you a little bit from having to always be looking at a screen. I think there's a place for a phone and I think there's a place for apps and whatnot. But the way that we kind of look at the problem statement is very much an emphasis on personal computing and how we build this ambient computing layer. Okay.

I want to end on ROI. So I have been constantly searching for examples of productivity gains by people using AI. And I was asking ChatGPT before coming here today, and it was like, oh, here's NVIDIA's stock price. I'm like, no, no, I don't want people paying for AI. I want people using it and getting productivity gains out the back end. So what are some of the best examples, and how do people measure the ROI of these things that you're trying to sell them? Yeah.

This is actually one of the hardest things that we have to deal with, is AI being this enabling layer of technology. Everyone uses it a little differently. And in some sense, you get varying levels of acceleration. So you get 2% here, 10% here, 2x there. And it's very spiky.

So just to give you maybe some data points that might help contextualize it. Most startups now, I think, if you flew to San Francisco and took a poll, would say that the majority of code they're writing is actually written either in partnership with or completely by AIs. That is an amazing thing, right? AI is amazingly good at coding. And so the types of productivity gains you get, and these are all anecdata, so it's all self-reported, but would be probably software engineers saying something between a 50% speed up and a 2x speed up in their kind of total output. And that number is only going up.

Similar things ancillary to that. So things like data science, quantitative fields like trading where people have to process a lot of information quickly, a lot of the same benefit. Interestingly, actually, creativity also is an amazingly kind of powerful area for AI. So the way that creatives are using the systems to improve their rate of iteration, use it as an idea partner, as a thought generation partner, has been an accelerant to that field. And then there are others where I think the more kind of agentic...

That was Brad Lightcap, COO of OpenAI, speaking to WSJ reporter Keach Hagey.

And that's it for Tech News Briefing. Today's show was produced by Julie Chang. I'm your host, Victoria Craig. Additional support this week from Melanie Roy. Jessica Fenton and Michael LaValle wrote our theme music. Our development producer is Aisha Al-Muslim. Scott Salloway and Chris Sinsley are the deputy editors. And Falana Patterson is The Wall Street Journal's head of news audio. We'll be back this afternoon with TNB Tech Minute. Thanks for listening.