
Sam Altman’s Gentle Singularity, Zuck’s AI Power Play, Burning Of The Waymos

2025/6/13

Big Technology Podcast

Chapters
This chapter explores Sam Altman's essay on the 'Gentle Singularity,' discussing the pace of AI development and its potential impact on various aspects of life. It also examines the current limitations of AI and the challenges in implementation.
  • Sam Altman predicts significant AI advancements in the coming years, including cognitive agents, novel insights, and real-world robots.
  • Current AI capabilities are impressive but limited in real-world applications; productivity gains are minimal.
  • The focus is shifting from foundational model development to practical applications and integration into existing systems.

Shownotes Transcript

Sam Altman shares his vision for the singularity as OpenAI keeps shipping. Mark Zuckerberg is on the warpath to fix his company's AI effort. The WWDC fallout continues and Waymos are ablaze. What does that mean? That's coming up on a Big Technology Podcast Friday edition right after this.

AI moves fast, and the path forward isn't always clear. Cisco gives you the infrastructure, security, and insights to stay the course. Cisco. Making AI work for you. Visit cisco.com slash AI.

Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional, cool-headed, and nuanced format. So much to talk with you about this week.

If you thought we were going to spend the entire episode talking about WWDC, I'm sorry to say that's not going to happen today. Instead, we have so much going on, including a vision-setting document from Sam Altman at OpenAI. Some really interesting news coming out of Meta as Mark Zuckerberg tries to right the AI ship. OK, we'll talk a couple of minutes about WWDC because the company seems to be digging itself into a deeper hole. And then, of course, the image of the week.

Waymos lit on fire in Los Angeles amid the protests. Joining us as always on Friday is Ranjan Roy. Ranjan, great to see you. We're gonna have a lot to talk about this week.

Waymos are ablaze and listeners cannot see, but Alex is holding a TikTok-style influencer microphone, I think in a corner of a hotel room maybe, or? At a friend's apartment. So I do want to say, for those listening, watching, I brought all the proper equipment to record normal podcasts this week, but I forgot one cable. So that is podcast life. All right, let's talk about this post from Sam Altman, The Gentle Singularity.

Kind of an interesting way to put it. I'll just read the beginning. We are past the event horizon. The takeoff has started. Humanity is close to building digital superintelligence. And at least so far, it's much less weird than it seems like it should be. We have recently built systems that are smarter than people in many ways and are able to significantly amplify the outputs of the people using them.

The least likely part of the work is behind us. The scientific insights that got us to systems like GPT-4 and O3 were hard won, but will take us very far in some sense, some big sense. ChatGPT is already more powerful than any human who has ever lived. Ranjan, I got to ask you, I mean, obviously, like, you know, you can make a case for many of these claims as the CEO of OpenAI, right?

Why now? And why do you think Altman feels the need to come out with this post? Because this is, I would say, a major vision-setting document from him. So normally, when I see a blog post from the founder of a company like OpenAI called "The Gentle Singularity," that's very...

bombastic and future-looking. I think I usually will kind of discount it as more just marketing content. But actually, I don't disagree with a lot of the things he's saying. I think he actually provides a pretty realistic view in terms of 2025, we're already seeing agents that can do cognitive work, writing computer code. 2026, we'll see the arrival of systems that can figure out novel insights.

2027 may see the arrival of robots. Then he gets into imagining what 2035 could look like. I've been a longtime proponent

of the idea that innovation has slowed, that a cell phone today basically looks like it did in 2011. For our day-to-day lives, there have not been dramatic changes since the late 2000s and early 2010s, when we did see a fundamental shift in the way we interact with technology. So this, oddly enough,

was kind of exciting for me and kind of actually had me thinking about what could life in 2035 look like.

So in this post, Altman artfully writes a response to a lot of the core complaints that we see about AI. Just to paraphrase, he says: you got it to write a paragraph, now you want a novel. You got it to help a scientist in research, now you want it to come up with discoveries of its own. So first of all, OK, well, who's setting that bar exactly? It is hype posts like this. So you're almost arguing with yourself, Sam. But the other side of this, which is really interesting, is

Yes, we've seen. Look, we are happy to talk about how impressive some of this technology is.

But we haven't really seen it take the next step, right? It's amazing in the chatbots right now. And, you know, trying to apply it outside is not as easy. And in fact, there is a new paper that just came out where they looked at a company going from zero to 30% of its code written by AI and a key measure of productivity only went up by 2.4%. Now that's billions of dollars in

the real economy, but it's not exactly making a normal engineer a 10x engineer. So talk a little bit, I mean, I understand this is the trajectory that OpenAI wants to go on. And if you believe AI is gonna get to the place that a lot of folks are saying, then this is what we expect. However, how do you contrast that with the clear limitations we're seeing with the technology today?

Well, no, that's where I think we're at that inflection point. I think it's going back to the model versus the product and the app layer. We have seen foundational advancement over the last few years accelerate to a dramatic degree. But now we're going to start seeing this applied. Everyone is working on it and actually getting this

genuinely applicable at scale. Because, as you said, now, for a single engineer or across an engineering department, you can automate a lot more of the code-writing process. But what does that actually do to overall productivity? It's still minimal. So actually bringing AI into larger scaled systems, both in our personal lives and our professional lives across enterprises, I think we're going to start to see that more. There's more of a focus on that.

So I think, again, in the next two to three years we see a much, much bigger jump in the way work changes and our lives change versus the last few years, when everyone was still living in kind of the toy phase of things.

But even if it's an application issue, I would say that some developers will say that this code won't necessarily be written the way your company codes. It will bring in legacy code that you've phased out. We'll have junior developers who will code things, not understand how they work, and then ship them and break the app. And those issues are in the most powerful application of this technology right now, which is coding.

And clearly that goes for having it write things and work across systems. So talk a little bit about where you see the gap between what this technology is capable of and why we're seeing these issues in implementation. I mean, part of this has to be organizational, or even at an individual level, just trying to figure out the right use cases. And it sounds like you believe that there's

a long way to go in terms of what we can do even with the current systems. No, no, I don't think there's a long way to go. I think we're finally working on the right pieces of it. The foundation model race has gotten boring. I mean, when's the last time any of us got truly excited by some new foundation model update? Now the things that are exciting, what you hear about, are the actual

outputs and actual applications of AI. So I think we start to see the change a lot more. I think, again, to me, and I've been asking this for a while, if we could all just take a breath and move away from the almost rat race of foundation model advancements and actually be like, okay, now how do we

take the technology that exists as of June 13th, 2025, and actually implement it into our lives? And we're going to get into the Craig Federighi and Joanna Stern from the Wall Street Journal interview in just a little bit. But I actually thought a lot of what came out there was that

people expected, even companies like Apple, that you could just plug in an LLM and it would solve everything. That's not how it works. Everyone's been learning the hard way that it takes a lot more organizational and systems nuance to make things work. But the reality has finally set in, with Apple understanding that better than anyone else. And now the real work can begin.

So now that Ranjan has made a principled stand against AI hype and building up the technology beyond its capabilities, I am now going to continue reading Sam's post and give you all a dose of AI hype and building the technology past its current capabilities just for the exercise of getting Ranjan to respond to some of these claims.

So you had already mentioned that 2025 will see agents that are able to do real cognitive work. Altman says 2026 will likely see the arrival of systems that can figure out novel insights. 2027, we may see the arrival of robots that can do tasks in the real world.

He says the 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human level intelligence we can go, but we are about to find out. So what do you think about these predictions? Are you on board with them? Having said the beginning of Sam's post is directionally on point.

2035, given the last two years of technological advancement, it is kind of crazy to think about what life could look like by then. And it's kind of exciting, I think. Like, I genuinely—and also terrifying in certain ways—but, like, it should be different.

Given what we have to work with right now, even with generative AI and large language models, I am a true believer. I don't agree with the Gary Marcuses of the world in terms of saying the technology is not good. I think it has not been used to its potential or in the right way to date, outside of chatbots.

But I think I'm still sticking with it. 2035, thinking about how different life could be versus...

10 years from now, 2035, versus 2015 to 2025: how much has life really changed, driven by technology, in the last 10 years? I'm looking around my apartment right now, and it doesn't look that fundamentally different. The way I go to work and where I sit at work and all that stuff,

I guess virtual conferencing and stuff is a big, big change, but other than that, it all kind of looks the same. People dress the same. 2035, we're all wearing moon suits and have a robotic best friend.

More than moon suits, here's what he says: the rate of new wonders being achieved will be immense. It's hard to even imagine today what we will have discovered by 2035. He then gives a bunch of examples, but concludes the paragraph by saying many people will choose to live their lives in much the same way, but at least some people will probably decide to plug in. I think that means connecting their brains with the AI.

Ranjan, you talked about wanting to live differently. Are you plugging in? Oh man, Sam, you had me, you had me until there. I don't know. I have an Oura Ring on my finger; I ended up getting one. I have an Apple Watch. The surface of me is now connected in many ways. I have AirPods in right now. I wear Meta Ray-Bans when I'm walking around. So,

It's not injected into me yet, but definitely, I don't know. What do you think your outfits will look like in 2035? Will they be covered in technology? Will you have a brain-computer interface? Will you have a Jony Ive medallion on a big Mark Zuckerbergian chain around your neck? What's it going to be?

I'm going full WALL-E. Get me in a go-kart, give me a big soda, and put me on autopilot. Full WALL-E. No, I mean, I think it'll probably look a lot like it looks today. I do anticipate that we'll have humanoid robots around, but the question is, how good can the industry get them, and how safe can the industry get them? I think humanoid robot safety is something that's not talked about enough, but if one of those things goes rogue, you could have a Terminator problem. And...

You don't want a Terminator problem. Never a good thing. Don't want that. That's one of the things you want to try to avoid. But look, if you do your best and it happens, no one can really blame you, right? Yeah. I mean, you tried. You did fine. All right. It's the fault of Congress.

This is an idea that Sam had in the piece that I thought was interesting. He goes: if we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain, digging and refining minerals, driving trucks, running factories to build more robots, which can build more chip fabrication facilities, data centers, etc., then the rate of progress will obviously be quite different. So he's describing a humanoid

robot explosion, similar to the intelligence explosion that some expect with AI. I thought that was an interesting idea. I am running counter to the greatest tech minds of our time here, but I don't get the whole humanoid robot thing. I think we've debated this in the past as well. To me, it's

Applying the human form factor to robotics rather than actually having specialized robots that actually solve specific problems and are built... Because again, right now you go to any automated warehouse, it's not humanoid robots moving around. It's robots that have been specifically designed to handle repetitive tasks of picking up boxes and moving them and placing them and pulling out items. I'm still...

team specialized robotic form factor versus team humanoid robotic form factor.

I highly disagree. I am on team humanoid. You're a humanoid guy. Maybe humanoid with like six or seven arms. Yeah, why not seven arms then? I would go seven arms. Yeah, go seven. No, why not make it 12? Do a full, what's it? The goddess with all the different- Durga, yeah. Durga. It was obviously a very good design decision to give those arms to Durga. This idea that we have these functional robots makes a lot of sense because those robots don't have a world model.

they don't understand the world as we do because they don't see it as we do. They don't understand physics, really. I mean, they might be able to grasp things and have that hard-coded in them, but it's similar to going from hard-coded AI to a large language model, which understands, right? Which can be conversant on a bunch of different topics.

When you build AI with a world model that understands physics, objects, how things work together, then you want to go humanoid robot or maybe souped up robot that takes a humanoid form because all of a sudden you can be functional. Like the idea that you can have humanoid robots, which is one function, do all these things that Sam is discussing, which is, again, digging, refining materials, driving trucks, etc.

for which we already have steering wheels, and they have hands, right? Running factories and building more robots and building chip facilities. That is an exceptional form. I don't think you want to go too specialized for each, because ultimately, you know, this is a very complex world that requires complex maneuvering around to be really useful. In a weird way, I guess that's the most human-centric thing

or human-forward view of it. Because I want to just kind of rebuild and remap everything to actually be more efficient for the specialized robots. But I think maybe you're right. The Durga model: souped up, eight to ten arms. Maybe some wheels on the feet, right? Yeah. So is anyone working on that? Boston Dynamics? I'm sure somebody probably is. I mean, we're talking about eons of evolution, like,

Something happened in a good way to get us to where we are right now. It really does work. So let's conclude this by bringing it back down to earth with the final passage from Sam's article, which I think is really good. He says: for a long time, technical people in the startup industry have made fun of the idea guys, people who had an idea and were looking for a team to build it. It now looks to me like they are about to have their day in the sun. This is, I think, pretty interesting. It's kind of an homage to vibe coding.

But there has always been this idea of like, you know, so many people are like, I got an idea for a startup and they just never build it because they don't have the technical talent or let's say the charisma to get a bunch of people around them to build it.

Now these idea guys, without the technical people, can just go out and build it. With vibe coding, or with AI coding, maybe it does become the age of the idea guy. What do you think? Yeah, Sam ends with me in agreement here. I 100% agree with this. I mean, I was having a conversation with an early-stage startup founder recently who had not built a prototype and still just had a pitch deck.

And I was like, to me, there's no excuse for that right now. Anyone can build at least basic things right now. You do not have to have a full technical team to build a functional product. And that means that anyone with an idea should be able to actually realize that idea in some form. And that's... Or at least prototype. Yeah.

at least prototype, but even get to some level of functionality. And I think that's actually exciting. That's like the best, most exciting part of generative AI for me. So I think idea guys, it's your time. All right. So final thing, let's talk about super intelligence. This is the new word. Sam says, OpenAI is a lot of things now, but before anything else, we are a super intelligence research company.

We have a lot of work in front of us, but most of the path in front of us is now lit and the dark areas are receding fast. We feel extraordinarily grateful to get to do what we do. Okay, two questions for you. One, why is everybody talking about superintelligence now? We're going to get to it in a moment with Meta.

I thought AGI was the buzzword. Is that now something that is too low of an ambition? I guess when you raise $40 billion, that is what it is. And second, you don't take any issue with this. It does seem to be...

Again, you're someone that doesn't like hype. This is hype. I mean, gotta call it out for what it is. Sorry. I mean, again, this has been quite the emotional roller coaster for me going through this, because I've been supportive. And then we end, again, with, to me, how is it not a bigger story,

the AGI-to-ASI, Artificial General Intelligence to Superintelligence, rebrand? It's crazy. It's weird. It's ridiculous. It just happened. Everyone has comfortably moved on from AGI and started using superintelligence. I think that's the name of Ilya's company.

Yes, Safe Superintelligence. Safe Superintelligence. Thinking from a pure branding perspective, that was the first inkling. Clearly the messaging worked. Everyone started saying it. It absolved people from having to achieve AGI, or from the problem of everyone saying AGI is already here yet life not feeling significantly different. So I'm going to give superintelligence the...

Like, I mean, from a branding perspective, the fact that they've shifted to this conversation and now we're all just accepting it and moving on is crazy to me. But it's happened across the industry, I feel. So kudos to Ilya from a branding perspective, and to the comms folks, whoever came up with superintelligence first as a term.

You've done good. Or you've made things harder for everybody else. You've bought a couple more years of runway. Well, Ilya obviously has raised billions without releasing a product. By the way, on the subject of Ilya, next week on the show, Dwarkesh Patel is going to come on. And he has some very interesting thoughts about what Ilya is up to and the type of AI that he may or may not be building.

and how that might help advance the state of the art. So stay tuned for that. That will come next Wednesday, June 18th, with Dwarkesh. So stay tuned for that really fun conversation. Okay. As this happens, though, we are seeing model improvement. And Ranjan, you asked: when was the last time we were excited for a model release?

And it's funny, because I've sort of been the one pouring cold water over this Sam Altman statement while you've been enthusiastic about it through our conversation today. But I will say I definitely was excited for the O3 model. That model, to me, is the first model that really works and is useful in various ways in my daily life. And now...

OpenAI is releasing O3 Pro, which is a better version of the model. It's going to be available initially to those paying $200 a month to OpenAI, which unfortunately no longer includes me. But there's a Substack called Latent Space that talks a little bit about why this model is an improvement and why I think it's going to help lead to better products. Just to throw that out there one more time. First of all, the post

says current models are like a really high-IQ 12-year-old going to college. They might be smart, but they're not a useful employee if they can't integrate. So, talking about O3, the authors say this integration primarily comes down to tool calls: how well the model collaborates with humans, external data, and other AIs. It's a great thinker, but it's got to grow into being a great doer.

O3 Pro makes real jumps here. It's noticeably better at discerning what its environment is, accurately communicating what tools it has access to, when to ask questions about the outside world rather than pretending it has the information access, and choosing the right tool for the job. When you think about improvement in models and what that leads to, I mean, we're going to see, right? This is just the very, very obvious.

early reflections on what this can do. I think a model that does understand its environment, like I talked about, super important, can ask questions to people and then understands which tools to use when it has to do a task.

To me, I would say that's pretty important, and I'm excited to at some point get my hands on this. I will fully agree: the next great battle in AI is tool calling. That's where we're going to see the maximum amount of actually bringing these models into agentic AI. That's all that matters: the ability for an agent to understand its context

and then take the next correct action. And to do that, you have to know what tools you have access to and which tool is correct to interact with next. So I think this is huge. Actually, like this is where

And I'll give you, it's on the model level, so fine. Models matter, fine. But I think this is very astute, that tool calling is going to be the key to agentic AI, which is going to be the key in integrating into the existing world, systems, companies, processes, organizations, everything.

What is tool calling? Just explain what that is. It's the ability of the model to actually call out to another tool, either via API or script or whatever resource it uses to access another tool. Its ability to... So currently, you might be doing that manually by actually coding out API calls. There is a world where...

A large language model should be able to generate that on the fly: understand what tool it should call out to, then actually generate that connection in real time, make that call, transfer whatever data needs to be transferred, and take whatever action needs to be taken. So right now, if you use deep research, you kind of start to see it in action. What is it doing? It's calling out to a bunch of websites.

That's via the internet, the World Wide Web. It's calling those websites. Maybe it's downloading documents and then it's going to parse them. Like each one of those is an action that often requires a specific tool. But then you imagine that in large systems that exist already. Right.

And that ability means you don't have to manually map out every single block of an agentic workflow. That is a huge area of opportunity right now, and I really think that's the next great AI battle. And it's at the model layer, so I'll give you that. Okay, that's super interesting. We definitely should do more on that. So folks, expect more conversation about tool calling on the show.
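For readers who want to see the loop being described here, a minimal sketch of tool calling in Python follows. This is illustrative only, not any vendor's actual API: the model is stubbed with a hard-coded reply, and the tool names (`fetch_page`, `parse_doc`), the registry, and the dispatcher are hypothetical.

```python
import json

# Illustrative tool-calling loop. The "model" is a stub; in a real system
# an LLM would emit the structured tool call below.

# Hypothetical tool registry: the set of tools the model can choose from.
TOOLS = {
    "fetch_page": lambda url: f"<html>contents of {url}</html>",  # stand-in for an HTTP fetch
    "parse_doc": lambda text: text.strip().lower(),               # stand-in for a parser
}

def stub_model(prompt: str) -> str:
    """Stands in for the LLM: emits a JSON tool call instead of prose."""
    return json.dumps({"tool": "fetch_page", "args": {"url": "https://example.com"}})

def run_agent_step(prompt: str) -> str:
    """One agent step: ask the model, dispatch the tool it chose, return the result."""
    call = json.loads(stub_model(prompt))  # parse the model's structured request
    tool = TOOLS[call["tool"]]             # the model picked the tool...
    return tool(**call["args"])            # ...the runtime executes it and returns the result

result = run_agent_step("Summarize example.com")
print(result)
```

In a real agent, the tool's result would be appended to the conversation and the loop would repeat until the model stops requesting tools, which is the "understand its context, then take the next correct action" cycle described above.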

We have so much more to talk about. We've got to talk about this meta thing, very quick reaction in WWDC, and the fact that Waymos are on fire. We'll have a very fast-moving second half right after this.

Intuit Enterprise Suite integrates powerful multi-entity financial management tools with payments processing, payroll, HR, marketing, and more. With AI-powered automation, forecasting, and industry customization, businesses can make faster decisions and boost productivity. Visit intuit.com forward slash enterprise to learn more. Money Movement Services by Intuit Payments, Inc. Licensed by NYDFS.

The future with AI is moving fast, but progress takes more than speed. It takes direction. Cisco gives you the infrastructure, security, and insights to implement AI with purpose, from data centers to workplaces. We don't just help you keep up. We help you lead. Cisco, making AI work for you. Visit cisco.com slash AI. That's cisco.com slash AI.

And we're back here on Big Technology Podcast, Friday edition, talking about all the week's AI news, a lot of more theoretical stuff. Let's get more practical in business here in the second half. Meta is making a very big investment in Scale AI. I call it like an acqui-hire-sition. It's weird. They're not buying the full company, but this is from the... I actually think I said that on air on CNBC. No, no. Hold on.

Can you coin that? Trademark it? The acqui-hire-sition. Acqui-hire-sition. That is what this is, and that describes it better than anything else. And it's amazing. The word came out of my mouth on air and I was like, what did I just say? I'm going to roll with it. I'm going to stick with it. Go. Acqui-hire-sition.

So this is from The Information: Meta to pay nearly $15 billion for a Scale AI stake and bring in the startup's 28-year-old CEO. I love how companies are investing in other companies and you get the CEO and the top talent because of it.

It's something that's happened multiple times, including with Inflection, with Mustafa Suleyman going over to Microsoft. And Meta, which has had some regulatory issues, is taking note. So this is from the story: Meta has agreed to take a 49% stake in the data labeling firm Scale AI for $14.8 billion. Meta will send the cash to Scale's existing shareholders and place the startup's CEO,

Alexandr Wang, a former Big Technology Podcast guest, in a top position inside Meta. Meta would put him in charge of a new superintelligence lab. There is that word. Hit the bell. Along with other top Scale technical employees.

That will put him in competition with some of his customers and friends, including OpenAI CEO Sam Altman. Another interesting point from the story: Meta CEO Mark Zuckerberg has been actively recruiting top AI researchers in an effort to boost his company's AI efforts. He was frustrated with the reaction to its latest AI offering, Llama 4, and aims to catch up to competitors such as Google and OpenAI.

Ranjan, your reaction, good or bad move from Zuckerberg? This one is tough for me. I really go back and forth in terms of good or bad. They are taking some action. They've been falling behind and clearly they want to catch up. So that's good that they're willing to take some bold action, but...

Again, acqui-hire-sition: $15 billion for a 49% stake just to hire the guy. I was even confused whether it was truly an acqui-hire-sition. But actually, they announced that the chief strategy officer of Scale AI, Jason Droege, will now be CEO. And Alexandr Wang is full-on Meta. He's not a little Scale, a little Meta, some kind of...

weird Elon Muskian dual role. He's all in. But is he really worth that much? Has he just been giving Mark Zuckerberg consultation and good advice over the last few months? Is he worth that much?

Let me do my best to make the case for this deal, because I think it is worth it. And I don't think it's going to be the last one, because if you read between the lines, it's not just Alexandr Wang. This is from Bloomberg: Zuckerberg has spun up a private WhatsApp group chat with senior leaders to discuss potential candidates. The chat, which they've called "recruiting party," is active at virtually all hours

of the day. And Zuckerberg has also been hosting folks at his homes in Palo Alto, California and Lake Tahoe and personally reaching out to potential recruits. Okay, let me set the stage here.

So if the things that we talked about in the first half, if Sam Altman's predictions come true, or hold true, that this is a rapidly advancing technology that's going to determine the future of technology, you really can't afford to be mediocre for a couple of years and hope to catch up. And I think that's been the alarm bell around Apple, but certainly it's an alarm bell with Meta, because after they took the lead in open source, they were surpassed by DeepSeek, and then Llama 4 was not up to expectations.

So I think Zuckerberg sees this, and what he's doing is looking out at the landscape and saying there are basically three vectors that you can compete on with AI. The first is GPUs, just scaling up GPUs. Meta has that, right? They had a ton before this moment. They used them to build very impressive Llama models off the bat, and they've got the GPUs. The other two things that you need are data and talent.

And Meta has a lot of data, but Scale has proprietary data that is basically being used to help companies scale up their models beyond just using GPUs. And then the talent thing is super important. You'll remember that

Sergey Brin came on the show a couple weeks ago and said that he believes that algorithmic scaling, not necessarily compute scaling, will lead to the most improvements. And the way you get algorithmic scaling is building new algorithms. And the way you build new algorithms is with talent.

So to me, this is Zuckerberg clearly seeing an issue with his company and making, I would say, the exact right strategic move to fix it. Unlike another company that we've seen with a flagging AI product, which, it seems, is still in denial

about what is wrong. So that to me is the case for Zuckerberg not only going after Scale and Alexandr Wang, but starting this recruiting party and going hardcore on recruiting top talent. There are reports that he's offered eight- and nine-figure cash amounts to top engineers to come over to Meta. That's in the tens of millions,

And maybe even up to $100 million to a person, not a company, a person, to come over. So I think he realizes the stakes and he's making it happen. And he's shown an ability to do this in the past. That is, I think, the bull case. What do you think about that, Ranjan? All right. I mean, again, any of these numbers, when you look at them relative to market cap or even cash on hand, are not like existential for Meta. So it's...

In that sense, I think it's not unreasonable. I think it's also fair, too, from a shareholder perspective, that the further behind they fall, there's more risk to Meta stock in terms of people questioning strategy, versus spending on a very aggressive move like this. It's still, I guess...

Scale AI kind of helped OpenAI build their model, along with other major companies, so they have a clear understanding of what kind of data works. So he has been at the center of all of this. So maybe just that kind of proprietary knowledge also has a significant amount of value. It's still, this whole acqui-hire-acquisition model is just...

a sign of the times, I guess, more than anything else that I've seen in a while. But I buy what you're saying a little bit. And I mean, it's clear that they do need new talent because as you pointed out to me in our text messages off air, the product isn't exactly working even beyond the models.

This was my favorite story of the week. So this is the most Meta, and let's just call it a Facebook thing, because this is like old-school Facebook. So the new Meta AI app, which many people may have downloaded, is a separate app that's essentially kind of like the chat interface, chatbot experience that you would expect from a ChatGPT or Perplexity.

But one of the small nuances is they'd also positioned it as somewhat of an AI social network.

Now, it was a bit unclear what that meant, but people started noticing, and I actually had not even noticed, the Discover experience in Meta AI. And I don't use it. I use it for generating images that are fun with my son. Like, he wants to do half animals, half dolphin, half squid, or something like that. And I'm like, all right, I'm going to use Meta's image tool on this one. If you go to the Discover tab, it posts images

of chats from people that probably don't realize that it's being posted. And it's like, as a social network, even...

crazier. It posts people's voices prompting Meta AI. So it has like an audio clip, audio message, voice recording. And there's all types of crazy situations. A lot of people, very personal, asking about like a legal brief for a custody battle, asking about like relationships and depression. My favorite one, I found on Reddit, a screenshot of

someone saying you're supposed to be my wingman, where my big booty future wife at. So all types of requests. But people almost certainly unknowingly posting their AI chats to a public social network feed. Thank you, Meta slash Facebook, for bringing some of that old unanticipated sharing activity back to social networking.

Yeah, what's old is new again. And, you know, there's some funny parts of this, like you mentioned the guy who asked the AI to be his wingman, trying to find his big booty future wife. And towards the end of the screenshot that someone shared, he says, big booty and a nice rack. And the AI is like, you got specific tastes. I like it. What kind of conversations are these?

But also, it's quite sad. And it sort of goes into this conversation of people needing AIs for companionship, given that our society has done such a poor job in building and sustaining and fostering community, that people feel like they need AIs to be their friends. And, you know, you just listen to these conversations between people and these AIs, where the AI has become their companion in many cases. And it's just like,

Oh, it's just such a glaring magnifying glass, or I don't even know if that's a phrase that makes sense. I'm joking. I'm making light of it, but actually, it's terrifying and it's sad. And, I mean, a lot of the queries that have been posted around are from people who are just really looking for help and answers and companionship, but they're...

Do you want to hear my conspiracy theory on this? I always love a good conspiracy theory. Okay. We have a podcast after all. What would we be without conspiracies? So I was thinking about like, I mean, on one hand, to do something this clumsy, I actually can see. Like, it's just...

One product manager makes this decision. And I saw that there are some people who, actually, it looked like they were purposefully posting to kind of show expertise around a subject. Or even, like, if you go through, a lot of people do, like, prayer affirmations and stuff, but then their handle is like a church or something religious. So then you're like, okay, I can actually see this person knew what they were doing. And this idea that you push your prompt into a feed and then it's getting liked and shared

makes some sense. But then I was like, what's Meta's biggest threat? It's the ChatGPTs of the world owning, like, the true human relationship and data and questions and queries that really get into the soul of a person. Suddenly, I think this is going to continue to become a much bigger story. And, like, suddenly,

this idea that people are going to share everything with a chatbot is a little scarier the more people start thinking, "Oh wait, you know what happened with Meta. I'm going to stop asking ChatGPT and Claude these really personal questions." And suddenly, Meta is actually in a better position relative to OpenAI on kind of that personal connection to chatbot.

What do you think? That's a great conspiracy. That is a great conspiracy. I won't rule it out entirely. All right. Let me ask you one more question about this Scale thing before we move on from Meta. This came up in our Discord. There's been plenty of reporting on why Meta wanted to buy Scale AI, but why did Scale AI want to sell? Are the main LLM providers getting good enough at obtaining training data themselves?

Do the DeepSeeks signal a top for services like this? What do you think? - Yeah, I definitely think so. I also think synthetic data in training foundation models is going to become more and more of just a standard practice. Like, we've exhausted the race for real-world data.

Foundation models have also gotten very, very good. And regular listeners will know that I'm definitely of the school that we don't need bigger and bigger and bigger models. So I think, in that sense, the game Scale AI played, the service they provided, was brilliantly timed. They became like a critical part of overall LLM infrastructure. But what they did, their job, like,

actually having people manually tag, like large networks of people manually tagging data to make it more ingestible for a large language model or for training, it's not going to be as relevant anymore. You can even now have large language models do the tagging themselves. So the service they provided was not going to last. So good on Alexandr Wang and his timing in terms of making this move.

Okay, so we talked again about Meta seeing an issue and addressing it. Now let's just go quickly to Apple, because I thought we were done with WWDC coverage. But then there's been a bunch of executive interviews that have come out, mainly with Craig Federighi and Joz, who's their head of marketing. And it just seems like this company is deluded. They've said that they are not looking to build a chatbot, but also that

Siri is, you know, their mission is to make Siri the best assistant. They said, you know, that Apple Intelligence is basically out there already, but that they're not giving a shipping date because they don't want to overpromise. MG Siegler said this in Spyglass.

He said, Apple clearly wants to frame this as people perhaps being upset because they simply don't understand the intentions here. He says they don't want a chatbot. They want to do more than that, baking AI into every product. I think that's actually a fine strategy, but only if your AI works really well. And well, the state of Siri, the actual shipped stuff over the past 15 or so years, suggests that it doesn't.

They have to get their AI house in order. To hear Apple tell it, there's nothing wrong, just a minor delay. And internally at Apple, it's better than it's ever been. You're crazy to think otherwise. Like, that's the message that Apple is giving. So talk a little bit about what you saw from the post-WWDC interviews. To me, they were even worse than the underwhelming event itself. And where does Apple go from here?

Okay, yeah. Separate from the event, the post-event, the Joanna Stern from the Wall Street Journal interview, I mean, with Craig Federighi and, who's the other one? The head of marketing, Joz. It was one of the most fascinating pieces of Apple media I think I've seen in a long time, because she did an incredible job, just kind of, like, in a very calm way, but just repeating the right questions. Craig Federighi, you could kind of see, like...

getting a bit frustrated but still having that perfect smile, and just kind of like, no, no. There was a moment where he was, uh, like, he let the smile down. He looked like he was gonna lose it. Right? Yeah, you see him remember to smile, and then all of a sudden, bam.

Cheeks go up. Okay, okay, okay. So you caught that as well. Watch, it's a seven-minute clip. Listeners, maybe you'll catch it too, and let us know. Like, there's one moment where I'm like, oh shit, he's about to lose it right now. And then total recovery and smile. But overall, it was, I don't understand the...

The way they approached it, the whole kind of narrative they're trying to push is, "We'll release it when it's ready. It's not ready yet. It's a very complex problem. Everyone else is just doing chatbots and we want to do more than a chatbot." No, everyone is not just doing chatbots. There are incredible AI experiences and solutions and products that span far outside of a chatbot. And they kept repeating that. Again, like,

Querying your own data

is doable. You can upload a bunch of documents to an AI service and actually query them. Yes, it's a complex problem to do it across all of your data, across all of your apps on your iPhone at the operating system level. I know it's complicated, which leads me to the one thing she did not press them on. Why did you do that marketing push? Apple in the past, the beauty of the company was

here is this incredible story around a product, and here's the product, and it just works. And remember those commercials, the girl from The Last of Us looking up someone she didn't want to talk to and finding their information quickly? I don't know, they were terrible. They launched the largest Apple-style marketing campaign. Why did you do that if you weren't ready?

That's the one question I felt was not pushed on. For sure. But I think the entire conversation was just exposing for Apple, again, doing that. And the attitude from Apple being like, I don't understand why anybody's upset, we're doing exactly what we said we would. I mean, to me, there was, like, a lack of self-awareness and humility there. Yeah, I think...

Like they could just say, or they could have, yeah, I don't know. Do you think it would be better to say, you know what?

We have been behind, we've screwed up, and we are going to deliver. That's what we're doing. It's like a hair-on-fire situation in the company, and we get it, and we're going to deliver. Or do you think it would be better if they took a, you know what, we are the best for privacy. We only deliver product, which they kind of alluded to. We only deliver products when they're at 100% privacy.

So anyone, not just tech-forward people, could use them. They kind of alluded to that, but they didn't even really. But what do you think is a better direction? It's hard for me to say. I think this "everything is fine" is probably the worst direction, but ultimately, any other direction, it doesn't matter until you ship.

I mean, basically, they could have just come in and said, listen, this is something we wanted to do. We understand it doesn't matter until we ship it, so we are working hard to ship it. That's all. Do you think they should have canceled WWDC, given there was no real announcement? No.

That would have been worse, I think, because that shows you just, like, don't even care to show up. No, no. I mean, you say, you say, you know what, all people are working around the clock, we get it, we are going to deliver the world's greatest AI assistant that anyone can use. So there's no reason to have a whole event to talk about operating system names and changing backgrounds on chats and stuff like that.

It could have been like update notes in an iOS app update or system update. Like, I don't know. Don't forget about the phone app. You got to get everybody together to talk about the phone app and the messages app. Wait, can you explain to me Liquid Glass? Why is it exciting? No. Okay. I want someone to...

I really, I really, I saw something where it's like, they're getting back to what they're great at, design, and Liquid Glass. And I still didn't get it, but I want to, I want to at least try. You will try. That's the thing, you can be forced to at some point. All right.

All right. Let's just very quickly hit this story. Look, we're not going to spend a lot of time talking about it, but it's important for us to just stay on top of it. It's an important one, which is how generative AI is changing the web. This is the story: news sites are getting crushed by Google's new AI tools.

The AI Armageddon is here for online news publishers. Chatbots are replacing Google searches, eliminating the need to click on blue links and tanking referrals to news sites. As a result, traffic that publishers relied on for years is plummeting. Here's some stats. Traffic from organic search to Huffington Post desktop and mobile websites just fell by over half in the past three years.

Nearly by that much at the Washington Post. Business Insider cut 21% of its staff last month, as its CEO, Barbara Peng, said the cuts were aimed at helping the publication endure extreme traffic drops outside of our control. Organic search traffic to websites declined by 55% between April 2022 and April 2025, according to data from the company SimilarWeb. They do analytics online.

Fifty-five percent. That's crazy. And Google is going to be sending even fewer visitors with this new AI Mode. Not to mention, Google is now offering employee buyouts in the search organization and other organizations, while not offering them in places like DeepMind, which does

AI. This is, I mean, we've done some reporting on this here with my story about World History Encyclopedia, but it's very clear now that that was the rule and not the exception, and the web is in some even deeper trouble. Remember we said it was kind of on life support? No, no, no. This is like hospice now. Instead of the web is dead, then we tempered that with the web is in secular decline, and

Now, web is in hospice is definitely another direction to take it. But yeah, no, I mean, this is what we've been talking about forever. And it's definitely...

going to dramatically affect anyone who optimized for a pre-LLM world, who didn't just publish and have their website. Like, Business Insider is the greatest case of a company that, for longtime media folks, the invention of the slideshow on a website, to get an additional display ad click for every slide you cycled through, was one of the most, like,

but actually brilliant innovations in monetizing web publishing. Like, Business Insider forever, that's how they operated. And that is not working anymore. And maybe there's going to be, like, ChatGPT-first publishers. But trying to game Google to get traffic, to show display ads, to make money, that is beyond hospice. That is dead. Done for, right?

Yeah, yeah. Like, overall, people having websites and interacting with them in different ways, I think the web has some room to breathe, and it's not over yet. But monetizing on display ads based on page views, that is long, long gone, especially if you built a powerhouse optimization engine circa mid-2010s on that. That's long gone. Now talk about this Midjourney story.

Yeah, so...

We saw Disney and Universal Studios sue Midjourney. We've talked about the New York Times suing OpenAI. One of my predictions has been, like, we're going to start to get some guidance or resolution, I think by the end of this year, in terms of how copyright will play out. And we need it. Like, I feel it's one of the things holding the overall industry back, not having a clear direction of what's indemnified and what isn't. But my favorite part of this, though, was like,

I mean, the New York Times versus OpenAI, for people who had looked into that, they were able to recreate by prompt essentially the entire text of articles. But that's still not as visually jarring as literally asking, show Iron Man flying, action photo, and there's a photo of Iron Man. There's ones of the Simpsons, the Minions. I mean, if Midjourney

clearly trained on copyrighted info and returns that info, that's a problem. And there has to be some kind of resolution to all of this before people will start actually at a professional level using these technologies in a proper way.

It's sort of a perfect lead-in to our final story of the week, which is why Waymo self-driving cars became a target of protesters in Los Angeles. Time has a couple of theories here. They cited the Wall Street Journal, which said part of the reason the cars were vandalized was to obstruct traffic. They said some social media users suggested self-driving vehicles in particular have become a new target

because they are seen by protesters as part of the police surveillance state, because they have cameras and 360-degree views of their surroundings, and their footage has been tapped

by law enforcement. Other people are just talking about the fact that you shouldn't feel bad for them. This is from one organizer. There are people on here saying it's violent and domestic terrorism to set a Waymo car on fire. A robot car. Are you going to demand justice for robot dogs next, but not the human beings being repeatedly shot with rubber bullets in the street? What kind of politics is this?

Honestly, it seems to me that it's just kind of talking around the issue. I think people are just afraid, or they're uncomfortable broadly

with AI. Which, like, despite all the progress we talk about on the show, broadly the public is not comfortable with artificial intelligence, especially as they see it do things like run over some of the previously protected rights, like copyright. And, you know, all these companies are clearly trying to automate work in their own way, and the public is just starting to really feel uneasy about it, or has for a long time, and it's manifesting itself in the physical form of burning these Waymos. What do you think?

I'm going to not...

attribute that level of importance to it. In terms of, I don't know, you want to burn something. If you burn a Waymo, it'll get a little more traction on social media. It's also a little more visually jarring than other cars, if you were to burn them. So I think it's just, I don't know. I think connecting it to a deep-rooted, like, distrust of AI, it's, I don't know.

I think it's just people wanted to burn something and you get a little more engagement by burning a Waymo than a Corolla. First of all, I just want to say I don't condone the burning of Waymos. I do not condone it. No condoning of the burning of cars. But why do you think they get more engagement on social media? It's because of this unease. It's because there's this feeling that it's Skynet. All right. Okay, you're right. You're right. You're right. The reason behind... It's more...

of a story, or, like, emotionally, more of an emotionally resonant thing that will put you on one side or another, to burn a Waymo more than a Corolla. Again, we do not condone the burning of cars here on Big Technology Podcast. We're good on our newsletter, but I don't know if that's the case.

How are we going to have humanoid robots, Jony Ive's pin? I mean, if people are going to burn Waymos because they're afraid of cameras, I don't know. But I guess a humanoid robot would actually just fight back and not let you burn it. Maybe not. I mean, they're not going to be programmed to fight back. Like, all this alignment work is going to be done for them not to fight back, even if they're getting burnt...

And Tesla Optimus is not going to fight back and just let itself be burned? Maybe Elon's won't, but the others, Google's, will definitely be like, fine, whatever you need to do. But I think you're really hitting on the point here, which is so great. Like, we talked about this in the beginning. Let's just close with it. Like, we're going to hear a lot of rhetoric about AI in the physical world, humanoid robots, all of those things of that nature. But...

There's an assumption that people are just going to allow this to happen, especially as even if Dario is wrong and it doesn't cause 50% of entry-level jobs to go away, it's going to change people's lives. And this is something that's happening effectively top-down versus bottom-up in most cases.

There's just going to be discomfort there and people are going to keep attacking these things. I'll just say this last thing. When I was at BuzzFeed, I did a series where I would fight with robots. I tried to steal. Yes, I tried to steal. I did effectively steal. It's so funny. I stole lunch out of a DoorDash robot. I just ripped it open and took the lunch out of it with DoorDash PR there.

I fought a tackling robot at a football field. This was a series. And I think underneath it all was just this thing where I was like, I am not going to be the first. I have an urge inside me to beat the crap out of these things, and so will a good chunk of society. And I think we're starting to see the beginnings of that. Well, it's also good that you are preparing yourself for all modes of robot combat.

And that, that could, uh, be required by 2035, according to the gentle singularity. So, so maybe, maybe I need to start scrapping with robots just to, uh,

Just to prepare myself a little bit. I'm not going to burn them. No burning cars. No burning. Fight a robot. Sparring. A little sparring. Yeah. And you'd be surprised because they can fight back in some situations. The lunch delivery robot, I beat that one easily. Just ripped the top off, ran. By the way,

That video, they put it on Jimmy Kimmel for two weeks in a row. Really? Yeah, he took our video of the robot crossing the street and then put special effects and had a bus run into it and the thing blew up. God bless mid-2010s media.

It was a good time. But the football robot definitely got the best of me. So, very humbling. Watch out for that one, listeners. Exactly. All right. So we'll end it there. We look forward to a future where humanoid robots are among us, a gentle singularity. Unless you ask the people, and then you might get a different answer. Ranjan, so great to see you again. Thanks for coming on the show. See you next week.

All right, everybody. Thank you for listening. Again, I'll be back on Wednesday with Dwarkesh Patel, and we will see you then on Big Technology Podcast.