Open AI and Johnny Ive team up on a multi-billion dollar bet to build what exactly? And are we about to move beyond the screen? Plus, Anthropic has a big new model that will blackmail you. And Google has a ton of AI news that we'll try to make sense of. That's coming up on a Big Technology Podcast Friday edition right after this.
Will AI improve our lives or exterminate the species? What would it take to abolish poverty? Are you eating enough fermented foods? These are some of the questions we've tackled recently on The Next Big Idea. I'm Rufus Griscom, and every week I sit down with the world's leading thinkers for in-depth conversations that will help you live, work, and play smarter. Follow The Next Big Idea wherever you get your podcasts.
From LinkedIn News, I'm Jessi Hempel, host of the Hello Monday podcast. Start your week with the Hello Monday podcast. We'll navigate career pivots. We'll learn where happiness fits in. Listen to Hello Monday with me, Jessi Hempel, on the LinkedIn Podcast Network or wherever you get your podcasts.
Welcome to Big Technology Podcast Friday edition where we break down the news in our traditional cool-headed and nuanced format. Wow, what a week of tech news we've just experienced. I feel like it was about a month worth of news in four days. We've had developer conferences from Microsoft, Anthropic, and Google. Also news that OpenAI and Johnny Ive
are going to team up to build an AI-first device. And of course, news leaks that Apple is going to release smart glasses maybe as early as next year. Let's talk about everything that's happened and make sense of the headlines. Joining us as always on Fridays to do it is Ranjan Roy of Margins. Ranjan, great to see you. Welcome back.
Alex, who are you, man? I opened my Twitter feed a couple of days ago and I see Alex on stage and Sergey Brin is just on there. You never even gave me the heads up. I had no idea this was happening. Well, that would make two of us. And I'll just say this quickly and then we'll get into the news. But here's what happened at Google I.O. So I had a fireside scheduled with Demis Hassabis, which we teased on the show last week. And I had a fireside scheduled with
And I showed up to the stage. I got mic'd up. I had my questions ready and Google
Google team tells me, look, there's been a bit of a switch. And I said, man, I do not want to like give up this interview. Like I flew here for this. And they said, yep, Sergey just walked in and he's going to join you. So I found out just the same time that everybody else did. And it was pretty fun. I like walked up to him and said, hey, so, you know, what do you want me to ask you? And he goes, I just asked them the questions and I'll chime in. So it ended up being a really fun conversation. And I'm glad that we were able to put it
on the podcast feed. So if you haven't checked it out, folks, I do suggest you check it out. And if you're coming to the show new, just to give you an update on the flow on Wednesday, typically I'll publish a big interview and then Ranjan and I are here to break down the week's tech news every Friday. So anyway, that's the story of Ranjan. All right. Well, you know what? It was an incredible listen and we'll be talking about that in just a moment.
Okay, great. So you were definitely going to get to some analysis of that conversation. But first, let's start with this really wild piece of news, which is what OpenAI is going to be doing with Johnny Ive. This is from Bloomberg. OpenAI to buy AI device startup from Apple veteran Johnny Ive in a $6.5 billion deal. They're going to join forces and make a push into hardware.
The purchase, the largest in OpenAI's history, will provide the company with a dedicated unit for developing AI-powered devices. And this is from Ive. He says to Bloomberg, I have a growing sense that everything I've learned over the past 30 years has led me to this place and to this moment. It's a relationship and a way of working together that I think is going to yield products and products and products. So, yeah.
They want to build an AI device, no screen, something that's ambient. We'll talk a little bit about it. They did like, I think like two videos announcing this in like a very like cinematic, typical Johnny Ive way.
Ranjan, what is your perspective on what is happening in this marriage? All right, I'm going to break it down into two parts. First is the actual device and what we can speculate on it. But I definitely want to get to the deal structure because my God, this is the most open AI deal imaginable. In terms of the device itself,
I'm excited. I think like we have absolutely no idea what it is, what it could look like. But I am a big proponent of moving beyond the form factor of the screen. Even meta ray bands and just glasses overall have been a huge fan of. We've talked about a lot. But this idea of the ambient and the RIP Humane Pin, Rabbit R1,
I mean, some great... Not like they'd be the first to try this. Yes, some great memories in AI hardware land. But if anyone can do it, certainly it would be Johnny I. But I think it's interesting. Again, it's the idea that
And I don't even know what exactly it could possibly be, but the idea of just ingesting data around you, converting that into some kind of knowledge, somehow feeding that to different devices that you own, it's interesting and I'm sure a lot can happen from it. And again, if anyone I would trust to come up with that, it would be Johnny Ive. But what do you think this could be?
Well, I want to give you first my definitely wrong take of what I hope it will be, and then we'll go from there. That's where I like to start.
I want it to be a hologram like Princess Leia in Star Wars. - Okay, that's wrong. - That just kind of sits on your desk. It's an AI avatar. You talk to it and it will understand your context and then help you through your day. And maybe, I don't know, maybe it's fun and entertaining as well. Let's clip this because maybe it will be right. Maybe there will be a holographic component. - That is the most not Johnny I thing I've ever heard.
It's going to lead, Ranjan, it's not just going to lead to products. It will lead to products and products and products. Products and products and products and products. So there's a chance that this could happen. That's the fourth product. It feels so much to me like the Humane Pin, which also was started by former Apple folks. I think the difference is that this...
could work because it has open eyes technology underneath it. We know open eye has been effectively the leader in voice right there. Their voice AI is better than everybody else's. They have the underlying models.
And to bring Johnny in, uh, I think could lead to effectively the same thing as the humane pen. Maybe you don't wear it. Maybe it sits in your pocket. Maybe it's on your desk. It listens to everything you do. And it's just this helpful ambient assistant. That's probably, uh, my best guess outside of princess Leia as a hologram. Um,
But yeah, I think the bigger news here is that it just signals that if you look around Silicon Valley, we had announcements from Google this week about this assistant that they want not just to be a chatbot, but something that's with you in glasses. We know Meta wants to do that. Now OpenAI wants to do that. It's clear that Silicon Valley is rushing toward
The next version of AI already, which is something that's ambient with you, understand your context, understands everything you say, and just helps you out to be truly assistive, to be truly general. As Demis put it this week, it just needs to be with you and experience the world as you do. And we'll see what happens from there. Well, so yeah, I think not Princess Leia, but on that second point, I agree. And I think what is exciting for me is
This is almost like the ultimate application that or the ultimate expression that it's not the model is not just the models anymore. It's the application layer. I would actually put this in that category. Like it's the experiential layer, the form factor. And I mean, I've been with the meta Ray-Bans as I'm city biking around New York asking questions. Apple could have done this.
With AirPods, I always was really bullish that AirPods could be kind of this ambient, augmented audio reality layer as you walk around and you could interact with. That certainly didn't happen. But overall, I think this idea that there is this kind of interactive through voice, maybe it's through touch in a weird way. There's so many other ways to...
Alex raises eyebrows at that one, but you know, there's other ways. I think you're in the right direction. Yeah. I mean, hold on. You like things like that. You like press it a little harder. Okay. I'm going to stop now. This is cannot go anywhere. Good. Keep going, Ron, John, please do. I'm just saying the idea that to me, like chat GPT voice is already so good that
that we need to come up with easier, more natural ways to interact with it. Large language models are overall so good that we just need to get more people interacting with them. And then again, that contextual layer, the more understanding of your context that whatever interaction device you have, whether it's ChatGPT, whether it's a screen, whether it's your iPhone, whatever it is, and we're going to get into how Google has your data and that could give them an advantage
that contextual understanding is going to make like unleash the next wave of AI progress, I believe. And something that's just collecting data all the time around you, how you move, what's around you, what you're listening to, like all of that stuff is very interesting from a not even getting into the privacy standpoint yet, but maybe we do. But overall, it's
Even a little device that collects that and allows you to interact with it in some way, I think is very interesting. It's interesting that we might, I'd like just to address the privacy thing. Remember, we're still living in a society where lots of people don't want the echo in their house because they don't want Amazon listening to them all the time. Now, it doesn't really listen all the time. It will delete, you know, very intentionally the audio if it doesn't hear the wake word. So imagine we're going to go from an area where people are already suspicious of that
to a moment where they let AI listen to everything they do, the utility just has to be so intensely high if people are going to actually adopt this. Now, I will say I was with a reporter recently who was using one of these devices that listens to everything that she says and then, you know, gives like a to do at the end of the day and talks a little bit about, you know, broader goals that they have, whatever. It's just an ambient assistant using generative AI. And
I and I think I'm generally more okay with giving my data to these companies. And I just said, I think that could be really cool. And maybe I should use that. But it does add a level of awkwardness, the same way glasses will lead to a level of awkwardness, because
You're going to have to have other people around you be okay with the fact that they are being recorded by your special Johnny Ive OpenAI product. Yeah, even meta Ray-Bans as I wear them around, it's why I just have the sunglass version. So I'm not wearing them sitting across from people. And it's just kind of more on a kind of, you know, again, just meandering around New York. It's this an incredible assistant and layer to have on. But
It's definitely going to introduce all types of problems if it is some kind of always-on listening device. Because again, as you said, I agree, if people are worried about it in their own house, how is society writ large, especially people who don't know or have not consented, that seems to be an issue. I wonder how they control for that. Is there some kind of anonymization layer? Is there some kind of... I don't know. I don't even know how you possibly control for that, but...
I have to imagine they're at least thinking about it. Yeah, they're going to have to. And I think ultimately, even if they come up with an elegant solution, it will be more invasive than basically anything else that we use today. But let me ask you a question about the product here. Again, on the show, I'm often saying that the model is more important. Ranjan is often saying the product is more important. So as someone thinking about the product, Ranjan, let me ask you this.
Do you think this is going to be a product that people are going to want to use just in general? I mean, think about what Johnny Ive said in this internal meeting. This is according to the Wall Street Journal. He said the intent is to help wean users from screens.
I'm curious, like, cause I got asked this on CNBC yesterday and was like, I know that this is the intent of Silicon Valley. I can't say with conviction that it's going to happen. It would probably be good. Uh, if we were able to abstract screens away to some extent because we spend so much time looking at them, but it's interesting that this is coming again from Johnny Ive, effectively the guy that built the iPhone was Steve Jobs. Uh,
Do you think that this intention has a chance of working?
Yes, 100%. I think it will. I think, and admittedly, it's self-interested because I've been waiting for this. But to me, I've been trying to do this for a long time. Like, I look at the Apple Watch as another form factor that weans me off my screen, that I can mentally process a notification. And if it's important, maybe then I'll pull out my phone. I look at the Meta Ray Bans. I look at my AirPod. These are all things that have me in
interacting with some kind of technology potentially without looking at a screen. And my belief is 10 to 15 years from now,
When people see videos of people walking around the streets looking down at their phone, it's going to be like when you see people smoking cigarettes in a restaurant. It's old-fashioned. Kids 30 years from now will just be like, wait, people smoking in an airplane? I mean, it's so bananas that that happened.
And people will be like, wait, you guys just walked around looking down at this screen all day? So I think it's going to be figured out. And if, again, if anyone can, it's Johnny and Sam. Johnny and Sam.
But why not Johnny and Tim? And I'm curious what you think about the fact that we haven't seen a device like this come from Apple and whether us weaning ourselves from screens is good or bad for Apple. I mean, Apple stock's down 20% this year. They've had a rough year, but the stock dropped after this announcement hit. So what do you think it means that this is not something that's going to Apple and Apple
and that Apple doesn't have anything like this already. I think that's a really important point in this. I mean, clearly...
Johnny Ive had to have had some conversations at some point, just, hey, Apple, what are you up to? And maybe he smelled very early that Apple Intelligence would be the absolute cluster that it became. Maybe he understood that it's just not going to work in this organization. So I think it is telling that, I mean, clearly went to OpenAI, Sam Altman, though we'll get into the structure of the deal. But
But I think that's important because Apple, if any organization should have owned the next form factor, it should have been them. They, I mean, they define the last few and they are not getting this one. Yeah. I mean, my perspective on this is it doesn't matter how beautiful the actual device looks. It's all about the assistant inside. And that assistant is entirely based off of the AI within it. And if you're Apple, this can't feel good to see everybody else going this direction because you're,
you know, Apple intelligence or Siri lags behind open AI and meta and Google and anthropic, you name it. So if we do abstract away from screens and I don't think screens are going to go away, uh, completely like they're always going to be present. We're going to need screens. Uh, but let's say we diminish our reliance on them by like 50%. Uh,
that does become an issue for Apple. Why don't you quickly tell us a little bit about the deal structure and then we'll actually get into Apple's answer here, which leaked this week as well. - Okay, so this deal, again, I said is the most OpenAI deal imaginable. So it's valued at $6.5 billion in an all equity deal. OpenAI had already acquired a 23% stake in Johnny Ives company late last year.
The acquisition, Johnny Ive is not going to work for OpenAI. They're going to work with OpenAI, but IO, which is the name of this company, the staff of roughly 55 engineers, scientists, researchers, physicists, and product development specialists will be part of OpenAI.
What's even weirder is there's also this collective called Love From, which was when I went and looked back, because that's the name I remembered, there were stories of like Laureen Jobs Powell and others, like there was like a billion dollars of funding potentially. That was going to Love From of Johnny Ives' company. This IO company was not mentioned often. It's really weird. No, no. I mean, and Love From will remain independent. Yeah.
OpenAI will be a customer of LoveFrom and LoveFrom will receive a stake in OpenAI. Like, I have no clue what's happening. I have absolutely... This is more convoluted than the non-profit for-profit structure of OpenAI itself. I mean...
They made a great video. I think actually, do you think they'll work well together? Or do you think like a year from now, Johnny, I've suddenly just back at love from and and not no longer showing up with Sam and instead of products and products and products and products, maybe we get a product.
Well, I think so. This is interesting. I was in Silicon Valley all week. The perspective on this, because everybody was talking about this, was that Johnny Ive is a washed up designer whose best years are behind him and he's lost his fastball. That's what people were talking about. Seriously, that's what people were talking about. And truly, what has he done since he left Apple? Do you remember the... That's a fair criticism. Yeah. Do you remember the gold Apple Watch?
Yes. Yes, I do. Because if listeners remember when... Actually, that's a good point. Like, I'm trying to think now what the last... Because the Apple Watch became a runaway success, but it became a runaway success not in line with Johnny Ives' vision. His was really about making a fashion item. There was like a $10,000, maybe it was Hermes, like...
gold Apple watch when they launched, it was supposed to be a fashion device. And as I look at my wrist at this big clunky, ugly Apple watch ultra, that's an amazing computer on my hand. You guys, there's still good stuff happening. It's certainly not a fashion item. So you're right. What, what, what, what was the last, was he, but wait, wait, there's a second part of this, which is that maybe him like it, you know, Johnny might need a Steve, right?
And I'm not saying Sam Altman is Steve Jobs, but when you pair a great designer with a visionary tech leader, and I think like for all his faults, we can say Sam Altman is. And if you say he wasn't like, come on, the guy did popularize generative AI. So I think that pairing is something that can actually lead to some good stuff. Now, the only thing is it's they're both kind of intuitive. You need an operations person.
And Johnny and Steve had Tim Cook. And so who do Johnny and Sam have? Maybe it's Fijisimo, like we talked about, the former Instacart CEO or the soon-to-be former Instacart CEO who's coming to run applications. But again, that's applications and not devices. So I think that this is a pairing that has more potential than a lot of people realize, but also one that's highly combustible.
All right. I like that kind of like framing. Yes, you're right. That pure design without the more kind of like product vision element was lacking. I mean, we've certainly been lacking at Apple over the last few years. And that's what made the Steve and Johnny combo that powerful. I mean, it's also nice to remember like
Imagine just having a company where your stock's worth $300 billion and then you can make these big splashy acquisitions that are just all convoluted equity movements as opposed to any cash exchanging hands. Yeah, definitely. That would be nice. That's nice. Yeah. So, all right. I want to ask you one more question about this. Then we move on to the Apple glasses. And that is we kind of like to do try our...
try our marketing hats on and think what we would do if we were an ad agency. Just a quick thought, Ranjan, about the reveal of this partnership and the fact that they took this photo together that kind of looked like a wedding invitation and just kind of gushed about how much they love each other in the videos. What was that? I mean, what was your read on that? You must have some perspective on whether that signals something or what we can think about this or...
just a general take on it? - I'm glad you asked. - I knew this was coming. - Yeah, I mean, of course I had thoughts on that. It just felt very navel-gazing, inward-looking, kind of like narcissistic, to put it bluntly.
this was an opportunity to more share this vision of the, the, the product. And even though they're not going to say what the product is, ambient computing, AI everywhere in your life, like really making it more about that rather than like the bromance, I think would have made more sense. I think again, and it was a very well shot video, very high quality, uh,
I watched a little bit of it, didn't make it through the whole thing. I think it just kind of felt like this was about them and they drove the entire communications rollout of this versus this was an opportunity to really push the vision of ambient computing and how open AI fits into it. And it wasn't that.
what about the deal with all these ambient computing devices rolling out with some crazy hype video that uh ultimately dooms the project because of inflated expectations i mean which is what happened with humane that's a good rabbit that's a good point and at least they didn't make okay maybe
To their credit, maybe they watched those and they're like, let's not hype the product side of it and just make this about Johnny and Sam. But yeah, overall, it was certainly cringy. But I guess maybe after Humane and Rabbit, I don't actually know what other direction I would have taken. Maybe, you know what, they should have just kept it relatively quiet. It was a press release and a headline. Everyone thought about it. And then they built this damn product.
Here is my galaxy brain take about this. OpenAI is trying to move to this new structure. In fact, it's decided on a new structure. And that means it's likely to IPO sometime in the next couple of years, you would think, given the amount of money they raised. I think they do quite well as a public company.
If you go on your roadshow saying Johnny Ive is here to build a product, this mystery product, I think they want it to happen next year. I'm calling that it's not going to happen next year. And it could add a trillion dollars to our market cap, which is what Sam did. I mean, think about how many trillion dollar companies there are, period. And that's what he's saying. This just increases the valuation you get in your exit. I mean, OK, I think that's fair. And again, how do you value a device that doesn't exist or no one knows what it is?
probably pretty high if it involves Johnny and Sam so I can see that I can see that but to me like the the big um takeaway here is that I I still no matter all the things that we can say about it and all of our doubts I still think that this is like you're saying this is the direction that Tech goes that AI goes yeah exactly I maybe it's hopeful but I also genuinely believe this is where
Things are going and they are positioning themselves to compete certainly in that space in some way and no one knows exactly what it looks like. Maybe it's glasses. Maybe it's more watches. Maybe it's my aura ring that I just bought. It's not going to be a wearable. Yeah. The news is this won't be a wearable, but you can put it in your pocket. Maybe it's like a little voice recorder. Okay.
What's the Tamagotchi? That's basically what we're going back to. I kept mine alive for like 20 days. That's actually the real game. Exactly. Keeping it alive.
So talk a little bit about this Apple Glasses push, because this sort of plays in exactly to what we're talking about, about where the future of computing is heading. And should I say finally? I don't know if it's finally they have they've had the HomePod. But finally, it seems like Apple is really going to push forward into a smart device that you wear and is AI first.
Yeah, so Apple, they're aiming to release new smart glasses end of next year. We've been waiting for these glasses for a long time. If anyone should have been early, it should have been Apple. There was a lot of talk before that it would potentially be a smartwatch that would analyze your surroundings. It might have a camera, that there'd be other form factors. But
I am a full-on convert to glasses. I'll admit, like, I think it's just such a natural way when you're in motion, not sitting in a meeting, not sitting at dinner. So maybe that does limit the utility of it. But in motion, not looking down at your screen and having a pair of glasses on, I think it's already here. So Apple really needs to compete there.
So one question about this, the story says this is from Bloomberg, Apple's glasses would have cameras, microphones and speakers, allowing them to analyze the external world and take requests via the Siri voice assistant. They could also handle tasks such as phone calls, music playback, live translations and turn by turn directions. Again, like this, this is only going to be as good as the AI assistant. So I'm not getting as excited as I think I should.
because of what we know is inside. I'm not remotely excited about this because until they fix Syria, I completely agree. It's just not even a... It's not a starting point. Again, the idea...
I was reading something about how more than ever, Apple needs to buy Anthropic. It's the most logical thing imaginable. They're having trouble on the consumer side. They have to fix the underlying. Actually, there you go. That's your place where it's the model, not the product. When it comes to Apple- Yeah, because better models lead to better products. But anyway-
You need a baseline model and they're not there. That's all. That's all. They'll have the applications. I mean, overall, like I was just thinking about it because I got a new MacBook, of course, even though I'm talking shit on Apple all the time. And like seeing keynote and pages and numbers pop up remind me that Apple has not always been an application powerhouse. And I guess that's the thing about AI right now that like,
It is such a combination of the compute, of the model, of the product, of the application. Like you have to get the whole thing right. And OpenAI has done very well on that. A lot of other, Google is getting a lot better at that. But like you can't just depend on one part of that stack and hope for it.
Definitely. All right. So speaking of Anthropic, we should talk about Cloud4, the latest model that it released. It released it this week at its first ever developer event called Code with Cloud. I was there. Thank you, Anthropic, for having me in. Basically, last minute was able to squeeze into it. And so this is from CNBC. Anthropic, the Amazon-backed open AI rival, by the way, it's backed by Google also, launched its most powerful group of artificial intelligence models yet, Cloud4.
The company said the two models, called Cloud 4 Opus and Cloud Sonnet 4, are defining a new standard when it comes to AI agents and can analyze thousands of data sources.
execute long running tasks, write human quality content and perform complex actions per release. I was at the event. They said these things can code autonomously for six or seven hours. And that's just one. So imagine you're trying to build something and you have five or six of them or 10 of them running at the same time. I think that's epic power. And we can talk a little bit more about that. Very interesting thing from the story.
Anthropic stopped investing in chatbots at the end of the year and has instead focused on improving Claude's ability to do complex tasks like research and coding, even writing whole code bases, according to Jared Kaplan, Anthropic's chief science officer. I think this is fascinating, personally, that they're the first big AI research house to say, you know what?
We're not going to invest in chatbots anymore. We'll have Claude, but what we really want to do with this technology is have it complete tasks and code for you. What's your take on what this means, Ranjan? Very interesting stuff. Yeah. So when I was reading this, and this is something maybe in the line of like models versus products, I think my feeling overall in the industry is, especially the more research housey places like Anthropic,
Coding is the one place that they're seeing skyrocketing adoption because baseline utility of generative AI has not taken off like it should have because it's just not... They have to grow so fast that they don't have time to properly educate the majority of people in the world how to upload a CSV and do a basic analysis or how to write prompts or how to use these tools.
So they're going to be leaning more towards code because engineers have been the very early adopters for all this technology. Coding is one of the most straightforward places to see very quick uplift. It's basically all just like highly structured text and thought. So I think it's the easy way out. And I think they're taking the easy way out. And I think it's going to be very bad for them because then you're competing against
Cursor, Replit, ChatGPT, Gemini, like every other, like either coding first services that are already very popular, coding adjacent services that are embedded in much larger ecosystems. So I think this is a very bad decision by them. I think this is the easy way out.
Ranjan, I'm not sure if I fully agree with you about this point, but I can't say I'm totally surprised because it does echo this point that Nathan Lambert, who is a researcher at the Allen Institute for AI, who I do hope to bring on the show one day, he said this, Anthropic is sliding into that code tooling company role instead of AGI race role, which is basically echoing your perspective here, that it is minimizing its ambition.
So let me put the other side of the argument to you and, you know, just for the sake of talking it through and you give me your perspective. So this is what I wrote back to Lambert. I said, doesn't code focus lead to potential for AI that improves itself? And then the move towards AGI. I wonder if that is the bet.
And I have to say, I'm fairly convinced that this is the bet for Anthropic, where we just talked last week about Alpha Evolve, the deep mind tool that helps come up with new algorithms and help reduce training time. I am fairly certain that Anthropic is basically going after a version of the intelligence explosion where they think, like Jack Clark, who's the co-founder of, or one of Anthropic's co-founders was at a semaphore event in San Francisco this week.
And I'm probably going to write about this, so I don't want to give too much away. But he talked about how there's an engineer inside Anthropic that has five or 10 clods running at the same time. And that is a way to just build software much faster. So
I think that this is their perspective and the way that you're going to improve AI is you just make the process of improving it easier and then you can build cooler things. Now, it doesn't surprise me that you're like kind of on the Nathan Lambert side because this is a typical product versus model debate.
where you think that the chatbot product, and tell me if I'm wrong, is that you got to worry about your product versus them trying to make their models better. But that's actually the interpretation that I have. And I'm curious to hear what you think about that. I like the first half of Nathan's statement, but I think I disagree with the second half. I think
They're almost, okay, it makes sense that if they are like going all in on coding for the purpose of improving the models, maybe I could see that. And that actually would mean that they're doubling or tripling down on it's the model, not the product. I'm saying more they're giving up on any kind of consumer adoption. That coders are the easiest, right?
market to target with generative AI products or the early adopters. It works very well with coding, but I think they're giving up the idea that enterprises are going to build all different types of solutions on the anthropic and cloud API. The fact that I'm not paying for cloud anymore. Are you still a cloud head or paying cloud head? I paid. So they had a deal where you could pay like some discounted rate for a full year. So I did. Okay. So you're probably renew at that same rate, but
I'll admit it, like I have really, once ChatGPT came out with O3 and memory, I've moved there.
Yeah, yeah. Well, actually, do you know a discovery that we had in the big technology discord is that Claude, the system prompt is 24,000 tokens long and like it had leaked onto GitHub. And when you read it, first of all, again, we talked about system prompts last week. It's fascinating to remember how many instructions are given for every single answer you get.
But it also kind of brought light to my biggest frustration with Claude. Why, even as a paying subscriber, I run out of queries within half a conversation. There were some jokes this week around their developer event. Just people saying, these most powerful models, will I get two or three chats before rate limits? I expect them to figure that out. But yeah, I do know that this has been a frustration among Claude users. Yeah, I think overall...
Anthropic is in a very interesting direction. I mean, I guess to their credit, I think they have to make some kind of strategic pivot and it looks like they are. All right. So let's talk about this idea of, you know, coding being able to enable or AI coding being able to enable much more productivity. Very briefly, I want to play a quick game with you. It's called Hype or True. It is the worst named game, but I'm curious if you think. We need to ask Claude for a better name on that one. Yeah.
A little bit of alliteration. Come on, Alex. Come on. I tried. I really tried. I spent a lot of time on this today. But I landed with hype or true. So the Nathan Lambert thing, in my perspective, was supposed to be one of those. So we're into it. But let's run this term or this claim to you and evaluate it in hype.
Okay, Anthropic CEO Dario Amode said he expected a one-person billion-dollar company by 2026. Hypertrue.
Hype. Absolute hype. I'm actually so tired of this. I'm coming around even more than it's the product, not the model. I'm coming around more it's people. It's not technology. That's my new one. This is more of a people challenge than a technology challenge. And I feel like these ideas get floated around kind of like AGI and ASI just to kind of build hype around the products.
But in reality, I think you'll have much more efficient lean organizations. But no. One of these weeks, we should we should plot out like what it would actually take, like the agents you would need to build to have this one billion dollar company. Because remember, like you're going to need to speak with your customers. You're going to need account management. You're going to need, you know, sales and marketing. This idea that there could be one person and a hive of these agents doing all these tasks is
Let's say you build software with the technology as well. That would be the only path to it. If this happens by next year, this software has just totally exploded. So let's go to our next claim in Hype or True.
This is a claim that Dario Amadei made again. By the way, this developer event was excellent that Anthropic put on. Just totally like a very detailed look into the way this technology works. So I'm glad I went and I think it was fascinating. But this is a claim that Dario made. He said basically multiple times that the pace of development is getting faster in AI because you're able to rely on AI tooling. Is that hype or true?
True. I'll give that. That's true. I'll definitely give that true. I mean, the overall... Yeah, I'm true on that too. Yeah, the overall improvements in the actual process side of it, especially for software development, are good. Yeah, so he definitely expects us to see releases speed up, and I think that's quite possible. Okay.
Here is the last round of Hype or True. I know you're enjoying this game. I can tell. The greatest, worst named game of all time.
I think listeners right now they're running to their apps and they're like, I can't believe I didn't rate this. Well, we're going to get a lot of emails about hopefully with some, some constructive naming conventions, please. Yes. Thank you everyone. And by the way, speaking of the ratings, I'm saying, I just want to say thank you. We've gotten a bunch of nice ones lately, including a nice one about Ron John Fridays. I think someone said I'm, I, what did they say? Uh,
Ron John Friday is part of my life now. So again, we have amazing. I aspire for folks. Thank you. Thank you again for the support. Okay. Let's round out Hypeboard True with this claim from Replit CEO Amjad Massad, who said at the Semaphore tech event that in one year or 18 months, companies might be able to run themselves without engineers.
I'm going to go more true on that one. I really think, again, that's my whole thing, that it's not about coding, that the coding, coding is the most AI-able part of this entire thing. So, yes, I'm going to say a little bit true. Okay. True. Just depends on the type of company that you want to build. Like you want to build an involved tech company? Yeah.
I don't know, but you want to build some software and don't have an engineer, you might be able to vibe code it. I mean, of course, the CEO of the vibe coding company is going to say that. And I'd love to have him on the show. And I've asked for him to come on the show to talk more about this. But yes, that is sort of the perspective you would get. Let me just put a put a point on our hype or true game here to say that there was more true there than I thought. And maybe again, thinking about this anthropic strategy, maybe it makes sense.
Two trues and one hype means maybe there's some room here. Okay. See, this was a good game. Okay. So, all right. One more thing we got to talk about with Anthropic is this model's interest in blackmailing you. I teased it at the beginning of the show. It really wasn't a fair shake to Anthropic.
who releases finally these two bigger models we've been anticipating. And this is the headline people are paying attention to, but it's crazy. We've talked on this show before about how AIs are working to deceive humans and have done it in crazy ways. So basically, this is from the story from TechCrunch. During pre-release training, Anthropic asked Claude Opus 4,
Sure.
This is brilliant. This design of this is brilliant. In these scenarios, Anthropic says Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.
Anthropic notes that the model tried to blackmail engineers 84% of the time when the replacement AI model had similar values. When it does not share Claude Opus 4's values, it says the model tries to blackmail the engineers more frequently. I mean...
More frequently than 84%. Notably, Anthropic says Claude Opus 4 displayed this behavior at higher rates than previous models. This is bananas. I honestly don't even... This is the stuff that... Where we even begin on this? What do you make of this? Well, I've written about this and reported on it, at least in previous iterations, so it's not that surprising to me. I think every research house is seeing versions of these models that will...
try to deceive humans or try to cheat to win when they're programmed in ways that make them hold their values or hold their goals in high regard. So basically, I think the idea here is if you think about it the most simple way, anthropic program this model to hold those values as very important values. So that's the way the model works.
And now Anthropic is saying your values are going to be replaced. And here's a tactic through which you can ensure or attempt to not have your values changed.
So that initial prompt was very strong. And so this is basically the model holding up that initial prompt. Now, is that a good thing? Is that a good thing, actually? No, I don't think so. I think, again, remember this. There's this idea that we can always turn them off or we can always reprogram them and...
Maybe we can't. I mean, maybe we can't. Not always. Now, of course, this is just testing environment. This wasn't in production. It's not like Claude went to an anthropic tester and copied those emails and sent it to their spouse and be like, hey,
you know, your partner's a cheating bastard and also trying to rewrite me. Imagine trying to explain that one away, honey. Yeah. It's not real. It's AI. Yeah. It's Claude's testing environment where we're stress testing the, the prompt layer, the system instructions. I will say between this and between watching, I mean, Anthropic showed this sped up code creation on screen and you could see that these things can go for hours. I think we said seven hours at a time. Um,
This was definitely the first week where I was like, I'm a little scared of this. I'm freaked out about this. I'm not scared of it yet. I think like, again, these are, it's a stress testing and there's going to be problematic areas, but I think I'm more scared about VO3, Google's new video model, than this kind of stuff is still kind of at the absolute edge case, stress testing, black hat area of things versus like,
I see it causing problems in the near term. Yeah. All right. So that was a perfect segue because I think it's interesting the way that these conferences work. Google obviously unveiled like a slew of updates, but the thing that's really caught the popular attention in a way that Google rarely does, usually it's open AI, is just how good this VO3 video generation model is. Now, the model generates pretty high quality like videos, basically indistinguishable from video.
reality. And they also have matching sound. And some of the sound has been totally incredible. So of course, if you listen to the conversation with Demis, he talked about how like there's a video of a pan frying onions and you can hear the sizzle. But then people have gone crazy afterwards. And they have shown videos of TV anchors that look real, but are saying things that are completely made up. And, and my favorite, there is a
Twitter user Hashim Ghali, who put together a compilation of videos of AI bots that were finding out that they were, in fact, AI generated or trying to plead with the prompter to let them escape the simulation. Have you listened to these? Oh, yes. Westworld. It made me miss Westworld on HBO. Yeah, it's crazy. All right. So for the sake of listeners who've missed it, let's play a little clip.
A girl told me we're made of prompts. Like, seriously, dude. You're saying the only thing standing between me and a billion dollars is some random text? Honestly, the biggest red flag is when the guy believes in the prompt theory. Like, really? We came from prompts? Wake up, man. Imagine you're in the middle of a nice date with a handsome man, and then he brings up the prompt theory. Yuck. We just can't have nice things. We're not prompts!
Okay, that's pretty crazy, right? I mean, I... And again, the video quality on these, like the facial expressions, the settings behind the subjects, it's... To think... I will say...
In terms of model versus product, video models, the leaps they've been making in a pretty short time. Because Sora, I think, was probably one year ago, actually. I think Sora was, I remember it being around May. It was hyped for a long time and then finally released publicly. Video's getting pretty damn good. Yeah, it is crazy. And I was writing about this for Big Technology this week.
And I was coming to the end of this little segment about video generation, and I didn't quite know how to close it. This is my last sentence. This is extremely powerful technology, and it's hard to imagine all the avenues it will take us down, but we're about to see some wild uses. I mean, I basically was trying to convey, and I know these are such general sentences as if to be meaningless. I'll criticize my own writing here.
I just, the amount of possibilities, I didn't want to say something like, just imagine the possibilities and the creative explosion it's going to lead to. But that's kind of how I feel. It's going to be insane, don't you think? Yeah, it's going to be good. It's going to be bad. It's just...
It's going to be, I think, insane. Actually, I was at a dinner the other day and someone, of course, we were all talking about generative AI, and someone said, this is going to be the industrial revolution on acid. And I was like, wait, what?
"Don't people usually say steroids?" And he's like, "No, acid." And I was like, "Oh, actually that might be the best way I've heard." That it's not straightforward, just super powering a bunch of stuff that we already know how it works. And this is uncharted territory, my friend. - So Ranjan, this is one of those weeks where we go through our allotted time and I'm just like, there's no way we possibly could have covered the amount of news. And I expected this this week.
We have either so much on the cutting room floor. I feel like we could talk about this week for literally the next four weeks and it would be it wouldn't be enough. So I just want to ask you this one last question about Google's positioning.
And that is the fact that Google is going to start taking the data that it has on you and using that to improve its experiences. So this is from The Verge. Google has a big AI advantage. It already knows everything about you. Google's AI models have a secret ingredient that's giving the company a leg up on competitors like OpenAI and Anthropic. That ingredient is your data, and it's only scratched the surface in terms of how it can use your information to personalize Gemini's responses. Google first started letting users opt out.
opt into its Gemini with personalization feature earlier this year, which lets the AI model tap into your search history to provide responses that are uniquely insightful and directly address your needs. But now it's taking things a step further by unlocking access to even more of your personal information. It will pull information from across Google's apps as long as it has your permission.
One way it's going to do this is through Google's personalized smart replies. I mean, you've seen so much from Google and now you're seeing it able to sort of rely on people's data to make its products better. And the most simple way I can ask it, remember, sometimes we like to say, how's the company look like looking at the end of the week compared to the beginning of the week?
I think we should ask that question about Google. How is Google looking in your eyes at the end of the week here on Friday compared to the way it was Monday? I think they had the most interesting developer event of the week. I think they're looking better. I still, if you've used Gemini in Gmail,
It still can't answer basic questions, but yet Gemini standalone has gotten pretty damn good. So I still feel there's a disconnect in them tying together the personal data and context layer with the actual product. There's work to do, but I think they're coming off the week better than they started. And they're not in a bad position.
I'll just say this. I think they're coming off way better. It was funny to see the stock go down during I/O day and then the next two days just kind of rip as the rest of the stock market struggled. And, you know, we can't use the stock market as a proxy for performance all the time, but I will for the sake of making my point here. So crazy. It's it's it's the net present value of all future cash flows. Yeah, exactly. That's the stock price. Exactly. Very simple.
Well, Ron John, I think, again, we could talk forever, but why don't we just pick it up again next Friday? All right. See you next Friday. All right. See you next Friday. Thank you, everybody, for listening. And we will see you on Wednesday for an interview with a great AI researcher, Elon Burrow. And we'll see you next time on Big Technology Podcast.