
Boom Times For ChatGPT, OpenAI’s Deep Research, AI Super Bowl

2025/2/7

Big Technology Podcast

Chapters
This chapter analyzes the recent resurgence in ChatGPT's popularity, examining various contributing factors such as the introduction of voice chat capabilities and the overall increase in public awareness of generative AI. The discussion also touches upon the significance of ChatGPT's brand recognition in the market.
  • ChatGPT's web traffic nearly tripled from April 2024 to late 2024.
  • The introduction of voice chat capabilities is considered a key inflection point.
  • ChatGPT's brand recognition as the leading generative AI tool significantly contributes to its success.

Transcript

New ChatGPT growth numbers come in, OpenAI has built a pretty good research assistant, and the Super Bowl fills up with AI ads. We'll cover that on a Big Technology Podcast Friday edition right after this. From LinkedIn News, I'm Leah Smart, host of Every Day Better, an award-winning podcast dedicated to personal development. Join me every week for captivating stories and research to find more fulfillment in your work and personal life. Listen to Every Day Better on the LinkedIn Podcast Network, Apple Podcasts, or wherever you get your podcasts.

Welcome to Big Technology Podcast Friday Edition, where we break down the news in our cool-headed and nuanced format. We have a major week of news to cover and some of our own to break. And we're joined, as always, on Friday in studio, live from Spotify headquarters.

by Ranjan Roy of Margins. Ranjan, great to see you in person finally. Welcome back to the show. Big Technology Podcast Friday edition is all cleaned up today. Alex and I are here at Spotify Studios. We're sounding good. Usually we're sitting both in New York or in some strange locale having a conversation through the computer screen, but today we talk in person. And

We love to cover the news every week. This week, we're going to break some news or at least share some new data on ChatGPT that I've gotten from SimilarWeb, which shows ChatGPT's really interesting growth story. So we're going to start there. And then, of course, we're going to cover deep research, which both you and I have spent $200 to try. And then, of course, it's the Super Bowl this weekend. So we're going to talk about why these companies are spending money on Super Bowl ads. And now

and not on improving foundational models. And I have a feeling I know where both of our perspectives are going to be on this one, although we might be more aligned than usual. All right. Here's the data from SimilarWeb. So

For quite some time, and I even wrote a story about this last year, ChatGPT had been flatlining. The growth had just completely stopped. So you see a very, very quick run-up to 100 million monthly users, or, on the chart that we're looking at now from SimilarWeb, they're measuring web traffic, so about 2 billion visits per month. And it flatlines. And it stays

basically either down or just barely touching where it was in early 2023, so four or five months after ChatGPT is released. And then there's an inflection point. And I'm pretty sure the inflection point is when Sam Altman tweeted "her," because the moment OpenAI releases, or not even releases, announces the fact that they have these superior voice chat type of capabilities where you could talk, you could interrupt, it feels live.

All of a sudden, interest in ChatGPT skyrockets, and we can see it in the chart that we're looking at. And for those at home, it's just an inflection point moment where Sam Altman tweets "her," and it goes from 2 billion

visits per month to 4 billion, basically 4 billion. And that's when we start to see OpenAI announce that they've gone from 100 million users to 300 million users. So Ranjan, I'm curious what you think about the boom times for ChatGPT, mostly just like, how important is this for OpenAI that they've actually found something that's made their chatbot take off?

I think this reflects that OpenAI and ChatGPT is the Kleenex or Xerox, the household name, of any kind of generative AI. And the numbers, again, we heard 100 million to 300 million, but seeing this from a third party is actually pretty impressive. To see it go, from April 2024 to late 2024,

almost triple in terms of traffic is incredible. But it makes sense. They're the household name. Every non-core tech person I talk to never talks about Claude, no longer talks about Bing – there was a brief moment they might have – and is only talking about ChatGPT. So

I think it's both good for OpenAI, but it's also good for generative AI in general. It shows it's becoming more of a regular thing. So my theory is that this whole brouhaha with Scarlett Johansson, when Sam tweeted "her" and people were talking with OpenAI, or they thought they could. By the way, they did release it, but just months later.

generated way more interest in using ChatGPT. Now, there's been so many other releases they've done, better models. They incorporated DALL-E, which is image generation. So that might have done part of it. They've also stopped hallucinating. The responses are definitely better. But I'm curious. I mean, it's really, really fascinating that ChatGPT just stagnated for almost a year and then picked up. So I'm saying it's the Scarlett Johansson thing. What's your perspective? I'm...

That's an interesting theory. I'm going to give you – I'll give you that, but I'm still going to disagree. I don't think it's Scarlett Johansson here. I think this is – again, this is reflective of if I think in – throughout 2023, no one outside of tech talked about generative AI.

2024, it became a thing. We've talked about this. That was when the hype cycle kicked in in high gear. That's when everyone started thinking about it. That's when everyone started talking about it. It's every single headline. And ChatGPT is the first place people will go. It literally, it's shorthand for everyone I know for AI right now. So that makes sense. It reflects the industry, not just OpenAI. Yeah.

One of the things that's interesting looking at these numbers is just how unevenly distributed the gains in AI have been. So if we're looking at our SimilarWeb numbers, again, this is web visits. Bing had 1.5 billion per month in February 2024.

It had all of 1.85 billion per month in October 2024. You look at ChatGPT, starts with 1.6 billion, and now it has 3.7 billion per month. So it's left Bing in the dust. And by God, I mean the rest that you mentioned, Claude,

It doesn't even factor. There is no consumer adoption, basically, for Claude. Question here. Is Bing.com in the data, the search engine as well? Or is it... That's the search engine. Okay, okay. So ChatGPT has surpassed the search engine. And the search engine really hasn't gotten much of a bump. No. Even though it's delivering so much of the same services. So you're right. It really is the brand that makes...

the biggest difference here. Actually, let's take a moment here to pour one out for Bing. Because remember, I think in 2023, when we would talk, we were Bing boys. Remember, like, Bing was on par with ChatGPT as kind of the face of whatever was going to happen in generative AI. I remember people having, like...

just the weirdest, wildest conversations with Bing. No one is doing that today. No one is stress testing Bing. Microsoft, they just kind of, I guess they went all in on co-pilot and enterprise, but Bing consumer, it was a good run. It was a good run, but we tried. We tried. We do have cameras with us today, so allow me to just quickly address the audience.

Yes, we were Bing boys and we apologize for that. And if you're just joining us today or recently, let's wipe that out of our memory and we're going to pick up as if that never happened. I'm a proud former Bing boy. I'm okay with it. Honestly, everyone goes through their Bing phase at some point, right? Well, look, you got to live it out. Bing was at its best when it was trying to steal reporters' wives. Once they neutered that capability, it was toast. I mean, look at what happened.

It's really disappointing and a disaster. Yeah. Sorry, Bing. But you're right. To me, the –

Oh, man. It almost makes me question my normalcy because I'm on perplexity all day. I'm looking here, Claude. These are the places I'm spending a lot of my day and no one else is. No one else is. Maybe we're just ahead of the curve. Hopefully. I like to think that sometimes. Me too. But here, look, this is another thing that we think about coming out of last week where we talked about how

DeepSeek came out. It's about as performant as OpenAI's reasoning model. It's much cheaper, and it shows you the full chain of thought. And, well, actually, we'll get into that in a second. But it's about as performant, and it's much cheaper. And we talked about how models don't matter. And if you're looking for the optimism about OpenAI, it's that they have a runaway success as a product in ChatGPT, and the numbers just really push it forward. Yeah, no, no, I think that's correct. And we talked, to me, still...

OpenAI's greatest trick in the world, and we've talked about this before, is that in the UI, the way it kind of like let the text stream out to you when it didn't need to – if you ever call the API, it just gives you a block response – made people feel like this was something magical and it was thinking. OpenAI has always been, and we're going to get into deep research, Operator is not a good product.

But it's a mesmerizing product. It's a beautiful product. It's just not very good. So they still have a strong team. And now the head of product, Kevin Weil from Instagram and Artifact briefly, like they're playing the right game in terms of product, I think. I think. Financially, we can discuss separately. Last week, we also looked at DeepSeek's performance and we said, oh, this is bad because they've commoditized OpenAI's model. But...

Further data that I got from SimilarWeb shows another story, which is maybe even more concerning for OpenAI. So we all saw DeepSeek go to the top of the App Store charts. And for me, it was like, well, the App Store charts take into account hotness. Like how hot is your app? If your app is super hot, then you're going to go to the top of the charts. But then you look at the traffic and it's not only that people were downloading it, it's people were using DeepSeek a lot.

A lot, a lot. And this is again from SimilarWeb. You see last week, so January 28th, ChatGPT had 139.3 million web and mobile visits. DeepSeek had 49 million. So it cut further into OpenAI's lead than any other company has been able to. And it had about a third of the traffic that ChatGPT took years to build.
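The back-of-the-envelope math on the figures quoted in this episode (SimilarWeb's numbers as cited above, not independently verified) works out like this:

```python
# Figures as quoted in the episode (SimilarWeb data, cited above).
chatgpt_april_2024 = 1.6e9   # monthly web visits, April 2024
chatgpt_late_2024 = 3.7e9    # monthly web visits, late 2024
bing_feb_2024 = 1.5e9
bing_oct_2024 = 1.85e9

chatgpt_growth = chatgpt_late_2024 / chatgpt_april_2024
bing_growth = bing_oct_2024 / bing_feb_2024
print(f"ChatGPT: {chatgpt_growth:.2f}x, Bing: {bing_growth:.2f}x")  # ~2.31x vs ~1.23x

# January 28 snapshot: DeepSeek vs ChatGPT daily web + mobile visits.
deepseek_share = 49e6 / 139.3e6
print(f"DeepSeek at {deepseek_share:.0%} of ChatGPT's daily traffic")  # ~35%, "about a third"
```

So "almost triple" is generous on the year, but the "about a third" comparison to DeepSeek checks out.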

And I think part of this is just because the product, the DeepSeek product, if you go to deepseek.com and I can't recommend it because you never know what's going to happen to your data there. But if you go there,

You'll see the chatbot write out its full chain of thought, and it's mesmerizing. You see the reasoning work in a way that you only get bullet points with OpenAI. And, of course, there was a lot of media interest which drove this. But for me to see these numbers and to see that it basically built a third of what ChatGPT has, again, taken years to do, that to me might have been the most concerning thing for OpenAI. That...

All of a sudden, there's a challenger that might make ChatGPT not that verb or noun or whatever you want to call it. Yeah, but I think the numbers, the more interesting part of that to me is, again, January 28th, 49 million visits versus 139 for OpenAI. Yeah.

That reflects just kind of the, just how quickly this can rise and fall, because that had to be driven by the media hype, curiosity. It also kind of makes me wonder still how niche is all this behavior, because I don't think tech-normal, or normie, normal people are going to DeepSeek. It was all of us going and spending time and testing it against OpenAI, and to get those kind of numbers for that quick,

But like that bounce, I think still shows that this stuff's ephemeral and like it can –

People can go anywhere. People can have a bunch of bookmarks up. They'll switch to the next thing because if DeepSeek came out of nowhere and got to those numbers quickly, and we'll see where it is in a month or two now, I think it shows that no one has a competitive stronghold or any kind of lock-in on this stuff other than us now paying $200 for OpenAI ChatGPT Pro, which we'll get into. Well, 49 million people in a day – or 49 million visits in a day to a website. That's not just –

The nerds. That is some part of the general population. If it is just the nerds, then what? The entire usage of ChatGPT is nerds times three? That's embarrassing. That's what worries me. No, no. When I look at this number, I cannot believe any non – we'll go with nerd – but tech-forward person was going to DeepSeek. So that actually, the 139 million visitors to ChatGPT were...

What percentage of that is non-early adopters? That does make the kind of addressable market of this a little more questionable. Back to OpenAI. If we were worried about their models commoditizing last week, if their chatbot can commoditize like you said, you could just go to a different website. And next thing you know, ChatGPT is unseated.

Shouldn't there be alarm bells going on in OpenAI headquarters right now because of what we're seeing? Of course. I think definitely. To me, Gemini is the most interesting competitor in this because – or even, I mean, Microsoft, I guess. Is it Copilot now or what's the generative chat? It will always just be Bing to me. It will just be Bing to me as well. I think –

- Once a flame, always a flame. - Because where people already are and just injecting the chatbot layer is always going to be easier and the distribution side, actually sorry, we haven't even mentioned Meta AI in all of this and their numbers, I'm sure they always have, they can always get when you have three billion users some dramatic headline number, but having the chatbot integrated into where people already are is always gonna be a natural advantage

And I think this is another case where we have not started to see that level of utilization for Gemini. But OpenAI, yes, alarm bells ringing very loudly, I think, should be the case. And so as the siren went off, the OpenAI team, a merry band of characters, made their way to Reddit to answer questions from the town. It really feels like that's what happened. They all did a Reddit AMA.

And they gave some very interesting answers. So it is really clear that DeepSeek put them on their heels, and they said as much in this AMA that included Sam Altman, CEO of OpenAI, and Kevin Weil, head of product.

Somebody asked, I think we should just read the Reddit usernames because they're fun to say. I always love – I gave a wedding speech once where I found marital advice from Reddit and the best part of it was reading the entire Reddit usernames out loud to the entire audience. You're still friends with these people? Yeah. One of my closest friends. Okay. See? So folks, what we're about to do is just going to bring us closer. Yeah. So let's go to our good friend, Lulz Inventor.

Lulz Inventor says to the OpenAI team, would you consider releasing some model weights and publishing some research? And in response comes a remarkable statement from Sam Altman. Yes, we are discussing. I personally think we have been on the wrong side of history here and need to figure out a different open source strategy. Not everyone at OpenAI shares this view, and it's also not our current highest priority.

We are on the wrong side of history on open source, coming from the CEO of OpenAI. To me, it just really pushes the point home that whatever happened last week – and, you know, everybody's been out there trying to sort of bring it down and say this isn't such a big deal – it put the whole proprietary model industry, the OpenAIs and the Anthropics of the world, down

On their back feet, understanding that they are about to be passed by open source and they have to embrace it. Curious how you read this statement. I'm glad we're on video today so viewers can see me just shaking my head because this is where, in terms of a company with this valuation, it sometimes still kind of – it amuses me to know that whatever corporate communications people would normally be around are not because this is just Sam Altman, I feel, just –

writing out loud. And just at this exact moment, that thought went through his head that maybe open source, that's the topic du jour. So let's say something big and controversial, but then even qualifying himself saying it's not a top priority. So that one felt really to me like just kind of stir the pot a little bit, but I don't read too far into that because

It can't be their strategy. It literally cannot. So financially, like, they cannot – if they open source their model and try to win only on the product and the UI alone, they will never be – what is it? Wait, what was the – $300 billion valuation that they're aiming for? Yeah, yeah, yeah. $300 billion.

You're not going to be a $300 billion company when you're open source like that. Maybe you will. Maybe it's the product that matters and you open source your model and you incorporate the best of open source and you grow that way. Which I do argue regularly. But in this case, the way they have built themselves out, I don't think they will win on that.

I happen to like this about Altman. Get on Reddit. Leave the comms people in the conference room and just say what you feel. Just go. Even if it's not 100 percent true, and then people like us can sort of break it down and explain to the people at home what we think is real. Thank you, Sam. Thank you, Sam. And your merry band of OpenAI gentlemen signing on to Reddit. Right. The nobles of the court bounce their way down to Reddit.

Okay. Back on the rails. So here is from TheorySudden5996. They say, let's address this week's elephant, DeepSeek. You know, what do you think about it? Sam Altman says it's a very good model. We will produce better models, but we will maintain less of a lead than we did in previous years. I mean, that to me is the biggest confirmation that what DeepSeek did really evened the playing field, and Sam saying it himself. Yeah, no, I mean, honestly,

On one hand, recognizing and trying ... The rational point of view would be they see DeepSeek, they see R1, and they're just recognizing that this is the state of the industry. Potentially, we need to go open source. Potentially, we will not have as dramatic a lead over our competitors, but we're certainly going to get into. Then there's GPT-5 comments down the road.

And they obviously are still trying to sell this idea that GPT-5, whatever it is – and it's not going to be GPT-5.0, it's just going to be GPT-5 – is going to be this earth-shattering AGI, whatever it is. They still have to sell that idea, and they're trying to. I see you've been deep in the Reddit AMA, which makes my heart warm. It really does make me happy. Where else can you get Sam Altman unfiltered? Actually, everywhere. Yeah.

Okay, let's talk about chain of thought. So one of the most interesting things that DeepSeek will do when you go to deepseek.com is it will show you exactly the way it's thinking through a reasoning problem when you use its R1 model. And OpenAI just gives you some bullet points. I've had so much fun trying to work through this chain of thought, really seeing how the model thinks. And I think whether the model is actually thinking or just computing is like a pretty fun debate that we can have in what is thinking.

Maybe we'll come to that at another point. But the Redditors are asking, can we please see all the thinking tokens? Here's Sam Altman. Yeah, we're going to show a much more helpful and detailed version of this soon. Credit to R1 for updating us. So, okay. So, they are literally...

admitting out loud that they've been pushed by DeepSeek. And Kevin Weil, the product head, says, we're working on showing a bunch more than we show today. The problem is the more we show, the more that we can get distilled. They're obviously still smarting that, in their minds, DeepSeek has distilled some of their models and put it into their own. I think this is great for the industry. I think this is really good.

No, okay. So I had said one of the greatest tricks, UI tricks of all time, was OpenAI and the text streaming to make you feel like the computer is thinking.
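That streaming trick is easy to fake. Here's a toy sketch (a simulation for illustration, not OpenAI's actual serving code) that drips an already-finished response out word by word:

```python
import time
from typing import Iterator

def stream_words(response: str, delay: float = 0.0) -> Iterator[str]:
    """Yield an already-complete response one word at a time.

    The pacing (delay between words) is what creates the sense that
    the model is 'thinking'; the text itself exists up front.
    """
    for word in response.split():
        time.sleep(delay)
        yield word

# A blocking API call hands back the whole string at once; the streamed
# version emits the exact same content, just incrementally.
full_response = "The model is not thinking; it is sampling tokens."
streamed = " ".join(stream_words(full_response))
assert streamed == full_response
```

Real chat APIs do stream tokens as they are sampled, but the UI effect, the drip, is the same either way.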

I think DeepSeek has taken the next greatest UI trick in terms of showing the chain of thought processing. And again, as you said, we can maybe save the what is thinking for an ayahuasca retreat or something like that down the road. But I think – Live on air. Live on air, of course. But without getting too philosophical, it's –

Again, kind of a party trick in the sense that these models always go through some logical iterations to get to that output. There's always – and you said in the end it's actually just math. So the text representation of that you're seeing is still some kind of party trick here, let's say. Like it's still a computation that's happening. But DeepSeek doing that – and I've seen Writer.com, which is an enterprise generative AI tool that we use –

Like they had sub-questions and it showed you the different types of questions that it was asking to get to the final answer. So other tools and models have done this. DeepSeek brought this to the general population. And it's brilliant because it makes people even more attached to these type of tools. Like it makes them really think that there's thinking, which makes them more usable. But in reality, like I don't know if you tried this.

When you saw within that chain of thought something that didn't quite work in the way you wanted to,

You can't just tweak that step. You're starting from scratch again. So, yes, I'm sure there's like some Twitter thread about how to prompt engineer your way out of chain of thought reasoning. But in reality, it doesn't really give you that much help. Yeah, but the chain of thought is really very cogent and so fun to read through. And you see the model be like, nah, maybe that doesn't work. Like especially one of the cool things about DeepSeek is just like it's very casual, the language and not so formal. So whatever they did to make that work.
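For what it's worth, the open-weights R1 releases emit that reasoning inside `<think>…</think>` tags ahead of the final answer, so separating the visible chain of thought from the answer is a small parsing job. A sketch, assuming that output format:

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Split R1-style output into (chain_of_thought, final_answer).

    Assumes the reasoning arrives in a single <think>...</think> block
    before the answer, as DeepSeek-R1's open-weights models emit.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        return "", raw.strip()  # no visible reasoning; whole thing is the answer
    return match.group(1).strip(), raw[match.end():].strip()

raw = "<think>2 + 2 is basic arithmetic. Nah, no trick here.</think>The answer is 4."
thought, answer = split_reasoning(raw)
assert answer == "The answer is 4."
```

Which is also why, as noted above, you can read the reasoning but not edit one step of it: the tags are just text in the output, not a control surface.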

It's pretty impressive. All right, let's talk about Stargate. So TheorySudden5996 asked, how important is the success of Stargate to OpenAI's future? Which is, again, for listeners, the $500 billion attempted

infrastructure build by OpenAI. Announced at $500 billion; more likely tens of billions, which is still impressive. Kevin Weil says, yes, everything we've seen says that the more compute we have, the better the model we can build, and the more valuable the products we can make. We're now scaling models on two dimensions at once, basically the traditional LLM and the reasoning models, and both take compute. So does serving products for hundreds of millions of users.

And as we move to more agentic products that are doing work for you continuously, that takes compute. So think of Stargate as our factory for turning power and GPUs into awesome stuff for you. Such a product-guy response. Such a product. Also such a trained communications response. Well, he spent years at Meta. He's been on this show. So that's just how he operates. Yeah, yeah, yeah. Even on your episode last Wednesday with the VP of Omniverse and Simulation from NVIDIA, it's interesting to me

how kind of like dogmatic people are about more compute means more intelligence, better outputs, better products. And I mean...

Kevin here is going down that same path. More compute is better. And I like that people are starting to recognize, and it's kind of a nice way of putting it, two dimensions. There's going to be the raw compute in terms of getting better output, but also coming up with new techniques and ways to actually drive that output. But in reality, I still think DeepSeek has shown us – and

the number is not $5 or $6 million to actually build and train the whole model; the actual training part of it was $6 million, and we can probably take that at some face value – that the future does not only mean more compute means better products. And I think the industry, at least a lot of people, but the people with the most vested interests, are still living by that rule right now.

Yeah, well, I'm a believer. I think it's right. Let me tell you a quick story: once I tried to get Kevin Weil to leak me some information from Facebook, and I've never been shut down by somebody so quickly in my entire life. So that's just – that gives you some context to his response. All right. Let's talk about GPT-5. Ranjan, I feel like this brings you great joy. So do you want to take this one away? So Reddit user Concheria had asked – What do you think that means, Concheria? Yeah.

Like the shell and then turning it into a name. They have a bunny username. They have a bunny ninja avatar, whatever you incorporate that into. I'm sure we'll find out after the fact. Meaning if you're listening, Concheria, Conqueria, please let us know the etymology of your Reddit username. Now I'm starting to feel weird reading these names because I'm like I'm sure we'll find out at some –

Dirty term after this. Anyway, let's just read it. We're canceled. But so they ask, will there be an update to advanced voice mode? Is this a focus for a potential GPT-5.0? What's the rough timeline for GPT-5.0? I did like that they just kind of by default called it 5.0, showing how confusing. And there's been a lot of kind of like almost hilarious –

aggregations of what the series of model names has been from OpenAI. And I think this shows how ridiculous it is. So thank you, Sam responds. And he says, updates to advanced voice mode coming. I think we'll just call it GPT-5, not GPT-5.0. Don't have a timeline yet. So at least one.

And they're streamlining the way they're marketing this to GPT-5, which I think is a good thing. At least he didn't say AGI. I'll give him credit because actually they need to announce AGI before we get to GPT-5. So I think that's why that did not make its way into there. But-

I don't know. There was a long period of time where for OpenAI to succeed, they had to get to GPT-5, whatever that would be. And I think they've actually, to their credit –

gotten to a point where that's not necessary anymore. Like the battle of the next year or two could just be in the operator and deep research and whatever other product, which makes me happier than anyone that people are actually competing on product now. But I think it shows that the fact that he's a bit cavalier about this after, what was that tweet?

about, like, night sky or something like that? Oh yeah. So Sam wrote this really weird cryptic – crypto? cryptic, lord almighty – cryptic poem that, you know, made us think that something big is coming. But I think he was just writing a poem, or maybe he had ChatGPT write him a poem and he was sharing it with the rest of us. Uh,

But so whatever Sam was trying to communicate in the past or at least kind of allude to, now I kind of like it. That it's just no timeline. We'll call it GPT-5, but let's talk about other things. Yeah. Again, I doubt we're seeing GPT-5 this year.

or maybe ever. It'll just be versions of- February 6th today, I think we see GPT-5 this year. Oh, yeah. End of the year. I think if they don't have any killer runaway products, they kind of have to. They have to release something. And again, like,

whatever 4o became, 4o mini, whatever, they could have just called one of these GPT-5 and tried to like build some hype around it, and we'd all go along with it. So at a certain point, if none of those products that they are releasing – and we are both paying $200 a month right now for these new products, so maybe they'll be okay and they don't need to. But if the pressure comes, I think they have to release something.

Okay, so you've mentioned multiple times that we were paying $200 a month. I mean, when do I ever spend $200 a month on SaaS? We're talking about it. I'm talking about it. We did it so we could tell you, everybody at home. For you, for our listeners. What OpenAI's ChatGPT Pro is all about. And so we will skip our planned segment on ChatGPT Search and tell you about our experiences giving OpenAI so much money to use ChatGPT. So OpenAI

now allows you to spend $200 for a few things: unlimited use of ChatGPT; their AI agent Operator, that will go use your browser and do tasks for you; and then something that just came out this week, which we teased in the open, a new ChatGPT agent called Deep Research. By the way, amazingly, they decided to take the exact name of the similar product that Google has.

We'll cover that in a bit, but I found that fairly shameless and wrong. Let me read the story. OpenAI is announcing a new AI agent designed to help people conduct...

In-depth, complex research using ChatGPT, the company's AI-powered chatbot platform. This is from TechCrunch. I love how reporters still have to write that ChatGPT is the company's AI-powered chatbot platform. Just in case you didn't know, TechCrunch. You're writing to a tech audience. What are you doing? Come on. Appropriately enough, the bot is called

Deep Research. OpenAI said in a blog post published Sunday that the new capability was designed for people who do intensive knowledge work in areas like finance, science, policy, and engineering and need thorough, precise, and reliable research. It could also be useful, the company added, for anyone making purchases that typically require careful research, like cars, appliances, and furniture. Um,

We have both attempted this. It is, in my mind, the best research agent that you could potentially use, that you could use right now. And Ranjan, I know you've been deep in the weeds, so I'm curious what you've been using it for. Last week, you said Operator was interesting, not worth the money. Is deep research worth the money?

Yes. I will say, yep, yep. It is – it's fantastic. It is incredible. It's – and I said last week, Operator is mesmerizing and useless. Deep research is fantastic.

So basically, market research-oriented questions asking what are e-commerce trends within a specific category. Look through Reddit. Look through different research reports. Look based on geographies. Even asking an initial question, it will break down and ask you back good questions as though you're talking to an actual research analyst. And then it will provide you

an incredibly well-sourced number of bullet points, paragraphs with hyperlinks embedded. It does an incredible job with this and it makes smart arguments the way you would expect from, I don't want to say PhD level person because I don't even know exactly what that would mean in terms of intelligence. But overall, this to me was huge and it did a good job. My kind of litmus test on all this is I think there's a lot of generative AI where people

products come out and people, rather than looking at what is available today, talk about what it could be in the future. Operator definitely fell into that category.

This, on day one, on day zero, actually delivered what it promised. And, I mean, honestly, you have to figure if you're in any strategy-oriented role, just any business-oriented role, research-oriented role, this becomes incredibly valuable. Yeah, I've used it for a number of interesting things. I was on CNBC Tuesday to talk about Google earnings. So I asked it to give me an entire prep document about the state of Google. Is it that up to date? Yeah, so this is the cool thing about it.

It searches the internet and it will, if you ask it, it will give you current information. And so it like pulled out like the projected ad spend. And right now it just has text, but over time, I anticipate OpenAI will enable it to put charts in there, which I think will be fascinating. And I thought, wow, like,

I won't prep for CNBC without this again. It is really, really good. I think they're just going to end with, I won't prep for CNBC, but... Yeah, no, I do my prep. I work very hard on that and on this. And I also had it give me a prep for the podcast.

And so I actually took last week's prep document. So for folks, we spend the week just kind of dropping stuff in a Google Doc that we find interesting. And now on our Discord also, which has been quite fun. And I just downloaded last week's prep doc and

put it into the query. And I said, use this as a reference. You can now go to the internet and search our show and see what my episodes with Ranjan look like and give me some topics to talk about. And at first, like it went super broad and gave me like what I would do if I was doing like an AI overview podcast. And I was like, no, I need only information that came after February 2nd. And so of course, like the top AI story of the week,

is deep research. So it gave me... It talked about itself as the top... Oh, yes. Oh, you're good, deep research. You're good. Look, it has selfish tendencies and motivations. So it does really feel human. That's AGI. I think it's AGI. That's human. It's AGI. Definitely. And then it really broke down. Next thing, Alphabet earnings, which again, I was on CNBC to talk about. And

And, you know, it says AI spend soars amid DeepSeek challenge. And it talks a little bit about what we're going to talk about in a bit, just the CapEx that Alphabet is going to go through to try to build AGI. Or AI, right?

So I found that to be very good. And then I also asked it to sort of give me a report on like how to enroll in healthcare in New York State. And it just- Good luck on that. Yeah, it was, I don't think AGI- AGI is not gonna solve that one. I think super intelligence will help us with that on that one. Well, I have a question. Yes. Is there a moat for this for OpenAI? Is it, it's a really, really well done product. And again, going back to our, my general thesis that OpenAI's strength lies in the product. Yeah.

And the models shouldn't matter. And hopefully they recognize that too. And if they'd only invest in the product more, but it's a good product, but can DeepSeek or Google or whoever else? I mean, Google has a product named Deep Research. We just don't have access to it. I don't even know. Oh yeah, we do. Is it public? It is public. Oh, have you tried it? I tried it today. I had to put together a similar episode plan. Who won? OpenAI won. Okay. Although Google was good, but...

But OpenAI won. So the question of is it a moat? No, I don't think it's a moat. Yeah. Okay. I switched my laptop over here because I'm about to read a lot and I don't want to face away from the camera for the entire thing. I think that's right. And we can proudly see our Apple devices here. That's right. Even though Apple intelligence sucks. Apple intelligence. Yeah. Oh, so I did buy a new Mac computer this week. Of course we did. And I went to the Apple store and they're like –

And have you heard about Apple Intelligence? And I'm like, oh, God, yes, I have. By the way, Vision Pros, nobody. Nobody anywhere close to them. Are they still up in the Apple Store? They're up, but they used to have a special section, and now they're off in a corner. And legitimately, no one cares about that. I don't know what I would do if I walked into the Apple Store and the sales rep with a smile on their face came up to me and said, have you heard about Apple Intelligence? I might be arrested. I might be arrested. Sure.

Calm, cool, and collected. All right, Siri, Siri, calm, Siri, calm. So the post that I want to read is from Ethan Mollick. It's called The End of Search and the Beginning of Research. He's a Wharton professor that's actually quite good on AI, and he's been on the show, which I have to mention every time we cite his work. It's just part of the contract. Part of the contract. And he makes this point that what we're seeing right now is this combination of

a new mode of AI interaction called reasoning, which we talked about, and agents. So let me read some of this because I do think it's so good. He says...

For the past couple of years, whenever you used a chatbot, it worked in a simple way. You typed something in and it immediately started responding word by word, or, more technically, token by token. The AI could only think while producing these tokens. So researchers developed tricks to improve its reasoning, like telling it to think step by step before answering. That approach, called chain-of-thought prompting, markedly improved AI's performance. So that's like the move from traditional LLMs to reasoning.

He says,

Because reasoners are so new, their capabilities are expanding rapidly. In months, we've seen dramatic improvements from OpenAI's o1 family to their new o3 models, and that's where DeepSeek factors in. DeepSeek's R1 model, which everyone went crazy about last week, was a reasoner. So basically what's going on with this deep research is

He calls it, he says, deep research is a narrow research agent built on OpenAI's still-unreleased o3 reasoner, with access to special tools and capabilities. You can see that the AI is actually working as a researcher, exploring findings, digging deeper into things that interest it and solving problems like finding alternative ways of getting access to paywalled articles.

And it goes on for five minutes. Sometimes it can think for five, ten minutes. He ended up getting a 13-page, 3,778-word draft with six citations and additional references in response to one of his queries. This is the point I'm trying to make by reading this.

I think what we're experiencing with deep research and the reason why it's even a question that it's worth paying $200 a month for is because it is an implementation of these new AI methods that we're starting to see with deep research. We're starting to see with R1. And it might be that we're just at the cusp of something very interesting happening in AI with this reasoning moment. What do you think about this? And do you think that

I am reasonably excited about it the same way that Ethan Mollick is. No, I completely agree. And this is to take – I don't want to be cynical about it. But to me, I'm incredibly, incredibly excited about, again, watching what deep research was able to do and what that means for certainly any kind of like just general research type stuff. But also – and OpenAI very –

kind of, you know, from a marketing perspective, shoved in, you can research couches because they want to try to have some more commercial aspect to this or more consumer-focused aspect. But this is going to happen. Like, this is going to... There's no question to me that

These type of models, these type of actions will kind of reshape what the web is, the way we interact with it, the way we interact with most apps. And I think that's good and that's going to completely rebuild so many areas and so many things. I think the area to kind of maintain some caution is –

What is the word agent mean? What is the word agentic mean? Is this agentic? Is this something else? I think that term is still being thrown around a little too cavalierly because like

Now they've kind of gotten it to where a simple chatbot query is an agent, which I don't think is necessarily the case. Just seeing chain of thought processing from DeepSeek isn't agentic. But deep research showing you that it's going into a bunch of different websites and showing you which websites it's going to and showing you what it's extracting from those websites and how it's compiling it.

I think that's huge. I think that's incredible in terms of showing people this is possible. To me, the biggest change that I think needs to happen is letting people interact within that process. Because right now, you kind of like put in the prompt, let it think for 20 minutes sometimes, and then get something and then have to revise it. But imagine you can actually...

in the middle of all of that action, say, actually, wait, I don't like that. I like this. I think that will be a huge change in terms of how useful this stuff is. Not only that, it's going to learn your tendencies. And the more you interact with these things, like right now, the memory is just something that they don't have. And that memory is coming. So they'll learn your tendencies. And next thing you know,

you're going to have like a research assistant that really knows everything that you want. And just to think about how much room there is to improve, there's already so much going on now.

Mollick is pretty level-headed, again, a Wharton professor who's deep into AI. He says, these systems are already capable of performing work that once required teams of highly paid experts or specialized consultancies. These experts and consultancies aren't going away. If anything, their judgment becomes more crucial as they evolve from doing work to orchestrating and validating the work of AI systems.

The labs, the research labs he means, believe this is just the beginning. They're betting that better models will crack the code of general purpose agents, expanding beyond narrow tasks to become autonomous digital workers that can navigate the web, process information across all modalities and take meaningful action in the world.

It's pretty high praise. It is. I think to me, I was thinking, especially on that shopping side of things and like thinking, okay, management consultants potentially replaced or that industry certainly changes us having to do lots of research in general, but we have very specific parts of our job and profiles.

for the larger population, like where does this start to apply? And the shopping thing, it's still weird to me because how much of that does someone really want to be automated? Like is the process that the agent is going through, is that actually the joy that a person experiences? Is going around and clicking on different websites and reading through the reviews, is that annoying and a pain or is that the part of it that people actually enjoy?

Do you like online shopping? I do. I do. Yeah. There is enjoyment. And also, like, you'll never feel that emotional attachment to something you get if the bot just got it for you. Yeah, exactly. Like, it's the act of doing the shopping or doing the research sometimes is – am I getting into –

it's the journey, not the destination right now. I think it's also the destination, but there is definitely joy in sort of finding cool stuff to go visit and then going and doing it. Like if a bot's just doing that for you,

Then it's just like, all right, well, I could have just Googled it and went to the first result. Yeah. So I think right now – and don't get me wrong. The research, consulting, strategy, journalistic, this is a pretty big opportunity in market. I'm not downplaying that at all. But still –

who is using this and how, especially to expand outside of that. It's still not a trillion dollar market. I mean, to get to that, what are the use cases for agents? Because again-

Apple intelligence cannot find our flight information in our email when you ask Siri, which they pitched us as agentic, and that's my momentary Apple intelligence bashing. But what are actual agents being used for in everyday life and for normal people? People have not been able to articulate that case well, and I'm still waiting for that to happen. It might have to be humanoid robots, going back to the NVIDIA conversation. All right, all right. Humanoid robots are always...

An easy sell, I think, for anything. Everyone's building them. But let me ask you another question about what Ethan is saying, which is basically that consultancies aren't going away and that the orchestration of AI is going to be more important than what the actual reports are.

I don't know. I've always been on the side that AI will be creative in the workforce and not destructive. But I think you have to look at this with clear eyes, and that is that there are going to be jobs that just completely go away, even if more jobs are created over time. And it seems to me like this stuff is going to –

Maybe not get people fired, but certainly make a company think twice before hiring. No, 1,000%. Actually, I think it was from Goldman Sachs like a couple of weeks ago. They were talking about how an S-1 financial filing, which is an enormous document but was always kind of a non-human kind of like really plug-and-play type of document –

used to take two weeks and 16 bankers and now can be done in like five minutes. And again, that makes complete sense to me. You have a bunch of data feeds and AI can aggregate it and you just review the entire thing. Like that's going away. Management consulting, all the research and grunt work goes away.

goes away and that's good. And I mean, you can imagine out of all the job displacement, the least sympathetic group when we say the bankers and consultants are under threat. Pour one out for Bain and the bankers. Yeah. By the way, one of the interesting things, I'm sure you noticed this too, it's way more accurate than it's been, way less hallucinations. It gives you sources, you can click through to the sources and

And the numbers are good. Yeah. Actually, that's a really good point. Everything I clicked through was 100% correct, which was almost shocking to me in terms of the output. That's huge. That's huge. This meaningfully changes especially any kind of job that involved opening a lot of browser tabs and copying and pasting text and synthesizing that text.

That is completely changed and there's no way to argue. I get saying orchestrating and validating will keep certain populations like at least a little less scared. But this is big. This is huge. My internship from 2009 just disappeared. I mean half my life has disappeared right now.

So it's not just OpenAI. Google also had a release this week. They released a set of Gemini thinking models. From TechCrunch: Google is bringing its experimental reasoning artificial intelligence model, capable of explaining how it answers complex questions, to the Gemini app. The Gemini 2.0 Flash Thinking update is part of a slew of AI rollouts announced by Google this week. Also, talking about the CapEx, the company is planning to spend $75 billion this year on expenditures

like growing its family of AI models. That's a considerable jump from the $32.3 billion in CapEx it spent in 2023. That's a lot of money. $75 billion? It's like when Satya said, I'm good for my $80 billion, right? Sundar is saying, I'm good for my $75 billion. That, of course, had me go check where NVIDIA is right now. So it's down 12% since the DeepSeek announcement, and it's back up a little bit.

The story of compute, the story of Nvidia, the story of chip demand, I think the one thing that was interesting about the last week or so

I mean, Mark Zuckerberg and Meta did not show that they're moving away from this CapEx spend. Google's coming out and saying it. So it's clear that the tech giants are still taking this path. And OpenAI still wants this path and saying Stargate is very important.

So I think it's interesting because the entire big technology industry has a vested interest because if compute and CapEx are critical, then only they can win. So this is going to be really interesting to watch play out.

That as long as compute and CapEx are critical, they're the winner. So they're going to say that. They're going to keep spending. And if someone – that's why I still think – and we talked about this. DeepSeek was such a big story and remains a big story because it showed that that entire narrative can just collapse on its own if smaller players come out and do interesting things.

That's right. And by the way, I went out on CNBC and I said, I think Google is on its way to being the best position company in the AI race. And of course, they promptly missed their cloud numbers and went down double digits. But I think they do have so much potential. The one thing they really need to fix is the way that they name their models. So

If you go to Gemini right now, there's Gemini 2.0 Flash. There's 2.0 Flash Thinking Experimental. There's 2.0 Flash Thinking Experimental with apps. There's 2.0 Pro Experimental. There's 1.5 with deep research. At least we know what that means. 1.5 Pro and 1.5 Flash.

Do they do this? Is it just a joke to them? Don't change, Google. Don't change. I love it. I love it. I want Google to never stop naming models. You know, OpenAI, I'm a little disappointed in them. I think their model naming convention is not good. I want that from Google. If they ever had a perfectly streamlined suite of products with a beautiful name, I would question everything. Remember Bard?

None of this stuff works. They have a thousand chat apps. Nobody uses Google Chat. They need to spend. No, no. G chat was the greatest product of all time. And they, I don't even, it's called like Hangouts Chat with Meet or something like that right now. I'm ready to put my head right through the table. I am.

They're spending $75 billion this year. Could you spend $500 million and buy an ad agency and just name this stuff like normal human beings? Get a subscription to Gemini 2.0 Flash Thinking Experimental with Apps and ask it to name your models for you.

But I don't know. With all the turbulence and volatility in the world, Google giving its models and products really inconsistent names just makes me feel just a little more at peace. This makes you happy. Don't change, Google. Don't change.

So speaking of AI and job loss, there was a great New York Times story about Klarna over the weekend. Klarna is, of course, a payments startup. It says, why is the CEO bragging about replacing humans with AI? Ask typical corporate executives about their goals in adopting artificial intelligence, and they will most likely make vague pronouncements about how the technology will help employees enjoy more satisfying careers or create jobs.

as many opportunities as it eliminates. And then there's Sebastian Siemiatkowski, the chief executive of Klarna. He has repeatedly talked up the amount of work they have automated using generative AI.

Okay, yeah, that sounds familiar because he was on the podcast and he was talking about how much work they automated with generative AI. Oh, that's a story that catches my mind. It catches my eye. Let me scroll down. Okay, so this Times story, as usual, cites one podcast and another podcast. And then it says this.

When the host of the Big Technology Podcast asked why he was so intent on touting Klarna's AI prowess, Siemiatkowski said partly it was good for humanity. We have a moral responsibility to share what we are actually seeing, that we're seeing real results and that it's actually having implications on society today. Then he acknowledged that another part of the motivation was self-promotion. For sure, we are regarded

as a thought leader. I was pretty stunned to see The Times name our show in the story, especially because they had so many podcasts be nameless. So thank you, New York Times, for citing us. And I want to point out this was Ranjan's question, because we spoke right before and he goes, ask him why he's talking about it. Very interesting question, Ranjan. Thank you for that. Well, I'm glad The Times is finally on it: what is the motivation behind bragging about replacing humans with AI? I think...

Again, I'm glad. I genuinely am glad that they're asking this question because the marketing impetus behind all these pronouncements has to be questioned. And that certainly applies to Sam Altman and that certainly applies to OpenAI and our entire AMA discussion from earlier. And I did love that moment that there was this big kind of pronouncement, literally, the good of humanity. And it's self-promotion for sure. We're regarded – and even –

called it a thought leader, which is normally I feel maybe some people out there use that seriously, but most people I know do not use that as like a serious term. And he kind of just was like, yeah, we're a thought leader. We're self-promoting. They are still, I believe, looking at IPO. So yeah, this is a question that –

If people start pushing on more, start asking more, not why is someone saying this, but – or sorry, asking why is someone saying that, not just the content but the motivation, I think the whole AI industry needs to ask that question for every announcement. Absolutely. I was stoked to see that story. I was stoked to see the headline be the exact question that you asked, and I was surprised and grateful that we were mentioned. Yeah.

Not by name name, but by podcast name. Okay, I will take it. Podcast name more than host name is what makes me happy. How do they...

Not just say Alex. How do they not just say Alex Kantrowitz from the Big Technology Podcast? It would hurt them. They would really, they would have to cry. Oh, come on, guys. So we are on the cusp of the Super Bowl. If you're listening to this, either the Super Bowl has happened or it's about to happen or maybe the Super Bowl is happening and you're one of the few humans on earth that's listening to the podcast as the game's going on, in which case we appreciate you. Thank you for choosing good content.

Guess who's going to be in the Super Bowl? Of course, the Chiefs and the Eagles, but also OpenAI and Google. This is from the Wall Street Journal. OpenAI is set to make its Super Bowl ad debut. OpenAI, the artificial intelligence company behind ChatGPT – again, I just love the soul coming out of the reporter having to write that explanation – is expected to air its first TV commercial during Sunday's Super Bowl. What?

OpenAI's brand took off in late 2022 when it launched its wildly popular chatbot, ChatGPT. The big game ad is by far OpenAI's biggest foray into advertising as the race to build the world's most powerful AI technology and win over users intensifies. And The Hollywood Reporter also says about Google, Google bets that the Super Bowl can turbocharge Gemini's ad business.

Google is planning a major Super Bowl ad for its Gemini AI product line, including a 60-second ad in the second quarter of the game and purchasing 50 different 30-second ads, one in every state, each one spotlighting a local business that uses its AI software. That's smart. I was reading this news almost instinctively saying, spend that money on buying GPUs and scaling your models. However, I think this is brilliant on behalf of OpenAI and smart on behalf of Google and

You got to get your products in the hands of people like we talked about at the beginning of the show. People have to use ChatGPT. People have to know ChatGPT, and millions of people are going to use and know and talk about ChatGPT, especially if the ad is half decent, after it's in the Super Bowl. I think this is the game. It's, whatever, $5, $10 million really well spent by OpenAI.

I think for OpenAI specifically as well, I mean, they clearly have been moving towards a more formalized professional marketing function. In December, early December of last year, they had hired the former Coinbase CMO who had been at Meta for 11 years and was global head of brand and product marketing for Instagram. Actually, the whole suite of products. So serious marketer. So I'm very curious to see what they're going to do. The challenge for me is,

is twofold. One,

The AI ads to date we have joked about have been terrible. Apple Intelligence, not to go back there, but if we remember, they had all these ads of basically people kind of like not wanting to pay attention to people, not as important to them, so summarizing their content in real time. But even Google had a disastrous ad in the Olympics, if you remember, where it's like a little girl wants to write a letter to her favorite athlete and the dad uses Gemini to do it.

Like how tone deaf – like it's still one of the greatest things I saw. I was like these companies need to just hire one person who's just normal and sits in the corner and does nothing but just gets shown the ad and says this is terrible. You love this idea and I don't think it's a bad idea. No, literally just in the corner. They get paid a lot of money and they just sit there and, OK, that is just terrible. OK, normal people will think that's good. Man, I go back and forth though because the other side of this is –

This generative AI has a branding problem. Like when I still talk to...

Most non-tech friends and family, they still associate generative AI content as bad. And like that's the whole joke. It's like, oh, that's so chat GPT. Did chat GPT write this? Yeah, did chat – no, no. I mean it's still the entire – Which is a fair insult. Which is. But it's – We used to call kids Wikipedia when they were saying generic stuff. Generic stuff. That's what I mean. And to me like –

Actual products, if you know how to use them, are so far beyond that stereotype that AI-generated content is, like, overly formulaic. And that's, like, two-years-ago ChatGPT. So there's a clear branding problem. How do you solve this when people –

have a negative connotation of the technology, have a negative connotation of you, the company. Like, it's got to be a damn good ad. And I think the crypto bowl of 2022, I believe it was, January, with the Larry David ads, the Coinbase ads, remember the bouncing? They're brilliant ads and they're really well done. But like...

I don't think it helped the – in fact, it certainly was not a good moment afterwards for the industry. Well, it certainly got a lot more people to put their money in crypto and then they got the rug pulled. Oh, actually, that's true. But this is different. Like I don't think we're going to have the same scam. No, no, no. To me, that part isn't the scam. But how do you solve this branding problem? I think like if you're Kate Rouch, the CMO of OpenAI, you're sitting in a room. You're like, we have this branding challenge. I hope they recognize it. How do we overcome it?

I am very excited to see what this commercial looks like. What's your best guess of what it's going to be? I'll give you mine. All right, go. I think maybe you're going to have Shaq and Charles Barkley sitting at the NBA desk, and they're saying nasty insults to each other that ChatGPT is giving to them. Or maybe something voice. Maybe it's just somebody driving in the car and having a conversation with ChatGPT. And it'll be like a Snickers commercial. It's like, bored? Not bored. Bite into it, ChatGPT. Oh.

All right. Hold on. Do you know the most successful kind of like even skeptical AI thing that converted AI skeptics I saw? I don't know if you saw like ChatGPT roast my Instagram profile. Oh, yeah. Where you literally just screenshotted your grid and then put it on. And that was a moment that I think a lot – I saw a lot of people being like, wait, this is genuinely creative. It's not formulaic. It's actually fantastic.

funny and interesting and creative. So I think that would be mine, Sam Altman or other famous people letting their Instagram profiles get roasted by ChatGPT. That's my ad. Yeah. Okay. So basically we both agree that it's some form of AI roasting humans. AI roasting humans. And it's got to be funny. It's got to be good. I don't think

Trying to tug at heartstrings in any way. Google is going to do that. Google is going to do that. I'm sure. It's going to be like a kid and a grandmother just trying to communicate with each other and then Gemini will solve it. But the weirdest – like I introduced my dad to Gemini voice and he really – he has Parkinson's and has trouble typing into his phone.

And it was this emotional moment. Like genuinely, that could have been the commercial right there. That's going to be the ad. Yeah, it could have been. And they're still – that is sitting there and somehow it's still going to get screwed up. They'll mess it up. Somehow. They have made some really beautiful search ads before in the past. The greatest tech ad of all time. And I realized how old I was when I brought that up to some younger people. Parisian Love.

It was from 2009. It's a Google search ad where – and it really made Google search emotional – someone goes through the process of studying abroad, falling in love, getting married. It was amazing. If they can pull off the 2025 Parisian Love, I'm betting it all on Google. If they can have a good ad on that. I wouldn't bet against their ad agencies. OK. We need to get out of here. But who do you think is going to win in the game? I'm a New England Patriots fan.

I don't want Mahomes to three-peat, so I want the Eagles to win. But my God, the Chiefs, somehow, they always do it. So I will grudgingly bet that the Chiefs will win.

And I am a Jets fan, and I want Tom Brady's legacy, especially his and Bill Belichick's legacy, to fall apart. So I'm taking the Chiefs. All right. I mean, I'll take the Eagles just to take the other side, and that's where my heart lies. What's your prediction on what happens at halftime? We've got Kendrick Lamar coming out. I have this feeling that Drake is going to come out. They're going to hug, and then they're going to both take out fake guns and shoot them, and it's going to say, Bing.

Wait, as in? As in? The search engine. As in Bing? That could be the most aggressive call of all time. And if you are correct about that, I mean, it's time to retire. They should hug it out on stage. I do like this. If they can do it, world peace will happen. That literally, We Are the World comes on just like Stevie Wonder at the Grammys and Kendrick and Drake sing it together. Canada and the U.S., friends again. Yeah.

Let's bring peace at this Super Bowl. Peace to all of us. Yes, as the Eagles and the Chiefs go at it. That's right. All right. Well, Ron John, great to see you in person. This has been so fun. This has been fun. Let's wave to the people at home.

All right, everybody. Thank you for watching us or listening to us. We do this every single Friday, breaking down the week's news. Sometimes we break some news. And we hope you join us. If this is your first time watching the show, you can subscribe to us here either on Spotify or whatever app you use to get podcasts. And...

On Wednesdays, I'll do one-on-one interviews with people in the tech industry, and then Ranjan and I will be back every Friday. So that'll do it. Thank you for listening, and we'll see you next time on Big Technology Podcast.