The CEO of Alphabet and Google, Sundar Pichai, is here with us. Let's welcome Sundar to the stage. Thank you so much for being here. We talked last year, about a year ago, on The Circuit, so it's really good to catch up. Good to see you again. Starting with the most important question, did you come in a Waymo?
I would have loved to. We're still working on making sure they are safe and can get through freeways. We're making great progress, but hopefully same time next year, I can do it all the way from Mountain View. All right, I'm going to hold you to it. So I just want to start with a vibe check post-I/O. I feel like we saw a slightly more confident and cohesive Google. How would you describe it? Like, are you getting better at choosing your own dance music?
Look, I think when you undertake a set of things, it takes time, but internally we have known all of this was in progress. We've been training Gemini, releasing versions every few months. I think 2.5 was a real breakthrough in terms of capabilities, and it's at the frontier of where the models are. And so putting that in our products across our suite of products, I think that's what makes the story come alive.
So candidly, I'm using chatbots more and I'm using Google less. Maybe you are too. What is the fate of search in a world of AI agents and personalized answers? Is it evolution or extinction? Look, people have been asking this question now for a couple of years. You've had these chatbots scale up to hundreds of millions of users. We've grown in queries. So I think
It feels very far from a zero-sum game to me. To use other areas as a parallel, when TikTok came in, everybody started using TikTok. YouTube grew and did very, very well in those moments too. So, you know, I do think search is very, very good at what it does. Empirically, people value it for what it does, and they're actually showing it by using it more.
So investors are clamoring to know, but you haven't shared the specifics: how much money are you pouring into AI, and how much money are you actually making from AI? Well, in 2025 our capex is $75 billion, right? So we are definitely investing for the long run. But you know, it's the same investment which powers
businesses from Search to YouTube to Cloud to Workspace to Android and Play to Waymo, right? And to our subscriptions business. We just launched Google AI Pro and Ultra, and it's definitely doing well. I've been pleased to see the reception. So I think it's such a profound technology.
I feel the opportunity ahead is bigger than the opportunity we had in the past. So to double-click there, if I may, what specific new revenue streams do you see increasing, and what revenue streams do you see decreasing? Look, the growth in Cloud: Vertex AI usage is up 40x on a token basis just in the last 12 months. So obviously, you know, we have billions of dollars in providing AI-based solutions, in AI as infrastructure, in AI subscriptions. So there are many, many new businesses.
YouTube and the web more broadly is being overrun with AI-generated content. What effect is that having on Google products and services? Are you finding you need to allocate more resources to filter out the low-quality content? I mean, look, any time there are these technological inflection points, it's always a cat and mouse game, right? So the new technology creates new opportunity. More people can create content.
but you also have a rise in low-quality content. I think moments like this are exactly what Google is good at, right? Separating, finding the needle in the haystack, making sure you surface the higher quality content. We are using Gemini to help improve YouTube's recommendations. So we're using AI to help improve how we deal with content, et cetera. You know, there is good content being created using AI as well. So we're just trying to elevate high-quality content.
There is a lot of anxiety out there about the health of the open web. AI relies on the good stuff; it relies on high-quality content. As you push further into AI-generated content, does that undermine the entire system? Does the web become just a watered-down version of itself? Look, it's another area where people have had questions about the web now for the past 10, 20 years.
I said this during I/O, but if you look at the index, we index the web. The number of web pages that we are indexing is up 45% just in the last two years alone. What about the last two months? People are just definitely creating a lot more content. I do think creation is getting easier, not just on the web; across, if you look at YouTube, et cetera, the amount of content is exploding, right? And so I...
I do think as a content creator you have to think about many, many modalities, platforms, et cetera. But the opportunity space gets bigger. You've said AI Overviews are good for publishers. The publishers we talk to say it's tragic. Studies show dropping click-through rates. I don't always click through when I see an AI Overview. You're answering the question for me right there on Google.
Publishers say they can't opt out or they might be de-indexed from Google entirely. So what is the concrete evidence that this is actually good for them and not just good for Google? A few things. I mean, I think compared to most companies in the world, we take care to design experiences which are going to showcase links, right? We took a long time testing AI Overviews and prioritized approaches which result in high-quality traffic out. I'm confident that many years from now, that's how Google will work. We think the value proposition in Google is people come. Yes, sometimes they may get answers. That was true when we launched featured snippets many years ago. But people come back, people are curious, people are expanding their use cases, and people do seek out sources on the web. And we are going to prioritize approaches which do that.
Obviously with AI Overviews, the quality improves, the context we are giving users improves. What we are seeing is people are clicking and going to a more diverse set of websites, and they are spending more time on average per click.
You announced new spectacles at I/O. Maybe Google Glass was ahead of its time. But I'm curious what evidence you have that we really want to put computers on our faces now and give tech companies even more of our information. Ultimately, you could have asked questions like this about driverless cars. Ultimately, it's the people who choose what they want to do.
You will only succeed in these things if you're building something delightful. I am wearing glasses right now, right? So for me-- They don't have a camera in them, do they? They don't. And they don't have a display in them. They don't have audio in them. But if you could make something which has no additional cost for me, but, when I want it to, gives me important information and makes things more useful, my life gets better. Look, I've been playing around with AR glasses.
In fact, I was talking with a friend who had them on and shot a basketball, and he had an air ball, and it told him that was a bad air ball, right? So look. And he still wants to wear them? He found the experience delightful. You know, his natural instinct was like, what should I be doing better? Right. And so you're going to have these things be a companion, be a coach, all of it. So I think we have to do it tastefully.
But if done correctly, I think people will respond positively. But the Jony Ive, Sam Altman alliance, it seemed perfectly timed to crash your I/O party. Did that rain on your parade a little? And are you at all worried?
Look, I mean, I expect this to be a moment in which there'll be a lot of innovation, right? You know, enormous respect for what Jony has done. And I'm looking forward to seeing what new computing devices are on the horizon.
Mark Zuckerberg has said that almost all of Llama's code will be AI generated very soon. Satya Nadella has said it's as much as 35% for Microsoft right now. Last year, I believe, you said that 25% of Google's code is AI generated. You have over 180,000 employees right now. Is it half that in the future? Look, I expect we will
grow from our current engineering base even into next year, right? Because it allows us to do more, right? I think the opportunity space is also increasing. I just view this as making engineers dramatically more productive, getting a lot of the mundane aspects out of what they do, allowing them to spend time on higher value-added tasks.
But that means it's an accelerator. People will be able to do more, which means maybe we'll create new products, and hence we will need more people, at least in the near term. To me, it looks like we will expand engineering velocity. And that doesn't mean we're constrained in what we will do. We'll end up doing more as a company as well. What about the long term? Look, I mean, horizontally as a technology,
It's tough to predict all the effects of it long term. Just look at the fact that 60% of the jobs today didn't exist in 1940, right? At least per an MIT study by an economist called David Autor, right? So it is so tough to sit at any given time and predict. I was just looking at YouTube in India. There are 100 million channels in India alone.
There are 15,000 channels with 1 million subscribers. Just imagine describing this world to someone in India 15 years ago. It would make no sense. So I don't want to-- I think it's a bit pointless to think that far ahead. But I think we underestimate--
how expansionary this moment is. So this isn't too far ahead, so I hope you'll humor me, but Anthropic's CEO says AI could eliminate 50% of white-collar jobs in the next five years, with unemployment rising to 10 to 20%. Do you agree? Look, I think we should take those concerns super seriously, right? With this technology, to my earlier point about new jobs getting created, you're going to create new opportunities.
I look at something like Veo 3 with video. It's clear to me you're going to allow pretty much everyone in the world to be a sophisticated creator. I can't sit and linearly imagine what the impact of that will be, right? So you're creating all these new opportunities. You have tremendous positive externalities. We will tackle tough, vexing problems, make progress on areas like cancer, all that.
educate a lot more people. As part of that, there could be job displacement. Those are serious concerns. So as a society, you have to think about how you reskill people, what new social safety nets you would need. Those are super important conversations for society to have. But being too specific and saying it will be that many jobs, I just don't see it that way. We have made predictions like that for the last 20 years about technology and automation. It hasn't quite played out that way. But I do respect that; I think it's important to voice those concerns and debate them. I think those are important conversations to have. In two trials now, judges have said that Google is a monopoly in search and partly a monopoly in ads. How do you address the concern that
your AI is built on an existing dominance of search and ads, and that this is just reinforcing the original monopolies? First of all, we disagree with the rulings and we are in the process of appealing them. Look, if anything, this moment has shown, I don't think there's anyone here who is using anything they don't want to use. You look at the success of ChatGPT or any other product; people literally have more choice than ever before.
The reason people use Google is because they want to use it, right? And so I think we continue to innovate. I think choice is good for users. Competition is good for the world. So that's how I see it. You've said the remedies proposed are too extreme. Would you ever voluntarily break yourself up, control your own destiny? Look, compared to what the initial scope of the ruling was, some of the proposed solutions are far overreaching. We'll see how it plays out. I look at the amount of, I mean, we spent over $50 billion in R&D last year. We are one of the top R&D investors in the world. We didn't build things like Chrome overnight; we've invested a couple of decades into these things. We've been doing that with quantum computing for over a decade, too. This amount of R&D, this amount of innovation, I think makes sense to do when we take a long-term view. So that's how I think about it. I've heard you say a few times that you think of AI as an expansionary moment.
But so far it does seem to be favoring tech giants and well-funded startups with access to GPUs and data centers and enormous amounts of capital. Isn't this really concentrating power in fewer hands? And is AI just another winner-take-all game? Look, I'm confident there is a company that's going to be created with AI, just like when the internet happened. Google didn't even exist until many years after the internet happened.
So, you know, there's no doubt to me that three years from now there'll be a company which will be dominant in this AI age which we don't even know the name of today. That's the only way things work in the future. Right? So... You have all this information about us and, I mean, you have our deepest darkest secrets. It's already hard to... Not me. You as a user. Well, yeah, actually you do. Not you personally, but your company does.
It's already hard to trust big tech and now you're going to be using more of this information to integrate AI into more personalized experiences. Why should we trust you now more than ever before? I mean, we earn that trust by, you know, we've been storing people's emails now for many, many years.
But we've handled that content responsibly. I hope. We protect it from bad actors; we fight against unwarranted requests. So I think, more than any other company, we've taken this as a responsibility. People trust us in those moments. And we are only evolving the products in the way people are telling us to, with their feedback.
Look, the biggest thing people ask with Gemini and Gmail is, "Why can't it write more like me?" It's an ask which we are responding to. Google's rolling out Gemini to kids. I mean, I've got enough on my plate parenting with more screen time and social media. Are our kids' best friends going to be chatbots?
Look, there's always going to be a sense of discomfort with new technology. I don't know, like when online dating first came, people would ask questions like, are you really going to meet someone online? So people adapt to these things. Will people comfortably, naturally interact with AI in their lives in the future? Yes. We are seeing it empirically in terms of how people are using them.
Like, I see people coming to Gemini, at an aggregate, asking questions like, "How should I prepare for this interview?" I mean, they're talking to it as if it's a companion. So we do see evidence of that. But what about kids? Like, Gemini for kids? With kids, just like today when we design YouTube for kids, we design it with a different set of guardrails. And so for kids, I think you would scope it down to those appropriate experiences, obviously.
You just unveiled Veo 3, which you were talking about. I mean, it is mind-blowing seeing these super, super lifelike videos, but it is also really scary. Is this the end of truth as we know it? I think this goes back to what you asked earlier. I think this is part of the value proposition. This is why people will come to places like Google, because they're trying to ascertain what's reality, right, and what's not.
And I do think we are going to evolve new norms around this. For example, with Veo, we are watermarking videos. They are built with SynthID, so people can detect that they were generated with Veo. You can upload any video or image to Google and ask about it, and it will tell you if it was generated by Veo 3. We built a SynthID detector for researchers and journalists.
So, you know, all these things are going to evolve in parallel. Down the line, we will maybe need regulations around things which are true deepfakes. Just like with financial fraud, you would need new norms around things like that. So, you know, we have to evolve as new technology comes along. I guess I just wonder, are we going to have a shared sense of reality again? It's just, like, this is made with AI. It's insane. I think so. You know, I think as humanity we would value that shared sense of reality.
And so you would value human experiences even more in the future, right? That's how I think about it. YouTube is arguably the most influential media platform in the world now. I mean, it is a political force, it's a cultural force, our kids are learning there, all kids think they want to be creators. 75% of teenagers, I believe, say they're watching it every single day.
I'm a mom, you're a dad, like, I know you're in charge of the thing, but does it ever terrify you to have that much power? I mean, I think that sense of power is an illusion. I think, like, when you're working in the tech industry, you know, you've simultaneously started with the conversation with, like, are you about to go extinct? And then you're ending the conversation with, aren't you the most powerful thing in the world?
Fair enough. So you've got to choose in this continuum. Only one of those can be the truth. Fair enough. I'm curious about your approach to the Trump administration. You were front and center at the inauguration. Google has rolled back some of its DEI policies. Did you compromise on something that you believe in? Look, first of all, you know,
We are one of the leading American companies, global companies, in this particular moment with the point of inflection around AI. I do think the next few years are critical from many aspects. So we are committed to engaging. We did that with the first Trump administration. We are committed to doing that.
look around the world: we comply with laws and regulations in all the countries we operate in. That doesn't mean we agree with every aspect of everything, right? But as a company we do have our set of values, and we are committed to them as well. So, for example, as a company I think we are committed to making sure we develop AI in a way that protects the planet. That's an important value for us, and
you've seen us prioritize efforts around renewable energy. So we'll continue doing those things. But look, I think it's important that as a company we engage. You know, there are many, many people in the administration, particularly around critical energy needs, critical infrastructure. These are all important areas we want to work together on.
When you were on The Circuit last year, we spoke about the criticisms that Google and you personally have been too late to the AI game, too late to get AI to market. And I wonder, how has your own leadership style evolved over the last couple of years specifically? Have you changed at all to meet this moment of radical disruption? Look, I feel like
the main thing I did as CEO was setting up the company to really be at the forefront of AI. When I look at the depth and breadth of what we are doing as a company, across everything we are doing on multiple fronts, I think we are incredibly well positioned. But definitely, this is a moment in which we realized we needed to accelerate, at scale, what we are doing. And that involved, you know, stepping back and thinking about how you can make the company work faster: setting up Google DeepMind, bringing our best teams into one team, scaling up our AI infrastructure. You know, our capex is $75 billion. That was $20 billion a few years ago. So we are dramatically scaling up our infrastructure. So you're undertaking these big bets.
Waymo, people are talking about it now, but three years ago, people were very pessimistic on it. I increased our investment in Waymo at that time. So you are making decisions not based on what people are currently saying at any given moment, but with the long-term view in mind. Are you still getting involved in nitty-gritty product decisions, or are you leaving that to Sergey? I mean, I'm fortunate to have Sergey. He's deeply working on the Gemini models,
but I'm very, very involved in various aspects of our product decisions. I think you have to do that in the tech space. You've been CEO for 10 years now. However many years out it is, what kind of person do you think Google's future CEO should be? Look, I think it's important to understand the products we build tremendously impact society.
And the journey of technology is doing the hard work to make sure you're harnessing it in a way that benefits people. And that takes a lot of work, and I think that'll be an important quality to have. You're driving this massive technological change that's also bound to create major societal disruption. How do you personally wrestle with that? Is there some philosopher or religion or deeper code that you turn to to help you answer some of these questions? Okay, first of all, there are many of us doing this.
Ultimately, it comes down to you don't need much more than having empathy for your fellow people on this journey. I think I'm doing what I'm doing and so are many others in the industry because you get a chance to positively impact people with the work you do. That's the true North Star at the end of the day. What should kids be studying these days? Should they still be learning how to code? Should we still be getting computer science degrees?
I've got four of them, so I really need some advice. They're probably going to be talking to AI asking this question, so my answer doesn't matter, maybe. Look, I think one of the things that's great about this moment is that AI over time will allow us all to pursue our passions more, right? And I think that's the truly liberating aspect of a lot of this. You're giving pretty much everyone a powerful tool so they can express themselves in the way they want to. And so I wouldn't change anything of what you're studying. I would still encourage people to follow their passions and find something that's of interest to them. I think, you know, pretty much all disciplines which are valuable today, there'll be a version of that that's valuable in the future. What are the limits of AI? Like, is it possible we don't reach AGI? Oh, with the current technology? It's entirely possible. I mean, we could be dealing with--
Look, everything we can see, you know, I feel very positive there's a lot of forward progress ahead with the paths we are on. Not only with the set of ideas we are working on today, but also some of the newer ideas we are experimenting with. So I'm very optimistic about seeing a lot of progress. But, you know, you've always had these technology curves where you may hit a temporary plateau. So are we currently on an absolute path to AGI?
I don't think anyone can say for sure. The pace of progress is staggering and, looking ahead, I expect you will have that pace of progress, but there could be limitations in the technology. With the technology currently, it feels like you're seeing dramatic progress, but then there are areas where this thing can't do this obvious thing.
Waymo is doing very, very well, but remember, you can teach a kid to drive in about 20 hours, right? And so the technology is amazing, but we are quite far from a generalized technology as well. We talked a little bit about your vibe coding backstage. Would you like to share? How often are you vibe coding, and what are you working on? I wish I could do more, but I've just been...
messing around, either with Cursor, or I vibe coded with Replit, trying to build a custom web page with all the sources of information I wanted in one place, so I could type a location and get it all. It's partially complete. But it's exciting to see how casually you can do it now. Compared to the early days of coding, things have come a long way.
It feels so delightful to be a coder in this moment in time. Even after that, are you sure you still need all those software engineers? I think so, yes. Last question. We've got a new episode of The Circuit out this week, actually, about Microsoft's 50th anniversary. Google is 27 years old. What is Google at 50? What's Google at 50? Look, I hope we are...
nimble and innovating. In the technology industry, you have to earn your success every year. And so for me, it's more about building a culture that's really innovative at its core.
It takes a long-term view and does deep technology work and translates that into products that impact billions of people. All right. Is a human going to be running Google or an AI? Well, I do think whoever is running it will have an extraordinary AI companion to help. All right. Sundar Pichai, everyone.