Can Google Cloud Platform ride AI into the field's top echelon? And how much is AI shaking up the trillion-dollar industry? We'll find out with the CEO of Google Cloud Platform right after this. Forward thinkers know that when people thrive, organizations thrive. This is the future of work. This is Workday, the AI platform that elevates humans.
Did you know that small and medium businesses make up 98% of the global economy, but most B2B marketers still treat them with a one-size-fits-all approach? LinkedIn's Meet the SMB report reveals why that's a missed opportunity and how you can reach these fast-moving decision makers effectively. Learn more at linkedin.com/meet-the-smb.
Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversations of the tech world and beyond. Today, we're joined by Thomas Kurian. He's the CEO of Google Cloud Platform, and he's here for a realistic look at how companies are building with AI and how Google is positioning itself to win in the moment. We'll also talk tariffs, of course, for a bit toward the end of the show. Thomas, great to see you. Welcome to the show. Thank you for having me.
Thanks for being here. Let's talk about the surge that Google Cloud Platform has had in the past couple of months, or years, really. And a lot of that has been tied to artificial intelligence. I think it's fair to say that GCP, Google Cloud Platform, was running maybe a distant third behind Microsoft and Amazon when it came to cloud hosting. And every time I look at the earnings numbers, I see these massive growth rates, 30% per year, per quarter. How much is AI a part of that? AI has definitely driven adoption of different parts of our platform. Typically, when people come in for AI, depending on the type of company, they come in at different parts of our portfolio. Some of them say, I really want to do super-scaled training or inference of my own model.
And so there's a whole range of people doing that, all the way from foundation model companies, whether that's Anthropic or Midjourney or others.
And also traditional companies. Ford Motor Company, for example, wanted to use our chips and our system called TPU, the Tensor Processing Unit, to model airflow and wind-tunnel simulation using computers rather than physical wind tunnels. So they're doing that as an example. So one set comes and says, I'll use your AI infrastructure.
A second set comes in and says, "I want to use your AI models." And that could be somebody building an advertising campaign using our image generation model, somebody wanting to write code using Gemini, somebody wanting to build an application using Gemini or one of our newer models like Veo, which is our video generation model. So in that case, they come in and use the platform.
But along with that, they may say, "I want to put my data where the model can access it quickly," and they start with one of our database offerings, for example. So it certainly draws in more pieces of our portfolio as part of it.
And then the third is people coming in and saying, I want to use a packaged agent that you have. For example, we offer something for customer service. We offer something for food ordering. We offer something to help you in your vehicle, in the car. We offer stuff for cybersecurity. So there's a whole portfolio of these. And so depending on which customer is coming in, they come in at different layers of our stack.
And it's so great to hear you talk about actual products that people are building with AI, because a lot of the conversation has been around capabilities. How do AI's latest models perform on Math Olympiad tests? And very little, I think, of the discussion has been about what they actually do. So we're going to cover, in the second half, some concrete products that you're seeing being built.
But let's go back just to this bigger cloud battle, because this is a multibillion- or even multitrillion-dollar fight right now to get companies to host and run applications in the cloud as opposed to, you know, on their premises. Yeah.
When people are making decisions to buy, how much of their decision is predicated on AI capabilities? Because what you just told me is a number of specific cases: I want to build an AI program; I'm coming to Google for that. Now, I imagine that's important. But when you think about the broader landscape of people making decisions to buy cloud services, how much does AI factor in right now?
It's a good question. It depends on the country. It depends on the industry. It depends on the segment. Let me explain what I mean. If you're an AI unicorn, meaning you're funded to build a foundation model or you're building an application based on AI, that's really the central part of your decision. If you're in an industry like retail, for example, where we have a product called retail search and conversational shopping, where you can take Google-like search using text, images, and video and put it on your catalog, and you can also add conversational shopping, where I can ask a question like, I'd like to return this dress, and have the system handle that transaction for you, it's a super important thing for people in commerce, whether that's retail or telecommunications. On the other hand, if you look at a utility or an industrial manufacturer, it applies to part of their organization, but it may not be the central thing. And so it really varies by industry and by customer segment. But part of our value proposition is that we offer all of these different capabilities. And so AI is helping us, but it's not the sole reason for our growth.
Okay. And then just broadly, talk a little bit about, okay, so definitely different segments have different approaches to it, but you're the CEO of Google Cloud Platform. So when it comes to Google Cloud Platform's broad ability to compete, how important is AI across everything? Yes, of course, it varies for individual use cases, but broadly? It's going to be important going forward.
We've been very measured in how we brought our AI message to the market to avoid people feeling like we're over-hyping things. And we've always said we're going to build the best technology in the market. Right now, we're super proud. We have over 2 million developers building every day, every morning, every night, using our AI platform. And you can see the strength of our models. Gemini 2.5 Pro is the world's leading model.
Gemini Flash is the most price-performant model.
Imagen and Veo are considered state-of-the-art for media generation, and we've got tons of new stuff that we're introducing at our event next week, from audio, speech, etc. So we've been very, very thoughtful about how we've introduced things, and I'm not a marketer, so I will just tell you: it's an important factor, it will be an increasingly important factor, and our strength in it helps bring other products along with it.
Yeah, and we're not asking for a hype man or marketing. On this podcast, we're just trying to get to the truth. And I appreciate you being reasoned about the role of it and not saying something that's out of line with reality. So thank you for that. Now, you talked about some models, a lot of models, coming out of DeepMind.
Here's what, let's say, Amazon might say if they're talking to an AI customer. Google has its own models and it wants you to use them. At Amazon, we have some proprietary models, but our job is really to let you pick whichever model you want, from Anthropic on down. And you can trust us not to push our own stuff, and therefore you should choose us over Google. What would you say to that?
I would say we offer 200 models in our platform. In fact, we look every quarter at what's driving popularity in the developer community, and we offer those. We offer a variety of third-party models and partners, not just Anthropic: AI21 Labs, the Allen Institute, there's a variety of models there. We offer all the popular open-source models.
Llama, Mistral, DeepSeek, a variety of them. And we base it on what customers want. So we track what's on the leaderboards and what's getting developer adoption, and we put them in the platform. And people have been super pleased that we have an open platform.
With an open platform, we always feel companies want to choose the best model for their needs, and there's a range of them. We're offering a platform; you can choose the model you want. The only model we don't offer today is OpenAI's, and that's not because we don't want to offer their model. Would you welcome them on the platform? Of course we would. Okay. And are there talks about that?
I don't want to tell you that we won't do it. We have always said we're open to doing it. I think it's their decision. Okay, but to pinpoint the argument from Amazon, I'd really be curious to get your perspective on this. I'm just going to channel them; I haven't spoken with them about this. They might say something like: well, even though Google can offer everything, they might still push you to use DeepMind models. What do you think about that?
Well, our field is not compensated any differently. Our partner ecosystem is able to use all the models in the platform. And most importantly, we have very large Anthropic customers running on GCP. So if you don't have your own model, or you have a model of your own but it's terrible, naturally you're going to say something like that. Are you saying that their model is terrible? No. Okay. Why don't we move to Microsoft then?
Microsoft would tell you basically that they have this partnership with OpenAI, which is going to build the best in breed. What do you think about that? I mean, OpenAI basically ushered in this generative AI revolution and has been the best at productizing it. They've done a good job, no question. I would say OpenAI has done a good job with that. How much of the credit goes to Microsoft, outside of providing them a bunch of GPUs, time will tell. Okay. Now, it's interesting, because they do have that partnership, and that has been largely responsible for the surge that they've seen in the generative AI moment. But there is a pretty interesting difference between Google and Microsoft, and that is that Google does have DeepMind in-house, whereas Microsoft has this,
I don't know if it's even an arm's-length or hand-in-hand relationship with OpenAI. So I actually am curious: when it comes to all these businesses we talked about that are building AI applications, what does DeepMind give you that might be an advantage there? Because it is in-house.
We work extraordinarily closely with Demis and his team. When I say extraordinarily closely, our people sit in the same buildings. We work extraordinarily closely. My team builds the infrastructure on which the models train and inference. We get models from Demis and team every day. In fact, we're staging models out to the developer ecosystem within a matter of a few hours after they're finally built.
And then we also take feedback from users and move it upstream into pre-training to optimize the models. And one benefit we have at Google is that all our services, whether that's Search or Ads or YouTube, are inferencing on the same stack and the same model series. So the model learns very quickly from all that reinforcement learning feedback and gets better and better. So there's a lot of close collaboration.
Many times, if I can be frank, when we enter a new domain... I'll give you an example. We built a solution for cyber intelligence using Gemini. There are a lot of threats happening in the world, and you want to collect all those threat feeds. We do that using a team we have called Mandiant, and also from other intelligence signals we're getting on what threats are emerging.
You then want to compare it to your environment to see if you're at risk. And most importantly, you want to compare it to what parts of my configuration somebody will use to try and get in. And so we used our Gemini system to help prioritize and also help people hunt faster; we call it threat hunting.
Now, in that environment, the model has to learn how to find patterns in a large number of log files that people are ingesting, and that requires specific tuning of the model to do that. And so there are things there that having a close working relationship with the DeepMind team has helped enormously.
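To make that log-triage idea concrete, here is a minimal sketch in Python of the mechanical layer beneath what Kurian describes: scan ingested log lines against known threat indicators and rank the hits so a human hunter sees the noisiest signals first. The indicator patterns, log format, and function names are invented for illustration; this is not Google's or Mandiant's actual tooling, and in the real system a model would sit on top of a pass like this to correlate and prioritize.

```python
# Hypothetical log-triage sketch: match ingested log lines against known
# indicators of compromise and count hits per indicator.
import re
from collections import Counter

INDICATORS = {
    # 203.0.113.0/24 is a documentation-only IP range, standing in for a bad IP.
    "suspicious_ip": re.compile(r"\b203\.0\.113\.\d{1,3}\b"),
    "encoded_powershell": re.compile(r"powershell.+-enc\s+\S+", re.IGNORECASE),
    "failed_admin_login": re.compile(r"FAILED LOGIN user=admin", re.IGNORECASE),
}

def triage(log_lines):
    """Return (ranked indicator counts, flagged lines) for a batch of logs."""
    hits = Counter()
    flagged = []
    for line in log_lines:
        for name, pattern in INDICATORS.items():
            if pattern.search(line):
                hits[name] += 1
                flagged.append((name, line.strip()))
    return hits.most_common(), flagged

sample = [
    "2025-04-04T10:01:02 FAILED LOGIN user=admin src=203.0.113.7",
    "2025-04-04T10:01:09 cmd='powershell -enc aGVsbG8='",
    "2025-04-04T10:02:00 GET /healthz 200",
]
ranked, flagged = triage(sample)
print(ranked)  # the first sample line trips two indicators, the second one
```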
Similar things happen when you look at, for example, customer engagement and customer service. We've got a project at Wendy's to automate food ordering in the drive-thru. If you actually think about a drive-thru, it's an extraordinarily complicated scenario, because there's a lot of background noise:
kids screaming in a car, people change their mind when they're ordering something. I didn't mean that one, I wanted that one changed to this one, and which one did you mean by that one and this one? - Thomas, it feels like you're describing the way that I handle these interactions, and I'm very embarrassed about it, but that is me. - And so there's a lot of things that we needed the model to do to have ultra low latency in being able to have that conversational interaction with the user.
So all those elements, the partnership we have with Demis, have been super, super productive. And most importantly, it's people working together. There are close personal relationships that help us get through a lot of design changes and other things. And we're all rowing toward the same goal. Right. But okay, I was speaking with Mustafa Suleyman, the CEO of Microsoft AI,
just a few days ago, so this is kind of fortuitous back-to-back episode scheduling. And what he said was: look, without spending the billions and billions of dollars it takes to train the new models, you can basically replicate what they're doing with a lot less money and put it into action just a little bit more slowly. And so what he's saying is, basically, Microsoft gets the benefit without the cost. What do you think about that argument?
I don't want to comment on what he said. I can just tell you there's a lot of debate on cost of training and inference. First and foremost, in the long run, if AI really scales, the cost you really want to care about is inference cost because that's what's integrated into serving. And any company that wants to recover the cost of training has to have a large-scale inference footprint.
There are lots of things we've done with our Gemini Flash and Gemini Pro models that you can see, and there are also other people using TPUs for inferencing, for example. Large companies are using them to optimize the cost of inference. Cost of inference depends on the efficiency with which you handle your serving fleet, how you do disaggregated serving, what you do with caching and key-value stores. There are a hundred different variants of that.
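As a rough illustration of one of those variants, here is a toy Python sketch of prompt-prefix caching, assuming many requests share one long system prompt. Real serving stacks cache transformer key/value tensors rather than a dictionary of fake states; the point is only how reuse changes the per-request cost.

```python
# Toy prefix-caching sketch: pay the "prefill" cost of a shared system
# prompt once, and only the per-request suffix cost afterwards.
cache = {}

def encode(text):
    """Stand-in for the expensive per-token prefill work."""
    return sum(ord(c) for c in text)  # pretend this is costly

def serve(system_prompt, user_query):
    """Return how many tokens of prefill work this request actually paid for."""
    if system_prompt not in cache:
        cache[system_prompt] = encode(system_prompt)   # pay for the prefix once
        tokens_computed = len(system_prompt) + len(user_query)
    else:
        tokens_computed = len(user_query)              # prefix state reused
    _ = cache[system_prompt] + encode(user_query)      # combine prefix + suffix
    return tokens_computed

prompt = "You are a helpful retail assistant. " * 50   # long shared prefix
first = serve(prompt, "Where is my order?")
second = serve(prompt, "Can I return this dress?")
print(first, second)  # the second request computes far fewer tokens
```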
The proof, I think, is in our numbers. If you look at our price performance, meaning the quality and performance of the models against the unit price of tokens, we're extraordinarily competitive. That's number one. Number two, on training, I think there's a bit of confusion that may exist in the market. There is frontier research exploration.
Frontier research exploration, for example, could be: how do I think about teaching a model a skill like mathematics? How do I teach a model, for example, a new skill like planning? How do I teach a model a new skill in a brand-new area? Those are what we call frontier research, and many, many experiments like that are done.
And then after you find the recipe, you actually train a model. Training a model is when you do the model run, where you're running the actual training. I think people are mixing up the total amount of money spent on research and breakthroughs with the actual training.
And we wouldn't be investing the way we are as a company without knowing the ratios between all of these. And so we're very confident that we know how to run very efficient model training, what we're investing in frontier research, and, most importantly, how we're handling model inferencing, and that we're world-class at all three. Do you think there are still gains to be had by scaling up the pre-training of models?
There are gains to be had. I don't think they will be at the same ratio as earlier because, you know, there's always a law of diminishing returns at some point. I don't think we are at the point where there are no more gains, but I think we won't see the same ratio of gains we used to see. So with inference, that will be the new cost: basically taking the models, putting them into production, and using them.
I'm curious how much of the cost of that or how much of the use of your services is going to be toward reasoning? And what have these new reasoning capabilities allowed your customers to do that they couldn't do previously? It's a really good question. I mean, reasoning is something we are starting to see customers using in different parts of
our enterprise customer base. For example, in financial services, we've had people say, "Hey, I want to understand what's happening in financial markets. Summarize the information coming off, whether that's video feeds like CNBC, financial market indexes, and other financial information, and tell me what's happening."
The model can not only build a plan for how it collects the information, but summarize it, and then reason on the summary to say, are there conclusions to be derived? We are starting to see people do that.
How much of that will happen versus other scenarios, time will tell. But we are starting to see people doing much more sophisticated, complicated reasoning. We have a travel company, for example, that's working on: give me a very high-level description of what you want to travel for. I want to fly to New York. I'm taking my son. We'd like to see Coney Island and the following three things. Build me a plan. And
in that, it can have multiple choices, but it may say, "If you're traveling in June, it may be hot in the afternoon; therefore, I think we should have you see Coney Island in the morning and go to the museum in the afternoon." Models are starting to be able to reason on those things. We are starting to see early adopter companies test in all these different dimensions.
Wait, so I just need to ask you this follow-up. Are people scraping the audio feed from CNBC and then using the summarized information to trade? There are feeds. When I mentioned CNBC, I was using an example. They have personal feeds from their broker and dealer networks, which are private, their own, that they're feeding into this. Because when they have a broker or an equity analyst make a broadcast to their internal teams, they want to feed that in. I was using CNBC just as an example, given your audience, to explain what a video feed would look like. Right. And now, what about reasoning allows people,
these companies, to build this stuff that they couldn't previously? For instance, this travel planning thing. I mean, with the non-reasoning versions of large language models, I could say, build me a plan, and it could do that. So what does reasoning do that either ups the performance or allows customers to do stuff they could not previously? So historically, when LLMs were used, people were worried about hallucination.
And so they gave a large language model a single step task, meaning do this and come back to me so that I can determine if your answer is hallucinatory or not. And so I didn't delegate a complex task to you. Secondly, when I asked you a question, you gave me a single answer. You didn't generate a variety of different options and then reason on it or critique them to say this might be the best answer.
So that is the nature of some of the differences we see in why people are using reasoning now as opposed to before. And the more you can trust that the model can actually reason across a set... whenever you have a multi-step chain of thought, if you have drift, meaning early in that chain of thought you had an incorrect answer,
and then it stepped down that incorrect path and reasoned a lot more, downstream you can get way off from what the right path ought to be.
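A quick back-of-envelope illustration of that drift problem, with invented per-step accuracies: if each reasoning step is independently correct with probability p, the chance the whole chain stays on the right path decays exponentially with its length.

```python
# Chain accuracy under independent per-step error; the p values are
# illustrative, not measured numbers for any model.
for p in (0.99, 0.95, 0.90):
    for steps in (5, 10, 20):
        print(f"p={p:.2f} steps={steps:2d} chain accuracy={p**steps:.2f}")
# At p=0.90, a 20-step chain is right only about 12% of the time, which is
# why single-step tasks were the safe default before models improved.
```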
So, as models have become more sophisticated, people have trusted them. Part of it is the accuracy can be higher. Part of it is that it can evaluate a set of different choices and give you an answer based on a set of choices, not just say, "Here's a single answer." The third is we also allow people to understand what the steps were in how it reasoned. So, they can look at it and say, "Yeah, maybe I agree with it. Maybe I don't."
So Jensen at Nvidia says reasoning costs 100 times more to do. You also have your own compute; you're also facilitating that. Is that in the ballpark, or are you seeing different numbers? You know, it depends on how long, right? For instance, you could give it a very complicated problem, and a model can take hours to reason on an extraordinarily large data set; that will be more expensive. At the same time,
In the example I gave you on travel, given the number of trips that are made, etc., that company is not going to spend millions of dollars to calculate the answer for what's the best choice of trip for me. Or in the financial markets area, given how much information is coming all the time,
and how quickly you need to reason on it to present your equity traders or your private wealth managers an answer, you're also going to time-bound the reasoning computation. And so there are controls in the platform to allow you to say what the breadth of the reasoning is, meaning how large a cluster you want to reason across, how much data, and how long you want to reason. All those factors are in the user's control and therefore drive how much they want to spend.
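A minimal sketch of what "time-bounding the reasoning computation" can mean, assuming a hypothetical refine() function standing in for one round of model self-critique. Real platforms expose analogous knobs as request parameters rather than a Python loop; the shape of the control is the point here.

```python
# Time- and step-bounded reasoning loop: stop refining when either the
# wall-clock budget or the step budget runs out, whichever comes first.
import time

def refine(draft):
    return draft + 1  # stand-in for one round of model self-critique

def reason(question, max_seconds=0.5, max_steps=8):
    deadline = time.monotonic() + max_seconds
    draft, steps = 0, 0
    while steps < max_steps and time.monotonic() < deadline:
        draft = refine(draft)
        steps += 1
    return draft, steps

# A latency-sensitive check runs with a tight budget; a batch job with a loose one.
print(reason("flag this transaction?", max_seconds=0.05, max_steps=3))
print(reason("review this account history", max_seconds=2.0, max_steps=100))
```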
So if you were selling the hardware and the systems, and maybe the software, to train this stuff, you might be incentivized to say it costs 100x. But that might be the most optimistic scenario there, and there are plenty of other reasoning use cases that are much less expensive than that 100x in compute. Does that sound like a reasonable takeaway? What we've seen, if you just look at the models themselves: people were talking about how you would need a billion times more energy if you straight-line extrapolated the cost of a model, from an inference point of view, in '23. If you look at just 2024, we've reduced the cost of inferencing by a factor of 20 times, and you can see it in our model prices.
It's because there's a lot of optimizations you can do in that. Same thing on reasoning. There will be a lot of optimizations that we will continue to make to lower the cost of reasoning.
People will want to do more reasoning. As you make it more affordable, people will use it more widely. There will be a range of things, all the way from relatively quick, short, time-bound reasoning to much longer things. As an example, there's a financial institution working with us to do fraud analysis on transactions that are happening on the payment network.
By definition, they need to do that in real time. Their reasoning is time-bound, because they have to flag a transaction within a certain period of time. Now, they also do anti-money-laundering and other calculations. That reasoning is done in batch and can take a lot longer if they want. That's why I think there will be a range of these things, and saying it's all one or all the other is not correct.
Okay, I appreciate your viewpoint again in this area. Reasonable, realistic versus hype; I can sense a pattern. This is good. This is what we like to do on this show. You mentioned DeepSeek, and I just want to ask you about open source. Well, let me just say it this way: if open source
exceeds the proprietary models, and it seems like what we saw with DeepSeek wasn't that moment, but it certainly opened a lot of people's eyes to the fact that it might be possible, the notion might be that which cloud service you use
won't matter. Because Microsoft might say you need us for OpenAI, and you guys might be saying, you know, we have Gemini. The idea is, if open source overtakes the proprietary models, then it really won't matter which cloud platform you use, and it sort of levels the playing field. What do you think about that? It's a good question. I think it's very early to tell, first of all, whether open-source or proprietary models are going to win or lose.
As an example of our own, we put out an open-source model called Gemma, which is getting a lot of adoption in the developer community among people wanting to build a certain class of applications. We want to continue to see how open-source and proprietary models evolve. One example: historically, open-source models were used because people wanted to fine-tune a model to have their own weights,
and when I say fine-tune a model, they would take an open-source model and really tune it on their dataset to have their own weights. Now, as more and more sophisticated techniques for optimizing models have come in where you don't need to depend on fine-tuning with adjustment of all the model weights,
that case has become less important. But there's always going to be a need for a combination of these, and it's very early to tell. Now, separate from that, to your question, Alex: let's assume open source became the dominant one. How would we do? We have a history with that.
Just a couple of examples. First of all, Kubernetes became an open standard for people spinning up cloud workloads in computation. Many people would say Kubernetes has become the dominant paradigm through which people stand up containerized workloads, which are the way forward.
We've got a great solution, something called Google Kubernetes Engine. And people still take vanilla Kubernetes but choose us because of performance, scale, reliability, and all the other things.
Even if you said open-source models become popular, you still have to serve the model, you still have to optimize the performance of the model, and we're confident we can do that better than others. Now, lastly, many people are coming in at other parts of the stack, where they're using a model as part of a service. For instance, I gave you the example in cyber. Inside the cyber tool,
they don't really care if it's Gemini or something else. What they're looking for is a great cyber hunting capability. Or if you look at data science, where people are saying, "I just want to ask a question of my data warehouse using English. Can you understand what I'm asking and show me the calculations?" That's actually a very complex technical problem.
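As a hypothetical sketch of that English-to-warehouse flow, here the model call is stubbed out with a canned answer; the table, schema, and generate_sql() function are all invented for illustration. The point is that the generated SQL (the "calculations") is surfaced for the user to verify before it runs.

```python
# Natural-language question -> generated SQL -> result, with the SQL shown.
import sqlite3

SCHEMA = "orders(order_id INTEGER, region TEXT, amount REAL, day TEXT)"

def generate_sql(question, schema):
    # A real system would prompt a model with the schema and the question;
    # here we hard-code the answer for one canned question.
    assert "revenue by region" in question.lower()
    return "SELECT region, SUM(amount) AS revenue FROM orders GROUP BY region"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL, day TEXT)")
conn.executemany("INSERT INTO orders VALUES (?,?,?,?)",
                 [(1, "EMEA", 120.0, "2025-04-01"),
                  (2, "AMER", 80.0, "2025-04-01"),
                  (3, "EMEA", 40.0, "2025-04-02")])

sql = generate_sql("What was revenue by region?", SCHEMA)
print(sql)                           # surface the query so the user can verify it
print(conn.execute(sql).fetchall())  # e.g. [('AMER', 80.0), ('EMEA', 160.0)]
```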
And so, for those cases, do they really care that it's Gemini? It works particularly well because it's Gemini, but they're just accessing our product. We have a new product called Agentspace. Agentspace is search, conversational chat, and agentic technology for your enterprise. They really don't see the model. They're using an application or a platform,
and underneath, we're providing the capability. So there are other ways to differentiate even if open source became extraordinarily popular. And Agentspace, if I'm right, is your fastest-growing product ever? Yes.
We're very proud of it. Yes. Yeah. So, basically, it's a way for people to query different things within the workplace and get things done in the workplace using natural language. That's right. It's growing. How fast is it growing? I mean, we'll publish all the stats next week. But as an example, KPMG is one customer. They are using it to help their professional workforce. We have insurance companies
using it as a research assistant to help their insurance brokers: when you call to understand what healthcare benefits you're eligible for, how do I find whether you're eligible for this, and then to speed up things like pre-authorization for healthcare benefits.
We have banks using it, banks using it to help their front line understand why a customer is calling in: I'm the private wealth manager; can I research their portfolio to see what's changed? So there are a lot of different use cases, and it's basically Google-quality search, conversational chat, and workflow or process automation using agents, all in one system.
Okay, last question here, and then we're going to move on to some product examples. You've made Gemini a free add-on for the $30-per-seat option. Can you talk through that decision? Because it seems like that's kind of counter to what your competitors are doing, and also, I wouldn't say it's very easy to make that something that you throw in. But this is for Google Workspace. Yeah, which is our collaboration tool.
We made Gemini part of Google Workspace rather than requiring somebody to buy a separate subscription. Why did we do that? So if you're using Google Workspace and, for example, you're using Gmail, people love the fact that when I receive a lot of email, it summarizes things for me.
Or I want to write an email, and I want to write it to recommend somebody for a position; you can ask it to help write the email. If you're doing slides in Google Slides and you want a great visual presentation of a set of information... I'm not very good at creating amazing slides, but now you can use our Imagen tool to create amazing images and put them into slides.
It requires people to change the way they work, and we want to drive daily usage of AI. Because it needs to change the way they work, you want them to get used to using it. If this group of users in a company gets it, that group of users is not allowed to use it, and this group
is maybe going to be allowed but has to buy a subscription, you don't let them get used to using AI as part of their daily life.
And we learned this back in 2014, 2015, when we added the auto-complete and auto-suggest features to Gmail that a lot of people love. It was part of the product, and that's what got people used to using it. It helps us improve our AI, because with all the usage you notice patterns and the models get better and better, but it also helps condition users to start using AI to assist them every day. That's why we put it into the base product.
Okay, and that is a great segue into our next segment, which is: there are all these AI capabilities; are people going to use them? So why don't we cover that when we come back right after this. Forward thinkers know that when people thrive, organizations thrive. That's why you need a platform that paves the way for people to be at their best. Workday is your partner to unleash the full potential of your human and digital workforce. This is the future of work.
This is Workday, the AI platform that elevates humans. Small and medium businesses don't have time to waste, and neither do marketers trying to reach them. On LinkedIn, more SMB decision makers are actively looking for new solutions to help them grow, whether it's software or financial services. Our Meet the SMB report breaks down how these businesses buy,
and what really influences their choices. Learn more at linkedin.com/meet-the-smb. That's linkedin.com/meet-the-smb.
And we're back here on Big Technology Podcast with Thomas Kurian, the CEO of Google Cloud Platform. Thomas, it's great having you here. Let's just talk about how people are actually using this technology. There have been a couple of op-eds that we've talked about on the show recently, one from the New York Times calling AI mid, another one saying the problem with Apple Intelligence isn't Apple, it's the artificial intelligence, and basically saying that the AI technology has been okay,
but not overwhelming to this point. And it's interesting that you brought up the Wendy's example, trying to automate takeout, because one of the examples in that piece is that, yes, you can now do self-checkout at the supermarket, but it hasn't really changed your life. It's still, you know, flawed, shall we say. I mean, I can't tell you the number of times I've been on the checkout line at Stop & Shop, in the checkout automation,
and I do one thing wrong, I forget to put an item exactly in the right space, and then a cashier has to come over 10 minutes later to let me out of the store. So what do you think about this argument that generative AI is mid or not living up to all the boasts? And what types of applications have you seen in the technology, if you were going to argue the other way, which I think you are, that make you believe that there's something here?
I always say, with any major technology shift,
it takes a while for adoption to happen and for people to understand it. If you look at the internet, it went through a similar thing. If you look back at '97, '98, '99, there was a lot of hype that it was going to change things. In 2001, some of the hype fell apart, but over the long term it has definitely transformed the way that people find information and buy things.
They even run their businesses on it. I think AI is going through a bit of that. Early on, people had maybe too rosy a view. In the long term, we always say this technology is going to be a really fundamental transformation.
How quickly it changes the day-to-day, every day? Time will tell. But I'll give you examples, because we always say, let the customers tell the story; let's not tell the customer's story on their behalf. And we're super proud of the work we've done. I mean, Seattle Children's Hospital: they wanted their pediatricians, when they see a child, to be able to understand the guidelines for treatment. Guidelines are complicated.
You need to be accurate in the information put in front of the person. We've helped them do that. At the Mayo Clinic, they wanted us to provide a system through which a doctor could find information from the electronic health record, from their clinical trial system, from their radiology imaging system, and synthesize it so a nurse, before she sees a patient, can see the information.
If you look at what we did with Verizon: Verizon has the largest consumer customer base in telecommunications in the United States. They have over a million calls a day going into the call center. We've helped them build something called a personal research assistant,
so that if I'm a call center person and you call me saying, here is my set of issues, how long does it take to research that information and put it back in front of you so that you can handle customer service faster and better? And they are very pleased: 96% accuracy in the information surfaced. And the reason that's an important number is that it's better than a human.
In the consumer world, in retail, we've had people improve the way customers shop for things, helping people improve the accuracy of search results on their search pages, and improving their back office. A company called AES, for example,
is an energy company. It builds and delivers energy to different parts of the world. It used to take them 14 days to run their end-of-quarter audit. They do it in one hour now. These are examples of people applying it right at the core of their business.
Honeywell in industrial manufacturing has put our technology into the manufacturing control systems. Deutsche Bank is using it for their private wealth managers to summarize information for them. Are they transformative to the people doing the work and to those customers? It is transformative. They've seen the business results. Time will tell how transformative consumers experience it to be.
So it is interesting that this is happening in enterprise first. I mean, there's one, I would say, mainstream AI application, and that's ChatGPT. And you're at Google, so maybe you can argue with me on that one. But the numbers show 500 million people are using it each week.
Why do you think enterprise has been so much quicker to adopt this than consumer? And is it going to be like the BlackBerry? Like, are we going to start to see some enterprise adoption? And then all of a sudden it will just shift over to consumer when the time is right? I think, you know, the enterprises find real value at the core of their business.
It's helping people like Wayfair write code faster and write better code. It's helping people like Mattel, the toy company, find answers so that they can be much quicker and more efficient in managing their supply chain and operations infrastructure. It's helping people in the entertainment business build much better recommendations of titles for people to see. There's lots of companies using our recommendation system for
it. I think it helps them decide: one, do I want to improve my top line? Top line is getting people to buy more products, getting people to use more of my services, for example, recommendations on movie titles.
It helps them be much more efficient in their back office. In some places, it also helps the employee experience. Home Depot: we helped them build an employee help desk that answers employee questions about benefits, about medical insurance, about lots of things. It also helps them improve the way their own employees experience the organization. Enterprises are choosing it for a variety of reasons.
Time will tell whether there will be many killer consumer apps based on generative AI, but we're focused on making sure people have the best technology to build a great experience. Bending Spoons, for example, is a company out of Italy: sixty million photos a day, and they're using our tools to edit them and do magical stuff. The Samsung Galaxy S24:
every one of those smartphones has our Gemini AI on it, and people are using it to create great images and do amazing stuff. There are lots and lots of examples of even enterprises now bringing these technologies to their consumer experience. Even the work that we did with Mercedes helps me drive and gives me guidance just by talking to Maps.
Is it transformative? You know, it's up to the consumer to decide. Right. But I feel like you probably have a perspective on it. But hey, look, I appreciate that you came prepared with lots of case studies. So let me just ask you quickly about agents. You talked a little bit about customer service. Agents, I would say, is one of the biggest buzzwords I've ever heard covering tech. It does seem like some companies are using this technology to have
generative AI bots take action on their behalf, which, to me, I would say is the definition of an agent. So how far do you think we are in the rollout? And then, what is a multi-agent framework? That's a great question. It's early on, I would say. But let me just start with what we mean by an agent. An agent is an intelligent software system that has a set of skills. One of those skills, for example, is that it can reason.
Another is that it can use tools. Third, it can communicate with enterprise applications and systems, and do all that in order to, for example, automate, answer questions, or do something on your behalf. Here's a very simple example of the way to think about single-agent and multi-agent scenarios. I'm just going to use a communications example. I have a phone.
I want to decide whether I want to upgrade that phone or not. So I call my telephone company, and a digital agent, not a human agent, comes on and says: Thomas, I notice you're calling from this number. Let me find out, what are you calling about? And I say, I'd like to figure out a trade-in.
I notice you're on your mobile. Can I text you a link? Please take a photograph of your phone and upload it. I notice you have phone X, model Y. You have a cracked screen, so you're authorized for this much of a trade-in. So it's handling that interaction with the customer.
It's looking at my plan and my profile and saying, he's a premium customer, so he's eligible for a trade-in. So it's using a set of tools to calculate: do I have the right profile, and am I authorized for a trade-in? And then it's looking up a system to understand how much that trade-in amount is worth. So it's automating that flow, rather than saying the customer is calling in for a trade-in, let me transcribe that for a human, and then the human says,
tell me what phone they have, and then, they have phone X; tell me, is the screen cracked? Do you see what I mean? So that's the example. Yes. Now, where is agent-to-agent interaction? When this agent is functioning, it may need to, for example, say: hey, I'm going to send you the new phone, but you have to activate it. In order to activate it, I'm going to schedule you to go to our nearest retail store.
So, it may need to call a scheduling system to schedule an appointment for you. That scheduling system may be in some CRM, Salesforce or otherwise, where it needs to create a ticket for you so that when you go into the store, it says, "Friday morning, Thomas is showing up with his new phone. Let's have people ready to activate it." So, there's one agent talking to another agent, and that needs an open protocol. So, what we've done at Google is build an agent development kit
which has an API through which you can, one, create agents; we provide you a tool set to do it. We provide a set of tools that these agents can use. But we also have an open agent-to-agent protocol, supported by a lot of companies. It's an open-source project we're doing where you can connect our agent to any other agent.
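A toy sketch of the single-agent and agent-to-agent flow just described: the trade-in agent handles the customer interaction and its own tools, then hands scheduling off to a second agent over a shared message shape. The class names and message format here are invented for illustration; Google's Agent Development Kit and the A2A protocol define their own real interfaces.

```python
# Toy single-agent + agent-to-agent handoff, modeled with plain classes.
from dataclasses import dataclass

@dataclass
class A2AMessage:            # stand-in for an open agent-to-agent message
    task: str
    payload: dict

class SchedulingAgent:
    """Second agent: books the store visit and creates a CRM-style ticket."""
    def handle(self, msg: A2AMessage) -> dict:
        assert msg.task == "book_store_visit"
        return {"ticket": "CRM-1042", "slot": "Friday 9:00", **msg.payload}

class TradeInAgent:
    """First agent: talks to the customer, uses tools, delegates scheduling."""
    def __init__(self, scheduler: SchedulingAgent):
        self.scheduler = scheduler

    def appraise(self, phone_model: str, screen_cracked: bool) -> int:
        base = {"X": 300, "Y": 200}.get(phone_model, 100)   # tool: price lookup
        return base - (150 if screen_cracked else 0)

    def handle_call(self, customer: str, phone_model: str, cracked: bool) -> dict:
        offer = self.appraise(phone_model, cracked)
        booking = self.scheduler.handle(                     # agent-to-agent hop
            A2AMessage("book_store_visit", {"customer": customer}))
        return {"offer_usd": offer, **booking}

agent = TradeInAgent(SchedulingAgent())
print(agent.handle_call("Thomas", "X", cracked=True))
# {'offer_usd': 150, 'ticket': 'CRM-1042', 'slot': 'Friday 9:00', 'customer': 'Thomas'}
```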
Okay. All right. That's definitely something I'm going to keep in mind and keep watching as you guys keep rolling out these new products. All right. A couple more questions to get to. Now we get to the fun stuff, which is tariffs. We're talking today on Friday, April 4th. The interview is going to come out the following Wednesday, so the world might be changed by then. But I just need to ask you a question on tariffs. Okay.
This is a tweet from Gavin Baker, who's an investor. He said, geopolitically, nothing matters more than winning AI. These tariffs, as constructed, essentially guarantee that America will lose AI by making America the most expensive place on earth to build AI data centers. Do you agree with that? And how do you think these tariffs will impact your business?
I'm not going to comment on policy. We do have a global footprint. We have data centers, machines, networks, and subsea cables in many, many different parts of the world. That's part of Google's infrastructure, and I am responsible for that, along with the team. So we have
lots of places where we manufacture things, lots of places where we deliver things, and we are working through the implications of the tariffs for our part of the business. We're confident we can work through it, and we have lots of smart people, way smarter than me, working on solutions for how we manage through this environment, which is uncertain.
Right, but what about all the raw materials that come in? This is continuing on from Baker. He says: the semiconductor exemption was irrelevant for AI data centers. Semiconductors come into America in finished goods from Taiwan and other Asian countries, which include servers, storage systems, and networking switches. By the time we have developed the capacity to domestically produce these systems, we will have lost the AI race. I mean, you're buying this stuff. What do you think about that?
Some parts of our manufacturing, some significant parts, are here. And we have solutions to some of this. And I'll leave it at that, because the rest is confidential, how we're managing through this environment. Okay, let me just ask you one more quick follow-up, broadly.
For the parts that come from outside of the US: do you rely on suppliers outside of the US, and does that mean your costs will have to increase if the tariffs go into effect? We have mitigations and lots of other ways to protect our infrastructure and our costs.
I don't want to give more details than that, because it can lead to speculation on financial results, and I'm not going to get into that. But we've run a global infrastructure for Alphabet for many, many years. And part of our success at Google has been having good, low-cost,
highly scalable training and serving infrastructure for all our services: YouTube, Search, advertising, Waymo, etc. You know, I always tell people: trust that we know how to run a large global supply chain. And we've been working on contingency plans for quite a while. Okay. All right. You know, as we round out this interview and go to wrap up,
I want to tell you something that I've been observing as an outsider for quite some time. The conventional wisdom a number of years ago was that Google had all the technology in the world to compete in cloud, but none of the sales muscle. Google basically got used to selling in an automated fashion through AdWords and didn't know how to sell to people. I think you came into Google Cloud when revenue was a billion dollars a year. Now it's in the $40 billion range, and it's expected to be in the $50 billion range in 2025.
How did you guys learn how to sell to people? We learned how to sell by listening to customers and building a great, great, great sales team. In order to do cloud well, I think you have to do three really basic things. You have to anticipate customer problems and solve them in different ways than other people did.
So that's number one, and we're very proud of our ability to identify where the next customer pain point is going to be and solve it. Number two, we built a global sales team, and credit to our go-to-market organization.
We've done it; you know, it's a grind to build such a thing. That's why very few companies have done it successfully. And to grow from the scale we were at in 2019 to where we are now, no other enterprise software company has grown that fast, and that's a credit to our sales organization. We had to bring discipline. We had to start with a certain set of countries,
get critical mass there, then expand. We had to find the right mixture of sales reps, technical customer engineers, people who do customer service and customer support. And
we had to ensure that, for example, our contracting, our legal framework, all of the other things that sit behind the sales organization, were world-class. Super proud of that. Third, we have always believed that cloud is a platform business. The way that you grow is you provide a platform that lets other people grow on top of you, whether that's independent software vendors like Salesforce, ServiceNow, Workday, and SAP, all of whom have great relationships with us,
that you work with partners, for example, the relationship we have with Oracle and many other independent software vendors, Palo Alto Networks, et cetera, bringing them to our customer base jointly. And then lastly, for every customer who has in-house staff, there are many who don't, and they want partners to help them deliver the solutions. We made a decision early on, we're not going to have a big professional services organization specifically so that we can attract the partner community.
One stat we are super proud of, in 2019, we had about 1,000 partners. Today, we have 100,000.
And it's that, allowing people to grow with you and building that great sales organization, that's what transformed our business. When we talk to customers, and when you see them at the show next week, you'll see how proud they are of the difference in the way that Google works with them: that we listen to them, that we help them innovate their business. And it's not an IT vendor relationship with the vast majority of them.
Okay. Last question for you. Right now, cloud makes up like 15 to 20% of total overall tech workloads. So most of tech, most hosting, is still done on-prem. So from 15 to 20%, where do you think it can get to in the future? Can it go to 100%, or what do you think the cap is here? We definitely see it getting north of 50%. I mean, people...
There was the historical reluctance: I can do it cheaper, I can do it better, you know, my cybersecurity controls on-premises are better. There were lots of those arguments. Increasingly, people are seeing they don't make sense. And as the breadth of technology that you get in the cloud continues to mature (the cyber tools, the AI platforms, the analytical tools, how fast you can do something),
it's helping people move. Just as an example, last year we had Walmart speaking at a conference. Every transaction that happens at a Walmart gets into our cloud to allow them to analyze how much inventory they need to replace, which customers are buying, and what products are selling.
If you look at the volume of transactions, the accuracy, and how quickly they can get analysis into the hands of their store managers and retail store people, it's an order of magnitude faster. Our job is not to criticize customers who run stuff on their premises; there are always some reasons for it.
But increasingly, we've also built technology to take our cloud into their data centers if they want to. So for example, for people who have classified and highly sensitive workloads, we've taken our cloud into their data centers, and that's also a new way to deliver cloud. If you look at the work we're doing with McDonald's, we're putting our cloud into the restaurants. And so when people think about cloud, they used to think it's one definition. It's these big cloud regions that we have.
Increasingly, cloud also means the same technology can come into your premises. And that's also changing the definition of what percentage of workloads you can reach. All right, Thomas, good luck with the event this week. And thank you so much for coming on. It's great to meet you. I hope we can do this annually and keep talking about the adoption of AI and what Google's role will be in that. So thanks for coming on the show. Such a pleasure to speak with you, Alex. Thanks again for having me.
Likewise. All right, everybody. Thank you so much for watching. We'll be back on Friday to break down the week's news with Ranjan Roy. Until then, we'll see you next time on Big Technology Podcast.