We released the ability to ask follow-up questions. That doubled our engagement time on the site and also increased the number of questions every day. So I was like, "Okay, there's something here. It's not worth killing this and pivoting to enterprise." It wasn't like, "I want to go and kill Google," that sort of motivation. It was more like,
what is an idea of that scale and ambition? It's something like this. Today, my view of Perplexity is a more intelligent Google search that's really useful in certain scenarios. How do you want me to think of it in three or four years?
Welcome back to another episode of How to Build the Future. Today, we're joined by Aravind Srinivas, co-founder and CEO of Perplexity, which in less than three years has grown to more than a $9 billion valuation. Thanks for joining us. Thank you for having me, David. How did you get into this world? I was pretty interested in AI, deep learning research. That's actually what got me into the U.S.,
I was an undergrad in India, and I came here to do my PhD at Berkeley. Life really changed when I got to do an internship at OpenAI, and Ilya Sutskever was there. I still remember the day I first met him. I was very prepared and had all these fancy ideas that I thought were very interesting. And he listened for five minutes and said, all this research is useless. It felt really bad to hear that. So I got used to, you know,
hearing the right things, even if they're uncomfortable. And then he told me the only thing that matters: he drew two circles, one big circle he called unsupervised learning, and another circle inside it he called reinforcement learning. And he said, this is AGI; every other research doesn't matter. This was around the time when they were building GPT-1. They didn't even call it GPT-1. When I saw that research, I went back to Berkeley. I had been working a lot on RL.
That was the rage at the time because of AlphaGo and DeepMind. But that was kind of like chasing the trend. So I went back to my professor and said, "Hey, we have to actually go and study unsupervised and generative models and generative AI." So then I got into that.
and did more internships at Google. And during my Google internship, I stumbled upon this book called "In the Plex." So I would launch jobs during the day, training runs, and then go and read these books in the library because interns don't have anything else to do, right? And it would feel amazing that, oh, these guys actually were once upon a time grad students like me, and now I'm working as an intern in their offices. - Reading the book about them. - It feels nice.
It would be amazing to start a company like that in the future, where there's a lot of research, a lot of AI, and at the same time it's very grounded in product building.
It's very difficult to do that. And I spent a lot of time thinking about it. I even spoke to Ilya Sutskever about it, and we said there are probably only two problems where you can work on AI and also build a product at the same time. One is search and the other is self-driving cars. Because all your product rollouts become data points for improving the underlying AI in the product,
and that'll make the product even better, and that'll lead to more users, and more usage will lead to more data points, and it becomes a flywheel. It should also be on the AI-completeness path. It's sort of a buzzword to say this, but basically what it means is that better AI should keep making your product better. So that way you can keep working on your company until AI is solved.
And once it's solved, okay, sure, we'll worry about all those implications. But your company gets better as AI gets better, as opposed to your company getting run over by somebody else. Exactly. So search is one of those problems. So you're at this moment where you kind of have this realization that you want to start a company. How did you get the kind of...
activation energy to quit your great job at OpenAI and go do that? How did you find your co-founders? I came across this blog post that Daniel Gross, one of the former YC partners, wrote about how to build the next Google. And I think basically the core idea is that
you could do so much more with better query reformulation. You take a query and you just add some suffixes. So if someone's looking for reviews of a movie, just suffix site:rottentomatoes.com. If someone's looking for reviews of some new gadget, do site: the corresponding subreddit. You can get away with a lot of these suffixes, or special strings, to filter results
and already make Google so much better, even with the existing Google ranking. I'm not even talking about the ads problem, just
simple ranking. And then you can do more sophisticated things like classifying queries, and he was talking about how LLMs could automatically figure out these suffixes, and I was pretty interested in that. It seemed like maybe generative AI, or LLMs and generative models as I used to call them, could be a better way to build search engines too. I was also pretty interested in trying to do something agent-like.
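As an aside on that reformulation idea, here is a minimal sketch of what it could look like, assuming a crude keyword-based intent classifier and Google's standard site: operator; the intent table and function names are illustrative, not anything Daniel Gross or Perplexity actually shipped.

```python
# Toy sketch of query reformulation via site: suffixes.
# The intent keywords and target sites below are illustrative assumptions.

INTENT_SITES = {
    "movie_review": "rottentomatoes.com",
    "gadget_review": "reddit.com/r/gadgets",
    "code_question": "stackoverflow.com",
}

def classify_intent(query: str) -> str | None:
    """Crude keyword-based intent classifier; the blog's idea was to let an LLM do this."""
    q = query.lower()
    if "movie" in q and "review" in q:
        return "movie_review"
    if "review" in q:
        return "gadget_review"
    if "error" in q or "how do i" in q:
        return "code_question"
    return None

def reformulate(query: str) -> str:
    """Append a site: filter so the existing ranking surfaces better results."""
    intent = classify_intent(query)
    if intent is None:
        return query
    return f"{query} site:{INTENT_SITES[intent]}"

print(reformulate("reviews of the new Dune movie"))
# -> "reviews of the new Dune movie site:rottentomatoes.com"
```

In the blog's framing, an LLM would replace the hand-written classify_intent step and figure out the right suffixes on its own.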
DeepMind had this Android environment that they built, where they kind of wanted to prototype a mobile agent that knows when to use which apps and can control them. That's when I spoke to my co-founder and CTO. His name is Denis. We had written the same paper a day apart, so we knew each other, and he was a visiting student in my lab. And we used to talk about it and bring up ideas for how we could build agents to control the Android environment.
So we were definitely chatting about doing a lot of things, but never concretely about any company or product. The first thing that anyone would tell you is: why? Why even work on this? Of course Google's going to do this, right? It's not even like going and building a better Google Docs, where even if Google eventually does it, it's a secondary thing for them, so companies like Notion can still get funded.
This is their core crown jewel. So why would you even try? I think the reason it actually made sense, and again, we realized this after launching the product, not before, so there's some benefit to being ignorant, ignorance is bliss, is that if people stop clicking on links, the ad economy kind of dies. Now, there's a lot of nuance to this, but that core insight was only realized by us after launching. So once we realized that, I thought, okay, we were on to something, and that kind of
took us the last two years. Walk us through the first iterations of your experimentation. I know you did a bunch of demos that were very dissimilar from Perplexity. Yeah. So I was audacious enough to go and pitch to our first seed investor, Elad Gil, that, "Hey, I want to disrupt Google, but I kind of want to do it
from pixels, from a glass. And I think that's where you're not competing with people typing on the search bar. They're just seeing-- But even at that point, you knew in your mind, I want to go after Google? Yeah, it was not like I want to go and kill Google, that sort of motivation. It was more like, what is an idea of that scale and ambition? It's something like this. It was also around the time when multimodal models were slowly beginning to work.
So I thought, if you were on the trajectory of improving technology, you could build something pretty amazing. My investor rightfully said not to work on that in the beginning. So we focused more on searching over specific verticals or data sets or databases, tables actually. And we were an enterprise-focused kind of company, except nobody wanted to give us their data. I remember I used to
hustle for calls with PitchBook or Crunchbase because I wanted to build a demo that would first make sense to an investor. That way we could keep raising some capital, actually hire good people, and then go and do the real thing. And Crunchbase had all this data, PitchBook had all this data, but they just didn't want to give it to us. And so- Next best step: Twitter. Yeah, Twitter. Twitter.
This was pre-Elon-as-CEO. Academic access was allowed, legal. So we built a database of Twitter. We organized it in the form of tables. We tried to do it with the OpenAI Codex models. This was even pre-GPT-3.5. We wrote a lot of templates: oh, for these kinds of queries, these are example SQLs. And then the model would do a kind of RAG, pull relevant queries from the templates, and then write the actual SQL based on the template SQLs.
And that was the only way to get it to work reliably. And then we had a lot of fallbacks: in case errors happened, it would automatically correct them. And then it would go and query the database and retrieve the records. It was very nice. And it was a chat UI. You could chat, you could converse, you could plot. And this was like the first real product or demo that you guys did? Yeah, we did it very fast. It took only a month because there were only three people. But that energy in the beginning was insane.
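The pipeline he's describing roughly maps to a sketch like this, assuming a hypothetical generate_sql stand-in for the Codex call and plain keyword-overlap retrieval instead of embeddings; the templates and schema are made up for illustration.

```python
import sqlite3

# Hypothetical few-shot templates: example question -> example SQL.
TEMPLATES = {
    "who does elon follow": "SELECT followee FROM follows WHERE follower = 'elonmusk';",
    "most liked tweets this week": (
        "SELECT tweet_id, likes FROM tweets "
        "WHERE created_at > date('now', '-7 day') ORDER BY likes DESC LIMIT 10;"
    ),
}

def retrieve_templates(question: str, k: int = 2) -> list[str]:
    """Naive RAG step: pick the template SQLs whose example question overlaps most."""
    words = set(question.lower().split())
    scored = sorted(
        TEMPLATES.items(),
        key=lambda kv: len(words & set(kv[0].split())),
        reverse=True,
    )
    return [sql for _, sql in scored[:k]]

def generate_sql(question: str, examples: list[str]) -> str:
    """Stand-in for a Codex-style model call that writes SQL from the template examples.
    Here we just return the best-matching template to keep the sketch self-contained."""
    return examples[0]

def answer(question: str, db: sqlite3.Connection, retries: int = 2):
    examples = retrieve_templates(question)
    sql = generate_sql(question, examples)
    for _ in range(retries + 1):
        try:
            return db.execute(sql).fetchall()
        except sqlite3.Error as err:
            # In the real system the error would be fed back to the model
            # so it could rewrite the SQL; here we just retry and surface it.
            last_error = err
    raise last_error
```

In the real system, the retrieved template SQLs would go into the model's prompt, and execution errors would be fed back so it could correct its own SQL.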
And we showed it to a bunch of people and they all loved it. Mainly, there were two reasons. One, something like that never existed before. Like, you could never actually do that on Twitter search. Exactly. To this day. Even today, right? And then people also loved the social search aspect
of knowing who other people are following, whose tweets are they liking, whose tweets are they not liking, who did they unfollow this week? Those kind of diffs. It's all funny. So you launched this Twitter search thing. How did you transition from that to now what we all know as Perplexity? Yeah, so we had that, right? And then we were trying to do something similar to that for many different databases. GitHub. Like if coders could go and search about repos.
or LinkedIn, if you could almost be a recruiter and just say... but even now it's pretty hard. I can't say, I want all the people who have been YC founders and who also worked at a Series C or D startup, because they would know what it means to be scrappy. It's very difficult to do these queries. Oh yeah, using the LinkedIn UI to do that. You cannot do that, right? For whatever reasons, people don't want to give out their data; they paywall it.
If such technology existed, we'd be creating way more value, but it doesn't exist, for many other reasons. We were beginning to see how, even with the capability of the models at that time in 2022, pre-3.5 Turbo,
things were actually pretty reliable, to the extent where people would use this and find value. I actually read this Paul Graham tweet, I think, about how often you figure out the better solution when you try to solve a harder version of the problem, and you end up with a simpler solution that's more general and scalable. That's what we realized. Okay, there's one way of doing these things where we go to each of these domains, try to build an index of it, put it into specific formats like tables, and then have the LLM read that in a structured language like SQL.
Or you could do it the other way, where you just keep it unstructured and expect the LLM to do most of the work at inference time, at the time of the query, and don't do all this work at indexing time. And we clearly knew that if the second is where the world is headed, where the models will get smarter and smarter,
it gives you an advantage to build it that way because it's more general. And you also stand a chance against the legacy system that Google has built, which is a lot more in the first style. So we thought, okay, we would try to build the more general solution. And then we prototyped this thing one weekend, actually. John Schulman's team had already published this thing called WebGPT at the time, so I was pretty aware of it. OpenAI even had a bot when I worked there, called TruthBot, that John built with his team, where you could ask it a question and it would go and search the web and give you an answer with some sources. It was very slow, and it was built with the 175B GPT-3 model, so incredibly slow and inefficient. It was more agentic: it would actually be like an RL agent that decides whether it wants to click on a link, browse it, scroll. It was very slow.
So what we tried is a very simple heuristic version, but much faster, which is, okay, you always take the top K links that a search API provides you. You always only take the summary snippets that the index has already cached. So there's no scrolling, there's no clicking. And you always feed all those links into the prompt. So there's no selection. Ask it to write a summary with sources in like the academic format.
And that's it. When these models were getting to the point where 3.5-Turbo-class models were beginning to come out, this actually started working really well. Yeah, the instruction-following capability increased enough that you didn't have to be very, very rigorous about it. Gotcha. So you kind of did the dumb approach, betting on the fact that the AI would get good enough to make all of that work. Right, timing. I would say a year earlier, when John and his team tried it, the models were just so much worse that if you tried the dumb approach, it just wouldn't work. And so therefore, you would conclude that you need a smarter approach. But then when the models became much better at instruction following, the dumb approach actually works. And that fixes a core product UX problem of latency. You are used to links appearing instantly on a traditional search engine.
Right? Even then, by the way, the first version we launched, which is the answer version, took seven seconds or something to... Because we didn't even have this concept of streaming answers. We would wait till the entire answer was generated. We couldn't control the verbosity, so sometimes the answer would be very, very big. We even had to hard code a prompt saying only write five sentences or something like that, or 80 words. To keep it fast. Yeah, exactly. Okay, so you launched this.
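The "dumb approach" described above amounts to something like this sketch, where search_api and llm_complete are hypothetical stand-ins for a real search API and a 3.5-Turbo-class model; the five-sentence limit mirrors the hard-coded verbosity cap he mentions.

```python
from typing import TypedDict

class Hit(TypedDict):
    url: str
    snippet: str  # cached summary snippet from the search index

def search_api(query: str, k: int = 5) -> list[Hit]:
    """Hypothetical search API returning the top-k links with cached snippets."""
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    """Hypothetical call to an instruction-following model."""
    raise NotImplementedError

def answer(query: str, k: int = 5) -> str:
    hits = search_api(query, k)  # always top-k: no clicking, no scrolling, no link selection
    sources = "\n".join(
        f"[{i + 1}] {h['url']}\n{h['snippet']}" for i, h in enumerate(hits)
    )
    prompt = (
        "Using ONLY the sources below, answer the question in at most five sentences. "
        "Cite sources inline in academic style, e.g. [1], [2].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )
    return llm_complete(prompt)  # in production this would be streamed to cut perceived latency
```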
When was the first moment that you thought, oh, I'm onto something here? So we tweeted it. Okay. While writing the tweet, I was like, you know, people are going to ridicule it, it's going to make mistakes, blah, blah, blah. The first moment of virality came when one annoyed intellectual, an academic, came and searched for herself. It gave her biography in the past tense. And she's like, I'm still alive, what the hell?
But actually what happened was there was a person with the exact same name and spelling who had died, and the LLM thought she had died and wrote in the past tense. I actually thought that was pretty clever reasoning by the model, except it wasn't high-order enough to know that they're different people. So then that got us a lot of attention. People were beginning to think, okay, look, the sources thing is good, but can we really trust the answers these things are giving? And then...
That got into this trend of people searching for themselves. This is something that keeps happening time and again with all consumer products. When I got a chance to speak to Mike Krieger, he said the same thing: even though you can click on your own profile icon and go back to your photos, people always love to go to their
profile on Instagram by typing their username in the search bar. It's such a human habit. So a lot of people started putting in their Twitter handles or social usernames, and then it would mash up all their activity across the internet, including stuff they did in their childhood, many years ago. Yeah.
and then give these interesting summaries, and they would screenshot it and share it. So I thought there was something there. So there's something driving it. Yeah, but I still wasn't sure. And then we released the ability to ask follow-up questions. That doubled the engagement time on the site and also increased the number of questions every day and the number of people; the number of questions per day was increasing exponentially.
So I was like, okay, there's something here. It's not worth killing and pivoting to enterprise. You have this initial momentum. And you said earlier, it wasn't until hindsight that you had the idea that like, oh, we actually have a chance of competing with somebody like a Google. When did that realization happen in this journey? How'd that go down? So I never really thought about the Google competition in a serious way, to be very honest, because I knew that like,
They cannot make this exact product on the Google homepage. It's so hard to know when a query is purely informational or not. And the Google search page is already so cluttered: there's the answer box, the knowledge panel, there are some ads, some links, you know, perspectives from socials, all these social cards. It's already too much information. So it clearly feels like the difference between fast food and a healthy meal when using Google and Perplexity, even on informational queries. I was more worried about Microsoft in the beginning because they were launching Bing Chat. In fact, on the day we agreed to a term sheet, like shook hands on a term sheet with one of the venture capital investors, NEA, here on Sand Hill Road, after one week of torturous pitches,
we were just having coffee, and then there were leaked screenshots of Bing Chat. I was like, okay, there's this 30-day due diligence period. One of the other investors who would give me a term sheet just increased it to 45 days. You could see the diff; it was put in sneakily. I clearly knew why.
He also texted, what do you think about this thing? Okay, okay, I get it, I get it. You're getting a little sheepish. And then the other person I shook hands with texted me that night saying, hey, do you have time for a call tomorrow? Clearly, this is it, right? So I told my co-founder, okay, maybe they're going to back out or ask us to pivot. So
maybe we should just try to sell the company and get it done. You know, this is not gonna go anywhere. But the person who actually shook hands with us said, look, I'm not gonna ask you to pivot. I'm not gonna ask you to do anything different.
You guys keep going; we already shook on it, and word is word. I was like, damn, that's pretty impressive. And then the next week, Google also released a blog post from Sundar announcing something called Bard, with screenshots. So we knew this was gonna get pretty big and competitive, but we were like,
Look, at the end, Microsoft was never really good at consumer products for a long, long time. You can't suddenly change that. So they actually messed up the opportunity, in my opinion. Google, obviously, I knew.
they're going to have their own problems and challenges. So I felt like there was space for someone else here. Yeah. Having spent almost a decade at Google myself, I see a lot of the culture of the early days of Google, like the things I've learned about Larry or about Sundar. And I see a lot of that in the way that you have built your product. There's a lot of attention to detail. It feels like you are the primary user of the product yourself. Is that a thing that you deliberately tried to do? Yeah, I did. I did deliberately try to do it.
One thing that Larry said, and I keep reminding everyone in our company about it: the user is never wrong. So even today, while testing a new feature, it didn't work, but there was some ambiguity in the query. So I was talking to the engineer and saying, hey, you know, this is not good. What else could the AI have done here? And
you know what the AI should have done? It should have come and clarified to me and asked me, "Hey, I'm not sure either this or this. Which one did you actually want?" And then I should have clarified and then it should have gone and done it. Instead of saying, "I don't know." That is the user is never wrong principle. The other way of designing products is like,
make the user be a better prompt engineer. Blame the user and tell them to be a better prompt engineer. Teach them, educate them, get them to do it the way that the product wants them to do it. Yeah, exactly. Enterprise software is more like the second kind, but magical consumer products are more like the first kind. I agree. Like in Google, why should Google handle typos? They needn't have, right? We should all have been great at English.
It's like Larry says, he was never good at spelling and that's why. I think the true story is that YC partner Paul Buchheit was just annoyed by it and was like, someone should build that. Yeah, exactly. And the spell-check corrector is all there. Similarly, autosuggest: why is it there? To make it easier, right? Similarly, cached results. I was even reading somewhere that Larry wanted the homepage to have a simulation of the weather outside your home
so that you don't even need to type the weather query. It's just already there. So I was very influenced by that style of design, including subtle things like the Chrome search bar: if you've already gone to a site, it's already there; you just have to press enter after typing the first two letters. So that influenced me to make sure we have the cursor ready to type
on the search bar. You don't need to take your mouse and place it there. It sounds like the main metric that you care about is the number of queries per day, which is exactly what Google did, I think, in the early days, right? It's hard to grow that without retention. In the long run, I agree. You cannot just pay for a user and get that number up. A user could install your app, and maybe you can even game it so that when they install, one query gets automatically submitted; but a repeat query can't be gamed that way. Yeah, I think the only counterexample, which I don't think is happening in your case, is if the product is not serving their needs, and so they need to issue a bunch of queries to get what they want, which is kind of the opposite of
Larry's approach on Google, which was that you should be on Google for as short a time as possible because we're trying to get you somewhere else to solve your problem. Yeah, yeah. So that's not happening. I mean, sure, I'm sure there are some errors and stuff, but most of the follow-up queries we see are completely unrelated to the first query, because they just want to keep continuing the session, or they're questions they never even knew they wanted to ask, but they want to keep asking. Yeah.
So I presume your team has grown a bunch, you've raised a bunch of money. How do you manage the team? How do you operate your team on a week-to-week or cycle-to-cycle basis with that number of queries per day as your primary metric? So at every all-hands, we start with that number. I don't believe in putting up a TV and having the metric being seen every day, because I think that's also distracting. But I do think it makes sense to take a look every week, see the weekly growth rates, see the monthly growth rates, and if something declined, discuss it and figure out why. Do we actually freak out if something declines? We do. And if something grows, we look into why and where. So we are very data-driven and we share it across the company.
Actually, I've been trying to share it with the users too, so that they feel like it's something that's actually happening right in front of their eyes and they want to be part of it. There's no hierarchy: if there's some bug to be fixed and I know some particular person's working on it, I can go and talk to the person directly. Nobody else feels threatened because I'm going and talking to that person. There is no feeling that because I'm raising a bug, oh, they're going to be fired or something, because I raise like 50 bugs a day. So, you know,
It's more like they understand, okay, it's important for the product to feel great. And if it doesn't feel good for ourselves, then the user is also not going to feel that. In fact, we have way more incentive to go use our own product, but the user doesn't. So the standards for the user should be even higher. So always feel like a user. I think that culture is there in the company. I love that. And did you...
intentionally select for that when you were hiring? Like people who are very product-centric and in the details? I wouldn't say I explicitly have that as a criterion, but I look for people who care about doing good work. If you don't care and you're just treating it as a job, then
It's very hard for you to get excited about things. And I think so much of it feeds off of the founders and your culture, your DNA. And it sounds like you're the type of person that obsesses over the details, and you're just going to naturally want to hire people who share that trait. Yeah, I do get pissed off if answers are wrong. And I do get pissed off if people on Twitter are saying, "Perplexity is degrading." But a lot of those things, some of them, are actually not true.
But I do try to look, leaving aside the cynicism, even if it was someone who's a hater: if there was something true there, I still want to know. Yeah. I love seeing you engage on Twitter with customers. Is that the primary way that you talk to users, or are there lots of other ways that you are talking to users? So I mainly use X, Twitter. People are just, like, super brutally honest there. And, yeah,
I think in email people are a lot more polite. Which is okay too, I like both sides. But I think the brutal honesty brings out the worst bugs.
and things that people are afraid to say. - Oh, and in person is the worst, where you go show someone something and they're just gonna tell you good things. - Yeah, yeah. - Even if they hate it. - I kinda don't like any of the "Hey, tell me what you think" stuff. You're always gonna say nice things. - You're gonna grow your company, presumably, you're gonna need to hire more people. How do you avoid the fate of becoming a big slow company? - Well, it's beginning to happen already a little bit.
We're not as fast as we used to be. I think some of it is not because of people. It's also because when things break in production, people start losing trust in the product. Like, today we deployed some change and then someone got frustrated that there was some front-end bug somewhere. It was actually something back-end, but people are just assuming things. I think like
fast loading, all that stuff matters and not every new engineer has the full context of the code base in their head. The earlier ones do. There's some tension in moving fast and breaking things if you do want to grow to mass market usage.
So that's mainly slowing us down, I would say. And we haven't quite figured out the best way to do this fast. I mean, we do have staging, deployment testing, A/B tests, and all that stuff's happening. And that naturally slows us down in getting things out to production widely. Other than that, I would say, the obsessive, detail-oriented people, there are only so many of them in the world. So obviously you cannot expect
engineer number 250 to be like that. But I try my best to still go flag bugs to whoever's working on whatever new feature. I know who's working on what, even at this size. I still try. I think my co-founders are amazing. They care, and they push that principle when they're building their teams. So we are trying our best. I'm not saying we've figured it out or cracked it.
But at least we are trying to fight the entropy here. I think that's the only thing you can try, right? Yeah. And it's an uphill battle, but if you keep on it, yeah. Okay, let's talk a little bit about the future. I've seen your most recent launches are kind of like in different directions, more verticalized or more specific around shopping or other things. Where do you want to take it?
Today, my view of Perplexity is a more intelligent Google search that's really useful in certain scenarios. How do you want me to think of it in three or four years? If you go and research what's the best sweater to buy or which is the best hotel to stay at in this location, Perplexity will give you a great answer. But where do you actually go and fulfill the demand? You go to Google. And who gets credit for that, monetarily? Google or us? Google. We get nothing. Maybe we get you a Pro subscription, but then someone else will undercut us and give it away for free with cheaper models, or whatever; they have bigger cash reserves.
So the challenge is like you want to be one place where people can have the end-to-end experience. They start with a problem in their mind and they seek your help. You give them the answers and you also help them fulfill the action. It's difficult because people think like if you have an answer of like, oh, what watch does Bezos wear? I think he wears some Omega or something.
I personally thought it would be amazing if it not only gave the answer but also had a product card for the specific Omega watch with a buy button, and I just click buy and it's done. But there are other people in the world who think that's an ad. It's not even an ad, right? They think the company is paying us to do this. So this is where some of the tension comes from, between the early adopters who love the ad-free informational experience and what is actually needed to go mass market and really be useful on a daily basis. There are so many other things, like checking the score of a game or quickly getting to a website:
if you just wanted to get the docs link for an API, or if you just wanted to go and book a flight on United, the answer could just be a link. The answer could be the weather for tomorrow, just the temperature. Or someone's age or net worth: you type "Elon Musk net worth" and you'll get the answer in less than a second on Google, right? On Perplexity, it'll pull the right source, and maybe it's even more accurate than Google, but people don't care about some of these minor details. So what you need to build is this amazing orchestration of small models, traditional knowledge graphs, widgets, LLM streaming answers, and more complicated multi-step reasoning answers. But the user doesn't care. The user's not going to tell you when to use what. You decide. That's the AI nobody talks about: when to use what, that sort of router, that orchestrator.
I think that's the hardest thing to build. And whoever builds that, can operate it at the scale of a billion users, and also knows how to monetize some of those queries really well is going to be the next Google. Because they'll have the search bar, they'll know exactly what to do, they'll go and ask clarifying questions, they'll truly understand the user,
and also does tasks for you, and also lets you surf the web in a typical way, all in one experience. You could even argue maybe nobody will ever be able to build this because it feels like a daunting task.
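A toy version of the routing idea he's describing might look like the sketch below; the intent labels, handlers, and keyword classifier are illustrative assumptions, not Perplexity's actual architecture.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Result:
    kind: str      # "link", "widget", or "answer"
    payload: str

# Illustrative handlers; each would be backed by very different machinery.
def navigational(query: str) -> Result:
    return Result("link", "https://www.united.com")  # jump straight to the site

def weather_widget(query: str) -> Result:
    return Result("widget", "render a weather card from a structured weather API")

def fact_lookup(query: str) -> Result:
    return Result("answer", "short answer from a knowledge graph or small model")

def deep_research(query: str) -> Result:
    return Result("answer", "multi-step LLM reasoning with a streamed, cited answer")

ROUTES: dict[str, Callable[[str], Result]] = {
    "navigational": navigational,
    "weather": weather_widget,
    "fact": fact_lookup,
    "research": deep_research,
}

def classify(query: str) -> str:
    """Stand-in for the router model: a tiny, fast classifier in front of everything."""
    q = query.lower()
    if "book a flight" in q or q.endswith(".com"):
        return "navigational"
    if "weather" in q:
        return "weather"
    if "net worth" in q or "age" in q:
        return "fact"
    return "research"

def orchestrate(query: str) -> Result:
    # The user never sees this decision; they just get the cheapest, fastest
    # machinery that can satisfy the query.
    return ROUTES[classify(query)](query)

print(orchestrate("Elon Musk net worth").kind)   # -> "answer"
print(orchestrate("weather tomorrow").kind)      # -> "widget"
```

The point is that the user only ever types into one box; the orchestrator decides whether the right response is a link, a widget, a quick fact, or a full multi-step, streamed answer.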
But I could say whatever Google has already built is the closest system to something like this. Agreed. So the next generation of this clearly can be built. You just have to persevere and work for a decade or two on this problem. If I talk to people at Google, they would say, yep, that's what we're building. In fact, I know they've been saying that for more than a decade. Probably same at OpenAI, probably the same at Anthropic. When you look at the people that you likely will be competing against in the next 10 years, what do you think is the...
piece that's maybe going to give you the edge to win? Obsession over the user and good product taste. There are a lot of these things that require a lot of domain knowledge. Out of the list you mentioned, Google is the only company that actually has the product taste to do this. And arguably all the distribution in the world, everything, right? Except the dilemma is also there. It's funny:
it's a search company, but it's also an ads company. And search almost exists in service of the ads business, not the other way around. And you could argue, okay, outside of the search revenue every quarter, which is close to $200 billion a year, there's still like $100 billion from other places, YouTube and Cloud, but the margins
are all coming in search, right? Cloud is only recently profitable. YouTube is never going to be a high margin business because number one, they don't serve ads on subscription users. And number two, they have to pay the creators, they have to pay the media partners.
So it's never going to be as high-margin as search. So you're arguing basically that the stock price is going to be their encumbrance. Correct. Because Wall Street is, like, crazy. It just automatically panics if search revenue goes down.
But search revenue has to go down in a world where people are just directly talking to AIs and agents are doing stuff for them. That doesn't mean they're not going to do anything about it. They're still building Gemini and the new app. The hypothesis is that they're not going to be able to easily put it on core Google, where they already have all the billions of users. And that's true, right? Right. Yeah. You're arguing that whoever wins this in the long run will kind of, by definition, need to come up with a new monetization model, a new business model. Yeah. And there's a ton of other problems to solve. Like,
for shopping or travel, all these things: which merchants do you use, which hotels do you plug into, who's the middleman there, who handles the booking, and what if a customer wants to cancel? Google actually solved a lot of these problems too, right? It's not just PageRank or MapReduce or all these advances that they made in vision and deep learning and BERT and Transformers. It's not just that. That is great, but they also did a lot of other boring work of building Google Finance, Google Shopping, Google Flights. I feel like Perplexity is better positioned to do these things than OpenAI and Anthropic because we have it in our DNA to care about the user and the product. We're not just talking about reasoning and models, right? But we are pretty familiar with all those things.
And we are very much capable of taking the latest open-source models and serving them ourselves, fine-tuning them, post-training them, and doing evals. We're not AI-illiterate. We're not going to spend all our bandwidth building data centers and chips and trying to just talk about breaking the most recent coding and math benchmarks. I think there's value in that, but it's quite orthogonal to building the next-generation information experience. All right, Aravind, thanks so much for joining us. It was great chatting. Thank you for having me again.