
Sergey Brin, Google Co-Founder | All-In Live from Miami

2025/5/20

All-In with Chamath, Jason, Sacks & Friedberg

Sergey Brin: I recently went back to work at Google, and it has been a fascinating experience. Not long after retiring, I was inspired by Dan, an OpenAI employee, and realized that AI is the most transformative moment in computer science. As a computer scientist, I am excited by the exponential pace of AI, which outstrips any technology before it. I came back to Google to understand every part of these AI systems in depth and to make sure I stay on top of the latest developments. I pay close attention to both pre-training and post-training and participate in both. I think AI's superpower is doing things at a scale I cannot, such as rapidly processing large amounts of information and performing complex analysis. I was baffled by an internal rule banning the use of Gemini for coding, argued the point, and eventually got it fixed. I think models will converge further; specialized models can be smaller, faster, and cheaper, but the trend has not gone that way. We pursue both open and closed models and keep pushing the boundaries of intelligence, context, and more. On hardware, Gemini mostly uses our own TPUs, but we also support NVIDIA. I recommend the Gemini App, which has the best models. We will probably always have some top models we cannot immediately offer free to everyone, but I am in favor of good AI advertising, and each new generation's free tier is usually as good as, and sometimes better than, the previous generation's pro tier.


Chapters
Sergey Brin discusses his return to Google, driven by the excitement of AI advancements. He emphasizes the exponential pace of AI development and its transformative impact on computer science.
  • Sergey Brin's return to Google motivated by AI advancements
  • The exponential pace of AI development dwarfs anything seen before
  • AI development as the culmination of 30-40 years of progress

Transcript


We've got a special guest who's going to come join us. This always happens. Another guest. Here he is, Sergey Brin, everybody. Oh, my god. Somebody told me you started submitting code.

and it kind of freaked everybody out that daddy was home. - All models tend to do better if you threaten them. - If you threaten them. - Like with physical violence. - Yes. - Management is like the easiest thing to do with AI. - Absolutely. - It must be a weird experience to meet the bureaucracy in a company that you didn't hire. - But on the other side of it, I would say it's pretty amazing that some junior muckety-muck can basically look at you and say, "Hey, go yourself." No, but I'm serious. That's a sign of a healthy culture, actually.

You're punching a clock, man. I hear the reports. You and I have talked about it. You're going to work every day. Yeah, it's been, you know, some of the most fun I've had in my life, honestly. And I retired like a month before COVID hit, in theory. Yeah. And I was like, you know, this has been good. I want to do something else. I want to hang out in cafes. Yeah.

Read physics books. And then like a month later, I was like, that's not really happening. So then I just started to go to the office, you know, once we could go to the office. And actually, to be perfectly honest, there was a guy from OpenAI, this guy named Dan. And I ran into him at a little party. And he said, you know, look, what are you doing? This is like the greatest...

transformative moment in computer science ever. And you're a computer scientist. I'm a computer scientist. Forget that. You're a founder of Google, but you're a PhD student for computer science. I haven't finished my PhD yet, but working on it. Keep working. You'll get there. Technically on leave of absence. Right. And he told me this, and I'd already started kind of going into the office a little bit, and I was like, you know, he's right.

And it has been just incredible. Well, you guys all obviously follow all the AI technology. But being a computer scientist, it is the most exciting thing of my life, just technologically. And the exponential nature of this, the pace of it, it dwarfs anything we've seen in our career. It's almost like everything we did over the last 30 or 40 years has led up to this moment today.

And it's all compounding on itself. The pace, maybe you could speak, you know, you had a company, Google, that grew from, you know, 100 users and 10 employees to now you have over 2 billion people using, I think, six products or five products have over 2 billion. It's not even worth counting because the majority of the people on the planet touch Google products.

Describe the pace. Yeah, I mean, the excitement of the early web, like I remember using Mosaic and then later Netscape. How many of you remember...

Mosaic? Actually? You weirdos. And you remember there was a What's New page. The What's New page is great. Two or three new web pages. Yeah, it was like this last week, these were the new websites. And it was like such and such elementary school, such and such a fish tank. And you were like, wow. Michael Jordan appreciation page. Yeah, whatever it was, these were the three new sites on the whole internet.

So obviously the web, you know, developed very rapidly from there. And that was very exciting. And then we've had smartphones and whatnot. But, you know, this...

The developments in AI are just astonishing, I would say, by comparison. Just because of, you know, the web spread, but didn't technically change so much from, you know, month to month, year to year. But these AI systems actually change quite a lot. You know, like if you went away somewhere for a month and you came back, you'd be like, whoa, what happened? Somebody told me you started submitting code to...

and it kind of freaked everybody out that Daddy was home. Daddy did a PR? What happened? The code I submitted wasn't very exciting. I think I needed to add myself to get access to some things and a minor CL here or there. Nothing that's going to win any awards. But you need to do that to...

To do basic things, run basic experiments and things like that. And I've tried to do that and touch different parts of the system so that, you know, first of all, it's fun. And secondly, I know what I'm talking about. It really feels like a privilege to be able to kind of go back to the company, not have

any real executive responsibilities, but be able to actually go deep into every little pocket. Are there parts of the AI stack that interest you more than others right now? Are there certain problems that are just totally captivating you? Yeah, I started, you know, like sort of...

I don't know, a couple of years ago and maybe a year ago, I was really very close with what we call pre-training. Actually, most of what people think of as AI training, whatever people call it, pre-training for various historical reasons. But that's sort of the big, super, you know, you throw huge amounts of computers at it. And...

I learned a lot just being deeply involved in that and seeing us go from model to model and so forth and running little baby experiments, but kind of just for fun so I could say I did it. And more recently, the post-training, especially as the thinking models have come around. And that's been another huge step up in general in AI. So,

you know, we don't really know what the ceiling is. - When you explain what's happening with prompt engineering then to deep research and what's happening there to like a civilian, how would you explain that sort of step function? 'Cause I think people are not hitting the down caret and watching deep research in Gemini's mobile app, and you got a mobile app and it's pretty great. And by the way, I got the fold after you and I were talking about it. Okay, Google.

kick Siri's ass now. Like, it actually does what you ask it to do. When you ask it to open up, it does stuff. But the number of threads, the number of queries, the number of follow-ups that it's doing in that deep research is 200, 300? Maybe explain that jump and then what you think the jump after that is. To me, the exciting thing about AI, especially these days, I mean, it's not, like, quite AGI yet, as people are seeking, or it's not superhuman intelligence. But

It's pretty damn smart and can definitely surprise you. So I think of the superpower is when it can do things in a volume that I cannot. Yes. Right? So, you know, by default, when you use some of our AI systems, you know, it'll suck down whatever top 10 search results, you know, and kind of pull out what you need out of them, something like that. But I could do that myself, to be honest. You know, maybe it would take me a little bit more time.
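The "volume" superpower he's describing, pull down many results and then recurse with follow-on searches for each one, can be sketched as a small loop. The `search` and `read_deeply` helpers below are hypothetical stand-ins, not any real Gemini or Google API:

```python
# Minimal sketch of the fan-out pattern described above. Both helpers are
# placeholders: a real system would call a search backend and a summarizer.

def search(query: str, n: int) -> list[str]:
    # Placeholder: pretend these are the top-n result documents.
    return [f"{query} result {i}" for i in range(n)]

def read_deeply(doc: str) -> str:
    # Placeholder: a real implementation would fetch and summarize the page.
    return f"notes on {doc}"

def deep_research(query: str, breadth: int = 10, follow_ups: int = 3) -> list[str]:
    notes = []
    for doc in search(query, breadth):
        notes.append(read_deeply(doc))
        # Follow-on searches for each result: the part no human can do
        # by hand at this volume.
        for follow in search(doc, follow_ups):
            notes.append(read_deeply(follow))
    return notes

notes = deep_research("F1 deaths per mile", breadth=5, follow_ups=2)
print(len(notes))  # → 15, i.e. 5 * (1 + 2) documents read
```

Even at a modest breadth of 1,000 with a few follow-ups each, the same loop reads thousands of pages, which is the "week of work" a person cannot do.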

But if it sucks down the top, you know, thousand results and then does follow-on searches for each of those and reads them deeply, like, that's, you know, a week of work for me. Like, I can't do that. This is the thing I think people have not fully appreciated who are not using the deep research products. Before we had our F1 driver on stage, I'm a neophyte, I don't know anything about it. I said, how many deaths occurred per decade? And I said, I want to get to deaths per mile driven. And at first it was like, that's going to be really hard. I was like...

I give you permission to make your best shot at it and come up with your best theory. Let's do it. And it was like, okay. And it

was like, there's this many teams, there's this many races. Which model did you use? OpenAI? No, I used Gemini 2.5. Gemini's fabulous version? The fabulous one. And it was like, let's go. I treat it like, I get sassy with it, and it kind of works for me. You know, it's a weird thing. Is he drinking the wine? We don't circulate this too much in the AI community. But not just our models, but all models tend to do better if you threaten them.

If you threaten them. Like with physical violence. Yes. But like that's, people feel weird about that. So we don't really talk about that. Yeah. I was threatening with not being fabulous and it responded to that as well. Yeah. Historically you just say like, oh, I'm going to kidnap you if you don't blah, blah, blah. Yeah. They actually. Can I ask you a more. But hold on. But it went through it. Okay. And it literally came up with a system where it said, I think we should include practice miles. So let's say there's a hundred practice miles for every mile on the track.

Then it literally gave me the deaths per mile estimated. Then I started cross-referencing and I was like, "Oh my God, this is like somebody's term paper for undergrad." Like, whoa, done in minutes. It's amazing and all of us have had these experiences where you suddenly decide, "Okay, I'll just throw this to AI. I don't really expect it to work." Then you're like, "Whoa, that actually worked." So as you have those moments,

And then you go home to your just life as a dad. Have you gotten to the point where you're like, what will my children do? And are they learning the right way? And should I totally just change everything that they're doing right now? Have you had any of those moments yet?

Yeah, I mean, I don't really know how to think about it, to be perfectly honest. I don't have a magical way. I mean, see, I have a kid in high school and one in middle school, and the AIs are basically...

you know, already ahead, you know. I mean, obviously there's some things AIs are particularly dumb at and they, you know, they make certain mistakes a human would never make. But generally, you know, if you talk about like math or calculus or whatever, like they're pretty damn good. Like they, you know, can win like math contests and coding contests, things like that against, you know, some top humans. And then I look at, you know,

okay, my son's going to go on to whatever, from sophomore to junior, and what is he going to learn? And then I think in my mind, and I talked to him about this, well, what is the AI going to be in one more year? Are there areas where you would tell your son, look, don't, or not yet? I don't know if you can plan your life around this. I mean, I didn't particularly...

plan my life to like

I don't know, be an entrepreneur or whatever. I just liked math and computer science. I guess maybe I got lucky and it worked out to be useful in the world. I don't know, I guess I think my kids should do what they like. Hopefully it's somewhat challenging and they can overcome different kinds of problems and things like that. What about specifically? What about college? Do you think college is going to continue to exist as it is today? I mean, it seems like college was already undergoing this kind of...

even before this sort of AI challenge of people are like, is it worth it? Should I be more vocational? What's actually going to be useful? So we're already kind of entering this kind of situation where there's sort of questions asked about colleges. Yeah, I think AI obviously puts that at the forefront. As a parent, I think a lot about, hey, so much of education in America, in the middle class, upper class, is all about education.

What college? How do you get them there? And honestly, lately, I'm like, I don't think they should go to college. Like, it's just fundamentally. You know, my son is a rising junior, and his entire focus is he wants to go to an SEC school because of the culture. And two years ago, I would have panicked.

And I would have thought, should I help him get into a school, this school, that school? And now I'm like, that's actually the best thing you could do. Be socially well-adjusted, psychologically deal with different kinds of failures, you know? Enjoy a few years of exploration. Yeah. Yeah. Sergey, can I ask you about hardware? You know, years ago, Google owned Boston Dynamics, maybe a little bit ahead of its time. But the way these systems are learning,

through visual information and sensory information and basically learning how to adjust to the environment around them is triggering these kind of pretty profound like learning curves in hardware. And there's dozens of like startups now making robotic systems. What do you see in robotics and hardware

Is this a year or are we in a moment right now where things are really starting to work? I mean, I think we've acquired and later sold like five or so robotics companies, Boston Dynamics being one of them. I guess if I look back on it, we built the hardware. We also had this more recently, when we built out Everyday Robots.

and then later had to transition that. You know, the robots are all cool and all, but the software wasn't quite there. That's every time we've tried to do it to make them truly useful. And...

Presumably one of these days that'll no longer be true. But have you seen anything lately? And do you believe in the humanoid form factor robots or do you think that's a little overkill? I'm probably the one weirdo who doesn't, who's not a big fan of humanoids. But maybe I'm jaded because we've at least acquired at least two humanoid robotic startups and later sold them.

But the reason people want to do humanoid robots for the most part is because the world is kind of designed around this form factor. And you can train on YouTube, you can train on videos, people do all the things. I personally don't think that's given the AI quite enough credit.

Like, AI can learn, you know, through simulation and through real life pretty quickly how to handle different situations. And I don't know that you need exactly the same number of arms and legs and wheels, which is zero in the case of humans, as humans to make it all work. So I'm probably less

bullish on that, but to be fair, there are a lot of really smart people who are making humanoid robots, so I wouldn't discount it. What about the path of being a programmer? That's where we're seeing with that finite data set, and listen, Google's got a 20-year code base now, so it actually could be quite impactful. What are you seeing literally in the company? The 10x developer is always this ideal that you get a couple of unicorns once in a while, but are we going to see all developers?

their productivity hit that level, 8, 9, 10, or is it going to be all done by computers and we're just going to check it and make sure it's not too weird? Because it could get weird. If you vibe code, yeah. I'm embarrassed to say this. Recently I just had a big tiff inside the company because we had this list of what you're allowed to use to code and what you're not allowed to use to code, and Gemini was on the no list.

Oh, you have to be pure. You can't... I don't know. For a bunch of really weird reasons that boggled my mind. You couldn't vibe code on the Gemini code. I mean, nobody would enforce this rule, but there was this actual internal webpage. For whatever historical reason, somebody had put this up, and I had a big fight with them. I cleared it up after...

a shockingly long period of time. You escalated to your boss. Oh, I definitely told Sundar about it. I want to remind you. I don't know if you remember, but you got super-voting founder shares. You are the boss. You can do what you want. It's your company still. No, no, he was very supportive. It was more like, I was like...

I talked to him, I was like, "I can't deal with these people. "You need to deal with this." I'm beside myself that they're saying we can't. - It's weird that there's bureaucracy in a company. It must be a weird experience to meet the bureaucracy in a company that you didn't hire. - But on the other side of it, I would say, it's pretty amazing that some junior muckety-muck can basically look at you and say, "Hey, go yourself."

No, but I'm serious. That's a sign of a healthy culture, actually. I guess so. Anyway, it did get fixed, and people are using it. So they got fired. That person's working in Google Siberia? No, we're trying to roll out every possible kind of AI. And trying external ones, whatever the cursors of the world, all of those, to just see what really makes people more productive.

I mean, for myself, definitely makes me more productive because I'm not coding. Do you think the number of foundational models, like if you look three years forward, will they start to cleave off and get highly specialized? Like beyond the general and the reasoning, maybe there's a very specific model for chip design. There's clearly a very specific model for biologic precursor design, protein folding. Is the number of foundational models in the future, Sergey,

a multiple of what they are today, the same, something in between? That's a great question. I kind of, if I, I mean, look, I don't know, like you guys could take a guess just as well as I can. But if I had to guess, you know, things have been more converging

And this is sort of broadly true across machine learning. I mean, you used to have all kinds of different kinds of models and whatever, convolutional networks for vision things. And, you know, you had whatever RNNs for text and speech and stuff. And, you know, all this has shifted to transformers, basically. Yeah.

And increasingly, it's also just becoming one model. Now, we do get a lot of oomph. Occasionally, we do specialized models. And it's definitely scientifically...

a good way to iterate when you have a particular target. You don't have to do everything in every language and handle whatever, both images and video and audio in one go. But we are generally able to, after we do that, take those learnings and basically put that capability into a general model. So there's not that much benefit there.
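The fold-back step he describes, taking what a specialized model learned and putting that capability into the general model, is in spirit what distillation does: train the general model on the specialist's soft outputs. A toy NumPy sketch, with linear models standing in for real networks (this is a generic illustration, not Google's actual process):

```python
import numpy as np

# Toy distillation sketch: a frozen "specialized" teacher produces soft
# targets, and a "general" student is trained to match them.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

X = rng.normal(size=(64, 8))               # shared inputs
W_teacher = 0.5 * rng.normal(size=(8, 4))  # frozen specialist
targets = softmax(X @ W_teacher)           # its soft labels

W_student = np.zeros((8, 4))               # generalist starts blank
for _ in range(2000):                      # gradient descent on cross-entropy
    probs = softmax(X @ W_student)
    W_student -= 0.5 * X.T @ (probs - targets) / len(X)

# The gap shrinks as the student absorbs the specialist's behavior.
gap = float(np.abs(softmax(X @ W_student) - targets).max())
print(round(gap, 4))
```

The same logic is why a specialist buys "not that much benefit" long-term: once its behavior can be reproduced by the general model, the separate model mostly saves inference cost.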

You can get away with a somewhat smaller specialized model, a little bit faster, a little bit cheaper, but the trends have not gone that way. What do you think about the open source, closed source thing? Has there been big philosophical movements that change your perspective on the value of open source? We're still waiting on this open AI, open source. I mean, we haven't seen it yet, but theoretically it's coming.

I mean, I have to give credit where credit's due. I mean, DeepSeek released a really surprisingly powerful model when it was January or so. So that definitely closed the gap to proprietary models. We've pursued both. So we released Gemma, which are our open-source, or open-weight, models. And

Those perform really well. They're small, dense models, so they fit well on one computer. And they're not as powerful as Gemini. But I mean, the jury's out on which way that's going to go. Do you have a point of view on what human-computer interaction looks like as AI progresses? It used to be, thanks to you,

a search box, you type in some keywords or a question and you would click on links on the Internet and get an answer. Is the future typing in a question or speaking to an AirPod

Or thinking. Or thinking. Or like, what's the, yeah, and then the answer is just spoken to you. I mean, by the way, just to build on this, it was Friday, right? Neuralink got breakthrough designation for their human brain interface. I mean, that's a very big step in allowing the FDA to clear everybody getting an implant. Yeah, and is it, like, if you could just summarize what you think is kind of the most commonplace human-computer interaction model,

in the next decade or whatever? There's this idea of glasses with a screen in the glasses, and you tried that a long time ago. Yeah, I kind of messed that up, I'll be honest. Got the timing totally wrong on that. Early again. Right, right, but early. There are a bunch of things I wish I had done differently, but honestly, it was just like the technology wasn't ready for Google Glass.

But nowadays, these things I think are more sensible. I mean, there's still battery life issues, I think, that we and others need to overcome. But I think that's a cool form factor. I mean, when you say 10 years, though, a lot of people are saying, hey, the singularity is like five years away. So your ability to see through that into the future is...

- Yeah. - I don't know if this-- - Sorry, just let me ask about this. There was a comment that Larry made years ago that humans were a stepping stone in evolution. Okay, can you comment on this? Like do you think that this,

AGI, super intelligence, or really silicon intelligence, exceeds human capacity and humans are a stepping stone in progression of evolution. - Boy, I think sometimes us nerdy guys go and have a little too much wine and chitter chat. - I've had two glasses. I'm ready to go. - I need just some more for this conversation. - Human implants, let's go.

I mean, I guess we're starting to get experience with these AIs that can do certain things much better than us. And they're definitely, you know, with my skill of math and coding, I feel like I'm better off just turning to the AI now. And how do I feel about that? I mean, it doesn't really bother me. You know, I use it as a tool. So I feel like I've gotten used to it. But, you know, maybe if they get even more capable in the future,

I'll look at it differently. Yeah, there's a moment of insecurity, maybe. I guess so. As an aside, management is like the easiest thing to do with AI. Yeah, absolutely. And I did this, you know,

At Gemini, on some of our work chats, kind of like Slack, but we have our own version, we had this AI tool that actually was really powerful. We unfortunately, anyway, temporarily got rid of it. I think we're going to bring it back and bring it to everybody. But it could suck down a whole chat space and then answer pretty complicated questions. So I was like, okay, summarize this for me. Okay, now...

assign something for everyone to work on. And then I would paste it back in so people didn't realize it was the AI. I admitted that pretty soon. And there were a few giveaways here or there. But it worked remarkably well.
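The workflow he's describing, dump a chat space into the model, summarize it, then assign work, is just two chained prompts over the same text. A sketch with a hypothetical `llm` callable standing in for whatever completion API the internal tool used (the prompts are illustrative, not Google's):

```python
# Two chained prompts: summarize the chat, then turn the summary into
# per-person assignments. `llm` is any text-in, text-out callable.

def run_standup(chat_log: str, llm) -> tuple[str, str]:
    summary = llm(f"Summarize the key threads in this chat:\n\n{chat_log}")
    assignments = llm(
        "Based on this summary, assign one concrete task to each participant:\n\n"
        + summary
    )
    return summary, assignments

# Usage with a trivial echo model standing in for a real one: it just
# returns the first line of whatever prompt it receives.
fake_llm = lambda prompt: prompt.splitlines()[0]
summary, tasks = run_standup("alice: PR 123 is ready\nbob: tests failing", fake_llm)
print(summary)  # → "Summarize the key threads in this chat:"
```

Pasting `assignments` back into the chat is the only "management" step left, which is the joke about management being the easiest thing to automate.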

And then I was like, well, who should be promoted in this chat space? And it actually picked out this young woman engineer who, like, you know, I hadn't even noticed; she wasn't particularly vocal in that group. But her PRs kicked ass. No, no, it was like, and then...

I don't know, something that the AI had detected. And I talked to the manager, actually, and he was like, yeah, you know what, you're right. She's been working really hard, did all these things. I think that ended up happening, actually. So I don't know, I guess after a while you just kind of take it for granted that you can just do these things. I don't know, it hasn't really... Do you think that there's a use case for an infinite context length?

- Oh, 100%. I mean, I don't know about-- - All of Google's code base goes in one day. - Exactly infinite. But sure, you should have access to-- - Quasi-infinite. - Yeah. - Stateful.

And then multiple sessions so that you can have like 19 of these things, 20 of these things running in real time. Eventually it'll evolve itself. Yeah, I mean, I guess if it does everything, then you can have just one in theory. You just need to somehow tell it what you're talking about. But yeah, for sure, there's no limit to use of context. And there are a lot of ways to make it larger and larger. There's a rumor that internally there's a Gemini build that is...

a quasi-infinite context length. Is it a valuable thing? Say what you want to say, Ben. I mean, for any such cool new idea in AI, there are probably five such things internally. And the question is, how well do they work? And yeah, I mean, we're definitely pushing all the bounds in terms of intelligence, in terms of context, in terms of

You name it. And what about the hardware? When you guys build stuff, do you care that you have this pathway to NVIDIA? Or do you think eventually that'll get abstracted and there'll be a transpiler and it'll be NVIDIA plus 10 other options, so who cares? Let's just go as fast as possible. Well, for Gemini, we mostly use our own TPUs. But we also do support NVIDIA and we're one of the big...

purchasers of NVIDIA chips, and we have them in Google Cloud available for our customers in addition to TPUs. At this stage, it's, for better or for worse, not that abstracted, and maybe someday the AI will abstract it for us. But given just the amount of computation you have to do on these models, you actually have to think pretty carefully about

how to do everything and exactly what kind of chip you have and how the memory works and the communication works and so forth are actually pretty big factors. And it actually, yeah, maybe one of these days the AI itself will be good enough to reason through that. Today it's not quite good enough. I don't know if you guys are having this experience with

the interface, but I find myself, even on my desktop and certainly on my mobile phone, going immediately into voice chat mode and telling it, nope, stop.

That wasn't my question. This is my question. Nope. Let's say that again in short bullet points. Nope. I want to focus on this. Definitely. It's so quick now. Last year it was unusable. It was too slow. And now it stops. Okay. And then you tell it, I would like bullet points. That's where I want to go. I don't want to type. I want to use voice. And then concurrently, I'm watching the text as it's being written on the page, and I have another window open, and I'm doing Google searches or second

queries to an LLM or writing a Google Doc or a Notion page or typing something. So it's almost like that scene in Minority Report where he has the gloves, or in Blade Runner where he's in his apartment saying, "Zoom in, zoom in, closer to the left, to the right." There's something about these language models and their response time, which was always something you focused on.

Is there a response time thing where it actually is worth doing voice and where it wasn't previously? Everything is getting better and faster. Smaller models are more capable. There are better ways to do inference on them that are faster. You can also stack them. This is Nico's company, Eleven Labs. It's an exceptional TTS, STT stack. There are other options. Whisper is really good at certain things. This is where I...

I kind of believe you're going to get this compartmentalization where there'll be certain foundational models for certain specific things. You stack them together. You kind of deal with the latency. And it's pretty good because they're so good. Like Whisper and Eleven, for those speech examples that you're talking about, are fucking kick-ass. I mean, they're exceptional.
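The reason stacking separate models now works for voice is mostly a latency budget: the speech-to-text, LLM, and text-to-speech stages have to fit, end to end, under roughly a conversational pause. The stage times below are illustrative guesses, not measurements of Whisper, Eleven Labs, or Gemini:

```python
# A voice turn is usable only if the stacked stages fit a conversational
# budget. All numbers here are illustrative assumptions.

stages_ms = {
    "speech-to-text": 300,   # e.g. a Whisper-class model
    "llm first token": 400,  # time until the reply starts streaming
    "text-to-speech": 200,   # e.g. an Eleven Labs-class voice
}

total = sum(stages_ms.values())
budget_ms = 1000             # roughly where a reply stops feeling laggy

print(total)               # → 900
print(total <= budget_ms)  # → True
```

Under these assumed numbers the stack fits; a year earlier, with each stage a few times slower, the same sum blows past the budget, which matches the "last year it was unusable" experience.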

- Well, wait till you turn on your camera and it sees your reaction to what it's saying and you go, and before you even say that you don't want it or you put your finger up, it pauses. Oh, did you want something else? Oh, I see you're not happy with that result. It's going to get really weird. - It's a funny thing, but we have the big open shared offices, so during work, I can't really use voice mode too much. I usually use it on the drive. - The drive is incredible.

I don't feel like I could. I mean, I would get its output in my headphones, but if I want to speak to it, then everybody's listening to me. It's weird. I just think that would be socially awkward. But I should do that. In my car ride, I do chat to the AI, but then it's like audio in, audio out. But I feel like, honestly, maybe it's a good argument for a private office. I should spend more time like you guys are. You could talk to your manager.

They might get one. I like being out in the bullpen, so to speak. I like being with everybody. But I do think that there's this AI use case that I'm missing, which I should probably figure out how to

try more often. If people want to try your new product, is there a website they can visit? Or something? Or a special code? Now go check... I mean, honestly, there's a dedicated Gemini app. If you're using Gemini just by going through the Google navigation from your search, go download the actual Gemini app. It's kick-ass. It really has the best models. I think it is. And you should use 2.5 Pro. 2.5 Pro. You got to pay, right?

Yeah, you get a few prompts for free, but if you do it a bunch, you need to pay. You're just going to make all this free, right? It's like 20 bucks a month. Yeah, it's great. You got a vision for making it free and throwing some ads on the side? Yeah, one step down in hardware costs, the whole thing will be free. Okay, it's free today without ads on the side. You just get a certain number of the top model. I think we likely are going to have always now top models that we can't supply infinitely to everyone right off the bat. But

wait three months and then the next generation. It seems to me like, if I'm asking all these queries, just having a little sidebar, a running list that changes in real time of things I might be interested in. I'm all for really good AI advertising. I don't think, necessarily, our latest and greatest models, which take a lot of computation, are going to

just be free to everybody right off the bat. But as we go to the next generation, you know, it's like every time we've gone forward a generation, then the sort of the new free tier is usually as good as the previous pro tier and sometimes better. All right. Give it up for Sergey Brin. Thank you.

Okay, thanks, everybody, for watching that amazing interview with Sergey Brin. And thanks, Sergey, for joining us in Miami. If you want to come to our next event, it's the All-In Summit in Los Angeles, the fourth year for the All-In Summit. Go to allin.com/events to apply.

A very special thanks to our new partner, OKX, the New Money app. OKX was the sponsor of the McLaren F1 team, which won the race in Miami. Thanks to Haider and his team, an amazing partner and an amazing team. We really enjoyed spending time with you. And OKX launched their new crypto exchange here in the US. If you love All-In,

Go check them out. And a special thanks to our friends at Circle. They're the team behind USDC. Yes, your favorite stablecoin in the world. USDC is a fully backed digital dollar, redeemable one for one for USD. It's built for speed, safety, and scale.

They just announced the Circle Payments Network. This is enterprise-grade infrastructure that bridges the gap between the digital economy and outdated financial rails. Go check out USDC for all your stablecoin needs. And special thanks to my friends, including Shane over at Polymarket, Google Cloud, Solana, and BVNK. We couldn't have done it without y'all. Thank you so much. Rain Man, David Sacks. And instead, we open sourced it to the fans.

I'm going all in!