
The Man Who Invented Prompt Engineering on AI, AGI & The Future of Humanoids w/ Richard Socher & Salim Ismail | EP #152

2025/2/25

Moonshots with Peter Diamandis

People
Peter Diamandis
Founder and Executive Chairman of the XPRIZE Foundation and Singularity University; noted entrepreneur and futurist.

Richard Socher
Co-founder and CEO of You.com; former Chief Scientist at Salesforce; one of the most cited researchers in AI and often called the father of prompt engineering.

Salim Ismail
Noted expert on exponential organizations, serial entrepreneur, and technology strategist; founding executive director of Singularity University and founder of ExO Works.
Topics
Richard Socher: I believe that, given sufficient funding, I could build a digital superintelligence in a relatively short time. It would require integrating hardware and software resources and making full use of the strengths of existing models. Today's AI models can already satisfy most people's needs; the next frontier is harder tasks such as programming. When measuring a model's intelligence, we should consider multiple dimensions, such as reasoning ability, learning efficiency, and different types of intelligence; relying on a single metric is not comprehensive enough. Open-source AI is developing rapidly and gradually overtaking closed-source AI. In the future, pure foundation-model companies may look more like telecom companies: huge capital expenditure and infrastructure, but limited ability to capture the value they create. You.com builds a trust layer that integrates multiple models and adapts based on user feedback, helping companies use AI without being locked into any one model. I don't think physical manipulation is a necessary condition for superintelligence; human intelligence is diverse, and people can be highly intelligent even without certain physical abilities. When investing in AI startups, we look at the founding team's expertise, product-market fit, and a healthy, virtuous data cycle. I'm very bullish on AI agents, especially for knowledge work, but constraints around privacy and data collection, as well as shifts in how the internet is monetized, will affect how widely agents are adopted.

Salim Ismail: I think Elon Musk's execution is remarkable; his ability to build a large, coherent GPU cluster in such a short time is astonishing. Grok 3's performance may be a bit more modest than advertised, but what it achieved in that time is still incredible. Open-source AI is overtaking closed-source AI, much as happened in the software industry; successful AI platforms will need strong connections to end users and data. My definition of AGI is fairly broad and can be framed economically or academically. I don't understand why people gravitate toward humanoid robots; robots with more capabilities might be more efficient. I'm excited about the quantum computing breakthrough, though the technology applies to a limited class of problems. I'm encouraged by the growing robustness of the crypto ecosystem, despite challenges such as security and user experience.

Peter Diamandis: We are living at the perfect time to explore superintelligence. Most future scientific breakthroughs will come from AI, with humans doing science alongside it. In the next decade, biomedical research could see a century's worth of progress, which could double the human lifespan. The talent exodus from OpenAI is concerning. We may be overbuilding AI infrastructure, but demand for energy will increase. Bitcoin's gains are concentrated in a handful of trading days, so invest carefully.


Transcript


If you were given a couple of billion dollars, you'd be able to build a digital superintelligence. How quickly? I think probably like a year and a half to two years. Richard Socher. Richard Socher. Richard Socher, often called the father of prompt engineering. He's one of the top five most cited researchers in AI. Former chief scientist at Salesforce. Co-founder of the AI-powered search engine You.com.

We're too late to explore the oceans and the world. We're too early to explore maybe different galaxies. We're right on time to explore superintelligence. Why haven't we seen yet a kind of an agentic version of a Jarvis that just watches your tasks? Programming, science, research, that's where the next frontier is for a lot of these amazing models. I cannot believe that we're alive right now. It's like people should realize how extraordinarily lucky we are. Undeniable.

Now that's a moonshot, ladies and gentlemen.

Everybody, welcome to Moonshots, another episode of WTF Just Happened in Tech this week, here with Salim Ismail, Peter Diamandis, and we have AI royalty with us today. Richard Socher is the fourth most cited individual across AI. And Richard, what's the proper way to phrase your domination in being cited? I have over 200,000 citations, invented one of the most popular word vectors,

got neural networks into the field of natural language, invented prompt engineering. That's right. Incredible. And, you know, Richard is the founder and CEO of You.com. We'll get into that a little bit later. His company MetaMind was acquired by Salesforce, and he was the chief scientist and EVP at Salesforce, and a lot more. Salim, welcome as well, buddy.

Good to be here. Yeah, so a lot happening this week in the field of AI and I want to dive into that Richard, get your extraordinary point of view here. I want to start with the launch of Grok 3. If I had to sort of like

tier all of the activities that have just occurred. And I want to contextualize it on the notion that it wasn't very long ago that Elon raised $6 billion. I was, you know, full disclosure, an early investor in xAI. And he announces he's going to create the largest

GPU cluster on the planet, make it coherent. And he does that in 122 days and blows people away. Were you shocked, Richard, at how fast he built what he did? Elon executes. And with $6 billion, you know, you can do a lot of damage in AI. I mean, we've seen companies like DeepSeek and that hedge fund build amazing models with much less. So, yeah,

In some sense, it's amazing and it is surprising how quickly they got that far. But in some ways, you can expect some of these with exponential technologies like AI, enough resources, you can go hard pretty fast. Yeah, my standard phrase is don't bet against Elon. I just saw him last week in Miami. I was there for the FII Summit.

And the guy does execute. He's got an incredible team. So I'm curious about how you're benchmarking Grok 3. You know, apparently it's outscoring, you know, ChatGPT, Gemini, DeepSeek. How do you rank it as an AI engine? So...

So we actually have Grok 2 already within You.com too, and it's a popular model, though there are others that are even more often chosen by our users. I think what's interesting is, you know, Sam Altman also talked about how the next generation of models are going to be almost at the level of a PhD student.

But what we notice is that not many people are PhDs and have PhD-level questions in their lives. So for more and more people, I think we've reached a level of informational needs and knowledge needs that is good enough for them. So now you kind of push harder on really hard tasks like programming. We've seen some exciting announcements today of Anthropic's new 3.7 model.

I think programming, science, research, that's where the next frontier is for a lot of these amazing models. Salim, what have you been hearing on the ground? I'm hearing Grok 3 is incredible, but the outperforming all other AI models seems to be a little bit more hype than reality. I think it's coming in, as far as I can see, when I scan Twitter or X, a little bit lower than them, but still unbelievable that he's been able to achieve this in...

such a short period of time. I'm fascinated, Richard, when you, because you guys do like federated AI, because you have access to many models, right? So I'm really interested in hearing more about your model and what you guys are doing. But just on the Grok 3 thing, I think for me, the biggest thing is his ability to achieve coherence across such a large cluster. That part blew my mind, because as far as I could see, every AI expert said you can't do it.

And Richard, I'd love to get your kind of take on that piece of it. Yeah, I think not many people have been able to set up a big cluster that quickly. I think in many ways that is a combination of hardware and software. And a lot of folks like me are more software people. And a lot of AI folks have been spending most of their time in software. And so I think it kind of speaks to his ability to work in both hardware

and software, just sort of where he comes from, much more of Tesla and SpaceX and such. But now moving into scaling that up and actually getting all the software components. At the same time, of course, there are companies like Anyscale and others that are making it easier and easier to deal with massive clusters. Anyscale lets you

scale from five GPUs to 5,000 GPUs within a few lines of code. And so the layers of abstraction are going higher and higher. And, you know, thanks to AI, we're all partially operating at higher levels of abstraction.
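To make that "few lines of code" point concrete, here is a minimal sketch using the open-source Ray library, which Anyscale builds on; the task function, GPU counts, and the idea of sharded work are illustrative assumptions, not anything described on the show.

```python
# Minimal sketch: fanning a GPU task out with Ray (the open-source library behind Anyscale).
# The work function and resource counts are illustrative placeholders.
import ray

ray.init()  # starts locally; pointing this at a cluster address scales the same code to many GPUs


@ray.remote(num_gpus=1)  # ask the scheduler for one GPU per task (requires GPUs to be available)
def run_shard(shard_id: int) -> str:
    # placeholder for real work, e.g. evaluating or fine-tuning on one data shard
    return f"shard {shard_id} done"


# The same code fans out to 5 or 5,000 tasks; only the range (and the cluster size) changes.
results = ray.get([run_shard.remote(i) for i in range(5)])
print(results)
```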

I'm curious about how people can evaluate these against each other. At the end of the day, I think about human IQ tests as an interesting metric to evaluate them. You know, I was fascinated when Claude 3 came out at an IQ of 101, and then, was it GPT-1 or GPT-3, came in at an IQ of 120. And I've been wondering about when we'll see something coming out at an IQ of 150.

Is that a relevant measure? You know, IQ has a lot of different dimensions. I think intelligence overall has a lot of different dimensions, which we briefly talked about in our FII conference conversation. I don't know if it makes sense to just boil it down to this one number. I think even the Turing test is essentially broken, in the sense that the best way to fail the Turing test is to answer questions so much better than a human could. Like, write me an app in 30 seconds, and then if it can do it,

It's an AI. If it can't do it, it's human, right? So it's like there are many ways that we measured intelligence that are broken. And I'm working on helping the world kind of structure that measurement a little bit better by understanding sort of what the dimensions are of intelligence and if there are upper bounds to some or if it can just keep on growing. So, you know, it's interesting, right? So you're providing access to large corporations across most of the AI models. How many AI models do you have on you.com? Like 40 plus. Yeah. 40 plus. Amazing. Yeah.

If you were going to, just for people to get a sense of the largest and most powerful models out there, what's your list of the top five or thereabouts? You know, you can't ignore OpenAI still. A lot of folks want to use OpenAI, and especially o1 and o3 are quite popular. We have a lot of fraud too. People trying to trade accounts and then just make us into a free API and then make...

like 10,000 calls in one hour and you're like, no one can read that. This is clearly a bot attack. That happens all the time. Sonnet is still very, very popular too. Sonnet 3.5 and I'm sure 3.7 now. - This is Anthropic, yeah. - Yeah, Anthropic's model is probably one of the best models for programming still.

So we actually have our own models, which are just fine-tuned open-source models. And then we also federate and ask different models depending on where people give the most positive feedback, given the intent that they have. So we classify the intent. Is it a programming intent? Is this a history or a medical intent? And then we route it to different models.
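As a rough illustration of the federation Richard describes, here is a minimal sketch of intent-based routing; the intent labels, model names, and keyword classifier are hypothetical stand-ins, not You.com's actual system.

```python
# Hypothetical sketch of intent-based model routing; not You.com's actual implementation.

# Routing table: which model has earned the most positive feedback per intent (assumed values).
ROUTES = {
    "programming": "model_a",
    "medical": "model_b",
    "history": "model_c",
}
DEFAULT_MODEL = "model_general"


def classify_intent(query: str) -> str:
    """Toy keyword classifier standing in for a real intent model."""
    q = query.lower()
    if any(k in q for k in ("bug", "python", "compile", "function")):
        return "programming"
    if any(k in q for k in ("symptom", "dosage", "diagnosis")):
        return "medical"
    if any(k in q for k in ("empire", "war", "century")):
        return "history"
    return "general"


def route(query: str) -> str:
    """Pick a model for the query; user feedback would update ROUTES over time."""
    return ROUTES.get(classify_intent(query), DEFAULT_MODEL)


print(route("Why does my Python function not compile?"))  # -> model_a
```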

It changes, actually. The most surprising thing, maybe, is how often it changes, and how much mindshare DeepSeek also got in such a short time with not much of a marketing budget. So that was a very popular model for quite some time.

Everybody, Peter here. If you're enjoying this episode, please help me get the message of abundance out to the world.

We're truly living during the most extraordinary time ever in human history. And I want to get this mindset out to everyone. Please subscribe and follow wherever you get your podcasts and turn on notifications so we can let you know when the next episode is being dropped. All right, back to our episode.

Let me head to the next slide here. So, Grok 3 benchmarks versus the competition. And here are the numbers. So these benchmarks, are they relevant and valuable? I'm curious because everyone wants to know how fast they're progressing. This is on reasoning and test-time compute.

Richard, how do you view this? Yeah, I think there are two interesting insights here. Indeed, most normal people don't have crazy hardcore coding, science, and math questions every day in their life. So this is where we push science forward, like I just mentioned earlier. And that's where that frontier is really exciting. The other really interesting bit here is that we're looking at test time compute. And so it doesn't even make sense anymore to think about a

single model's intelligence, because it turns out there's some fun research that came out where you just say, wait before you answer this and give it some more thought. The same model actually does better and gives you more accurate answers.
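A hedged sketch of that test-time-compute idea: ask the same model several times with a "think before you answer" prompt and keep the majority answer. The ask_model stub below is a placeholder for any real LLM API, and self-consistency voting is just one common way to spend extra inference compute, not necessarily the exact method in the research Richard mentions.

```python
# Sketch of spending extra test-time compute via self-consistency voting.
# ask_model() is a placeholder for a real LLM call; swap in your API of choice.
from collections import Counter
import random


def ask_model(prompt: str) -> str:
    """Stub standing in for an LLM API call; returns a noisy 'answer'."""
    return random.choice(["42", "42", "41"])  # pretend the model is usually, but not always, right


def answer_with_more_thought(question: str, samples: int = 5) -> str:
    prompt = f"Take your time and reason step by step before answering.\n\nQuestion: {question}"
    votes = Counter(ask_model(prompt) for _ in range(samples))  # more samples = more test-time compute
    return votes.most_common(1)[0][0]  # the majority answer tends to be more accurate


print(answer_with_more_thought("What is 6 * 7?"))
```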

So speed is becoming kind of a dimension of intelligence, obviously overlapping with a lot of other kinds of intelligence. And the faster you have to be, the less intelligent your answers are from these models. And so what that also means is we may not have to worry about sort of AI running away in open source, because you're going to have to have a lot of

compute even at test time if you want to get the smartest possible answers from these models. So lots of interesting insights. Salim, any questions for Richard? I've got a big one, which is, you know, as we move towards AGI, I struggle massively when people say AGI, and what the hell does that mean even? And so I'd love, you're one of the few people that I think could give a cogent answer: how do you define AGI? And if we achieve it, how will we even know?

And you just put out a tweet, Richard, that I found interesting, that said something like if you were given a couple of billion dollars, you'd be able to build a digital superintelligence. How quickly?

I think probably like a year and a half to two years. Was that a call for funding? Everybody, listen, give me $2 billion and I'll give you your digital superintelligence. Yeah. I mean, you know, I miss going on the research side, going hard, you know, when you build the products and you make revenue, it's amazing. It's very meaningful. But I think there's still a couple of ways that the community is stuck on in terms of research where we can really push it forward. I think in terms of AGI,

Indeed, the definitions are so broad, right? Some folks say, well, it's 80% of work can be automated. And that's a very pragmatic way of just, you know, sort of financially defining intelligence. Of course, I would say that maybe 80% of all work is digitized,

and then, you know, maybe 80% of all those workflows can be automated, and that's already a huge amount of GDP. And that could be a reasonable financial definition of intelligence. But of course, if you're sort of more academically inclined, you have to acknowledge that there are certain kinds of intelligence and types where you want to really be able to

really get faster at learning too. Like, humans are able to, just with one or two examples, learn something. So we call this sample efficiency, right? And if you're really that intelligent, you should be able to learn with much less data along certain dimensions. And so I think as we want to define it really properly, we're going to have to go into the different types of intelligence: visual intelligence, language, reasoning, mathematical reasoning. There's some type of social intelligence too, even among AIs: what actions

could I take to modify your internal state in order to influence your actions? So there are these different dimensions of intelligence. Knowledge is a good dimension too, which is quite unbounded. We can learn more and more about the universe, though eventually you

end up hitting sort of physics-based boundaries of how much knowledge you can accumulate, based on the speed of light going around the different sensors that you may have. So the full definition probably takes too much time here. But a financial, pragmatic one, of just like we automate a lot of digitized work, seems reasonable. And what's your view of going into the physical realm? For example, Wozniak, his test is, can you make me a cup of coffee?

And now you're getting into robotics. Or the other one I've heard is, can you take an IKEA box and put the piece of furniture together? Now you're getting into physical manipulation, which really is one of the core rationales for intelligence. Do you go into that world, or do you stay on the digital side, because you can boundary it more easily? I actually think that physical manipulation is another dimension of intelligence, or group of dimensions. And at the same time,

A deaf person can be very intelligent. A blind person can be very intelligent. A person that's paraplegic can be very intelligent, even though they can't manipulate matter. So I think we have to accept the fact that none of those are necessary capabilities for a superintelligence. You can have a superintelligence that is

purely digital, and it's just different to our intelligence. And I think people who insist, oh, you've got to have a bunch of fingers and move around, maybe they just haven't read enough sci-fi, or aren't creative enough in their definitions of intelligence. At the same time, I'm loving the humanoid robots.

The tricky bit is that oftentimes we use robots when we want to do certain things many, many times, very efficiently and very quickly. Like washing the dishes or vacuuming the carpet. Exactly, which then becomes a simple robot, a Roomba or a dishwasher.

And then we call it a dishwasher. And then we call it a vacuum. We give it a specialized name. You know, Salim, you and I have had this debate a bunch. And I'm curious about your opinion still, and Richard's, which is the whole open versus closed

AI debate. And do you feel like open is gaining on closed? And is that a definitive future? Undeniable. Undeniably, open source is gaining. When you have this much excitement around something, and it is a product and experience that any normal person can appreciate,

there's so much energy that goes into open source that it is very hard to compete with that in the long term. The more niche you are, the more technical it is, the fewer people can appreciate using that technology. Let's say you do ion thrusters for satellites. No one's going to build an open-source model for that with millions and millions of dollars and that excitement.

It's undeniable with DeepSeek that it's been catching up. And I'm hoping we can build one system eventually where, almost like Wikipedia, people can contribute to it. No one does that. I'm going to have to do that at some point. I have the same view. We saw this in the software world when you had Microsoft running its internet server,

and then you had open-source web servers, and the open-source web servers just absolutely took over. Now 99.9% of all web servers are open source. And therefore, over time, that will always win. So my question then is, OK, we're going to be heading towards open source. Got it. We still have a number of closed-source companies. Are they eventually going to go open source?

Is there a winner-take-all scenario here? I think there's a good chance that if you're a purely foundational model company, you're going to look more and more like a telco, like huge CapEx, very expensive to build, creates a ton of infrastructure that creates value, but it's unclear you can capture all that value yourself. Thank you for using that analogy. And I think that's the perfect analogy here. So we're commoditizing and demonetizing technology.

all of this stuff. I mean, if you look at the demonetization curves in terms of the cost per transaction, it's just this rapid de-escalation. So how do you rationalize it? So in the telco space, right, folks need to realize we had a massive amount of bandwidth being built out in terms of fiber, in terms of cable, in terms of

3G, 4G, 5G, and all of the value was captured not there, but captured on YouTube, captured on Netflix, captured on apps on top of that.

And so how do you think about that, Richard? Yeah, you can't build an Uber without internet everywhere, you know, but Verizon doesn't get a cut of Uber. Yeah. So I think that is why at You.com, we haven't spent a ton of money on training models from scratch. And we've built a trust layer on top that sort of professionalizes this so that companies can really use that technology. And I do think

more and more, thanks to DeepSeek, our existing and now new customers are realizing, oh yeah, we should partner with someone like you, because a new model comes out in two months and I'm stuck on a one-year contract with one of the closed-source companies, and now I can't benefit from it. That makes a ton of sense, because there's continuous competition and it's a race down to the bottom.

And if you become stuck with a particular model, you have no guarantee that you're going to be using the most efficient, lowest-cost model. Yeah, we call it future-proofing. So what does a trust layer mean for You.com?

A trust layer is highly connected to data and to helping people actually train on how to use the technology. So we do certifications so everyone can become a manager of their AIs and of their agents. And we incorporate not just public data better than anyone else, because

we've been doing it longer than anyone else, but we're also incorporating company-internal data. And so then you can actually start to trust it. And then when you click on citations on You.com, especially in our more advanced research modes, you will actually get sent directly to the quote, and the browser will scroll down and highlight: oh, this is where I found this fact. So you can very quickly build that trust with them. We taught our models to say,

I don't know. A lot of models, if they don't find the information somewhere on the web, will just make up something; we tell ours, don't do that. So there are a lot of different moving pieces to making it more accurate and building that trust.

You know, Salim, you and I have talked about, when we're advising companies and investors about investing in AI, it's like: invest in companies that have a great connection with their end customers and with data.

And then assume that the layer in between is just going to constantly, you know, flip over; replace it, get the latest, lowest-cost model. But it's the relationship with the customer base and customers

and the data sets. Yeah, I think this is going to be key to success in AI platforms, right? And I think, Richard, it sounds like you at You.com have done an amazing job of creating that layer of abstraction that protects people from the underlying thing. Because otherwise, one of the huge questions everybody has, as we talk, both Peter and I, yourself, we talk to CEOs around the world, is when do you place your chips? Because the minute you put your chips down on a particular model, it's out of date in three months.

And so therefore, you really need platforms like You.com to help with that. And I think it's fascinating to see what you've done there. Here's another article from the New York Times. For those listening to the podcast, not watching, it says OpenAI uncovers evidence of AI-powered Chinese surveillance tools.

So, of course, we've had this entire, you know, incredible back and forth with TikTok. And now we potentially have it as well on DeepSeek. What are your views here, gentlemen? I'm not surprised. How would it not be the case, would be my question back to you. And then all these companies have downloaded, you know, DeepSeek and put it into their systems.

But if you download the model and are utilizing it in isolation, is it still reporting back information that it's gathered?

So you can take the open-source model and still force it to take stuff from a prompt and from a search engine backend. So that is possible. And you can actually also fine-tune the model to get rid of all the CCP alignment. Fascinating. All right. Our next story here is, and I love this, accelerating scientific breakthroughs with an AI co-scientist.

You know, I love the fact that we saw the Nobel Prize going to Demis and John Jumper for the creation of a model able to predict the folding of a protein.

My expectation, Richard, and you're both a deep scientist and a deep programmer, is that almost all breakthroughs are going to come from AI in the not too distant future. And we'll attach it to a human so a human can get the Nobel Prize. But it's going to be fundamentally in materials and mathematics and science and medicine. Am I wrong there?

100%. Yeah, I'm writing a book on this, sort of in my nights and weekends, on AI for science. It's called The Eureka Machine, as the working title, and I'm a big believer. Interestingly enough, also, when you ask a lot of folks all over the world

about areas where they're scared of AI, most folks are scared that it takes their jobs. But in terms of science and medicine, no one wants more jobs; they just want more breakthroughs and cool discoveries. So everyone worldwide is aligned: let's just have AI do a lot of science. So there's a lot of positive momentum behind it, and I think we'll see more and more discoveries

first with the help of AI and eventually you mostly guide it, right? You need to kind of tell the AI, this is what we care about the most. And then it can go off and do more and more in an automated fashion. This is the area that I'm most interested in because I think there's just so many, if you provide it with data sets and go formulate 5,000 hypotheses and start testing them, it can do virtual testing of all sorts of things. And I'm incredibly excited as to what's going to come from this.

I love this last bullet here. It says replicated 10 years of antibiotic resistance studies in just 48 hours.

Um, Dario was at Davos, Dario, the CEO of Anthropic. And he said something which I clipped, which I love. He said, listen, we're going to see a century's worth of biomedical research in the next five to 10 years. And one could imagine that during that century of biomedical research, we would potentially double the human lifespan. And so it's not unlikely it could double the human lifespan within the next decade. So I'm always listening for those signals because, you know, that's like,

I'm in it to win it on doubling the human lifespan. And then we'll negotiate where we go from there. We saw Larry Ellison, when he was on stage at Stargate, announcing the idea that we're going to have, you know, personalized mRNA vaccines against your cancer, should you have it.

And so for me, this is like one of the most extraordinary areas of reinventing medicine, curing cancer, curing viral infections, curing death, perhaps. Who knows? Yeah, I think a lot of people now say, oh, like Brian Johnson and the longevity folks, that's a bad idea. I think, one, most of those people are healthy and aren't currently battling anything. And two, they're just like people before the birth control pill came out.

They're like, oh, that's not natural. I'm like, yeah, you know, there's a lot of bad stuff that's natural; murder is natural, and no laws are natural. It's just animal kingdom stuff. And so there's all kinds of bad natural things. And humanity has been pretty good at improving from that natural state. And I think it lacks a certain creativity when people think we can't ever solve aging and health spans and things like that. So we, in 2018, started the largest project for a large language model for proteins.

And we actually published that paper when I was still at Salesforce. And we've had incredible success. In fact, we believed in it so much, we worked with wet labs and actually synthesized those proteins.

And they were 40% different to naturally occurring proteins. And just to put that into perspective, Frances Arnold, about four years ago, won a Nobel Prize for what she called directed evolution, which was random permutations with a lot of experimental science in the loop. And then saying, oh, this random permutation improved this particular property. So let's keep this and then keep iterating. By the end of her very long process, those proteins were 3% different to naturally occurring proteins.

And ours were 40%. And what taught us that we actually captured the syntax, the grammar of these proteins was that they folded properly and they had the properties we predicted them to have and we wanted them to have. And so there's a lot more work that comes from this. A bunch of startups have already started. And once you understand the language of proteins, all the medicine will follow. This goes back to, Salim, your point about AI.

interfacing with the physical universe. Right. So another friend, Alex Zhavoronkov, the CEO of Insilico Medicine, was very early in generative AI and drug discovery. One of the things he's done is build a massive robotic laboratory where he can basically have the AI come up with experiments

and run those experiments, you know, a hundred times faster than humans, get the data, iterate the experiment, run the experiment. And so you literally create a theoretical world and a physical world. I find that extraordinary.

I think we're going to see hundreds of examples like this where people now, the only limit is our imagination and how fast we can apply some of these because the speed of the technology is now at a level where we can pretty much go down any avenue we want. Me personally, I'm looking for how do you reconcile quantum mechanics with relativity as a physics major. That's my thing. And I think AI will be able to figure it out. Yeah, I'm...

I cannot believe that we're alive right now. It's like people should realize how extraordinarily lucky we are. I don't think this is, you know, every generation feels like they're alive during the most extraordinary time, whether it was, you know, at the beginning of flight and electricity and the internet and so forth. But I think we're alive.

I think we're better. - This is it. - We're too late to explore the oceans and the world. We're too early to explore maybe different galaxies, but we're right on time to explore super intelligence, yeah. - For sure. The other area besides medicine is material sciences. So we just saw MatterGen out of Microsoft.

Right. Talk about prompt engineering, my friend. Your prompt engineering has now gone to a completely different level: please design me a material that is superconducting, that includes these elements, that is this cost, that can be manufactured. You know, it's like crazy. It's like insane.

If we get like a room temperature, normal pressure superconductor from that, it would be world changing. And I'm very, very excited for that. The nice thing about chemistry is that unlike biology, you can iterate even faster, right? There's no living tissue or you don't have to run FDA trials and so on. You can just iterate even quicker in that loop. And, you know, Salim, you and I have always said material sciences is at the foundation of everything else. And, you know, we consider material scientists heroes in our world.

All right, Salim, what do you think about this one? Satya Nadella on quantum breakthroughs, quote, we believe this breakthrough will allow us to create a truly meaningful quantum computer, not in decades, but in years. I think Google, now Microsoft. Yeah, I think this is beyond huge. I think as we get to this, we have to keep in mind the limitation that quantum computers are only good for certain classes of problems.

So there's that limitation. But the fact that you can create stable environments is really something huge. I go back to Helmut's comment that the existence of a quantum computer... Hartmut Neven. Hartmut Neven, yeah. His comment that it gets very kind of metaphysical very quickly, because he said the existence of a quantum computer may be proof of a multiverse.

And your head kind of just breaks right then. So, Richard, I'd love to get your take on this because you're crossing both of these areas. He goes a step further, right? He says the only way quantum computers can do all of the calculations as rapidly as they do is that they're borrowing resources from a near infinite number of adjacent universes. Yeah, we're doing the computation in parallel universes and bringing the answer back. I love it. At which point they're going to be pissed when they find out we're stealing their resources. So there's that to think about as well. Hey, Richard, what's your view on all of this?

I'm super excited. I think anything you can simulate, any domain you can simulate, AI can solve pretty much every problem in that domain. It's just a matter of time and whether humans want to put that effort in. So you can simulate Go, you can simulate chess.

So chess is obviously solvable by an AI, because AI can learn in two ways, right? Either imitation or exploration, aka supervised training and fine-tuning, or reinforcement learning. And so when you can allow a simulation to just train and try billions and billions of things, it can get smarter over time. What quantum computers will enable us to do, once we scale them up,

is to simulate much more of physical reality. My favorite science influencer, Sabine Hossenfelder, put a little bit of a damper on this particular announcement, saying, oh, you know, we'll see if they really can scale it. But I'm very excited. I'm excited that there are different ways of approaching that, you know, like the trapped ions, the neutral atoms. It's interesting. You hear a lot of quantum

scientists kind of diss the other approaches and think their approach is the best. And then comes this total left-field one, these topological qubits that no one had been working on. And I just love the fact that there's this energy and that, you know, in some ways we have companies that have such a massive monopoly in their space that they have all these extra resources to do 17 years of research before something comes out. Amazing.

Honestly, thank you to Google and Microsoft for investing in this direction, because there was no immediate return. You know, we saw Hartmut Neven's latest. Remind me what his breakthrough was a few months ago. You know, it was announced that the larger the number of qubits, the more stable it became. Yeah.

And Majorana, is that how you pronounce it? Majorana one? Majorana, yeah. Yeah. Incredible. It was about 13 years ago. I had my two kids, my two boys. And I remember at that moment in time, I made a decision to double down on my health.

Without question, I wanted to see their kids, their grandkids. And really, you know, during this extraordinary time where the space frontier and AI and crypto are all exploding, it was like the most exciting time ever to be alive. And I made a decision to double down on my health. And I've done that in three key areas. The first, to be able to live,

is going every year for a Fountain upload. You know, Fountain is one of the most advanced diagnostics and therapeutics companies. I go there, upload myself, digitize myself, about 200 gigabytes of data that the AI system is able to look at to catch disease at inception. You know, look for any cardiovascular disease, any cancer, neurodegenerative disease, any metabolic disease.

These things are all going on all the time and you can prevent them if you can find them at inception. So super important. So Fountain is one of my keys. I make that available to the CEOs of all my companies, my family members, because health is a new wealth.

But beyond that, we are a collection of 40 trillion human cells and about another 100 trillion bacterial cells, fungi, and viruses. And we don't understand how that impacts us. And so I use a company and a product called Viome. And Viome has a technology called metatranscriptomics. It was actually developed

in New Mexico, the same place where the nuclear bomb was developed, as a biodefense technology. And their technology is able to help you understand what's going on in your body, to understand which bacteria are producing which proteins. And as a consequence of that, what foods are your superfoods that are best for you to eat?

Or what foods should you avoid? Right. What's going on in your oral microbiome? So I use their testing to understand my foods, understand my medicines, understand my supplements. And Viome really helps me understand, from a biological and data standpoint, what's best for me. And then finally, you know, feeling good, being intelligent, moving well is critical, but looking good, when you look at yourself in the mirror,

saying, you know, I feel great about life, is so important, right? And so a product I use every day, twice a day, is called OneSkin, developed by four incredible PhD women who found this 10-amino-acid peptide that's able to zap senescent cells in your skin and really help you stay youthful in your look and appearance.

So for me, these are three technologies I love and I use all the time. I'll have my team link to those in the show notes down below. Please check them out. Anyway, I hope you enjoyed that. Now back to the episode. All right, let's go on to our next topic here. So Microsoft dropped some AI data center leases. So cancellation of U.S. data center leases raised concerns about AI infrastructure overcapacity and shifting partnerships.

The moves sparked industry reactions, including in European energy stocks. So there's been a lot of build-out. This ties directly to energy as well. I keep on hearing-- and Richard, I'm curious, and Salim, your point of view-- that there is an open checkbook for building out capacity

and building out energy. We're seeing small modular reactors, SMRs, this is fourth generation nuclear, setting up next to these. We're seeing, I mean, you know, I don't get into politics here, but Trump is like drill, baby, drill. You know, it's like we need as much energy as we can in the U.S. to support this industry. Are we overbuilding or are we not even close?

I believe we're overbuilding. You think we're overbuilding? Yeah, I believe we are. I'll tell you why. Because, you know, you look at, say, DeepSeek and the massive breakthrough for a much smaller cost, right? The incremental effort to create the next generation is dropping 10x every time we go through this.

And therefore, we should get to a point where training can be done very inexpensively and then you've spent a lot more time on inference. And therefore, the amount of build out is exaggerated because it's aiming for a model or size of model that was there six months ago when you started the building.

And that will not be the case when you finish the building. So that demonetization aspect of it, I don't think is being taken into account. They're building for the capacity they think they'll need given the projections without realizing that those projections will be wrong.

That's my general complaint. Richard, you may have some more specific counterpoints. I mostly disagree. Excellent. I've been talking for over a year about Jevons paradox, and a lot of other folks have recently picked it up, right? When we make things more efficient, we actually will use more of that resource. And I think we're seeing that play out with intelligence.

And so we'll just use intelligence in more and more places. Everyone will have a personal assistant, a personal health team, a personal tutor, and we'll just use all of that. On top of that, there are so many things on the energy side, and I can talk about that forever, but a lot of human problems are related to not enough energy. So even when people say, oh, there's a shortage of water,

There's obviously no shortage of water. It just happens to have too much salt in it, which is an energy problem. So all these water fights that are going on, it's like, well, if you have more energy, just desalinate ocean water and problem solved. There are all these deserts that you can't live in right now because there's not enough water. Well, with enough energy, those problems go away too. So my hunch is we're going to find a lot of uses for that energy. Now, where I do agree with you on that one small bit is, when you build a lot of data centers, you also need to have data that actually goes into those data centers.

And you don't want to have like a real estate crisis where you build a lot of buildings, but people don't move into them. And so I do think, you know, I have some ideas on how to fix that. But my hunch is data will increase, energy needs will increase, and intelligence will get cheaper and cheaper. But we'll just use more of it everywhere.

So let me distinguish between energy needs, of which I think we need lots of energy, right? And specifically data centers, which apply that energy in a particular way. I think we'll need less than people think of that, but we definitely will use all the energy we can for desalination of things. So yeah, I think we're kind of generally coming to agreement there. Before we get into thinking machines here in this article from TechCrunch about Miro's new startup,

I am curious. I mean, over the last year, we've seen this constant flow of the leadership of OpenAI out of OpenAI, which is concerning. I mean, I'm not an investor in OpenAI. If I were, I'd be very concerned. What do you think is going on there? I'm curious.

Open question, either of you guys. I think the doors are very open. I think that the basic general thing is if you get to that level and you're suddenly the hottest property executives or deep researchers in OpenAI, you can essentially go follow your passion and

go find your MTP and go build something, like what Mira's doing, or any of the other raft of people. Some may be interested in healthcare and the specific application there, and they can now have the currency to go do that. I think a lot of it has to do with that, and a secondary layer is the speed and

move-fast-and-break-things approach that Sam has for how to build stuff, which is concerning a lot of people. Then you've got the third class of people who are really nervous that we're moving this quickly without adequate wisdom and thought as to what we're building here.

And I'd be curious, Richard, as to what your reaction is to the emphasis on those two or three different areas. I think at a very high level, zooming out a little bit, the fact that California doesn't have non-competes, and the rest of the U.S. is actually moving towards that, you know,

as in, non-competes are not enforceable in California, is tough for companies. Very often research costs a lot of money, but once you show the world that something is possible at all, it's much, much cheaper to copy it. And it's also much easier, knowing how you've done it in one place, to go and take that knowledge, without taking any code; that stuff's in your head, and then go do it cheaper somewhere else. And honestly,

overall, for the ecosystem, it's a positive thing, where we're just going to see cheaper, better, faster models. So let's talk about Thinking Machines.

Any clue about what Mira is going to focus on? So I guess lots of smart people joined her. John Schulman, who led the ChatGPT application of the LLMs that had been available as APIs before. We had already incorporated them, before ChatGPT came out, inside You.com in a search-engine-like context. And so having some amazing folks that really understand the technology and also have ideas for building products is probably...

a very positive thing. And I mean, she describes, and they describe, a lot on their website. My hunch is they're going to try to explore. I hope they don't just build another LLM. I think there's so much more stuff out there, but yeah, we'll see. Yeah. What I find fascinating, and I'm curious about your point of view here, is I think her starting valuation is $30 billion. Yeah.

I mean, it's crazy. Everything's gone up. Everything's gone up. Yeah, it used to be millions of dollars and now it's billions of dollars. I don't know how quickly you can justify and monetize this stuff. I think we're headed for a pretty big bubble as we get to the application side of this. Because when you get to the end user, it's demonetizing so quickly that...

Where's the revenue will be the big question over time. - I think putting my investor head at AIX Ventures on for a little bit, the way we think of this is that it's essentially seed stage risk combined with late stage returns. And so as an investor that expected value just doesn't quite work out, but it doesn't mean that no one will succeed, right? It's just seed stage risk. Once in a while, every five, 10% of seed stage companies

actually do something amazing. And like one or two of those in the power law, as an investor, really blow out and return the entire fund multiple times. And so there are a few such possibilities, but man, it's really tough. And the bar is so high to be able to get enough revenue to eventually

be able to justify these high valuations. So can I just riff off that for a second? Richard, when you're kind of trying to invest in AI startups, right? You've got to figure out, A, does the founder or the team have something really magical? And B, can they get to market? And can they find product market fit? And that's a big, big challenge today. How do you guys assess that? And can they get revenue? Yeah.

Can they get revenue? Or do you invest in stuff that has a massive breakthrough and has the potential, and you hope that the potential yields? Where do you put your chips on that? Yeah, so we've been doing really well. Fund One's already at like 5x TVPI, and it's only like four years old or so. And so there are sort of two ways to slice and dice it. One is horizontal: the

new infrastructure layer, right? And in that you have companies like Hugging Face. I was very fortunate. They were my students when I was a professor at Stanford; I invested at a $5 million valuation round. They're at $4.5 billion now. So there are a few of those that can break out and really become part of this new stack of building software that is fundamentally different with AI.

And Cursor, a similar one, we're investing in Qoom 2. The Cursor CEO is actually an intern of mine. I was really bummed I didn't get to invest in that one. And so then there are thousands of application companies vertically that are sitting on top of this new stack.

And so there we look for deep industry insights and deep AI expertise, like teams that actually understand my buyer will want this feature and they don't just sort of go off and try a bunch of different things and spend a lot of money. Are proprietary data sets something that you look for or find exciting in all of this?

The best companies will have what I call virtuous data cycles, at least. If they don't have a direct data access already, they're building a product that as you use the product, you collect more data. One of the reasons why Tesla is much better suited and why we've seen a lot of self-driving car startups die is that they have to

pay for every mile driven by a human to collect data, versus with Tesla, we all drive the car and we give the data for free; we actually pay to drive the car that collects the data. Right. And so that is a perfect example of a virtuous data cycle. And you see that in various SaaS software, like You.com. People give us feedback, like, this was a good answer, this wasn't a good answer, I didn't like this part, and so on. And so those are sort of ways where you can build some kind of advantage over time.
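A toy sketch of that virtuous data cycle in software terms: log user feedback per intent and model, and let the aggregate steer future routing. The field names and scores below are invented for illustration, not a description of any real product.

```python
# Toy sketch of a virtuous data cycle: user feedback accumulates and improves routing.
from collections import defaultdict

# (intent, model) -> [thumbs_up_count, total_count]; populated as users rate answers
feedback = defaultdict(lambda: [0, 0])


def record_feedback(intent: str, model: str, thumbs_up: bool) -> None:
    stats = feedback[(intent, model)]
    stats[0] += int(thumbs_up)
    stats[1] += 1


def best_model(intent: str, candidates: list) -> str:
    """Pick the candidate with the highest observed approval rate for this intent."""
    def approval(model: str) -> float:
        up, total = feedback[(intent, model)]
        return up / total if total else 0.0
    return max(candidates, key=approval)


# Simulated usage: every answer a user rates makes the router a little smarter.
record_feedback("programming", "model_a", True)
record_feedback("programming", "model_b", False)
print(best_model("programming", ["model_a", "model_b"]))  # -> model_a
```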

So I get two things out of this: one, Elon owes us money. And number two, to be really successful in AI, be Richard's intern at some point. That's right. All right, guys, that was fun. And I think the other side of AI is one of my favorite topics: humanoid robots. I was building robots when I was in junior high school, but they didn't do what the robots today do. So I'm

I'm going to share a short video here. This is a robot called Clone. I contacted the CEO and he's going to be bringing his robots to the Abundance Summit next year. But let's check out a little bit of a video here.

So what Clone is doing is basically creating, what's that, Westworld? So these are muscles. They're hydraulic systems. And that video is underrepresenting what it can do in terms of moving the hands. They hope to have it walking in the next few months. They're based in Eastern Europe where they're doing a lot of the work.

but talk about an interesting future of robots where, I mean, a lot of the robots today out of the U.S. and China are clunky walkers. They do walk, but they don't have that human emotional, fluidic movement. But, uh,

It's interesting that they chose to work in that way. You know, in a sense, brushless motors have kind of helped us get an amazing amount of cheaper prices and incredible capabilities in robotics. That's my first thought. The second one is I think the dark horse here, similar

to DeepSeek, is Unitree. Unitree has some insane videos. They look like CGI, where you have four-legged robots that also have wheels, which I think is a clever idea.

And so super fast, but they also jump and climb up stuff and spin the wheels at the same time. That's the second thought. And the third one is, yeah, I'm excited. And now the question is always, what's really the most amazing use case for humanoid robots versus, you know, like a tractor factory where you just have a bunch of little lasers and thousands of arms and things like that? You wouldn't want a bunch of humanoid robots walking over a field, similar to the dishwasher stuff we talked about earlier. At the same time,

It's not a zero sum game. There's a ton of cool stuff. I would totally buy a humanoid robot to have stuff be done at my house and just kind of clean and they can do it at night, right? So they don't have to be super fast. And now the fourth comment is I feel like everyone works on the AI version of robotics, like the original

Terminator; no one works on the T-1000. And one of my many ideas, actually, is to build a T-1000-like robot, and I have a bunch of ideas I recently jammed on with a really brilliant hardware hacker, and he's like, you know, this could actually work and make sense. So, project number five, if I have some... Oh, you heard it here, folks: Jim Cameron was right, and it's all going to be due to Richard Socher. I gotta say something here.

You know, if you want a musculoskeletal humanoid robot, you get a man and a woman and you have a baby and you grow the baby. I mean, I really struggle with this. Like, we talked earlier, right? If you want a dishwasher, you have a machine that sprays water in a particular way, and it looks like a box, and you have trays to put dishes in, whatever.

I was saying with the vacuum cleaner, to the point that Richard just made, it's so much more powerful to have wheels on the legs, et cetera, et cetera. Why are we constantly going back to the human form? We've had this argument. We have, frankly. How many times? It drives me nuts. You're just wrong. I'm just wrong. Richard, the argument we've had is, I kind of say, if you're going to build a robot, have one with seven arms that can do many more things. Why make it look like a human?

Well, because it's cool. So I am an investor in Machina Labs too. They built these massive arms, and they can form sheet metal, and they work with SpaceX and a bunch of folks. Whenever you don't want to build an entire factory to make that same large piece of metal millions of times, but you need it like 200 times, they're perfect for it. They can literally ship a factory that

creates any spare part into the field somewhere, and then you just have almost like a blacksmith, but massive, and AI. And they're also, like, oh, anti-humanoid. Now again, it's not a zero-sum game, right? I think some people want a beautiful humanoid-like robot in their house, but we can still have dishwashers and factory robots and so on that are very custom-purpose and look

crazy, funky, with 20 arms. And, you know, that's the excitement for robotics. It doesn't have to be zero-sum. All right. We have a lot of robot announcements this week. So let me continue on here. Next up is NEO Gamma. So, you know, listen, I think this looks pretty damn cool. I mean, this is, you know, in terms of its motions. Now, how staged this is,

And how practiced, you know, we don't see the 37,000 shots that went wrong, but that looks like a pretty friendly home robot. You know, one of the questions I ask everybody is how many will you own? You know, when I interviewed Elon and Brett Adcock, Brett's the CEO, we'll see him in a minute, CEO of Figure. And of course, Elon oversees Tesla and the Tesla Bot, now called Optimus.

The projection is as many as 10 billion robots by 2040. And I can imagine that. I have no problems imagining I would own, you know, two or three, maybe 10. Salim, no, not you? You know my struggle with this. I mean, one robot moving very quickly is the same as seven of them. And again, why does it have to look like a human being? It would be much better with wheels and seven arms. You could have them. Making coffee at the...

So I struggle with that. I think I feel more comfortable having a humanoid robot walking around the house than some strange-looking contraption. I think we're going to end up with the same problem we had in virtual reality with the uncanny valley,

where it's very disconcerting. I think we're going to have the same thing with human-formed robots. For sure. And sci-fi kind of is underrated in showing us sometimes also the positive ways. People will fall in love with their robots and they'll have these androids. Now, I think short-term...

We're going to see a lot of folks just remote-controlling a robot, collecting training data that way. And so part of the uncanny valley is going to be that you may have someone in India or somewhere sitting, looking into your entire home, being able to navigate everything, seeing your kids, opening your doors and everything. And you kind of have to be okay with that invasion of privacy, potentially, right?

And then once they get good enough, then you're right. They could be faster. I mean, they could put on wheels and shoes with wheels on and then attach another arm if we really want them to. They can be more modular that way. So I'm excited for it. All right. So that was Neo Gamma from 1XTech. Let's go to the next robot here.

And this is Figure AI. So just for disclosure, I'm an investor in Figure. I don't know if you are, Richard. This is Brett Adcock's company, and they just announced their software. Interestingly enough,

Figure used to have a software relationship, or a gen AI relationship, with OpenAI. And they shut that down, and they decided to build their own AI team internally and to build Helix. And I think the logic there is, in the same way that Tesla got so much data from Autopilot as we were driving it around, that allowed them to create these incredible models.

That's Figure's AI. I really hope they come up with a separate name for it, because calling the company Figure and the robot Figure gets a little bit confusing. But they're going to get a lot of data, and that's going to train the AI in the physical universe. Let's take a look at their video. So, Salim, instead of having four arms, you have two robots instead, and they collaborate. It's called collaboration.

I think this is going to take a much longer time to work out than people realize. But, you know, it's fantastic to see the speed at which it's moving forward. Because 10 years ago, when we were first looking at robots, it was really hard to imagine they would get to this level. Oh, the DARPA Grand Challenge? That's correct.

Yeah. Remember the drop-and-run challenge? It was so clunky, and so, I think, it's fantastic to see that. But the use cases and the application areas are where I think it'll be. You know, my Roomba still cannot clean a room without me moving all the furniture around for it.

So who's working for whom? Yeah. And I think robotics does a phenomenal job if we can constrain the environment a little bit more. That's why self-driving is also a fairly constrained environment. It is standardized in a lot of places. The highways all look the same. Things like road signs are standardized. Houses have very little standardization. And you're right. It will be very, very hard. And the companies that are actually able to get through and get one use case so nailed that it is big enough and important enough for folks...

will be at a huge advantage. But it is harder than most people think. It'll be very capital-intensive. And then the question is, can you be a fast follower out of China and just say, oh, this is how they do it now, we reverse-engineer it. And then you can leapfrog, skip the whole expensive research stage. And I'll go to my favorite use case: it's going to be a while before you get one of these humanoid robots and say, go change the baby's diaper. There's just so many things that can go wrong with that.

Yes, I still love the image of walking into the room and the robot is holding the baby by one foot. The funniest comment I saw on this Figure video was: this reminds me of two of my buddies being really stoned and trying to unload the lot. That's perfect. The doorbell is about to ring. It's my Figure robot coming over to give you a hug. So answer it and be nice. Okay.

All right. I can't do an episode without Bitcoin. Let me begin with a question to you, Richard. Are you a believer in Bitcoin? There's a faith component here when I say believer. Are you a holder of Bitcoin? I have just a tiny bit here and there. I'm invested in a fund that does a lot of crypto things just to have a little bit of exposure. But I mostly want to focus on AI and find it a bit of a distraction. So I'm not really deep in it.

Well, when focusing on AI, I mean, listen, AIs and agents are going to need to have mechanisms for transacting financially. So, you know, let's take it slightly sideways to cryptocurrencies for AI agents to do business amongst each other.

What do you think about that? I mean, it makes sense, but they can also do that with credit cards, right? We'll have AI kind of make credit card purchases fairly quickly. I was a little bit dismayed when I actually tried to play around with the technology. And then it's just like the gas fees and so on were also pretty high. And I'm like, wait, this is like a credit card fee almost. This already costs a lot of money. I'm like, this doesn't seem right. So...

I don't know, I feel like they need to really lower the prices so that the transactions themselves are insanely cheap. Yeah, there's a whole stack there. You've got Bitcoin with very expensive transaction fees, and then proof of work moving to proof of stake. And as you get closer to the end use, you need less security.

If I'm storing jewelry in a bank vault, you have a lot of security, but you don't do that many transactions. When it comes to a debit card, you can have much less security; the transactions are limited to, like, $50 each, and therefore you can lower the security in exchange for the volume. I think that's the kind of thing we're going to see in the crypto world as well.

How nervous do you get, Salim, when you see the price right now, at this very moment? I'm actually, yeah, I'm really encouraged by what's happened here. So two things happened over the last few days. One was the Bybit hack, which was the biggest crypto hack ever.

And in previous years, this would have caused a massive collapse in the crypto world, and it was barely even noticed. Right. So I think that's one of the very interesting things. And the second was the response from the exchange's CEO: we're going to get everybody made whole again very quickly, et cetera. It gives me encouragement that there's robustness being built into the ecosystem here,

which gives people a lot of confidence going forward. So I'm pretty excited about where this will go. The Trump meme coin did not help the crypto world at all, and so that's really unfortunate. But that's life. You get what you ask for. You didn't buy it, did you? No, no. No, no, no. Because you can see it's only going one direction. So if you don't mind, you mentioned the Bybit platform's

billion-dollar hack. Can you unpack it for us? Yeah, so what happened was one cold wallet that stored a lot of Ethereum got hacked and suffered a massive withdrawal. Now, the challenge here is that if you're the hacker, you want to move this into anonymous places, and people can kind of watch the transactions because crypto is fairly traceable. There are appeals to Ethereum, right up to Vitalik, to say, can we just roll the chain back to before the hack?

and it'll just undo the hack, basically. So there's a call for that. But trying to wash all that currency out is going to be very, very tricky to do. And everybody's watching all these wallets, and where the funds are going, very carefully to find out who it is.

I don't know how you sustain this. I'll just repeat: I'm really encouraged by the response from Ben Zhou and the Bybit folks saying, we're going to just navigate all this, we're going to keep everybody whole. And the fact that they had enough backup to do this. In general, what we've found in the crypto world is you don't want to keep major wealth

on a centralized exchange for this exact reason. With Mt. Gox, a lot of people lost a lot of money early on. So you keep it offline, and you do trading on these exchanges, but not storage of value. Yeah, I know. But every time I, you know, use a Trezor

or, what's it called, a Ledger, you know, one of those thumb-drive wallets. But I pucker up every time I go and plug it into my computer. It's non-trivial. It's very tricky. And, you know, this goes to that whole usability idea, right? I remember your comment about when a technology goes from deceptive to disruptive,

the usability becomes 10x, 100x better. So Steve Jobs made the smartphone usable and boom, it took off. Coinbase made the purchasing of Bitcoin usable and very user-friendly, and that took off. But the rest of crypto is still a hot mess.

Anybody that tries to buy an NFT, trade an NFT, or execute a smart contract knows how sticky it is. You have to be, like, geek level 14 to even touch that stuff. Yeah, I'm using Abra for my major holdings, but I still have some, again, on Coinbase and a number of different places. But it turns out to be a significant amount of capital and you've got to be careful about it. That's right. I think the tricky bit is: the reason credit cards work is that you're kind of insured.

Like, if someone steals your credit card and you see a bunch of purchases, you can just tell them, that wasn't me, and then the bank will give you your money back. And part of the problem with the decentralization here is that you also decentralize the risk, the security that you have to have, and the liability that each user has for their own wallet. And then people are often just not sophisticated enough to be able to deal with all the cybersecurity threats.

You know, switching here to MicroStrategy, now called Strategy. Richard, Michael Saylor was my roommate and fraternity brother at MIT, so we go way back. He is extraordinarily brilliant. And I was just with him in El Salvador.

I was there speaking with Carlos Slim, Mike Saylor, Marc Andreessen, and Ben Horowitz. And Mike gave a massively compelling 90-minute presentation to this room full of billionaire family offices.

And every time I hear him, I'm like, okay, I'll mortgage my house, sell everything, buy Bitcoin. The guy is, you know, it's very dangerous to listen to Michael for any length of time. It's compelling. You know, it's interesting, though. I want to just point one thing out for those of you who are nervous about this, about a fall: the equivalent is, you know, HODL and buy on the dips. But, and I have to verify this, it's something I wonder if you know:

If you try to trade in and out of Bitcoin, like sell out and buy back in, that's problematic because of where the gains come from. This is from memory, but I wonder if it's true that most of the gains last year were made on, like, five trading days.

Yeah, this is historically accurate. In any given year, Bitcoin accelerates at some point in the year, and it's very, very few trading days that make up 80% of the upside. The problem is you don't know which days those are.
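For anyone who wants to sanity-check that claim, here is a minimal sketch of how you could test it yourself, assuming you have a CSV of daily closing prices. The file name and column names below are hypothetical placeholders, not a real data source.

```python
# Minimal sketch: how much of a year's Bitcoin return comes from the best few days?
# Assumes a CSV of daily closes with "date" and "close" columns (hypothetical file).
import csv

def load_closes(path):
    closes = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            closes.append(float(row["close"]))
    return closes

def total_return(daily_returns):
    # Compound a list of daily returns into one cumulative return.
    value = 1.0
    for r in daily_returns:
        value *= 1.0 + r
    return value - 1.0

closes = load_closes("btc_daily_2024.csv")  # hypothetical file name
daily = [(b - a) / a for a, b in zip(closes, closes[1:])]

# Indices of the five biggest up days.
best = set(sorted(range(len(daily)), key=lambda i: daily[i], reverse=True)[:5])

full_year = total_return(daily)
missing_best = total_return([r for i, r in enumerate(daily) if i not in best])

print(f"Full-year return:               {full_year:.1%}")
print(f"Return if you missed the top 5: {missing_best:.1%}")
```

If the claim holds, the second number comes out dramatically lower than the first, which is the whole argument for holding rather than trying to time those few days.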

Right. And I've managed to spectacularly miss four out of the five of those. And then you buy on the other side of it and it goes horribly wrong. So it's a very tricky thing. What I tell people is just buy as much of it as you can and close your eyes for 10 years. Yeah. If you can. Well, this is, you know, Michael made another move. He acquired another 20,000 Bitcoin for about two billion dollars.

Those are pretty extraordinary moves. I mean, yeah, I wish... He has a lot of incentives to give 90-minute presentations to everyone to buy more Bitcoin, yeah. He does, for sure. It's definitely... You know, there's one other way I look at it. If you wanted to have somebody be the prime evangelist for the technology,

the articulation he brings to the table is hard to beat, and you could spend a lot of time trying to find a better one. It's incredible.

He is amazing. Richard, open forum here. What have been the most amazing events, breakthroughs, technologies, companies that you've seen in the last few months? Oh, boy. We just covered quite a few. And, you know, I saw Agentforce. I did a podcast with your buddy and mine, Marc Benioff. Marc's amazing. Agentforce 2 coming on strong. Yeah.

What do you think about the whole agentic world? I'm a huge fan. I think, you know, when you think about what kind... So, essentially, large language models can be thought of as neural sequence models, right? They're very large neural networks. They can be trained on any kind of sequence of things, and you can train them both with imitation and with exploration. And so...

When you think about what other interesting sequences there are, you know, in 2018, 2019, we started on these large language models for protein sequences. So boom, you've got biology. But then the very obvious sequence is a sequence of actions, too. And so I'm very excited about that.

We already have over 50,000 custom agents built on the You.com platform by our users. You can select which LLMs you use. Give us examples of the agents that people would use. What are the top ones? So, for example, you're in marketing. You say, oh, every two or three weeks, I get a huge PDF file with a bunch of new features, and some website that describes a new feature

that product and engineering have shipped. And then I'm tasked to write two email marketing campaigns for specific industries, tasked to write three LinkedIn messages. I have to go out on the web and compare these new features to the competition, so I don't say this is super novel and no one has it, even though other people actually have it, and so on. And what we've done is, like,

we talk to these marketers and say, oh, well, just describe that. Explain that very well to an agent on you.com. And the next week, when a new thing comes in, you just drag and drop that PDF and it just goes through all those steps. It writes the LinkedIn messages for you. It writes the email campaigns for you, and you're just done. And then we have journalists who say, well, I need to research a new thing. I'm supposed to write an article about prostate cancer, like advances, right?

Then I go to these 50 different sources. I read a bunch of research papers and then I put it together. Perfect use case. You know, the kinds of sources you described, like, use medical journals only: you can just say that in your prompt. You don't need a special feature switch in the UI/UX. You just prompt it differently. You explain that, and then it writes more and more of that for you, and then you just need to start comparing. So we have journalists and chief editors and writers that

told us that tasks that used to take them multiple days now take them like two, three hours and they're done. Maybe the last fun one that's relevant for you is we have venture capital firms that say, well, if I get a new data room, I go through 10 steps. I look at net dollar retention. I do CAC LTV ratios, blah, blah, blah.

And then you just describe that again, and you drag and drop the whole data room into You.com, and it just goes through those steps. So whenever it's knowledge work, you can already automate a ton of it. Can you create an agent that says, go out there and raise me a billion dollars of venture capital and go find the companies that are going to be unicorns and invest in those and then just send me the bank account information? That's step two.

So my description of agentic AI is: a white-collar job description. Yeah, that'll be epic. And then I think the next level will be they actually start taking actions for you. They start booking flights and things like that. Now, the interesting bit is that, just like with robotics, we're going to have an uncanny valley, or a trough of disillusionment, potentially. Because when I saw this Rabbit R1, for instance, in the demo, they said, oh, I want to book a flight with my four kids to London on these dates.

And then boom, boom, boom. Now it's done. And I'm like, no way that was real, because you have so many details, right? Like this hotel, I want it to be close to these kinds of sites I want to see. And then over time you change, right? When I was a poor graduate student at Stanford, on less than minimum wage, I would have been willing to wait 10 hours for a layover in order to save $200.

Now I spend thousands of dollars extra just to have a one-stop or zero-stop flight, to have a direct flight, right? And so you need to know all these subtleties, like, when are you willing to wait, for how long, how much extra do you pay? And then you need much more personalization still to make those agents work, too. But for knowledge work, you can already automate a lot.
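To make that concrete, here is a rough sketch of what the marketing agent described above could look like if you wired one up yourself. This is not You.com's actual agent API; call_llm is a placeholder for whichever model provider you use, and the prompts and step names are illustrative assumptions.

```python
# Minimal sketch of a "describe the steps once, reuse it every week" agent.
# call_llm is a placeholder you would wire to your LLM provider of choice.

def call_llm(prompt: str) -> str:
    """Placeholder: send a prompt to your LLM of choice and return its text reply."""
    raise NotImplementedError("Wire this up to your model provider.")

def marketing_release_agent(release_notes_text: str, competitor_urls: list[str]) -> dict:
    # Step 1: summarize what product and engineering shipped.
    summary = call_llm(
        "Summarize the new features in these release notes in plain language:\n"
        + release_notes_text
    )

    # Step 2: compare against the competition so nothing is claimed as novel by mistake.
    comparison = call_llm(
        "Given these new features:\n" + summary
        + "\nand these competitor pages:\n" + "\n".join(competitor_urls)
        + "\nflag any feature the competition already offers."
    )

    # Step 3: draft the outbound content.
    emails = call_llm(
        "Write two short email campaigns, one per target industry, based on:\n" + summary
    )
    linkedin = call_llm(
        "Write three LinkedIn posts announcing these features:\n" + summary
    )

    return {"summary": summary, "comparison": comparison,
            "emails": emails, "linkedin_posts": linkedin}
```

The data-room example works the same way: swap the prompts for net dollar retention, CAC-to-LTV ratios, and the rest of the checklist, and drag the documents in each time a new deal comes through.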

Richard, why haven't we yet seen a kind of agentic version of a Jarvis that just watches your tasks and says, hey, last time you booked these, you always did this, so are you sure you don't want to do that again? And tracks you and learns from your patterns, and therefore can then represent you more easily. I would have hoped to have seen that by now. Have you seen anything like that? Give it permission to listen to your phone calls, read your emails, watch you, all of that. Yeah, there are

two or three problems with why we haven't seen it yet, sort of blockers. Nothing impossible to fix. But number one is you're not allowed to record other people without their consent. So that

puts a damper on a lot of things. A lot of countries will sue you, like California, like in Europe, and so on. So that's why you can't have it. The second thing is Microsoft actually tried to launch this, where it just watches everything you do on Windows, and people just went crazy. They're like, no way you're going to send a screenshot of every one of my things. People do private things sometimes in their browser. They don't want to share all of that with the world. So

that will be a privacy issue; it's just a privacy thing. You need to build an insane amount of trust with those companies. Then you have a lot of AI companies, the AI-forward, AI-first kind of novel startups, that don't have all the users' trust yet, or that ability to collect all of the data, and so on. But then, you know, I think we will eventually get to it. I think someone will be able to do it.

Apple is very good. They care about privacy, and you're probably more likely to trust Apple with everything you might do on your phone. And then the fourth thing is that eventually we're going to have more AI agents surfing the web than people. And that is a massive change for how the Internet monetizes,

because there are basically a few companies that make money actually selling physical goods, like Amazon. But even those companies are getting more and more into the second main bucket, which is advertising.

It turns out your AI assistant doesn't get distracted, when it has to just book a quick flight for work to Utah, by those Bahamas ads telling it to go on your next vacation. And so Expedia, even Amazon, makes a lot of money with ads. If you start ignoring all of those, it changes how the Internet monetizes. So those companies will try to block

all these operators, all these AI agents, from just being able to get the work done. And so, you know, these are just like, oh man, you can have the intelligence, but the infrastructure around it will slow down adoption. Amazing. Richard, who are your main customers at you.com? Who should check out your site? And tell us how to check it out.

Yeah, so you can just go to you.com, that's Y-O-U dot com. Our biggest customers are cybersecurity companies like Mimecast. We have publishers, a lot of publishers, that basically improve internal efficiencies for journalists, or allow you to just ask questions on your website and then get citations only on articles from your own network. So you can keep

users longer. I want every journalistic outlet eventually to have their own GPT version where it just answers questions about an article. You can eventually even think of these articles having very personalized follow-up questions. Like, let's say you never understood why the Hutus and Tutsis were fighting each other and you read a new article and the

outlet knows it's the first time you've read about this particular human conflict. Maybe they show you some more explanations and background stories and stuff. We are building that for media and publishing companies. We have universities with, like, 30,000 students going live on You.com, where all the students can use it, and the professors,

which I think will push those universities and all their professors to realize, wait, my students can just drag and drop this assignment in here and it just gives them the perfect answer; I need to rethink my assignments, rethink all of that. So we're excited about those. And then there's a whole host of consumer companies that want both the search APIs that power the plumbing of the LLMs, as well as the answers done for them. And we have some API customers that are ramping up massively.

Revenue is increasing a lot. It's been really great. Amazing. It's been a pleasure to get to know you and build our friendship.

Salim, as always, thank you for making time. I used to feel like I had a grip on what just happened. Now it's moving at an insane rate. I can't imagine next year. But, yep, incredible week in technology this week. Richard, Salim, thank you guys. Thanks for having me. Thank you.

Thank you.