Today, discussions in AI tend to revolve around large language models (LLMs) and natural language processing. But my conversation with Fei-Fei Li (co-founder and CEO of World Labs) and Martin Casado (a16z general partner) made me realize that we may be overlooking a more fundamental, more important component: spatial intelligence.
Fei-Fei argues that language is only a lossy encoding of the real world. Powerful as it is, language cannot fully capture the complexity of the three-dimensional physical world. True intelligence requires understanding and interacting with 3D space. Her motivation for founding World Labs was precisely to build "world models" that can understand and simulate the real world. This is not just for robotics: it is about unlocking creativity, reshaping computing interfaces, and even creating unlimited virtual universes that let us live in a "multiverse" fashion. Imagine countless virtual worlds: some for robot operation, some for artistic creation, some for social interaction, some for exploration and travel, some for storytelling... the possibilities are boundless.
Martin's views align closely with Fei-Fei's. He points out that language is limited in conveying the complexity of the real world. When we try to accomplish tasks in the physical world, such as navigating or manipulating objects, we rely far more on direct spatial perception and the brain's ability to reconstruct three-dimensional space. He believes spatial intelligence is crucial for robotics, video games, art and design, and many other fields. The world models World Labs is building can create complete 3D representations from 2D views, enabling computers to understand, manipulate, and measure objects in three-dimensional space, and thereby unlocking far broader applications. It is like being blindfolded in a room with only a verbal description: you can hardly accomplish any task; once the blindfold comes off, you can directly perceive and act on the objects around you.
I find their views deeply persuasive. Today's AI conversation centers on language models, but that may overlook something more fundamental: space. Spatial intelligence is the key to understanding and interacting with the physical world, and it is essential to AI's further development. Fei-Fei even shared her experience of temporarily losing stereo vision after a cornea injury, which gave her a deeper appreciation of how important stereoscopic (3D) perception is for spatial awareness and everyday activities such as driving. This further underscores the critical role of 3D spatial understanding in AI: 2D video alone is not enough for AI to truly understand and operate in a three-dimensional world.
World Labs' technical breakthrough lies in integrating expertise from AI, computer vision, diffusion models, computer graphics, and optimization to build a model that truly understands and simulates the three-dimensional world. This is no easy task, but the potential applications are extremely broad: from robotics to creative design, from virtual reality to augmented reality, all stand to be transformed. It would mark an important milestone in AI's development: a shift from pure language understanding toward a full understanding of the real world.
That space, the 3D space, the space out there, the space in your mind's eye, the spatial intelligence that enables people to do so many things beyond language is a critical part of intelligence. Fei-Fei leaned over to me. She's like, you know what we're missing? I said, what are we missing? She said, we're missing a world model. I'm like, yes!
We can actually create infinite universes. Some are for robots, some are for creativity, some are for socialization, some are for travel, some are for storytelling. It suddenly will enable us to live in a multiverse way. The imagination is boundless.
When we talk about AI today, the conversation is dominated by language, LLMs, tokens, prompts. But what if we're missing something more fundamental? Not words, but space, the physical world we move through and shape. My guests today think we are. Fei-Fei Li, a pioneer in modern AI, helped usher in the deep learning era by putting data at the center of machine learning. Now she's co-founder and CEO of World Labs, building world models, AI systems that perceive and act in 3D space.
She's joined by A16Z general partner Martin Casado, computer scientist, repeat founder, and one of the first people Fei-Fei called when forming the company. Today, they explain why spatial intelligence is core to general intelligence and why it's time to go beyond language. Let's get into it.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
Fei-Fei, thank you so much for joining us here today. Martin, why don't you briefly brag on behalf of Fei-Fei a little bit: how would you summarize her contributions to AI for people unfamiliar? Yeah, well, she's someone that doesn't need a lot of introduction, and she's done so many things that I can't fit them all in. So maybe I'll just do the ones that are appropriate to this. Of course, she was on the Twitter board. She was a Google executive.
Founder and CEO of World Labs. But very, very importantly, we all know AI and we all talk about neural networks, and there are a number of people who focused on making those effective. But Fei-Fei really singularly brought data into the equation, which now we're recognizing is actually probably the bigger problem, the more interesting one. And so she truly is the godmother of AI, as everybody calls her.
And Fei-Fei, why did you have to have Martin as the first investor? Well, first of all, I've known Martin for more than a decade. I joined Stanford in 2009 as a young assistant professor, and Martin was finishing his PhD there. And of course, Martin's advisor, Nick McKeown, was a good friend. And I always knew Martin went on to become a very successful entrepreneur and a very successful investor. So we see each other, we talk about things.
But as I was formulating the idea of World Labs, I was looking for what I would call my unicorn investor. I don't know if that's a word, but that's how I think about this. Someone who is not only, obviously, a very established and successful investor who can be with entrepreneurs on this journey through the ups and downs,
who can be very insightful, who can bring the kind of knowledge, advice, resource. But I was also particularly looking for an intellectual partner. Because what we are doing at World Labs is very deep tech. We are trying to do something no one else has done. We know with a lot of conviction it will change the world literally. But I need someone who is a computer scientist
who is a student of AI, understands product, market, customers, go-to-market, and can just be on the phone or in person with me every moment of the day as an intellectual partner. And here we are. We talk almost every minute.
It is true. Yeah. Amazing. Actually, the origin story of us first connecting is pretty interesting. So Fei-Fei has clearly been thinking about this idea for a very long time, well before the company started, maybe years even. And she has this very deep intuition of what AI needs in order to basically navigate the world, right? Yeah.
But we were at one of Marc's fancy lunches, and there's a bunch of AI people, and everybody was so excited about LLMs, right? Everyone was talking about language. And I'd come to this independent conclusion, just because I've actually done a lot of image investing, that that wasn't the end of the story. And so all these people were at the end of this table talking about it, and Fei-Fei leans over to me, she's like, you know what we're missing? I said, what are we missing? She said, we're missing a world model. And I'm like, yes! And it fell into place then, because I'd been thinking about stuff at a high level, but as she does, she just kind of perfectly articulated this. So she had a year's worth of thinking about this and talked to people, et cetera. And so in some way, we kind of, in our own crooked paths, had arrived at a very similar intuition. Hers was way more filled out. Mine was just this kind of fancy thing. But then after that, we actually had a number of conversations where we both agreed that we were aligned on this kind of idea. Actually, I don't know if you know this, but
So of course, during that lunch, we hit it off on this world model idea. But I was at that point already talking to various people, not just computer scientists, technologists, but also investors, potentially business partners,
And to be honest, most people didn't get it. You know, when I say world model, they nod, but I can just tell it was a polite nod. So I called Martin. I'm like, do you mind coming over to the Stanford campus and having coffee with me? And then as soon as Martin came and sat down, I said, Martin, can you define your world model for me?
I really wanted to hear if Martin actually meant it. And the way he defined it, an AI model that truly understands the 3D structure, shape,
and the compositionality of the world was exactly what I was talking about. And I was like, wow, he's the only person so far I've talked to who actually meant it. It's not just nodding. Wow. Okay, so we're going to get to World Labs and the specifics of this. But first, I want to take you back both to your PhD days, your professor days, and reflect on...
If you could go back in time with knowledge of what's happened in the preceding 10 years in AI, what do you think would have been the biggest surprises? What's the thing you didn't see coming that would have shocked your younger self? Or did you have a good sense of how this field was going to play out? Yeah, it's ironic to say, because as Martin said, I was the person who brought data into the AI world, but I still continue to be so surprised by
Not surprised intellectually, but surprised emotionally, that the data-hungry models, the data-driven AI, can come this far and genuinely have incredible emergent behaviors of a thinking machine, right? Yeah. Let's get into the specifics. Why start another foundation model company? Why aren't LLMs enough?
My intellectual journey is not about a company or papers. It's about finding the North Star problem. So it's not like I woke up and said, I have to do a company. I woke up every day, day after day for the past few years, thinking that there is so much more than language. Language is an incredibly powerful encoding of thoughts and information, but it's actually not a powerful encoding of the 3D physical world that all animals and living things live in. And if you look at human intelligence, so much is beyond the realm of language. Language is a lossy way to capture the world. And one subtlety of language is that it's purely generative. Language doesn't exist in nature. We look around, there's not a syllable or a word. Whereas
the entire physical, perceptual, visual world is there. And animals' entire evolutionary history is built upon so much perceptual and, eventually, embodied intelligence. Humans not only survive, live, and work, but we build civilization beyond language, upon constructing the world and changing the world. So that's the problem I want to tackle. And in order to tackle that problem,
Obviously, research was important, and I spent years doing that as an academic. And it's still fun, but I do realize, especially talking to Martin, that the time has come for concentrated, industry-grade, focused effort in terms of compute, data, and talent. That is really the answer to bringing this to life. And that's why I wanted to start World Labs. Amazing. Eric, you can do a very simple thought experiment that kind of highlights the difference between language and space.
So if I put you in a room and I blindfolded you, and I just described the room, and then I asked you to do a task, the chances of you being able to do it are very little. I'm like, oh, ten feet in front of you there's a cup. It's just a very inaccurate way to convey reality, because reality is so complex and so exact, right? On the other hand, if I took off the blindfold...
and you can see the actual space, right? And what your brain is doing is actually reconstructing the 3D, right? Then you can actually go and manipulate things and touch things, right? And so one way to think about it is: we do a lot of language processing, and we use that to communicate high-level ideas, et cetera. But when it comes to navigating the actual world, we really, really rely on the world itself and our ability to reconstruct it. And how and when did you realize that language might not be enough? Because it seems like that's not super widely known. I don't hear about this all the time.
Well, if you ask me what the surprising breakthrough is, it's that language went first, because we've worked so hard on robotics, right? I mean, even look at autonomous vehicles as an industry. We've invested like
$100 billion in it. I remember when Sebastian Thrun actually won the DARPA Grand Challenge in 2005. And we're like, hooray, AV is done, right? And then 20 years later, we're finally there, $100 billion in, et cetera. And this is basically a 2D problem. And so that was the path we were going on: do you actually solve world navigation? And then out of nowhere come these LLMs, and they are unit-economic positive. They solve all of these language problems basically immediately. And so,
It just took me a moment. Actually, Fei-Fei said it beautifully early on when we were talking: the part of our brain that deals with language is actually pretty recent. And so we're actually pretty inefficient at it, right? And so the fact that a computer does it better is not super surprising. But the part of the brain that actually does the navigation, the spatial part, has been around much longer. Maybe the reptilian brain has been around 400 million years. It's even more than that. It's the trilobite brain.
Yeah, yeah, right. 500 million years. Yeah. So it's almost like we're unrolling evolution, right? So the language part is actually very, very important for high-level concepts and the laptop-class type of work, which is what it's impacting right now. But when it comes to space, and this is everything from robotics to anything where you're trying to construct something physical, you have to solve this problem. And we know from AV that it's a very tough problem.
And then maybe this is what's worth talking about: the generative wave gave us some insight into how you might want to do it. So it really felt like that was the time. My journey is very different, because I've always been in vision, right? So I feel like
I didn't need LLMs to convince me LWMs, large world models, are important. I do want to say we're not here bashing language. I'm just so excited. In fact, seeing ChatGPT and LLMs and these foundation models having such breakthrough success inspires us to realize the moment is closer for world models. But Martin said it so beautifully. It's that
space, the 3D space, the space out there, the space in your mind's eye, the spatial intelligence that enables people to do so many things beyond language, is a critical part of intelligence. It goes from ancient evolutionary animals all the way to humanity's most innovative findings, such as the structure of DNA, right? That double helix in 3D space. There's no way you can use language alone to reason that out. So that's just one example. Another one of my favorite scientific examples is the buckyball, a carbon molecule structure that is so
beautifully constructed. That kind of example shows how incredibly profound space and the 3D world are. Let's paint even more of a picture. When World Labs has achieved its vision, or large world models have achieved their vision, what are some applications or use cases that we can present to the audience to help make it concrete? Yeah, there is a lot, right? For example, creativity is very visual. We have creators from design to movies to architecture to engineering to industrial design. And creativity is not only for entertainment. It could be for productivity, for machinery, for many things. That alone is a highly visual, perceptual, spatial area, or areas, of work. Of course, we mentioned robotics.
Robotics to me is any embodied machine. It's not just humanoids or cars. There's so much in between, but all of them have to somehow figure out the 3D space they live in, have to be trained to understand the 3D space, and have to do things, sometimes even collaboratively with humans. And that needs spatial intelligence. And of course, I think one thing that's very exciting for me is that
For the entirety of human civilization, we have all collectively lived in one 3D world, and that is the physical 3D world of Earth. A few of us went to the moon, but, you know, it's a very small number. But that's one world.
But that's what makes the digital virtual world incredible. With this technology, which we should talk about, it's the combination of generation and reconstruction. Suddenly, we can actually create infinite universes. Some are for robots, some are for creativity, some are for socialization, some are for travel, some are for storytelling. It suddenly will enable us to live in a multiverse way. The imagination is boundless.
I think it's very important, because these conversations can sound abstract, but they're actually not. The reason they sound abstract is because this is truly horizontal, just like LLMs are, right? If you ask what LLMs are good at: the same LLM we use for an emotional conversation, we use to write code, we use for to-do lists, we use for self-actualization, right?
And so I think we can get pretty concrete about what these models do, right? So let me just give it a shot, and then Fei-Fei is the expert, of course. With these models, you can take a view of the world, a 2D view of the world, and then you can actually create a full 3D representation within the computer, including what you're not seeing, like the back of the table, for example.
So given just a 2D view, you have the full thing. And then you ask, okay, well, what can you do with that thing, for example? Well, you can manipulate it, you can move it, you can measure it, you can stack it. So anything that you would do in space, you could do, right? I mean, you could do architecture, you could do design.
But it turns out the ability to fill out the back of the table means that you can fill out stuff that was never there to begin with, right? So let's say I just had a 2D picture of this. I could create a 360 of everything, right? And so now you have full generation. And so what does that mean? That means video games, that means creativity. And so it's a super horizontal piece that takes a computer with a single view of the world, or maybe multiple views of the world, and creates a full 3D representation that that computer can then act on. And so you can see that that's a very, very
concrete, pivotal thing for everything from robotics to video games to art and design. Yeah. It seems like we haven't fully been appreciating the 3D component until now. Is that fair to say? It is fair to say. In fact, I think
It took evolution a long time. 3D is not an easy problem, but I always come back to a conversation I had with my six-year-old years ago about why trees don't have eyes. And the fundamental thing is: trees don't move. They don't need eyes. So the fact that the entire basis of animal life is moving and doing things and interacting gives life to perception and spatial intelligence. And in turn, spatial intelligence is going to reinvent, horizontally, as Martin said, so many of the ways humans work and live. Yeah, fascinating. But it is definitely worth asking the question: why can't you just use 2D video for this, right? 3D is very, very fundamental to this.
Fei-Fei, you suggested let's get deeper into the technology. What can we share about how it works, or what the breakthrough is, or what's worth commenting on in the technology? To Martin's point, does it need to be 3D, or why can't you just use 2D? I think you can do a lot of things using 2D. But the fact is that 2D won't get you very far. In fact...
Today's multimodal LLMs are already making a big difference in the robot learning world, helping guide you to know what's next, the state of the world. But fundamentally, physics happens in 3D and interaction happens in 3D. Navigating behind the back of the table needs to happen in 3D. Composing the world, whether physically or digitally, needs to happen in 3D. So
Fundamentally, the problem is a 3D problem. One way to think about it is if it's a human being looking at, say, a 2D video, the human being can reconstruct the 3D in their head, right?
But let's say I've got a robot that takes the output of the model. If that output is 2D, and then you ask the robot to judge distance or to grab something, that information is missing. You've got X and Y, but the Z axis just isn't there at all, right? And so for many things that are spatial, you need to provide that information to the computer so that it can actually navigate in 3D space.
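The missing-Z point can be made concrete with the textbook pinhole-camera model (this is a generic illustration, not World Labs' method): projecting a 3D point to a 2D image divides out the depth, so two points at different distances along the same ray land on the same pixel, and no amount of staring at the 2D coordinates recovers which one you are looking at.

```python
# Minimal sketch: a pinhole camera projects 3D points to 2D, discarding depth.
# Two points at different distances along the same ray land on the same pixel,
# which is why a robot fed only 2D output cannot judge distance to an object.

def project(point, f=1.0):
    """Project a 3D point (x, y, z) to 2D image coordinates with focal length f."""
    x, y, z = point
    return (f * x / z, f * y / z)

near = (1.0, 2.0, 4.0)   # a point 4 units from the camera
far = (2.0, 4.0, 8.0)    # a different point, twice as far along the same ray

print(project(near))     # (0.25, 0.5)
print(project(far))      # (0.25, 0.5) -- same pixel, depth information is gone
```

Inverting this ambiguity, inferring the full 3D scene (including occluded parts) from one or a few 2D views, is exactly the problem a world model has to solve.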
And so 2D video is great if it's a human because we already can turn it into 3D. But like for any computer program, it'll need to be 3D. Actually, I want to tell you a personal story. About five years ago, ironically, I lost my stereo vision for a few months because I had a cornea injury.
And that means I was literally seeing with one eye. And like Martin said, my whole life had been trained with stereo vision. So even though I was seeing with one eye, I kind of knew what the 3D world looked like. But it was a fascinating period for me, as a computer vision scientist, to experiment with
what the world is. And one thing that truly drove it home, literally, was that I was frightened to drive. Wow. First of all, I couldn't get on the highway. That speed, I could not, you know. But even just driving in my own neighborhood, I realized I didn't have a good distance measure between my car and the parked cars on a small local road.
Even though I had an almost perfect understanding of how big my car is, how big the neighbors' parked cars are. I had known the roads for years and years. But just driving there, I had to be so slow, almost 10 miles an hour, so that I didn't scratch the cars. And that was exactly why we need stereo vision.
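There is a simple geometric reason one eye loses absolute distance. With two viewpoints a known baseline apart, the standard stereo triangulation formula recovers depth from the disparity between the two images; with a single viewpoint there is no disparity to measure. A minimal sketch of that textbook formula (the numbers below are hypothetical, chosen only for illustration):

```python
# Minimal sketch of stereo triangulation: two views a baseline B apart see the
# same point at slightly different image positions (the disparity), and depth
# follows as z = f * B / disparity. One eye has no disparity, so absolute
# distance must be guessed from learned cues like known object sizes.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth in meters, given focal length (pixels), baseline (meters), disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive; a single view provides none")
    return f_px * baseline_m / disparity_px

# Hypothetical numbers: 700 px focal length, 6.5 cm inter-pupillary baseline.
print(depth_from_disparity(700, 0.065, 9.1))  # roughly 5 meters
```

This is why, with one eye, Fei-Fei could still recognize the cars (learned monocular cues) but could no longer measure the gap to them.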
That's actually a great articulation of why 3D is just key if you're doing any processing, right? Yeah. So I don't recommend it, but if you're curious, park car number one and drive car number two with one eye and feel it. That's your own car. On the tech side, with LLMs, a lot of the research was done at the big companies. What's the state of the research here? This is definitely a...
newer area of research compared to LLMs. It's not totally fair to say new, because in computer vision as a field, we have been doing bits and pieces. For example, one important revolution that happened in 3D computer vision was Neural Radiance Fields, or NeRF. And that was done by our co-founder Ben Mildenhall and his colleagues at Berkeley. And that was
a way to do 3D reconstruction using deep learning that really took the world by storm about four years ago. We also have a co-founder, Christoph Lassner, whose pioneering work was part of the reason the Gaussian splat representation started to, again, become really popular as a way to represent volumetric 3D. And of course, Justin Johnson, who was my former student and is also a co-founder of World Labs,
was among the first generation of deep learning computer vision students who did so much foundational work in image generation. Before Transformers were out, we were using GANs to do image generation, and then style transfer, which really popularized some of the generative components or ingredients of what we're doing here. So things were happening in academia, things were happening in industry. But I agree, what is exciting now is that
At World Labs, we just have the conviction that we're going to be all in on this one singular, big North Star problem, concentrating the world's smartest people in computer vision, in diffusion models, in computer graphics, in optimization, in AI, in data. All of them come into this one team to try to make this work and to productize it.
I will say, from an outsider standpoint, and I'm not an expert in any of these spaces, it really feels like to solve this problem you need experts both in AI, and that's the data and the models, the actual model architecture, and in graphics, which is: how do you actually represent these things in memory, in a computer, and then on the screen? So it's a very special team to actually crack this problem, which Fei-Fei has managed to put together. Well, that's an inspiring note to wrap on. Fei-Fei, thank you so much for joining us. Thank you. Thank you, Eric.
Thanks for listening to the A16Z podcast. If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/a16z. We've got more great conversations coming your way. See you next time.