
What Comes After Mobile? Meta’s Andrew Bosworth on AI and Consumer Tech

2025/4/24

a16z Podcast

People
Andrew Bosworth
Topics
@Andrew Bosworth: I believe artificial intelligence will fundamentally change how we interact with technology. From the past two decades' smartphone era, centered on apps and touchscreens, we will move toward a computing experience that is more proactive, adaptive, and immersive. This will involve new interfaces and marketplaces, such as augmented reality glasses and more advanced virtual reality. I believe that ten years from now we will have far more ways to access information and content than we do today, and that within five years we will see a wide variety of smart glasses, from high-end to mass-market, becoming an indispensable part of daily life.

Our past success at Facebook came largely from focusing on solving users' real problems rather than pursuing technology for its own sake. The current AI revolution is exciting because it solves many real problems and applies broadly. Unlike previous technological breakthroughs, it is not confined to a particular domain; it can improve everything.

Ten years ago we foresaw that the mobile phone form factor would saturate, and we began exploring more natural modes of interaction, such as receiving information through the eyes and ears and expressing intent through neural interfaces. The arrival of AI has accelerated this: it understands user intent better and makes interaction more convenient.

We are building a wide range of products, from Ray-Ban Meta glasses to full AR glasses like Orion, representing different stages of the technology and different market positions. We believe people will eventually get all the information and services they need through AR glasses, with AI as the key interface.

There are challenges on both the hardware and software sides, but we believe AI can help us overcome them and create a more natural, more convenient user experience. We are not opposed to partnering with other platforms, but we also hope developers will build native applications for our devices to realize their full potential.

We believe AI will ultimately upend the existing app model: users will express intent directly instead of choosing a specific application. This will change the current market structure and have a profound impact on brands.

We are committed to open-sourcing AI models because we believe open collaboration advances the field, and because it fits our business model. We expect AI models to become commodities, while we focus on using AI to improve our own products.

Developing AI carries invention risk, adoption risk, and ecosystem risk. We are optimistic about the hardware and about social acceptance, but the ecosystem risk remains large. AI may be the silver bullet for ecosystem risk, because it can become the primary interface.

We are confident in the future of AI technology and willing to invest heavily in research and development. We believe this will be an epoch-defining shift that redefines how humans interact with computers.

@David George: (David George spoke relatively little in the interview, so no core argument of 200+ words could be extracted, and his key points are omitted here.)

Is there a better way? I think there is. Every single interface that I interact with, every single problem space that I'm trying to solve are going to be made easier by virtue of this new technology. If you were starting from scratch today, you probably wouldn't build this app-centric world. You can imagine a post-phone world.

The past 20 years of consumer technology have been a story of apps, of touchscreens, and of smartphones. These form factors seemingly appeared out of nowhere and may be replaced just as quickly as they were ushered in. Perhaps by a new AI-enabled stack, a new computing experience that is more agentic, more adaptive, and more immersive.

Now, in today's episode, a16z's growth general partner, David George, discusses this future with arguably one of the most influential builders of this era. That is Meta's CTO, Andrew "Boz" Bosworth, who spent nearly two decades at the company, shaping consumer interaction from the Facebook news feed all the way through to their work on smart glasses and AR headsets.

Here, Boz explores the art of translating emerging technologies into real products that people use and love. Plus, how breakthroughs in AI and hardware could turn the existing app model on its head.

In this world, what new interfaces and marketplaces need to be developed? What competitive dynamics hold strong and which fall by the wayside? For example, will brand still be a moat? And if we get it right, Boz says the next wave of consumer tech won't run on taps and swipes. It'll run on intent. So is the post-mobile phone era upon us? Listen in to find out.

Oh, and if you do like this episode, it comes straight from our AI Revolution series. And if you missed previous episodes of this series with guests like AMD CEO Lisa Su, Anthropic co-founder Dario Amodei, and the founders behind companies like Databricks, Waymo, Figma, and more, head on over to a16z.com slash AI Revolution.

As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com slash disclosures.

Boz, thanks for being here. Thanks for having me. Appreciate it. Okay, I want to jump right in.

How are we all going to be consuming content five years from now and 10 years from now? 10 years, I feel pretty confident that we will have a lot more ways to bring content into our viewshed than just taking out our phone. I think augmented reality glasses obviously are a real possibility. I'm also hoping that we can do better for really engaging in immersive things. Right now, you have to travel to, like, the Sphere, which is great, but there's one of them. It's in Vegas and it's a trip.

Are there better ways that we can have access to if we really want to be engaged in something, not just immersively, but also socially? So it's like, oh, I want to watch the game. I want to watch it with my dad. I want to feel like we're courtside.

Sure, we can go and pay a lot for tickets. Is there a better way? I think there is. So 10 years, I feel really good about all these alternative content delivery vehicles. Five years is trickier. For example, I think the glasses, the smart glasses, the AI glasses, the display glasses that we'll have in five years will be good. Some of them will be super high end and pretty exceptional. Some of them will be like actually little and like not even tremendously high resolution displays, but they will be like always available and on your face.

I wouldn't be doing work there, but like if I'm just trying to grab simple content in moments between, it's pretty good for that.

So I think what we are seeing is, as you'd expect, we're at the very beginning now of a spectrum of super high-end but probably very expensive experiences that will not be evenly distributed across the population. A much more broadly available set of experiences that are not really rich enough to replace the devices that we have today. And then hopefully a continually growing number of people who are having experiences that really could not be had any other way today. Yeah.

Thinking about what you could do with mixed reality and virtual reality. Yeah, we're going to build up to a lot of that stuff. So throughout your career, I would say one of the observations I would have is you've been uniquely good at piecing together various big technology shifts into new product experiences. So in the case of Facebook early days for you, obviously you famously were part of the team that created the news feed. And that's a combination of social media, a mobile experience, and applying your like old school AI to it. Yeah, exactly. But that's pretty cool. And like a lot of times these trends, they come in bunches and then that's what creates the breakthrough products. So maybe take that and apply it to where we are today with the major trends that are in front of you.

Let me say two things about this. The first one is I think if there was a thing that, not me specifically, but I think me and my cohorts at Meta were really good at, it was that we really immersed ourselves in what the problem was. What were people trying to do? What did they want to do? And when you do that...

You are going to reach for whatever tool is available to accomplish that goal. That allows you to be really honest about what tools are available and see trends. I think the more oriented you are towards the technology side, you get caught in a wave of technology and you don't want to admit when that wave is over and you don't want to embrace the next wave. And you're building technology for technology's sake. Yeah, yeah. So like solving a product problem. But if you're embracing like what are the issues that people are really going through in their life and they don't have to be profound. I bring that up just because I think we're in this

interesting moment where I think all of us have been through a phase where a lot of people wanted a new wave to be coming because it would have been advantageous to them. Yeah. But those things weren't solving problems that regular people had. I think the reason we're so enthusiastic about the AI revolution that's happening right now is it really feels tangible. These are real problems that are being solved. And it's not solving every problem; it creates new problems. That's fine. So it feels like a substantial, real new capability that we have.

And what's unusual about it is how broad-based it can be applied. And while it has these interesting downsides today on factuality and certainly compute and cost and inference, those types of tradeoffs feel really solvable and the domains that it applies to are really broad. And that's very unusual. Certainly in my career, you almost always, when these technological breakthroughs happen, they're almost always very domain-specific.

It's like, cool, this is going to get faster, or that's going to get cheaper, or that's now possible. This kind of feels like, oh, everything's going to get better. Every single interface that I interact with, every single problem space that I'm trying to solve are going to be made easier by virtue of this new technology. That's pretty rare. Mark and I always believed that this AI revolution was coming. We just thought it was going to take longer. We thought we were probably still 10 years away at this point. But what we thought would happen sooner was this revolution in computing interfaces. And we really started to feel,

10 years ago, like the mobile phone form factor, as amazing as it was, this is 2015, was like already saturated. That was what it was going to be. And once you get past the mobile phone, which is again, the greatest computing device that any of us have ever used to this point, of course, it's like, okay, well, it has to be more natural in terms of how you're getting information into your body, which is obviously ideally usually through our eyes and ears, and how we're getting our intentions expressed back to the machine. You no longer have a touchscreen. You no longer have a keyboard. You no longer have a mouse.

So once you realize those are the problems, it's like, cool, we need to be on the face because you need to have access to eyes and ears to bring information from the machine to the person. And you need to have these neural interfaces to try to allow the person to manipulate the machine and express their intentions to it when they don't have a keyboard or mouse or a touchscreen. And so that has been an incredibly clear-eyed vision we've been on for the last 10 years.

But we really did grow up in an entire generation of engineers for whom the system was fixed. The application model was fixed. Yeah, of course. The, like, interaction design. Sure, we went from a mouse to a touchscreen, but, like, it's still a direct manipulation interface, which is literally the same thing that was pioneered in the 1960s. Yeah. So, like, we really haven't changed these modalities. And there's a cost to changing those modalities because we as a society have learned

how to manipulate these digital artifacts through these tools. So the challenge for us was, okay, you have to build this hardware, which has to do all these amazing things and also be attractive and also be light and also be affordable. And none of these existed before. And what I tell my team all the time is like, that's only half the problem. The other half of the problem is, great, how do I...

Like, how do I make it feel natural to me? I'm so good with my phone now. It's an extension of my body, of my intention at this point. - Yeah. - How do we...

make it even easier. And so we were having these challenges. And then what a wonderful blessing, AI came in two years ago, much sooner than we expected, and is a tremendous opportunity to make this even easier for us because the AIs that we have today have a much greater ability to understand what my intentions are. I can give a vague reference and it's able to work through the corpus of information it has available to make specific outcomes happen from it.

There's still a lot of work to be done to actually adapt it. And it's still not yet a control interface. Like I can't reliably work my machine with it. There's a lot of things that we have to do. We know what those things are. And so now you're in a much more exciting place, actually. Whereas before we thought, okay, we've got this big hill to climb on the hardware. You've got this big hill to climb on the interaction design. But we think we can do it. And now we've got a wonderful tailwind where on the interaction design side, at least, there's...

the potential of having this much more intelligent agent that now has not only the ability for you to converse with it naturally and get results out of it, but also

to know by context what you're seeing, what you're hearing, what's going on around you, and make intelligent inference based on that information. Let's talk about like Reality Labs and this suite of products, what it is today. So you have Quest headsets, you have the smart glasses, and then on the far end of the spectrum is Orion and some of the stuff that I demoed. So just talk about the evolution of those efforts and what you think the markets are for them and how they converge versus not over time. So when we started the Ray-Ban Meta Project,

they were going to be smart glasses, and in fact they were entirely built and we were six months away from production when Llama 3 hit, and the team was like, no, we've got to do this. And so now they're AI glasses, right? Like, they didn't start as AI glasses, but the form factor was already right. We could already do the compute. We already had the ability. So yeah, now you have these glasses that you can ask questions to, and in December, in the early access program, we launched what we call Live AI. So you can start a Live AI session with your Ray-Ban Meta glasses.

And for 30 minutes until the battery runs out, it's seeing what you're seeing. And it's funny because on paper, the Ray-Ban Meta looks like an incremental improvement to Ray-Ban Stories. And this is kind of the story I'm trying to tell, which is the hardware isn't that different between the two, but the interactions that we enable with the person using it are so much richer now. When you use Orion, when you use the full AR glasses, you can imagine a post-phone world. You're like, oh, wow, like if this was...

attractive enough and light enough and battery life enough to wear all day, this would have all the stuff I need. It would all be right here. When you start to combine that with images that we have of what AI is capable of, so you did the demo where we showed you the breakfast,

Yeah, it did. And it's, yeah, and for what it's worth, I mean, I'll explain it because it's very cool. I kind of walk over and there's a bunch of breakfast ingredients laid out. And I look at it and I say, hey, Meta, what are some recipes with these ingredients? That's right. So that is, for me at least, when we think about Orion,

Initially, it didn't have that AI component when we first thought about it. It had this component that was very direct manipulation. So it was very much modeled on the app model that we're all familiar with. Yeah, of course. And I think there's a version of that. Yeah, of course, you're going to want to do calls and you're going to want to be able to do your email and be able to do your texting and you want to be able to play games. We have a Stargazer game and you want to do your Instagram reels. What we're now excited about is, okay, take all those pieces and layer on the ability to...

have an interactive assistant that really understands not just what's happening on your device and what email is coming in. - Yeah, of course. - But also what's happening in the physical world around you. - Yeah, outside of you. - And is able to connect what you need in the moment with what's happening. And so these are concepts where you're like, wow, what if the entire app model is upside down? What if it isn't like, hey, I want to go fetch Instagram right now. It's like, hey,

The device realizes that you have a moment between meetings. You're a little bit bored. Hey, do you want to catch up on the latest highlights from your favorite basketball team? Those things become possible. Having said that, the hardware problems are hard, and they're real, and the cost problems are hard, and they're real. And you come at the king, you best not miss. The phone is an incredible centerpiece of our lives today. It's how I operate my home. I use it in my car. I use it for work. It's everywhere, right? It's everywhere, yeah. And...

the world has adapted itself to the phone. So it's weird that my ice maker has a phone app, but it does. Like, I don't know. I'm not sure. It seems excessive, but like, so somebody today who's like, I got to make an ice maker, number one job, got to have an app. It's like the smart refrigerator. You're like, I don't need this. Take it out for me. I do think it's going to be a long, this is what I said, the 10-year view for me is, I think, much clearer. I think these things are going to be available, wearable,

widely accepted, increasingly adopted. The five-year view is harder because man, like even if it seems amazing-- - Knocking out the dominance of the phone in five years, it just seems so hard. - It's unthinkable for us, right? That's why I said like Orion was the first time I thought maybe. Orion, like putting that in my head, I was like, okay. - It's the first glimpse I've had into that. - I was like, okay, like it could happen. Like there does exist a life for us as a species past the phone. - Yeah, it still has the whole dynamic of, well, how do I envision my life without the operating system that I'm so accustomed to? So obviously with the physical stuff that you do,

but just the familiarity and all the stuff that's working in there. So what do you think of the interim period? So maybe you get to the point where the hardware is capable, it is market accessible, but do you tether to the phone? Do you take a strong view that you will never do that and let the product stand? Like, how do you think about that piece?

The phones have this huge advantage and disadvantage. Huge advantage, which is like the phone is already central to our lives. It's already got this huge developer ecosystem. It's this anchor device, and it's a wonderful anchor device for that. The disadvantage is, I actually think what we found is the apps want to be different

when they're not controlled via touchscreen. And that's not super novel. A lot of people failed early in mobile, including us, by just taking our web stuff and putting it on the mobile phone and being like, "Oh, the mobile phone, we'll just put the web there." Yeah. But because it wasn't native to what the phone was, and I mean everything from interaction design to the actual design to the layout to how it felt, like, because we weren't doing phone-native things,

We were failing with one of the most popular products in the history of the web. This is like the major design debate, the skeuomorphic idea versus the native idea. Yeah, and I think having the developers is a true value, and I think having all this application functionality is a true value. But then once you actually reproject it into space and you're manipulating it with your fingers like this as opposed to a touchscreen, you have much less precision. It doesn't respond to voice commands because there's no

tools for that. There was no design integration for that. So having a phone platform today feels like, wow, I've got this huge base to work from on the hardware side, but I've also actually got this kind of huge anchor to drag on the software side. And so we're not opposed to these partnerships. I think it'll be interesting to see once the hardware is a little bit more developed how partners feel about it. And I hope they continue to support

people who buy these phones for $1,200, $1,300 being able to bring whatever hardware they want to bring, and take the full functionality of that with them. The biggest question I have is whether the entire app model survives, because we were imagining a very phone-like app model for these devices. Admittedly a very different interaction design; input and control schemes are very different, and that demands like a little extra developer attention.

I am wondering if like the progression of AI over the next several years doesn't turn the app model on its head. Like right now it's kind of an unusual thing where I'm like, I want to play music. So in my head, I translate that to I have to go open Spotify or open Tidal. And the first thing I think of is who is my provider going to be? Yeah, of course. As opposed to like, that's not what I want. It's extremely limiting. What I want is to play music. Yes. And I just want to be like, go to the AI. I'm like, cool, play this music for me. Yeah. And it should know, oh, like you're already using this service.

We'll use that one. Or these two services are both available to you, and this one has a better quality song, or this one has lower latency where you are. Or it's like, hey, the song you want isn't available on any of these services. Do you want to sign up for this other service that does have the song that you want? I don't want to have to be responsible for orchestrating like what app I'm opening to do a thing. We've had to do that because that's how things were done in the entire history of digital computing. You have an application-based model. That was the system.
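
To make the inversion concrete, here is a minimal sketch of an intent-first router in the spirit of what Boz describes: the user says "play this song," and the system, not the user, picks among providers based on subscription status, catalog, quality, and latency. Every name here (Provider, resolve_intent, the example services) is a hypothetical illustration, not any real Meta API.

```python
# Minimal sketch of an intent-first router: the user expresses intent
# ("play this song"); the system picks the provider. All names here are
# hypothetical illustrations, not a real platform API.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    subscribed: bool      # user already pays for this service
    has_track: bool       # the requested song is in its catalog
    latency_ms: int       # how quickly playback can start
    quality_kbps: int     # stream quality

def resolve_intent(song: str, providers: list[Provider]) -> str:
    """Fulfill 'play <song>' without asking the user to open an app."""
    # Prefer services the user already subscribes to that have the track.
    usable = [p for p in providers if p.subscribed and p.has_track]
    if usable:
        # Break ties on quality, then latency: performance, not brand.
        best = max(usable, key=lambda p: (p.quality_kbps, -p.latency_ms))
        return f"Playing '{song}' on {best.name}"
    # Dead end among subscriptions: offer a sign-up instead of failing.
    alternatives = [p for p in providers if p.has_track]
    if alternatives:
        return f"'{song}' is only on {alternatives[0].name}. Sign up?"
    return f"'{song}' isn't available on any known service."

providers = [
    Provider("ServiceA", subscribed=True,  has_track=True,  latency_ms=120, quality_kbps=320),
    Provider("ServiceB", subscribed=True,  has_track=True,  latency_ms=60,  quality_kbps=256),
    Provider("ServiceC", subscribed=False, has_track=True,  latency_ms=90,  quality_kbps=320),
]
print(resolve_intent("Some Song", providers))  # -> Playing 'Some Song' on ServiceA
```

The design point is the tie-break: the router optimizes for performance on the job being asked, which is exactly the brand-abstraction dynamic the conversation turns to next.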

So I do wonder how much AI inverts things. That's a pretty hot take. Yeah, that's a hot take. Inverts things. And that's not about wearables. That's not about anything. That's just like even at the phone level, if you were building a phone today, would you build an app store the way you historically built an app store? Or would you say like, hey, you as a consumer express your intention.

express what you're trying to accomplish. And let's like see what we have. Let the system see what it can produce. Yeah. But I do think if you were starting from scratch today, you probably wouldn't build this like app-centric world where I, as a consumer, I'm trying to solve a problem and first have to decide which of the providers I'm going to use to solve that problem. Yeah, of course. That's fascinating. And

Again, I think it's a function of where the capabilities are today. And I think we have line of sight into the orchestration capabilities. I'd say, knowledge-wise, it's probably capable today. Orchestration-wise, it's probably a little bit away. And then, of course, you've got to build the developer ecosystem

to develop on the platform. Which is incredibly hard. That's the thing I want to see. That's the hardest piece, right? That's the hardest piece. Yeah. The stronger we get at agentic reasoning and capabilities, the more I can rely on my AI to do things in my absence. And at first it will be knowledge work, of course, that's fine. But once you have a flow of consumers coming through here, what you're going to find is that they're going to have a bunch of dead ends. Yeah. Where they're going to ask the AI, hey, can you do this thing for me? And it's going to say, no, I can't.

that's the goldmine that you take to developers. And you're like, hey, I've got 100,000 people a day trying to use your app. They're trying to use your app. They don't know they are, but they're trying to use your app. Look, here's the query stream. Here's what's coming through. And we're going to tell them no today. If you build these hooks, you've got 100,000 people clamoring for something today. Coming in for your service. Yeah. And it's totally fine for

the AI to go back and say, hey, you've got to pay for this. There's a guy who does this for you, but you've got to pay for it. Yeah. And by the way, I'm not just talking about apps. It could be a plumber. It's like, there's like anything. There's something of a marketplace here that I think emerges over time. So that's how I see it playing out. I don't see it playing out as like someone goes into a dark room and comes up with this app platform. No, what's going to happen is there's going to become a query stream of people using AI to do things, and the AI will fail

in certain areas because that's a type of functionality that is currently behind some kind of an app wall and there's no... Or it hasn't been built native to whatever consumption mechanism. There's no bridge that's been built. And everyone wants to build the bridges. It's like, no, no, it's going to manipulate the pixels and it's going to manipulate... It's like, fine, it can do those things. I'm not saying the AI can't cross those boundaries, but I think over time that becomes the primary interface for...

humans interacting with software as opposed to the like pick from the garden of applications. Yeah, that makes a ton of sense. That's a very alluring end state just as a consumer, right? Yeah, it's messy. And I think it creates these very exciting marketplaces for functionality inside the AI. It abstracts away a lot of companies' brand names, which I think is going to be very hard for an entire generation of

- Brands. - Yeah. Like, the fact that I don't care if it's being played on one of these two music services, that's hard for those music services who, like, really want me to care. - Yeah, yeah, yeah. - And, like, they want me to have a stronger opinion about it. And, like, they want me to have an attachment. - Yeah. - I don't want to have an attachment. There are some things where you may value the attachment, - you know, whatever. - Yeah, in the world where I'm like, "Here's an AppGarden, and these two are competing for my eyeballs," the brand that they've built is the hugely valuable asset.

In the world where I just care if the song gets played and sounds good,

a different set of priorities are important. I think that's net positive because what matters now is performance on the job being asked. Yeah, actual product experience. And value and price for performance matters a lot. Yeah. I think a lot of companies won't love that. Well, what that's effectively articulating is abstracting away margin pools, which puts a lot more pressure on us trusting the AI, or the distributor of the AI, insofar as I'm floating between different companies that are each providing AIs.

How much do I trust them not to be bought and paid for on the back end, not giving me the best experience or the best price for money, but the one that gives them the most money? Yeah, of course. So it's the experience of Google Search today, right? It's a very different world. It's a very different world. But you can actually see inklings of it today, right? So certain companies are willing to work with the new AI providers on agentic task completion. And then they're like, well, actually, wait a minute.

I don't just want the bots executing this stuff. I want the humans coming to me. I think I need that. It's existential that I have this brand relationship directly with the demand side. So that's potentially messy, but a bright future, especially if we don't have to pay that like brand tax. Yeah, it'll be very messy. I don't know it's avoidable because I think once consumers start to get into these tight loops where more and more of their interactions are being moderated by an AI, right?

you won't have a choice. That's like where your customers will be. Yeah. But it's going to be a pretty different world. Yeah, it'll be a different world and there'll probably be some groups that try to move fast to it as a way to compete with things that are branded. Yeah. And just say, I'm going to compete on performance and price. Yeah, that's right. Where do you think that could potentially happen first?

It probably will mirror query volume. I think of this a lot. We do have a model of this, which was in the web era when Google became the dominant search engine. So before that, the web era was like very index based. It was like Yahoo and it was like links and getting major sources of traffic to link to you was the game.

And then once Google came to dominance, which happened very quickly over maybe a couple of years, I feel like, all that mattered was like SEO. All that mattered was like where you were in the query stream. Yeah. And the query stream dictated...

what businesses came over and succeeded. Yeah. Because like the queries that were the most frequent, those were the ones that came first. Yeah. And so like travel sites; travel is the one that flipped right away, right? Like it was a huge disruption, and travel agents went from a thing that existed to a thing that didn't exist in a relatively short time, and they all competed on the basis of like execution of the best deal. It was literally like, who could do it in the most seamless fashion with the highest conversion. I think SEO has gotten to a point now

where it's kind of a bummer. It's like made things worse. No, it's just gotten gamed. It's just gotten gamed. Everyone's gotten so good at it. Especially with AI. That's right. So I actually think we had this incredible flattening curve and now it's starting to kind of rise back up in terms of... Especially with paid placement, too. Yeah. That's so dominant now. Yeah, that's right. And this is probably the cautionary tale for how this plays out in AIs as well. I think there will be a

pretty good golden era here where the query stream will dictate what businesses come first because those are the queries that are, that's the volume of people unsatisfied with the existing solutions that they have. Yeah. Otherwise they wouldn't be asking about it. And product providers and developers will follow that. And build specifically to solve those problems. That's right. Once it tips in each vertical, we get a lot of progress very quickly. Yeah. Towards better solutions for consumers. Yeah.
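
As a companion sketch, here is one hedged illustration of the "query stream dictates what businesses come first" mechanism described above: the assistant logs every intent it had to refuse, and the platform ranks those dead ends into a demand report for developers. The log format and function names are hypothetical, invented purely for illustration.

```python
# Minimal sketch of the "query stream as goldmine" idea: aggregate the
# intents the assistant had to refuse, so developers can see unmet demand.
# The log format and report are hypothetical illustrations of the mechanism.
from collections import Counter

# Each entry: (intent category, fulfilled?) as the assistant logged it.
query_log = [
    ("book_plumber", False),
    ("play_song", True),
    ("book_plumber", False),
    ("file_expense_report", False),
    ("play_song", True),
]

def unmet_demand(log: list[tuple[str, bool]]) -> list[tuple[str, int]]:
    """Rank intent categories by how often the assistant said 'no, I can't.'"""
    dead_ends = Counter(intent for intent, fulfilled in log if not fulfilled)
    return dead_ends.most_common()

for intent, count in unmet_demand(query_log):
    print(f"{count} requests hit a dead end on '{intent}': build this hook")
```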

And then once it hits a steady state, it starts to be gamesmanship. Yeah. And that's the thing we fight. And that's the decaying era. That'll be the true test of AI. The true test. Can it get through that? Can it avoid falling into that trap? Can it avoid that trap? Yeah, yeah, that's right. Exactly. Well, a lot of that is business model driven and we'll see how that evolves over time too. That's right. You guys have also been

leading from the front on this idea of open source. And so talk about some of your efforts on that side of the business. And then what is the ideal market structure of the AI model side for you guys? There's two parts that came together. The first one is Llama came out of FAIR, our fundamental AI research group.

And that's been an open source research group since the beginning. You know, since Yann LeCun came in and they established that. It's allowed us to attract incredible researchers who really believe that we're gonna make more progress as a society working together across boundaries of individual labs than not. And to be fair, it's not just us. Obviously, the Transformer paper was published at Google. And like, you know, big-- self-supervised learning was our contribution. Like, everyone's contributing to the knowledge base.

When we open sourced Llama, that's how all models were open sourced at that point. Yeah, of course. Like, everyone was open. The only thing that was unusual was... Everything else just went closed source over time, effectively. That's right. But before that, every time someone built a model, they open sourced it so that other people could use the model and see how great that model was. That was mostly how it was done. If it was worth anything. Certainly some specialized models for translations and whatnot were kept closed. But if it was a general model, that was what was done. Llama 2 was probably the big decision point for us. Llama 2, and this is where I think the second...

The second thing that came in was a belief that I've had, that I was advancing really strenuously internally, that Mark really believes in too. And he's written his post about this, which is, first of all, we're gonna make way more progress if these models are open. Because a lot of these contributions aren't gonna come from these big labs. They're gonna come from these little labs. And we've seen this already with DeepSeek in China, which was put in a tough spot and then innovated incredibly in the memory architectures and a couple other places to really get amazing results.

And so we really believe we're going to get the most progress collectively. The second thing, inside this piece, is, you know, this is a classic: I believe these models are going to be commodities. And you want to commoditize your complements. Yes. And we're in a unique position strategically where our products are made better through AI, which is why we've been investing for so long. Whether it's recommendation systems in what you're seeing in feed or reels,

whether it's simple things like what friend do I put at the top when you type you want to make a new message, who do I think you're going to message right now, little things like that to really big expansive things like, hey, here's an entire answer, here's an entire search interface that we couldn't do before in WhatsApp that now is a super popular surface. So there's all these things that are possible for us that are made better by this AI

but nobody else, by having this AI, can then build our product. The asymmetry works in our favor. Yeah, of course. And so for us, like commoditizing your complements is just good business sense, and making sure that there are a lot of competitively priced, if not almost free, models out there helps the entire industry.

Helps a bunch of small startups and academic labs. It also helps us. Yeah, you as the application provider are a huge fan of this. So we're all super aligned. Yeah, you're looking good. Business model alignment and industry alignment. It's a strong alignment there. Yes, yes. So it comes from both this fundamental belief in how this kind of research should be done and then aligns perfectly with our business model. And so there's no conflict. Yeah, societal progress plus business model alignment. It's all together. It's all great.

It's all going the same direction. That's awesome. It's great. I want to shift gears to talking about the impediments to progress and like what you think, you know, are kind of linear versus not. So,

The risks to the vision, to the overall vision that you articulated, obviously hardware, AI capabilities, vision capabilities and screens and all that, resolutions. We talked about the ecosystem and developers and native products. So maybe just talk about what you see are kind of the linear path things and the things that may be harder or riskier. We have real invention risk. There exists risk that the things that we want to build

We don't have the capacity to build as a society, as a species yet. Yeah. And that's not a guarantee. I think we have windows to us. You've seen Orion, so like it can be done. Yeah, there's, yeah, it feels like it's a cost reduction exercise. It's a materials improvement exercise, but it can be done. There is still some invention risk. Far bigger than the invention risk, I think, is the adoption risk.

Is it considered socially acceptable? Are people willing to learn a new modality? Like we all learned to type when we were kids at this point. We were born with phones in our hands at this point. Are people willing to learn a new modality? Is it worth it to them? Ecosystem risk, even bigger than that. Like, great, you build this thing, but if it just does like your email and reels, that's probably not enough. Do people bring the suite of software that we require to interact with modern human society to bear on the device? Those are all huge risks.

I will say we feel pretty good about where we're getting on the hardware, on acceptability. We think we can do those things. That was not a guarantee before. I think with the Ray-Ban Meta glasses, we're feeling like, okay, we can get through. You feel like the acceptability. Humans will accept that I'm using technology. Within that, there are super interesting regulatory challenges: here I have an always-on machine that gives me superhuman sensing. My vision is better. My hearing is better. My memory is better. That means when I see you,

a couple years from now, and I haven't seen you on the internet, I'm like, "Ah, God, I don't remember that guy. We did a podcast together. What's that guy's name?" Can I ask that question? Am I allowed to ask that question? - Yes. - What is your right? It's your face. You showed me your face. - Yeah. - And if I was somebody with a better memory, I could remember the face. So, like, that happened, but I don't have a great memory, so am I allowed to use a tool to assist me or not? So there's really subtle regulatory privacy, social acceptability questions that are, like, embedded here that are super deep individually.

and can derail the whole thing. Yeah, absolutely. Easily derail the whole thing and slow progress. That's the thing. I think we sometimes think in our industry, it's like Field of Dreams: if you build it, they will come. No, a lot of things have to happen right. Well, you can also overstep, too. That's the risk. Sure, you can get your hands slapped. Great technology can get derailed for long periods of time. Nuclear power got derailed. Yeah, for absolutely stupid reasons. For 70 years,

for bad reasons. We know better now. And it was like, they just played it wrong. Yeah, of course. And they were like, ah, ignore this. It's like, no, these people actually feel this way. So I think, yeah, I feel pretty good with the invention risk. Acceptability risk is looking better than it has been. But like, I think there's still a lot of big hurdles to cross there. I actually think the ecosystem risk was one I would have said previously was the biggest one.

But AI is now my potential silver bullet there. If AI becomes the major interface, then it comes for free. And I will also say that we've had such a positive response from, even just setting aside Orion, even with the Ray-Ban Metas,

companies that want to work with us and build on that platform. It's not a platform yet. There's so little computing. There's so little computing. We literally don't have any space yet. But we did do a partnership with Be My Eyes, which helps blind and low-vision people navigate, and it's really spectacular. And so there's a little window there where we can start building. So yeah, I would say the response has been more positive than I had expected. So everything right now, tailwinds abound. Right now, and to be honest, after eight years of headwinds, it's been more like nine years of headwinds,

Having a year of tailwinds is nice. I'll take it. I'm not gonna look at the face. No victory laps, yeah, but that's good. But it's all hard. At every point, it could all fail. I like that you just started with it's invention risk. There's many ways this just won't work. Yeah, that's right. Even if it does work, it might not take. Well, I'll say two things about this, and this is where Mark just deserves so much credit, is we're true believers. Like, we have actual conviction. Mark believes this is the next thing

It needs to happen, and it doesn't happen for free. Like, we can be the ones to do it. Our chief scientist, Michael Abrash, who's one of my favorite people I've ever gotten a chance to work with, he talks a lot about the myth of technological eventualism. It doesn't eventually happen. There's a lot of people in tech who are like, yeah, AR will eventually happen. That's not how it fucking works. AR is a specific one that would just absolutely not. You have to stop

and put the money and the time and do it. Somebody has to stop and do it. And that is the difference. The number one thing I'd say is like the difference between us and anybody else is we believe in this stuff in our cores. This is the most important work I'll ever get a chance to do. This is Xerox PARC level new stuff where we're rethinking how humans are gonna interact with computers.

It's like J.C.R. Licklider and human-in-the-loop computing. We're seeing that with AI. It's a rare moment. It's a rare moment. It doesn't even happen once a generation, I think. It may happen every other generation, every third generation. Like, you don't get a chance to do this all the time. So we're not missing it. We're just like, we're going to do it. And we may fail. Like, it's possible. But we will not fail for lack of effort or belief. Great. Thanks a ton, Boz. Cheers. Yeah, cheers.
