Every person: superhuman vision, superhuman hearing, superhuman memory, superhuman cognition. That is the vision that we have for these wearables. A tremendously equalizing technology.
Who has incredible memory, incredible vision, incredible hearing? Some people are born with those talents. The vagaries of the bell curve and the ranges of human capabilities go beyond just the demographic ones. They exist in terms of fundamental capabilities. And how interesting will we be as a society if everyone has full access to those faculties?
Hi, I'm Reid Hoffman. And I'm Aria Finger. We want to know how, together, we can use technology like AI to help us shape the best possible future. We ask technologists, ambitious builders, and deep thinkers to help us sketch out the brightest version of the future, and we learn what it'll take to get there. This is Possible.
It's no surprise that innovations in tech are changing the way we interact with each other. But now we're living in an era where devices and programs are changing the way we interact in the physical world. And they're doing so through advancement in augmented reality, virtual reality, and of course, AI.
The future is already here, and it is quickly being distributed more evenly. And while we talk a lot about software, this is happening increasingly with wearables, namely electronic devices designed to blend seamlessly into our everyday lives, like watches, clothing, and glasses. However, these aren't just fun gadgets, but devices that could refactor how we navigate our world. Ours is soon to be a world where glasses don't just help you see...
but where they enhance what you see. Now, add AI to the equation. The glasses could potentially guide you, talk to you, and troubleshoot your problems while keeping you rooted in the physical world. The possibilities and applications in this space continue to grow. One of the key figures shaping the future of this technology is Andrew "Boz" Bosworth, Meta's chief technology officer and the head of Reality Labs.
Boz has spent nearly two decades at the forefront of digital innovation. He joined Facebook back in 2006, where he developed pivotal features like News Feed, Messenger, and Groups, functions that played a crucial role in transforming Facebook into the tech giant it is today. But his influence doesn't stop there.
Boz's journey in tech runs deep, from his early days as a teaching fellow for an AI course at Harvard, to working on Microsoft Visio, to later founding Meta's AR and VR division, which became Reality Labs. Today, as Meta builds towards the metaverse, Boz is focused on the next frontier in mixed realities. His latest project, Orion, a next-generation pair of AR glasses still in development,
aims to revolutionize how we interact with the digital and physical worlds while striving to make this groundbreaking technology accessible to a wider audience. In this episode, we'll be diving into the intersections of Boz's career, Meta's ambitious AI hardware projects, and the broader implications of having AI and mixed reality in our day-to-day lives. So let's get into it. Here's our conversation with Boz.
So you were part of the California program where you raised livestock and essentially grew up on a farm, which, by the way, I went to a high school on a farm, so I'm very sympathetic. I'm curious, how did that experience influence the person you are today?
People are often surprised by the fact that, you know, I grew up on a farm, and my family's been farming from way back and is still farming to this day. But actually, if you know about farmers, there are three things that are important to know about them. Number one is they're governed by time: daylight and seasons. They have X number of daylight hours in which to get the work done, and they've got to get it done because the seasons are moving forward. And that forces two other things. The first one is every one of them is an engineer.
You got to fix that tractor now because you got to get that crop done. You got to mend that fence today. You don't have time to be dealing with getting these livestock back in the pen. So they're all engineers, not scientists. I mean, real, like get it done with what you have on hand kind of engineers. And the second one is they're entrepreneurs. You know, my family runs a horse ranch. Well, a horse ranch means you got a lot of manure. Well, you got
Two ways to handle that. One is you can pay someone to pick that manure up and haul it off, and that's going to cost you money. The second is you can market that manure as fertilizer and make a little money. That's a two-for-one swing. And so for my cousins who are still farming and my uncles who are still farming, everything is either a cost or an opportunity. And the margins are slim, so you've got to make those opportunities work. You know, in my experience, at least, it's not as big a stretch as people might think.
But I do want to shout out National 4-H Organization and California State 4-H. I learned how to program in 4-H. It's not just cows and cooking, as we like to say. The first person who taught me to program was a fellow 4-H-er and got me into computers that way. So it's a great program, not just from an entrepreneurial and engineering standpoint, but also directly programming computers. Was there anything in addition in the kind of farming that also shaped how you think about the interface between the digital and physical worlds?
Well, it's interesting. I think the point that humans are busy, they got a lot of things on their plate and the tool either has to work for them or it's just not worth it. I really think that when we think about how we build these tools, a lot of times we're beset by the tremendous value that we see, especially in our industry. We see how great it could be. And this isn't new. This goes back to Douglas Engelbart, right? Who invented the mouse and
to some degree, his vision at SRI failed because he had such a complex idea. He wanted humans to do a ton of work to get skilled enough to unlock the full power of the machine. And my understanding is, even in his retirement, you know, late in the 90s, he lamented the fact that we took the easy exit. We took the computer mouse and ran with it and just did this point-and-click stuff, whereas he wanted to replace the keyboard with the chorded keyset. He had all these ideas.
I think that's a lesson we keep relearning, which is like, it has to be so easy. Like the big pot of value at the end is great, but it still has to be so, so easy to get at it that you can lead people down that path. If they have to take the course to learn how to do it, they're not going to do it.
It doesn't matter how valuable it is. And that's nothing to do with the tool; it's just how humans are. But that's true in a wood shop. That's true in an auto shop. The tool that you can pick up and go, yep, I see it, I get it, I use it, it works: that's the tool people reach for time and time again, not the super elaborate, complex one. I think about that all the time. That was the truth growing up on a farm: you just had to get it done. You didn't have time to be trying to learn a new thing. You had to get it done.
So I love that way to start because when people think about technology, at least lately, they think so much about the digital world and it feels like there's nothing more real world than like working on a farm and using tools to fix your equipment and making sure the cows get milked and sort of all that stuff that's in the real world. But we're in this new era, this new ecosystem of device wearables and smartphones and of course glasses. And so how do you
think about this new world, where we're actually going to navigate and engage with the physical world through digital in the way that we used to interact with digital spaces? Yeah, this is the construct of the metaverse, which I think has been pretty broadly misunderstood, or at least understood differently by different people: this idea of blending the digital and the physical together. Actually, let's keep with the farming thing because it's great. Farmers are pioneering some amazing work.
Autonomous driving is not as far along as autonomous farming is. Tractors that are able to plow the fields automatically, drones that are out there doing reconnaissance on which fields need which kind of treatment. That is actually pretty advanced technology. And it really is very much about blending the physical and the digital capabilities together. And so we had this amazing explosion with the internet and software. And I think we took software as far as it could go
within the construct of a phone, a laptop. It's like we just took it to the absolute limit. And what's exciting me now is these really physical manifestations where through advanced sensors, whether it be audio mechatronics, whether it be drones or robotics, automation, and then ultimately, I think through wearable devices, we're getting to a new plateau of hardware.
that can further allow software to breathe and expand. AI is such a fun example because it's so in vogue right now. And if people have had a chance to ignore all the hype and really try to use these tools in their daily lives, there are some areas where it's mind-blowingly useful. You know, I'm doing a little home automation project and I'm debugging, and there are these obscure, you know, Internet of Things devices that have unlisted APIs. And it would take me a long time to...
build a fuzzer to discover all those, man, you can do it in minutes with these tremendously useful AIs. Of course, there's a ton of things that they're not good at, but I still find the interface to them very awkward where I'm going to my phone or I'm going, whether it's voice or text, it's this very transactional thing. And what's so funny, I find myself
I'm like the cut-and-paste machine suddenly. Like, I'm doing a coding project here. I'm like, okay, I've got to cut this result, the debugger output, into here, and then it gives me the answer. I'm cutting and pasting back into the thing. This should all be integrated. So let's broaden out to AI generally. I mean, I'm enough of a geek about the interface between the digital and the physical world and how that transforms what it is to be human, Homo techne, that I could spend the entire discussion on this, which would be amazing.
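As an aside, the endpoint-discovery idea Boz alludes to, fuzzing a device to find unlisted APIs, can be sketched very loosely. Everything here is hypothetical: the path segments are invented guesses, and the probe function, which in practice would issue HTTP requests against the device, is injected so the enumeration logic stands alone.

```python
from itertools import product

# Candidate path segments an IoT device might expose; purely illustrative guesses.
SEGMENTS = ["api", "status", "config", "power", "v1", "toggle"]

def candidate_paths(max_depth=2):
    """Enumerate candidate endpoint paths up to max_depth segments deep."""
    for depth in range(1, max_depth + 1):
        for combo in product(SEGMENTS, repeat=depth):
            yield "/" + "/".join(combo)

def discover(probe, max_depth=2):
    """Return every candidate path for which probe(path) reports success.

    A real probe would do an HTTP GET against the device and treat any
    response other than 404 as a hit; here it is passed in as a callable.
    """
    return [p for p in candidate_paths(max_depth) if probe(p)]

# Example against a fake device that answers on two hidden endpoints.
fake_device = {"/api/status", "/api/toggle"}
found = discover(lambda p: p in fake_device)
print(found)  # ['/api/status', '/api/toggle']
```

In practice you would rate-limit the probes and widen the segment list from observed firmware strings, but the brute-force enumeration is the core of it.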
But there's also AI more broadly. So what's Meta's view of AI and of how it's used in products out in the world? What is the AI philosophy? So there are three layers to this in my mind. The first one is we're building these very exciting models.
And I find myself actually sometimes at war with both sides of the AI discussion. I have a profound belief that these are hugely important, meaningful things that will meaningfully advance human capability. I kind of liken it to a word calculator, right? In the year 1960, a calculator was a person
In the year 1970, it was not a person anymore. It was a different thing. And at first they, like, banned them from school and got rid of them. And I grew up in a "you won't have a calculator with you at all times" era. I have three calculators with me at all times. My high school teachers were wrong about that. I think of the AIs that we are building as word calculators. I really mean that image: they're really complex calculators that have moved beyond the simple symbolic space of mathematics into this higher-order space.
I also don't think it is even the kind of thing that is human intelligence as we understand intelligence and agency and consciousness and thought. And so I have to fight both sides. Like AI is both a huge deal and also not that kind of a huge deal. So that's like my first kind of belief about it, which gives me tremendous confidence in how I use the tool. The second one is.
We're running into the information-theoretic limits of it. If you go all the way back to, like, Norbert Wiener and his cybernetics, the first, you know, constructs of information theory. Hi there, I'm Pi, and I'm here to add some context. Norbert Wiener's construct of information theory can be explained in simple terms as the study of how information flows and is used to control systems, including mechanical, biological, and social systems.
His work on feedback loops and data processing laid the foundation for modern control systems. This idea of how many bits can we pull out of something that are sufficiently generalizable bits. And we're finding out that for all the corpus of human media ever produced, it's not enough.
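The "how many generalizable bits" framing can be made concrete with Shannon entropy, the standard measure of information content in bits. A minimal sketch, with invented example strings:

```python
from collections import Counter
from math import log2

def entropy_bits(message):
    """Shannon entropy per symbol, in bits, of a message's empirical distribution."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A uniform four-symbol alphabet carries exactly 2 bits per symbol...
print(entropy_bits("abcd"))  # 2.0
# ...while a highly repetitive message carries far less.
print(round(entropy_bits("aaab"), 3))  # 0.811
```

The point of the passage is that even an enormous corpus can be information-poor along the dimension you need: video of a hand on a cup contains almost zero bits about applied force.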
It's not enough. We're finding that in robotics. Robotics is an effort that we kicked off recently inside of Meta, in partnership, as kind of an adjunct to our Llama program. And no matter how many videos you have of somebody grabbing a coffee cup,
you're actually not getting the data you need, because you don't know the proprioception: how much force is applied, and how we detect, okay, this is a plastic cup, it's going to deflect to a certain point, and there's condensation on it, so I need to apply a little bit more force to counter the loss of friction that I'm experiencing.
We do that autonomically. There's not a single conscious thought in our head when we're doing those things. Aria, when you're taking your phone out of your pocket, you don't know what the angle of your second digit is or how much force you're applying with your thumb to avoid getting the keys. Yeah. To some degree, the things that we think of as intelligence, we're talking about the higher order functions of the human brain. That's arguably the less impressive part of intelligence. The deep brain, the mammalian, the amygdala, that lizard brain intelligence, that is...
wildly hard for us to capture in the modern era. So as much as I'm excited about the word calculator, I really do believe in Yann LeCun's vision that you have to do this pioneering work to break through to a world model that has common sense, that understands causality in a more substantial way, not in a statistical-soup kind of way, but in a model-based way. And then my third layer of this stuff is I want it to be embodied.
It almost goes back to J.C.R. Licklider, who was one of the first computer scientists to sit down at a terminal and do live programming. And he believed in that vision. And he really was the one who then funded what would become SRI, found Doug Engelbart, funded a bunch of DARPA research. He was at the IPTO at DARPA.
I feel like we're in that era. We're in the terminal era of AI, and it doesn't want to be like that. It wants to be everywhere. It wants to be ubiquitous. It wants to be in full context of what your life is, with the history of who you are and what your life is. I think of Douglas Hofstadter's book, I Am a Strange Loop, and how we all have little mini versions of each other's consciousness, simulations of each other's consciousness, running in our brains that allow us to collaborate effectively.
And my AI obviously doesn't have that. It has no idea what I want, what I'm about. It can't infer anything from context. So for me, like, I love where we are. I am a huge believer. I also want us to invest in these world models, and I want to free it from the terminal. And are the world models and the freeing from the terminal fundamentally new, different technologies from the kind of scaled transformer? What's the looking-through-a-glass-darkly thinking about how this would be done?
I think the embodied part can do both. The embodied part will benefit a lot and maybe help a lot with world modeling. Once you have these sensors out there and you have better, richer data, when you have robotics data, which will give you proprioception, which will give you friction, like I think that is going to be a big unlock. It benefits the current models a huge amount to get that data and to be in that context. And also it probably is some of the data that you need to start to understand what it takes to build a world model, which we appear to be born with.
Tell me about the Orion project. I had the pleasure and honor of coming by, you know, MetaHQ and playing with it and getting some of the detailed exposures. Tell me about like kind of what you see as the significant use cases and where the metaverse is moving towards with Orion.
So I've got the Orion glasses on right now. I've also got this wristband, this neural interface wristband. It's got these little kind of metal bumps on the back of it, all down the band: EMG sensors, electromyography. They're measuring electrical impulses going down my hand. And what I'm able to do: I've got a screen in front of me, so I can do my email. I can do Instagram.
We have little games that you can play. I have been caught playing the games in meetings before. They weren't my meetings, to be fair; I was just listening in. But I have been caught. And I can do it with my wrist at rest, using a small number of gestures, and I'm using eye tracking to direct things. And so we had to do a lot of tough problem-solving on photonics and optics.
Some of this stuff we understand: we understand how to build apps, we understand how to do these things. And there's some novel interaction design here with the wristband and eye tracking, but I think we've made that pretty straightforward. Doing the neural interfaces was super hard. You're building an AI model
of what the hand is doing, based on these electrical impulses that you're able to observe from the surface. And so you just need a lot of people to build a generalized model that works, so that anybody can put this on. And I think we had success with you, Reid. Like, I think we're well into the 95th percentile of people who can put this on and, right away, we know what shape the hand is in. And that allows us to do these gesture-based controls, even with your hands in your pocket or behind your back.
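For the curious, the kind of model Boz describes, mapping surface-EMG signals to hand shape, can be caricatured with a nearest-centroid classifier over per-channel signal amplitudes. The channel count, gestures, and every number below are invented for illustration; Meta's actual model is far more sophisticated.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one channel's sample window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def features(channels):
    """One RMS feature per EMG channel."""
    return [rms(ch) for ch in channels]

# "Training": average feature vectors per gesture, from hypothetical recordings.
CENTROIDS = {
    "pinch": [0.8, 0.1, 0.1],
    "fist":  [0.9, 0.9, 0.8],
    "rest":  [0.05, 0.05, 0.05],
}

def classify(channels):
    """Assign the gesture whose centroid is nearest in feature space."""
    f = features(channels)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return min(CENTROIDS, key=lambda g: dist(CENTROIDS[g]))

# Three channels, a few samples each; strong activity on channel 1 only.
sample = [[0.7, -0.9, 0.8], [0.1, -0.1, 0.1], [0.0, 0.1, -0.1]]
print(classify(sample))  # pinch
```

The "lots of people" point maps onto the centroids: with enough users in the training set, the per-gesture averages generalize so a new wearer needs no calibration.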
We need these to be regular glasses first and foremost. When they're powered off, I have to be able to see your eyes, and you to see my eyes. That human connection we have is important. Otherwise, I wouldn't use the glasses.
You have to do all of this in a comfortable, all-day wearable form factor. It's a lot of challenges. You know, this was 10 years in the making. And we thought when we started this program that we had less than a 10% chance of being able to build it. So the fact that it exists is a true testament to a vision that Mark Zuckerberg had, and that the research team, Michael Abrash, our chief scientist, and his team had for a long, long period of time. Now, we always thought, to your point about the metaverse, that the first thing that would happen would be these holograms in the world. And at first, they'd be just...
referenced to you. So this is your personal interface. And then over time, they would be attached in the world. And then eventually, you would have an AI that was doing it. What's been surprising to us is the AI came first. We had the sequencing wrong; the AI showed up earlier than expected. So what's been so exciting is there are a lot of products now that are totally valid products that are easy to use, between full AR glasses, which are spectacular but will be expensive,
and the Ray-Ban Metas, which are super affordable but a little bit more limited in their functionality. That entire spectrum is now open to us, and it's really, really exciting. And we have demos internally in our wearables. With the Ray-Ban Metas that are in market today, through our early access program, you can use this tool called Live AI. It's a 30-minute session until the battery runs out, because it was kind of bolted on after the fact. But for 30 minutes, it can see what you're seeing. It hears what you're hearing.
And the difference in how useful it is to me as I go about my day in the world, I was doing a film development project and film developing is a real fussy business with what kind of chemical and what temperature for how long.
Usually I'm doing it with like a laptop next to me and I'm trying to type into the laptop. Okay, what's the thing? Doing it with the live AI session was incredible. It just sees what you're doing. It's like, okay, you got 20 more minutes of doing that. And then you're going to do this other thing. And why don't you go ahead and prepare this now? It's stunning. So for me, it is about blending the physical and digital together in both directions. You have to give the AI access to the physical context in which you operate and the digital context in which you operate on your desktop or on your phone.
And conversely, you want to be able to bring those digital constructs into reality through robotics, through automation. So I see really a very, very exciting kind of decade ahead of us in that synthesis. So I am someone who sees 90% of my vision out of one eye. Will I be able to use these?
Yes. So these are binocular. You'll experience the same limited depth perception that you would experience normally. That's right. But that's what I have now. Yeah. So you'll still be able to use these. Okay. Over time, I do expect, well, one of the real option spaces here is: what if you go monocular? What if you just had the display in one eye?
Yeah. And there's some challenges there. It creates some binocular rivalry. So for people who have full vision in both eyes, they might struggle sometimes to know, hey, my eyes are seeing different things looking at the same space. Where should I look? Yeah. You would actually be better off in those displays. I love it. Can't wait.
I finally have a superpower. You would have a little superpower. That's right. You'd have a little advantage, in that monocular displays would probably be great for you: cheaper, lighter, and probably just as good. So I think you've got maybe a little inside opportunity on the rest of us. I love it. I love it. Okay. So for some of the skeptics out there, I feel like this is an especially good question for you, because you were essentially the person who created the News Feed. And for the youngs among us who might not remember this, the News Feed came out and everyone was like, well,
oh, this is horrible. What are we doing? And now we're like, oh, a newsfeed. This makes so much sense. This is perfect. We can't imagine a world without it. And so some people might be saying the same thing. They might be saying, why are you creating these things that we don't need? Like, what would you say to the skeptics? And then also, as you see people using it, what do you think some of the mainstream adoptions are going to be?
Things like AI are different kinds of things. These are different types of innovations. These are truly new tools. This is a new space. The smartphone was kind of a disruptive thing, where it was like, hey, you have a phone and you have an iPod and you like the web; we're going to put all those in one place. That was the pitch. This will be similar to that. This will be like, hey, look, you already have a phone that you like doing this on. You already like Instagram. This is a better way of doing a thing that you already do. And then once you've established that beachhead
through comfortable paths, then the huge opportunity presents itself. You know, you talked, Aria, about having superpowers. That is the vision that we have for these wearables. Every person: superhuman vision, superhuman hearing, superhuman memory, superhuman cognition.
That's what I'm talking about. That's what I really believe is going to happen here. A tremendously equalizing technology. You know, we talk often in our society today about the vagaries of the birth lottery, and rightly so. We usually talk about the birth lottery in the context of race and gender or physical geography. And those are hugely important factors in how our lives play out.
But we don't talk about the other parts of the birth lottery: who has incredible memory, incredible vision, incredible hearing. Some people are just born with those talents: incredible ability to think creatively, to pattern-match. And...
There's no reason we couldn't all be Garry Kasparov at chess if we have the wearables. You know what I'm saying? Now, it probably takes the fun out of chess, so I'm not recommending it in competitive play. But my point is, like, the vagaries of the bell curve and the ranges of human capabilities are beyond just the demographic ones. They exist in terms of fundamental capabilities. And how interesting will we be as a society if everyone has full access to those faculties? That is the future I see. And I think that's a pretty compelling pitch to people, to get them to try a new thing.
Exactly. And I'm going to take this moment to ask you a question that I hate getting, so it's kind of entertaining to be the asker of it, which is, you know, if you're predicting out, call it three years, you know, and with kind of wearables and AI and everything else,
What kinds of things do you see in the future that will be the kind of thing people will be doing? Is it kind of like, it's giving me a constant scan of my life, it's doing proactive search: oh, you're looking at film developing, or you're looking at this thing, let me tell you some stuff about this? What is your future prediction that might cause, call it, semi- or non-technologists to go, oh my God, that's coming in three years?
Yeah, I think what's interesting, the hard part of this question, is the timeline of it. I think you and I would probably do better with one year and 10 years. Yes, three years is the most awkward time frame, because, you know, I know one year, and I have a good sense of 10 years; three years is tough. I don't think we'll be at the proactive place in three years quite yet. Early adopters will probably have always-on systems that are capable of it.
The degree to which it's going to be reliable enough for the average consumer, I think we're probably a little further out from that, from a world-modeling and cognition standpoint. But for a decent portion of people, tens of millions, we're going to have people who are in regular conversation with AIs. Everything from...
I cook all the time. I'm the cook in my family. Hey, like, you know, how many quarts is it in a gallon? What's the conversion, tablespoons to ounces of water? Hey, what did my wife say I need to get at the grocery store? Remind me, she told me yesterday. Hey, when I opened the fridge, did we have that
cheese that I like, you know? That's the best one. And there's stuff that I really want to push for, but we have to advocate for it regulatorily. I mean, I think one of the classic ones is the cocktail party problem. And Reid, you must run into this all the time. You know, you see somebody coming up to you, you recognize them, you know you know them. You don't remember why you know them or how you know them.
And right now we're just like, hey, guy, good to see you, you know, friend. And you're just like looking for clues to try to remember. I'd love to be able to have your AI whisper to you like, hey, you know, this is this person. The last time you saw him was here. You know, oh, right, right, right, right.
and that kind of thing. But that one we need help with, right? Right now there are regulations in Illinois and Texas, BIPA and CUBI, that make that kind of thing tenuous at best, if it's doable at all. So there's a bunch of very human problems where we could be a little bit more proactive, and I think comfortably so, in a totally privacy-safe way. We've got to do work. So three years from now, I do think you'll be in this kind of, people at the leading edge
But not the earliest adopters, not the bleeding edge. Just the leading edge, the early adopters, will be getting tremendous usefulness, kind of cognition and memory help, from their assistants. And where do you see the panoply of wearables? Like, are the glasses going to work with a watch, work with a phone? Or will there be some, well, once you have the glasses, you don't need the watch as much anymore? No, I think they want to work together. I mean, we probably are...
more than 10 years away from having the efficiency of compute from a thermal perspective to have the glasses stand alone.
I think for the foreseeable future, you do want this to be a constellation of devices. And there's a lot of value in being on the wrist. The neural interfaces that we've developed here show that there's a tremendous amount of incremental signal and control we can give consumers without having to hijack their eyes or make them reach up and tap something on their temple arm. I think there's also a huge opportunity for the range of glasses. So if you have glasses that have no display, you want to pair them with a watch that has a display or with a phone that has a display.
If your glasses do have a display, okay, now maybe you just have a simpler band on your wrist and you're wearing maybe a conventional wristwatch on your other wrist. We're in this space of wearables. So one thing that we actually haven't had to grapple with in the industry for a while is the luxury presentation, the identity that people want to bring to the world about themselves.
They want to look a certain way. And so if I'm somebody who wants to look a certain way, I have to have the options to maintain that vision of myself while also being a part of the modern era. It's funny. There's been a lot of discussion about the app stores on Google and Apple devices. I really don't have a problem with those. That's not the issue I see. What worries me more is the degree to which these devices, which are really
the natural center of compute for a constellation of wearables, are locking down the access of third-party wearables manufacturers to things like critical Bluetooth channels. The famous example that I'll use is the AirPods. There are better headphones you can get than the ones manufactured by Apple. AirPods are great. It's a great device.
They have 70% of the market share, and they don't have that because they have the best product. It's really not the best-value product. They have it because they have a proprietary Bluetooth channel that makes it super easy to pair. And also because, I think, they're easy to lose, so you have to buy a lot more. But it really bothers me. Like, that shouldn't be that way. And I'm like, listen, I'm a nineties guy. So I'm an old-school, you know, I'm an old-school computer construct guy.
Right. Absolutely. I mean, I think about that all the time. I bought non-AirPods, but they were just too hard to use. And so I was like, forget it, this is not working for me. One thing you said earlier is that you, you know, you tell your team over and over again, like, which human's life will be better? Like, what human on this earth will be using this? So can you talk about a time, it could be for Orion or a different product, where you really saw, like, user testing got you, ah,
this is the aha, you saw something interesting, it made you make a new leap? It's just such a great way to operate. There are quite a few, but I'll tell you one that surprised me recently, which makes total sense in retrospect. One of the most popular demographics purchasing our Ray-Ban Meta glasses is blind people. Uh-huh. If you watch them in the user research sessions...
It makes total sense. And we have, you know, members of the team who are blind. We have a partnership with Be My Eyes, which is a great service. You know, in my head as a seeing person, I have no problem. I'm like, oh, I wonder if they have problems navigating to the restaurant or getting across the street. They don't. They have solutions for that. They've got Google Maps. It's reading directions into their ears. They've got a stick or a dog or they, you know, there's a bunch of systems they have. They can get to the restaurant. You know what they can't do? They can't find the door.
And so what they do is they ask the glasses, "Hey Meta, look and tell me where the door is." And it's like, oh, okay, it's to your left. And if they can't get that kind of thing done, they can call in to Be My Eyes, and now they've got a live video stream going to an AI agent at first, and it fails over to a human who helps them out. And so there are these moments where you know it's going to happen, because you yourself are a human, and you're like, yes, I as a human am like other humans and I would want this, so other humans will want this.
But there are also these really fascinating times where you build the capability, and this stuff comes out of the woodwork that you never saw coming. On this podcast, we like to focus on what's possible with AI, because we know it's the key to the next era of growth. A truth well understood by Stripe, makers of Stripe Billing, the go-to monetization solution for AI companies. Stripe knows that when launching a new product, your revenue model can be just as important as the product itself.
In fact, every single one of the Forbes top 50 AI companies that has a product on the market today uses Stripe to monetize it. See what Stripe can do for your business at Stripe.com. And so obviously, Meta, with Yann LeCun, you guys are leaders in the AI space. What do you think are the things that additionally set you all apart? And what are you going to be focusing on for the years to come?
Well, we're pretty proud of our open-source stance with Llama. I think we were one of the earlier ones. In my opinion, some of it is strategic benefit: if anyone builds great AI, our products get better, but people building great AI doesn't let them replicate our products. So we have this asymmetric benefit from AI. There's a strategic "commoditize your complements" construct here.
But it really is deeper than that for us. And if you've spent any time listening to Yann, you've heard that we really think the best way to accelerate progress is to open source these things: people learn from them, and you get back tenfold. You know, when we launched Llama 1, I think it was a matter of days before somebody had a version of it running on a laptop, and a matter of weeks before someone had a version of it running on a phone.
Just from a resource standpoint, we weren't going to do that, though I'm sure we had the talent. It would have taken us years to go do it, because we just had other things we were doing. And so, wow, what a spectacular closed loop to have stumbled into with this powerful policy.
Look, it's great when the open-source stuff goes to startups and entrepreneurs and academics and people building stuff. It's more challenging if it goes to rogue nations, terrorists, criminals. What's the way to navigate that, making sure more of the innovation benefit happens in that loop, and less of it with North Korean hackers holding hospitals ransom or other kinds of things? Yeah, for sure.
Yeah, well, there's two parts to this. I mean, again, getting back to this kind of asymmetric construct of AI: information wants to be free. And I have absolutely zero faith that the most closed-source thing that exists isn't actually widely available in the nation-states that are inclined to attack us, whether through espionage, directly or indirectly. I believe that's likely the case.
But setting that aside, I still stand by this, because I think if you try to handcuff yourself to slow the progress of your enemies, the far bigger risk is that you just get lapped by your enemies. The people we're discussing, especially China, are highly capable. They have a tremendously talented pool of engineers, and they're looking at the same things we're looking at. It is the race. This is our space race; this is what it looks like in our era. There are very few secrets, there's just progress, and you want to make sure that you're never behind.
I agree, by the way, with your two counterpoints as important points, Boz. But I do continue to linger on this. I get the "look, we just accelerate, and we try to accelerate past whatever the bad actors might be doing." But it is also important, to some degree, to slow, contain, and limit bad actors, and I tend to think there must still be some things we can do. So I'm curious what your thought is about navigating that.
Yeah, it's funny. A lot of people in our space, Reid, have had this conversation around safety, and I think it's an important one. You know, the ones that come up most often are bio, cyber, and nuclear. I think there is a model there with bio: the knowledge exists. And a lot of times the threats that I hear ascribed to AI fail the Google test. Can I Google for this thing? And very often I can, in which case the AI isn't really the threat.
The information isn't the threat. It's the fact that you can mail-order these things.
And I have some friends and family who work in bio who are pretty consistently alarmed at what they're able to acquire for their labs without any kind of control. So I think there are regulatory solutions in the bio space. On the cyber side, I actually feel way more optimistic about AI's ability to detect cyberattacks than to generate them. Of course, it will generate more, but I think we have been struggling on the detection side, and there's a lot of evidence for that if you look at
what nation-state actors have been able to do to the U.S., with the OPM hack, with the hack on some of the crypto...
I think AI is a much more asymmetrically valuable tool in defense than it is in attack, and I think we have been on the wrong side of that for a little while. So that one I'm more bullish on. To tell you the one that I'm actually the most worried about: it's not any of those. It's fraud. Good old-fashioned fraud. You know, I've already had the conversation with my parents: hey, if somebody calls you and it looks like me and it sounds like me, but they're asking you for money, ask about a fact that only I would know.
Right. There's a real education that has to happen. I talk often with people about this one, and it's a hard one to wrap your head around. But I have to remind people that the period we grew up in was actually very unusual historically. Before the photograph and before video, all media was presumed to be possibly fake. Letters, newspapers: you didn't know the veracity of them.
There was a very unique period, one that will probably never happen again, where you could produce a piece of media, a photograph or a video, and it was impossible to imagine faking it. It was orders of magnitude more expensive to fake than to have it be real, and so these were presumptively true. That's not going to be the case anymore. So we're going to return to a pre-1900s relationship with media. The kids are already there, by the way. They already know.
We have a generation to look after. It's us, our generation, who didn't have the antibodies for that. So for me, at least, that's a piece of education I would put a lot of energy into as a national policy endeavor.
And so I think that raises a question that a lot of Possible listeners have about the new AI world and how it relates to information accuracy. Sometimes that's AI-created, sometimes that's user-generated. How do you think about this? Obviously, a lot of people have seen the news out of Meta and your changes to fact-checking and content review. What's the positive case for what you guys did, and how do you think about it? I think Community Notes is just a better feature
than fact-checking was. It works at a larger scale. Do you use an encyclopedia these days, or do you use Wikipedia? This is kind of my broader thought: I think it's not surprising for us as an American company, though it may be more surprising in other parts of the world, for me to say, yeah, people are allowed to say things, and believe things, that aren't true.
And I think we've all learned a tough lesson that the nature of truth, and what is true, is also not as firm as we'd like it to be. COVID taught us some tough lessons there. We have to adjust to the fact that we grew up in this relatively golden era where, by the way, the two political parties were effectively almost the same political party. We're exiting that period into what is, probably more than we'd like to admit, a normal period of American or global democratic upheaval, of tension
and different ideas about the future of where things are going. And the technology is playing a huge role in that. And I think education about these tools is hugely important. Going back to...
kind of the startup landscape. One of the questions I often get is: because of the importance of compute, size of data, etc., is AI a game that is only going to be won by the hyperscalers? One of the things I think is very helpful about what Meta is doing with the open-source stuff, this is in the positive category, is obviously making it a field where a lot more people can play. What do you think are going to be the areas where the hyperscalers, Meta and the others,
are going to be deploying a bunch of stuff? And what do you think are some of the range of interesting things for startups, and how will that play out? We're super thrilled about the role Llama has played in building up the ecosystem of startups and giving them a better shot to innovate. And we're seeing that really play out materially as hyperscalers are forced to take on innovations that came out of these little startups, and obviously vice versa is happening.
Listen, the wisdom of every generation, and you know this better than anyone, Reid, is that the big companies from the last generation are obviously going to win the next generation. And it almost never happens. And we never know why or how until it happens, and then we're like, oh, obviously, that's why and how. So I don't know why or how.
But I suspect there is a lot of room for truly disruptive technologies. Now, I will say, structurally, we actually know a lot more about the challenges the hyperscalers face. Google has a business model challenge, right? Like, are they willing to undermine and cannibalize one of the most successful business models, if not the most successful business model of all time? Boy, they've got the technology, the capability, they've got this tension that's tough.
It's easier for us. This is all gravy for us. All of our products just get better. It's all good news for us. Microsoft, I think, is actually in a similarly strong position. Their products get better; consumers who use Office products get a better product. Having all the AI doesn't make you able to build Office, but having Office with AI is better. So I feel like us and Microsoft,
kind of no matter what, with respect to the haters, I'm sorry to tell you, we're there. I think Google's got the tension. I think Amazon is somewhere in between. AWS certainly could be helped tremendously, but is it a race to the bottom where they're just adding one more incremental service? So maybe it's a no-op for them. They've announced their partnership with Anthropic; they have a huge investment in Anthropic. Alexa has a huge footprint. Can they rejuvenate Alexa with this new program?
Panos is there, and he's obviously a tremendous talent. So I'm rooting for them; I think it'd be great to have these AIs in homes and more interesting places. So ironically, we have a lot more visibility into the hyperscalers and the landscapes they face. The startups are a total wildcard, and that's what I love about them. They come out of nowhere.
I love that state-of-the-state and reality check. It reminds me of that meme that's like: yeah, I knew search was going to be big, so I invested in Yahoo. And then I knew smartphones were going to be huge, so I invested in Research In Motion. And then social was obviously huge, so MySpace was my biggest investment. You know, it's really hard to predict what's going to hit. That's right. And we underestimate the last mile. We underestimate the interface design. We underestimate the use cases.
Absolutely. So on this podcast, Reid always gets to ask the questions about science fiction, and I'm always woefully behind. I would make a wager that since you have young kids, you've probably either seen The Wild Robot or read the three books. And so I'd love to ask you a question about your thoughts on it.
Of course, yeah. You have this robot who has empathy, who is, you know, talking to the animals in the forest. Just talk to me about The Wild Robot. Do you think it's a positive depiction of AI and robotics? Negative? A warning? How do you see it? I'm sure your kids have seen it, and that's one thing that's shaping their vision of robots. First of all, you know, I'm a huge film fan. And I cried like three or four times when I watched that movie.
And I watched it again and cried again at the same points, which is unusual for me. I'm a bit of a crier, and I've got no problem with that. But no, it was a touching film, and really a film not about robots; really a film about motherhood, about parenthood. And so, touching on that, if I'm being a critic here:
I'm not exactly sure how Roz's "hey, let's all get along and not eat each other" works out for more than one season, when the carnivores need to eat things. Kind of unresolved in the film is how exactly the carnivores survive in Roz's brave new world that she's trying to craft for them.
So I think the morality of that part is a little heavy-handed for me, and I would have loved them to embrace more of the circle-of-life construct: yep, this is the natural way of things, and I protected you as your mother. But I'm not sure that would have worked for the fox anyway. As far as treatises on robotics go, it's an infinite magical robot that has permanent energy and has
arms that extend to infinity, and all those kinds of things. What I've been thinking a lot more about in science fiction is engineering-oriented science fiction. Andy Weir is the best at this, right? Project Hail Mary. People are familiar with The Martian, which is a great fun one, and I do think the book is better than the movie, with all respect to Matt Damon. I think Project Hail Mary is even better. It's near-future science fiction, so it's tangible. It's optimistic
about humanity's ability to engineer our way out of grave problems. Those I love. And those are the ones that I decided to start reading to my kids.
All right, rapid fire. Is there a movie, song, or book that fills you with optimism for the future? Oh, wonderful. You know, it's funny, one of the reasons I started getting involved with film a little bit is because I want more optimistic stories out there. I think I'm going to go with Star Trek. And by the way, the current series are fantastic, but you don't have to pick the current series.
It really is a beacon of optimistic science fiction in a landscape of otherwise relatively dystopian works, which I think are just a little easier to write, frankly. That's awesome. Boz, what is a question that you wish people would ask you more often? Well, we've gotten into some of them today. It's funny, people hear about my job and they certainly want to understand:
What's happening right now? What is AI? Is it a big thing? Is it a bad thing? But they so rarely get into what the positive vision of the future is. The thing I don't get asked often is: paint me the picture of the beautiful future. I've spent time doing that in the mixed reality and virtual reality space, this idea of people unbounded by geography. I talked earlier about the birth lottery; where you're born is a huge factor,
because it limits what opportunities you have. What if you didn't have those limits, because the metaverse enabled you to bring the full strength of your talents to bear? How much would humanity benefit if people, no matter where they were born, were able to bring that brilliance to the forefront? And then I talk about AR: what if everyone had that memory, that cognition, that hearing, that vision, those capabilities?
How does society move forward when we all have those capabilities in equal measure, and at levels superior to what biology can provide us? Well, here's another positive question. Where do you see progress or momentum outside of your industry that inspires you?
Medicine. Man, it feels like the unlock we're getting with cell models, the unlock we're getting with AI being able to take on modeling tasks that were previously intractable. We built a custom AI at FAIR to do material exploration for the optics in our glasses.
It's one of those solution spaces that's bigger than the number of grains of sand on Earth, and you have to look at every molecule and assess what properties it has. And it was short work to build this incredibly dedicated model that narrowed us down to about 20 possible solutions, of which it looks like two are going to work for our purposes. Incredible, incredible result. And so you do that, and then you think about medicine, where you're so often trying to do a certain kind of protein folding.
I just think the combination of these two things is going to produce an explosion in great health outcomes for people, and I'm really looking forward to that. Couldn't agree more. And Boz, you teed me up so perfectly for our final question on Possible, which is: can you leave us with a final thought on what you think is possible to achieve, if everything breaks humanity's way, in the next 15 years? And what's our first step in that direction? Ooh.
I think in 15 years we're looking at a significant shared blending of the digital into the physical. So people coming together, wearing wearables, wearing glasses, having rich conversations with people who are both present and not present,
collaborating on work, on models, with a true feeling of presence, a true feeling that they're all there. And someone, by the way, who's not on a wearable but on a phone is being projected into three-dimensional space effectively through a Codec Avatar. Everyone is getting the most sense they can, through whatever tools they have, whatever modalities they have access to, to be
present. We're so much more in touch with the context of our lives than we give ourselves credit for. I talked earlier about the gap between the conscious and the subconscious: the context in which I consume information and exchange information, the subtleties of facial gesture and body posture. Huge portions of our brain, you know, the inferotemporal cortex, an entire area of the brain, are dedicated to reading faces. And
no one thinks that video calling captures that, which means that for certain things we are bound by geography. You have to be there. Now, I don't think anything will ever be as good as being there, but we can get a lot closer than we have so far. Boz, always look forward to our conversations. Such a pleasure to have you on the pod. Thanks so much. Yeah, thank you guys both for having me. Possible is produced by Wonder Media Network. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young.
Possible is produced by Katie Sanders, Edie Allard, Sarah Schleid, Vanessa Handy, Aaliyah Yates, Paloma Moreno-Jimenez, and Malia Agudelo. Jenny Kaplan is our executive producer and editor. Special thanks to Surya Yalamanchili, Saida Sepieva, Thanasi Dilos, Ian Ellis, Greg Beato, Parth Patil, and Ben Rellis. And a big thanks to Meta, John Erla, and Tatin Yang.