I think the gap between simulations and reality is getting closer and closer, right? GTA is just one example, but like I said earlier, a lot of AAA games are getting closer and closer to reality: the graphics, the fidelity, all of that.
I actually think that the sim-to-real gap is closing. If you're able to build and rig up basically all the controls that a robot has in a 3D video game or a 3D simulation, and you train an agent on all the scenarios that a robot could encounter in real life, then that simulation-to-reality gap, that sim-to-real gap, is actually pretty close to closed. And you should be able to generalize that to the robot in real life in, like, a couple of years.
Welcome back to the freeCodeCamp podcast, your source for raw, unedited interviews with developers. This week, we're talking with CTO and robotics engineer Peggy Wong. We'll learn how she grew up as a first-generation public school kid from Milwaukee who used freeCodeCamp as a key resource to build her developer chops.
Her love of robotics helped her get into Stanford. And from there, we'll talk about her work on augmented reality at Oculus, self-driving cars at Lyft, and AI agents at her Y Combinator-funded game dev startup. Support for this podcast comes from a grant from Wix Studio. Wix Studio provides developers tools to rapidly build websites with everything out of the box, then extend, replace, and break boundaries with code. Learn more at wixstudio.com.
[Intro music]
Peggy Wong, welcome to the freeCodeCamp podcast. Thanks for having me, Quincy. It's super great to be here, and it's an honor as well. Yeah, well, it's great to talk to somebody who's working on the leading edge of AI and applying a lot of these tools, because we hear so much hype about AI, but what is it actually being used for? And you strike me as somebody who is picking up the state of the art and figuring out ways to apply it.
Oh, thank you. Yeah, I'm happy to talk more about it. I'm sure we'll get into a lot of this on the podcast, but I've been working in robotics since high school, and I've been working on AI since freshman year of college. This is really my life's passion. I'm a huge proponent of the idea that AI is going to change the state of robotics, agents, what we're doing as a company at Ego, and also how that's going to change human lives for the better in the future.
But yeah, I'm sure, Quincy, we'll get into this a lot later. I can talk for hours about this topic. Awesome. Well, we are going to dig deep and learn as much as we can from you in terms of what the current capabilities are and what you're excited about. One thing I did want to discuss is CES, the Consumer Electronics Show held in Las Vegas every year, which just wrapped up. As of time of recording, it literally just finished. So: was there anything on display that you thought was a particularly striking or interesting application of AI? Gosh, there are so many interesting things, but for me personally, the first of the two things that struck me most was NVIDIA Digits, which Jensen Huang showed: I think it's a $3,000 personal AI computer, a GPU box that you can run yourself. That was super interesting, because...
If it's true that it can be mass-produced and launched very soon, that will actually change the cost picture for AI. If you're able to run these AI models locally, instead of using cloud providers like OpenAI and Anthropic's Claude, that means you basically don't have to pay a per-token cost, where you pay a certain amount of money every time you make an AI call. If you make it something that's available on-device, essentially using these NVIDIA GPUs, that will hopefully decrease the cost so that an everyday person only pays a one-time fee to run as many AI models as they want on their personal computer, using personal hardware like the NVIDIA Digits box. So that's something I'm really excited about, and I think it will also enable a lot of new applications in robotics. Yeah.
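To make the economics she's pointing at concrete, here's a back-of-the-envelope sketch in Python. Every number in it is an invented placeholder, not a real price for Digits, OpenAI, or anyone else:

```python
# Back-of-the-envelope cloud-vs-local cost sketch. All figures are invented
# placeholders for illustration only.
CLOUD_COST_PER_MTOK = 10.00     # assumed blended $ per million tokens
LOCAL_HARDWARE_COST = 3_000.00  # the ~$3,000 figure quoted for the local box
TOKENS_PER_DAY = 2_000_000      # assumed heavy daily usage

daily_cloud_cost = TOKENS_PER_DAY / 1_000_000 * CLOUD_COST_PER_MTOK
breakeven_days = LOCAL_HARDWARE_COST / daily_cloud_cost

print(f"Cloud cost per day: ${daily_cloud_cost:.2f}")             # -> $20.00
print(f"Local box breaks even after ~{breakeven_days:.0f} days")  # -> ~150 days
# Ignores electricity, and that local models may be smaller than cloud ones.
```

Under these made-up assumptions the one-time purchase pays for itself in about five months of heavy use, which is the shape of the argument she's making.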
And I can get into that too, but... Yeah, well, the big question a lot of people still have is this: we're several years past ChatGPT raising awareness of the capabilities and the rate at which capabilities are improving. How are people applying these tools in exciting ways? Did you see any applications at CES where you thought, oh wow, I never thought of that, or, that's going to be a big game changer, in terms of people actually using AI in a consumer-facing way, and not just as something that's abstracted away? Obviously the price performance of AI is shooting through the roof, and that's exciting. But in terms of actual applications that you as a dev might use? Yeah. I think the biggest consumer use case is actually still probably ChatGPT.
Today I was talking to my brother, who's in college, and they literally use ChatGPT to do all their homework assignments. And this is kind of crazy, because I think one of the neat things about AI adoption is that the people who start using it first are, instead of digital natives, now AI natives. They're all younger kids, all students. They all use AI to help them finish their homework assignments, and they kind of grew up in that era. Eventually, in a couple of years, when they get to college and enter the workforce, because they grew up on this technology and have used it in their school and their work, they're going to continue using it,
and be more open to the application of AI in the future as they grow up. And a concrete example that I'm very excited about, especially at CES, is the cool new robots, especially from the low-cost manufacturers, mostly in Asia and China. There are humanoid robots that are now actually way cheaper, on the order of thousands of dollars instead of tens of thousands or hundreds of thousands of dollars, which makes them actually pretty affordable for consumers. And the second thing that makes it really interesting is that, in conjunction with the whole NVIDIA Digits announcement, if you add basically local AI on these robots, theoretically we could see something very soon where these robots are able to do very generalized tasks in today's world, such as helping you do your laundry, wash your dishes, do all the household chores. You'd have one robot to do all of that instead of building specialized robots for each of these tasks.
So that's something I'm very excited about. I think we're finally reaching a point where personal robots, personal physical assistants, can actually become affordable for the average consumer in a couple of years. Yeah. Well, let's say hypothetically AI technology just becomes an appliance, a literal Rosie the Robot, if you're familiar with The Jetsons. You're like, hey Rosie, can you cook dinner? Can you wash the clothes? Can you do other helpful tasks around the house? We've had washing machines for nearly a century, probably, and those have been incredible labor-saving devices. I guess we'd have a robot that interfaces with the washing machine to put the laundry in, and then maybe folds it, things like that. I can definitely see how that is an improvement, being able to be more declarative, like, oh, the laundry... or maybe they just look at the hamper and they're like, oh, I'd better go do the laundry, right? Maybe they can make those kinds of decisions on their own.
How much of a game changer do you think that really is in terms of saving people time? Let's say, hypothetically, you had a live-in robot friend that just did stuff around the house and you didn't need to worry about it anymore. How much time do you think you could save a week? Anywhere from two to ten hours, I'd guess. I hate doing laundry, so I think having a robot that is able to empty out my dirty clothes, put them in a washing machine, stand there for two hours... Right, because whenever you're doing laundry, you kind of have to just be at home, standing near the laundry, switching out the clothes, mixing and matching, right? Several different types of delicates and colors, blacks and whites, all that crazy stuff. And then some of them can be dried and some of them can't, right? And then folding the laundry and putting it back in your closet or your wardrobe or something like that.
For me personally, laundry is definitely the game changer, but also just keeping things clean around the house. And potentially a robot that can also cook for you; I feel like that would be awesome, for sure. I don't particularly like cooking. I hate cooking as well, but I have to learn it to, you know, survive in modern society. So having it cook something that's pretty good, or better than what I can cook, is going to be a game changer as well. It also saves on food costs, right? I can just buy groceries instead of going out to eat if I'm feeling hungry and tired and don't want to cook. Yeah.
And I think what's really interesting is that even though we've had these sorts of appliances for ages, people like us still have to use them. It's a huge time saver for doing the dishes, or the washer for laundry, but you still have to spend time physically putting these objects in the right places and doing the errands. With a generalized robot, you can have one robot that does all of these things and saves you hours per week. I mean, you said hours a week. That's a lot. At your hourly rate as a software engineer, we're talking about hundreds of dollars saved a week.
So let's say, hypothetically, they can introduce a humanoid robot or something. It doesn't necessarily have to be humanoid, but it has to be able to reach into a dishwasher, get the dishes out, and put them up on the shelf. Obviously, the way our spaces are already designed, our houses and our apartments and everything, is with the human form factor in mind. I'm a layperson, I don't know a lot about robotics, but I'm imagining that a humanoid robot would be an ideal approach, considering how our environments are already built. Is that one of the arguments for the humanoid form factor? Because they do have clothes-folding machines where you just dump the clothes in and it folds them, but it takes a long time, and... Yeah, yeah, the robotics take up a lot of space. And you have to fit it in your house somehow and have a place to put it. And spaces are really not designed for this. Especially, I live in San Francisco, and the houses here in cities like San Francisco and New York are so small that you literally don't have room to fit another appliance.
But if you have a robot, well, maybe it can replace your vacuum cleaner, or it's a humanoid robot that's relatively small, that you can just fit in a corner somewhere, and it can do all the tasks for you. I think that would be a huge time saver, and it would be a huge cost saver as well. I mean, think about the iPhone, and smartphones in general. The iPhone was the one that brought in the revolution, but of course there are lots of types of smartphones now. There was this thing that really stood out to me: somebody was flipping through a RadioShack catalog from 15 years earlier, before smartphones, and they realized that practically everything in the catalog, things that would cost thousands and thousands of dollars, take up tons of space, and involve tons of material that would ultimately be solid waste in a landfill somewhere, can be done with an iPhone. Flashlights, different ways of measuring things, different ways of recording things, different ways of accessing media. Smartphones do everything, right? Yeah. They just became these Swiss army knives of technology that we can carry around in our pocket, and we can do so much with them. It's almost like humans have superpowers because of that. So you think that there could be a single type of robot that is essentially the iPhone of, you know, home automation?
Oh yeah, for sure. I mean, I think that's literally the future. You basically have human-like robots, human-like agents, whatever that new term is these days. That's definitely going to be the future. And I think the emphasis on humanoid is important because, like you said, Quincy, the iPhone is so general that it can do many different tasks; it's not specific to just one thing. Phones before that were actually very specific, right? If you look at the pre-iPhone era, you have all these different pieces of consumer tech that do different things. Like you mentioned the flashlight: well, we had an actual physical flashlight that people would use. The phones before that were flip phones or BlackBerrys. You had pagers, right? You had walkie-talkies. You had all these different, specific forms of technology, and then the iPhone combined them all into one thing. And I think it's a very similar analogy to what we talked about before: you have these washing machines and these dishwashers and these ovens, but a humanoid robot is able to almost combine them and do a little bit of everything.
Right. And they're able to generalize. You wouldn't even necessarily need a washing machine. If a robot had all the abilities of a washing machine, it could just use any sink to wash your clothes and wring them out and dry them and everything like that. And you could really cut down on waste.
And then now we're back to the medieval ages. Yeah. But, I mean, the little scrubbing board... the whole reason people don't use scrubbing boards outside of, you know, pioneer reenactment and stuff like that is that it's incredibly time intensive. And actually, I've heard that if you try to wash clothes by hand, you're going to end up using more water than you would if you just used a washing machine, because washing machines are more efficient: they can reuse that water, and they do the whole batch at once. Yeah, yeah. And it's possible that a robot wouldn't necessarily need all those different trappings of a washing machine, with the cycles and all the motors and everything. Because their time is inexpensive, maybe it would take a little bit longer for them to go through and wash your clothes, or wring-dry your clothes, but you wouldn't need to buy a dryer. You wouldn't need to buy a washer or a dishwasher. There are probably at least four or five major appliances that require maintenance, break down, and are multi-thousand-dollar items that a humanoid robot could potentially replace. And again, when I say humanoid, I mean
the form factor: approximately the size of a human, with two arms, to potentially manipulate objects in physical space. Yeah, definitely. I think the humanoid form factor is super important because, as humans, we're able to do a variety of things. Obviously people have specialized professions in their daily jobs, but if we take that away and just look at what we do in our personal lives, humans are actually able to handle a variety of different tasks and different scenarios. You can run, you can play sports, you can do all these errands, you can talk to other people. You can do specialized tasks in your job, play musical instruments, move pieces and play chess on a physical board.
And you can sit in front of a keyboard and type, and your fingers just kind of effortlessly move in a way that communicates whatever you want to a computer, right? And code and build anything that you want, especially with freeCodeCamp, right? Yeah. So I guess one of the observations I've had from this, and I could talk about this all day, and I imagine you could too, is that there is a tremendous amount of potential in getting robotics right. We've had very rudimentary robots for decades. I mean, there was the robot on Lost in Space. My dad watched that when he was a kid, so that's like 60 years old or something, right? We've had that notion of robots, and we've even had the notion of humanoid-like robots, if you've seen Blade Runner or a lot of these movies. Oh, yeah. RoboCop. The thing that has changed
is the actual brain, the smarts of these robots, and their capabilities. That is the big step change we've had in AI, or in robotics, I guess: the actual software side. Yeah. But have there been big breakthroughs in hardware recently? So it's really interesting, because I think the big breakthrough in robotics mostly does come from the software and the AI side, especially for generalist robots.
Specialized robots are, well, they're not easy to build, but they're very execution-based. It's like building a washing machine: if you just want the robot to do one thing, you can build a robot that does one thing. People do it all the time in manufacturing. Building a generalist robot, especially a generalist humanoid robot, is a very different problem, and it actually parallels the advancements in AI technology. Previously, a lot of AI was very specific, very object-detection oriented, right? You had to identify whether a picture was a dog or a cat, and when you trained that AI, it could only do that one specific task. It couldn't, I don't know, identify a car if that wasn't in the training data set. It couldn't identify that something was a house, or that it was
some other object, or do other things at all. Yeah, let alone crossing the street. But today, and this is the big shift in AI, from those very specific, small supervised machine learning models to today's large language models, LLMs, people like Sam Altman are talking about how we're going to reach AGI, artificial general intelligence. And I think that's actually possible, because we're already seeing that shift in the AI space away from very specific models that can only do one thing really well and completely fail if they see anything outside the training set. You can draw a parallel between that whole advancement in AI, from specific models to general models with LLMs, and the same advancement mirrored in robotics, where you go from a machine that does one very specific task to a humanoid robot that can do a variety of different tasks. So I think the advancements in AI are actually one of the biggest unlocks for robotics.
I think a secondary unlock is that the cost of hardware has decreased. Obviously Jensen Huang, NVIDIA's CEO and founder, is at the forefront of this with NVIDIA. He's making these GPUs better, faster, cheaper, and that is enabling a lot of new capacity to train these large AI models that can do all these generalized tasks. But at the same time, on the robotics side, hardware and advanced manufacturing have gotten cheaper as well. So now you have, again, these couple-thousand-dollar robots. You can buy a robot dog for $2,000 or $3,000 now, and maybe a small humanoid robot for around 10K. But before, these robots would cost, like, 40K, 100K, 200K. Yeah, like the Honda ASIMO, I think, was one of those. Oh, yeah. It was insane. The short little robot that had the backpack and could... Oh, yeah, yeah, yeah. Boston Dynamics, too. Yeah, all of those robots. Obviously they're much more advanced, and they're designed for, you know, harder conditions, but I think, in terms of how consumer costs and hardware have come down, that does open the door for a lot of people to actually be able to afford some of these robots, or to train their own AI models. Right. So it sounds like you're almost as excited about the, I guess, accessibility: robotics becoming something that normal humans, and not just nation-states, can potentially invest in.
And that's why we had that conversation about, okay, if it can save 10 hours a week, and you multiply that by your hourly compensation, what's the payback period? I think that is the kind of math a lot of people do when they're trying to decide whether a labor-saving or time-saving invention is worth investing in. One of the ways I can justify having a really nice MacBook Pro is that I've probably used it more than 4,000 hours, and even though it cost $3,000, the hourly cost of ownership comes out to around 75 cents, right? Oh, gosh. Yeah, yeah. Yeah, and it improves your productivity, too. That's another thing, beyond the fact that robots just save you time. It saves you time, and then you can use that saved-up time to do something else that you really want to do, whether that's a new hobby, whether that's catching up with friends, whether that's learning how to code more on freeCodeCamp, right? It just opens up a lot more opportunity than just the time saving and the cost itself. Yeah. Well, I want to dive into your background and how you got interested in robotics, because
I mean, was this something that you were always interested in as a kid? Was there some moment you remember in your childhood where you were like, whoa, this is what I want to be doing? Yeah. So I definitely credit robotics with getting me into coding, which is really interesting. I guess I can give a little more of my background. I was born in China and came to the U.S. when I was about two years old. My parents got their master's degrees around Milwaukee, Wisconsin, so I moved out to Milwaukee when I was about two. They ended up getting jobs around the Midwest, mostly in the Chicago area, sometimes in rural Illinois, and then back to Wisconsin, near Milwaukee. So I've always been around the Midwest; we spent maybe five years around Chicago and rural Illinois before we moved back to Milwaukee. So yeah, I call Milwaukee my home. Or I guess I call San Francisco my home now, but Milwaukee is my hometown. And what's really interesting about Milwaukee is that it's an old-school industrial manufacturing town. When people think of the Midwest, they typically think of flyover states, besides maybe Chicago. But I think
even as late as the '50s through the '80s, there was a huge industrial boom in the US, and a lot of it actually came from railroads, and a lot of those routes came through Chicago, which is why Chicago became one of the major transportation hubs of the United States. That industrial revolution and that manufacturing capability expanded, and Milwaukee, due to its close proximity to Chicago, always had a lot of that manufacturing capability as well. There are still a lot of, I guess, old-school manufacturing and robotics companies based out of Milwaukee: Johnson Controls, Rockwell Automation, GE Healthcare. I think even GE in general, although I can't confirm that.
And so, growing up in Milwaukee, a lot of your friends' parents work in these areas, and whenever you talk to them about work, they're always like, oh, we're making this cool new surgical robot to make surgery better, or making a better MRI machine for GE, or, in the case of Rockwell Automation, making robots much more efficient at manufacturing cars. That culture made it very cool to have robotics clubs in school. Yeah.
So you had lots of friends who were into robotics too. It wasn't like you were the lone geeky kid in robotics. Did you have other friends who were interested in, you know, Maker Faire-type stuff, actually building things?
I would say so. Actually, it was almost the same number of people interested in that in Milwaukee as out in San Francisco, which is kind of surprising. Some of my friends, especially as we grew older into high school, were very interested in manufacturing in general, whether that was robotics or just cars, car manufacturing, welding, a lot of these industrial applications. A lot of them were pretty interested in that.
Yeah, and I got pretty interested in that too, through all these stories, and I ended up joining my high school robotics team. That was super interesting because, and I think this goes back to what we were talking about before, what is the cool, newest thing to do? What is the newest innovation in robotics? To me, robotics has always been a combination of the hardware, the electrical boards and stuff, and the brain, the computer. When I joined the robotics club, there were three main teams: the mechanical manufacturing team, the electrical team, and the computer team. And I was trying to choose. What was really interesting was that they always brought this up. They were like, oh, we can definitely build any type of robot to do a specific task, but what actually makes the robot work is the brain, the computer.
That line has stuck with me from that day all the way to today. And I was like, oh, that's really interesting. If you compare that to humans, what are humans most valued for? Obviously they can do specific physical tasks, but a lot of the GDP growth and the knowledge work actually comes from the brain. Yeah. I mean, a forklift is way stronger than a human and way more efficient at taking pallets and putting them up on top of other pallets, on giant racks or something. There are so many robots, machines, that are way more efficient than the human form. What makes humans useful is the thinking. There's this great scene in
Star Trek: Voyager, I believe. That's the one with the Doctor, who's a hologram; he's basically stuck on the holodeck, and he's a very competent doctor and everything. At one point, I think, they lost the Doctor. He went off the ship because he gets this 29th-century piece of technology that allows him to basically leave the holodeck. Sorry, spoilers for a 20-year-old show. But they needed a doctor, and they're like, oh, we'll just build one. They just took his form factor and everything, and it looks like him, but it just didn't have his capability. And it didn't matter that it had hands that could steadily hold the scalpel and all that. It just wasn't the same, right? Because it didn't have that medical knowledge and that, I guess, tacit experience, the brains of that robot, if you will, that made him capable. It's a very funny, silly episode. But yeah, I feel like there's something deeper there than just watching a robot recite Gray's Anatomy verbatim. Oh my gosh. Which, you know, is a very interesting thing to do, and something that AI can do today, even, which is kind of crazy when you see these science fiction movies basically come to life within the last couple of years. Yeah.
So you joined the software part of the robotics team? Yeah. I learned how to code basically on my high school robotics team. A lot of the older students were very, very kind; they mentored me and got me started. And what's really interesting is that while I was learning how to code, I came across your freeCodeCamp website, and that's one way I tried to learn how to code on my own. I was pretty involved with my robotics club all four years of high school, and I was pretty excited about the future applications of robotics even back then. I really wanted to do more of that in college. So I ended up graduating and going to Stanford, and pursuing more research there, AI research and robotics research. I can talk more about that, but I also wanted to ask, Quincy, if there's anything specific you want me to focus on. Yeah.
Well, one thing I'm really interested in learning a little more about is what the experience at Stanford was like. Not everybody gets into Stanford; it's a very selective school. You were able to get in on your test scores and your extracurriculars, by working really hard. My understanding is you didn't have a smooth path in. You had to work really hard to get into Stanford. Yeah. So I'm probably one of about 10 people from Wisconsin in my year at Stanford, and I think five of those got in because they were athletes. Not all of them were from Milwaukee, either. From Milwaukee, I'm probably one of three people that year who got into Stanford. And I went to a public high school, so it wasn't a private school or anything.
Yeah, getting into Stanford was kind of a culture shock, because it seemed like a lot of the students there came from the East Coast or the West Coast. They went to very, very good high schools, sometimes private high schools, and they had a lot of peers who also got into Stanford or other Ivy League institutions. And I was like, oh, I really can't relate to that, because I think I was the first person from my high school in 10 years who had gotten into Stanford. And again, Stanford just doesn't really accept many people from Wisconsin; there are only about 10 of us every year. So I was actually quite pleasantly surprised when I got in, because I just didn't think I was ever going to get in. They just didn't accept people like me.
And obviously everybody works hard, right? Everybody who gets into Stanford and Ivy League institutions works hard. So the biggest question is how you differentiate yourself. For me specifically, I talked a lot about my passion for robotics when applying to Stanford, and about how I wanted to bring this technology into the world in the form of a business or a startup. That actually relates directly to what I'm working on today with my company, Ego. But it's kind of interesting how that's...
It's more about the story you tell and what motivates you, in addition to all those high test scores that are almost a baseline necessity. Yeah, and I want to talk a little more about that, because a lot of people listening to this may be in high school themselves, or, more likely, they have kids they would like to see eventually go to a really good school, a really good engineering program like Stanford, one of the best in the world, one that many, many people from all over the world try very hard to get into. I don't know the exact application figures, but it's extremely selective, and it is not trivial. One does not simply get into Stanford. I want to talk about what you had to do to get in, in terms of test scores and, obviously, your personal narrative and extracurriculars. If you don't mind, spend a minute or two talking about that for the benefit of people who are considering applying to an elite institution like Stanford, or who want their kids to be able to. Maybe their kids are still young. My kids are young, but I would be thrilled if they could get into a school like Stanford 10 years from now. So what should parents encourage their kids to start doing? Yeah.
I think a lot of it is honestly personal motivation. One of the biggest things I see among my friends at Stanford is that a lot of them are very personally motivated, and they have a particular passion, a specific thing that they're very excited about. And I think that shows through in these applications and in what you do. Obviously test scores are kind of a necessity; you should try to get straight A's. If you don't mind telling me how you did on standardized tests, which I understand some universities don't really require anymore, but they may come back, I don't know: how hard did you have to work to prepare for the SAT? Yeah.
I took the SAT and the ACT. My SAT was worse than my ACT; I got a 35 out of 36 on my ACT, and I was valedictorian of my high school class. So those things definitely helped, but they're not the differentiating factor. You don't have to be valedictorian, and you don't have to get a 35 on your ACT, but you should probably get above a 32 or 33, and you should probably be in the top 10 to 20 students of your high school, just to have a baseline of where you need to be academically. But then there's the opposite side, which is that a lot of people who are in the top 10 to 20 of their high school and have a 36 on the ACT don't end up getting into Stanford and the Ivy Leagues. And I think the reason is that they couldn't tell a good story about what they're personally motivated by and what they're passionate about.
Yeah, and a lot of them may not really be that differentiated from one another. And again, I don't mean to slight anybody, but I've met kids whose parents are software engineers at Intel, who grew up with half a million dollars in household income and things like that. They're a dime a dozen; there are tons of people like that in Palo Alto and San Jose. There are far fewer people who are first- or second-generation Americans like yourself. One of the things you told me before we started talking was that for the first two years you were living in the States, you didn't even get to see your parents; they were busy working and finishing graduate school. And then you're living in Milwaukee, which is not even Chicago. Yes, Chicago is this big industrial hub, and Milwaukee has stuff going on, but it is, as you said, you even used the term, a flyover state. I grew up in Oklahoma City, which is also considered a flyover state. That's something coastal elites say about anything that's not touching the Pacific or the Atlantic.
Right. So I do think that the fact that you had that interesting background maybe helped you differentiate yourself from the children of elites in New York City and San Francisco and places like that. Right. Yeah, I'm hoping that's probably the reason. We'll never know. I mean, it is so competitive that there are people with perfect SAT scores who don't get into these schools, right? Exactly. And that's the thing: you need to go above and beyond merely being academically excellent. You need to be excellent in other areas that are distinct and interesting. And it sounds like, for you, programming and robotics were that key differentiator. Yeah.
Yeah. And it is definitely really interesting, because I am a planner. I plan several years in advance for what I want to be doing, and once I commit to something, I'm pretty locked in and focused. So once I started robotics in high school, I did it all four years. I was pretty committed. I did robotics research at Stanford, and I ended up getting a bunch of internships. So I guess I can talk a little more about Stanford and how that got me into Ego, which is what I'm currently working on right now.
At Stanford, well, one thing is that I never really realized AI was that big of a thing until I got there. Everybody was talking about AI and computer science and how it was going to be the next big thing, so that was actually really interesting, being at the forefront of that innovation. And at the time, I had already decided that I wanted to focus on the computer, the brain side of robotics. So a lot of the robotics research I was doing was focused on AI and on how to use the brain to basically control the robot.
It was really interesting, because one of the first large-scale applications of robotics at the time, and this is before we had humanoid robots, which is why I'm so excited about humanoid robots... pre-humanoid robots, you couldn't actually make a robot that does a variety of tasks the way you can on the back of today's breakthroughs. You made very specific robots for very specific tasks. And the most general robot at the time was actually the self-driving car, autonomous driving. I was like, oh, that's really interesting. A lot of the developments that enable self-driving cars are mostly on the brain side, the AI side, in addition to some level of algorithms like sensor fusion, and the sensors themselves. So that was something I saw, and I really, really wanted to get into it and learn more about this new technology that actually makes self-driving cars possible. Obviously, I ended up doing computer science with a focus on the AI track. And I ended up talking to a lot of people in the self-driving car space about their thoughts and where they think the industry is going.
And I ended up getting an internship at Lyft Level 5. Lyft is obviously, along with Uber, one of the two largest ride-sharing... yeah, the big ride-sharing app slash network of drivers. And it's funny, because my parents had not heard of Lyft when I got the internship, because, again, in Wisconsin everyone has a car. So it's kind of crazy how big the disconnect is between everybody who uses Lyft and Uber in San Francisco and New York, and places like Wisconsin, where everybody has a car, so you don't really need Uber and Lyft.
Yeah, so I ended up doing a bunch of research on the behavior planning team. The behavior planning team is exactly what it sounds like: how you tell the car to drive in different scenarios, and how you generalize that across multiple scenarios. I was on that team, and I wrote some mildly interesting algorithms, basically telling the car how to move and stop at different stop signs. That was super interesting. And then what happened was that COVID hit.
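As a rough, invented illustration of what "telling the car how to move and stop at stop signs" can look like, here's a toy behavior-planning state machine. The states, distances, and timing thresholds are made up for the sketch; anything Lyft Level 5 actually shipped would be far more sophisticated:

```python
# Toy stop-sign behavior planner. All states and thresholds are invented.
from enum import Enum, auto

class StopSign(Enum):
    APPROACHING = auto()
    STOPPING = auto()
    STOPPED = auto()
    PROCEEDING = auto()

def next_state(state, dist_m, speed_mps, stopped_s, clear):
    """Advance the stop-sign behavior from the current observations."""
    if state is StopSign.APPROACHING and dist_m < 30.0:
        return StopSign.STOPPING       # begin braking inside 30 m of the sign
    if state is StopSign.STOPPING and speed_mps < 0.1:
        return StopSign.STOPPED        # vehicle has come to rest
    if state is StopSign.STOPPED and stopped_s >= 2.0 and clear:
        return StopSign.PROCEEDING     # full stop held and the way is clear
    return state                       # otherwise hold the current state

# One planning tick: creeping at 0.05 m/s near the line counts as stopped.
print(next_state(StopSign.STOPPING, dist_m=1.0, speed_mps=0.05,
                 stopped_s=0.0, clear=False))  # -> StopSign.STOPPED
```

The "generalize across scenarios" problem she mentions is exactly why hand-built machines like this get replaced by learned policies: every new intersection type multiplies the states and thresholds.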
And then you can't really work on hardware anymore, because of the two years of social distancing. You have to go into a lab and stuff, right? Exactly. So I ended up switching away from self-driving cars and into Oculus. And the story behind it... Yeah, the VR headset. Facebook's VR headset. Yeah, yeah, yeah. It's a great program. AR, VR. Yeah.
So I did an Oculus internship while in college at Stanford, because I wanted to try different applications of robotics outside of self-driving cars. That was kind of an experimental phase for me. At Oculus, I trained ground-truth depth sensing algorithms. Depth sensing, so figuring out how to determine how far away something is? Exactly, from a two-dimensional image. Exactly. Yep. I did a ton of work on that, and my work actually ended up shipping as part of the Oculus Quest.
So that actually has huge applications. And interestingly enough, all of these themes revolve around robotics as well, because the way you perceive a 3D world, that perception system, is very similar whether you're doing 3D reconstruction or depth sensing for AR/VR technologies, or for self-driving cars and robotics. In all of these cases, you still have to figure out what the different things around you are and how far away they are. In the case of a robot, you have to know how far away an object is to be able to grasp it and pick it up accurately, to be able to track it. In the case of AR/VR, it's definitely much more
a matter of, oh, how can I warn the user who's playing a VR game not to hit that table, or that couch that's getting super close to them. So the work I did ended up being part of the Oculus Guardian system, which warns you when things are too close. I haven't seen it in action, but my understanding, and sorry, you obviously know exactly what it is, but my understanding is that you don't actually see things in the periphery until it's relevant for you to see them. Like, oh, your hand is swinging very close to this window. Exactly. Or something like that, right? Yeah. Because it would break immersion if you could just constantly see the living room around you. So they figured out how to selectively hide and show things to keep you safe while you're swinging your lightsaber around, or whatever it is you're doing in there.
Exactly. And part of figuring out how close things are is that depth sensing, right? Because Oculus doesn't have any type of 3D sensor. When I say 3D sensors, I mostly mean LiDAR. Oculus only has cameras, so predicting how far away things are from camera images is basically an AI task. Yeah.
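For a sense of what "depth from cameras is an AI task" means in practice, here's a hedged sketch using the publicly available MiDaS model via torch.hub. This is not Oculus's pipeline, and note that MiDaS predicts relative inverse depth rather than metric distance, so the proximity threshold below is an invented, uncalibrated number:

```python
# Monocular depth from a single RGB frame with the open-source MiDaS model.
# Illustration only; not the (proprietary) Oculus Guardian pipeline.
import numpy as np
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")  # downloads weights
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # stand-in camera frame
batch = transforms.small_transform(frame)  # resize + normalize for the model

with torch.no_grad():
    inv_depth = midas(batch).squeeze()  # relative *inverse* depth: larger = closer

# Guardian-style proximity check. Because the output is relative, not metric,
# this threshold is an invented number that a real system would calibrate.
CLOSE_THRESHOLD = 850.0
if inv_depth.max().item() > CLOSE_THRESHOLD:
    print("Warning: something is very close to the headset!")
```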
After graduation, I went back to Oculus, more on the AR side of things at that point, and I worked on AR avatars, which is basically face tracking for AR avatars: if you show some emotion on your face right now, like when we're talking to each other, how do you mirror that in an AR or VR setting with a 3D avatar? Yeah, because a 3D avatar... and by the way, AR is augmented reality, if we failed to define that earlier; I like to always define acronyms. So for example, if I'm using some sort of app, they're not going to try to
reproduce every pore of my skin exactly, here on my head, and, you know, the exact amount of gray hair I have. Oh no, no, you don't have any gray hair, Quincy, you look really young. What it will do instead is give you this kind of Nintendo Wii version of me. A Mii, yeah, is what those are called: me with two eyes. Oh my gosh, yeah. Exactly. I mean, Quincy explained it; I couldn't have explained it any better. So basically, yeah.
It's basically trying its best to mirror whatever expression you have onto a 3D character, with potentially less fidelity, right? A 3D virtual character that doesn't look exactly like you, in an almost low-fidelity, Mii-style setting, both in augmented reality and in virtual reality. So that was a mix of new AI technology and machine learning: collecting massive amounts of training data and doing 3D reconstruction on humans to train that pipeline, which was eventually released on phones and VR headsets worldwide, running in real time. So that was super interesting.
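As a toy illustration of the face-tracking-to-avatar idea, not Meta's actual pipeline, here's one way tracked 2D face landmarks might be mapped to a single "jaw open" blendshape weight. The landmark indices and the normalization constant are invented for the sketch:

```python
# Toy mapping from tracked 2D face landmarks to a "jaw open" blendshape
# weight. Indices and constants are invented; real avatar pipelines use far
# richer learned models than this.
import numpy as np

UPPER_LIP, LOWER_LIP, LEFT_EYE, RIGHT_EYE = 13, 14, 33, 263  # hypothetical indices

def jaw_open_weight(landmarks: np.ndarray) -> float:
    """landmarks: (N, 2) pixel coordinates from some face tracker."""
    mouth_gap = np.linalg.norm(landmarks[UPPER_LIP] - landmarks[LOWER_LIP])
    eye_span = np.linalg.norm(landmarks[LEFT_EYE] - landmarks[RIGHT_EYE])
    openness = mouth_gap / max(eye_span, 1e-6)        # normalize by face size
    return float(np.clip(openness / 0.6, 0.0, 1.0))  # 0 = closed, 1 = fully open

# Fake landmarks just to exercise the function:
pts = np.zeros((300, 2))
pts[UPPER_LIP], pts[LOWER_LIP] = [100, 120], [100, 150]
pts[LEFT_EYE], pts[RIGHT_EYE] = [60, 80], [140, 80]
print(jaw_open_weight(pts))  # ~0.62; a renderer would apply this each frame
```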
I actually met my co-founder at Oculus. He worked on the Horizon Worlds part of things. Horizon is the VR social space, the UGC (user-generated content) platform, where I think they're trying to do something very similar to Roblox: people can hang out in a 3D space and play video games together in that setting. I was thinking about leaving, just for personal reasons and some of the politics in the org, and we hit it off really well, and we decided to go ahead and start a company together. It's called Ego. We're building
human-like AI agents in games. And what's really interesting is that we've come full circle, because robotics is basically how embodied AI agents can do a variety of different things with a body in the physical world. What's really interesting is that that's essentially the same technology stack that lets you have agents that do everything a human could do in a virtual 3D space, in things like games. So if I understand correctly, you're kind of giving
the AI a form factor, like having a physical human body. Because if they need to go somewhere, computers can instantaneously transport themselves anywhere, right? But they're not corporeal. They don't have a physical form. They're just, you know, there. Yeah. Exactly. I don't know how to articulate it, because there's no concept in normal natural language for that sort of stuff. I'm sure there are technical terms for it, but they're kind of everywhere all at once, wherever they need to be. But once you apply... I'll use Grand Theft Auto as an analogy, because... Yeah. Oh, we love GTA. Yeah.
A lot of self-driving car training was actually done in Grand Theft Auto. Now, I'm not sure if they do the serious industrial training there, but it's a common thing for robotics students to use, as I understand it: you instantiate some sort of being in GTA, whether that's a goat that just walks around, walks right through cars, and destroys everything, or whatever. I think it was a deer; I saw some demo of this deer, and it was very chaotic. Yeah. But crocodiles in GTA 6 are the new thing, I think. Yeah. So I guess my question then is: a lot of what you're doing with Ego is kind of
giving an AI a body. So it's like, wow. Exactly. Kind of like the first thing an AI ever does in a movie whenever it takes over a body: it picks up its hands and goes, whoa. You know, that kind of thing. That's exactly what we're doing. We think that robotics, obviously, a lot of very cool people are working on that, and obviously I've had a huge interest in robotics for a very long time. But I think it's just so interesting how this can be applied to different industries, like gaming. So I guess I didn't mention this at all during this podcast, because I was talking about robotics, but actually,
this all partly fits together, because I also partly got into coding because of gaming. I played Pokemon Emerald. I'm showing my age here, but yeah, Pokemon Emerald on the old Game Boy Advance. I was not part of the, you know, FireRed and LeafGreen generation. I'm not that old. Yeah, and then obviously I had the Wii; I played all the Nintendo devices as a kid. And during the time when I was learning how to code in robotics, I was literally going online and looking up: how do I mod the Pokemon games on emulators so I can get cheats, right? Unlimited Master Balls. And that requires some assembly coding. It got me pretty interested in coding beyond the fact that it could be used for robotics; it was kind of an intellectual exercise in itself. And I mean, obviously I still game. I play a lot of games. I can't play too many games, because then I wouldn't have time to do anything else.
So it's kind of interesting how my personal life and my professional life converged in Ego, because games are essentially simulations for robots, in some sense. GTA is a great example. But also, you know, a lot of AAA games are almost indistinguishable from reality these days, right? Red Dead Redemption, I think, is a classic example.
Baldur's Gate 3 is also pretty cool, especially the graphics. Even Final Fantasy looks more and more realistic these days. Yeah, and usually they're stylized, but they could be somewhat photorealistic if they wanted to be. Usually, because of the uncanny valley, they don't try to make it too photorealistic, because then it could be creepy, I guess. Yeah. Too similar to human, but not exactly human, reads as creepy.
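Circling back to the "games are essentially simulations for robots" point: in code, that idea has a standard shape, the agent-environment loop. Here's a minimal sketch using the Gymnasium toolkit, with CartPole standing in for a rigged-up 3D game world and random actions standing in for a trained policy:

```python
# The standard simulation training loop. CartPole is a stand-in for a full
# 3D game world; a real sim-to-real pipeline would swap in a physics-accurate
# environment and a learned policy instead of random actions.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward = 0.0

for _ in range(200):
    action = env.action_space.sample()  # placeholder for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:         # episode over: reset the "game"
        obs, info = env.reset()

env.close()
print(f"Reward collected by the random policy: {total_reward}")
```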
It's also less of an art form when it's more realistic, in some weird sense, because then you're like, oh, it's just real life, right? It's like that old question: what is art? Is photography art, or is photography not art? Yeah. So, with Ego, let's just talk about what it does. You use this term, I think it's like endless games or something like that. Infinite games. Infinite games, yeah. What exactly is an infinite game? So the vision of Ego started when my co-founder and I were like, oh, we want to build an infinite game, which is a game that you can play forever.
It's basically what's depicted in Sword Art Online, or The Matrix. Okay, yeah, I've watched at least one episode of it. My kids didn't like it. Sometimes we tell people we're building Sword Art Online, and people are like, oh, what? What is that? But yeah, that's the vision of Ego. We're building infinite games, Sword Art Online style, where
you can essentially play any game that you're interested in, because the world and the agents will just generate it for you while you're playing. Based on your own personal interests, based on what style of art you want, based on what type of gameplay you want, the game will generate it with AI. Now, we realize that's a very ambitious, huge project, and we actually don't think the technology and the infrastructure are there yet. So we kind of have to build all the building blocks to get there eventually. And
what's most interesting about the rise of AI and ChatGPT and large language models, all these other models like Llama and Claude and whatnot, is that you can actually make human-like agents in games. Some might call it AGI, or sentience; there are lots of hype-y terms being thrown around on the internet. But I think it's actually a real thing, because sometimes, even if you're just talking to ChatGPT and you pretend it's your therapist or your girlfriend or your boyfriend, and you talk to it the way you would talk to a human, it feels like it actually has real emotions. Yeah. And so when you give these AI agents a body in a video game, a virtual 3D body where they can actually move around and perform actions, and maybe shake your hand or give you a high five, they actually look and feel and behave as if they were real humans. That's super interesting to us, and I think it enables a variety of new applications in games.
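Here's a minimal sketch of what an LLM-driven NPC turn can look like, assuming the OpenAI Python client. The model name, the character prompt, and the action vocabulary are placeholders; this is an illustration, not Ego's actual stack:

```python
# Minimal LLM-driven NPC loop (assumes the openai>=1.0 Python client).
# Everything named here is a placeholder for illustration.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are Mira, a blacksmith NPC in a fantasy village. "
    'Reply ONLY with JSON like {"say": "...", "action": "..."} where action '
    "is one of: wave, handshake, high_five, walk_to_forge, idle."
)

def npc_turn(player_utterance: str) -> dict:
    """One conversational turn: player speech in, dialogue plus body action out."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": player_utterance},
        ],
    )
    # A production system would validate or repair the JSON before trusting it.
    return json.loads(response.choices[0].message.content)

# The game engine would route the result to animation and dialogue systems:
# turn = npc_turn("Hey Mira, can you fix my sword?")
# play_animation(turn["action"])   # hypothetical engine call
# show_dialogue(turn["say"])       # hypothetical engine call
```

The point of the body, as she describes it, is the action channel: the same reply that carries dialogue also drives the avatar's handshake or high five.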
Some of the ones we're focused on are human-like NPCs, right, NPCs that you can talk to, that behave... Non-player characters. Yeah. So, man, there's so much to unpack there. One thing is this phenomenon of humans perceiving AI agents and characters that aren't even real as kind of real, and building a kind of relationship with them. I mean, take Hatsune Miku: there's a lot of love for Hatsune Miku, and that was pre-AI, right? Yeah, that was pre-AI. It's just a bunch of engineers and musicians bringing her to life, right? Have you ever seen her, the anime character with the long blue twin tails? She's everywhere. And it's basically this company that makes a vocal synthesizer. Yeah. Well, the original product was that you could program what you wanted her to sing, and you could control the pitch and everything. They had all these really high-quality samples, and they stitched them together so it seamlessly sounded like a human woman singing. But you could have her sing whatever you wanted: whatever words, whatever notes, in whatever sequence, and all that stuff. So it gave you the control of basically having your own programmable vocalist, just like you could program a drum machine to act like a drummer, right? But that suspension of disbelief, if you will, the human's experience, is
But that suspension of disbelief — the human's experience of it — is an interesting one. Because as long as you know that you're actually interacting with an AI agent, and it's not somebody trying to scam you at scale with, you know — I think they're called pig butchering scams or something. Oh, gosh. Like all those...
Romance scams. Tinder scams. Something like that, where it's computationally inexpensive to potentially scam millions of people — most of them will know what's going on, but some people will fall for it, and that pays off the cost of the compute, so it's even profitable to keep scamming —
if you have absolutely no morals and you're just a Machiavellian bastard. That's very sad. But as long as there's consensual interaction with an AI agent and you know what's happening —
A lot of people may be creeped out by this, but there's another human phenomenon that is very important to the way pretty much everything works in society. And that is that the human brain will perceive static images shown in rapid succession as video. Anthropomorphized? I'm not sure what the exact term is — persistence of vision, I think — but basically,
if I'm staring at a movie and it's 24 frames per second and I'm in a movie theater, it's not like: okay, there's an image of a guy, he's standing there. Oh, okay, here's a new image — what is this? Oh, this guy is standing slightly farther to the right. Oh, look, here's another image — he's even farther to the right. That's not how the human brain works. It interprets even 24 frames per second as a fluid visual experience. And the human brain could easily have not worked like that, and then movies just wouldn't be possible.
Exactly. The effective human frame rate is something like 60 frames per second — 60 frames per second is already close to indistinguishable from reality. And we've actually tested this: top pro gamers' reaction speed is somewhere between 100 and 300 milliseconds.
Yeah. So even if you see an image on the screen, for you to be able to react will take at least a tenth of a second. And if you go to a science museum, they'll often have this exhibit that randomly drops a ruler and you catch it, and where you caught it tells you your reaction speed. Most people have a reaction time of about 0.25 seconds — 250 milliseconds — whereas a pro gamer might have half that, which is phenomenal. Unfortunately, they're going to lose that as they get older, because everybody gets older.
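The ruler exhibit works because free-fall distance maps directly onto elapsed time via d = ½gt². A quick sanity check of those numbers:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def reaction_time(drop_distance_m: float) -> float:
    """Free fall: d = 0.5 * g * t^2, so t = sqrt(2d / g)."""
    return math.sqrt(2 * drop_distance_m / G)

print(f"{reaction_time(0.30) * 1000:.0f} ms")  # catching after ~30 cm -> ~247 ms
print(f"{0.5 * G * 0.125**2 * 100:.1f} cm")    # a 125 ms reaction -> ruler falls only ~7.7 cm
```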
But to get back to it — to some extent, the phenomenon of humans being able to build relationships with
characters that are not real — that are AI, essentially — is a positive quirk. It can be used in a positive way, to create these agents that people can build relationships with. Whereas if that phenomenon didn't exist, people would just spend the whole time going, oh, it's not a real human being, whatever — Exactly, just walk away. Yeah. But because that quirk of humanity exists, there is space for these infinite games where you
can have extremely esoteric characters. Like, let's say I want to bring back from the dead some very, very specific musician from the Baroque period that very few people are interested in, but I want to jam with that person, right?
AI could make something like that possible, and it's so specific that there would not be a market for creating a Hatsune Miku version of this specific composer from the Baroque period. It just wouldn't be viable from an economic perspective. But with AI in the mix, suddenly things that were not feasible can be done on the fly, inexpensively.
And it's super interesting, because obviously there's this whole generation of AI therapists and AI boyfriends and girlfriends, which is interesting, but a lot of these apps are still pretty chat-based.
So during the pandemic, I played a lot of Animal Crossing, and a lot of my friends did too. And a lot of people really fell off of playing Animal Crossing after the pandemic ended and people started going back to work.
But one thing I've noticed about the friends who continued playing Animal Crossing even after the pandemic ended, who would continue to spend hundreds of hours in this game, is that they're the people who actually developed a personal relationship with the characters. They're playing because they want to interact with the characters more. They want to build their relationships with the characters, give them gifts, things like that.
So you can already see how building a relationship with a virtual character in a 3D space is a huge phenomenon, especially today. People are spending more and more of their time online and less of their time in real life. Not sure if that's a good thing or not, but that is the reality. And
virtual characters that only talk to you via text or voice only get you so far — but virtual characters that can behave like humans and have a body in a 3D virtual space? That's actually super, super interesting.
And that has applications beyond NPCs and building relationships with virtual characters. It also has applications where they can help you — for example, training a player up and coaching them for more competitive games. It has applications where these AI companions can play with you
as a character, across both single-player and multiplayer games, when your friends are not online. It has applications in playtesting games, right? Trying to find every bug, filing the report, and
pinging the engineering team to fix it. Or in some cases, maybe the AI can do code gen and fix the bugs itself. Yeah. That's pretty exciting — the notion that something's broken and you'd be like, hey, you see that? Why is that tree floating above the ground? Oh, let me fix that real quick. And then the AI agent puts in a pull request. That would be remarkable. Yeah.
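As a sketch of that playtesting idea — every object here (`game`, `agent`, `issue_tracker` and their methods) is a hypothetical stand-in, just to show the shape of the loop:

```python
def playtest(agent, game, issue_tracker, steps=100_000):
    """Let an agent wander the game and file a report for each anomaly it sees."""
    state = game.reset()
    for _ in range(steps):
        action = agent.choose(state)        # exploration or task-driven policy
        state = game.step(action)
        for anomaly in state.anomalies():   # e.g. "tree floating above the ground"
            issue_tracker.file(
                title=anomaly.summary,
                repro=state.replay_trace(),  # action sequence that reproduces the bug
            )
```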
One thing you said there, about the Animal Crossing characters keeping people coming back — I'm convinced that's why World of Warcraft and games like that work. MMOs, MMORPGs, where players have a physically instantiated human-like body — whether that's a dwarf or an elf or something like that — and they're running around, doing stuff together, going on raids together.
Imagine that you have all these friends, and they're interesting people living in Omaha or wherever, and you're grabbing your Dr Pepper and sitting down and playing with them for a few hours, going into some dungeon or fighting some other guild or something like that. And it is the people that keep...
Exactly. The gameplay is not that competitive. It's kill, loot. It's also very old now, which is kind of crazy. Yeah. But the people are what keep people interested, right? The conversations — oh, hey, how's your kid doing? Those kinds of conversations, and that feeling of camaraderie.
And, yes, you can't really achieve that if you know the AI is not a real person and doesn't have a life outside the game. If it does, that backstory is fabricated, because they're not real. They don't have to make rent. But what's really crazy is, now —
if you make an AI that has a virtual body, right? Exactly the way a human player would. What if you can't tell the difference whether that player is an AI or a human? That's the current... I mean, that to me is kind of an ethical
boundary. I would be disappointed and upset if somebody I'd built up a long personal relationship with turned out to be an AI. Now, if I know — like the characters in Animal Crossing, you know they're not real people. Yeah, that's true. So you can build up a relationship with them and go, oh, that's cute, you know? Like in Mario —
I would pick up the little baby penguin and carry it over to the mama penguin, and I felt like I had a personal relationship. But nowhere in that process did I feel like I was interacting with a real penguin with, you know, a mortal fear of dying and stuff like that. Yeah.
Yeah, it would be so sad. Yeah, exactly. I mean, Hollywood loves to make movies about these kinds of things — oh, what if so-and-so didn't realize they were a character in a novel or a character in a video game, right? But...
I think there has to be consent. Like a disclaimer: hey, you know, so-and-so is a character in this game. They are not logging off and going to work at 7-Eleven and then going home and fighting with their parents. You know? They're not a real person.
But the beauty of AI is that now you can have both, right? So if your human friends are not online — because they have work and school and all these other personal obligations, and they have to make rent, right? — you now, theoretically, have
24/7-available AI companions, AI players that you can play with whenever you want, just so you're not the only one online. Yeah, and that's not really fundamentally different from, oh, my friends aren't available to play chess with me, so I guess I'll play against the computer. That's true, yeah. But I guess my point is... The computer would be smarter. No, yeah, you can always turn the difficulty down. You can set it up easier than you... Yeah, anyway.
You don't feel any accomplishment when you do that. I want to get destroyed. So one little thing I will opine on: I don't think it's healthy for people to get catfished, so to speak, into thinking they're talking with a human when they're actually talking with an AI agent. I think it needs to be
illegal or something like that — there needs to be some sort of required disclosure whenever you're interacting with an AI. Because it feels extremely violating when you get bait-and-switched. You're like, oh, I really have a strong sentiment toward this person, I love checking in with them — and then you find out they're not real. That's...
I mean — pardon me to anybody listening with kids around — it's almost like that stab in the heart when you realize, you know, so-and-so isn't real. Who the holiday is based around, right? So I won't get too explicit. Really? All these years. Destroys your childhood. Anyway. So,
I want to ask you a couple of quick questions. Yeah, of course. Because I'm very excited to learn your perspective. You've worked in self-driving, and you've worked in AR/VR and things like that. How close are we to, I guess,
true, you know, automated full self-driving, in your opinion? Where I can get in my car in Dallas and say: take me to Peggy's place in San Francisco, because we're going to go eat some... what is something people love to eat? Tapas? Mission burritos. Mission burritos — the Mission has the best burritos. Oh my gosh. Okay. So,
just drive me there. I'm going to sleep. I'm going to play some video games on my Game Boy Advance that I've modified to have better battery life or something like that. And I'm just going to hang out, right? Maybe my cat will be at my side, and we're going to arrive in approximately 36 hours.
We'll stop in the middle, and we're going to be able to get some briefings — like, how far along are we? And Quincy, now you have to do this. Once the technology becomes available, you have to schedule some time for that. That would be like an entire week of my life. If I'm really —
You can livestream it, or record a YouTube video of it too. It'll be like the Desert Bus challenge. Like, okay, we're still looking at flat ground. Oh, what Pokémon game am I on now? I'm going to play all the generations. So not the view of the road. Well, we can do both. Both, with cameras. Okay, yeah, yeah.
All right. So that hypothetical goal of me just being able to sit down, turn the key in the ignition — or press the button — and then the car figures out everything that needs to happen to safely get me to San Francisco. Oof. I think — I'm an optimist — I think we are about three to five years away from that. Three to five years. Yeah. Okay, so —
actually, you can do this in San Francisco right now, or in a couple of other cities like L.A., even though there are fires there right now. Or Phoenix or something? Phoenix. Arizona is very flat and has very grid-like roads, so it's a common testing ground. Yeah. Phoenix. But those cities are carefully mapped out, and they have lots of training data for all the different roads and so on. Yes. It's not the same as driving, like,
you know, in highway conditions, or if it starts raining — a lot of different things can happen, and basically, if the car doesn't feel it's safe, it will stop operating. Yeah. So, for people who don't know, Waymo has been operational in San Francisco for, I think, the last two years.
Yeah, the last two years. And then Cruise — which unfortunately recently got shut down by GM — had also been operational in San Francisco for about the same amount of time as Waymo. And Waymo has been working on this problem, I believe, since around 2009 or 2010. So they've been working on self-driving research for a very, very long time. And the reason I think it'll happen in the next three to five years is —
I think the technology has actually gotten there; it's now a matter of engineering and productionizing it. And the reason I say that is: yes, they do a lot of manual mapping, and they do have a lot of fail-safe systems to ensure these cars don't go rogue and start crashing or whatever. And
it's interesting, because the safety standard for self-driving cars is actually way higher than for human drivers. Waymo has had, I think, a couple of minor accidents, but almost none of them were its fault — it's usually the fault of the human driver.
And the fact that Waymo has been operating a fleet of cars in San Francisco for the last two years with nearly zero at-fault accidents is something insane. Generalization is definitely a hard problem to work on, but I actually think we are already there in terms of
parity with human drivers. Something self-driving cars have to show is that they're actually better than human drivers — especially with the regulations, and with how safe people feel being in them. So the bar they have to reach is actually much higher than for a human driving a car, or for a car manufacturer. Yeah.
And I think the technology to do that already exists, right? Waymo has been doing tests for highway driving, and they're opening up highway driving very soon. They're available across a variety of environments. They have tested in bad weather conditions such as rain or snow — Waymo actually drives fine in the rain, if you've ever ridden one in San Francisco. Right.
And it's actually trivial for Google to map out every city, because they own Google Maps. For any other company, I'd be a little more worried about the whole mapping process, and about updating the maps for every city they launch in. But for Google, that's kind of a trivial problem. So, yeah.
Yeah, I actually think three to five years, if not sooner. That's very bullish. One question I have: are there any big engineering breakthroughs that you think would accelerate that? Fast large language models that are able to generalize. Because people talk about artificial general intelligence — AGI — and robots that are able to do a variety of tasks, right?
Large language models are, in some ways, an approximation of human reasoning, of the human brain. If you enable large language models to make decisions at a very, very fast pace — almost like a human driver would in accident-prone scenarios — you can actually mitigate a lot of the edge cases that a system like Waymo will see on the road.
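A toy sketch of that idea — a fast learned controller handles normal driving, and a slower language-model planner is consulted only for rare edge cases, under a hard latency budget. All the objects here (`perception`, `fast_policy`, `llm_planner`, `vehicle`) are hypothetical stand-ins, not any real autonomy stack:

```python
import time

LATENCY_BUDGET_S = 0.1   # roughly a human-like 100 ms decision window

def drive_step(perception, fast_policy, llm_planner, vehicle):
    """One control tick: fast path for normal driving, LLM only for edge cases."""
    scene = perception.sense()
    if not scene.is_edge_case:                 # the common case: learned controller
        return fast_policy(scene)
    start = time.monotonic()
    plan = llm_planner.plan(scene.describe())  # LLM reasons over a text summary
    if time.monotonic() - start > LATENCY_BUDGET_S:
        return vehicle.safe_fallback()         # too slow: brake / pull over instead
    return plan
```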
So in some sense, they already exist. They need to get better, and they need to get faster. And if they do, then I think self-driving cars that are able to generalize — minus, you know, the whole engineering effort — will be able to scale very, very quickly. Self-driving cars, and robotics in general. Awesome. And on a related question: how far do you think we are from
Ready Player One? I don't know if you've read the novel. Oh, yeah. I read the novel. I watched the movie. Oh, Oklahoma City. That's your base. Yeah. But how far are we from having this? Obviously there's the treadmill, the 3D treadmill, that helps with... My understanding is that for VR there are some fundamental limitations in how humans perceive motion that make it disorienting and nauseating to run around in-game without your body actually running around. Yeah.
But let's assume the omnidirectional treadmills existed, just like in the book or the movie, where you can be walking around while staying in a stationary place — you don't have to worry about reaching the edge of your room — and it could be like you're walking around in a,
you know, World of Warcraft type environment. How far are we from that? Not the hardware associated with the treadmill-type things, but the other aspects of VR that could get us to where it feels like a compelling experience — where it's not just a kind of simplified,
you know, Nintendo Wii type experience, but actually feels like you're in World of Warcraft. Because my understanding is that it's a lot harder to render World of Warcraft on two different displays, at high enough resolution and high enough frame rate to make it feel real, than it is to just look at a monitor that's, you know, 144 hertz or something like that.
That's actually a hardware limitation. In terms of software capability — and actually, somebody should build this and prove it's possible, because I feel like that would be super inspiring to the whole field of VR —
you can actually, as a one-off — and the Apple Vision Pro kind of proved some aspects of this — build a super, super high-fidelity VR headset for one very specific use case, and that is exactly what you're talking about: render World of Warcraft super fast, I think it's
at more than 120 hertz, in a full 360-degree view, with decent-quality graphics. I think that is actually possible to build today, but not possible for it to be economical and mass-producible, because it would have to be higher quality than the Apple Vision Pro.
And the Apple Vision Pro is like $4,000, and most people don't use it because there's not enough content. So somebody — I don't know, Apple, or Meta, or another billion-dollar company — would have to want to take on this research endeavor and
basically build super-high-fidelity VR hardware that can render a full 360-degree view at 150 hertz or whatever, and somebody would actually have to build a game with that level of graphics and quality, in that full 360-degree view.
I think that's actually possible. I just think it would cost a lot of money in research and development, and in hardware costs. But yeah.
Yeah, I mean, I'm bullish. I think we'll get there pretty soon. But again, very few companies are pushing the forefront of VR today, and that's kind of a sad state of affairs. I'm not sure how much more money Apple is investing in VR after the Vision Pro didn't quite take off. Yeah. Well, one question I have related to that is:
does it need to take off? Do we need VR, or can we continue to suspend disbelief by looking at 2D screens and still have really compelling video game experiences? Because 3D TVs didn't take off. People still watch movies — they just don't bother with the 3D aspect, because it turns out it's immersive enough to watch a really good movie on a 4K monitor or something like that. And as I think Sergey Brin pointed out,
if you have a smartphone and you hold it a few inches from your face, it's basically a headset — Google even had that little Cardboard thing, right, where you'd just put your phone in a cardboard viewer.
Do you think there's some limit to how immersive something can be if it's just on a 2D screen? Because I can immerse myself in a game of chess or Dominion or something like that that I'm playing in a browser, and that's totally sufficient, because of the way the human brain works. Are there phenomena like that where you don't necessarily — kind of like how I was saying you don't necessarily need 150 hertz to make
what seems to be continuous video, because of the way the brain works — 24 frames per second is enough to help someone feel like this old Al Pacino movie from the 1970s is sufficiently immersive, you know? Yeah. I mean, I think in VR the technology barrier is a lot higher, because part of the reason the refresh rate has to be so fast, and the quality so high across the full 360 degrees, is that
whenever you move your head, the scene — the perceived screen — also has to move with you, and that all has to be synced up. And
if it doesn't sync up at the correct frame rate, you feel really nauseated, right? So that's the biggest tech blocker: whenever you move your head, the scene also has to move, and refresh really fast.
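The arithmetic behind that: a commonly cited comfort target for VR is roughly 20 ms of motion-to-photon latency, and at low refresh rates a single frame alone blows through that budget:

```python
# Frame time alone at various refresh rates, versus a ~20 ms motion-to-photon target.
for hz in (24, 60, 90, 120):
    print(f"{hz:>3} Hz -> {1000 / hz:5.1f} ms per frame")
# 24 Hz -> 41.7 ms  (hopeless for VR; fine for cinema)
# 120 Hz -> 8.3 ms  (leaves headroom for tracking, rendering, and display)
```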
But in terms of the immersivity question — well, one, I don't know, because I feel like if we reach that point of Ready Player One-level VR, where it truly looks and feels indistinguishable from reality, I think a lot of people who have escapist tendencies — people who watch movies, who read fantasy novels, who blitz through entire seasons of Game of Thrones, right? — they're going to want that. And I don't know how
big a portion of the population that's going to be, but I think a good number of people would probably want it. Now, even among gamers, there are casual gamers and there are more hardcore gamers, right? Hardcore gamers are always gaming. They're playing League 24/7. You never see them out of their basement. And I feel like
those are the types of people who would be down to be in a more fully immersive world. Whether the general population wants that — my guess is probably no, because you're right: a lot of people are totally okay with just watching a movie at a movie theater, or on the living room TV,
or are just okay with going out and taking a walk in the park and seeing the sunlight, without all this AR/VR stuff, in real life. But given that a lot of people do game very, very heavily, and are willing to spend a lot of time and money and resources on gaming, I think
there's a good chunk of the population who would be very, very into this sort of thing. Yeah. Well, let's talk about the role of these AI agents in making games more compelling. We talked a little bit about it, but what does the future look like? Can you paint us a picture? Let's say, hypothetically, I want to go back and live in kind of the Baroque composer meta, where it's just a bunch of composers trying to one-up each other and impress
the king or the kaiser, or whoever, wherever they are in the world. And everybody's wearing these fancy clothes — if you've watched the movie Amadeus, amazing movie, it's like that. And I just want to go to that world. And it's not cost-effective for a
AAA game studio to create a Baroque-simulator-type world. But we have enough historical documentation about how people talked and how people acted then that we could potentially create a bunch of AI agents whom I could interact with, so I could live out my fantasy of being a composer and one-upping Mozart or Handel, something like that, right? Yeah.
So it's a gaming experience built specifically for what I want. You have a really unique vision of exactly what you want. Maybe there are hundreds of thousands of people who would be interested in that, but there aren't necessarily 10 million, the way there are people who would rather be playing Call of Duty or something. Right. Yeah.
Yeah, I think that's actually super compelling. That's one of the use cases we want to enable with Ego: you can create your own personal simulation of exactly what you want, right? Whether that's a Baroque-period composer fight, or an Animal Crossing style cozy villager game, or,
I don't know, a Ready Player One-esque landscape where you're in a dystopian world trying to save it, and all of the characters in that world feel pretty realistic. Yeah.
I think that's effectively the vision of what we want to create with Ego. I think the biggest blocker to that vision isn't actually the characters part — it's the art part, how that's going to be generated. Because the big thing blocking a lot of this from existing — and why
game studios spend such big budgets on games — is that you have to budget out where the production cost goes, and that's usually in building these immersive environments and building these art assets. Yeah.
So, yeah, I think we'll get there, but we have to build all the infra that gets us there first, before that vision becomes a reality. So essentially tooling — creating the tools that let game designers sit down with, I guess, more powerful primitives to work with. I don't know if that's the right way of putting it.
I would say we're specifically focused on agents, human-like agents. We do see the opportunity for a lot of game designers to design their own scenarios, whether that's scenes or characters — characters with different motivations, different memories, different ways of interacting with the world. I think that's super compelling, and it's what we're working towards.
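As an illustration of what a designer-facing agent spec might look like — this schema is entirely hypothetical, sketched from the conversation, not Ego's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """A designer-authored character: persona plus the state that drives behavior."""
    name: str
    persona: str                    # e.g. "Baroque court composer, vain, brilliant"
    motivations: list[str]          # long-term drives the agent plans around
    memories: list[str] = field(default_factory=list)  # episodic memory it can recall
    relationships: dict[str, float] = field(default_factory=dict)  # name -> affinity

rival = AgentSpec(
    name="Antonio",
    persona="Baroque court composer, vain, brilliant",
    motivations=["win the kaiser's favor", "outshine the player's compositions"],
)
```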
In terms of... there's been a lot of discussion about AI art, and I don't really want to get into the philosophical and ethical quandaries there. But I think that is probably the huge limitation. And I do think that even
if you make it really easy for people to create their scenarios on the fly, and to procedurally generate worlds on the fly with different characters, that could already start to be pretty compelling for people. And obviously the vision is exactly what you described: to be able to create any scene, and for it to
generate whole simulations, whole worlds — basically generate the game as you're thinking about what to play next, not even typing what you want to play next. And the AI will generate the world and the characters for you, and you can build relationships with the characters, and maybe romance them, or
make them your best friend, right? I think that's actually super, super interesting. Yeah — or make them extremely adversarial. Enemies. A lot of stories are built around creating great, passionate friendships, romantic and otherwise. But the notion of creating a nemesis who's constantly against
you, undermining you — kind of a Moriarty to your Sherlock — I think that could be a really cool use for these too. And one thing I will say — just as we touched on, we're not going to talk about the AI art side; there are a lot of ongoing lawsuits and things like that, and of course I think the artists have plenty of reason to be aggrieved, and musicians, and everybody who's creating anything — freeCodeCamp authors, you know,
of which we have, I think, more than 700,000 or 800,000 forum threads that were most likely scraped as part of training data. We get more bot traffic now than we've ever gotten — people training LLMs and scraping, even though we have a no-scrape policy. But I will say, there are a lot of public domain books, literature —
Novels. Baroque-era music.
If somebody does want to train an LLM just on publicly available information and create games using that, let me know, because I'd like to talk to you. But one thing I will say is that I'm really excited about the possibility. We built this visual novel game a while back, and even a visual novel — I love, love, love those games. I mean, it's —
The production value — it's the true indie game dev thing. Okay, let's talk about indie game development. There are people like Derek Yu, who created Spelunky. He did everything himself, I believe, including the music. It's just the passion of one man's vision for what a game should be. My kids love that game, and they've probably watched me play it for 20 hours or something like that. It's a great game. Yeah. Do you think tools
like Ego — I mean, assuming Ego isn't just a standalone game, but is packaged kind of like Unreal Engine, which was actually based off of Unreal the game. Yeah, Unreal Tournament. Yeah. Like, everybody can say, wow, this game looks amazing — okay, well, how would you like to license this engine and use it to build similar games?
And that ultimately became a much bigger business than the game itself. Are you all interested in potentially going in that tooling direction and licensing out the capabilities? I think so. So we actually went through Y Combinator, which is a startup accelerator, and we got the chance to talk to Paul Graham — PG. Mm-hmm.
And we told him about our vision of creating an infinite game. And the way he pitched our idea back to us — and he's phenomenal at this sort of thing — he basically said: you're building a game that's also a game engine. And the reason is that, while you're playing the infinite game, you
have to have some sort of game engine to build out all these different scenes, all these different simulations, scenarios, and characters. So you already effectively have a game engine running in the background. And I see no reason for us not to give this technology to other people — game designers and game developers who also want to build games like that.
But I do think there might be some limitations on our part as we get there — like, potentially we would want them to build it on our platform,
or, you know, build it on Ego, right? But that's a little bit longer-term, and I can talk about that later on. Well, I mean, there are plenty of analogs and examples, like Unreal Engine, like Unity 3D, which is
source-available — and it's even free until you hit a certain revenue threshold. Yeah, exactly. And I think that's very egalitarian, because it ensures that hobbyists and people creating extremely niche experiences don't have to pay a bunch of money up front — that would restrict creativity. And indie developers are — it's really interesting, because even tools like Unity and Unreal
are not actually that indie-friendly, if you think about it. They're way more indie-friendly than they were in the past, for sure. But if it's your first time getting into game development, it's actually still quite hard to wrap your head around them and
just build things out of the box. And we actually think that — especially on the coding side, and even in terms of the UI, making it more streamlined — there's a lot you can do to make it easier for indie game devs, and for kids, or people just beginning to learn
how to code or make games. One good example of that is Roblox, right? Roblox Studio is way easier to use than Unity or Unreal. It's obviously not as powerful, but again, it's way easier — that's the trade-off, right? So yeah, I think there's definitely a lot of opportunity there, and I can't wait to see what people
create in the future, especially with better tooling, with better AI, and with potentially more time on their hands thanks to human-like robots. Yeah, 100%. I have two more closing questions. First: just like we were talking about earlier — developers and researchers using Grand Theft Auto as an environment where they could inexpensively test self-driving car algorithms and things like that —
how far do you think we are from humanoid robots — empowered with the kind of AI agents you're using in your game — actually being embodied and able to walk around the world and do things? It's an extremely open-ended question, I know; there are a lot of assumptions. Yeah. But just very roughly: how many decades do you think we are from that? Is that a 2070s thing, or a 2050s thing, or...
I think it will be sooner — but I'm also an optimist, a technology optimist. I tend to think things will happen sooner than they otherwise would.
I think the gap between simulations and reality is getting closer and closer, right? GTA is just one example, but like I said earlier, a lot of AAA games are getting closer and closer to reality — graphics level, fidelity, all of that.
I actually think the sim-to-real gap is closing. And if you're able to build and rig up all the controls that a robot has in a 3D video game or a 3D simulation, and you train the agent on
all the scenarios that a robot could face in real life — that gap, this simulation-to-reality gap, that sim-to-real gap, is actually pretty small, and you should be able to generalize that to the robot within, you know, a couple of years.
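A minimal sketch of the sim-side half of that pipeline, using the open-source Gymnasium API with a stock MuJoCo humanoid environment as a stand-in for a fully rigged robot sim (the policy-update step is deliberately elided):

```python
import gymnasium as gym  # requires the gymnasium[mujoco] extras for Humanoid-v4

env = gym.make("Humanoid-v4")            # stand-in for the rigged robot simulation
obs, _ = env.reset(seed=0)
for step in range(10_000):
    action = env.action_space.sample()   # placeholder: replace with a learned policy
    obs, reward, terminated, truncated, _ = env.step(action)
    # ...update the policy from (obs, action, reward) here...
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
# Sim-to-real transfer then means running the trained policy on the physical robot,
# typically after domain randomization so it tolerates real-world noise.
```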
So, gosh — if we wanted to build this in simulations and video games, it'll probably take, I'd say, three to five years for it to work pretty well, and then maybe one to two years to transfer that onto a robot. So maybe this decade, hopefully, you know? Yeah. I would love for that to happen. That's really cool. I love your enthusiasm and your passion. And I think that's
a Silicon Valley, San Francisco type of thing. I certainly experience it when I go to China as well — a lot of people in China are very optimistic, and in India as well. People are just really optimistic about technology. Whereas a lot of people in the United States are guarded. They're like, oh, no — but what if Terminator —
I think you have more things to worry about than Terminator. I feel like as this AI revolution gets closer and closer to AGI, or whatever happens, we're beginning to realize the very human nature of these conflicts, and the
political nature of these situations. And I feel like, at the end of the day, it's still humans making the decisions, not the AI. Humans will still want to control and subjugate the AI to their will, right? However that may be. And I think,
at the end of the day, we don't have to worry about Terminator — we have to worry about, you know, Putin or whatever, right? So I think we have to worry about a few people making decisions while thinking that they're making decisions on behalf of all humanity. Exactly. It's kind of like the agency problem, right? At the end of the day,
I could release a self-driving car on the road that I'm convinced is perfectly trained, and it could just go and proceed to run a bunch of red lights and honk at a bunch of people, or maybe start driving through some cow pasture and threatening all the cows or something. Exactly. I can impose my will upon the world, but there's no reciprocal,
countervailing force that stops me. It's much easier for me to do something dangerous, or put a bunch of people in peril, than it is for somebody to offset that, if that makes sense. Kind of like how it's much easier for me to put misinformation out there than it is for somebody else to come along and correct it. The old Mark Twain line: a lie can get halfway around the world while the truth is still putting its boots on.
That's true. Yeah. And so, because of that fundamental asymmetry, I think that's the main concern people have. Yeah. Terminator is just some corporation doing the profit-maximizing thing: oh, this will be great for military applications — and the next thing you know, it's self-aware and decides it wants to do something that is divorced from the,
I guess, interests of humanity. But it wasn't like every single human rubber-stamped that, or gave the thumbs-up for that AI to be launched. It was just an extremely small,
somewhat inbred, you know, academic circle of corporate people — we all know what's best, right? And they didn't know what was best. I think that's the parable people are afraid of. Yes. But I totally agree, and I'm heartened to hear that you're not sweating it. Yeah.
Yeah, I mean, I think whoever ends up making these decisions will be, like you said, a small group of people. So instead of worrying about AI, maybe we should really worry about who the people at the forefront of AI are — who the people controlling this type of technology are. I think our focus should be less on the technology itself and more on the human nature and the political nature
of these problems. And one can argue we should have been doing that anyway, given, you know, the state of government inefficiency in the United States, right? So, yeah. Technology moves a lot faster than human organizational structures. Those structures are slow for a reason — we don't want them making snap decisions. And we've seen throughout history that whenever an organization is too efficient at making decisions, the decisions
aren't necessarily what we wanted in the long run. So I definitely think the bureaucracy and the guardrails and the brakes and things like that are well-founded. But we can talk more about the philosophical implications another time. My other big question for you: let's say you have a cousin, and she's in ninth grade, and
she wants to be where you are — or maybe well beyond where you are — 10 years from now. Let's go back to you as a freshman, but with the benefit of everything you know now, and in 2025 instead of 10 years ago — I don't know your exact age; you don't have to say it. What kind of decisions do you think you would make
with that benefit? I always say that if I had a superpower, it would be the ability to see 10 years into the future or something like that — even 15 minutes into the future and you'd be incalculably rich on the stock market. But for practical decision-making, finance aside: one of the things you said that really struck me early in this conversation is that you are a planner. You plan things out several years in advance and then you stick to the plan.
With that perspective, what would you do if you were a high school freshman in 2025? Ooh, that is very, very tough. I think — well, I know the most about AI and robotics, and I think we still have a good
10 years there. Like I said, it will happen this decade, but I don't know exactly when, and there are still a lot of opportunities even after the technology breakthrough, across different industries and applications. So if we're looking at a 10-year horizon of these things getting newly deployed, you can probably catch the early end of that deployment by the time you graduate college.
So, knowing the industry I know best, it's still going to be AI and robotics: focus on computer science, focus on AI, focus on robotics. I think for sure something will happen this decade, and that would be a very good spot to be in by the time
my cousin — my future cousin, my hypothetical cousin — graduates college. But in general, anything in the space of AI is super worth going into. Obviously, I'm biased. Anything in the space of AI, and robotics in particular, and anything in the space of
AR/VR — augmented and virtual reality — though I will say AR/VR is a bit more of a question mark, just because a lot of it hinges on big-tech corporate sponsorship, effectively. But I do think AI and robotics are a little more democratized, so that helps.
Awesome. So robotics — okay, one last question that came up as you were answering that. I talked with a gentleman a while back — Bruno Haid, I think his name was. He's been on the podcast; I'll find the episode and link it below. One of the things he talked about — he's big on
the internet of things. He's big on leveraging microcontrollers and sensors and things like that. And he built, for lack of a better word, a fridge — a fridge that simulates different climates, so you can grow different crops inside it, with ultraviolet light, and you can simulate the level of humidity and,
you know, air pressure and all these different considerations, to grow crops as though you were back in Sicily. Oh, that's amazing. Yeah. Cool. Yeah, he sells these to restaurants — really high-end restaurants that care that much. Oh my gosh. Amazing. Well, one of the things he said is that over the past few years, thanks to
production improvements and things like that, a lot of the innovation is happening around Shenzhen in China — the ability to quickly create custom hardware for pretty much any use, and the specificity with which you can order just a few units, where previously you would have had to order an entire shipping container full of the thing, and you didn't have the prototyping and turnaround time that you have now. I don't know how closely
you follow the hardware side now that you're mostly focused on software, but have you noticed any big improvements? How have things changed since you were in high school robotics club? Oh, yes. Definitely — 3D printing has gotten so much better. So,
when I was in high school, a 3D printer was either really expensive or just not a thing. And now, apparently, you can 3D print whole rockets in metal, which is kind of insane. And yeah, I still kind of dabble in hardware — obviously not as much now, with my focus on software, but —
I've been a tinkerer for a long time. I built physical robots and stuff. And I've thought about just getting a 3D printer — I actually looked into buying one, and apparently these 3D printers are just $200, $300 now. You can also get a small laser cutter, which you can use to make a lot of crafts and custom crafts and things like that, for about $500.
Before, when I was in high school, these things would cost thousands of dollars at the bare minimum. Now they're, you know, a fifth of the price — and they're only getting cheaper. So I think
the cool thing about 3D printing — especially the bigger, industrial-scale 3D printers that aren't just printing plastic parts but metal parts, such as parts for rockets and whatnot — I've talked to some of my friends in the aerospace industry, and they say it's just easier and more efficient now to build custom parts,
especially custom metal parts, through a mixture of 3D printing and CNC machining. And I think that's really interesting as well. I think the biggest bottleneck there is actually the supply chain, not the technology. Obviously, a lot of that comes from China — a lot of innovation in manufacturing comes from China — and I'm not sure there's a
big supply chain outside of China — a big parts supplier and labor base in the U.S., or close to the U.S. So that would probably be my biggest question mark. What about PCBs — printed circuit boards and things like that that you can get really custom —
Oh, that'd be cool. Yeah. Yeah. With specific sensors and things like that — you can basically have a salad bar of thousands of different components that you incorporate into your own custom circuit board, which you then incorporate into your electronics, your robot. Is that a limitation — the actual hardware development of that stuff? That's actually not a limitation. My understanding is that
it's pretty cheap to make your own breadboard or PCB or whatever these days, so making any robot prototype is actually super affordable. So — the Honda ASIMO that was built, you know, 25 or 30 years ago or something like that. The tiny robot we were talking about — well, not tiny, he's about four feet tall. But,
if you were to try to build a replica of that — not exact, but if you were to order parts from Chinese websites or something like that and assemble it — how much money do you think it would cost to build something with his capabilities and general form factor? Five to ten thousand dollars. Okay, that's not bad. Yeah. I'm not sure what the original Honda ASIMO actually cost.
$2.5 million was what they were planning. Oh — so this is more than two orders of magnitude cheaper. Yeah. And it was released in 2000, so 25 years ago. Wow. Yeah. And the Boston Dynamics Atlas was $1.6 million. But it sounds like things have moved quite a bit since then. Well, okay. So I think, yeah — Boston Dynamics —
I don't know too much about the Honda robot, but the Boston Dynamics one is much more capable in terms of hardware and sensors. So I think to reach Boston Dynamics level would still probably cost tens of thousands, if not
hundreds of thousands, of dollars in hardware. But once you get out of the prototyping phase and start putting things into production — especially with the supply chain and Chinese manufacturing — you can get the cost of these robots, the hardware and the custom parts, down really, really low. And that's partly
why you can get really cheap Unitree robots — robot dogs for $2,000, or humanoid robots for $10,000. So —
Yeah. Awesome. That's really exciting. Well, I'm going to end on the fact that even though it's expensive in the prototyping phase, once you get economies of scale and scope from going into production, costs drop dramatically. And would you join me in encouraging people to pursue robotics? You still think it's a field worth going into, based on your answer — everything is not software; there is still a lot of innovation to be done in the physical world, with atoms and not just bits.
Yes, I think there's a lot going on, both in software and in hardware, and students — or people just getting into computer programming and computer engineering — should definitely look into it. Because people think, oh, all the research has been done already, everything's settled. But no — there's so much we can do, so many
cool things we can do, and that will probably last for at least another 10 years, if not more. So I'm actually very excited to see how the world will change this next decade, and I'm excited for, you know, robots that can do everything for me very soon. Final, final question: what should people do in terms of their information diet if they want to
keep up on these things? Are there any Twitter accounts — okay — any podcasts or YouTube channels or anything where you're like, oh, I'm a huge fan of this, they do really good, hardcore robotics content? Anything you'd recommend? I'd say Twitter is my main source of information these days, for better or for worse.
I mean, it's kind of weird, because I'm on a podcast right now, but I don't actually listen to that many podcasts, just because I don't have time. I know a lot of people listen to podcasts while they're driving from one place to another, or if you're — You live in San Francisco; you don't need to drive. Exactly. Or if you ride a Cruise, you can just pull out your laptop and work. Yeah. Waymo, working. Yeah — Waymo, sorry, not Cruise. Cruise is dead. Yeah,
sadly. But yeah, most of my information is from Twitter. I do watch YouTube videos on specific topics, so that's my other source of information as well. I've heard TikTok can be good if you use it right, but it's also getting banned pretty soon, so maybe not. But yeah, I think
Twitter is my main source of information these days. Awesome. And Substack too — Substack is kind of like people's newsletters. Yeah. But there aren't that many really technical Substacks; they're a lot more philosophical. So again, Twitter is probably the better source for more technical stuff.
Awesome. That's super helpful. Well, it's been an absolute blast talking with you, Peggy. Yeah, thanks for having me. I'm thrilled that freeCodeCamp could play a part in your ascent as a robotics engineer and now a CTO at a game dev startup. That's super chill. So yeah, I'm excited to learn more from you in the future, and to eventually meet up with you again in person and hopefully get some Mission burritos. Yes! And please ride a Waymo when you do that.
Awesome. All right. Thank you so much for having me, Quincy. This is an honor — an absolute honor, an absolute blast. You're the best podcast host, so this is super great. Thank you so much for saying that before we stopped recording. Awesome. Until next week, everybody: happy coding. See you. Bye.