
Why Google search isn’t going anywhere anytime soon (from The Next Wave)

2024/8/20

The TED AI Show

People

Bilawal
Matt Wolfe
As host of The Next Wave podcast, Matt Wolfe digs into trends in AI and future technology and their practical applications.
Topics
Bilawal: Thanks to its powerful models, broad distribution channels, and existing ads-and-sales ecosystem, Google still holds an advantaged position in search despite the shock of AI. Generative AI models like ChatGPT pose a challenge, but Google's strong search index and advertising business model keep it competitive. Google has the most complete models of the physical and digital worlds, ubiquitous distribution, and a mature advertising and sales ecosystem, all of which keep it highly competitive in AI search. Future search engines will likely combine large language models with a strong search index to deliver more comprehensive, more accurate results. Expectations for AI have run too high, creating a gap between real-world applications and those expectations that could cool public enthusiasm. Although the jump from GPT-4 to GPT-5 looks less dramatic than the jump from GPT-2 to GPT-3, that doesn't mean AI progress has stalled; it may simply reflect compute constraints. Sam Altman's comments downplaying GPT-4 may be partly marketing strategy, aimed at keeping public enthusiasm for AI high. Building a digital twin of the Earth could supply AI models with vast amounts of training data, improving their ability to navigate the real world. 3D modeling built on neural radiance field techniques is the direction maps are headed, and Google Maps will benefit from it. Reality-capture-based 3D modeling has broad applications, both practical and playful. As AI develops, empathy matters: understand the interests of different groups and look for a balance point. Matt Wolfe: Because of the flood of AI-generated content, Google is adjusting its search algorithm to put more weight on authority and domain authority. Nathan Lanz: (did not state a distinct position in the core arguments; mainly participated in the discussion)


Chapters
The episode explores the potential challenges facing Google's search engine supremacy due to advancements in generative AI models and discusses the implications for the future of search engines and advertising.

Transcript


Hey, what's up, y'all? This week, I'm sharing a conversation I had on a podcast I think you'll enjoy. It's called The Next Wave. It's hosted by entrepreneurs and tech enthusiasts Matt Wolfe and Nathan Lanz. We discuss search dominance, the future of Google, ethical considerations and AI content creation, and a lot more.

The Next Wave brings you fresh takes, industry insights, and a trustworthy perspective on how to implement AI to grow your business. It's like your chief AI officer. And I absolutely love this podcast because they distill down all the craziness in AI and make it super digestible. You can check out The Next Wave on YouTube or wherever you get your podcasts. And we'll be back with new episodes of The TED AI Show next week.

I'm still very bullish on Google because I think it's like the tip of the iceberg of what you see in tech companies. And the submerged part is just amazing. They've got the most complete models of the physical and digital world. They've got ubiquitous distribution and an existing ecosystem of ads and sales to plug monetization in. So I think it's still a magical combination. Hey, welcome to the Next Wave podcast. I'm Matt Wolfe. I'm here with my co-host, Nathan Lanz.

And with this podcast, it is our goal to keep you looped in with all of the latest AI news, the latest AI tools, and just help you keep your finger on the pulse so that you are prepared for the next wave of AI that's coming. And today we have an amazing guest.

amazing guest on the show. We have Bilawal Sidhu on the show. He is the host of The TED AI Show. He's an ex-Googler. We're going to talk to him about what it was like working on AI and visual effects over at Google. We're going to talk about

the debate over whether we should be accelerating AI or slowing AI down. We're also going to learn about how some of these AI visual effects tools work, because this is the field that Bilawal worked in for so long. It's an amazing episode. You're going to learn a ton, and I can't wait to share it with you. Want a website with unmatched power, speed,

and control? Try Bluehost Cloud, the new web hosting plan from Bluehost. Built for WordPress creators by WordPress experts. With 100% uptime, incredible load times, and 24-7 WordPress priority support, your sites will be lightning fast with global reach. And with Bluehost Cloud, your sites can handle surges in traffic no matter how big. Plus, you automatically get daily backups and world-class security. Get started now at bluehost.com.

So let's jump on in with Bilawal Sidhu. Thanks so much for being on today, Bilawal. Thanks for having me, gentlemen. Pleasure to be here. Yeah. So I want to just kind of dive right into it. Your background is Google, right? So I think when you and I first connected and we first started having some chats over Twitter DM, you were still actually working over at Google at the time. And you also were kind of doing a creator business on the side with your YouTube channel and everything you had going on. But

Fill us in. What were you doing over at Google? What was your role there? What was your experience like over there? Gosh, yeah, it was awesome. So I've spent a decade in tech, six years at Google. I've been able to work on projects that blend the physical and digital world. And I started off in the AR VR team really when spatial computing, as it's now called, was first popping off. This is like right after the DK2 came out, Google Glass was a thing. And everyone was talking about what is the next iteration of computing platforms?

Where are we going to go from this mobile revolution? I had a chance to work on a bunch of cool stuff there. YouTube VR content, live streaming Coachella, Teen Choice Awards, Elton John, the camera systems that we used to do stereoscopic 3D capture, AR SDKs when that became popular, augmented reality hit the scene. Then after that, I spent four years at Google Maps, basically creating a ground-up 3D model of the world, remapping the world, if you will,

And then turning the world into an AR canvas with the AR core geospatial API. It's been a lot of fun. And yeah, it's been awesome to work with some really talented folks to work on these projects that have been blurring this line between the world of bits and atoms. So I'm curious, working on all this stuff, in my mind, I can't even imagine what a day-to-day looks like at Google. I've been on the Google campus and it looks like a giant...

playground for tech nerds. So I'm just kind of fascinated by what it's like to work at Google. What does a day-to-day look like over there? So I was a product manager. And so a day-to-day for me is going to be very different than if you go talk to an engineer or designer. For me, really, it was a lot of meetings. Let me be perfectly honest. It's just like a ton of time. But there's some very cool things. I think Google and big tech companies generally are sort of this interesting microcosm

where it's like, you know, I'll send out an email and the guy that wrote the book on computer vision, like the computer vision book that everyone reads, responds to it. And I get a bunch of pings being like, oh, so-and-so responded to your thing. And it's like all these Pokemon that these companies have caught that are available at your beck and call to share ideas with, pull into your own projects. And really just, you know,

It's like the tip of the iceberg of what you see in tech companies and the submerged part is just amazing. So it's like when I moved over to the Maps team, I was thinking of working on glasses at the time. And the reason I went to Maps is like, I met this engineer. He's like, oh yeah, we write CLs to move satellites around in the sky. And I was like, wait, what, huh?

satellites around the sky. And it's like, yeah, like literally orchestrating a fleet. Like, you know, like most people don't know this, like Google owns their own fleet of like, not just Street View cars, but airplanes. Oh, wow. And so like the ability to like task those. So like, hey, we need to, you know, we've got this like Sundar I/O thing coming up and we're going to be presenting Immersive View.

We've got to go capture this high-resolution model of London, and, you know, suddenly things in the world of atoms are moving to make that happen. I think it's like absolutely amazing. I think people deeply underestimate the data moat that Google has. Obviously the most complete digital twin of the world we're talking about, but like search, right? Like YouTube. Oh my goodness.

all this stuff is available. There's cool things and products you can build around it, but along with it comes, which may not be a surprise to people in tech, but a ton of responsibility and guardrails for how you actually use this data. It's like the size of the prize and the datasets you get to play with are amazing, but to be able to do stuff with it, you really have to be exceedingly thoughtful and there's a lot of process involved in unlocking that innovation.

So yeah, that's how I would describe it. I think it's just, like, a Disneyland for nerds, to be honest. So it sounds like you're still really bullish on Google, because I know we were having some playful banter last year where, you know, I was like, maybe Google's gonna die, and you're like, what the hell are you talking about? Yeah, yeah. We've actually had that conversation on this podcast before, where I'm like, you know what, I think I give more credit to Google. I think Nathan's a little more... I don't know if they're gonna be, like, the top

dogs in AI. Like, where do you stand on that? Do you count Google out? Do you think Google will surpass the Microsoft AI, you know, Avengers mega team? It's hard to say anything with certainty, but what I will say is like, you know, after I left Google, I think I was like one of the few people, I felt like I was in isolation saying good things about Google. Everyone was just like, oh yeah, they're just too slow. They fumbled. They came up with the transformer. All the transformer folks have left. I think it's a situation where...

there was no real disruption in sight for the search space. Yes, there were talks about like, hey, like kids are like searching on TikTok and like YouTube like now, but YouTube is owned by Google. TikTok is kind of short form. Are people really going to be like, is that going to be a resilient thing? And one might argue now social networks are places where people do a lot of searching, but traditional search sort of as like,

you know, to, you know, maps guy to just give the maps analogy. It's like, you know, like maps is how you discover stuff in the real world. And Google is how you discover stuff in the digital world. It's literally your window to the world wide web. Right. I don't think anything had like sort of questioned the

the strong position Google was in in that regard until ChatGPT came out, when suddenly people could start connecting the dots and see, yo, like, if you connect large language models with, like, you know, Knowledge Graph and Search Index, kind of like Perplexity and, you know, Microsoft Copilots and whatever the heck else OpenAI is going to announce in order to kneecap any Google limelight next week. Like, I think people started saying, hey, there's a disruption in sight.

And I think combined with the fact that like the search ads business model was just such a money printing machine and still is. And the fact that, you know, the cost per query of these generative AI models is obviously going to be higher. And how do you do advertising and attribution and all this stuff? Like that it would like kind of represent a, you know, contraction in the money printing machine and the pie and the business that Google created had all the signs of sort of innovators dilemma. And I think like,

Google has sort of adopted the playbook of the innovator solution. And, you know, initially they had some of these reorgs that felt more like exec reorgs. And now they're actually bringing together like the brain and the deep mind teams. And they're actually shipping at a really good cadence of

And I think they still have some of the most unique data sets that other folks are talking about, you know, that may or may not have been scraped. You know, case in point, the CTO of OpenAI being asked by Joanna Stern about what does it exactly mean that you train on publicly available data? So all this to say, I'm still very bullish on Google because I think they've got the most complete models of the physical and digital world. They've got ubiquitous distribution and they've got the right infrastructure chops to basically like

bring that cost per query down and an existing ecosystem of ads and sales to plug monetization in. So I think that there's a like monetizable sort of like answer engine model. I think Google is one of the few companies that could crack it.

That isn't to say that I think OpenAI and Microsoft can't take meaningful market share. But let's be honest, how many of us actually use Bing? Like I don't, right? Like I used it for a little bit and I probably use Perplexity more now. Yeah. I mean, I am using ChatGPT and Perplexity instead of Google a lot these days. Agreed. Me too, to be honest. Yeah.

And then, you know, I've been following... a long time ago, I used to do SEO, like a long, long time ago. And I've been kind of following that space, and in the last two months, and I kind of predicted this a year ago, they're making major changes to the algorithm where they're really focusing on authority. Yeah. Domain authority. Yeah. Yeah.

Yeah, the reason they're having to do that is because of all the flood of AI content, right? Good Lord. They just can't deal with the flood of AI content. So it's like, okay, how do you deal with that? Let's go back to really valuing the big brands and the big names or the famous people too. That's the other thing is maybe they're taking social signals. You have a lot of followers on social media. Now that's a signal that you're an author they should listen to. Yeah. I mean, why do you think we signed on with HubSpot? We want that backlink domain authority. Yeah.

That's the only reason. Yeah, yeah, yeah. I mean, this is going to be a meta problem, though, for the industry, right? Just the explosion of synthetic content. I mean, and some social networks are almost like incentivizing it. Like on LinkedIn, it gives you, it uses GPT-4 to suggest a comment. And now you have just the most cringe, like,

regurgitations and summarizations of the original post from, clearly a normal human being would never write this. But if there was a meme about how to respond to somebody on LinkedIn, I mean, that's like encapsulates the style I see. And so- But that's happening on Quora too. So like right now, Quora is starting to rank at the top of Google results.

And Quora is being dominated with ChatGPT responses now. 100%. I mean, like books on Amazon too. And I think they came out with a restriction as well: you can only upload X number of books per day. Like, I don't know if that's the solution. But this is like the deep fake, shallow fake problem too. It's like, everyone's talking about detecting deep fakes and like, how do we figure this out? It's like,

well, the thing that causes maximum harm today aren't actually deep fakes. They're like super shallow fakes where you take a photo from a different time or a context and like, you know, kind of put it against another context. This is exactly the type of stuff you'll see like community noted on Twitter. And that stuff's like relatively easier to detect because you can actually find the source imagery if you do like reverse image search. And so like you add in the generative problem on top of that, it's like even crazier. But like,

most platforms haven't really even solved the shallow fake problem, right? It's really like when it reaches a certain threshold of distribution, there's sort of this retroactive, let's go throttle this thing versus like, how do you get ahead of this? Anyway, I could talk about that forever because some of the ways to avoid that is like ubiquitous surveillance, which is also not, you know. Oh yeah, that sounds great. It's like somehow the solutions to things that sound like 1984 are 1984 technology. It's like,

It's kind of weird how that works. But it's funny. Somebody shared an article with me just like the other day that was an article about, well, not just me, but it was like an article about like these seven AI influencers are, you know, changing how we see AI or something like that. Right. And then like one of the seven like was my name and I read the blurb about myself and the blurb about myself was like, I grew up in Louisiana doing real estate. You're my neighbor. Yeah.

And like transitioned into computer programming and then started teaching AI. And I'm like, other than the fact that they're like, Matt makes content about AI, everything else about that was just completely wrong. I think that's where also like these models need to be anchored in some sort of real knowledge graph, you know? And like, that's not to say that like, you know, an approach like search is only going to give you the truth, right? Like there's like, what is even the truth? And there's like differing opinions on it.

But I think these models to be able to like just kind of fact check themselves, at least with known information and come up with, oh, at least three sources that are reputable are saying this is Matt's bio would be better than

I don't know. So I'm like, this is the equivalent of the SEO drivel that sort of started bleeding into Google around, like, 2019. I think, you know, this is going to be a huge problem. And not to mention the implications of, like, if we've run out of content on the internet and we are actively disincentivizing human-generated content, how are we going to train these models? What's going to happen there? Yeah. I want to talk real quick

about search, because, just a peek behind the curtain to anybody who might be listening to this episode, we're actually recording it right before Google I/O, right? Like, Google I/O is next week from the time we're recording this. Bilawal and I are actually going to be out at Google I/O, attending in person. But

But, you know, one of the things that you mentioned is that OpenAI and Microsoft have a tendency, whenever Google announces something, to jump in and sort of one-up them. So, you know, by the time this episode comes out, we'll probably already know what OpenAI did with search. But the rumor right now is that OpenAI is creating some sort of search engine of their own to compete with Google.

Maybe with Microsoft involved, maybe not. There's still a lot of rumors and speculation flying around. But knowing what you know about Google, do you think OpenAI and ChatGPT can come in and compete? I think no. Like, I think the search index Google has is a very strong moat. The fact that they can sort of

almost like map the internet in almost real time is just a hard technical and infrastructure problem. And they're really well set up for it. I'm curious what it is. Like, obviously this is complete pontification rumor mill, what it is OpenAI is going to roll out. What I think it's going to be is, like,

something at parity with Perplexity, with maybe a better search index involved. And even if it's something like that, where you get sort of this multimodal summary where it looks at a bunch of links, you get, you know, some images, maybe some embedded videos, and a summary of whatever it is you asked for with citations, so you can go validate the quality of the links that were summarized, I think that would be a huge step up, right? Like, just being able to invoke

search inside of ChatGPT right now is clunky. You have to explicitly prompt it: hey, look this up, research this. And being able to do that in a fashion that is really about

leaning on like sort of real time and like sort of, you know, content that has real provenance along with, you know, like sort of the distilled wisdom, you know, the wisdom is debatable, the distilled wisdom that is in these large language models to summarize that I think is still a magical combination

And I don't know if you all feel like this, but I think the vibes have been shifting with regard to just the conversation and sort of, Matt, I know you had this like post about like, we think things are slowing down, but here's a bunch of announcement events coming up. Right. But doesn't it feel like just like those leaps that we hoped for haven't quite come? I guess Sora was kind of

I think the trend from like a million to 10 million to maybe infinite context is like an interesting leap too, but maybe we're just getting too used to it, you know, like versus the technologies, like the pace of advancement slowing down. Curious what y'all think. Yeah.

I don't know. I feel like it's kind of like it's on social media. The perception is, you know, the vibe has shifted. But I feel like a lot of people who are actually closer to Sam Altman, they don't feel that way. And so I still believe that, you know, they have something amazing coming. I follow Gary Marcus. I know. I think you actually have Gary Marcus coming on The TED AI Show, so we'll probably get some more insights from him in the near future. But

you know, the sort of Gary Marcus thing that he's been all over Twitter about is that we're not seeing the same leap from GPT-4 to GPT-5 that we saw from GPT-2 to GPT-3. He's sort of arguing that that exponential curve that everybody's talking about, that we're on with AI, is not true, right? We're not on this exponential curve. Otherwise, why didn't we get from GPT-4 to GPT-5 in half the time we got

from GPT-2 to GPT-3, right? Like why is it not showing that? But I also, like I think my counter argument to that is just 'cause we're not seeing it doesn't mean it's not there. - Happening, yeah, totally. - I think there's a lot of stuff happening like Nathan sort of alluding to behind the scenes over at OpenAI that we're not seeing.

I think it's probably more related to compute requirements to actually use some of these newer, more advanced models. If they were to release it right now with the compute that's available, it would be really expensive. And you know, the 20 bucks a month that people are paying to use ChatGPT is probably not going to cover the cost of inference to run these newer models. Same thing with Sora. I mean, that's kind of the leading theory, the one that I believe, with the GPT2 chatbot, the mysterious model that came out. And it's like, well,

what the hell is that? Yeah. Like,

Like maybe that's what it is. It's like they've actually... maybe this is something they actually developed, like, a year or two ago. And it's a more efficient architecture or something like this. And possibly that's what GPT-5 is built upon. Then they, you know, they have fewer issues with the cost, in theory. But yeah. I think it'd be hilarious if the GPT2 stuff is actually OpenAI. I mean, like, what an interesting way to sort of test a model in the wild versus, I don't know, running some sort of A/B, or some sort of, like, experiments

on the ChatGPT website where a subset of users get a certain model versus another set of users. Maybe like making it explicit that there's these two versions of the models and having people respond to it separately is interesting. Maybe there's an intention to create a bit of a PR, like sort of, you know, kind of seed the conversation and kind of grease the wheels before the real, like, sort of, I don't know, race car jump moment happens or whatever. I don't know, but it's like,

Yeah, like I'm inclined to agree. Like there's the compute stuff. Certainly Sora hasn't been rolled out widely because like it's just so compute intensive, right? Like you need to come up with a completely different like pricing model way beyond the $20 one. Maybe hence them talking to studios and stuff like that. But all these models will get optimized too. To the Gary Marcus point, it's interesting. I had a conversation with him last week and I was like, it's like he's been very consistent about like this not being the right paradigm.

And, you know, I think people like to, for the lack of, just put it bluntly, people like to shit on Gary a lot. But, you know, if there's one thing Gary's been, it's like it's been exceedingly consistent. And so I don't know. Like, I would like to see this sort of agentic,

co-pilot that feels more like an employee. We certainly haven't seen it yet, right? The converse of everything else is the expectations around AI, like a year ago, if we go back to when GPT-4 came out, were just so freaking high that people just thought, like, whether you were a knowledge worker or a visual creator, you would look at the narrative and you'd be like, holy crap, like, this thing's going to take my job.

And then I don't know if y'all saw the tweet. I was like, and then you go use the tech and it feels less like this, like fricking Kaiju Godzilla, that's going to stomp you. And more like this, like chaotic golden retriever that you can kind of coax to do cool stuff with you. And I think that, that Delta between expectations and reality is so stark. And, you know, like there was all of these,

I think even in a recent Sam Altman interview, he was saying one of his biggest regrets is that GPT-4 didn't have that economic impact everyone thought it would. And he's worried that the pendulum will swing the other way now, where, if expectations were so high, people will be like, oh yeah, whatever. And so, I don't know. I think the answer is always going to be in the middle, but I can't help but feel like...

we've gone past the like peak of inflated expectations and we're going into the trough of disillusionment. Well, we'll see. I mean, like Sam Altman also said that like he was surprised that GPT-4 was so successful and that it, you know, it kind of sucks. I don't know. My feeling on that is that Sam Altman is one of the like greatest marketers of our time right now. And he's,

You know, he's really, really, really good at getting that hype wheel spinning. Oh, baby. Yeah, I swear, I think Sam Altman is just really, really smart. And I think, you know, when you see him speak, right, when he does interviews, he's very calculated, right? He'll be asked a question and he'll sit there and he'll usually pause for a good few seconds before he responds.

And I think that he's got that marketer brain. Like, what can I say? It's going to sort of spread the flames to hype this up a little bit more. And I think that's kind of how his brain operates. So I think, you know, him saying like, this is going to be the dumbest model you've ever used by a lot.

And we're going to, you know, we're going to look back at this and be embarrassed by what we put out. I think that's all marketing. I'm certainly excited. But, you know, if I could have one request right now, it's like, just give me GPT-4 from April of last year. Give me that vintage of GPT-4. It was better. It was better, damn it. Can't you still go into, like, the OpenAI playground and select the older models? I think you can. Yeah. And I feel like just all these models, especially in the consumer interface, follow this trajectory of like, when they launch, they're really good. And then over time, as, like,

you know, various efforts to make sure the output is, you know, on rails and not harmful kick in. You see this deterioration happen. But hey, that's why we also have open source, right? Right. Yeah. Support for the show comes from LinkedIn. LinkedIn ads allow you to build the right relationships, drive results, and reach your customers in a respectful environment. They are not getting barraged. This is very targeted. You will have direct access to and build relationships with customers

With a billion members, 180 million senior-level executives, everyone's on a LinkedIn, it seems like, and 10 million C-level executives, you'll be able to drive results with targeting and measurement tools built specifically for B2B. In technology, LinkedIn generated two to five times higher return on ad spend than other social media platforms.

You'll work with a partner who respects the B2B world you operate in. 79% of B2B content makers said LinkedIn produces the best results for paid media. Start converting your B2B audience into high quality leads today. We'll even give you a $100 credit on your next campaign. Go to linkedin.com slash TED audio to claim your credit. That's linkedin.com slash TED audio. Terms and conditions apply. LinkedIn, the place to be, to be.

Well, let's talk about visual effects too, because that's really your background over at Google. I want to go back to, like, the sort of, you know, 3D imaging 101 for a second here. Can you sort of break down the difference between things like photogrammetry, LiDAR, NeRFs, and, you know, Gaussian splats?

I would love to. In fact, the thing I was talking about is like everyone talks about like generative AI a lot, but I think like the part that's getting not that much attention is this like visual spatial AI space. And so think about spatial intelligence as like

really just, like, reality capture. The world is in freaking 3D, right? And so, you know, Matt, you nailed it. Basically, photogrammetry is the art and science of taking 2D images and other sensor data, like LiDAR, and turning it into these 3D representations of the real world.

So photogrammetry has been around since like before computers were invented even. Like this is a way of basically using like math and images and observations of the world really to like extract 3D structure from it. But you should also think of spatial intelligence as the ability for machines to sort of interpret the spatial data like maps, 3D models, like the world as we see it, right? And so like,

To me, like photogrammetry or like reality capture, all these other techniques are all about recreating reality. And so photogrammetry isn't new, as I alluded to, right? But I think what's gotten a huge boost and why you hear about all these things is like, thanks to machine learning, basically like these learned approaches to modeling the complexity of reality, right? Like basically like, how do I take a bunch of 2D images of the world like this and

and essentially have a model do this inverse rendering problem where it's like, oh, here's where these hundred photos are located in 3D space. Based on this, I'm literally going to like eyeball ray tracing, like, and create a 3D representation that makes sense. And since you know exactly what the model looks like at the photos that you've taken, the representation that you get eventually is like good enough from all viewpoints. And so like,

This basically... the first NeRF paper, called Neural Radiance Fields, dropped in 2020. And then there's just been insane progress, like we talked about, from data centers to the GPU in your freaking, you know, NVIDIA workstation to the iPhone in your pocket. But even this wasn't new. There were spiritual predecessors to these ML-based learned representations that sort of encapsulate the complexity of reality. Enter radiance fields, right?

Think about radiance fields generally. Imagine a voxel grid, a cube of cubes, where every single cube has a color value and an alpha transparency value.
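The "cube of cubes" mental model he describes can be sketched in a few lines of Python. This is purely illustrative (a real NeRF encodes the field implicitly in MLP weights rather than as an explicit grid, and the resolution and the red sphere below are arbitrary choices):

```python
import numpy as np

# Toy voxel grid: every voxel holds a color (RGB) and an alpha
# transparency value, exactly the mental model described above.
N = 64                                            # grid resolution per axis
grid = np.zeros((N, N, N, 4), dtype=np.float32)   # [..., (R, G, B, alpha)]

# Fill a sphere in the middle of the grid with mostly-opaque red.
coords = np.stack(np.meshgrid(*[np.arange(N)] * 3, indexing="ij"), axis=-1)
inside = np.linalg.norm(coords - N / 2, axis=-1) < N / 4
grid[inside] = [1.0, 0.0, 0.0, 0.9]
```

Everything outside the sphere stays fully transparent (alpha 0), which is what lets a renderer see "through" empty space.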

And like, that's kind of what you end up getting with a nerf. And then when you do volume rendering, you can basically like, you know, end up getting these photorealistic renditions of the world. And so like the cool part about neural radiance field is instead of photogrammetry where you get this like 3D mesh model, this like with surfaces, with textures plastered on it, think of it like crappy GTA looking models.

What you get with NeRFs is a radiance field, this voxel grid of all these voxels and their various values therein that change based on how the camera is looking at it. And because of that, you get all these things that photogrammetry couldn't do, which is like,

modeling transparency, translucency, glass, shiny objects. All of that can be done. Fire, volumetric effects, all the stuff photogrammetry can't do. Because imagine needing to come up with a cardboard papier-mache model of that thing; it's going to look like crap. How do you model hair, fire, fog, all these things? You can do all of that with these implicit representations.

Now, the problem with NeRFs was the rendering speed. You've got this voxel grid, and first the training process takes forever, but then when you want to render an image, you've got to do volume rendering: trace rays through that voxel grid and add up all these values, and

that takes a lot of time. Think one frame per second to render out some of these videos, right? Along comes Gaussian splatting, which asks: do we even need the neural part of radiance fields? Do we need ML at all? Can we just do this with old-school statistical techniques?

Which is kind of wild, right? So instead of an implicit black-box representation, where reality is modeled in the weights of an MLP, a multi-layer perceptron, you've got an explicit representation of these ellipsoidal splat-looking things called Gaussians. Just think of them as super stretchy spheres. Turns out you can get a huge,

huge jump in quality while also being able to render way, way faster. It's like going from one FPS to a hundred frames per second. And since it's an explicit representation, in most of these apps I'm showing on the screen it's stored in this format called PLY, the Stanford PLY file.

You can basically bring it into any industry-standard game engine: Blender, Unreal, Unity. And since it's not this black box, this neural network you have to deal with, since it's explicit, you can go and delete and edit things far more easily. So it's super crazy to see

what's happened there. But basically between nerfs and Gaussian splatting, think of Gaussian splatting basically as radiance fields without that neural rendering part. And the paper uses terms like training or whatever, but there's no neural networks involved at all in 3DGS.

So yeah, how crazy is that? We went from cool fly-through videos, which was the early days of the Luma app (do the scan, reanimate the camera, leave the thing to render for 20 minutes, and get something back), to being able to literally take your scans and drop them into these real-time environments. It's freaking amazing. I think on the left I'm getting like 400 FPS on an NVIDIA GPU, and on the right I've got this thing in Unreal Engine. And yeah,

the cool part is that, unlike photogrammetry and very similar to neural radiance fields, splats still model these light transport effects. Again, imagine if this were a cardboard cutout model: you wouldn't have all these light transport effects of the light going through the tree, et cetera. And the way Gaussian splatting does this is by using an OG physics concept called spherical harmonics.
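(For the curious, here's a toy sketch of how low-order spherical harmonics give view-dependent color. The sign convention follows one commonly used in 3DGS-style implementations, and the coefficients here are invented purely for illustration.)

```python
# Toy view-dependent color from degree-0 and degree-1 real spherical
# harmonics, for a single color channel of a single splat.
SH_C0 = 0.28209479177387814  # Y_0^0 constant
SH_C1 = 0.4886025119029199   # degree-1 constant

def sh_color(coeffs, view_dir):
    """coeffs: [c0, c1, c2, c3] for one channel; view_dir: unit (x, y, z).
    Degree 0 gives the base color; degree 1 shifts it with the viewing
    direction, which is how splats model shiny, view-dependent looks."""
    x, y, z = view_dir
    c0, c1, c2, c3 = coeffs
    return SH_C0 * c0 - SH_C1 * y * c1 + SH_C1 * z * c2 - SH_C1 * x * c3

# Same Gaussian, two viewpoints, slightly different brightness:
front = sh_color([1.0, 0.0, 0.0, 0.2], (0.0, 0.0, 1.0))
side  = sh_color([1.0, 0.0, 0.0, 0.2], (1.0, 0.0, 0.0))
# front > side: the channel dims as you move around the splat
```

Dropping everything but the `SH_C0 * c0` term is exactly the "strip the view-dependent effects" optimization mentioned below.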

That's what it uses to model view-dependent effects, as they're called: view-dependent meaning that as you change your view, materials look slightly different. If you're trying to optimize, you can strip some of those effects out, but you basically get it all with Gaussian splatting. So I think it's super exciting. And you can do this stuff in the cloud, you can do this stuff on your desktop now. I think Postshot is a tool that not many people have used, but

if you're working on a commercial thing and you don't want to upload your data under Luma's or Polycam's terms of service, you can train this all locally on your desktop with Postshot or with Nerfstudio (though some of the models in Nerfstudio aren't commercial-friendly). And then there's even the phone in your pocket: if you've got a modern iPhone

and you just want to know what 3D, what radiance fields and reality capture are all about, just download the Scaniverse app and have at it. So this is maybe a dumb question, but with NeRFs and all this new tech, are you able to make a really realistic 3D model of a city like San Francisco? I mean, is that what you were showing me earlier?

Or is that only a certain scene? Like, how hard is that? Yeah, I mean, there's a bunch of new papers out, right? There's a paper called Block-NeRF that tries to scale NeRFs up to city scale using Waymo datasets. And similarly, in the Gaussian splatting world, you're seeing different papers about nested hierarchies of splats

with really good transitions, to model an entire city and eventually the globe. So I think that's the path academia and industry are on. Already you're seeing city-scale datasets that are very plausible in research, and I think it's only a matter of time before that stuff gets into production.

You think that's the future of Google Maps? I think it's the future of maps for sure. In Immersive View, there are certain indoor locations where you get a pre-rendered neural radiance field that you can kind of walk around and see. This is just the evolution of that. Those datasets exist, and there's a handful of companies in the world that have them. So I think that is the future of geospatial and maps in general. But on the other hand, what's interesting is that

with this technology, building a map of the world is easy; updating it is way harder, right? When people talk about this one-to-one digital twin of reality, well, new stuff gets built all the time. Things change all the time. Seasonality is a thing, right? So

I think with this technology we've got, since we've commoditized capture (sensors are cheaper, compute is cheaper) and we now have access to the same algorithms and approaches to model reality, updating this model of the world is going to get a lot, lot easier. So it's going to be very exciting: in the near future, we're driving around our cars and walking around

with our glasses or whatever, and we're updating this real-time map of the world. I think we're very much on that trajectory, and we're closer now than we've ever been. What do you think are the business applications of this tech? I mean, it's all the applications that value stuff in the real world; there's utility and there's delight, right? I think,

if you look at what NVIDIA is doing with Earth-2, right, we're talking about the physical structure of the world. You can think of the Earth as having various facets: there's the terrain, the natural physical features of the Earth, then all the human-built things on top of that, the structures we've built.

And then you can layer human activity on top of that, right? Us moving around in the world, our sensors, our cars, et cetera. And then there are other phenomena, like weather, tides, things like that, that need to be incorporated. So

Earth-2 is this really interesting initiative by NVIDIA focused on the weather systems that govern day-to-day weather in the real world. If you've got that understanding of the structure and geometry of a place and where the sun is going to be, you can already predict things like: can I install solar panels here? How much sunlight would I get with this configuration of panels? When you layer weather on top of that, things get even more interesting. So,

to answer your question, there's a bunch of applications across utility and delight. Media and entertainment, obviously, and gaming: I think the next GTA is absolutely going to be built in a twin of the real world. Maybe this is the last GTA built by humans manually emulating the real world. That's certainly exciting. That said, a bunch of games have already used reality capture, from Call of Duty to Battlefront, et cetera.

But I think the utilitarian aspects are far, far more interesting. For anything you're trying to do in the world of bits, from building stuff to disaster planning, the range of applications is just immense.

Well, even just, one of the things that Jensen showed off at GTC this year was creating these virtual worlds, putting virtual versions of humanoid robots in them, and training them on this virtual twin of the real world so they know how to navigate the real world. Then once they get that training data, they can inject it into the real robots. So this

concept of creating a digital twin of the Earth will allow us to train a lot of these robots and machines to operate within that digital twin before actually deploying them in the real world. To me, there are a lot of huge implications there. 110%. I mean, these...

it's a way of creating all the training data these machines and perception models need to be able to navigate the world, right? And what better way? You can 3D scan a city block, then create all these different scenarios of human activity on top of it and feed that in to train self-driving AI. I think

the fact that we've gotten to a place where we can basically teleport reality into the digital world, and also manifest the digital in the real world: that bridge, I think, is just very powerful for a bunch of different applications. Well, let's talk super quickly about TED. First of all, congrats on even having a TED Talk; that's such an amazing accomplishment. Some people say they've had a TED Talk when they're really talking about a TEDx Talk. And come on, come on.

Come on. You've actually given a real TED Talk, a legit TED Talk. And not only that, they asked you to host the TED AI podcast. So tell us a little bit about that: maybe share some of your experience at TED, and then tell us about the TED AI podcast.

Yeah, sure. The TED Talk was certainly a fun experience last year, and I would say this year was even more fun. I had the opportunity to co-host Session 2, which was all about AI, with Chris Anderson. And we had some amazing speakers: Vinod Khosla, Fei-Fei Li,

the CEO of GitHub, Helen Toner, ex-board member of OpenAI. And, I don't know if you've checked her work out, but Niceaunties: an absolute trip, basically what intergalactic social media looks like, to me. It was a super, super fun experience.

Yeah, I mean, look, the T in TED is all about technology. And over time, TED grew to encompass not just technology, entertainment, and design, but a plurality of topics, right?

And with AI, tech is a horizontal, but it's impacting so many different verticals in our daily lives, right? We can talk about all the applications: whether you're a creator, a knowledge worker, a musician, whether you're thinking about national security and defense, whether you're thinking about relationships, right?

And often, in all of these topics, there's a dichotomy that we as builders and consumers have to contend with. And so the idea of the TED AI Show really is to outline those dichotomies and not necessarily take an opinion one way or the other, but elaborate on the entire gamut, the good, the bad, and the ugly, and let people decide for themselves.

And we do that by talking to people from all walks of life: people whose titles haven't even been invented yet, but obviously technologists, journalists, researchers, artists; the list goes on. And I'm just super grateful for the opportunity to bring my excitement into the space.

I want to bring the lens of a creative who's built a following of over a million folks using these tools, but also of a product builder who's shipped a bunch of this stuff, and then of a, I would say, cautiously optimistic AI enthusiast. So I'm going into these topics with those three lenses in mind. And it's just been a lot of fun. We've got some really cool episodes lined up for y'all, and I can't wait for y'all to check it out.

Do you have any idea of the launch schedule? Are there dates planned out yet? Yeah, totally. May 21st, the first episode drops, and then it's going to be weekly. There's going to be a little bit of a summer break, but 25 episodes in the season. And let me tell you, I think there's something for everyone. Yeah, I do wonder, what was the general vibe at TED? Are people optimistic, or are they really fearful of AI? And if you look at TED, you also had...

people like Bilawal. I know there were some other speakers there; I think maybe Mustafa Suleyman was there, or maybe that was a more recent one. Yeah. But then you also had guys like Gary Marcus and, I'm going to totally butcher his name, Yudkowsky. Oh yeah, Eliezer Yudkowsky. Both of whom are more on the "hey, let's chill out on the AI" side. So from the speakers front, it seemed like they had speakers on both sides of the argument.

Definitely. I mean, the theme for this year was "The Brave and the Brilliant," covering that gamut of opinions. I would say overall, the vibe is positive. So,

I'll give you a sample size. I taught this discovery session, which was about the dichotomies of AI, with about 50 people. What we did is look at a bunch of verticals in the AI space and essentially come up with: what happens if this goes really well, and what happens if this goes really poorly? And then actually use ChatGPT to come up with a headline, a pithy visual depiction of that desirable and undesirable future. And honestly,

most folks in the room were optimistic about it, right? But they're not blind to the downsides. I think the problem with anything is extremes, right? And Nathan, I hear your concern that we can't have doomerism; it's infectious.

But I think the same thing applies to the opposite narrative too, which is: well, obviously we've got to keep accelerating, we've got to keep shipping. I think it depends, right? That's sort of the boring answer to these things. And I think you can't understand the nuances unless you go dissect that full gamut of considerations.

And so the editorial perspective I'm trying to bring, and of course TED has a huge say in this too, is: look, I'm 60, 70% optimistic, which is not too dissimilar, Matt, from what you and I have talked about on most things,

but I'm not going to be blind to all the downsides of this stuff too, right? And I think that's okay to say. And with the speaker selection, we're trying really hard to have a balanced perspective on the guests too, so that

you'll be able to hear both sides of that argument: somebody who's super stoked and thrilled about AI art, thinks it's the bee's knees, and thinks it's totally cool to train on copyrighted material, and somebody who thinks that is the death of creativity as we know it. And you have to... Yeah, I mean, I think it's good to hear both sides. I agree, but I'm concerned. I used to live in San Francisco, and

they're pushing for regulation now. Who's "they"? You mean Sam? The government? Well, no, the government. They're pushing a bill through right now; they're trying to fast-track it. I forgot who's doing the bill, but basically, if you launch a new language model, you need approval. You need approval, but you also have to basically sign something, and it's perjury if you're lying, that this model can do no harm. It's like,

yeah, who's going to do that? Yeah. And so I agree, nuance is important. And I do consider myself kind of part of the e/acc movement, but more generally just a techno-optimist. Do you have it in your bio still? I don't know. I don't. But, you know, I like Beff; I like all the people who are part of that. I think it's cool in general. Yeah.

I think in general it's right, you know, but nuance is important, I agree. Let me put it this way: I think we've got enough talented humans out there pushing for acceleration, and enough talented people out there pushing for, I would say, pumping the brakes, for lack of a better way to put it, in certain areas. And I think in totality we'll reach

some optimal solution because of those influences. And I think it's always been like that, right?

I mean, in the early days of music, everyone was like, oh yeah, Napster and peer-to-peer, let's just go crazy. And then things settled down and we found a business model that worked. Maybe it's not perfect, right? People have a lot of gripes with the Apple and Spotify business models. But I think we found this globally optimal solution. I mean, it doesn't always work out right, though. Look at nuclear plants: in the past, the U.S. was going to build all these nuclear plants

to solve energy problems, and we didn't do it, because of regulations and because of fear. And now we're trying to solve all these global warming problems when we always kind of had nuclear there that we could have been using, and it worked. So it doesn't always work out. It often does, but it doesn't always.

Totally. I mean, one of the funnest topics is getting into how to regulate it. How do politicians and regulators even regulate this sort of nebulous set of technologies? It's not just large language models, right? There's all the perception AI stuff and

the implications there. It's a group of technologies that sort of permeates everything. And oh my God, some of the stuff I'm doing research on right now, just on the intersection of neuroscience and AI and what we're going to be able to do with passive neural interfaces, with earbuds and things like that: there are going to be some real big ethical quandaries that pop up. And so, yeah,

I'm trying really hard to still be a techno-optimist but bring that balanced perspective, and I think folks are going to like it. I think at the end of the day, the important thing is empathy, right? The perspective I come from is, I tend to be a very empathetic person. I want to hear both sides of the story; I want to hear both perspectives. I want to be empathetic to both sides.

If somebody is genuinely worried that this technology is taking their job, I want to understand why. I want to understand what we can do to mitigate the damage that could be done from this. I'm always going to come from that place of empathy, which is why I've never really identified with the e/acc movement. I don't necessarily think we should always be pushing everything forward

as fast as possible. I think we should be listening to the fears; we should be listening to the concerns; we should be figuring out middle grounds. Like you mentioned, there are always people on both sides, which creates a decent check and balance to make sure one side doesn't go too far and

AI nukes the world, but also that the other side doesn't go too far and technology stops advancing completely. Those checks and balances, I think, are a net positive overall. I think those need to be there. I agree overall. But I mean, I think the big argument e/acc would be making is that compounding is one of the most important

powers out there, right? The idea that if we build faster, then the technology in the future will be better and better and better, and we'll start solving real-world problems like cancer and all these other things, which maybe we could have been doing already if we weren't so quick to just regulate everything, right? And so I think with AI it's the same thing. Sure,

some regulation in the future might make sense, but if we just start throwing it out there right now, we're going to slow down the compounding; we'll stop the exponential from happening with our regulations. And yeah, maybe some jobs would be lost in the short term, but in the long term we could have cured cancer; we could have solved global warming issues and all kinds of other problems, if we had just waited to see what happens with the technology. And if there's a big problem later, okay, maybe make a regulation then,

but just don't do it right at the beginning, like they're trying to right now. I mean, India totally flipped their decision, right? Initially they were like, oh, you have to get every model approved, and then they said, actually, we're going to retract this part, which was really interesting. And regulation could also end up in a place where it just only benefits the incumbents, the largest AI labs, right? From the regulatory-capture point of view,

it could end up in a place where new innovation from a startup can't actually happen, because startups are the ones that get these onerous compliance requirements, and they can't afford a team of lawyers unless they're super VC-backed. And then what an inefficient use of VC capital: instead of innovating, you're navigating the legal landscape of a heavily regulated industry. So I think your point about nuclear is well taken too.

I mean, a lot of folks, including Gary, bring up the example of air travel: airplanes are so well regulated, and yet we've got this whole Boeing fiasco happening, right? You've got one really big incumbent, and there's probably a revolving door between the regulatory agencies and Boeing. And when the Wright brothers got started, they weren't being heavily regulated as they were inventing the plane. Yeah.

They were out there in Ohio, just trying shit out. So one thing a lot of people talk about is that, just like technology sunsets, regulations need to sunset, rather than us piling on more and more regulation. So I think this is where things get geopolitical too. I think China is so much savvier about AI regulation than the U.S. is right now. And I feel for the politicians; I think they're asking for this type of engagement,

and I think it'd be good if we engaged with them on this and brought those perspectives to bear, rather than just going "regulation bad, innovation good, let's keep innovating." And so it's nuanced. But then again, look, I've always been a Libra, kind of trying to build bridges between two worlds.

Yeah, no, I think this being a super nuanced conversation is the understatement of the episode. There are just so many different rabbit holes we could potentially go down when it comes to the regulation thing. I think you're going to have to be one of our recurring guests: every few months, jump on and nerd out about this stuff. But, you know, I

do want to give you the opportunity to tell us what else you're working on, if there's a place you think people should go check you out: your Twitter, your YouTube, obviously the TED AI podcast coming out later in May. Yeah. So just please follow me on Twitter. You can also follow me on YouTube and TikTok at Billy Effects.

If you're interested in some of the more long-form expositions I do, check out the Creative Tech Digest; it's both a newsletter and a YouTube channel. And yeah, of course, check out the TED AI Show. Maybe the one last thing I'll say is: if you're a founder or a builder in this space, building with any of the technologies we talked about,

and you're looking for early-stage investment, I'm also a scout for a16z Games. So just hit me up on Twitter, or you can email me; we'll put the email in the show notes as well. And I really appreciate you guys having me on. Wishing you all the success with your podcast. And Matt, I will see you at I/O. And Nathan, I hope to see you in 3D sometime soon.

Come out to Kyoto! Come on, I've got to make it happen. Awesome, Bilawal. Well, it's been a blast. This has been one of my favorite conversations we've had so far. So excited to see you in person next week. Cool. Cheers.