We're sunsetting PodQuest on 2025-07-28. Thank you for your support!

A Big Week in Tech: NotebookLM, OpenAI’s Speech API, & Custom Audio

2024/10/8

a16z Podcast

Tags: artificial intelligence and machine learning, AI product innovation

People:
Anish Acharya
Bryan Kim
Olivia Moore
Topics
@Anish Acharya : Argues that 2024 will be a breakout year for voice technology, noting that developers building conversational voice products today can access conversational performance similar to early ChatGPT. He analyzes the importance of real-time voice technology, how it unlocks AI experiences via the phone call, and applications in areas like healthcare. He also discusses the success of AI voice in B2B, the prominence of companion apps on the consumer side, and the potential, shown at OpenAI's Dev Day, for AI voice in high-touch, high-cost services such as language learning and nutrition coaching. @Olivia Moore : Details Google's NotebookLM and its Audio Overview feature, noting that its viral spread came not from a technical breakthrough but from the realism of the generated voices and the interplay between the hosts. She believes NotebookLM can produce deep interpretations, handle many kinds of data, and generate genuinely entertaining podcast content. She also explores its future potential, such as adding video and avatars, and applications in children's education. @Bryan Kim : Adds NotebookLM use cases, noting that its output is different every time but usually interesting and usable, and that the generated hosts can have great chemistry. He also discusses OpenAI's real-time speech-to-speech API, how it dramatically raises the quality of AI voice agents and makes them more suitable for enterprises, and analyzes how successful AI product launches need an element of the unexpected, alongside the importance of underlying technical progress.

Deep Dive

Chapters
Discussion on Google's NotebookLM and its new audio overview feature, which allows users to create AI-generated podcasts. The hosts explore the realism and usability of these generated podcasts and speculate on potential future applications.
  • NotebookLM's audio overview feature allows users to create customizable podcasts in over 35 languages.
  • The AI-generated podcasts exhibit realistic interactions and can delve into deep questions, making them engaging and informative.
  • Potential future uses include personalized educational content, digital diaries, and even AI-driven audio dramas.

Transcript


There are elements of it that are almost similar to early ChatGPT. Anyone who's now building a conversational voice product can have access to that level of conversational performance.

The way the majority of people may experience AI for the first time is actually going to be via the phone call.

We're taking the oldest and most information-dense of all of our mediums of communication, and finally making it almost programmable.

Phone calls are kind of this API to the world.

Within a couple weeks of deploying their voice model, they had three million users do twenty million calls.

Last week was yet another big week in technology. For one, NotebookLM, Google's latest sensation, has been making its way across the Twitterverse with its new Audio Overview feature.

The feature uses end-user-customized RAG, which basically means that people can create their own context window for generating surprisingly good podcasts across 35-plus languages. And to add to the voice mix, OpenAI held their Developer Day and announced their real-time speech-to-speech API, enabling any developer to add real-time speech functionality to their own apps. Plus, they noted a whopping three million active developers on the platform.
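NotebookLM's internals aren't public, but "user-customized RAG" as described above can be sketched simply: user-supplied documents are chunked, the chunks most relevant to a request are retrieved, and only those go into the generation prompt. A toy illustration, where keyword-overlap scoring stands in for real embedding search and all names and documents are illustrative:

```python
def chunk(text, size=40):
    """Split a source document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks, query, k=2):
    """Rank chunks by keyword overlap with the query; real systems use embeddings."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(user_docs, query):
    """The 'user-customized context window': only the user's own material is retrieved."""
    chunks = [c for doc in user_docs for c in chunk(doc)]
    context = "\n---\n".join(retrieve(chunks, query))
    return (
        "Using only the context below, write a two-host podcast dialogue.\n\n"
        f"Context:\n{context}\n\nTopic: {query}"
    )

docs = [
    "Minecraft Bedrock release notes: new mob added, crafting changes, bug fixes.",
    "Podcast hosting guide: pacing, banter, and interruptions keep listeners engaged.",
]
prompt = build_prompt(docs, "what is new in the Minecraft update")
```

Whatever the production retrieval stack looks like, the key property the hosts describe survives in this sketch: the generation is grounded in whatever the user dropped in, not the open web.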

Finally, we saw one video model company, Pika, break through the AI noise with their 1.5 model, giving us fodder to discuss what is really required to capture attention in 2024 and beyond. Today, we discuss all that and more with a16z consumer partners Olivia Moore and Bryan Kim, and general partner Anish Acharya. This was also recorded in two segments, one with Olivia and another with all three partners.

So you'll hear a switch between the two. Plus, Anish actually predicted that this would be the year of voice, despite it never historically working as an interface. In fact, Microsoft CEO Satya Nadella even previously called the past decade's generation of assistants, quote, "dumb as a rock."

Well, it certainly seems like we're turning a corner. Let's get started. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund.

Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures. Another big week in tech.

I think the biggest thing I've seen is NotebookLM. So just a quick callout for the audience: Google is kind of known for the side quest becoming the main quest, and this product actually has been around for a while.

It originated in 2023, but its new Audio Overview feature has been taking over Twitter with these AI-generated podcast hosts, which are surprisingly good. And I'm saying that as a podcaster; this is my job. So basically, what people can do is drop their own information into a context window, and then it'll use that to spin up these podcasts. Olivia, you've actually tried these out, right?

Yeah. So I think it originated as something for researchers or academics. The idea was that you would store all of your notes, all of your papers, all your information within the Google workspace. And then this new feature that they've added is these two AI agents. Essentially, they play the role of podcast hosts, and they go back and forth talking about the data, asking questions, getting into examples.

The thing that's really interesting to me about it going viral in the past week or so has been that there's actually nothing that feels incredibly new, or even in some ways incredibly cutting-edge, about it. Like, it's not the brand-new OpenAI real-time model that cuts voice latency down to almost nothing. In fact, with NotebookLM, you have to wait three to five, sometimes ten minutes for it to generate the episode once you click the button. I think what's really striking about it is the realism and the humanness of the voices, and then also how they interact with each other.

The interplay, the interruptions. Exactly.

They disagree with each other, they interrupt each other. Like, this is not just "upload a script and get it read out." It does feel like two human beings talking.

And to that point, the other kind of striking thing about it is that it's not just repeating or summarizing the points in whatever data sources you upload. They're actually asking and answering really interesting and deep questions. They're making comparisons.

They're making analogies. They're taking it a step deeper, almost like: how would you teach someone about this topic? I uploaded basically a bunch of true crime court case filings, and it did a podcast about the case.

And then it spent the last two minutes diving into the ethics of why we remain interested in true crime. Should we be using this information to create media, things like that? So it's really kind of a next-level interpretation of the content.

I would say totally. I've seen so many examples of this. Someone just uploaded their credit card statement, and it was able to grill them on that. And I don't think the grilling was even prompted; it was like, just talk about this, find something interesting within this.

Yeah, there has to be some sort of very creative prompting going on or something. One of the other use cases I loved was someone uploaded their resume and their LinkedIn profile, and it made, like, an eight-minute podcast describing them as this incredible, legendary, mythic figure and going over all the high points of their career.

I really like that, because I've seen some people also using some of the music LLMs and then using them for, let's say, a really nice birthday song. And so when you played with NotebookLM, was it the kind of thing where sometimes you're like, what is this? Or the majority of the time you're like, not quite what I want, and you're just pulling the slot machine? Was it like that, or was it first shot, I'm getting exactly the kind of podcast I was hoping for?

It is a little bit slot machine, in that the output is different every time. But I would say it's a lot more reliable, in that with almost every generation I would do, something would be interesting, it would be on topic, it would be usable. One example: I got very into it.

At first I was sticking to uploading academic papers. I was like, I'm going to use this for its intended purpose. And then in one of my generations, I was like, the hosts, they sound like they're flirting with each other, right? Yes, they have such good chemistry.

And so it was like: what would happen if I upload literally a one-sentence document that says, "I think you guys are in a secret relationship"? And they went off on, like, a two-to-three-minute podcast that sounds, I swear, like the meet-cute in a romantic comedy or something. It's incredibly emotionally compelling, I would say. And so now my vision is that I have to do, like, a full audio drama.

Then we have it. Exactly.

It'll be like the first fully AI avatar movie, using voices inspired by the NotebookLM characters.

This one's about AI, but, like, AI in relationships. Really? Yes, specifically AIs that are, like, hosting a show, like us, inside Google's NotebookLM environment.

Oh wow.

So, like...

Could we be secretly dating?

Exactly. That's it, that's the document.

As if someone thinks we're giving away secret love notes to each other through our banter.

Well, what was the end? Do they agree?

I mean, you'll have to listen to it and get your take. What if those AIs, you know, actually developed feelings for each other?

Like, real feelings? Yeah, exactly. So it's like you're saying two lines of code could fall in love over a spreadsheet or something. That idea... yeah, it's kind of wild, but also kind of, I don't know...

I know, right?

Intriguing.

And so, given that you've played around with it, and that a lot of the feedback is really good and people are pleasantly surprised by this, what's your reaction? Like you said, there are products like this out there.

I mean, with AI, there are so many trends, as we've seen, like products that get really hot one week and then something more interesting comes along. It could be me just being optimistic...

It feels like there's something here, and I hate to make this comparison, but there are elements of it that are almost similar to early ChatGPT, in that, one, it's really usable, even for people who aren't academics, people who don't know that much about prompting. Anyone can upload a paper and kind of generate a podcast. The other thing that feels ChatGPT-esque is that people are already using it, quote-unquote, off-label. And maybe it's not NotebookLM itself that becomes the winning product, we'll see.

I think there's a lot Google could do to extend this. They could make it a mobile app. You could customize the voices. I could see it being used for kids' bedtime stories if they tweak it a little bit. But I think something about the format of personalized podcasts, or personalized audio, is going to endure.

Some of the experiences, or the podcasts being generated, are no doubt impressive, but also feel maybe a little gimmicky, or like "cool, once." Is this really something that you can see evolving into something practical and useful?

I, for one, can see it actually becoming a real product, because right now it's doing podcasts, for example, but over time it may be easier to add avatars or videos as a backdrop for what they're talking about. And that basically becomes a full-on YouTube video that is very personalized. So one of the fun examples was, like, kids love Minecraft.

I love Minecraft. When there's, like, a new Bedrock edition that drops, there are release notes that are pages and pages long, and kids rely on YouTube to figure out what's new, like what changed. If you drop the release notes into NotebookLM and just say, tell me what's new, and tell it in a way that kids love...

Then it generates this ten- or twenty-minute back and forth: "Can you believe this new update?" Those are the types of things that actually become really interesting in an everyday use case.

It makes me want to have, like, a digital diary or something, where you can upload to it and it gives you a digest of your last month of life. Because the innovation is less a new medium and more, to your point, how they've really unlocked how to make any topic exciting, generate insights, and make it something that you really want to listen to and spend time on, with potentially unlimited outputs. And I totally agree: it could be videos, it could be avatars.

The interesting thing about that is, I always thought of it as: you can read something, watch something, or listen to something. But maybe a nuance of listening is listening in a conversation format. I do think there's something really magical about the two hosts going back and forth.

Yes. There was a TikTok I saw yesterday that had two million likes, completely organic, and it was a law school student who was studying for her midterm.

And she had uploaded, like, I don't know, sixty pages of lecture notes, and it generated a twelve-minute podcast for her to review before the exam. If you can hear another human being telling a story around an example or a case, it makes it so much easier to remember and understand.

You're basically opening up another lane. You can read something as you're listening to something, and as you're doing something else in the real world. Maybe another thing to talk about is OpenAI's Dev Day.

They released a lot, but maybe the highlight was this real-time speech-to-speech API. Anish, I know you've thought a lot about this idea that real time really matters for speech, and that latency is almost like a metric that we're going to hear a lot more about.

Yeah. There's a threshold above which voice doesn't really work as a modality for interacting with technology, because it doesn't feel real. And below that threshold, which is maybe three or four hundred milliseconds, you sort of hold the illusion of talking to a person. Phone calls are kind of this API to the world.

So it feels like the way the majority of people may experience AI for the first time is actually going to be via the phone call. And that is unlocked by the real-time technology.

And the crazy thing is, so much still runs on the phone, too. So even if you just think about one vertical, like healthcare: it's taking incoming calls from patients, it's doctors calling other doctors, calling pharmacies, insurers.

So if we think about how this becomes more real-time, are there different applications that you think are unlocked, in, let's say, music or education? How does real-time voice maybe change some of those industries?

Most of the ed tech products we've seen so far work like this: you attempt a homework problem, maybe then you take a screenshot, you upload it to an AI product that tells you whether it's right or not. And now with real time, both the voice and some of the video and vision model stuff, it's actually almost like having a tutor sitting next to you, going through it with you; even with some of the vision stuff, you can show it your piece of paper. So now AI is moving towards actually helping you learn, versus a lot of the use cases so far being maybe cheating-adjacent, like, how do I just get to the answer? Now it's: what is your process?

Really, really interesting. You're basically saying that, in a way, the lack of latency allows people to engage in that moment. Yeah. And in the past, maybe because there was more latency, people took shortcuts because they didn't want to wait.

Or if it's doing it with you, it can say: here's the way you're doing it, and here's another way, actually, that might make more intuitive sense for you to solve this math problem. It's going along the journey of understanding with you, versus just being kind of answer- or outcome-based, which is what a lot of the AI ed products have been historically.

What's really interesting about that is that there's a sort of design language, or design cues, already built into conversations. So interrupting is one; the sort of "uh-huh, uh-huh" is another. That actually should unlock much more interesting product experiences as well, because of course the latency is necessary for that, but so is the ability to even understand these parts of, I don't know, they're not quite nonverbal, but they're not part of the explicit spoken language either.

For a lot of products, especially in consumer, it's not just about being optimal, per se, or perfect, right? In fact, what a lot of people are commenting on when you see the NotebookLM examples is that it's the filler words, it's the interrupting, it's the imperfections that people are drawn to.

This is a big step forward. For anyone who tried to use the ChatGPT voice mode before: essentially, you would press a button, you would say something, the AI would pause, it would interpret it, it would generate something to say back, and then it would return an answer. But it took at least a couple seconds and was very buggy.

It was very glitchy. It was more like sending a voice memo, having someone hear it, and getting a voice memo back, than having an actual live conversation with a human. And so the new model is truly more like almost-zero-latency, full live conversation. This has been available through ChatGPT's own advanced voice mode, which people are using and loving. But what happened this week at Developer Day was them essentially making that available via API for every other company. So anyone who's now building a conversational voice product can have access to that level of conversational performance, which is huge and really exciting, because it brings a lot of AI conversation products from barely workable, or not really workable, to suddenly extremely good and very human-like.
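For context on what "available via API" looks like in practice: OpenAI's Realtime API is WebSocket-based, and the client configures a session by sending JSON events before streaming audio both ways. A minimal sketch of the session-configuration step; the event shape follows OpenAI's published Realtime API docs at the time of recording, but treat the model name and field values as assumptions that may have changed, and note that a real client would also stream microphone audio:

```python
import json

# Sketch of configuring an OpenAI Realtime API session over WebSocket.
# Endpoint and model name are assumptions from the docs at the time.
REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

def session_update(instructions: str) -> str:
    """Build the session.update event a client sends right after connecting."""
    event = {
        "type": "session.update",
        "session": {
            "modalities": ["audio", "text"],
            "voice": "alloy",
            "instructions": instructions,
            # Server-side voice activity detection lets the model notice
            # when the caller stops (or starts) talking, which is what
            # enables the natural interruptions discussed above.
            "turn_detection": {"type": "server_vad"},
        },
    }
    return json.dumps(event)

msg = session_update("You are a friendly phone agent for a pizza shop.")
# A real client would open a WebSocket to REALTIME_URL with an
# Authorization header, send `msg`, then stream audio deltas both ways.
```

The point of the single speech-to-speech session is exactly what the hosts describe: no separate transcribe-then-respond-then-synthesize hops, so the latency and the voice-memo feel disappear.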

Totally. You had a tweet that said: this is a massive unlock for AI voice agents; I'm expecting to see a lot more magical products in the next few months; we're quickly leaving the era of latency and conversational experience being a blocker. Can you speak just a little more to that in particular?

Yeah, absolutely. Many of the AI voice products didn't really feel even SMB-caliber in terms of quality, let alone something an enterprise could actually deploy. So now it is, I think, arguably enterprise quality, in terms of real companies being able to replace humans on the phone with an AI on the phone.

We're seeing this for all sorts of use cases, the most obvious maybe being having someone answer the phone at a pizza shop to take orders, or at a small business to book nail appointments, all the way to things that are a lot more complicated, like even doing screening interviews with AI, which is crazy to think about, but it's happening. Or even more vertical-specific use cases: freight brokers spend all day on the phone calling carriers, calling truckers, trying to find someone to haul a load in a certain price range. Now you can do that with an AI that can call a hundred carriers at once and negotiate the price, instead of having a human being do those calls sequentially all day. This new API, and other open-source attempts at the same type of model, is really going to allow those products to shine.

Yeah. Some of the products you're describing are kind of voice-first. But many of the apps that we've had to date are typically not voice-first, perhaps because we didn't actually have the technology. And so I want to refer to Anish's big idea at the end of 2023, which right now feels very right on. It said that voice-first apps will become integral to our lives, and he basically said that despite voice being the oldest and most common form of human communication, it's never really worked as an interface for engaging with technology.

Yeah, it feels like voice is one of the biggest things being unlocked by AI. Voice is the easiest content to create, and we're all creating audio all day, every day, essentially. But that content has never really been captured or used or automated. In some ways, even outside of real time, there are so many products now that will listen to your meeting, and will hear you say something and can automatically Slack someone with a follow-up, or use it to trigger a commit on GitHub or a task on Asana that your team has to follow up on. And so I think what we're seeing now, with both real-time voice and non-real-time voice, is we're taking the oldest and most information-dense of all of our mediums of communication and finally making it almost programmable and usable in a really powerful way. The one thing I think we didn't quite predict when we were forecasting voice for this year was that it's really, really been working for B2B, and not as much on consumer quite yet.

We're getting there. I think in B2B, even thinking about the voice agents: a lot of businesses are struggling to find people to answer the phones for all sorts of roles, and are struggling to retain them.

It's expensive, and so it's super natural to plug in an AI that can perform at a similar quality. The consumer use cases are a little bit less obvious. It's probably worked the most in companionship so far. So again, ChatGPT's advanced voice mode, or Character.AI; I think they announced that within a couple weeks of deploying their voice model, they had three million users do twenty million calls.

Yes. Wow.

Because if you are spending hours each day anyway talking to this companion, giving it a voice and making it more real makes a lot of sense. So that, to me, was the shining star of voice so far. OpenAI did highlight two other consumer use cases on Developer Day.

And both of them were actually these kind of high-touch, expensive human services, almost, that are now democratized with AI. So one of them is a company called Speak that does language learning. This might be controversial.

I love Duolingo as a product, I love it as a brand, but I think it's hard to use it to learn a language end-to-end, because it's just limited as an interface. So if you really want to learn a language, you might have to pay someone, I don't know, fifty to a hundred dollars an hour to tutor you. And so the idea of Speak is that you have an AI voice agent that is essentially your language tutor, and it's much more accessible and affordable.

So that was one. And then the second one they highlighted was: what if you had a nutritionist powered by AI? So this is a product called Healthify, where you can send in photos and then talk live about what you're eating every day and about your diet. So I think we'll see more of those use cases unlocked with better voice models.

Yeah, I need that. I've been saying for a while, and I didn't think of it specifically for voice, but I need an AI to just call me out on my B.S.

Yes, yeah. Here are your goals: you said you were going to run, and you didn't do the things that you said you were going to do. But also, what you're describing gets to the Duolingo-versus-Speak example.

But in Anish's prediction, he also talks about how, yes, some of these big companies are going to integrate this technology, but Gmail is probably still going to look like Gmail. And so how do you think about that balance between the incumbents utilizing this technology and what's going to sprout up?

Yeah, you know, it's really interesting, and something that we watch really closely in consumer in particular, because you would think that the Googles and the Microsofts have all of your data, they have all of your permissioning. There's a lot that they could do.

I think what we've seen is that they're structurally, in some ways, disadvantaged in building towards this AI shift in a really native way. One, these are big companies now: they have a lot of people, they have a lot of competing priorities.

And then the second thing would be that, in some ways, they would cannibalize their own products. Like, our view has been that Google is likely to maybe add AI to augment Gmail, but are they likely to create the AI-native version of Gmail, the one you could only conceptualize in the past three to six months? Probably not, just because, again, of how big of a company they are and the fact that they have so much riding on the continued success of the existing product.

A good example of this is actually Zoom. They added transcription, and are people using that? Yes. But there's also been a crop of products that are independently successful in doing AI meeting notes, and those largely are building towards more specific and opinionated workflows for different types of jobs or tasks. And that's just something that Zoom is never going to do as such a broad-based platform.

Or talk about a completely new platform: imagine Zoom, but it's asynchronous. Yes, they're never gonna build that. Exactly, they're inherently synchronous.

Clearly, OpenAI is investing in voice, right? And that's not necessarily a given. If you think about it, they also do imagery; they haven't talked about DALL·E in a while. They also do video; Sora came out a little while ago. But there really seems to be this voice push, despite them operating across modalities. Is that a signal people should be paying attention to?

I think so. I think we've seen already, even though it's still so, so early, eras of AI so far. Creative tools was the first era, and it's still a massive era; I think we saw a ton of investment in image generation, video generation, music generation, much of which is still happening. Especially, it feels like as AI moves from pure consumer use cases into more controllable, highly monetizable enterprise use cases, it does feel like voice is kind of a unique unlock, in that it's a real game changer for companies in particular to be able to capture and utilize this audio data that they never had before.

Maybe another thing worth talking about here from Dev Day is that they announced they have three million active developers in the ecosystem, and they tripled the number of active apps in the last year. Since you've been studying consumer for so long, maybe ground the audience: how much quicker is this happening, per se, than, let's say, the app era, when Apple released its App Store? How long did it take to get three million active developers building on it?

And just how big is that kind of number today?

I don't know that number off the top of my head, I'll be honest.

It's incredible. My math was, I don't know the exact numbers, but say each developer reaches maybe two hundred or a thousand unique users; that's sort of how I think about the reachability of what they're building.

I think the other question is: what is the revenue per developer in the App Store era versus AI?

That's very interesting. There was a dataset that I think I put out where you look at, not necessarily the App Store one, but a set of historical SaaS companies versus GenAI companies, and how the GenAI companies are reaching scaled revenue way faster than their counterparts.

I think a big part of that, though, is because GenAI is so well set up for consumption revenue, and so many SaaS businesses are per-seat; like, you pay a fixed price for the service monthly. With a lot of these new businesses, you're paying on a consumption basis, and they're also pricing it as a subset of labor costs, which are traditionally priced far higher than software.

I think that's a far more compelling argument for why the revenue ramp is much faster. I think the reason the report gave was that GenAI companies incur training costs up front, and therefore their pressure to make money is higher than SaaS. Which, maybe. But we know the ones that are making money aren't necessarily incurring a huge training cost up front; much more likely it's that they're replacing labor costs, or that the product is just so useful, so unique, that the willingness to pay is much higher, for sure.

I mean, I might buy that argument in consumer, in that the willingness to pay is way higher post-GenAI than pre-GenAI. So maybe. But for SaaS, I mean, SaaS businesses always existed to make money.

But the developer community: three million people actively building on it today, based on how young this platform is; like, that is wild. Yeah.

I also think I'm seeing so many people who wouldn't have previously called themselves developers creating just really small apps, maybe even using the API for themselves, in a way that, if you use the parallel of the App Store in the past, you weren't really creating an app for yourself back then; the barrier to entry for that would just be too high. And it just wasn't done by many.

You know, the story of a lot of productivity and consumer companies is enabling app creation. Like, Notion is a big app platform, actually; people have created these, like, daily habit tracker apps.

Totally, yeah. Airtable, obviously; there's products like Retool. But there's a lot of people who have had at least this latent demand to make apps, especially people who are not technical, in a business context or a hobbyist context. And I think AI is really unlocking that.

Yeah, the Airtable example is a very good one, because we're seeing this, maybe, fragmentation, in a positive way, of the types of developers that are building on OpenAI models. There are literally people we talk to who are like: I'm never gonna raise venture funding; I'm printing cash, basically.

I'm making a million or two million dollars a month off of this. Not always simple things; sometimes very sophisticated products that target maybe a really specific use case. So we see that, and that could be an OpenAI developer. But also we see a developer who's like: no, I'm going to build a fifty-billion-dollar company utilizing or fine-tuning these models.

So, similar to the App Store, we see a big range of people, from "I'm just gonna be a solopreneur making an app" to "I'm going to build a generational business on top of the App Store." Maybe the difference to me here so far has been, as with everything in AI, the scale of the curve, or the speed of the ramp. I don't think we saw, especially in the early days of the App Store, solopreneurs making millions of dollars a month. That's something that has been very uniquely enabled by AI.

Yeah, and you see this overlapping with the code LLM space. You've got Cursor and Replit and all of these tools that allow people who couldn't code before to become developers.

Totally, yes. You don't have to be a developer or a designer; there are so many skill sets now that you can abstract away to AI, as long as you have good taste and good ideas. That tooling did not exist in the App Store era and now exists in the AI era.

Well, maybe to that end: clearly there's a lot of building happening, and we've talked about this before, but I'd love to talk about the playbook, right? You're going to go build something with AI; it's more competitive than ever to get attention.

And so maybe one frame of reference to talk about that against is Pika, which launched 1.5 this week, and I just saw so many meme videos; it was so viral. People squishing things and inflating things, right? Taking a meme and distorting it, exactly. It was actually really fun.

So in a pretty intuitive way, I understand why that kind of model went viral. But we're getting to the point where there's fatigue when someone releases a new model. I'd love for you to maybe break down what you might call the anatomy of a successful launch in this world.

If you think about video as a category: when Sora first came out with their examples, minds were blown. Yes, minds were blown. And I think that became this front-of-mind thing of, oh my god, you can create AI-generated videos now. The interesting thing about video is that not all of it is created equal, right?

There's character-centric video, and then you have more of the scene-generation video: what is happening in the scene. The content density of the video always mattered, right? Slow-motion movement of a scene is video,

but it's a lot less interesting. A cat walking around a garden: interesting, but it's just a cat moving, cool. What we're seeing now is these products becoming a lot more opinionated and a lot more specific. We talked about Pika, but you also have the likes of Viggle, where it's templatized what you can do, like the Lil Yachty dance walk-out scene. That's very opinionated: it's not any video, it's a very specific movement.

And a scene where you're putting yourself in. Pika is the same thing, with all the sort of templates that are going viral: you take a specific object in the video and you're modifying it, whether you're squishing it, or blowing it up like it's inflating and it floats away. It's sort of unexpected, what's happening in the video. Right? It's not a cat walking from point A to point B, "how interesting."

You don't expect the meme guy looking at the other woman to actually get squished in the picture, or all these different meme characters to be blown up all of a sudden. And I think that unexpectedness is sort of the next evolution of what's happening.

Yeah. I mean, one thing that's really interesting there is that there's a subset of things people expect from video, and with AI, it's not enough to just give people that. Or maybe there is some subset; if you're creating a stock video company, that's one thing. But in order to go viral, in order to garner attention in this very busy world, you need some sort of unknown quantity.

To that point, they could have easily said: we want video to be longer, because that's hard. That's really hard; like, a thirty-second video with some consistency in the scenes is a difficult thing to do. They could have done that. But instead, the team decided: you know what, we're gonna pick objects in the scene and do weird stuff with them.

Do you think that's required now to basically design around some sort of viral element?

I think unless there has been a large, shocking development in the underlying modality, again, video with the Sora-type moment, you do need some unexpected element of, again, opinion to garner attention. Or the quality just needs to be orders of magnitude better, not just twenty percent better, but much better, and then I think you get attention. But that's the underlying tech evolution, which I think we'll continue to see as well. So I wouldn't say the playbook is that the only way to do it is to come up with wacky, very attention-grabbing things. There is, of course, the underlying technical evolution that will continue to push the boundary forward.

All right, that is all for today. If you did make it this far, first of all, thank you. We put a lot of thought into each of these episodes, whether it's the guests, the calendar Tetris, cycles with our amazing editor Tommy, until the music is just right. So if you like what we've put together, consider dropping us a line at ratethispodcast.com/a16z and let us know what your favorite episode is. It'll make my day, and I'm sure Tommy's too. We'll catch you on the flip side.
