
Erik Bernhardsson on Creating Tools That Make AI Feel Effortless

2025/1/9

No Priors: Artificial Intelligence | Technology | Startups

People
Erik Bernhardsson
Topics
I founded Modal Labs to build a serverless cloud platform that simplifies workflows for AI, machine learning, and data applications. My earlier work at Spotify and Better.com showed me that existing cloud infrastructure falls short on usability and efficiency. Modal Labs therefore aims to make the cloud development experience as good as local development and to provide instant access to large amounts of GPU capacity, addressing GPU scarcity and cost. We set aside Docker and Kubernetes and built our own file system, scheduler, and container runtime to get faster feedback loops and higher efficiency. Modal Labs' first killer app was Stable Diffusion-based generative AI, and we now support other modalities such as audio and music. We offer fully usage-based billing, so customers don't have to worry about capacity planning. Our goal is an end-to-end platform covering the whole machine learning lifecycle, including data pre-processing, training, and inference. What sets us apart from competitors is that we are a cloud-native, general-purpose platform that offers instant access to hundreds of GPUs and supports running user-defined code. We believe that any company that cares about model quality will eventually need to train its own models to build a competitive moat. We are also watching how AI infrastructure evolves: vector databases, more efficient storage for training data, and techniques that reduce the bandwidth demands of training workloads. We believe AI will unlock more latent demand and profoundly affect fields such as software engineering and physics.


Key Insights

What inspired Erik Bernhardsson to start Modal Labs?

Erik Bernhardsson was inspired to start Modal Labs after spending years building infrastructure at Spotify and Better.com. During the pandemic, he realized the need for better infrastructure for data, AI, and machine learning, which led to the creation of Modal. His goal was to make cloud development feel as seamless as local development, focusing on fast feedback loops and developer productivity.

What is Modal Labs' primary focus today?

Modal Labs focuses on providing a serverless cloud platform tailored for AI, machine learning, and data applications. It offers a Python SDK that allows developers to write code as functions, which are then turned into serverless functions in the cloud. Modal handles containerization and infrastructure, making it easy to access thousands of GPUs and CPUs on-demand, particularly for inference workloads like Stable Diffusion and AI-generated music.
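
As a concrete illustration of this pattern, here is a minimal sketch based on Modal's public Python SDK; the app name and function body are illustrative, and decorator and method names can vary across SDK versions:

```python
import modal

# An App groups the functions Modal will deploy; the name is illustrative.
app = modal.App("example-app")

# The decorator turns an ordinary Python function into a serverless
# function: Modal builds the container and runs it in its compute pool.
@app.function()
def square(x: int) -> int:
    return x * x

@app.local_entrypoint()
def main():
    # .remote() ships the call to the cloud instead of running in-process.
    print(square.remote(12))
```

Run with something like `modal run example.py`; containerization and scheduling happen behind the scenes, as described above.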

Why is GPU flexibility important for AI workloads?

GPU flexibility is crucial because AI workloads, especially inference, are often unpredictable and bursty. Traditional cloud providers require long-term commitments for GPU access, which is inefficient for startups. Modal Labs addresses this by offering fully usage-based pricing, allowing customers to access GPUs on-demand without over-provisioning or under-provisioning. This approach is particularly beneficial for inference and shorter, experimental training runs.

How does Modal Labs differentiate itself from competitors?

Modal Labs differentiates itself by being a cloud-native, multi-tenant platform that offers instantaneous access to thousands of GPUs and CPUs. It focuses on high-code ML engineers, allowing them to run custom code in containers without worrying about infrastructure. Unlike competitors that specialize in specific areas like inference or language models, Modal aims to be a general-purpose platform, covering the entire machine learning lifecycle, from data pre-processing to training and inference.

What are Erik Bernhardsson's thoughts on training custom models versus using off-the-shelf solutions?

Erik believes that companies where model quality is critical should eventually train their own models to establish a competitive moat. While off-the-shelf solutions can be useful, custom models are essential for domains like audio, video, and image generation, where differentiation is key. Training custom models ensures that a company's solution is superior and defensible in the long term.

How does Erik see AI impacting the field of coding?

Erik views AI as another tool that enhances developer productivity, similar to compilers, higher-level programming languages, and cloud computing. He believes that AI will unlock more latent demand for software engineers, as it has historically done with other productivity improvements. Rather than reducing the need for engineers, AI will likely lead to an increase in demand for software development.

What excites Erik about AI's potential in music and audio?

Erik is particularly excited about AI-generated music, as it represents a frontier that was previously impossible. He notes that while current AI-generated music still has an uncanny valley effect, each generation of models is improving. Companies like Suno, which uses Modal for large-scale inference, are pushing the boundaries of what AI can achieve in music, enabling entirely new products and experiences.

What gaps does Erik see in AI infrastructure today?

Erik identifies several gaps in AI infrastructure, including the need for more efficient storage solutions for training data and the challenge of making training workloads less bandwidth-intensive. He is also interested in the evolution of vector databases and how AI-native storage solutions might differ from traditional databases. Additionally, he sees opportunities for innovation in computational biology and physics-based AI models.

Chapters
Erik Bernhardsson, CEO of Modal Labs, discusses his journey from building infrastructure at Spotify and Better.com to founding Modal, a serverless platform for AI and machine learning applications. He highlights the lack of efficient data infrastructure in his early career and his desire to build a better solution.
  • Founded Modal Labs to improve AI/ML infrastructure
  • Prior experience at Spotify building recommendation systems and at Better.com
  • Recognized challenges in existing cloud development workflows

Transcript


Today, I'm chatting with Erik Bernhardsson, founder and CEO of Modal. Modal developed a serverless cloud platform tailored for AI, machine learning, and data applications. And before that, Erik worked at Better.com and Spotify, where he led Spotify's machine learning efforts and built the recommender system. Well, Erik, thanks so much for joining me today on No Priors. Yeah, thanks. It's great to be here. So if I remember correctly, you worked at Spotify and helped build out their ML team and recommender system, and then were also at Better.com. What inspired you to start Modal, and what problem were you hoping to solve? Yeah.

Yeah, I started at Spotify a long time ago, 2008, and I spent seven years there. And yeah, I built a music recommendation system. And back then there was like nothing really in terms of data infrastructure. Hadoop was like the most modern thing. And so I spent a lot of time building a lot of infrastructure. In particular, I built a workflow scheduler called Luigi that basically no one uses today. I built a vector database called Annoy that, you know, for a brief period people used, but no one really uses today.

So I spent a lot of time building a lot of that stuff. And then later at Better, I was a CTO and thinking a lot about developer productivity and stuff. And then during the pandemic, I took some time off and started hacking on stuff. And I realized I always wanted to build basically a better infrastructure for these types of things, like data, AI, machine learning. So pretty quickly realized this is what I wanted to do. And that was sort of the genesis of Modal. That's cool. How did that approach evolve? Or what are the main areas that the company focuses on today? So I started looking into...

First of all, just: what are the challenges with data, AI, and machine learning infrastructure? And I started thinking about it from a developer-productivity point of view: what's a tool I want to have? And I realized a big challenge is that working with the cloud is arguably kind of annoying. As much as I love the cloud for the power that it gives me,

and I've used the cloud since way back, 2009 or so, it's actually pretty frustrating to work with. And so in my head, I had this idea of, what if you make cloud development feel almost as good as local development, right? Like, how do you get these fast feedback loops?

And so I started thinking about, how do we build that? And realized pretty quickly, well, actually, we can't really use Docker and Kubernetes. So we're going to have to throw that out. And probably going to have to build our own file system, which we did pretty early, and build our own scheduler, and build our own container runtime. And so that was basically the first two years of Modal, just laying all that foundational infrastructure layer in place.

Yeah. And then in terms of the things that you offer today for your customers, what are the main services or products or-- Yeah, so we're infrastructure as a service, which means on one side, we run a very big compute pool, like thousands of GPUs and CPUs. And we make it very easy to get-- if you need 100 GPUs, we can typically get you that within seconds.

So, sort of, one big multi-tenant pool, which means capacity planning is something we take on, you know, something we solve for customers. They don't really need to think about reservations. We always provide a lot of on-demand GPUs. On the other side, there's a Python SDK that makes it very easy to build applications. So the idea is you write code,

basically functions in Python. And then we take those functions and turn them into serverless functions in the cloud. We handle all the containerization and all the infrastructure stuff, so you don't have to think about all this Kubernetes and Docker stuff. And the real killer app: we started this company pre-gen-AI, but as it turns out, the main thing that really started driving all the traction was when Stable Diffusion came out. And a bunch of people came to us and were like, hey, actually, this looks kind of cool. You have GPU access. It's very easy; you don't have to think about, you know, spinning up machines and provisioning them.
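
To make that use case concrete, here is roughly what serverless diffusion inference can look like with Modal's SDK. A sketch only: the model ID, package list, and GPU type are assumptions for illustration, not details confirmed in the episode:

```python
import modal

# Container image with the Hugging Face diffusion stack; package and
# model choices here are illustrative, not an official Modal recipe.
image = modal.Image.debian_slim().pip_install(
    "diffusers", "transformers", "accelerate", "torch"
)

app = modal.App("sd-inference", image=image)

@app.function(gpu="A10G")  # one GPU per container, drawn from the shared pool
def generate(prompt: str) -> bytes:
    import io
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative model ID
        torch_dtype=torch.float16,
    ).to("cuda")
    img = pipe(prompt).images[0]
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()

@app.local_entrypoint()
def main():
    png = generate.remote("a watercolor robot playing guitar")
    with open("out.png", "wb") as f:
        f.write(png)
```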

So that was our first sort of killer app: just doing gen AI in a serverless way, with a focus on diffusion models. Now we actually have a lot more different modalities. A lot of usage is still text-to-image, but we also see a lot of audio and music. So one example of a customer I think is super cool, building really amazing stuff, is Suno,

which does AI-generated music. So they run all their inference on Modal, at very large scale. There's a lot of customers like that, sort of, you know, building cool gen AI models. In particular, I would say in the modalities of audio, video, image, and music, stuff like that. That's cool. And I think Suno's using a Transformer backbone now for stuff, right? Versus a diffusion-model-based thing. I think it's a combination of both. I'm not sure. Yeah.

Yeah, I think they talk about it publicly. That's the only reason I mention it. You wrote a post in October, I think it was called The Future of AI Needs More Flexible GPU Capacity. And in general, what I've heard in the industry is that a lot of the ways people use GPUs are reasonably wasteful.

And so I'm a little bit curious about your view on flexibility around GPU use, how much is actually used versus wasted, how much optimization is left, you know, even just with existing types of GPUs that people are using today. Yeah, GPUs are expensive, right? And I think there's sort of a paradox there: for a lot of the cloud GPU capacity, the only way to get it is to sign long-term commitments, right?

Which I think for a lot of startups is really not the right model for how things should be. I think the amazing thing about the cloud was always, to me, that you have on-demand access to however many CPUs you need. But for GPUs, the main way to get access over the last few years, due to the scarcity, has been to sign long-term contracts. And I think fundamentally that's just not how startups should do it. A big part of that has been supply-demand issues. But yeah.

Just looking at the CPU market, the fact that you have instant access to thousands of CPUs if you need it, my vision has always been there should be the same thing for GPUs. And that means, especially as we shift more to inference, I think for training, it's been sort of less of an issue because you can sort of just make use of the training resources you need. But for inference especially, you don't even know how much you need in advance. It's very volatile. And so a big challenge that we solve for a lot of customers is we...

We're fully usage-based. So when you run things on Modal, we charge you only for the time the container is actually running. And that removes a massive hassle for customers: doing the capacity planning and thinking about how many GPUs they need. And then having the issue of either you over-provision and you're paying for a lot of idle capacity, or you under-provision and then, when you run into a capacity shortage, you have degradation in service. And so...

Whereas with Modal, we can handle these very bursty, very unpredictable workloads really well. Because we basically take all these user workloads and just run them on a big pool of thousands of GPUs across many different customers. Yeah. One of the things that always struck me about training is, to your point, you kind of spin up a giant cluster. You run a huge supercomputer, right? And then you run it for months in some cases. And then your output is a file. And that's literally what you've generated. You know, it's kind of insane if you think about it.

Yeah. And that file in some sense is a representation of the entire internet or some corpus of human knowledge or whatever. And then to your point with inference, you need a bit more flexibility in terms of spinning things up and down or alternatively, if you're doing shorter training runs or certain aspects of post-training, you may need more flexible capacity to deal with.

Totally. And that's something we're really interested in right now. Traditionally, most of Modal has always been inference. That's been our main use case. But we're really interested also in training. In particular, probably focused more on these shorter, very bursty, experimental training runs. Not the very big training runs, because I think that's a very different market. So that's a very interesting thing we're looking at. How do you think about meeting people's end-to-end needs? I know that there's a lot of other things that people do. A lot of people are using RAG to basically augment what they're doing or...

You know, there's a variety of different things that people are now doing at inference time in terms of using compute to, you know, take different approaches there. You know, I'm a little bit curious how you think about the end-to-end stack of things that could be provided as infrastructure, and where Modal focuses or wants to focus. Yeah, totally. I mean, our goal has always been to build a platform and cover the end-to-end use case. It just turned out that inference was what we were well positioned to focus on as our first killer app.

But my end goal has always been to make engineers more productive and focus on what I think of as the high-code side of ML. Our target audience tends to be more, sort of, traditional ML engineers, people building their own models. But there's many different aspects of that. There's the data pre-processing, then there's the training, and then there's the inference. And there's actually probably even more things, right? Like, you know,

having feedback loops where you gather data and online ranking models and all these things. And so my goal for Modal has always been to cover all of that stuff. And so it's interesting, you see a lot of customers now, we don't have a training product, but a lot of customers use Modal for batch pre-processing. So they use Modal to, maybe they're training a video model. So maybe they have like petabytes of video. So then they use Modal actually, maybe with GPUs even to like do feature extraction. And then they train it elsewhere.
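
A rough sketch of that batch pre-processing pattern follows; the bucket paths, GPU type, and the trivial stand-in body of extract_features are all hypothetical:

```python
import modal

app = modal.App("batch-feature-extraction")

@app.function(gpu="T4", timeout=600)
def extract_features(video_uri: str) -> list[float]:
    # Hypothetical stand-in: a real job would download the clip and run
    # a vision model over its frames; here we return a dummy embedding.
    return [0.0] * 512

@app.local_entrypoint()
def main():
    # Hypothetical object-store paths for a large video corpus.
    uris = [f"s3://example-bucket/clip-{i}.mp4" for i in range(10_000)]
    # .map fans the inputs out across many containers in parallel; the
    # pool scales up for the burst and back down when the job finishes.
    features = list(extract_features.map(uris))
    print(f"extracted {len(features)} feature vectors")
```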

And then they come back to Modal for the inference. So for us to do the training makes a lot of sense. And in general, I think it makes a lot of sense to build a platform where you can handle the entire machine learning lifecycle end-to-end, and many other things related to that: also the data pipelines and nightly batch jobs and all these things. Yeah. I mean, what you describe is a pretty broad platform-based approach. I think there's a handful of companies who are sort of in your general space or market. How do you feel that Modal differentiates from them?

I think, first of all, we're cloud native. We're just cloud maximalists. We went all in and said, "We're going to build a multi-tenant platform that runs everyone's compute." And the benefits of that are tremendous, because we can just do capacity management much better. And that's one of the ways we can offer instantaneous access to hundreds of GPUs if you need to. You can do these very bursty things and we just give you lots of GPUs, right?

I think the other benefit, the other differentiation, is being very general purpose. We focus on, as I mentioned, the high-code side: we run custom code in our containers, in our infrastructure, which is a harder problem. Containerization and running user code in a safe way is a hard problem. And then dealing with container cold start. And like I mentioned, we had to build our own scheduler, our own container runtime, and our own file system to boot containers very quickly.

Unlike many other vendors, who are only focused on, say, inference, or maybe only LLMs, our approach has always been to build a very general-purpose platform. And in the long run, I hope that will become more clear, because I think there are many other products we can build on top of this now that the compute layer is becoming more and more mature. When I talk to large enterprises about how they're thinking about adoption of AI,

many of them already have their data on Azure, GCP, or AWS. They're running their application on it. They've bought credits in the marketplace that they want to spend. They've already gone through security reviews. You know, they've kind of done a lot, and they worry about things like latency or pings out to other third-party services, versus just running on their own existing cloud provider, or the hyperscaler that they work with, or set of hyperscalers. You know, many of them actually work across multiple services.

How do you think about that in the context of Modal, in terms of your own compute versus hyperscalers versus, you know, the ability to run anywhere? Yeah, totally. And of course, there's also a security and compliance aspect of this. I think, you know, it is a challenge. I look back at when the cloud came. I remember back in like 2008, 2009, the cloud came, and my first reaction was,

how the hell, why would anyone put their compute on someone else's computer and run it there? To me, that was just insane. Why would anyone do that? But over the next couple of years, I was like, actually, it kind of makes a lot of sense. And I think now, even among enterprise companies, there's a recognition that, yeah, actually, our compute is probably safer with the big hyperscalers. And in a similar vein,

I remember talking to Snowflake back in, say, 2012 or something like that. And they had a sort of similar approach, where they basically said: we're going to run databases in the cloud, and it's not going to be in your environment, you know, or maybe your environment, but we're infrastructure as a service. And I thought that was nuts. And then, obviously, Snowflake now is a very large, publicly traded company. I think they showed that

infrastructure as a service makes a lot of sense. And so I think there is a little bit of resistance to adopting this multi-tenant model. But when you look at security and adoption of cloud, I think we have a lot of tailwinds blowing in our direction. I think security is moving away from the network layer into the application layer. I think bandwidth costs are coming down. And I think there's a lot of tricks you can do to minimize bandwidth and transfer costs.

You can store data in R2, for instance, which has zero egress fees. Realistically, it's something we're going to have to push on a lot. But I think there's so many benefits of this multi-tenant model in terms of capacity management that, to me, it is very clearly a big part of the future of AI: running a big pool of compute and slicing it very dynamically. You mentioned earlier that one of the things that really

caused early adoption of Modal was Stable Diffusion and these open-source models around image gen. Are there any open-source projects or models that you're seeing be very popular in recent days, or in the last couple of months, that have really started taking off? That's a good question. I think, if anything, it's actually been a little bit of a shift towards more, like, proprietary models, but proprietary open-source models. So Flux, I think, most recently has been

a model that's getting a lot of attention. I'm personally very interested in audio. I think it's still very underexplored. I think there's a lot of opportunity for open-source models in that space. But I don't think we've seen anything really cool yet.

What else do you think is missing in the world today in terms of AI infrastructure or infrastructure as a service? I'm very biased, but I think Modal is what's been missing: basically, a way for engineers to take code and run it. And look, I'm very bullish on code, and on people wanting to write code and build stuff themselves. I think outside of the LLM space, which is a very kind of

different world, in my opinion. I think there's always going to be a lot of applications where people want to train their own models, they want to run their own models, or at least run other models but have very custom workflows. And I just don't think there's been a great way to do that. It's pretty painful to do that. And so...

I think that's pretty exciting. I think on the storage side, there's some other really exciting stuff. We haven't really touched storage at Modal; we focus very much on compute. So I'm personally very interested in vector databases. How are they going to evolve? I don't think anyone really knows. I'm pretty interested in more efficient storage around training data. I'm also very interested in... I guess another thing I'm very fascinated by right now is

training workloads. In order to train large models efficiently, you have to really spend a lot of money and time setting up the networking. So one of the things I'm really excited about is what if we can make training less bandwidth hungry? Because I think that would actually change a lot of the infrastructure around training, where you can now kind of tie together a lot of GPUs in different data centers and

not have to have these very large data centers with, you know, InfiniBand and stuff. So that's another infrastructure thing I'm looking forward to seeing more development on. How important... So there's sometimes been a little bit of debate around vector DBs, and you mentioned that you actually built one when you were at Spotify. I think Spotify today hit $100 billion in market cap. I think it's one of the first European technology companies to get there, which is pretty cool. So a lot of folks I know may use one of the existing vector DBs, or in some cases are just using Postgres

with pgvector, right? How do you think about the need for vector databases as standalone pieces of infrastructure, versus just adopting Postgres, versus doing something else?
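
For context on the Postgres route raised here, a minimal sketch of nearest-neighbor search with pgvector, assuming a running Postgres instance with the pgvector extension installed and the psycopg 3 driver (connection string, table, and vector sizes are illustrative):

```python
import psycopg  # psycopg 3; the connection string below is illustrative

with psycopg.connect("dbname=example user=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
        cur.execute(
            "CREATE TABLE IF NOT EXISTS items "
            "(id bigserial PRIMARY KEY, embedding vector(3))"
        )
        cur.execute("INSERT INTO items (embedding) VALUES (%s)", ("[1, 2, 3]",))
        # '<->' is pgvector's L2-distance operator; this returns the five
        # rows whose embeddings are closest to the query vector.
        cur.execute(
            "SELECT id FROM items ORDER BY embedding <-> %s::vector LIMIT 5",
            ("[1, 1, 1]",),
        )
        print(cur.fetchall())
```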

Yeah, I feel like everyone's debating that. I don't know, necessarily. I think there's a case to be made that, you know, you can just stick everything into a relational database and you're fine. To me, the bigger question is, in the long run, what's an AI-native data storage solution? I don't even know if it necessarily has the same form factors and the same interface as a database. So that's actually the bigger question that I'm more excited about. I think people look at vector databases and,

you know, whether it's relational or not, they sort of shoehorn it into this old-school model of: you put data in, you get data back. But I don't know, I think there's a lot of room to rethink that in the age of AI and have very different

interaction models with that data. I know that sounds a little fluffy. Yeah, it's super interesting. Could you say more on that? I mean, one thing I think a lot about is: maybe the database itself could be the embedding engine, right? Instead of, like, you put a vector in and you search by that vector, I think a more AI-native storage solution would be: you put text in, you put video in, you put images in, and then you can search by that. To me, that would be a more

native, AI-native sort of storage solution. So that's one line of thought that I've had. Maybe we're just so early to this that I think it's going to take five, ten years for it to really shake out. Yeah, that's really cool. I guess one other thing that you mentioned was more people seem to be training their own models, at least in a lot of the areas that Modal works with.

Do you think there's any heuristic that people should follow in terms of when to train their own model versus use something off the shelf? I think eventually for any company where model quality really matters, unless you kind of train your own model in the end, I feel like it's going to be hard to defend the fact that you have a better solution. Because otherwise, what's your moat? If you don't have your own model, you need to find a moat somewhere else in the stack.

And that might be possible to find. It might be somewhere else for a lot of companies. But I think at least if you have your own model, and that model is clearly better than anyone else's, then that inherently is a moat in itself. I think it's more clear outside of the LLM space, when people are building audio, video, and image models. If that is your core focus, it's very clear to me you kind of have to train your own models in that case. Yeah. If I remember correctly, you're an IOI gold medalist.

Yeah, that's right. Obviously, you think a lot about code and coding. And how do you think that changes with AI over time? Or do you have any contrarian predictions on what happens there? I don't know if this is contrarian, but like, I actually think that like, you know, this is just like one out of many improvements in developer productivity. And, you know, you look back at like, you know,

whatever, compilers were originally, you know, tools that made developers more productive, and then higher-level programming languages and databases and cloud and all these things. And so I actually don't know if AI is, you know, different than any of those changes in hindsight. And, by the way, every time that's happened, you know,

it turns out like there's so much latent demand for software that actually like the number of software engineers goes up. So like, I feel like you look back at like, you know, the last 40 years of software development, like every decade, engineers get like 10 times more productive due to better frameworks or better, you know, tooling or whatever. And it turns out actually that just unlocks more latent demand for software engineers. So I'm very bullish on software engineers. I think it would take a lot to sort of

destroy that demand. I think people look at AI as kind of a fixed-sum thing, but in my opinion, no, it's just going to unlock more latent demand for more things. So I'm very bullish on software engineering. And then I guess the other field that you touched on a long time ago: I think you won a Swedish physics competition in high school. And I'm curious if you've followed any of the physics-based AI models or some of the simulation work. That's an area that strikes me as very interesting.

And the way you think about the models for it is different. I did win the Swedish high school physics competition. I was a total mathlete nerd when I was, you know, in my teens. Okay. Yeah, I think it's a really fascinating area right now. It's one of those areas where it seems like there's some real reinvention needed, and not as many people working on it. So it's one of the areas I'm kind of excited about, just in terms of there's lots and lots of different applications that you can start to come up with relative to it.

Yeah, I mean, physics, in my opinion... the golden era of physics was like the 20s and 30s and 40s, and I kind of feel like the field hasn't really evolved much since. So I don't know, maybe... I would love for you to be right that there's a resurgence of, you know, new physics-based models. Yeah, I don't know if it would necessarily help in the short run with basic research. I think it just helps with simulation.

It kind of feels like physics as a field really doubled down on the Ed Witten path of physics and maybe got a little bit lost there or something. I'm not sure. Are you talking about doing more compute-based methods for physics? It's kind of like Ansys or other companies, where you simulate an airplane wing, you simulate load-bearing structures. I see. So, like, HPC. That's always existed, right? Especially in, like,

oil and gas and stuff like that. But it's a lot of kind of small, bespoke, fine-tuned or hand-tuned models for specific things, versus, you know... I mean, meteorology is something where I actually think deep learning should

really change things, right? It sort of makes a lot of sense: deep learning should be very good at, you know, predicting turbulence and things like that, because turbulence is actually very hard to solve in traditional physics models, right? And so deep learning, in theory, I kind of feel like, makes a lot of sense. Yeah, I think there's been a couple of papers on that out of NVIDIA, and then I think Google has a team that's worked on it. So there's a couple of different weather-simulation teams that have started to publish some pretty interesting stuff, it seems. Yeah. Yeah.

I would also point to an adjacent area, biotech, where I think computational methods have been enormously successful. Look at protein folding in particular, but also other things like sequence alignment. And that's actually a field where we're starting to see a lot more usage of Modal as well. I feel like there's a resurgence of computational biology. It's a really exciting field right now.

Are there specific use cases you see people engage with most across your customer base, relative to the sciences? There's a lot. I'm not a bio person, so this is kind of superficial, just from looking at our customers. But one thing I've seen a lot of is actually medical imaging, because my understanding is that with modern methods you can do very automated testing:

run millions of experiments and do automated electron-microscope imaging of them. So we've actually seen quite a lot of customers use Modal for then processing and doing computer vision on those images, which is kind of cool. It's really cool. Is there any area that you're most excited about from a human-impact perspective for some of these models? With my background at Spotify, I think music is, to me, a very exciting one. I think it's still very early in AI-generated music. You can still

hear that it's not right; it's a little bit uncanny valley. But every generation of these models is getting better and better. And, first of all, music in itself tends to always be one of the first areas where you see real impact from new technologies, whether Spotify or iTunes or piracy or all these things, or, going back, gramophones. So I always think music is an exciting area in that sense. It always shows the opportunity of new technologies.

And I also think Suno is fundamentally something you couldn't have done before gen AI. So that, to me, is really exciting. It's really pushing the frontier, enabling a completely new product. There's no way Suno could have existed five years ago. That's cool. Well, I think we covered a lot today. Thanks so much for joining me. Yeah, thanks a lot. It was great.

Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.