
Runway Just Raised $308M — Here's What It Means for AI Video

2025/4/12

AI Education

People
Jaeden Schafer
Topics
I observe that Runway leads the AI video generation space, despite competition from OpenAI (Sora) and Google. Their early models were poor quality, but they have since improved dramatically: their latest model, Gen 4, can rival Sora, was released publicly earlier, and is more affordable. Part of Runway's success is its API, which lets developers integrate it into other tools and lowers the barrier to entry. Although it costs more than ChatGPT, it is still far cheaper than hiring a professional film crew. Runway's Series D was backed by heavyweight investors such as General Atlantic, Fidelity Management, Nvidia, and SoftBank, which signals confidence in its technology and market position and also reflects the market's desire to avoid OpenAI dominating on its own. Compared with OpenAI, Runway can iterate more frequently and respond quickly to market demand, which keeps it competitive; OpenAI has strong technology but slower update cycles and longer release windows, which let Runway catch up and pull ahead. Runway frames this raise as a milestone toward building a new AI-based media ecosystem. They claim Gen 4 is not a simple incremental improvement but a reimagining of how video is generated, centered on building a "world simulator." This lets AI video more accurately model real-world physics, such as how birds fly or wind blows, producing more realistic and immersive footage. Beyond video generation, Runway also offers image generation tools and leads the industry there as well. They partner with Hollywood studios and fund AI indie films to drive adoption of AI video technology. Gen 4 can generate scene-consistent characters, locations, and objects, and can regenerate elements from different perspectives, which gives it great potential for filmmaking. Runway aims to reach $300 million in annualized revenue by the end of the year. However, Runway also faces lawsuits over the copyright of its training data, as do other AI companies. Copyright-sharing and artist-payment mechanisms may eventually be needed to resolve this. Personally, I would rather these companies ship their best models first and sort out copyright afterward.


Transcript


Runway has just announced a $308 million raise. This is their Series D, and an absolutely huge jump. Runway, for those unfamiliar, is a company that creates AI-generated video. And to be honest, they're pretty much the front-runner. Now, they have big competition for sure from OpenAI with Sora and even from Google. But really, Runway's been around for a lot longer than anyone else. And I remember when Runway first launched,

it was kind of making the rounds at the same time ChatGPT came out. They'd had this model for a while. It was pretty bad, if I'm being 100% honest. Very frequently, it looked kind of like animated GIFs. There was no physics. Things would move out of proportion and they were kind of wacky. It was low quality. You could only generate a few seconds. So this was, you know, when ChatGPT came out. They have come a long way since then. Today, Runway videos look

quite impressive. And I even remember when OpenAI launched Sora and I was like, oh my gosh, Sora is going to completely smoke Runway. And about three weeks later, if I remember correctly, Runway came out with a new model that was on par with Sora. And the great thing about Runway was that it was actually launched and live, whereas Sora was, you know, unavailable for a very long time. They were kind of having beta testers and

people using it and doing safety testing. It wasn't publicly available. And even to this day, while Sora is publicly available, you don't really hear it talked about a lot. And that's because it costs $200. You have to be on the $200-a-month tier in order to actually use it. And so it just isn't very widely used. Meanwhile, Runway has much lower tiers. It's more affordable. And the absolute best thing about

Runway is that it has an API, so developers can integrate it into tools. I myself am looking at integrating it into the AI Box platform. People love being able to build AI tools. And I think maybe one of the drawbacks, or things people are concerned about, is, yes, it is more expensive, obviously, than running a chat with ChatGPT, because you're generating a video, and this is like generating a thousand images and, you know, stitching them together; that's how you make a video. But at the same time, it's

way cheaper than hiring an actual film crew. And you're still only paying, you know, maybe 25 cents a video or something like that. So it's quite, quite impressive. And you can do some really amazing things with it. So who actually paid them the money? General Atlantic was the one that led this round. They also had Fidelity Management,

Baillie Gifford, Nvidia, SoftBank, and some others. So obviously some big heavy hitters, right? When you see Nvidia getting in, when you see SoftBank getting in, this is a Series D round and Runway is a clear leader. No one's really concerned that Runway has lost their secret sauce or that the competition is going to be too hard for them. I think everybody wants a lot of competitors. No one wants OpenAI to run away with this. And I think it's kind of interesting, because it feels like OpenAI has a very talented team. They come up with really incredible products, but the big drawback is they have

so many products coming out that, like, you know, they just had their latest version of DALL·E, which is amazing and fantastic. And it's an incredible image generator that apparently has generated over 700 million images since it launched a week or two ago. So that's amazing. But the thing is, that wasn't updated for like two and a half years. So the...

the improvement jump was massive. People were like, oh my gosh, it got so much better. But for the two years in the meantime, everyone just had to use Midjourney, because OpenAI's DALL·E model was pretty bad and not actually that great. So I think the problem with OpenAI is that they have these really impressive launches, or maybe even demos. And then,

it's a long time before they get updated, whereas you can have a company like Runway or Midjourney that can make much quicker iterations. They'll have smaller improvements, but they happen way more frequently. So as soon as they catch up, they can kind of run away with it, where OpenAI gets stuck until they have their next big, huge update. So that's just kind of my observation of what is happening here, and a reason why I think Runway...

is not being severely threatened by OpenAI. And I think there's a lot of potential here. So up until this date, Runway has raised $536 million total, which means that the $308 million they just raised is quite impressive, right? That's a huge chunk of that. And this is what Runway said in relation to all this. They said, quote, today marks an important milestone as Runway announces a significant next step towards our goal of creating a new media ecosystem with world simulators.

They said our recent advancements aren't merely incremental improvements. They form the foundations for an entirely new approach to media, an ecosystem built on AI systems that can simulate our world. Okay. So I think what they're trying to get at here is, this isn't just an incremental improvement. Okay, so cutting to the chase, they just released their latest version, which I think is called Gen 4

or something. And it is impressive. It does a lot of cool things. I'll cover Gen 4 and some of the cool things that they've released this week with it. But a lot of people were like, okay, this is better than your previous version, but it's just incrementally better. It's just a little bit better, maybe just 20% better. And this is kind of what I was getting at before, with OpenAI doing the big releases that wow everyone versus doing these incremental ones. Yes, it's 20% better. What they're trying to say, though, is they've reimagined,

they've rewritten how these videos are actually being generated, which means it's the foundation for something that can get a lot better. And so what they've done, beyond just, I guess, sucking in videos and trying to get the model to spit out similar videos, is allegedly create what they're calling a world simulator. So this means that...

In order to accurately create video, one of the big problems is that it's not like a picture, where you can kind of suck in other pictures and spit out a picture that seems kind of right. And then, you know, we had the phase where AI-generated images gave everyone ten weird fingers and they had to try to fix the finger problem. With video, there's a much bigger problem, and that is that there has to be physics. If a bird flies by in the background...

The video has to understand how a bird's wings flap, how the physics of that works. If a gust of wind hits the bird and it kind of bumps, how everything is interacting with everything else. And essentially, in order to create AI-generated video, you have to create a real-world simulator, an actual simulation of how everything functions in the world, how physics works, how things interact. When you grab a pillow and shake it, how the creases will move as the pillow kind of

bends, all of the little tiny things. And in a video, you could have 20 or 30 of these things all interacting at the same time, right? You can have people in the background talking with their hair blowing in the wind, and someone in the front, you know, bending a stick, and how it moves, and all of that. So it's crazy, right? This is a real-world simulator.
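To make that concrete with a toy sketch (my own illustration, not anything from Runway), even the "gust of wind bumps the bird" example boils down to tracking positions and velocities frame by frame, the kind of dynamics a world simulator has to get right implicitly:

```python
# Toy illustration (not Runway's system): a gliding bird whose wing
# flaps, on average, cancel gravity, so its height stays level, until
# a wind gust at one frame kicks it sideways. A video model that
# "simulates the world" has to capture dynamics like this implicitly.

def simulate_bird(frames=24, dt=1 / 24, gust_frame=10, gust_impulse=0.5):
    """Return the bird's (x, y) position at each of `frames` timesteps."""
    x, y = 0.0, 10.0           # start 10 units above the ground
    vx, vy = 1.0, 0.0          # gliding right; flaps cancel gravity
    path = []
    for frame in range(frames):
        if frame == gust_frame:
            vx += gust_impulse  # the gust gives a sudden sideways kick
        x += vx * dt            # simple Euler integration, one video frame
        y += vy * dt
        path.append((round(x, 3), round(y, 3)))
    return path

path = simulate_bird()
print(path[9], path[10])  # the bird drifts faster after the gust hits
```

One second of 24 fps video is 24 of these little updates, and a real scene has dozens of them interacting at once.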

What Runway does is they have a whole bunch of AI media tools. They have video and image. I should mention they also do image, which isn't a shocker: in order to make video, you've got to generate thousands of images. So they do have an image generation model. And of course there is a lot of competition, specifically in image generation. In video, it's more just OpenAI and Google, plus a couple of others, like Pika Labs, and a handful more doing video as well. But I think Runway might be doing it the best.

So they have really tried to differentiate themselves. They have a deal with a major Hollywood studio. They've also set aside millions of dollars. Something that I see a lot is they fund these AI indie films. So they are funding filmmakers or producers to make AI-produced footage that looks really cool. And they have these film festivals every year. It's really cool to see, like, oh look, these were all made by AI.

And even back when the footage was not that great, I was really impressed with how creative people were and how they were able to actually make things that look good. And today they're getting better and better. So, you know, Gen 4 got released this week. And this is,

allegedly, what Runway is saying: this new video generation model can create consistent characters, consistent locations, and consistent objects across scenes, right? So it might be me in one scene, and then I'm able to be in a completely different scene. In the past, you would just say, like, you know, a white male with

blonde curly hair walking on the beach. And then you'd try that exact same description walking down the road, but it might be a completely different white male, and it wouldn't look like me at all. So now you're actually able to have consistent characters. And this is obviously what's needed when you're making different shots of a film. You need the same person to make it consistent or, you know, coherent. They say that they can create,

quote unquote, coherent world environments. And they can also regenerate elements from different perspectives, which is kind of cool, right? So I could be talking on the podcast here, for example, and then all of a sudden the camera pans over to another angle. It's a different perspective, but it's still me in the same place doing the same thing. So this is really cool. The camera moves around. And when you think about how they're having to simulate real-world environments, a lot of people say that in order to make these videos, they're simulating like a 3D environment. So they can really move that camera

anywhere they want around that environment to see any angle of what's going on there, which is really cool. And you get to a point where, for real films, I imagine real film producers are salivating over the concept: maybe you run a film through AI, upload it, and then you could just pick wherever you want the camera to be, anywhere in the shot, and zoom in any which way. It would be quite impressive. And that's probably where we're going in the future. So this is really amazing.
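Geometrically, "regenerate from another perspective" is just re-projecting the same underlying 3D scene through a camera placed somewhere else. A plain pinhole-camera sketch (my own simplification, nothing Runway-specific) shows how one world point lands at different image coordinates depending on camera pose:

```python
import math

def project(point, cam_pos, cam_yaw_deg, focal=1.0):
    """Pinhole projection of a 3D world point through a camera sitting
    at cam_pos, rotated cam_yaw_deg around the vertical (y) axis.
    Returns (u, v) image coordinates, or None if the point is behind."""
    px, py, pz = (p - c for p, c in zip(point, cam_pos))
    yaw = math.radians(cam_yaw_deg)
    # Rotate the point into camera coordinates (yaw about y).
    cx = math.cos(yaw) * px - math.sin(yaw) * pz
    cz = math.sin(yaw) * px + math.cos(yaw) * pz
    if cz <= 0:
        return None               # behind the camera, not visible
    return (focal * cx / cz, focal * py / cz)

# The same world point, seen from the front and then from the side:
front = project((1, 0, 5), (0, 0, 0), 0)    # slightly right of center
side = project((1, 0, 5), (6, 0, 5), -90)   # roughly dead center
print(front, side)
```

A model that holds the scene as something like a 3D world, rather than a flat sequence of frames, can keep me, my desk, and the lighting consistent while the camera swings to any of these poses.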

They said with their new Gen 4 product, they hope to hit $300 million in annualized revenue by the end of this year. So by the end of the year, they hope that they'll be on track to make $300 million in revenue. I think...

I hope they're able to hit that. I think this is a great company. I'm really excited about what they're doing. One thing that people say is possibly going to get in their way is a big lawsuit over how they've trained their model, essentially. People are saying, well, they just sucked up all of this copyrighted artwork and data and video and trained their model off of that.

And to be honest, yeah, I don't doubt they did. They say, like, oh, it's all fair use and whatever. There's this whole lawsuit situation going on for a lot of companies, OpenAI as well. And it kind of goes back to the whole drama with OpenAI and Mira Murati, when she was asked by the New York Times, like, hey, are you guys using YouTube to train? And she's like, oh, I'll have to get back to you on that, I don't know. And clearly she was uncomfortable. And clearly, yes, they were using YouTube to train Sora, their video model.

So evidently, Runway did the same thing. Everyone's doing the same thing. So they have a lawsuit, but I think everyone else is probably facing a lawsuit too. It'll be interesting to see how that shakes out. Obviously, a company like OpenAI has a lot more resources to fight it. Although, you know, $308 million isn't too shabby either for Runway. So I'll be curious to see what happens. And to be honest, at this moment, maybe there's a future where we work out some kind of

you know, trademark and copyright sharing and payouts to artists. And I think there are some companies doing that; Adobe is doing a good job with their image generation model.

To be honest, from my consumer perspective, I just want these companies to get the best model out as fast as possible. And I know some people will be upset about that controversial hot take, but I just want an actual video generation model that I can use for a business or for whatever I want. And maybe once they make something good, then we can start regulating and clamping down and making them pay everyone they took data from. But I just want it to come out as soon as possible. So that's where I'm at on it. And I mean...

the people that could pay are Adobe and Google and whatever the massive companies are. So sometimes when I see these startups, I have a little empathy and sympathy for them. But I know everyone's maybe not on board with that. That's kind of where I sit on it: I just want the tool out as soon as possible, and I'm sure we'll figure out all the monetization and compensation stuff in the future. Hey, if you enjoyed this episode and you want to learn how to grow and scale your business or your career using AI tools, I have a community called the AI Hustle School community, where every single week

I record an exclusive piece of content where I break down the exact AI tools and workflows I'm using to grow and scale my business with AI. It's $19 a month, and in the past I had it at $100 a month. I've...

recently dropped the price. It's discounted now. We'll increase the price in the future, but if you lock in the $19-a-month price now, it'll never be raised on you. We have over 300 members in the community who are all sharing how they're growing and scaling their businesses and side hustles using AI. And we share all sorts of things. We talk about my co-host

Jamie, who does this with me. He made $25,000 from an Amazon program last year, and he talks about how he's using AI to scale that up and make even more money this year. There are so many amazing videos we don't share anywhere else. The link to the AI Hustle School community is in the description. I'd love to have you as a member, and I hope you have an amazing rest of your day.