
Tech leaders on next phase of the artificial intelligence revolution

2024/12/6

Washington Post Live

People
Clay Shirky
Shyam Sankar
Topics
Shyam Sankar: AI can significantly improve the efficiency of military decision-making, shorten the timeline of military operations, strengthen deterrence, and preserve peace. Its core role is not to create entirely new use cases but to make existing core work dramatically more efficient, for example in target identification, information processing, and resource allocation. AI should not replace human decision-makers; it should augment them as an assistive tool, like putting an Iron Man suit on a human. Its use in the military is not a fundamentally new problem; the key is governing how it is used, ensuring compliance with legal and ethical norms, and preventing the technology from falling into adversaries' hands. AI is also being adopted ever more widely in the commercial world, where it can significantly improve efficiency, for example in healthcare and automotive manufacturing. AI can act as a new type of labor that helps humans work faster and stay competitive, rather than causing unemployment. Data security and privacy protection are critical to AI applications: data lineage, access control, and purpose restrictions must be addressed to ensure data is not misused.


Key Insights

What is the OODA loop and how does it apply to military decision-making?

The OODA loop stands for Observe, Orient, Decide, Act. It is a decision-making process pioneered by Colonel John Boyd in the 70s and 80s. In a military context, it emphasizes the importance of making decisions faster than the adversary. AI helps accelerate this process by improving targeting, damage assessment, and logistics, enabling faster and more efficient decision-making.
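As a rough illustration, the OODA loop can be sketched as a simple software decision cycle. Every name in this sketch is a hypothetical placeholder, not a description of any real military system.

```python
# Illustrative sketch of the OODA loop as a software decision cycle.
# All stage functions and data shapes are hypothetical placeholders.

def observe(sensors):
    """Collect raw readings from the environment."""
    return [s.read() for s in sensors]

def orient(readings, context):
    """Fuse fresh readings with prior context into a situation estimate."""
    return {"readings": readings, "options": context["options"]}

def decide(situation):
    """Choose the highest-value action available in this situation."""
    return max(situation["options"], key=lambda o: o["score"])

def act(action):
    """Execute the chosen action; its outcome feeds the next cycle."""
    return action["execute"]()

def ooda_cycle(sensors, context):
    # Whoever completes observe -> orient -> decide -> act faster
    # holds the decision advantage Boyd described.
    return act(decide(orient(observe(sensors), context)))

# Toy demo with fake inputs.
class FakeSensor:
    def read(self):
        return "contact, bearing 090"

options = [{"score": 0.9, "execute": lambda: "reposition"},
           {"score": 0.2, "execute": lambda: "hold"}]
print(ooda_cycle([FakeSensor()], {"options": options}))  # -> reposition
```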

How does Palantir's AI technology enhance military operations?

Palantir's AI technology enhances military operations by enabling faster decision-making through the OODA loop. It uses sophisticated computer vision and sensor fusion algorithms to identify targets, process large amounts of data, and prioritize logistics. For example, it reduced a division's footprint from 400 to 20 people, making them harder to detect and more effective in the field.

What are the concerns about AI in warfare and how does Palantir address them?

Concerns about AI in warfare include the potential for 'digital dehumanization' and escalation of conflicts. Palantir addresses these by ensuring human commanders remain in control, using AI as a tool to augment human decision-making rather than replace it. They also emphasize adherence to legal and ethical standards, such as the laws of armed conflict.

How is AI being adopted in commercial industries according to Palantir's CTO?

AI is being widely adopted in commercial industries, particularly in the U.S., to improve efficiency and productivity. For example, AI is used in healthcare to manage patient discharge summaries and revenue lifecycle management, and in manufacturing to inspect welds in seconds instead of minutes. Companies like United Airlines, Ferrari, and Kohl's are leveraging AI to gain competitive advantages.

What is the purpose of Palantir's AI boot camps?

Palantir's AI boot camps are designed to help customers build production-ready AI use cases in just eight hours. The goal is to move beyond theoretical discussions and provide hands-on experience with AI, demonstrating its potential beyond chatbots. Participants leave with practical applications that can be integrated into their workflows.

How does Runway's AI technology transform filmmaking?

Runway's AI technology, such as its Act One algorithm, allows filmmakers to translate performances into animated characters without the need for motion capture or rigging. This significantly reduces the time and cost of production, enabling creators to focus on storytelling rather than technical processes. It was used in the Oscar-winning film 'Everything Everywhere All at Once' for tasks like rotoscoping.

What is the potential impact of AI on Hollywood and creative industries?

AI has the potential to democratize filmmaking by reducing costs and enabling more people to create high-quality content. While it may disrupt traditional roles, it also opens up new opportunities for storytelling and innovation. For example, AI can generate real-time video, creating personalized experiences that were previously impossible.

What are the challenges of AI adoption in organizations according to Conor Grennan?

The biggest challenge of AI adoption in organizations is the need for a behavioral shift rather than just technological implementation. Leaders must foster a learning culture and encourage employees to integrate AI into their workflows. Without this mindset change, AI adoption will remain limited to specific use cases rather than driving broader organizational transformation.

What industries are leading in AI adoption and what results have they seen?

Industries leading in AI adoption include technology, pharmaceuticals, and banking. These sectors have seen significant improvements in R&D, pattern recognition, and operational efficiency. For example, AI has accelerated drug discovery in pharmaceuticals and optimized financial processes in banking. However, industries like agriculture have seen less incremental lift due to prior optimizations.

What is the future of AI in the next five to ten years according to Cristóbal Valenzuela?

In the next five to ten years, AI is expected to revolutionize media and entertainment by enabling real-time video generation and personalized storytelling. This will create a new media format that blends elements of film and video games, offering unique, on-the-fly experiences. The technology will continue to evolve, making high-quality content creation more accessible and affordable.

Transcript


This Washington Post Live podcast is presented by IBM. How do you stay ahead when you're building AI applications? Start with a head start. IBM's high-performance Granite models are optimized for enterprise tasks, like domain-specific processes and workflows. They're designed so you can do less training and more creating. Ready to hit the ground running? Get started now at ibm.com/granite. IBM, let's create.

- You're listening to a podcast from Washington Post Live, bringing the newsroom to you live. - Good afternoon and welcome. I'm Jonathan Capehart, associate editor at the Washington Post, and joining me is the fellow you just saw in the intro, just not 10 seconds ago, Shyam Sankar, chief technology officer and executive vice president at Palantir Technologies. Shyam, welcome to Washington Post Live. - Thank you for having me, Jonathan.

So this year, Palantir won a $100 million contract with the U.S. government that will expand access to AI tools to all five branches of the military. First, can you explain how the technology works that the military is or will be using?

Really, the work that we've been doing started with the program Maven, which I think began in 2018, which is the military's effort, a crash program on AI, to actually drive deterrence. And if we look at the world today, I think we have lost deterrence. We've had a pogrom in Israel. We have North Korean generals getting wounded in Ukraine. We have a very virulent Iran that is probably months away from a potential bomb.

And so how do we regain deterrence to drive peace? So the idea there is you don't want to pick a fight. You want your adversary to be so afraid of you that fighting's not worth it.

And to drive that process, we started working with generals around their core goal of how can I make decisions faster? Colonel John Boyd pioneered this in the 70s and 80s. They call it the OODA loop. Observe, orient, decide, act. How do I do this quicker? Whoever does that process of decision making the fastest is going to win. Wait, what's it called again? The OODA loop. OODA loop. Yes. Say that again. Observe, orient, decide, act.

Okay, go ahead. Yes. So how do we apply? So very concretely, how do you apply AI to decision making? And I think that's really the opportunity. AI doesn't help you solve new use cases you've never thought of. It actually helps you do substantially better against the core business that you have today. And in the military context, that's targeting. How do I find my enemy? How do I get through a targeting process? How do I assess the damage that's happened and then repeat that?

Okay, how many of you have seen the movie Eagle Eye? Wait, what? Okay. So are any of you thinking of the movie Eagle Eye where the... It's not exactly an AI being, but it is a machine that works with the military, works with the government, and it goes a little haywire. Why shouldn't we be worried that...

the technologies that you are contracted with the government to provide that they won't go haywire in that way? Maybe paradoxically because we're all worried about it, we should be less worried about it. I mean, the question is technology is fundamentally a tool and we have a

human commander that is in charge of the battle space. The question is not how do you replace the human with an AI; I think that's very unlikely to work. It's how do you develop an Iron Man suit around that human so that they have the decision advantage.

Okay, so that is putting an Iron Man suit around the person that has a decision advantage. But Human Rights Watch is one of the many activist groups warning about what it calls, and I'm quoting here, digital dehumanization in warfare. What do you say to those who believe technology could further escalate conflicts?

I think the history would suggest that... So we look at AI and you can think it's a fundamentally new thing. But if you look at even fighter jets from the 70s, if you're a pilot in the jet, it is your radar that's telling you that there is an enemy airplane that you cannot see. You're trusting your technology that that thing actually exists. And then when you hit the fire control...

You also cannot guide it to its terminal destination. You're trusting the technology to do that. So I think it's how we govern this. Where is the human on the loop? How do we have the assurance on this? Are we following the laws of armed conflict around military decision-making? That's really crucial. And I think it's important because it's actually not a new problem. It's making sure that we follow the processes that we have and believe in as a democratic society today.

You know, I'm just realizing, when I asked you the first question about, can you tell us about the technologies that the military is or will be using, you didn't exactly, can you give me a concrete example? Yeah, absolutely. So one of the first projects we started with is, the general said, I would like to make 1,000 decisions in 24 hours. You know, we watch the movies and we think finding the enemy is just so easy. We know where they are all the time and we could just magically press a button and...

That's not how it really works. In fact, doctrinally, anything that is shorter than a 72-hour targeting time is considered a dynamic operation. So usually you're planning to have three days to figure out how to go find and strike the enemy. So that today, obviously, we would say that's ridiculous. That sounds very long and latent, and people are going to move, and we're going to miss the window of opportunity. So how do I not just only do that much shorter, but how do I do 1,000 of those in 24 hours?

When you start that process, you realize, wow, the first problem is I can't even find a thousand things. Okay, so how do you use sophisticated computer vision and sensor fusion algorithms to find a thousand things?

Then you're like, wow, I can't process a thousand things. Okay, so how do you apply AI to provide leverage to the humans who are going through that military decision-making process to be able to do that? Then you realize, well, you know, the less sexy problem, logistics, is actually, you know, I can now find and process them, but I don't have enough ammunition to do that. How do I prioritize? What's even worth

going after that's gonna have a deterrent effect on the enemy. So there is a big ball of yarn to kind of pull here where there's a lot of, you know, it's a complex state machine, if you will. There's a lot of decisions that are being made, and you're trying to figure out what's the bottleneck today? How do we get better at doing that? - We're gonna move on to some other things, but now I'm thoroughly fascinated. So I'm just curious.

Given everything that you just said, have you gone back to, say, like the mission that killed bin Laden and tried to use the technology that you have now and how you would have done things differently from the things that you're able to know then?

I haven't done that specific retrospective, but I would say that Georgetown did a study out of CSET, the Center for Security and Emerging Technology, and they looked at how much Maven had changed the targeting process from the second Gulf War to now.

And they found that actually 20 people could do the work that previously took 2,000 people, as an example. So I think there's a lot of efficiency. Obviously, we want a smaller footprint as well. It enables us to hide. We had an Army unit that was training in the National Training Center. They were able to get their division footprint down from 400 people, which is very easy for the enemy to find and requires a

large number of vehicles and infrastructure to support, to 20 people, where the red team could not even find them in the desert. So it's going to make U.S. service members more survivable and more effective. So your company's AI tools are also being used commercially by companies like United Airlines, Ferrari, and Kohl's. What does this tell you about where businesses are on adopting AI?

I think, in particular, U.S. businesses are way out ahead. There are bright spots in Europe and Australia and Asia Pacific, but I think the U.S. government is now catching up and has really embraced this because fundamentally AI adoption is experiential. You can't really think your way through it. You can't just analyze the problem. You have to kind of roll up your sleeves and experiment with it.

and get after the use cases that actually matter. So we manage 21% of hospital beds in the U.S., for example, and using it to drive patient discharge summaries, using it to drive revenue lifecycle management. I just saw a very cool use case with an automotive manufacturer where it takes them 30 minutes per weld, and there's 400 welds on the seat to inspect the quality of the weld.

With vision language models, you can take that down to seconds per weld. So the amount of efficiency, the reduction of dwell time, the idle time of the capacity you have, is where the big leverage really is.
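As a rough sketch of what an inspection loop like that could look like, the snippet below sends each weld photo to a general-purpose vision-language model for a pass/fail call. The model choice, prompt, and folder layout are illustrative assumptions, not the manufacturer's actual pipeline.

```python
# Hypothetical sketch: screening weld photos with a vision-language model.
# Model, prompt, and file layout are assumptions for illustration only.
import base64
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def inspect_weld(image_path: Path) -> str:
    """Ask a vision-capable model for a pass/fail call on one weld photo."""
    image_b64 = base64.b64encode(image_path.read_bytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Does this weld look acceptable? Answer PASS or "
                         "FAIL with a one-sentence reason."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Screen every weld photo; each call takes seconds, not 30 minutes.
for photo in sorted(Path("weld_photos").glob("*.jpg")):
    print(photo.name, "->", inspect_weld(photo))
```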

So why shouldn't people be deathly afraid of what AI is going to do to their potential livelihood? Because in the example you just used, I mean, how many people are now not doing that job that is now being done by AI? I think this is a real question that we need to wrestle with. I don't think they should be deathly afraid. I think what my experience has been in the work... So with one large insurer in New York, we've automated their insurance underwriting process, a process that used to take three weeks now takes an hour. It was very manual, a lot of copy and pasting, a lot of...

munging of data. And you can kind of think of AI as a new type of labor that's attached to this human that's helping them do all of this much faster. And in fact, explicitly, they're not reducing force. They're saying, this is a competitive advantage for us. If my competitors take three weeks to underwrite insurance and I take an hour or a day,

I'm going to go get all the best risks that are out there. I'm going to go use this to win in the market. So that's the exciting part: this is really an opportunity for commercial companies to increase their ambition and go do more, faster. Can you tell me about these boot camps that you do, and what's the purpose of the boot camp?

The boot camps, so we get our customers in a room, and in the course of eight hours, we build a use case together. We get our hands dirty, and they leave with a production-ready or near-production-ready use case. And really, this idea came about because we realized when we were interacting with folks kind of deductively, just talking about the technology, they would get stuck in the idea of chatbots.

And, you know, there's a role for chatbots, but I think that they're also-- it's very limiting, actually. AI can do so much more than that. So how do you walk them through an experiential journey where you start with the chatbot, they experience the limitations of it, they then realize, actually, these LLMs are almost like a new type of logic runtime. I'm going to encode logic that humans would otherwise be doing

integrate it with software I already have, and build it into a workflow that my existing humans do today to build something that is quite charismatic and magical that's ready to be in production. I'm going to, let's talk, I want to maybe do one of these boot camps. Yes. So what is the, can you tell me the balance between intelligence gathering and privacy concerns, and how should companies think about that? It's actually one of the founding ideas of our company, which is

The only thing worse than terrorism is the reaction to terrorism. No one wants to live in an Orwellian world. If you're going to collect data, you have to be able to protect the data you're collecting, whether that's in the healthcare context, in the commercial world, or the intelligence context, in the government world. And you have to build those robust protections in. And there's really two categories you have to think about. One is the lineage of the data. What data did I collect under what authorities? And that tells me how I need to govern and protect that data.

The second part then is access control. And the things that people usually think about are role-based access control, maybe classification-based access control, but there's a third category that I think is really important, which is purpose-based access control. If you're a medical patient, you've probably consented to allow your data to be used for specific purposes.

Not for any random sort of non-predicate-based investigation, but maybe for this sort of research or that sort of research. So how, in the context of the analysis someone's doing, do we create a virtual cloud around this that protects it around the purpose to which it's consented that you can audit all the way back to the authorities you had when you collected it?
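A minimal sketch of how lineage and purpose-based access control might compose in code follows; the record fields, roles, and policy are invented for illustration rather than drawn from any real system.

```python
# Minimal sketch of lineage plus purpose-based access control.
# The data model and rules are illustrative assumptions, not a real system.
from dataclasses import dataclass, field

@dataclass
class Record:
    data: dict
    lineage: str                    # authority under which it was collected
    allowed_purposes: set = field(default_factory=set)  # consented uses

@dataclass
class AccessRequest:
    user_role: str
    purpose: str                    # e.g. "oncology-research"

def can_access(record: Record, request: AccessRequest,
               roles_allowed: set) -> bool:
    # 1. Role-based check: is this class of user allowed at all?
    if request.user_role not in roles_allowed:
        return False
    # 2. Purpose-based check: did the subject consent to this use?
    #    The lineage string stays attached for auditing back to the
    #    authority under which the data was collected.
    return request.purpose in record.allowed_purposes

record = Record(
    data={"patient_id": "p-123"},
    lineage="collected under research consent form v2",
    allowed_purposes={"oncology-research"},
)
roles = {"researcher", "clinician"}

print(can_access(record, AccessRequest("researcher", "oncology-research"),
                 roles))   # True: allowed role, consented purpose
print(can_access(record, AccessRequest("researcher", "marketing"),
                 roles))   # False: same role, but purpose not consented
```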

I mean, that's assuming we've read all the terms and conditions where you sign away that consent. Now, you have technology currently deployed in Ukraine and Israel. How do you ensure that your tools, that the tools that are being used there, don't fall into the hands of the adversary?

Well, I think we have that concern with even the U.S. If you have, you know, edge units that you're taking out to the field, you have to make sure that you're protecting this. And I think these platforms are sophisticated enough that they actually require not only a military support infrastructure, but actually an industrial-based support infrastructure to drive them. So you build as many of the controls you can into your software, both at kind of the –

let's call it the binary level of how you're protecting the artifacts itself, but more importantly into access to the software and who can use it for what purposes. So then can democracies compete with an authoritarian regime like China, which doesn't have the same ethical constraints as we do in the United States? Over the short term, it's very easy to see ways in which

kind of cheating on these rules of the world we want to live in may give you tactical advantage. I think over the long term, you would create a huge number of errors and it actually makes things substantially worse. One example I'd give you is that, you know, I think one of the reasons we've been able to continue our progress in LLMs faster than our adversary is

If you're building an LLM in China, you have to lobotomize it. You have to make sure it doesn't say certain things it's not allowed to say that happen to be true. And that becomes a very hard game. And you can't precisely-- it's not like a database where you can say, don't say this fact. It actually-- you're lobotomizing a significant part of this that's going to alter its behavior. So I think openness, innovation, the track record is that it's consistently won. What is really important is the will. Are we focused on these problems?

And I'd say right now with AI, we are probably slightly over-indexed on the supply side of AI: building the models, training the models. And that's really important. But what we're seeing with that is the models are in fact getting better, but they're also converging; closed and open source models are becoming more similar to one another. The other side of the equation is really important. It's where all the economic value is, which is the demand side.

Okay, we built this really amazing model. Are we just using it to recite fancy poetry? Or are we using it to improve the human condition? Is it having an effect in the hospital? Is it having an effect on the factory floor? And how do we refine the tradecraft around that? Because I think we're very much in the early innings of AI.

I liken it to the transition from analog computing to digital. Now, we really do get to pretend that what's flowing through your CPU are zeros and ones. It's not. It's actually an analog waveform, but the abstraction is so good

And that took so much engineering around it. And similarly, I think we have this powerful stochastic genie in AI that we need to build a tool chain around that allows us to reliably use these things to improve the human condition. In a few weeks, we're going to have a new administration in Washington and in the United States. What do you hope this means for AI and AI regulation?

I would almost double down on my last point, which is I think that regulation or not, I don't have very, very strong opinions there. What I do have a strong opinion on is that we need to get busy implementing it. We need to not admire the problem, not think too deductively about it, but actually, great, rising health care costs are essentially a national security concern. How do we use this to improve care and manage costs? Let's go after the problems that really matter. How do we improve transportation? How do we improve the services citizens are getting?

Where do you think we'll be in AI innovation in four years?

Well, I like the inverse of Bill Gates' quote. Bill Gates has a quote where we overestimate how much progress we'll have in two years and underestimate how much will be accomplished in 10 years. I think with AI, we kind of overestimate what's going to happen in 10 years, and we grossly underestimate what's possible inside the next two years. So two years ago, we barely had the ChatGPT moment. I think two years from now, we will not be able to imagine our software without AI. We're seeing more countries...

become dependent on the private sector for innovation when it comes to AI. Should we be concerned about governments depending on entrepreneurs whose policy views could change based on politics?

Well, I think we need to manage that, but I would say maybe this is a mean reversion. So if you look at the industrial base that helped America win World War II and the Cold War, it was Henry Ford, Henry Kaiser. It wasn't Northrop Grumman. It was Jack Northrop and Leroy Grumman. And there's this great quote from Vannevar Bush, which actually sounds like it could be made in the present moment, which is, you know...

he was talking to FDR, saying that we really need to get our civilians involved with weapons because there's so much more they could be doing than the military can do on their own. And so I think bringing our civil and government organizations together is how we compete against authoritarian regimes like China. In the minute that we have left, maybe the folks in here are all, you know, up to speed on AI and are excited about it, but there might be people who are watching

online via the livestream because they want to know what this AI is, because they're afraid of it. In the 40 seconds that we have left, tell people why they shouldn't be afraid of AI. Because I think we've over-exoticized it. I think a very secular technology community, Silicon Valley, has replaced one form of religion with another, which is AGI and killer robots and what they wish to be true.

I think we should trivialize AI a little bit. We should say AI is just software that works. What we call computer vision today that we don't think of as too fancy, that used to be the cutting edge. It used to be the coolest thing that you could find cats in a picture. Now it's trivial. We just think about that as software. And I think as we continue to kind of absorb AI innovation, we'll just view it as software that works.

AI, software that works. Shyam Sankar, Chief Technology Officer and Executive Vice President at Palantir Technologies. Thank you very much for being here today. Thank you, Jonathan.

I'm Yun-Hee Kim, Technology Editor for Corporate and Personal Tech here at The Post. And I'm joined now by Conor Grennan, the Chief AI Architect at NYU's Stern School of Business, and Kweilin Ellingrud, a director of the McKinsey Global Institute. Welcome, and thank you so much for joining us. Thank you. Thank you.

So, Conor, last month was the two-year anniversary of ChatGPT, which probably many of you all use. Hundreds of millions of people are using it in schools and at work. But according to a study that was commissioned by Slack, the usage rate has been flat at 33%. So what does that tell you?

This is a great question. I think this is a fascinating statistic. I personally believe that AI adoption rates are really overblown. So I talk to organizations across industry, and if you're in a big room, you just don't see tons and tons of people using it many, many, many times a day. So I think what this says, this sort of idea that this is flat, is a couple of things. People who adopted have already adopted. And I also think it's interesting, and this is going to be a little

controversial maybe, but I don't like, I teach big corporations, I don't teach in use cases because I think teaching in use cases is the way that we've done it with past technology, but this is so broad, it would be like going back in time and trying to explain electricity to people in terms of use cases. Like, nobody wakes up in the morning and is like, how am I going to use electricity today, right? It's more like you just run into a problem and you use electricity, and that's what ChatGPT and these other large language models are

You have to think of it in a new way. When we only think about it in use cases, that's what's going to happen. You're going to find your three or four use cases. It's going to flatten out. People aren't going to grow. You have to instead teach a new way of thinking and a new behavior. That's my impression of it.

So, Kweilin, I want to turn to you because you predict that AI could lead to 12 million occupational changes between now and 2030. What are the jobs that are going to change? Yeah, 12 million occupational changes, not just from generative AI, but when you add generative AI to automation, to aging, to consumption changes, you add all that together between now and 2030.

We estimate in the U.S. alone, 12 million occupational changes. They fall primarily, about 85% of them, into four job categories. Customer service and sales, right, as you shift a lot of that online and that last-mile delivery. Food service, so waiters, waitresses, bartenders. Office services, administrative assistance, a lot of that's being automated. And then production or manufacturing. And three of those four occupational categories are primarily women-held jobs,

which is partly why women have a 50% greater likelihood of needing to change occupations between now and 2030 as a result of generative AI and all of these other changes.

That's really an interesting finding. Conor, in your role as the chief AI architect, you kind of play an evangelist role. What is your advice to kind of workers and leaders in the room? What kind of arsenal should they have to stay ahead in AI in the workplace? Right, usually the arsenal is like, oh, get good at technology or digital, but that's actually not the case here. There's no technology...

prowess that you need to learn this and to use it. It is purely a behavioral change. But at the same time, I think I was saying in the video, it's like saying, "Oh, well, you just got to practice." That's like saying, "Eat less and exercise." Like, "Yeah, man, I know, eat less and exercise, but I don't do it." Because it's hard. It's a behavioral change. So in this case, when I say, "What should we actually do? How do we get people on the right track?"

I actually really put the onus of this on the leadership. This is sort of where your thinking as leadership has to be: you have to have an organization that's a great learning organization. You have to have leadership that's very vulnerable, telling people, hey, look, this is not going to replace you. You have to say it with integrity, of course, but

this is not... we're not looking to replace you. Just ChatGPT replacing you would be trash, right? They would call it TrashGPT. Instead, what you want is an augmentation. I love what the previous speaker was saying about an Iron Man suit; that was so cool, I want one. But here's the thing: when you're giving people ChatGPT, you're augmenting what they are good at. And if you have a company of people that you've hired

because they are excellent at driving value for your organization, you're not looking to replace them. What you're looking to do is augment what they do. And I think there's an over-indexing on, well, how is AI going to drive revenue? I'm like, you already have people in your company that are there to drive revenue. Think about what you are doing, what drives value for you, and get your organization trained, because you already have the brains in the room to be able to do that. Mm-hmm.

- Conor, I would add to that: I think on the concrete skills, it's not coding or anything like that, but it's how do you work with technology. So this openness-to-learning mindset I think matters now more than ever, as this pace of change we were talking about earlier accelerates.

Not technology skills per se, but just interacting with technology and doing work differently, is one thing. I also think complementary skills to technology, so in this case social and emotional skills: how do you interact with people, build trust, build relationships, show empathy. Those are things that generative AI is trying to do but can't yet do well. Those complementary skills I think will increase in value as generative AI is adopted and really handles all these other elements.

This is brilliant. This is why she's on stage. But this is the other thing is that also this is a software that behaves like a human. So all those skills which we're going to need, all the empathetic skills, communication skills, those also make you great with generative AI. It makes you sort of a super user of that because you talk to it like a human. I love that. Yeah.

Let's shift gears to talk a little bit about productivity. So, Kweilin, you also predicted last year that thanks to generative AI, automation could take over tasks accounting for 30% of hours worked in the U.S. economy by 2030. Give us an example of some of these tasks. Yeah.

Lots of tasks fall into this 30% that can be addressed. One would be kind of computational skills: taking very simple data, either making a very straightforward judgment on that, or kind of composing that into a file and then adding real judgment and experience based on that. There are lots of different kinds of skills and activities that could be addressed.

But I think when you add up the 30%, it's very uneven. It's probably 30% for the folks in this room and online, meaning our jobs will not be eliminated by generative AI, but we will have to learn how to work differently. But for those four categories of jobs that are primarily being eliminated, back to the office services, customer service and sales, those jobs are primarily going away. And so that occupational switch will need to happen.

Conor, you train many company executives and companies as well on generative AI. What have been some of the biggest obstacles that these leaders or companies face as they seek adoption? Yeah, I think the single biggest obstacle that leaders are facing is

If you're in leadership, you tend to be pretty good at understanding how to navigate like a digital transformation or something like that. And this is so very different from a typical digital transformation, I think, because in a typical digital transformation, typically you're taking one product, you're replacing the old product. You have to burn down the old product because everybody's going to want to stay. They like their pencil and paper. You're like, no, you have to use the new product. But eventually everybody just changes. You give them videos, all that kind of stuff.

This is much different. So I think right off the bat, they're using, and I work a lot with the brain, they're using the wrong part of their brain right away, which is this

"How do we transform from this to this?" instead of change management. Leadership is also outstanding at change management, but they're not treating this like change management. They're treating it like a digital transformation instead of thinking we have to change behavior. Well, you know, you're in leadership because you know how to change behavior. You're in leadership because you understand how to move people in how they work, and values, and things like that. So I think the number one obstacle

is believing that this is just something that you give to people and then you let it go. And the analogy sometimes I think about is like, it's like thinking that if you give, you know, put a treadmill in every home in America, like you're going to cure heart disease. Like it's just not going to happen because you can get a better and better and better treadmill. And we see these tools getting better and better and better.

But that's not the... When I get off the treadmill, I'm not getting off it because I forgot to read the manual on the treadmill. No, right? I'm getting off it because... Well, I tell my wife I have to do laundry. But the real reason I'm getting off the treadmill is like, I don't want it... My brain wants to...

quick rewards and it wants to conserve energy and like a couch and a bowl of Doritos does that, right? Not the treadmill. We have to think about this differently. ChatGPT and these tools are much more like a treadmill behavior rather than, okay, everybody's gonna learn how to use this new CRM system, for example. - So there needs to be a behavioral change, but kind of a shift in the mindset.

Back to the Slack study, I think a lot of workers who kind of participated in that study said that they're fearful of telling their bosses they actually use ChatGPT for fear of being accused that they're incompetent or they're cheating.

Do you think the kind of mindset of using generative AI is shifting? And I'll throw that to Kweilin. - Yeah, I do think it's starting to shift. Interestingly, in similar research, we saw that actually younger women were more reticent to use ChatGPT because it felt like breaking the rules or cheating in a way. But when companies are actually very clear about the guidelines, in fact,

we allow you, we encourage you, to use ChatGPT for these examples or in this case, or if you even require it, for people reviews everybody has to use it, then those gender gaps and other gaps close. And so I think being clear about when you can use it and when you should use it will help address a lot of these challenges.

How can companies be more transparent or vocal about using AI in their organizations? Is there a great playbook that organizations should be taking when it comes to usage? I don't know that there's a playbook. If anybody has seen a new model of GPT or Claude or something come out, there's no manual. You go to Twitter or X and see how people are using it. It's totally screwed up how we do this.

But I think what Kweilin is saying is exactly right. The leadership has to set an example. The companies that I see doing fairly well are the ones where the leadership leads a learning culture and where they are slightly vulnerable, where they say, hey, here's how I'm using it in my life, here's the struggles that I'm having. But also then they have to give permission. You're not

cheating if you're using generative AI. We're not in fifth grade anymore, right? There's not a grading rubric, and all the company wants is to drive value. The other reason why I always work with senior leadership teams first is that if I've worked with teams and you train them and they're all really excited, what happens is they turn their eight-hour day into a six-hour day, but it doesn't hit company revenue, right? And also, if you're not teaching senior leadership how to really use this, a couple things go wrong.

Number one, they don't have benchmarks to see what a new eight-hour day should look like. So you have to show them, kind of walk the factory floor, give them a framework for how to use this, not just here are some use cases, but a new way of thinking. And the second thing is that really, to Kweilin's point, it sort of really skews talent evaluation, right? If you had a superstar in your organization, but then you have this young gun coming up and they're so much better, the

problem is the young gun is probably using generative AI where the other person isn't. You have to educate your entire organization; otherwise the talent evaluation is out the window. I do think, though, there are risks that we have to acknowledge, right? Your internal data, your IP, gets out with kind of uncontrolled usage. So I do think very clear guidelines of, in these situations, for these purposes,

we would like you to use it, and for these others, right, we have to be careful over here. I think there's also some risks around data, right? Generative AI use cases are built off historical data, which also has bias ingrained in it. And so you perpetuate that, put it into a black box,

and it kind of spews out the answer, even with human in the loop, as sort of the truth, very convincingly. And we just perpetuate some of the bias that has gone into the original data. So I think just clear guidelines will help across the board. We have a question from the audience. Susan Orr from Indiana asks, which companies or industries have been the lead adopters of AI so far? And what kind of results have they seen from it? And Conor, I'll throw it to you. Yeah, Kweilin

clearly may have sort of a better view, but with the companies that I work with, everybody is wrestling in different ways. I think it's dangerous to say, oh, this company is doing really well because they have, I won't name names, but, you know, 700 custom GPTs that they're using or something like that. That's great, but have you adopted across the organization? And I really do think, sorry, broken record, but it requires a mindset shift. So if I was to ask everybody in this room who uses Excel, everybody probably raises their hand, because we're using it for

personal budgeting and we're using it for making lists and things like that. But if there's like three investment bankers in the room, they're using it 30 times a day at a much different value proposition, to make billion-dollar deals. That's ChatGPT. When we ask people, which companies are doing well? There's no good benchmark, in my opinion. I'd love it if you had a different opinion on this. There are no great companies that are nailing this. Everybody's on this massive sliding scale, because we're talking about changing the way we work. So I think it's dangerous to point to a company and say,

You can say, hey, IKEA is doing this or Walmart is doing this. That's great. It's still only a use case. And I just don't see it in those stark terms. But maybe you see it differently. No, I would say the industries broadly, technology for sure. We've all been reading the headlines. I would say pharmaceuticals because generative AI can drive so much in R&D, spotting patterns that humans can't even identify. And then also banking.

And interestingly, on the other end of the spectrum of not that much generative AI lift are things like agriculture, where they had already optimized field crop, irrigation patterns and all these things many years ago. And so not a lot of incremental lift, but a pretty wide dispersion across industries.

In terms of the chatbots or the generative AI products right now, it's really dominated by OpenAI and the big tech giants. Do you see that scenario continuing, or are you seeing other upstarts that have potential to compete?

So I think that there's gonna be more consolidation than anything. I think that the intelligence is starting to get commoditized to a certain extent. I think that now, we just saw Amazon with Nova. Anthropic's Claude is a phenomenal product. OpenAI has GPT, phenomenal product. Gemini is a phenomenal product.

But is anybody testing it to the limit? Even when OpenAI came out with its new reasoning model, how many people in the room really saw a huge difference? It's much more about how people are using it and the user interface. And I think Google's actually doing a very good job of this, cracking the user interface issue. You know, there's not a lot of companies that have the capital to build a new foundation model. So I think we're probably set with the five or six that we have, unless I don't know more. But I think it's

It's not like, oh, the big upstarts. Upstarts are going to build new products and build new user interfaces, and we see Perplexity as a great example of that. But I'm not sure that we're going to see another huge company come out of the blue. Kweilin, is it a little bit worrying that it's these dominant big tech companies that are actually controlling AI, have all of our data, and...

who knows what they're doing with it, right? So if that's kind of the prediction that these guys will continue to get more competitive and bigger and bigger, are there any concerns?

I think there are some concerns. There's certainly escalatory dynamics in terms of every generation takes more and more capital to kind of keep at the forefront of that. I am, though, seeing at the company level a real discomfort in putting all your eggs in one basket, whether that's one company. And so there's a lot of let's spread and see what the next generation brings. So we may see just a few companies continuing to play in this across different competitors.

Let's shift gears to look at the future a little bit. One year down the road, if you had a crystal ball, where are we going to be in terms of AI development a year from now? I don't know. No, I'm kidding. I mean, yeah.

Okay, I think agents are a great place to start. What are agents? Does anybody actually know? Microsoft calls agents one thing. Salesforce calls agents another thing. And they're not really agents. We see Anthropic coming out with something that actually looks like an agent. I think agents are going to take a lot longer to build. Again, unless Kweilin is seeing something different. So I'm not sure. I think adoption will still be slower than we think. I just do. But I do think people will be leaning into agents, just because people are going to see that as a way of

really driving even the revenue for their company or being able to sort of restructure their company, potentially really impacting the workforce.

I think technology will continue to accelerate at breakneck speed. I think at the company level, we're starting to see now, at this two-year anniversary, a real division between companies that are actually finding bottom-line impact. I would say only about 10% of companies are really capturing real value from generative AI. Everybody, right? It's a board-level issue. It's a CEO-level issue. There's no leadership team that doesn't have at least a generative AI pilot plan.

Which ones are actually capturing value? Very, very few. I would say roughly one out of ten. I think in this next year, my prediction would be we're going to see bigger separation between companies that are really capturing, whether that's growth value, whether that's bottom-line impact, and can actually capture this, versus just dabbling. That divide will widen. Well, we'll definitely be watching the developments very closely. Thank you so much for your time, Conor and Kweilin. Thank you. Thank you. Don't go anywhere. We have more interviews coming up.

The following segment was produced and paid for by a Washington Post Live event sponsor. The Washington Post Newsroom was not involved in the production of this content. All right. Hello, everybody. How are we doing? I'm Rob Thomas with IBM. Great to be with you all. I'm joined by Ash Jhaveri, who's vice president of partnerships at Meta.

And Clay Shirky. Clay, you have a long title: Vice Provost for AI and Technology in Education. Which just means when anything digital hits the classroom, I help NYU adapt. All right. Excellent. The topic of open: probably one of the most overused, misused, but important terms when it comes to AI. So let's talk a little bit about open source technology.

Open standards, open technology, what do these terms mean? Why do they matter? Clay, I'll start with you. Well, I'll say, because we now have this history of things like open standards, open source.

A lot of people when they look at openness, they focus on cost control, customizability, the inability of a company to suddenly add new licensing or alienate you from your work. And all that stuff's important, but what has really made openness important in previous generations are surprises and coordination.

I heard Yun-Hee Kim ask my colleague Conor just before this about employees disclosing whether they were using AI or not. And openness maximizes disclosure. You find out what people really care about when all of the work is happening in the open. And so you get use cases that you would never have imagined on your own.

It's sort of like a corollary of Bill Joy's law. No matter who you are, most of the smart people work for somebody else. No matter who you are, most of the smart ideas are in somebody else's head. And so openness allows you to see the kind of maximum amount of

interest in and engagement with the new technology. And then because it's in the open, people who are working on similar things will find each other and collaborate. And you start to get more than just small teams working on these things. You start to get the kinds of large networks that have led to the development of Linux or Wikipedia or what have you. And I think that we are at a point where, I mean, you will know better because you have the leading open model, but I think we're at a point where

the customization of leading models for specific domains is competitive with just having the next generation of frontier model. Well said. Ash, your view at Meta, because you sit in a bit of a different seat than Clay. How do you think about this term open? Yeah, I generally agree. Actually, wholly agree with everything you're saying.

For those who aren't aware, we have a really long history of open and open source at Meta, basically from when we were founded. And we were like a small startup, and we just didn't have a lot of engineers, but we needed a lot of help in literally getting the website up, the Facebook website up, and running and keeping it going. And we just started open sourcing various components of our stack. If you think about things like React,

React Native back in the day as we made the shift to mobile, we really found that if you put the software out there, you keep it open, and you, like, let the community build around it, people are not only finding value, but they're improving it. They're making it better. You're getting a massive force multiplier from all the energy that's out there in the world.

to really further what you're working on. And so the community benefits because they've got this piece of software that's open that they can go do whatever they need to go do with. Others are making it better and better and better. And we as a company benefit by putting it out there because we're also getting the benefit back from the community as well. So that whole community flywheel is really, really important and one of the core elements that we find.

And we're just sort of extending that philosophy into AI. As of now, I think we've released about 1,000 open source AI projects out there.

And when you look at the model specifically, these are hard, expensive things to create. If you think about training in LLM, the amount of GPUs that's required, the amount of data that's required, the amount of engineering talent that's required-- and I'm not even talking about power and data centers and energy and all those sorts of things. It's a lot, and it's really expensive. And it's not something that most companies can do.

And so in some ways, if you look at it, what you don't want the world to look like is a handful of really powerful tech companies out there, or very rich companies out there, building models, holding them back, and charging you whatever they want for them without any transparency. Our view is very different because we've benefited, and we've seen the benefit of things like PyTorch to the open source community and the innovation that it drives.

It really is about democratizing AI and LLMs from our standpoint. And so we put our models out there. We've been doing it now for multiple versions of LLAMA. People can grab them, they can build on them, they can build derivative works, they can use them any way they see fit. They can download them. And then to your innovation point, when we released LLAMA 2, I think it was within about two weeks, some developers had it running on a Raspberry Pi.

So if you think about sort of democratization, access, and innovation, I love this story, because you've now taken this LLM that we spent tens of thousands of GPUs building and training, and we put it out there. In a matter of two weeks, an enterprising developer took it and had it running on a $35 piece of hardware. If you think about access, particularly as you get outside of the West, in the Global South and other areas, it's expensive. You don't have the power. Right.
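For a sense of how low that barrier is, here is a sketch along those lines using the open-source llama-cpp-python bindings. The specific model file and settings are assumptions; a Raspberry Pi would need a small, heavily quantized build.

```python
# Sketch: running a small quantized Llama model on modest local hardware
# with llama-cpp-python (pip install llama-cpp-python). The model file
# below is a hypothetical local path; on a Raspberry Pi you would pick
# a very small, heavily quantized .gguf build.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-1b-instruct.Q4_K_M.gguf",  # hypothetical file
    n_ctx=2048,    # small context window to fit in limited RAM
    n_threads=4,   # e.g. the Pi's four cores
)

out = llm(
    "Q: In one sentence, why does open-sourcing models matter?\nA:",
    max_tokens=64,
    stop=["Q:"],
)
print(out["choices"][0]["text"].strip())
```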

All of this democratization through open source and innovation is just core to how we think about it. The community, you used that word twice, will often give you the answer, is what we've discovered. When we open-sourced our Granite models, we put them up and everybody went to one. They said time series is the most unique thing we've seen in LLMs. That's where everybody went. So there's an aspect here around ecosystems to what both of you are talking about. But my guess is you two see those differently.

Clay, you would have one view of what is an ecosystem around this. Ash, you may have a different one. Maybe Clay, start with you. I don't know that we have different views. We sit in different places. Right. So I'm a little disoriented because usually when academics are on stage at IBM, it's to get beat up for not being entrepreneurial or whatever. I mean, it's just we're the punching bag of, oh, they're so slow. But what the Academy does well is it chews through problems from multiple angles at once.

So I have a fancy title as you heard, but I am not in charge of the faculty using AI at NYU. I sometimes have trouble getting them to open my email in fact. And so I can only work by persuasion. And what I've learned is a department at a time, they have different needs. The physicists and the philosophers are using it differently, the chemists and the choreographers. And so instead of fantasizing that we can write some kind of top down framework,

We're actually looking for places where, not completely bottom-up, we're not saying each faculty member has to figure out their own thing on their own, but where is there a group of like-minded people who are likely to be the community that solves the problem for us? And we're consistently finding that, for instance, we have a game design school, and they were the first people to develop policy around image generation, not just text generation.

I'm effectively in those situations an overpaid bike messenger. I take a message from one part of the school and I move it, show it to everybody else who needs it. And I think that that ability to both incubate in small groups and expand the message of what's learned to larger groups

is just gonna be a huge accelerant. Now that we have open models, the LLAMA models especially, open models at a level of power that's at the frontier, we don't face a choice anymore between power and flexibility,

and we can really lean into the flexibility to learn from different communities how they're using these tools. Rather than thinking we have to wait around for AGI or some all-powerful model, we can say we can apply customizations to the models we have, and each group gets a lot out of that.

Your view on ecosystems, community, building off of Clay's points, Ash? Yeah, I think it's pretty similar, but maybe at a bit of a broader scale. In terms of what we look at, I mean, the easiest thing for us to look at is sort of just tracking downloads, right? Because we put it out there and we let people go and run with it. I think we just crossed 600 million downloads of the models, which translates into, I think, just a ton of people experimenting.

But then you take that a step further. That's sort of like the top of the funnel. For us, it's really about are there companies out there that are being built?

that are using LLAMA as an ingredient. We have this vision of the models, the LLMs, LLAMA specifically, being very much like Linux: democratizing it so that anyone can go and use it and take it and build on top of it. If you think about the internet, it's a LAMP stack. Can we be the L? What are the components of the LAMP stack today for AI? We're putting it out there,

we're watching how developers are using it and we're looking for innovation around it, you know, whether it's like new companies being built or, you know, companies using it internally to optimize processes or unlock new experiences, which we're starting to see a ton of, you know, everything from like AT&T that's

building consumer chatbots now and making it way easier to get your customer support to the Mayo Clinic that's doing all kinds of like crazy stuff on radiology to speed up discovery of medical problems and issues. And I really think this is just the tip of the iceberg.

So I think it was one, sorry, Clay, you had a comment? I was just going to say, I will say one other thing to these sort of the variety of uses. We all collectively have 50 years of coiled reflexes for dealing with digital tools, which start practical and get powerful, right? The first spreadsheet drops and it only does a few things, but it grows into Excel and we've all implemented Excel. Same with the web browser, same with the app store and so on. These tools didn't work that way. They started powerful, but not practical.

and we're trying to find where the pluses and minuses of

hallucinations and gaps can fit into existing process. Nobody who saw these tools... you know, ChatGPT turned two, its public instantiation turned two last Saturday. Nobody who saw it at the tail end of '22 said, oh, I know just how I'll slot this into my production process. They said, oh my God, what did it just do? And, like, taming that reaction is, I think, part of what

these communities can help us do, which is to figure out, given the strengths and weaknesses of the current tools, and given the fact that they consistently produce surprises, and sometimes surprises that we don't want,

whose regime of error checking, of integration of humans in the loop, is going to fit the tools best. And open lets them take what they learn and actually modify the training, the retrieval-augmented generation, or even just the sort of wrappers around the tools, in a way that you don't get when it's a kind of log-in to a remote model where you can't change the parameters. So it was one year ago today, I believe, that

we launched the AI Alliance. IBM and Meta were two of the main sponsors for that. You want to reflect on the last year of that, Ash? And I'd love to hear Clay's reaction as he hears this. Yeah, look, I think it's been a really good year to bring together

big companies like ourselves, as well as all the other members that have come on board since then, in a field that's constantly evolving, and then actually putting wins on the board when you think about things around trust and safety. I think it's been a really, really banner first year. I think there's a lot more we get to do. One of the things I'm particularly proud of and sort of animated by is...

we've really taken inclusion to heart. When you think about voices at the table, particularly voices at the Silicon Valley table, it's usually other Silicon Valley companies or maybe American tech companies. And I think the way that the Alliance has really embraced the Global South and other developing economies, if you think about IIT Bombay being a part of this on day one, on such an important technology

that we know will change everything. I think that's been a really big win of the Alliance, bringing together voices that would normally not be heard, certainly not up and down the 101. I think it also set a new tone on data.

We exposed all the data sources that we used to train our models. I think that forced other people to think about transparency. Clay, any reaction from your seat when you hear about things like this? NYU looks at this as an opportunity to get around our single biggest problem, which is the eye-watering capital costs of throwing this much data at this much compute. We just will never compete with

Meta or with IBM in terms of infrastructure, and yet we often have intuitions and use cases, in our computer science and electrical engineering departments, but also in our sociology department, right, in our game design department. And participating in a network of people sharing ideas and opportunities is, I think, really valuable. It's important that it's industry and the academy together; obviously that matters to us as academics, but I think it's valuable for the alliance as a whole.

And when we think about this surprises-and-coordination question, industry-specific tuning for frontier models is going to become more important as these things start to take more actions in the world.

The word agent is being thrown around pretty broadly. It's not terribly well defined. If you set aside the idea of agents and you just ask: can this produce output other than ones and zeros? Can it actually cause a piece of software, or in some cases a piece of machinery, to

take a non-reversible action? It's really important to get industry guidelines about best practices in there. It doesn't matter how broadly intelligent the tool is. Industry by industry, you have to figure out what are acceptable ranges of false positives and false negatives. And the industry piece of this, I think, is especially interesting for an academic environment, because

We're both looking to the jobs our students will graduate into, but also trying to figure out what industry should look like in the future based on what we know inside the school.

Well said. You both gave us a lot of things to think about. Ash, Clay, I want to thank you both for being here. Thank you for your insight. Thank you so much. All right. I'll hand it back to The Washington Post. And now, back to Washington Post Live. Hello again. For those of you who are just joining in person and online, I'm Jonathan Capehart, Associate Editor at The Washington Post. On stage with me now is Cristobal Valenzuela, co-founder and CEO of Runway. Chris, welcome to Washington Post Live. Thank you for having me.

All right, so we know what Runway is. Tell us the origin story. What was the catalyst? The catalyst? Well, the short story is I'm from Chile, and I moved to New York eight years ago to study here at NYU. And I like going into rabbit holes, and I went into the neural network AI rabbit hole in 2016. And with my co-founders at the time, we met at school and started building this idea of

taking AI and thinking about it for artists and for creatives and for filmmakers. And so we started the company six years ago after meeting in school, and it's been quite a journey since then, but we're still pretty much focused on the same obsession and the same rabbit hole. Well, let's take everyone into this rabbit hole. I'm going to show the audience what your technology is capable of. Take a look at this video. Watch. Or do we not have the video?

I think this is the thing that Veronica told me that we weren't going to have. Am I looking in here? You're going to fire me because there's hot milk everywhere on the floor of the coffee shop? I mean, it's one latte. It's not that big of a deal. Everything's fine. Everything's totally great. This is going to be the best day ever. Smashed my phone the other night. And you know how much I love that phone. Okay, all right. Just breathe. Okay, maybe you don't breathe that hard. Okay.

Who knew there were so many kinds of beans to choose from? Fava, black, kidney, butter, white beans. Now you're surprised.

Oh, wow. I'm beginning to think that you don't understand the gravity of this situation. You came all the way down to the Department of Motor Vehicles and didn't bring your driver's license. Settle down, everyone. Settle down, please. I don't know how to be sad without closing my eyes. Perfect. We got it. Thank you so much. Thank you so much. This is it. All right. Who's next?

So no motion capture or rigging was required to make those animations. Explain in layman's terms, meaning in terms that I can understand, how your technology works. Yeah, so what you just saw is an algorithm we have called Act One. What it allows you to do is take a performance. And so it's really important: you need an actor, you need someone who's reading a script or performing a really good line.

And then what you can do with this algorithm is translate that performance into any of the characters that you saw, or any character that you want. This is something that used to require an army of people and very expensive technology, because, as you were saying, you need motion capture and these complicated systems. This represents a major step forward: you don't need any of it. You just need a very good idea and Runway. And you combine...

You can combine those and get the renders in the videos that you just saw. It's very simple. So what do you say to those who see that and think, as you just said, that's going to kill a lot of jobs?

If I wanted to do that in the olden days, I would have had to have an army of people. Now what you're telling me is I just need an idea and Runway, speak it, and I can speak it into existence. So why shouldn't Hollywood be afraid of Runway? Well,

The first thing is, it's important to remind ourselves that the history of Hollywood film is a history of technology. We only have Hollywood because we have great technology. The camera is one of the major breakthroughs, I think, for art. And I think of Runway as a similar camera. It's a device, it's a tool that allows you to do storytelling.

And in the history of Hollywood, technology has changed and allowed us to do things that used to be done very differently. For example, I don't know if you remember, but movies used to be silent. We had silent movies, and then we got talkies, and with talkies, suddenly movies had audio and music. And one of the biggest concerns people had at the time, and this is true, was: what happens to the orchestras in the theaters? There used to be orchestras in the theaters. Right.

And the task, the job, changed.

It went from live performances to recording studios, where now you can have much more of a consistent industry around sound design, and many more jobs were created, right? And so this is, for me, very similar: it's not about frame-by-frame animation. It's about the story you want to tell. And this can allow us, I think, to create many more jobs, many more applications, things that perhaps you couldn't do before because it was too expensive or too time-consuming. Now, I don't know if people know this, but your technology was used...

in the Oscar-winning movie from 2022, "Everything Everywhere All at Once." What was that experience like? And give us some insight into how much AI is used in movies we're seeing today. But first, talk about "Everything Everywhere All at Once." So in the process of filmmaking, there are many different parts and processes, and they're all very complex.

There are many parts of that process where better, more autonomous systems can aid you. One of those is called rotoscoping. Rotoscoping is a very fundamental piece of filmmaking that allows you to separate a background from a foreground. So if I have a shot of you and I want to add something in the back, I have to basically cut you out of that shot. And the way people did that until very recently was by hand.

It's literally going frame by frame. There are 24 frames in a second, so you spend hours, days, weeks, months doing something that's very manual. And so we have algorithms and tools in our set of products that allow you to do that in a fraction of the time.

And so on "Everything Everywhere All at Once," some of the editors behind that movie were using Runway specifically for that kind of job, which allows you to take parts of making that film from weeks down to minutes, so you can focus on other types of jobs or tasks.
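To make the rotoscoping example concrete, here is a rough sketch of the per-frame loop such tools automate. OpenCV's classical GrabCut is a crude stand-in for the learned matting models a product like Runway ships; the file name and the centered-subject rectangle are illustrative assumptions, not anything Runway has described.

```python
# Sketch: frame-by-frame foreground extraction, the job rotoscopers once did by
# hand. GrabCut is a crude classical stand-in for a modern learned matting
# model; "shot.mp4" and the centered-subject rectangle are illustrative.
import cv2
import numpy as np

def segment_subject(frame: np.ndarray) -> np.ndarray:
    """Return a 0/1 mask of the (assumed roughly centered) foreground subject."""
    mask = np.zeros(frame.shape[:2], np.uint8)
    h, w = frame.shape[:2]
    rect = (w // 8, h // 8, w * 3 // 4, h * 3 // 4)  # rough subject bounding box
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, rect, bgd, fgd, 3, cv2.GC_INIT_WITH_RECT)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 1, 0).astype(np.uint8)

cap = cv2.VideoCapture("shot.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Keep the subject, zero out the background; composite over a new plate here.
    foreground = frame * segment_subject(frame)[..., None]
cap.release()
# At 24 frames per second, a two-minute shot is about 2,880 masks, which is
# why doing this by hand took weeks.
```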

And so there are many other parts of the process where you can find optimization stages and assist yourself with tools like Runway, basically. You've said, and I want to quote you to you, "We're heading towards a world where all the media and content entertainment you consume will be AI generated." So given that, where do you see things headed in the next five to 10 years? You say we're heading towards a world.

Is that five years, 10 years, or five months, or five weeks? Well, AI these days moves in a very exponential manner, so it's very hard to predict. But I think what I've tried to articulate many times is that there are two ways of looking at this: the short-term version and the long-term version.

The short term is that we're going to have a lot of new, what are called, AI films, right? So a film that used to require $100 million to make, you can probably make with a budget of $10 million. Similar visual effects, similar content. And those are going to be AI-enhanced or AI-augmented films. But eventually you get to real-time video. Most of the videos you're seeing here are 10 seconds long; it takes 10 seconds to make them. You're going to have real time. You're going to be able to render videos in real time.

You're going to be able to make stories and narratives in a real-time manner. That, for me, is different from film. It's a new media format. It's something we haven't experienced yet collectively. Imagine taking your phone, or turning on your TV, and watching something that's being generated as you're experiencing it. It's a combination of perhaps something we could call a video game. It might feel like a movie. It might feel like something else. And I think that's the world we're preparing for. How is that different from virtual reality, where everyone's got those

headsets on their faces. Is it completely different or is it the same? So virtual reality, for me, is the device and the medium on which you can experience something like this, right? But...

But in virtual reality, you're still watching something someone made before. You're watching the same experiences. If I put on a headset or if I turn on the TV, we're both watching the same thing. This is different, because I can generate personalized experiences. I can generate a video or a film, a story, that's tailored to you. And that's different from the one I'm watching. And the one I'm watching, no one has ever watched before, because it's being generated as I'm experiencing it.

Is that now? We're going there. You can do real-time video generation right now. That's feasible. Two things are not yet solved: the unit economics need to make sense for it to be distributed worldwide, and quality. Real-time video generation has only been feasible for a couple of months, so we're still at a very early stage.

I think we've got four and a half minutes left. I think we have time for this. Your company also has a model called Frames. Yes. We're going to show some visuals used with Frames, but can you explain what we're looking at and how AI was used to create this? So these are all generated

images with our new model called Frames. So Frames is a state-of-the-art image model created specifically for storytelling, for filmmaking, for cinema.

You can control and define and tell the model the kind of camera you want, the kind of film grain you want, the kind of shot you want, the kind of expression you want. And so that makes it particularly good for the studios we work with, for the filmmakers we work with. It's a very, very flexible model made specifically for storytelling.

Well, this is interesting. It's a great segue to my next question. I mean, you went to art school. Yes. So then how does your experience there inform the technology your company is developing, the technology that led to Frames? That goes back to the rabbit hole, I guess, at the beginning. So the reason I went into the rabbit hole is that I was obsessed with the history of art and with software engineering. And 10 years ago, I tried to blend those two things.

So Runway is somehow the result of merging the art and the science that my co-founders and I had a deep passion for. And I think what makes us different, in a way, is that understanding the artistic process, understanding the way stories are told, understanding the way artists work with the craft, is very much needed in order for you to build tools for artists.

If you have an overly simplistic assumption of how artists and production and filmmaking work, it's going to be very hard to build tools that are actually meaningful and interesting for art making. And so for us, it's always been about merging those two worlds, art and science. Half of our team are artists who help develop the research. So it's not only researchers. It's researchers working with artists.

You've also been working with a lot of studios. Talk to us about your recent partnership with Lionsgate and what you're hoping to accomplish. Sure. So it's a first-of-its-kind relationship, a partnership that we announced very recently. We're working very closely with Lionsgate and their team to do a couple of things. The first one is to

take our tools and help them use them in productions, in future films, in short films. And they have many different projects we're embedding ourselves into. And the goal there is to help them tell better stories.

And the second one is, I would say, one of the benefits of having a large catalog of data: you can create very specific models and very specific tools that are tailored to your world. So think about John Wick. They can take the data, the models, and the videos of John Wick and create a specific version that they can use in the next John Wick.

And that can be something they own, or maybe they open it up to others to use. And so it's a relationship both on the data side and on the tool side of things. We have less than 90 seconds left, but we're going to go a little long because I have two questions. Go ahead. I'll ask the final question as the penultimate question. Lawmakers apparently have been studying AI regulation for years. What impact do you expect that to have on your business,

your creating ability? Well, one, do you think Washington is going to do anything, Congress is going to do anything, in terms of AI regulation? And do you think whatever it is they're talking about is going to impact AI,

your work? Well, I think there should be regulation. It's a new technology. It's very powerful. For some people, it's very disruptive, and it's going to change a lot of things. I think the important thing about regulation is that it needs to be done thoughtfully and at the right time. I think it's still very early to fully understand the extent of how

this technology can be used. And the one thing you want to make sure doesn't happen is that you stop innovating. We're at the beginning of a major new media transformation that can help and aid and create many new jobs. It's our job to make sure that we can keep on expanding that. And I think there might be a case where regulation actually stops

that process, and I want to make sure it doesn't. And the final question: Connor, who was on two panels ago, and I were talking about Runway and ChatGPT, and how all this technology has, particularly with Runway, you said, leveled up. Yes. So now everyone has access to what you do at Runway, and it made me think back to when

the internet burst on the scene and what it meant for journalism. When I was in college in the ye olde 1980s, we had three major news networks and a few major national papers, and I was on the school newspaper and radio station. So I was one of the gatekeepers giving the news. Well, with the internet,

My mother had the same access to information and news that I did. Everyone then had the same access to information. And the danger with that is people started siloing themselves into their own little news bubbles, consuming news that fed their worldview. Why shouldn't we be afraid? Why shouldn't Hollywood be afraid, or creatives be afraid, that now that

non-creative types like me have access to Runway, I'm going to come up with something and sort of force them out? That the teeming masses are going to push Hollywood aside for entertainment that they've created for themselves.

So the first thing I would say is I think creativity is a state of mind. Everyone can be creative, and everyone should be creative. And if anyone has a story to tell, they should tell it. You shouldn't be constrained by the industry, the resources, or the technology that you have. And so in a way, this is true democratization of storytelling, in the sense that you can make movies and stories you couldn't have thought of making before. And that's great. I think the best movies are yet to be made. The best stories are yet to be told. And we need to hear those stories. I actually think the opposite,

which is that if we only hear from the same set of people, then the stories are always going to be the same. There are billions of people out there that we haven't heard from before. There are great stories, trust me, great stories, and if you just give those people the right resources, you will hear from them. And maybe it changes your point of view. Maybe it changes the way you look at the world. And that's the role of art: you make something, and hopefully you change the perception and the point of view of people.

See, and now I'm going to leave this conversation and jump down the rabbit hole that you jumped into. Cristobal Valenzuela, co-founder and CEO of Runway, thank you very much for joining us here today. Thank you. Thank you. Thanks for listening. For more information on our upcoming programs, go to WashingtonPostLive.com.