Rep. Jay Obernolte on why bipartisan regulation is essential for the future of AI

2025/2/28

Washington Post Live

People
David J. Lynch
Martin Kon
May Habib
Topics
May Habib: I believe synthetic data is the key to building models that are copyright-free, transparent, and reliable. We focus on building models for specific domains, such as healthcare and financial services, which need particular kinds of data that don't exist on the internet. The data we synthesize is designed for large language model (LLM) consumption, which is different from data designed for human consumption. This lets our models better serve enterprise customers' needs and lowers the risk of hallucination. We have shown that our approach beats the alternatives on cost-effectiveness and is competitive on performance. We help enterprise customers find the treasure in their data. Enterprises hold vast amounts of untapped information, and our AI agents help them find it and use it for decision-making. Our agents can be deployed securely in customers' data environments and connected to their sensitive data sources, ensuring veracity and traceability. That lets analysts and wealth managers configure their own agents and use this data to work more efficiently. I believe the real potential of generative AI is that it can fundamentally transform workflows. It can help enterprises finish tasks that used to take months, raising productivity and efficiency. We have seen customers complete in hours what previously took months, and that will change how enterprises think about the return on investment (ROI) of AI.

Martin Kon: Cohere focuses on providing secure, reliable AI solutions for enterprises. What distinguishes us from other AI labs is that we focus on secure enterprise AI rather than consumer-facing applications. Our solutions have three core strengths: data security, models plus applications, and customization. First, we prioritize data security and privacy. We do not depend on a single cloud provider; we can deploy our models in any customer data environment, including private cloud, on-premises, and air-gapped networks. This is critical for industries handling sensitive data, such as banking, insurance, healthcare, and government. Second, we provide not just models but solutions and applications. We work with customers to customize and configure models for their specific needs so they fit each customer's unique business requirements. We have built custom AI solutions for several industries, such as the North platform for banking and a similar platform for telecom. Third, we offer customization and configuration services. Enterprises need help tailoring AI models to their unique business needs, and we help customers optimize their models to ensure they create value for the business.

David J. Lynch: I want to explore public attitudes toward AI and whether those attitudes reflect people's worries about their own employment prospects.


Chapters
The discussion explores how AI companies like Cohere and Writer are transforming business processes with AI models and applications, focusing on customization, security, and efficiency.
  • Writer provides AI assembly-line services for businesses to automate and secure their workflows, ensuring data and security requirements are met.
  • Cohere focuses on secure AI for business, offering private deployment and data security, crucial for industries like banking, insurance, and healthcare.
  • Companies like Cohere and Writer emphasize customization and configuration of AI models to optimize mission-critical capabilities.
  • AI agents help businesses find valuable information within vast data sets, tailoring solutions to specific enterprise needs.

Transcript


This Washington Post Live podcast is presented by Meta.

You're listening to a podcast from Washington Post Live, bringing the newsroom to you live. Good afternoon and welcome to the Washington Post. I'm David J. Lynch, global economics correspondent here at the Post, and joining me today to discuss how businesses are using and investing in artificial intelligence are Martin Kon, the president and COO of Cohere, and May Habib, the CEO and co-founder of Writer.

Welcome to The Post. Thanks, David. So, May, let's start with Writer. Your corporate customers read like a who's who of corporate America: Salesforce, Vanguard, L'Oreal, Accenture.

Help me understand exactly what you provide your clients. What's the product? What does it do for them? How do they use it? Yeah, great question. I know journalists love analogies, so I'm going to give you an analogy, Dave. I'm a sucker for an analogy. So your ChatGPT, your Copilot, that's your Kia. It's your finished car off the line. Your LLM is your raw metal.

And what Writer is, is your assembly line. And what we have learned over four years of doing generative AI that has been enterprise grade from day one is these large customers that we work with, they need to build a factory for AI agents and applications for their businesses.

And this is not your Henry Ford factory, right? It's not linear. It is your autonomous, totally automated factory. But all these workflows, if you're building off of generic models, really don't meet the data requirements nor the security requirements that customers have. And they're pretty generic. They're not good enough.

And so what we provide is the state of the art models plus the AI engineering that's built in for these applications to be relatively pre-built for these companies to implement. Does that make sense? Is that an analogy? - It does, it does. I'm gonna come back to you and ask for a little,

layman-esque explanation of exact use. That was the layman version. That was it. Well, look, you're talking to a liberal arts graduate here, so you're going to have to dumb it down a little bit. Martin, same question for you. If I'm out there in the business world, what can I get out of Cohere? Yeah, interesting. I have not a completely dissimilar answer to the description of writer, but maybe I'd focus on a different part of the...

of the market. So there are, I'd say, six AI labs that can create foundational frontier AI technology from scratch, not take someone else's and fine tune it, but actually create from scratch.

Cohere is one of those six; the other five are primarily focused on consumer. And they've started with chatbots, ChatGPT or Claude, or building AI into Google's services, et cetera, or X. Cohere is focused only on secure AI for business. And there are three elements that differentiate what we do.

Number one is security. So we have not taken a massive amount of money from a cloud provider in exchange for being exclusive to their cloud. We deploy everywhere and anywhere where our customers have their data, including and actually especially over 80% of our revenue is private deployment, either virtual private cloud, all the way to on-prem, all the way to air-gapped.

And so data security, data privacy is essential for certain parts of the industry, critical infrastructure like banking, insurance, healthcare, government, energy, telco, etc. Secondly, just providing models, similar to what May just said, is a little bit like providing electricity. It's hard to generate electricity. You need a nuclear power plant. We happen to have built one, but people don't really know what to do with that. So we've invested quite a bit in solutions

and applications like North for banking that we're developing with RBC, North for telco that we're deploying with STC that enables companies to have an AI agent foundry factory inside their data environment with connectors to all their extremely sensitive data inside the company that you would never in a million years send out through an API to a generic consumer chatbot.

And the third is configuration and customization. That again, enterprises, we focus on enterprises and their mission critical capabilities that differentiate them, give them alpha against their competitors or their adversaries. And they need support and help in doing that. So we help them customize and configure so that these capabilities are not generic, like an API or a chatbot that 300 million people can use.

but rather they are specifically optimized for their exact specifications. We power all the features in Oracle Fusion and NetSuite, over a hundred different use cases, each one optimized. We have created a completely unique capability for Fujitsu in Japan called Takane, which is objectively the best

LLM capability for the Japanese market that allows them to win against their competitors. So there's lots of space for lots of different players, you know, certainly the mass market generic use cases. Just like you use Google search, so do I; you use ChatGPT or Gmail, so does your competitor.

We focus on the mission-critical capabilities that will really differentiate and create value. And you mentioned AI agents, and I've heard a lot of talk and read a bit about that. Tell me how agents fit into what you can offer and how you see them developing in the near term.

Yeah, I'll steal a page out of my colleague's book here in terms of analogies. I love the analogy that someone else told me that I stole, a chap named Henry, that we help our partners and customers find needles in haystacks. The amount of lost knowledge inside enterprises or government departments or whatever is incredible. You think about how much information is there, and there's no way you're going to find it. There's no Google search inside companies.

but we can help companies find needles in haystacks. Every year those haystacks get bigger and every year the needles get more valuable, but every enterprise's haystacks are different, their data sources, and everyone cares about different needles. And so when we think about agents, we think about deploying a capability. All the yucky stuff no one writes articles about, deploying in their data environment. A regulated bank that we work with right now has 14 different risk departments.

So having a chatbot is not the answer. You need to deploy this where all those risk departments approve, the federal regulator approves. It's connected to their very sensitive data sources. It inherits the data access privileges. It makes sure there's veracity and traceability of where the information's coming from. And so you, as an analyst in the Treasury Department or as a wealth manager, can then configure your own agent, connect it to these data sources that you have access to,

to enable you to provide better advice to your clients, to make better decisions about investing, whatever the case may be. And so really we don't predefine the agent. We create the factory or the foundry very safely, securely inside the environment, configured, customized, to then enable the end users to configure the agents that will help them do their jobs more productively.

Now, May, as I understand it, you've launched an AI model whose competitive advantage is much lower cost. You were able to develop it at much lower cost than others, like, say, OpenAI. And as I understand it, and correct me if I'm wrong, you did that using synthetic data.

Explain to the audience, and to me for that matter, exactly what that means, how it worked, what were the benefits you derived by doing it that way, and what are some of the potential risks in terms of degraded performance or other potential problems?

Yeah, synthetic data definitely needs a rebrand. I mean, it doesn't sound like data you would want to use, right? It's synthetic data. But actually, it's been the key to our success across our family of models and building models that are copyright free and transparent and reliable.

And actually they are the same techniques that DeepSeek a year later wrote about in their papers. We had written about it as well. And what our approach entails is really looking at the types of data that you really want your model to know about if you're doing the kind of mission critical workflows that Martin talked about.

We help a lot of banks as well. We help a lot of healthcare companies. We help pharma companies. And the data that makes those use cases work isn't on the internet.

And so when we synthesize data for our domain specific models as well as our core models, we are really looking at the types of data that a healthcare model should have, right? It should know what a Quest Diagnostics sheet looks like. It should know what a blood draw result looks like. If it's a financial services model, it knows what everything in EDGAR looks like as well as what a fund tear sheet looks like.

And so when you synthesize data against those requirements, what you're really building, here's the rebrand, is LLM-ready data. And when

you think about everything that's on the internet today, it's content and data that was built for human consumption. But what you really need to get these models to perform is data that is built for LLM consumption. And so that is what we are synthesizing, both the content of the data, so it's original and it's not copyrighted, as well as the format. And I'll give you an example.

there might be training data for our instruct models that we've actually formatted as question-and-answer pairs. So then the model isn't having to guess what that information needs to look like when a query comes in.
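To make that concrete, here is a minimal sketch of what question-and-answer formatting for instruct training might look like, assuming a simple JSONL layout; the field names and example facts are illustrative, not Writer's actual schema.

    import json

    # Hypothetical domain facts reshaped as question/answer pairs, so the model
    # sees the same query-and-answer structure at training time that it will
    # see at inference time. Field names are illustrative only.
    qa_pairs = [
        ("What does a fund tear sheet summarize?",
         "A fund's strategy, performance, and fees on a single page."),
        ("Where are U.S. public-company filings published?",
         "In the SEC's EDGAR database."),
    ]

    with open("synthetic_qa.jsonl", "w") as f:
        for question, answer in qa_pairs:
            f.write(json.dumps({"prompt": question, "completion": answer}) + "\n")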

So that's what we mean by synthetic data, and we've been talking about it for years, and it is definitely the path forward for those types of models. That makes sense, but does using synthetic data or relying on it to a great degree increase the risk of hallucinations in your...

No, we just published on this. We created a benchmark called FailSafe, and what we showed was a huge difference in hallucination rates between the reasoning models that are all the rage, the test-time compute models, and instruct models. It goes to show just how consumer-oriented an approach the rest of the industry is taking, to the real detriment of enterprise.
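FailSafe's published methodology isn't reproduced here, but a hallucination benchmark of this kind boils down to checking whether answers stay grounded in the supplied context. A toy sketch of that idea, with a deliberately naive word-overlap heuristic standing in for a real grader:

    # Toy grounding check, not Writer's FailSafe methodology: flag answers whose
    # sentences share too little vocabulary with the source context.
    def is_grounded(answer: str, context: str, threshold: float = 0.5) -> bool:
        ctx_words = set(context.lower().split())
        for sentence in answer.split("."):
            words = [w for w in sentence.lower().split() if len(w) > 3]
            if words and sum(w in ctx_words for w in words) / len(words) < threshold:
                return False
        return True

    context = "The filing reports 12% revenue growth and stable operating margins."
    answers = [
        "The filing reports revenue growth and stable margins.",  # grounded
        "The company's CEO resigned in March.",                   # unsupported
    ]
    rate = sum(not is_grounded(a, context) for a in answers) / len(answers)
    print(f"hallucination rate: {rate:.0%}")  # 50% on this toy pair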

I think Satya was caught in kind of an unguarded moment these last couple of days when he was like, yeah, basically there's been zero impact in the enterprise of AI, right? Being quite candid. And that's absolutely what we see. We come in to enterprise customers where it's been two years, and one CIO told me they wasted $100 million on Gen AI with really nothing to show for it. So we've got a lot to work back from. And I do think agentic AI is very exciting because we're

you get just a lot more business impact for your change management

so to speak. You get to change a workflow end-to-end because agentic, and this really should be the broader definition, it's AI that can self-orchestrate between and across systems, right, and really respond to open-ended instructions. So for the first time, we've got intelligence that really responds like an expert human when it comes to making decisions on direction versus the last generation

of automation or RPA, where you really had to be quite specific on the exception handling, now there's some really amazing things that are possible. And we've had customers tell us they've done with Writer in hours what had taken them months to do in RPA. So there's a lot coming that is hopefully going to change the ROI picture in the enterprise. Right, can I add on that? Sure.

We see things very similarly. And I think what we've seen is this incredible technology first applied to consumer and then taking that and just dropping it in the enterprise. And I used to work at Google before coming to Cohere, and Google search has got to be one of the best inventions in the history of mankind. I mean, okay, the wheel, fire, and penicillin were pretty good too.

But Google search, 3 billion people are using it all the time. If you roll out a new feature, 3 billion people are using it within seconds. That obviously has not translated into enterprise. Google really has not done anything in enterprise search. Why? Because it is just so fundamentally different. If you have a massive scaled SaaS platform like Google search or YouTube where I used to work,

That is diametrically opposed to enterprise, which is more like building to building combat. Everyone's haystack is different. Everyone cares about different needles in those haystacks. And so you can't just drop those same consumer models in, as May is saying. One of the things, you know, we talk a lot about runtime as well. The data that you want, the information that you want to access in your enterprise to create value

You don't want to or you can't treat the models like a Jeopardy contestant where you make them memorize everything and shove them full of information. That's not intelligent. Humans, as intelligent beings, we've learned at university, in school, at work, to know how, where, when to look for information that we were not trained on, that we don't have in our heads. And so we teach...

our technology, our platform to do the same thing and to connect to very, very sensitive data, which is very different to what a consumer chatbot would do, and to be able to

access that data at runtime to provide the right kind of advice, answer, synthesis of a problem, reasoning multi-step through a problem, pulling that data in and then not storing it in the model, because it might be very sensitive, or way too big, making the model unwieldy, or it might have been created and updated 17 minutes ago and there's no way you're going to have a model that's trained on that.

And so that is a huge difference. That's what we do with our platform called North that enables that kind of runtime information data access. And then of course, through tool use to enable use of tools and different systems as well. So it's a similar approach here that enterprise is just fundamentally different to consumer.
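The pattern Kon is describing, retrieving fresh, access-controlled data at query time instead of baking it into model weights, can be sketched in a few lines. Everything below is a placeholder, not Cohere's North API: the roles, corpus, and keyword matching are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Document:
        text: str
        source: str               # provenance, so answers stay traceable
        allowed_roles: frozenset  # inherited data-access privileges

    # Stand-in for live enterprise connectors (risk systems, CRM, etc.).
    CORPUS = [
        Document("Q3 risk limits were raised on 2025-02-11.", "risk-db", frozenset({"analyst"})),
        Document("Client A holds 40% equities.", "crm", frozenset({"wealth_manager"})),
    ]

    def retrieve(query: str, role: str) -> list[Document]:
        # Access control is enforced before the model ever sees the data.
        visible = [d for d in CORPUS if role in d.allowed_roles]
        terms = query.lower().split()
        return [d for d in visible if any(t in d.text.lower() for t in terms)]

    def answer(query: str, role: str) -> str:
        docs = retrieve(query, role)
        if not docs:
            return "No accessible sources found."
        # A real agent would pass this context to an LLM at runtime; we just
        # return it with source tags so the provenance is visible.
        return "\n".join(f"[{d.source}] {d.text}" for d in docs)

    print(answer("current risk limits", role="analyst"))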

May, let me come back to you. I want to ask about the experience we had last month when the Chinese

company DeepSeek came out and triggered a trillion dollar global market sell-off because it performed apparently better than some much more elaborate and expensive Western alternatives. Is this at all a threat to your operations? Do you see any lessons in the experience? Was the market reaction overblown or is there something we should take away from this?

Yeah, it was very validating for our resource-efficient approach to training models. We trained models for about a million dollars, a little bit less per run, per model. And so it was great because, you know, all the folks who are, like, a little quizzically looking at us, like, you haven't raised a billion dollars. You've only raised $300 million. How could you be competitive in the long run? Right? We got great validation. The sell-off...

you know, is surprising because the revolution hasn't started yet, right? And so much that we are working towards, right, is really just starting. I mean, there's no way that you can look at the agents that we've got running and customers are using and not truly understand how everything that that company is doing today can be done with 10% of the headcount.

And it doesn't mean that folks are gonna lay off 90% of people. It means that the enterprises who really harness this technology are gonna be able to do so much more than their competitors. And that's what the data centers are for. It is not for cutting headcount.

It's for actually getting them to do useful stuff. And we're very excited about partnering with our customers and encouraging them to dream big, think big. So much of the last few years has been about these incremental productivity improvements. And we have Microsoft to blame, honestly. Who cares about summarizing an email? Why are people sending you emails that long that you need them summarized? That's not what AI is for.

And so it's so exciting to get to think about, you know, kind of big swaths of work that you just wouldn't do. We've got a customer, a fund manager, who, you know, really used to struggle with fund reporting, right? All of the fund reporting took forever. It was very manual. Hundreds of people do this.

And now they're able to automate it with AI. It's agentic. You're looking at multiple sources of structured and unstructured data. And they're personalized now. You're getting the portfolio manager's daily notes integrated, right? They're able to give non-institutional investors this highly white-glove treatment. And you just never would have done stuff like that before.

And so, you know, we're just starting, right? The revolution, not evolution, is where we're going. So we're still in the first inning of this ballgame. Yes. Martin, the Pew Research Center just came out with its first in-depth survey of public attitudes towards AI, and it found more than half of respondents, average Americans, described themselves as worried about the implications for their workplace, and only about a third were hopeful.

What do you make of that attitude? Is it a response and/or misunderstanding of May's point that we're going to be able to get by or do what we're doing now with 10% of the workers? Or do you think this reflects people's fear about their own employment prospects? Or is it something broader than that? It's a great question. I think it starts with something broader than that. And I think many

very visible people in the industry have been talking about two things almost at the same time. I'll exaggerate for effect, but they say, "We'll have AGI a week Tuesday, and then the following Friday all humanity will be destroyed by the robots that will create bioweapons and kill everyone. And so only we can save that and regulate things," and so on. And I think there's a lot of misunderstanding

maybe based more on people thinking of 1950s sci-fi movies than the true nature of this technology and what it does and doesn't do, as well as then the potential and what it can unlock. And so I think there's just a lot of uncertainty, and no one knows to whom they should listen. And there's a lot of hyperbole.

We also were very validated by DeepSeek because we've long said you don't need to throw billions and billions and billions after AGI, etc. We have raised just shy of a billion to train the foundational capabilities, but we've always said bigger and bigger is not always better. So we see it very, very similarly. I think on the jobs front, I've seen quite a few

disruptions and transformations in technology in my time, starting with the internet revolution in the mid-90s. And every single time, these have created trillions of dollars of value, economic impact, and millions and millions of jobs. Basically, it will just prevent us from having to do all the worst parts of our jobs, so we focus on the higher order, just like

A CEO used to have to ask her analysts to go and find information. They'd go and look in the research library with microfiche. They'd come back a week later and say, here are the top 10 mining companies in the Pacific Northwest. Now she just types it in her search bar and

a second later has the information, can make a decision, and there's a week that she can spend doing more valuable things. One of the pilots, one of the little things that Larry Ellison did actually a couple years ago just before they announced our partnership using our capabilities was doing some...

demos at the time facilitating patient discharge notes and things for clinicians. Doctors, nurses, radiologists spend three and a half hours a day on admin. If you can get that down to 20 minutes, that's three hours a day you've freed up of healthcare professionals. You're not going to fire a third of them. They're just going to have more time to treat patients, to train, to teach others, to sleep for the brain surgery tomorrow. So I do think a lot of this is around uncertainty.

And a lot of it maybe is self-inflicted by people sort of being a bit of the pyromaniac firefighter, creating problems that then they say that we're the only ones to solve. - I think we've just got a couple minutes left, but May, I wanna use this as an opportunity to bring in perhaps a little more optimistic opinion survey that we've done of our audience here and people paying attention online.

And we've asked them, and we've got the results here, whether AI is gonna have a positive or negative impact on their daily life. - This is a very educated audience. - Well, it goes without saying. I mean, you're at the Washington Post, for God's sakes.

And I don't know if everybody can read this, but the good news is 114 respondents said positive. Only 26 said negative, so apparently no journalists responded to the survey. And 36 were neutral. So now we've got the flip side of what Pew found. Do you read that as this is just a more knowledgeable audience or a more...

optimistic audience? How do you compare those or how do you understand those two very different portraits? It could be an audience that's just got more access. One of the things that we see is our customers where they've got a CEO who's out in front talking about how they're going to grow the business, how they're going to take market share, how they're going to launch new products.

and really painting a vision for the future that's exciting for people, those folks tend to be much more engaged in learning new AI skills. We have got the most democratizing technology that we've ever invented on our hands, right? All of these secret Einsteins in the company

that didn't have access to the tools to build can now build things that can be completely trajectory changing for the business. And that's the kind of vision that folks need to hear. And when folks hear that vision, they see a space and a part for themselves in it. But if the executive is totally hands off and has asked IT to figure something out and IT's bought Copilot, then good luck. You're in the 36 at best, neutral.

Again, we've only got a couple minutes left, Martin, so I'm going to ask you to be brief on this. And you're entirely correct when you say every iteration of technology creates more value, ultimately creates more jobs, jobs that we couldn't even imagine before the technology lands. But of course, the pace of disruption can be very tough for individuals in specific occupations, specific cohorts, geographies, what have you.

Looking at the experience that we went through with the so-called China shock, when imports from China and automation decimated manufacturing communities across parts of the country, do you think we have the labor market policies that we need to deal with what's ahead of us? Do we have the social safety net, the retraining, all the programs that we need that will help people stay positive about new technology like AI?

I think we do. I think we do. And I don't know whether it's policy or social safety nets. I do think that we just have a very capable workforce overall. I think, as May was saying, these are very democratizing capabilities. And what's really exciting is that these tools, these capabilities, helping people find needles in haystacks, will actually help the, let's say, less...

less productive people, even more than the most productive. If I use another analogy, my French is not fantastic. If I had to write a business letter in French, it would take me a long time to do that compared to my mother who's totally fluent. If I had a tool to help do a draft of that, it's going to help both of us, but it's going to help me a lot more than my mother. Yeah.

And so you think about that in the workforce. It's actually gonna, I believe it can be a great equalizer. I think it can close the skills gap because you're giving tools to people that maybe don't have a double Yale degree but can still access and pull together things and make incredible contributions. - Great. May, in 30 seconds, you're gonna leave us on a hopeful, upbeat note. If we come back five years from now and we have not the same discussion but a

a new version of this discussion. What do you think will be different five years from now? What are you most hopeful about? - Yeah, I think GDP per capita in the US could hit a million dollars.

I know that sounds crazy, but if you think... - Not in five years, surely. - Not in five years, but in five years, we're going to be able to, I think, really prove out that AI is going to be the first technology that actually has us working fewer hours, but with much, much higher output. We've had 30 years of technology that has come to knowledge workers, and it's actually had us working more, not less.

Well, I'm going to hold you to that. I like that prediction, but I'm going to hold you to that. Well, thanks to my panelists for a great discussion, which I hope you've enjoyed. And if we can give them a round of applause, we'll move on to our next.

The following segment was produced and paid for by Washington Post Live event sponsor. The Washington Post newsroom was not involved in the production of this content. Good evening, everyone. My name is Kathleen Koch. I'm a bestselling author and a longtime Washington correspondent.

And I don't think I have to tell you that AI is really increasingly infiltrating every aspect of our lives. And it has come so far so fast, in part because of open source technology. Now the advocates of open source say that it not only speeds innovation, but it improves collaboration and improves transparency.

The best way to really dig into the real-world benefits of open source AI and the issues with scaling it would be to talk to someone who is using open source and then someone who has also created an open source model. So please join me in welcoming again to the stage Vineet Khosla, CTO of the Washington Post, and Sai Chowdhury, Director of Business Development for AI Partnerships at Meta. So thank you for joining us.

Vineet, I'd like to start with you. You, at the beginning of the program, you described Ask The Post AI, and that's this new generative AI that you created using open source technology. Talk to us a little bit more about how that's going, and why did you decide to go with this, again, this platform where you've got AI answering your readers' questions? Yeah.

When you look at what news does, in my view it does two things. It tells us what is important. Every morning you wake up, you open your app, you take out your newspaper, and then you read it to know why it is important, right?

Increasingly, we have seen over the years the what has gone to social media, WhatsApp, TikTok, Instagram. They already tell you what's trending, what's happening. But people still come to news to understand why. Why did that happen? And what we are now seeing in the world of Gen AI, why it is important has become a question they're asking directly.

So we want to meet our consumers where they are in their journey. They know what they want to know, right? And they come to us and they ask a question, and we want to be the one who gives them the answer why. So that's why we built it. And your second part of like how it's going: it's been a few months now, and we've had about a million and a half questions asked. And one of the interesting things that is emerging is there is no pattern.

People's curiosity is very natural and very different. So there aren't big buckets I can put things in; we see interesting questions come in and get answered. And you created it using open source? We created it using Llama, Meta's open source model. We tried a lot of different open source models as we built our architecture, and frankly Llama's been working best for us so far. Not that they asked me to say that. Thank you.

So, Sai, this marks the two-year anniversary of Llama's introduction. Almost to the week. Right. Almost to the week. And it's done pretty well. It's been downloaded, I understand, an average of one million times a day. So why are organizations choosing open AI models over closed ones?

Yeah. So I think it's really important to understand that open models follow a little bit of the DNA of open source software. If you think of what a model is, an open model really means an open file. Just like sometimes you click on a file that an app doesn't know how to open and it looks like ones and zeros, an open model is really a file. It has to be interpreted and run somewhere, but it's like a really complex file itself.

And so there's really three reasons why we see enterprise and public sector actually utilizing open models. The first is if it's important that your data stays private, it's really important to use the open model because you have the flexibility, like Vineet's team has the flexibility to actually think about it, use the file, if I might use that analogy,

behind a cloud API or use-- and you know-- And what's an API for those who don't know? Behind a cloud portal, let's say. And you can just use your credit card and get access to it, almost like your consumer thing like ChatGPT. You can use it in the same way for your business use.

Another approach would be, well, I might be renting some compute in the Cloud, but I can use my model over there as well. Or I might need, for privacy reasons or regulatory reasons, to run servers on-prem; I could run the model here. So the actual place where your data is, is where you can run the model. So that's one really important leverage that open models

open up for the engineering team and the product team. A second area is customization.

So, you no longer have to go to one place to actually use kind of a generic model. You can now take this model and customize it on your data. You can give it the nuance of what shopping at your store might be different than what an airline industry nuance might be, etc., or what news might be and the why around the news.

This customization is literally what you're doing is you're taking that file, that model, and fine-tuning it. Well, what does fine-tuning mean? You're actually further training it based on your data. So you have slightly different ones and zeros in the model, but that customization is very easy for you to do. And then the third area is really cost.

A really good example is LinkedIn, who have actually moved away from closed models to open models. By moving to Llama, they showcased that it was six times cheaper for them to move to essentially even a customized, in this case, Llama model versus their proprietary model. And so whether it's for data privacy, whether it's for customization or cost,

The fact that you have that file available for you to use and choose where you want to run it, how you want to run it, is really the powerful thing here.
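Concretely, "the model is a file" means you can load the weights on your own hardware and keep training them on private data. A minimal sketch using the Hugging Face transformers library; the checkpoint path and the one-line training corpus are placeholders, and a real fine-tune would use far more data and a proper evaluation loop.

    from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

    path = "./llama-weights"  # open-weights checkpoint sitting on your own servers
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(path)

    # Tiny illustrative corpus; in practice, private domain data that never
    # leaves your infrastructure.
    texts = ["A fund tear sheet summarizes strategy, performance, and fees."]
    enc = tokenizer(texts, truncation=True, padding=True)
    train_data = [{"input_ids": ids, "attention_mask": m, "labels": ids}
                  for ids, m in zip(enc["input_ids"], enc["attention_mask"])]

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1),
        train_dataset=train_data,
    ).train()                                # "further training it based on your data"
    model.save_pretrained("./llama-custom")  # the result is still just a file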

Vineet, I would assume that customization is really important for you here at The Washington Post. Talk to us about how you see AI being used in journalism as you look to the future. And I'm talking about not only from the coverage perspective, but also what you're doing now with it, so, you know, interacting with your audience. Yeah. The journey for AI at the Washington Post actually started before I joined.

We helped as a team, as a company, craft our policy, and we said we want AI everywhere, and there are three pillars that we are going to focus on. The first pillar is consumer. You see that when you use Ask The Post AI and when you see summaries.

The second pillar where we focus is our creation, right? Our journalists. There are so many use cases Gen AI opens up. For example, you have a large data set of audio transcription from someone, or you've got a lot of video files and you want to figure out what's going on in them. So we are empowering our journalists by putting AI inside of our tools, inside of our CMS.

And then finally, the third pillar we said is, we actually want AI everywhere, even in the business of doing our business. And this example still blows my mind: the same LLM technology which is powering Ask The Post AI and giving you answers

is also classifying our customer care tickets when you have a problem with your billing, when you send us an email. It's the same LLM which is now also working to say, oh, this is an important thing, deal with it first. So we have gone down these three pillars and put AI everywhere in our company.
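The ticket-triage use he mentions is, in practice, a few lines of prompting. A hedged sketch against a generic OpenAI-compatible chat endpoint (a common way to serve Llama locally); the model name and priority labels are invented for illustration, not the Post's actual pipeline.

    from openai import OpenAI

    client = OpenAI()  # any OpenAI-compatible endpoint, e.g. a locally served Llama

    def triage(ticket: str) -> str:
        # The same LLM that answers reader questions can rank support tickets.
        prompt = ("Classify this customer-care ticket as HIGH, MEDIUM, or LOW "
                  "priority. Billing problems are HIGH. Reply with one word.\n\n" + ticket)
        resp = client.chat.completions.create(
            model="llama-3-8b-instruct",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

    print(triage("I was charged twice for my subscription this month."))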

Sai, what are the most significant benefits of offering Llama as an open source platform? And how does that really fit in with Meta's broader AI strategy? Yeah, I mean, I think everyone's heard Mark talk about the fact that we are very, very much behind democratizing AI, making the technology available for large and small enterprises and the public sector, and even national security applications everywhere, right?

But the question, I get asked this question a lot, like, well, what is in it for Meta? You spend all this money training these models, making sure that they're reliable and safe, and then you put it out there, and then what is in it for you? Well, guess what? Just like with open source software, open models and open source models actually do help Meta quite a bit. So think about it this way.

These models sometimes hallucinate on the corner cases. Those hallucinations get reported by the large enterprises that we work with and by small developers all the time; large enterprises like Goldman Sachs and Spotify who are using Llama contribute to reporting those errors, which we can actually fix in the next version of the model. Guess what? We're one of the largest consumers of the same models in our own consumer and business applications, right?

So that's one really good aspect. Second is we see really innovative engineers all around the world contributing to make the actual model better. When we released Llama 2, it had a context window, that's kind of geek talk for the amount of stuff that you can ask it, essentially the text or the images you can throw at it at one time. That's really the window, how big the window is. And within two weeks,

engineers not at Meta had figured out a way to actually increase the context window from what we released. Did we incorporate those changes into Llama 3? You bet we did, right? And we benefited from that. And the last thing, this cannot be overstated:

We work very closely with the silicon industry: very, very important companies like NVIDIA and AMD and Intel on the data center, Qualcomm on mobile. Because the models aren't just being used by Meta, they work to optimize them, to make sure that those Llama models, the open source models, work as well and as performantly as they can on their silicon.

Guess what? We buy a lot of silicon for ourselves, and we also run our apps on phones all around the world. And so in each of these examples, open sourcing the models is actually helping Meta, as we run the models in both our consumer services, like Meta AI, which is our AI agent, as well as our B2B products, like our advertising tools.

Vineet, covering the ever-changing news environment that we have today is not cheap. And now you've got this AI, these solutions that you've got to build, you've got to maintain. How does the Post balance those expenses? Oh, yeah. That's a good question because we have to be very frugal, right?

Our entire industry is under quite a bit of financial pressure, and there are news organizations that love to report on our financials too. But the point being, we have to be very frugal with our spend, and this is where open source really helps us. The numbers we're talking about, like 6x cheaper, those pan out for us, right? So what does that mean for the Washington Post?

We can empower 6x more use cases if the cost is 6x cheaper, right? We are in a position where we can control our spend, and it's actually not huge when you start using open source models, but you empower people to do a whole lot more, and hopefully, on the other end of it, it ends up making us even more revenue. So it's very important for us to be frugal, and we're doing the best we can.

Sai, how do you see the future of AI playing out? Because, I mean, we hear every day it's going to have a huge impact on everything from economic competitiveness to national security to our daily lives. Yeah, I mean, what Vineet talked about really holds for enterprises and companies like the Washington Post, as well as for public sector applications. I think productivity goes up a lot if you use these advanced, powerful tools.

I mean, there's not necessarily going to be this one AI. There'll be multiple types of AI-enabled experiences and agents and things like that that employees will use. It'll just increase the productivity velocity in a way. And when you hear about companies moving faster, it's those who are going to be using the most recent tools. They just happen to be these AI engines, let's call it that, these AI

tools to actually boost productivity. And I think that's on the commercial side, I think that's going to play out just like what you were talking about. In the public sector side, I think we're going to see also quite a bit of usage of AI, both in non-national security and national security experiences.

If you think about whether you're working out logistics and things like that, this is a big data problem at the end of the day. And all this data is there for different ministries and departments to actually utilize and make better decisions on whatever policy that they're acting on.

Now, all the way through to what we think about with AI in actual modern-day types of warfare, like cybersecurity, et cetera. And so it can be everything very mundane, from logistics all the way to cybersecurity, all the way to your billing and making sure that you can talk to your state school when you're applying for federal student loans, that kind of thing. And so I think we're going to see a lot of that. In the final minute or so that we have left, let's do a little lightning round.

If you turn the clock ahead five years, we're sitting here, what will be the one thing that companies will say, ah, we wish we'd done that differently when it comes to AI? What will it be?

For me, and I'm speaking as a developer here, the fantastic thing is AI is not a tech like cloud or databases. It is unique in the sense that it's a programming language and a product both. My Gen AI team constantly reminds me when I am trying to write software, they're like, no, you should write some prompts and get the AI to do it. So companies need to adopt it at a much faster pace and at a much deeper level to extract the benefits out of AI.
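His "write prompts, not software" point in miniature, with the same caveat as before: the endpoint and model name are placeholders. A task that once meant bespoke parsing code becomes a one-line instruction to a model.

    from openai import OpenAI

    client = OpenAI()  # any OpenAI-compatible endpoint

    def extract_dates(text: str) -> str:
        # The "program" is a natural-language instruction instead of a regex.
        resp = client.chat.completions.create(
            model="llama-3-8b-instruct",  # placeholder model name
            messages=[{"role": "user",
                       "content": "List every date mentioned in ISO format, one per line:\n" + text}],
        )
        return resp.choices[0].message.content

    print(extract_dates("The hearing moved from March 3rd to April 12th, 2025."))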

Sai. I think the biggest piece of advice that I have seen where companies and public sector applications have been very successful is to start small and actually launch it into production. There are so many projects that I've seen that utilize models, whether closed or open models like Llama, but then do a proof of concept, review it with management, and then stop right there.

That's not the way to learn. Actually, by actually launching it, whether it's a B2B or internal or a B2C, launch it, learn, iterate. And you would do that for a lot of different things. AI is no different. And that's how we're harnessing that same launch, learn, and iterate kind of cycle, even at Meta.

Vineet Khosla, CTO of The Washington Post, and Sai Chowdhury, Director of Business Development for AI Partnerships at Meta. Thank you so much for joining us for this great discussion. And if you would all stay put, my friends, The Washington Post will be right back.

And now, back to Washington Post Live. Well, hello, and thank you so much for joining us today at Washington Post Live. I'm Cat Zakrzewski, a White House reporter here at the Washington Post, and I spent the last decade covering technology. My guest today is no stranger to the Post Live stage. This is Congressman Jay Obernolte from California, and I'm really looking forward to a discussion about artificial intelligence with him today. Welcome to Post Live. Well, thank you very much.

I'm so excited to be here. We get to have a discussion about my favorite topics. Well, so on that point about it being your favorite topic, you really recognized the significance of AI before many people. Almost 30 years ago, you pursued a graduate degree in artificial intelligence.

Can you tell us a little bit about what first made you interested in AI and how you decided to study it? When I was in fifth grade, my father brought an Apple II computer home from work, and he gave me a book on how to teach yourself programming. And that catalyzed a lifelong love of computer science and of programming in me that still exists to this day. So when I was charting my course,

through life, I had decided that I wanted to be a researcher in artificial intelligence because I looked at AI, which was even then kind of a burgeoning new field. And I said, I want to be a part of that. And at the time, the only way to do that was in academia because there was no research being done in commercial settings.

So I went to Caltech, I got a degree in computer engineering and I was working as you say on my doctorate in AI at UCLA doing some of the early research in machine vision and natural language processing when my side hustle at the time

which I only did to make money to put myself through grad school, was writing video game software. And that took off, I had a hit game, and that kind of deflected me out of a career in academia into a career in business. So I ran a video game development studio for 30 years.

Before we get to the tough questions about AI, what's the favorite video game that you've developed? Oh, my favorite that I've developed? Well, okay, that's a loaded question. It's like asking a father his favorite child, right? Because we've done some big games. We did NCAA college football for EA Sports for a few years.

We've done some high-profile games. We sold six million copies of a game called Game Party for the Nintendo Wii. But my favorite games are the ones that I thought were great games that actually didn't sell well. So we had a few of those, and we have one that was called Mojo that was an action puzzle game with a rolling ball mechanic that I designed, and we've tried to launch it three times, and it's been an abject failure all three times. But I still have hope.

Eternal optimist, as you said backstage. My oldest son still runs the company. Our best-selling game right now is the official PBA-licensed bowling game, PBA Pro Bowling, and we just launched, a few months ago, the world's first professional pickleball game. So we have a license from the PPA, the Professional Pickleball Association, and we motion-captured all the top professionals. I don't know if the world is ready for pickleball video games. We're gonna find out.

Well, as you're figuring out if the world is ready for pickleball, we're also talking about if the world is ready for artificial intelligence. I just was at the summit in Paris where Vice President Vance gave his speech about the future of AI policy. And he was quite critical of the approach that the European Union has taken to regulating technologies. He also really presented an America first vision for AI innovation moving forward.

Do you agree with this approach? Yeah, absolutely I do. And I think he was right to point out the differences between the regulatory approach that the United States has taken with respect to AI and the approach that Europe has taken. And if you want to see the wisdom of what the U.S. is doing, just look at the way that the passage of the AI Act in the EU

has catalyzed a flight of not only investor capital, but also of technical talent out of Europe. And that is something that is not going to serve them well. So when I was privileged to lead the AI task force in the House last year, which was a bipartisan group of 24 really talented members, one of the conclusions that we came to in our report is the wisdom of embracing sectoral regulation. So

NIST came out with a report on managing the risks of AI about 18 months ago, and it has since been acknowledged as probably the most forward-thinking document on AI risks in the world. And the point that NIST makes in their report is that the risks of AI are highly contextual, so it matters very much what you do with the AI when you're determining what the risks are. So you could have

an algorithm that is completely benign in one usage and completely unacceptably risky in another usage. So that demonstrates the wisdom of including the context when you regulate. And that's what the sectoral regulators do. You have people at the FDA who are already processing over a thousand applications for the use of AI in medical devices, and hundreds of those devices are on shelves,

which is a pretty consequential, high-risk use, to NHTSA, which is regulating the use of AI in autonomous driving software. The FAA is regulating the use of AI in avionics and aircraft. So we're already having to navigate this space in a lot of different contexts. And we're doing it pretty successfully, by the feedback that we got from just about all the corners of the stakeholder universe.

And Congressman, you just brought up NIST, so I want to ask you a little bit about the National Institute of Standards and Technology. It houses the AI Safety Institute. Do you think that the Trump administration should continue to have the AI Safety Institute moving forward? Well, we certainly need an institute to set standards for AI and to create testing and evaluation methodologies for AI. And we make this very clear in the task force report. But

We also are clear that those standards should be non-compulsory, which is what NIST does. You know, NIST is not a regulator. NIST is a standard creator, and we think the role of

of NIST in that respect should be to create a toolbox that sectoral regulators can use when they're making decisions about how to appropriately regulate AI within their sectoral spaces. So testing and evaluations are a big part of that. Creating a pool of technical talent is a big part of that. And regulatory sandboxes for the testing of potentially malicious AI is a part of that. And so we think NIST is a really appropriate place for that to be housed.

Given how important that work is, we've had some reports about what might happen to the probationary employees at NIST. Many work in the AI Safety Institute. Are you concerned that DOGE could impact the amount of talent we have working on AI evaluations? In the short term, yes. In the long term, no. I think it's pretty clear, and both DOGE and the Trump administration have been pretty clear, that the

action on probationary employees is a temporary step. This is something a new administration is doing to get its arms around

what different employees in the executive branch are doing. And I think as time goes by, you'll see that probationary employees fill a valuable function. And once a determination is made about who's doing essential work and who's not, and how they fit into the administration's plan, I think you'll see some of those problems rectified. - And I wanna come back to DOGE, but first I wanna bring in a question that we have from the audience. Timothy Whalen from Virginia asks,

The Trump administration highlights the need for the United States to lead the world in AI. The Biden administration also stated this goal, but the Biden AI EOs have been rescinded. What do you see as the key differences in philosophies for how to bring the United States to lead the world in AI? Well, I think Vice President Vance articulated that pretty compellingly at the Paris Summit recently.

I am very comforted by the fact that President Trump is taking this so seriously, appointing David Sacks as the AI czar and Michael Kratsios as the head of OSTP. Michael is just an amazingly brilliant guy and shares a lot of my thoughts about the importance of

the United States continuing to lead the world in AI innovation and deployment. I'm also comforted by the fact that the Trump administration has put a 180-day shot clock on developing a replacement EO, and I think that with all of these thoughtful people looking at the issue, they're going to get to something that really is purpose-fit for making sure that the U.S. remains at the forefront. I know the Biden EO

had a lot of stuff in there that I was vocally against. I think in particular the way that it invoked the Defense Production Act to compel

the disclosure of trade secrets from US companies to the government was very inappropriate. I mean, the disclosure itself was inappropriate, but also inappropriate was the way that the DPA was used, because I don't think Congress ever intended it to be used in that way. So, you know, that was something that I think deserved to be rescinded. But I mean, I think you'll see some really thoughtful work go into this, and I'm hopeful that when we get the new EO, that that'll assuage a lot of fears.

But if you don't use the Defense Production Act, how would you ensure that companies are submitting these powerful AI models for testing and evaluation? Well, I mean, I think it fits into what your philosophy of regulation is. And in this country, we have a long history of regulating outcomes, not regulating tools. So AI is a very powerful tool, but at the end of the day, it is just a tool. And if you concentrate on outcomes, you don't have to worry as much about the tools. And one example that I use a lot is

the potential malicious use of AI for cyber fraud and cyber theft. So, you know, in the pantheon of malicious uses of AI, that's one of the ones that we at the task force worried the most about because

we say bad actors are gonna bad, and they're gonna bad more productively with AI than without AI because it's such a powerful tool for enhancing productivity. So we're already seeing these malicious uses in terms of phishing attacks and identity theft proliferate. And so it's something that we're very worried about. But we don't need a brand new law that says it's illegal to use AI to steal people's money because it's already illegal to steal people's money. What we do need is we need

to spin up law enforcement with the tools and resources that are required to counter the threat. And that includes a lot of different things, but none of that should distract away from the fact that we already have a legal framework to deal with that usage. And you can take that and multiply it across a lot of different usages and contexts and it holds true. So when you focus on that, I think it really more sharply defines what the government needs to know to do its job.

And so I guess on that point, we have Michael Kratsios, David Sacks, OSTP working on this AI action plan. Given the fact that you say we don't need new laws in a lot of instances, what is Congress's role going to be moving forward when it comes to regulating artificial intelligence? Right. Well, Congress has a very important role. And

as we were just talking, I was honored to lead the House AI Task Force last year. Our defined task was, by the end of the year, the creation of a report that detailed a proposed federal regulatory framework for AI and the path to get to it. And we produced a report in December that, not to be immodest, but I think was one of the furthest reaching policy documents on AI that's been produced by any

legislative body in the world. It was 270 pages of really detailed policy. We made over 60 key findings and over 80 recommendations. And we intended those recommendations to be a checklist for future congresses.

Starting with the 119th, the current Congress, we are hopeful that this Congress will start following that list and checking off those different tasks. And we think that there's some low-hanging fruit, some really obvious targets, right now. For example, I've been very vocal about my belief that we need to pass the Create AI Act. That was something that I was helping to lead last year, and I am reintroducing that legislation here shortly to establish

the National AI Research Resource to make sure that cutting edge AI research continues to be done in academic settings as well as commercial settings. We're getting to the point where it's so expensive to do cutting edge research that academic institutions might not be able to do it anymore, which would give up

all of the benefits that go along with academic research. I'm talking about transparency, publication, peer review, literally being able to stand on the shoulders of giants when you're leveraging other people's research in your own research.

And none of that occurs when you are developing AI in a commercial setting behind closed doors. So we're hoping to get that across the finish line this year. Another big concern that I think is definitely achievable in the near future is to address the use of AI to create non-consensual intimate imagery.

because we've got high schools across the country dealing with this; you probably couldn't come up with a high school that hasn't had an instance of a student using AI to superimpose another student's face on a pornographic body, which has devastating consequences for kids at that age. That's something we should all be able to agree

is not okay. And I think that we had a couple of attempts at solving that problem last year, some in this House, some in the Senate. I think that we should be able to get that across the finish line in the near future. So those are just a couple of the first steps.

But what's changed now? Because both of those pieces of legislation that you talked about were in play last year. There were areas where there was bipartisan consensus, as you mentioned. Why will it be different now? How will that get through? I think it's more accurate to say we just ran out of legislative runway last year.

We had a two year Congress, obviously legislation expires at the end of the Congress unless it's been approved and we just ran out of time to get them done. But I'm really optimistic actually about our ability to pass legislation and establish Congress as an entity that's capable of doing what needs to be done in this space.

I'm very encouraged, and I think everyone ought to be, by the fact that our task force was broadly bipartisan, equally split between Democrats and Republicans. We unanimously approved the task force report, and that included approval by the Speaker and his staff, and the Minority Leader and his staff.

There was a lot that was left on the cutting-room floor. I told you we had 270 pages and 80-plus recommendations. There were a lot of things that we took out of the report because some people had concerns about them and we didn't have unanimous agreement. But the fact that we had unanimous agreement on 80-plus recommendations should give everyone some confidence that this is something that we're capable of doing and something we're capable of reaching agreement on.

- At that point, we actually asked the audience if they think that comprehensive AI legislation is likely in 2025. - Oh no, really? - And so here are our results. - Not a lot of optimism, come on, really? - So that's 26 yeses and 156 nos. So what's your response to that, Congressman? - Here's my response.

Obviously I have a political job, and as a political person, and I think this last election was my 14th, I would object to your use of the word "comprehensive" in your survey. The words we use matter. And you know what, I might actually have voted no on this, because we make it very clear in our task force report that we need to embrace the concept of incrementalism.

We don't think that AI regulation in the United States should look like it did in Europe: one 3,000-page bill. That's not a good fit for our system, both from a regulatory standpoint and from a legislative standpoint. We view the job of doing what needs to be done as a lot of little bite-sized pieces. And each one of those recommendations we made in our task force report is one of those little bite-sized pieces. And I just gave you a couple of them.

So if you would rephrase that question as, what's the likelihood of Congress doing substantial work in the AI regulatory space, I would hope everyone would say yes. - You know, I just wanted to make sure we got a chance to come back to DOGE, because you are a member of the DOGE Caucus. - Yes, very uncontroversial. We should talk about that.

- So on that point, what grade would you give DOGE's work so far? - Well, I think it's early days. First of all, to put it in context, last year we borrowed $2 trillion out of $7 trillion of federal spending. So we ran a federal deficit of almost 30%. And this is in an era of relative economic prosperity. Normally, when countries run deficits that large, they're recovering from some traumatic event, like a

military conflict or a deep recession. The pandemic is a good example, but we're still spending at levels above what we were spending in the middle of the pandemic. So this is going to drive an existential crisis for our country.

The spending on interest on our national debt passed national defense for the first time in the history of our country. This year it will pass Medicare spending. Within a couple of years it will be the single biggest category of spending in all of government. So I mean, this is something that needs to be addressed. Everyone agrees that it needs to be addressed. We disagree on how it needs to be addressed. So DOGE is coming in, and the way that they're addressing it is by looking for

bureaucracy that can be cut, wasteful spending that can be cut, mistaken payments that can be cut. Elon Musk has said he thinks that $1 trillion of that $7 trillion in federal spending was either improper payments, mistaken payments, or things where there's fraud, not within the government but outside the government. So I'm interested to see what they're going to do, and they've turned up some really shocking things so far. But I'd say it's too early to judge how successful they've been.

- On that point, though, I mean, what we're seeing is Elon Musk really applying the strategy that he's used in the private sector to government.

Do you agree with that approach? Does "move fast and break things" work in DC? - Well, in some cases it does, in some cases it doesn't. I mean, we have safety-critical systems that we can't break. National security, we can't break that. The FAA, our air traffic control, we can't break that. We've got healthcare services people depend on, we can't break that. Our veterans are dependent on us for the care that they've earned, we can't break that.

You have to be a little bit cautious. But I think anyone should also be cautious about criticizing someone who came into Twitter and said, you know what, I think we could run this company with 20% of the people that were there. And I'll admit, I was among the skeptics. I read about that in the news and I thought, there is no way. And yet, from a user perspective, people didn't even notice. So we'll see.

We'll see how they do. - I think the audience is a little skeptical that people didn't notice. - Well, I mean, certainly, I don't mean to minimize the turmoil it caused within the company, but from the perspective of a user, right, life went on. The service continued to operate. So we'll see if those same efficiencies can be brought to government.

- But certainly the service is incredibly different today than it was when he first took it over more than two years ago. - Yeah, I don't know the answer to your question, but I'm a Twitter user. I mean, we have an official account and I communicate with my constituents on the platform. And from my point of view, life went on, right? We still post, people still post in response. We still have that bidirectional communication system.

And to be fair, it was a money-losing proposition and now it's not.

It's gonna be interesting to see if that same philosophy holds true for the federal government. - Well, to be clear, we don't know the financial performance of the company and whether it's-- - Oh, right, 'cause he took it private. You're right, uh-huh, yep. - So, just to fact-check that. But I mean, big picture, you mentioned the need for caution. When you're dealing with these services that are critical to national security, that impact people's lives, have you communicated that need for caution to the White House or to Musk?

Sure, we have ongoing discussions, and I'll tell you, one very nice thing about being in the same political party is that you can actually back-channel those conversations. We had a recent one about national parks, where we want to make sure that people continue to have access to the NPS. We're coming into peak season for visitation. I've got Joshua Tree National Park in my district, and we want to make sure that

the public's ability to enjoy the park is not impacted by these mass layoffs. And so we brought that to the attention of the administration, and we got some people rehired, and that was a good thing. And I would hope that continues; I've talked to my colleagues in our conference, and that's an experience that a lot of us have had.

- Well, unfortunately, we are out of time for today, but I have many more questions about DOGE and artificial intelligence and would love to keep this going. Congressman, thank you so much for joining us today. - Absolutely. Thanks, everyone. It was fun. - Thanks for listening. For more information on our upcoming programs, go to WashingtonPostLive.com.

You listen because you know the power of good journalism. And The Washington Post is there for you 24/7. When you become a Washington Post subscriber, you get exclusive reporting you can't find anywhere else. You also get sharp advice columns, delicious recipes, TV and music reviews, and so much more. Right now, you can get all of that for just $4 every four weeks, and that rate lasts for an entire year.

After that, it's just $12 every four weeks and you can cancel any time. Add to your knowledge and discover all the Post has to offer. Go to WashingtonPost.com slash subscribe. That's WashingtonPost.com slash subscribe.