
Global trust in news, technological change and the future of journalism

2024/12/10

Washington Post Live

People
Bill Gross
Faye D'Souza
Paul Brody
Topics
Bill Gross: AI should be used to answer people's questions, but the answers must be sourced and built from human-created content. Generative AI should share revenue with content creators, just as other digital media platforms do. News organizations can benefit from AI, provided revenue is shared with journalists. Journalists can benefit from AI models in two ways: one-time licensing and pay-per-use. GIST.ai is an AI-based search engine that draws on content from licensed sources and shares 50% of its revenue with content creators. Showing where information comes from ensures human oversight of AI-generated content and helps users judge its truthfulness and reliability, reducing the spread of misinformation. Presenting multiple viewpoints with labeled sources helps users escape echo chambers and promotes a fairer exchange of ideas. Ensuring that content creators have an incentive to produce high-quality work is crucial to humanity's future.

Faye D'Souza: AI is currently used mainly for routine newsroom editing tasks, but the biggest challenge it brings is AI-generated misinformation. Beatroot Media's goal is to deliver unbiased, truthful news to younger audiences. Traditional media organizations should learn to speak directly to young audiences and to value transparency and honesty. The way people get their news has changed, and news organizations must adapt to ever-shorter attention spans. Traditional media have lost public trust through a lack of transparency and through conflicts of interest. Rebuilding that trust requires transparency, authenticity, and high-quality reporting. AI tools are needed to fight misinformation, and it will take a joint effort from all sides. Political pressure is shifting toward spreading information through podcasters and content creators who are not journalists. News is like vegetables: not always exciting, but essential to the functioning of society.

Paul Brody: Blockchain technology can create tamper-proof digital records that help identify fake news and misinformation. It can give news articles digital fingerprints that verify their authenticity. A collaboration with the Italian news agency ANSA uses blockchain to verify that news is genuine. Going forward, blockchain will play an important role in identifying AI-generated content and distinguishing it from human-generated content.

Deep Dive

Key Insights

What is ProRata.ai's approach to ethical AI in media?

ProRata.ai focuses on sourcing answers from content creators who give explicit permission for their work to be used. The platform ensures transparency by showing where answers come from and shares 50% of the revenue with creators, similar to models like Spotify and YouTube.

Why is revenue sharing important in generative AI according to Bill Gross?

Revenue sharing ensures creators are compensated for their work, making the ecosystem sustainable. Without it, generative AI risks exploiting creators, similar to stealing content without payment, which undermines the motivation for producing quality content.

How does ProRata.ai address the issue of AI hallucinations?

ProRata.ai ensures answers are sourced from verified content creators, reducing the risk of AI-generated inaccuracies. By referencing original human input, the platform minimizes the likelihood of fabricated or incorrect information.

What is GIST.ai and how does it differ from other AI search engines?

GIST.ai is an AI-based search engine that generates answers from 400 sources with explicit permission. It shares 50% of its revenue with content creators and shows users where the answers come from, enhancing credibility and transparency.

How does blockchain technology help combat fake news?

Blockchain creates tamper-proof digital records by distributing data across thousands of locations. This makes it difficult to alter information, allowing users to verify the authenticity of content and restore trust in journalism.

What challenges does AI pose for detecting misinformation?

AI-generated misinformation is advancing faster than detection technologies. The lack of investment in detection tools makes it difficult to identify fake content, posing a significant challenge for journalists and media organizations.

What is the mission behind Faye D'Souza's Beatroot Media?

Beatroot Media aims to provide unbiased, non-glamorized news directly to audiences, especially younger ones, using social media and apps. It focuses on transparency, fact-checking, and allowing readers to form their own opinions without manipulation.

Why do younger audiences prefer individual news sources over traditional brands?

Younger audiences value authenticity and transparency, often connecting more with individuals than brands. They appreciate raw, honest information and personal accountability, such as public apologies for mistakes, which builds trust and credibility.

How does the shortening attention span affect news consumption?

With limited time, audiences often skim headlines and blurbs, increasing the risk of misleading or exaggerated information. News organizations must adapt by delivering concise, accurate content that respects the audience's time constraints.

What steps can news organizations take to regain public trust?

Transparency, authenticity, and consistent fact-checking are key. By delivering honest, double-checked news and respecting the audience's intelligence, organizations can rebuild trust brick by brick, story by story.

Chapters
Bill Gross discusses the ethical use of AI in media, emphasizing the importance of revenue sharing with content creators. He introduces ProRata's technology for tracking AI content usage and enabling fair compensation for creators.
  • Generative AI often produces inaccurate information.
  • ProRata ensures AI answers are sourced and references original content.
  • A 50% revenue share with creators is advocated for ethical AI usage.

Transcript


This Washington Post Live podcast is sponsored by the EY organization, EY Blockchain, offering privacy-enabled SaaS solutions, giving enterprises the confidence to transact using public blockchains.

You're listening to a podcast from Washington Post Live, bringing the newsroom to you live. Good morning and welcome to Washington Post Live. I'm Jason Rezaian. I'm the director of Press Freedom Initiatives here at the Washington Post. I'm delighted to be joined today by two guests who will talk about global trust in news.

My first guest today is Bill Gross, CEO and founder of ProRata.ai, here to talk about AI, media, and public trust. Bill, welcome to Washington Post Live. Thank you for having me.

Bill, I wanted to start. News organizations around the world are increasingly using, experimenting with cutting-edge AI technology. You and your team have engaged with several big media brands who are employing what you call ethical AI. Can you tell us a little bit about what ProRata does and how it's different than other AI platforms out there?

Thank you. Well, right now, generative AI is powerful and unstoppable, but it's scraping up lots of content from all over the web and from everywhere they can find words, and then generating answers using statistics and probabilities of those words.

In many cases, that can be wildly useful, but in many cases, it can be wildly incorrect. The term hallucinations comes into play when AI makes up something which sounds like correct English but is just completely farcical, with no basis in fact.

What we believe AI should be used for is the power of answering people's questions. That's incredible, but it should be sourced. You should see where the answers are coming from and the answer should be crafted from the actual human input that was put into creating the articles. And therefore we want to partner with people where they give us explicit permission to use their words, use that in answers and then reference it. And we think that's a game changer.

You talk about the idea of ethical AI. That's something that we're discussing a lot in the media, but in other realms as well. As applied to media in particular, what does that look like? Give us a sense of what that means. Well, if you look at all other forms of digitization of media today, they all share revenue with the creators. You could take an example of Spotify or of YouTube or of Apple News or the app stores.

All of them don't just steal content and then sell it to users and compensate the creator with zero. And people have come to understand that a revenue share is fair. It's sustainable for the creators as well. Well, we think that should be the exact same thing for generative AI. If generative AI is going to consume content from creators, they should cut those content creators in. Now, the reason that they're saying they don't

is twofold. First, they're saying, "Well, we don't think we can make our business model work if we have to pay." Well, that's not a valid reason. You can't just steal muffins from the bakery across the street for your croissant sales, for your muffin cart, just because that's what makes your business model work. So that's one absurd reason. The other reason is more detailed. It's challenging to do. Because there's no simple stream to count or video view to observe,

it's more complex to figure out what content actually was used in an answer. But that's what we've developed. We've developed ProRata technology that can look at the output of generative AI, dissect it, and figure out where the component parts came from and in what percentage, and that then enables revenue share based on that percentage. And we feel that fully 50% of the revenue should be shared with the creators.
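The split Bill describes can be sketched in a few lines. This is a toy illustration only, not ProRata's actual technology: it assumes the hard part — attributing fractions of an answer to each source — has already been done, and simply divides the 50% creator pool pro rata. The outlet names and numbers are hypothetical.

```python
def share_revenue(answer_revenue, source_weights, creator_share=0.5):
    """Split the creator pool of one answer's revenue pro rata.

    answer_revenue: total revenue attributed to the answer (e.g. dollars)
    source_weights: hypothetical attribution, {source: fraction of answer}
    creator_share: portion reserved for creators (50% in the interview)
    """
    pool = answer_revenue * creator_share
    total = sum(source_weights.values())
    # Normalize so payouts always sum to the creator pool, even if the
    # attribution fractions don't add up to exactly 1.
    return {src: pool * w / total for src, w in source_weights.items()}

# A hypothetical answer drawing 60% on one outlet and 40% on another:
payouts = share_revenue(1.00, {"Outlet A": 0.6, "Outlet B": 0.4})
# Outlet A gets $0.30, Outlet B gets $0.20; $0.50 stays with the platform.
```

A monthly royalty statement like the one described later in the conversation would then just aggregate these per-answer payouts for each creator.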

And, you know, as someone who is concerned about the future of the news media, as someone who started my career as a person whose content appeared on a printed page in paper, watching the transformation of our industry is both exciting but also a little bit disconcerting. Do you think that there's a significant or viable future for journalists and news organizations to benefit monetarily from increased AI use in the way that you're talking about?

Of course, I feel the answer is yes. I feel the answer is no, if journalists aren't cut in on the revenue share. But I feel the answer is a thriving ecosystem if journalists are. Take an example of your reporting from the Middle East.

If you write something that is valuable insight for the world that people want to hear and people search for, and then your content is used in that answer, you should absolutely be cut in on the revenue that is made based on that. Now, in existing search, that is not the case. In existing search, say Google or other search engines, someone does a search for something and then they get a list of links where they can go look at articles and come up with the answer themselves.

But in this new Search 3.0, I call it Search 3.0 because Search 1.0 was Yahoo, AltaVista, Excite, Lycos, and that was all sponsored by banner advertising. Search 2.0 is Google and everything we've seen for the last 25 years, and that has been sponsored by pay-per-click advertising.

But Search 3.0 is no longer search. It's just get answers. You type in your question to AI. AI understands what you're asking and then searches all the content for the answers and just delivers you an answer. That is incredible. That is so useful to people. People love that.

But those answers came from somewhere. Those answers came from your hard work. You were out there in the field covering something and therefore this new search needs to cut in creators just like, as I said, other forms of digital media. Like Spotify could not just take all the songs from artists and not cut them in when people listen to them. And I think that's the same thing for generative AI. And, you know,

For journalists and media professionals who might be skittish about opting into this sort of information sharing, what do you see as other ways that journalists can potentially benefit from their work being used to train AI models?

Well, there's two places where people who are creators can participate in generative AI. One is licensing their works on a one-time basis for training. And there are many companies that are doing that. Companies that are taking people's bodies of work, whether they're an artist or an animator or a writer, filmmaker, taking their work and licensing it to AI companies to use it in the training set so

that they can understand the language or understand the images, understand the animations, understand the architecture, whatever the skill is of the creator. And then there's a second part that we're focused on, which is pay-per-use on the output. So there's the input to AI systems. That's one place where creators can monetize their craft.

And there's the output on pay-per-use, where every time your information, your creativity is used in the output of an answer, then you get a revenue share. We're focused on the output. The reason why we're so focused on that is I think that's the main sustainable way that creators can make a living, just like you make a living when your work is published in print or online.

In generative AI, every time someone asks a question, every time your content is used, you should get a check. You should get a royalty check. And that's how our system works. All of our creators, each month, they get a statement of what questions were asked.

what questions their content was used in the answer, what percentage, and then a royalty check. And that's the way we think it has to be, again, just like other media types. YouTube creators share in their revenue. Spotify creators share in the revenue. Apple News is a system where all the content creators share in the revenue. We just think this should be the exact same way. Yesterday, you announced the launch of GIST.ai. Tell us what GIST does and how it differs from other models.

GIST.ai is a search engine, an AI-based search engine like the others, where you type in a question in full natural language English, a large language model understands your question, and then generates an answer. The difference is the answer is generated from sources, 400 sources that have given us explicit permission to use their content, and we share revenue with them on a 50/50 basis. So we form an answer,

We show you where the answer came from. And this is really valuable, too, because knowing that your answer came from Reddit or Twitter or from you or from Washington Post or from New York Times or Financial Times or knowing where the content came from is very valuable for you making your judgment on the credibility. But we combine an answer from multiple sources, give you that answer so you don't have to browse through links to go search for it.

And then we clearly enunciate where it came from. And then of course, we have the revenue share with all the partners. So it's a unique search engine in that all of the revenue, whether it's subscription revenue, advertising revenue, microtransaction revenue, whatever the revenue, 50% goes out to the creators. And we think that is a model that we want to show that works and is thriving for creators and for users. Users get better answers.

Creators make money and we deliver a great system that has all that auditing trail of the sources. We think that's an example to show to the whole industry. Much of the skepticism around AI and news is about perceived risk of not having enough human oversight. If you can, take us behind the scenes a little bit. How do the conversations within your own teams evolve as you looked at the risk around the loss of oversight

when it comes to the use of AI. We feel that if a user can see what the source of their content was,

and that all the claims and the answer can be credited to someone, that really will make sure that there was human oversight in the crafting of the original content. I mean, there was human oversight in the crafting of the content. Obviously, when you write an article, your name is on it. And by showing the actual source and the name associated with it, we really make sure that every statement that is made in an answer has some support.

We can't verify the truth of it, but we know that it has support from an organization that has vetted this content. And by only using content that has been explicitly given permission, we make sure that those organizations really have the source shown. And on that, how are you safeguarding and how will you safeguard against the potential growth of mis- and disinformation in this space?

The main way that we feel this will be more quality informed is that every single piece of content, every claim is verified against the source. And every one of those sources, we show you the name and logo of the brand. And that way the user can make their decision. Do I trust this brand? Do I see where this came from? And do I trust it? There was an example that happened recently on Google. I don't know if you heard about this one where I think when Google first put their AI system up, someone asked,

How do you stop cheese from sliding off pizza? And the answer that came back was you should use glue. Elmer's glue will hold the cheese on the pizza. Well, the reason that that answer came back was not because the AI system had a flaw and made it up. It was because someone said that in a tongue-in-cheek fashion on Reddit. But because that wasn't sourced and you didn't know it came as a joke, you couldn't tell if that was valid information or not.

But if you source all of the content in your answer and you can see, oh, this came from a jokester on Reddit, oh, this came from the Washington Post, then the user can make their judgment. And that's how we feel we can really manage the information by revealing exactly its integrity and where it came from. I'm going to use this opportunity to make a gentle reminder to people not to eat glue. There's ample research out there to suggest that

a lot of audiences are losing their trust in traditional news media. And I wonder, why do you think audiences are skeptical about traditional news outlets beginning to deploy the use of AI? I think that when people

go into micro-targeted niches of their own interests. They find an echo chamber that resonates with them. I really feel we can combat that by showing people combined answers with multiple viewpoints, showing where they came from and giving people that information so they can make better decisions. That's my hope. I really want to see humanity move in that direction. But most of all,

By compensating creators, we have more voices. We have more voices that can be thriving and alive to contribute to the conversation. And if those voices are combined in a way that people can look at all of them, I think that will lead to more fair mindedness in the long run.

This is fascinating. We're running out of time, but before we do, I just want to take a step back and look at your kind of broader career trajectory. You are a serial entrepreneur. You seem to have been on a mission for improving the quality of information that is searchable. Why has this been such an important mission to you personally?

I feel this is so important because I think this actually impacts the future of humanity, the future of knowledge, wisdom, and democracy. If creators don't have the motivation and the monetization to use their brainpower to enhance the future of society,

then it will drift away. Then AI will just be feeding on itself. We'll have synthetic data, which is feeding on synthetic data, and people won't advance with new thoughts, new wisdom, and new insights. I think AI is great for compiling insights, but not great for creating them. So I feel a system that incentivizes humanity to continue to create great insights is

crucial. So this is a mission for me. This is really, really important, I think, for the advancement of society. And therefore, I've committed fully to making ProRata a success. And my real hope is that other people emulate this model and this model expands so that this new media... Think about, throughout the course of history, every time there was a new technology shift, whether it was the printing press or television or radio,

Creators had to adapt. In this new media of AI, we have a chance to craft this correctly at the beginning stage. We're early in this revolution. ChatGPT is two years old just this month. Like right now, we're at the two-year anniversary. So this is a chance to craft this new revolution properly so everybody has a stake in the game.

Bill, this has been, for me, exciting, insightful, and hopeful. I really appreciate you taking the time to talk with us, to join us this afternoon. Thank you so much. Don't go anywhere. I'll be back shortly with my next guest, Faye D'Souza, in just a few minutes. Please stay with us. The following segment was produced and paid for by a Washington Post Live event sponsor.

The Washington Post newsroom was not involved in the production of this content. Hi, my name is Lana Wong, and I'm a founding member of the Diverse Women Moderators Bureau, here to moderate this panel. I'm excited to speak with Paul Brody, EY's global blockchain leader, to learn how blockchain can help uncover fake news. In today's whirlwind of misinformation and disinformation, we've all seen how quickly fake news can go viral on social media.

The easy production of counterfeit content poses significant challenges for media organizations and has unfortunately eroded public trust in the news. How can media outlets find ways to verify authentic content and to restore consumer confidence and trust? Thankfully, Paul has some answers for us. So welcome, Paul.

Hey, thanks for joining me. Thanks for having me, I should say. Absolutely. All right, so let's dive in. So blockchain has historically been used to prevent fraud in financial transactions, but it can also be used to identify fake news and misinformation. Can you tell us how that process works?

Yeah, absolutely. So there's two things that have really attracted people to blockchain in the technology world. One is very related to things like financial services. There's a reconciliation inside blockchains, which makes it impossible for me to, say, give you an asset without taking it out of my account. And that's really valuable if you care about allocating assets.

There's a second feature, which is that blockchains create tamper-proof digital records. And the way they do that is by copying their data out to thousands of locations. So sure, you could alter your own records, but it's very hard to go and alter everybody else's. And that makes it very difficult to actively deceive people about the origin of data or information. And it's this second property that

is so exciting in the world of preventing counterfeiting or preventing false information from being created and spread as if it was real. That's great because, you know, according to a number of recent studies, people are really losing trust in journalism and the media. So how can blockchain technology help restore this trust and accountability in journalism?

So I think there's some cool things that we can do with blockchain. I always get a little nervous when somebody says, oh, XYZ is broken and blockchain can fix it. I think we can't fix all of it, but we can make a useful contribution. And the central part of this is this ability to create a digital fingerprint from a particular document and then to be able to compare it

to the original source, or if you've got a document, you can check to see if it was created by a particular entity. And I'll give you a really good example. Digital editing tools today make it very easy for me to whip up this like really cool, realistic headline from the Washington Post that says that I landed on the moon this week.

Actually, I was at home for Thanksgiving, but there's no easy way to tell. It looks so realistic. Using blockchain, you can basically take the digital fingerprint of the fake article, and then you can go and look up on chain and see if there's an original article that has a similar digital

fingerprint and you won't find one. And then you'll know, for example, if the Washington Post does this for all of their articles, that this as realistic as it looks can't actually be real. And that's the value proposition and the role that we see this taking shape with companies around the world.
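The check Paul describes boils down to hashing an article's text and looking that hash up against what the publisher recorded. Here is a minimal sketch, with a plain Python set standing in for the on-chain registry (the real system stores and queries fingerprints on a blockchain, and the article texts are invented):

```python
import hashlib

def fingerprint(article_text: str) -> str:
    """A digital fingerprint: a cryptographic hash of the article text."""
    return hashlib.sha256(article_text.encode("utf-8")).hexdigest()

# The publisher registers a fingerprint for every article it publishes.
# A set stands in here for the blockchain record.
registry = {fingerprint("Astronauts return safely from the ISS.")}

def is_verifiable(article_text: str) -> bool:
    """True only if this exact text was registered by the publisher."""
    return fingerprint(article_text) in registry

print(is_verifiable("Astronauts return safely from the ISS."))    # True
print(is_verifiable("Paul Brody landed on the moon this week."))  # False
```

Even a one-character edit produces a completely different hash, so a doctored copy of a real article fails the lookup just like a wholly fabricated one.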

Great. So can you tell us about your collaboration with the Italian media agency ANSA? What prompted their initiative to combat fake news and what outcomes were achieved with EY's blockchain solution, EY OpsChain Notarization?

Yeah, this is a really great example of exactly this kind of fingerprint use case. When we started with ANSA during the pandemic, they are the biggest news agency in Italy. And so people were taking articles that looked like they came from ANSA and either modifying the content or just making up entirely new stuff and passing it off as real news when it wasn't. And we

responded with a digital fingerprinting solution. On each page that's produced by ANSA today, you can see this little thing called ANSA Check. And it will take you to the blockchain. It will show you the digital fingerprint from the article. And you can either use their data source or you can make your own, and you can compare it and be sure that what you're seeing on the blockchain is exactly the correct digital fingerprint of the article that you're reading, or not.

Great. Okay. Well, then how do you anticipate EY OpsChain Notarization developing in the coming years as misinformation continues to be a concern?

So I think it's going to go in a couple of directions. One of them is that we hope to see more and more news agencies and publishers use this technology, right? It's so easy to create a realistic fake, and this is a simple way to make sure that it's an original. It doesn't get you around the fact that you do need to exercise judgment about what news organizations do and which ones practice old-fashioned things like fact-checking.

The second thing that we think is going to be really important is in managing a world where a lot of the content online is now AI generated. And so it looks realistic, but it may not have actually been produced by a meaningful expert. It just kind of AI tools generate stuff that looks realistic, but really isn't. And I think people are going to want to be able to know like, hey, did this assessment of the market or the opportunity or the project

come from an actual recognized human expert? Or did somebody just like auto-generate this to get some clickbait advertising? And so the ability to differentiate between human-generated content and machine-generated content is going to become very, very important, I think, for some period of time.

Great. Well, thank you, Paul, for this important conversation, because now more than ever, we need to find ways to verify and trust the news that we're consuming. So on that note, thank you so much for your expertise. And if you all out there have ideas or thoughts to add to this conversation, please do so online using the hashtag Post Live. My name is Lana Wong, and now I hand it back to our colleagues at The Washington Post.

And now, back to Washington Post Live. Welcome back. For those of you just joining us, I am Jason Rezaian, Director of Press Freedom Initiatives here at the Washington Post. I'm delighted to be joined now by Faye D'Souza to continue the conversation about trust in media. Faye, welcome to Washington Post Live. Thank you for having me, Jason. It's a pleasure to be here.

My last conversation, with Bill Gross, took stock of the rise of AI and how media companies can harness this technology for their benefit. I want to start by asking you, have you seen any AI models that you think provide a solid blueprint for how media companies should be integrating this kind of technology into their practices?

Well, for now, I think most newsrooms are using AI to speed up the routine tasks of editing: copy checking, generating headlines, generating thumbnails, things like that. It's replacing a lot of human intervention on our desks, on our assignment desks, among copy editors.

But I think that the big challenge that AI brings us will be misinformation that's generated using AI. How far away are we from a scenario where I could receive a page on the Internet that looks exactly like the Washington Post, where the headlines and the copy sound, read exactly like the Washington Post, but it's entirely fake,

or there's a video that looks and sounds like you, Jason, which is entirely fake, generated using AI.

And my research tells me that detection is not receiving the kind of investment it needs. And so it's not, you know, it's not growing and learning as fast as generative AI is learning how to create misinformation or bad information on the Internet. And I think for all of us who function as journalists and who are romantically in love with the truth, this is a huge challenge. And this is something that should keep us up at night.

I can tell you that it's deeply concerning to me and all of my colleagues here at The Post. You started Beatroot Media with the intent of connecting younger audiences with unbiased and non-glamorized news. Give us the backstory of Beatroot Media and how it came about.

Anyone who's tracking Indian news media or the mainstream corporate news media will tell you that over the last

decade, maybe seven years or so, there's been a systematic dismantling of the ability to ask questions, the ability to cover the news as it should be. Mainstream news, especially on television, is largely being run by government and for government. And as a result, a lot of independent voices are now using social media to get in touch with the audience because they were all

removed from their jobs, me included. The process with Beatroot News was to simply sit down and say, what do we need in India right now? There is a total lack of credibility with mainstream news because they're entirely partisan. Their main advertisers

are, you know, our political parties and governments, and so there is a lack of credibility there as well. So in India, you know, there's no shortage of information; there's a shortage of credibility. So the idea with Beatroot News was to be able to use social media, to use our app, to use YouTube, to speak directly to the audience and provide various levels of fact-checking and,

like you said, a de-glamorization of news where we're not putting out news that's algorithm driven or that's click-baity in the way it's written in terms of the headlines, but actually simply just tells you what happened.

in the most matter-of-fact way possible, and leave it to the audience to decide how they feel about it. Unlike what's happening on television, where they're constantly being told, you should be angry about this, this should upset you, you should be afraid, but actually allow the audience themselves to decide whether something is good or bad, and not insult their intelligence by attempting to lie to them, or manipulate the news, or overstate anything that has happened.

So that was the route that we started down with Beatroot News. And simply on Instagram, for example, we just write out the news on a daily basis, and it's grown to currently reach 100 million people every month who come in just to read the news. 50% of those people are women, at least. On a really good week, where there's an election or there's a big news event, it goes up to 250 million people.

Several of those are members of the Indian diaspora who live overseas, in New York and in Washington and in LA and across the UK and the UAE. So young Indians are connecting to the way we do the news. This idea of speaking directly and honestly

to audiences is why many of us got into news in the first place. And I don't know what happened along the way that as an industry we seem to have lost sight of that. But what have you learned personally from starting this organization that you think other traditional news organizations should know about engaging directly, especially with younger audiences?

Well, I think that the younger audience appreciates authenticity. They don't care so much for high production value and the great studios and things like that. They actually want information that's as honest as possible with as few layers of production and manipulation on it. The rawer, the better.

The younger audience connects directly to individuals more than brands, I've found, especially on social media, because the relationship is personal. I think that one of the things that I found very interesting is that on Indian news,

No one apologizes anymore for getting something wrong, especially on television, especially when the wrong was, you know, not exactly an accident. And when we started doing it on social media, and obviously news is written by human beings,

so sometimes we might get something wrong, I would write out an apology personally with my name on it, saying, you know, we made this mistake, but we're going to make sure it doesn't happen again. The audience really connected to that level of honesty and that vulnerability, and it builds a relationship of credibility, because they feel like they know you and they feel like they understand your process. Transparency, I think, is what the younger audience wants more and more. Treat me with respect.

"Tell me what your process is. I'll understand if you've made a mistake." But I believe that when the Indian mainstream media exaggerates, when it misstates what has happened, when it speaks in a largely partisan way, the younger audience feels like their intelligence is being offended. And that's what makes them turn it off. We spend so much time holding powerful people to account. It's a good reminder that we need to be accountable as well.

Digital sources have become an outsized part of what we call our news diet for many, if not most, people around the world, especially among younger adults. According to Pew Research, 54% of U.S. adults say that at least sometimes they get news from social media, and 86% say that at least sometimes they get news from a smartphone, a computer, or a tablet.

What are the most misunderstood or underappreciated realities about how people actually consume their news today? I think for me, the big realization is the shortening attention span, where people tend to open their phones, but they have five minutes to spare, maybe even less. A lot of times, they're just reading the headline and the blurb under it and moving on, and

that's where I believe there's a big danger of click-baity headlines being completely misleading or even exaggerated headlines being misleading. I find that

People just, you know, they want two minutes of, okay, quickly tell me what went on so I can move on with my life. I've not really spent a lot of time on it unless over the weekend I've set aside a podcast that I want to listen to. So I think that as people who work in news, we're going to have to be more and more aware of the fact that while we write, you know, copy or while we're putting out our headlines or we're putting out our videos, we need to do it in a way that

appreciates the fact that people only have maybe a minute or two to consume that information. It's a scary new reality that attention span is the commodity in short supply. You said earlier that young people in particular are more trusting of people delivering them the news than brands.

Why did legacy news organizations lose that trust? And what can we do to bring it back? I think there is a sense that because of the lack of transparency of where the money is coming from and how deals are cut between advertisers and legacy media or corporate media, in India particularly, all large television studios that run news are owned by corporates

where there's a conflict of interest because these television newsrooms are also covering the corporates. They're not then honest about the kind of information they're putting out, the tilt that they have. These corporates are very close to government. And so there's a circle there of mistrust that is being built where everyone's serving everybody else except the final reader.

The interest of the citizen and the interest of the reader is not important. Their attention span is actually just being commoditized and sold to the highest bidder. And I think that that is where, at least in India, that is where legacy media has lost all trust. There's a lack of transparency on ownership. There's a lack of transparency on where the money is coming from, who's paying really for this news to be put out, and where the conflicts lie. And

I think that intuitively the audience can tell something's not right here. On this issue of declining trust in media, I want to read an audience question that we received from Barbie in Arizona. She asks, considering how much people are looking elsewhere for news, which may or may not be factual, how do we compete with this, both domestically and globally? More importantly, how do we gain back the public's trust?

Well, I believe personally from my learning, and Beatroot is now five years old, I think that the public's trust comes, like I said, from transparency and authenticity, and mostly by doing the job like we were taught to do, you know, from day one. Every day that we put out the news in the most honest, credible way possible,

we build that relationship further and further until we've built, you know, a great wall. But that's brick by brick, story by story. And it's, you know,

We built that relationship with our audience who said, okay, I know that if I'm reading something that this organization or this individual has put out, that it's actually going to be transparent. It's going to be authentic. It's going to be double fact-checked. They're going to be responsible with the way they choose their words, the way they frame their sentences, because they respect us as audience members. And I think that it's a long road, but it's the only road ahead of us.

Beyond the traditional means of fact checking, how do we protect against misinformation? What kind of safeguards can we put in place to make sure that they don't become a part of a cycle that generates false information perpetually? Well, I think that globally right now, Jason, there needs to be a movement to fund

AI-driven fact-checking, because the answer to AI has to come from AI. Human fact-checking can never be fast enough. Is there a way that we can teach AI to crawl the internet and figure out what is authentic and what is not? Is there a way that we can teach AI to fact-check in the manner that journalists do? It's a big question. It's a global question, but there needs to be a sort of

commonplace understanding for all of us who are affected by it. And by all of us, I mean large social media platforms, I mean governments, I mean corporates, people like you and me. We have to come together to find a solution for this. You're no stranger to political pressure. What have been the biggest surprises for you personally as you've pivoted your career from being a broadcast television journalist to Beatroot, as it relates to political pressure?

Well, I think that mostly the political lens remains largely on television because they're using television really to put out messaging in sort of a mass media way.

What I found very interesting as a movement in India is that politicians, which I understand is also happening in the US, are now using casual podcasters who are not journalists to put out their messaging, because there needn't be, you know, the burden of ethics when you're talking to a podcaster or a content creator who's not a journalist bound to cross-question and bound to ask you

tough questions. So there's a lot of communication that's happening

that's bypassing journalists altogether in an attempt to put out, you know, pre-designed, friendly interviews. And I don't know whether the audience is able to tell what they're watching, because we've had instances in India where podcasters have been engaged by political parties at a price to put out friendly interviews asking only pre-approved questions.

So if that largely bypasses journalism and bypasses actual journalists asking actual questions, I hope the audience can tell the difference between the two.

As part of my press freedom work here at The Washington Post, I've been tracking the declining freedom of expression space in countries around the world. India is obviously a great source of concern. With the largest number of daily newspaper readers anywhere in the world, protecting that ecosystem is vitally important. And I'd like to say, although I am not a great fan of the actual beetroot as something to eat, the concept of what you're doing is thrilling for me. And if there are ways that we can collaborate, I hope that we can do that across borders.

Well, that's lovely. And I'd love to talk more about it. But I must tell you while I have your ear, the idea behind naming it Beatroot is because it's good for you. Very good for you.

Just like the news. It's not exciting. It's not tasty. It's not meant to be fast food. But in order to be a functioning member of society, we have to have our vegetables. And I believe that that's what the news is, that it might be triggering sometimes and it might not be the most exciting way to spend a few minutes of your day. But if we want to be functioning members of a good democracy, then we all have to read the news.

Faye, we're unfortunately just about out of time. I could talk to you all day. It's been wonderful connecting with you, and I hope we have more opportunities to communicate about this and other issues pertaining to news in the months ahead. Thank you so much, Jason. It's been my pleasure. Thank you. Thanks for listening. For more information on our upcoming programs, go to WashingtonPostLive.com.