As companies create AI-powered solutions, how can they ensure they're effective and trustworthy? Join IBM at the break to hear how companies can build trust in their AI with Ritika Gunnar, IBM's General Manager for Data and AI.
Welcome to Tech News Briefing. It's Monday, June 23rd. I'm Victoria Craig for The Wall Street Journal. Can you really teach an old dog new tricks? Two internet entrepreneurs are certainly going to try as they seek to breathe new life into an old staple of a bygone digital age. But how can they navigate a new online world dominated by artificial intelligence and bot armies to give users an authentic, worthwhile experience?
In the tech industry, companies are constantly innovating and striving for the next best thing. But sometimes there's value in looking backwards. Internet pioneer Kevin Rose, co-founder of the now-defunct internet aggregator Digg, joined Alexis Ohanian, co-founder of Reddit, on stage at the WSJ's Future of Everything event in New York. Our deputy tech and media editor Wilson Rothman sat down with the pair to talk about their new company, a reboot of Digg, and why they've decided to
go back to the future. So you guys are creating a new product, essentially, a new platform, but it's called Digg. You've bought the rights back and you're starting it up, and you are taking a lot of the lessons you learned in creating these two communities and watching all these other communities bloom. You want to make sure, for starters, that you don't get bot armies just walking through the front door, because that's one of the things that's gone wrong. So let's just start with the front door. Yeah.
Knock, knock. How do you let them in? The one thing I will say about why Reddit was so awesome is that it was the autocomplete on Google. You would search, what is the best speaker, or which headphones should I wear? And literally, Google would say, do you want to add Reddit to it? And it was because humans coming together to discuss something they're very passionate about can produce amazing results, right? Better than any one independent reviewer at The Verge or anywhere else could do. You get the masses coming together. So there is power in that.
But if this is overrun by bots and AI, and we're seeing, I don't know what you think the numbers are, but they're up there. Look, it's big, right? You saw that a university in Zurich published a study where they literally manipulated the Change My View community on Reddit using AI. There are reports that 10, 20, 30% of content, and it's the em dash, which apparently is the- And it was the most effective- Yes. The most effective arguments in the Change My View community came from bots. Yeah. And I've long subscribed to the dead internet theory, which 10 years ago was a conspiracy theory, but in the last few years, since we've blown past the Turing test, has become a very real thing. I think the average person has no idea just how much of the content they consume on social media, if it's not from an outright bot, is from a human using AI in the loop to generate that content at scale to manipulate and evade. So we think it's very important. You want real people...
who are ideally not even using bots to communicate their ideas coming in the door. Yeah, so basically, I don't want to get into the technical details of how we prevent that stuff, but we want it to be as low-friction as possible. And then at the high end of it, you have Sam Altman going out and scanning your retina, right? That's pretty hardcore, fully verified human. That's Sam Altman's World ID. World ID, yeah. So there's a gradient of trust that has to take place here.
So that's a gradient of trust on whether it's a human or not on the other side of the screen. And then there's also a gradient of trust that I believe you can get to a lot quicker, which is around, do I trust what this person is saying about said product? So,
We have a very fancy technology out there that people can look up. It's called ZK proofs, zero-knowledge proofs, and it is a way to use an algorithm to prove that you actually own something on another platform. So right now, and I have no affiliation with this brand, but I'm wearing a WHOOP. It's a fitness tracker. I've had it for three years. I can go in and create a proof within two clicks, and then in the comments, I can say, I love my WHOOP. I think it's amazing. And the new version is fantastic. And you will see, almost like you do on Amazon where you see verified purchase, you will see that Kevin has decided to share with the world in an anonymous way, meaning WHOOP can't see which user account it's linked to. That's the cool thing about these proofs. But it's a provable way of saying I can actually speak with authority as a long-term WHOOP user, to guarantee that I've actually owned and been active on said device. I think more of that is coming to the internet, and we're going to need it.
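For readers curious about the mechanics behind what Rose describes, here is a minimal Python sketch of the core idea: a Schnorr-style zero-knowledge proof, made non-interactive with the Fiat-Shamir heuristic, that lets someone prove they hold a secret credential without revealing it. The keygen/prove/verify structure, the toy group parameters, and the notion of a platform-attested key are illustrative assumptions, not how Digg or WHOOP actually implement this; hiding which specific account a credential belongs to would additionally require something like a set-membership or SNARK-style proof in practice.

```python
# Toy sketch of a zero-knowledge "proof of ownership": prove knowledge of a
# secret key x behind a public value y = g^x mod p, without revealing x.
# Schnorr proof made non-interactive via Fiat-Shamir. Parameters are tiny and
# insecure; the attestation flow around it is hypothetical.
import hashlib
import secrets

# Toy group: p prime, q = (p - 1) / 2 prime, g generates the order-q subgroup.
p, q, g = 23, 11, 2

def keygen():
    """Hypothetical credential: the user keeps x, the platform could attest to y."""
    x = secrets.randbelow(q - 1) + 1      # secret "ownership" key
    y = pow(g, x, p)                      # public value tied to the credential
    return x, y

def prove(x, message):
    """Prove knowledge of x for y = g^x mod p, bound to a message (e.g. a comment)."""
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                                              # commitment
    c = int(hashlib.sha256(f"{t}|{message}".encode()).hexdigest(), 16) % q
    s = (r + c * x) % q                                           # response; x stays hidden
    return t, s

def verify(y, message, proof):
    """Anyone (e.g. the site showing the badge) can check the proof."""
    t, s = proof
    c = int(hashlib.sha256(f"{t}|{message}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
claim = "Verified long-term WHOOP owner"
print(verify(y, claim, prove(x, claim)))   # True
```

Binding the challenge to the message is what lets the proof ride along with a specific comment, the way a verified-purchase badge attaches to a specific review.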
Because the world is going to be flooded with bots, with AI agents. They're infinitely patient. They will hang out in your DMs and befriend you for six months until there's an ask that you don't even think is an ask because you think you just have a new friend. Coming up, we'll be back with Wilson, Kevin and Alexis who get down to brass tacks about how this new venture will actually work. That's after the break.
Enterprise AI is an unstructured data problem at scale. How does generative AI address it? Ritika Gunnar, General Manager for Data and AI at IBM, explains. Think of this as the emails, PDFs, and PowerPoint decks that sit in an organization. Generative AI has allowed us to unlock the opportunity to take the 90% of data that is buried in unstructured formats, which unlocks a new level of driving data and insights from that data into your workflows and into your applications, which is essential for organizations as we go forward.
Web 2.0 meets Digg 2.0. But what does that mean? How will it work? And is it a viable business model? WSJ Deputy Tech and Media Editor Wilson Rothman poses those questions to Kevin Rose and Alexis Ohanian. So, business model. All right. A lot of the problems that we've seen out there have a lot to do with the fact that the major platforms that people use are ad-driven. There's a personal-data loop. There's a lot of getting people psyched about this so you can pour more ads into their feeds. That whole compulsion loop has obviously sent record profits to these companies with record valuations. It seems like it's a great business. But I know you guys are going to say that's not what you're going to do. So what are you going to do?
There are a couple of things here. I'll try to be as kind as possible, but I do believe the days of unpaid moderation, with the masses doing all the heavy lifting to create massive multi-million-person communities, have to go away. These people are pouring their life and soul into these communities, and for them not to be compensated in some way is ridiculous to me.
And so we have to figure out a way to bring them along for the ride. And, you know, Substack's doing that quite well. Patreon's doing that quite well. And they should have more ownership over their communities. It is crazy to me that the creator of WallStreetBets lost the trademark case against Reddit because Reddit went and trademarked the term that somebody in the community created. In my head, we should be going out and helping them trademark these on their behalf. They added tremendous value to the network in terms of millions of users. That's part of it. The other part of it is on the ad side. I have a lot of big podcaster friends that I mingle with. And the one thing the podcasts get right that is so important
is when you are a Tim Ferriss or Rogan or any of the bigs that are out there, you get to say no to anyone that doesn't meet your values. You can say, I don't want to do that brand. It is so weird to me that an unnamed advertiser can randomly pop up in your community that you didn't approve to get in there.
So if we could put more power back into the community, now you have community buy-in at the ad level. If there's a type of ad unit, the community says, I'm in the gaming community. Yes, that is the best mouse in the world. Let that ad come in here. And then figure out, if you are partaking in that part of the system, how we can build the rails behind the scenes to make sure that we spread a little bit more of the compensation universally amongst the ecosystem, so that they can do great things with their community, and turn these moderators from unpaid labor into more of a director of vibes, and just make them real champions for their community. And Alexis is going to pay them a little bit. I mean, look, I'm a happy investor. I think, look, a sustainable business model, for sure, that is crucial. And I will underscore those things that Kevin said. At the end of the day...
you have so much value being created that's not at all being captured by the creators. And if you just look across every other form of online content creation, that is against the norm. It is untenable.
And if we can nail both improving the user experience, just so day to day you actually enjoy the stuff that you're doing, and finding a way to align your interests with the community's and Digg's, it's a huge win. And advertising was like the thing to do in the early aughts. If you could build enough of an audience, sure, throw an ad on it. But we can look at it from first principles.
I want to believe the business model that will make Digg successful is one that aligns all those stakeholders. And I think it is very, very possible. That was WSJ Deputy Tech and Media Editor Wilson Rothman in conversation with Kevin Rose and Alexis Ohanian at the WSJ's Future of Everything conference. And that's it for Tech News Briefing. Today's show was produced by Julie Chang with Deputy Editor Chris Zinsli. I'm Victoria Craig for The Wall Street Journal. We'll be back this afternoon with TNB Tech Minute. Thanks for listening.
How can companies build AI they can trust? Here again is Ritika Gunnar, General Manager for Data and AI at IBM. A lot of organizations have thousands of generative AI projects blooming like flowers. Understanding what is being used, and how, is the first step. Then it is about understanding what kind of policy enforcement you want to have, and the right guardrails for privacy.
The third piece is continually modifying and updating so that you have robust guardrails for safety and security. So as organizations have not only the process but also the technology to handle AI governance, we end up seeing a flywheel effect of
more AI that is actually built and infused into applications, which then yields a better, more engaging, innovative set of capabilities within these companies. Visit IBM.com to learn how to define your AI data strategy. Custom content from WSJ is a unit of the Wall Street Journal Advertising Department. The Wall Street Journal News Organization was not involved in the creation of this content.