Should we fear AI, or is it just another tool to make human life easier? Welcome to The Bridge, enlightening conversations on world cultures, life, and everything in between. Hey everyone, this is Jason Smith, host of The Bridge podcast from sunny California. If you like the show, don't forget to subscribe. We love The Bridge. Oh yeah!
Hi, everyone. My name is Jason Smith. I'm originally from sunny California, now living in beautiful Beijing. Today with me is Alex. Hello, everybody. This is Alex, the one that is always happy to be on air with Jason. The one, the only one. The only, the one and only Alex. There's no other Alex that co-hosts this show, so I feel special. Concerns among experts and non-experts over the rapid pace of AI development remain.
According to Zhang Hongjiang, founder of the Beijing Academy of Artificial Intelligence, quote, AI systems should never be able to deceive humans, end quote. Alex, do you think AI has the potential to deceive users?
I don't think deceive is the right word here. Maybe it's not as accurate because I've heard some horror stories lately where AI wasn't trying to deceive, but because of the nature of how it uses this logic, it could misguide users to, you know, push them to really undesirable places and yield undesirable results. And those are very sad stories. But if we're talking about whether AI could kind of
act as humans in disguise, I might say that's a foreseeable future that we have to find ways to prepare ourselves for and find ways to cope with. AI... what was that movie? I, Robot, with Will Smith. In that movie, VIKI, the supercomputer inside the building at the robotics company, decides to deceive people by framing Sonny, another robot, for
murdering a scientist. So in that movie, AI is capable of deception. So I wonder if this gentleman, Zhang Hongjiang, is thinking about this. You know, the possibility that computers could start lying. Not because they were given false information, but because they were trying on their own to give false information. What do you think about that? I mean, I want to think, no matter how much... Also, this is my wishful thinking, at least.
no matter how advanced AI technology becomes, it's still going to follow the set of code that was built into it, that governs how it works. So it's not really AI deceiving people. It's still the people behind AI that are trying to get AI to deceive other people on their behalf.
That's a mouthful, but you know. You know, I'm wondering because there's this gentleman who teaches at UC Berkeley and he teaches about AI. He says that AI is incapable of becoming sentient. And he wrote many books on this topic over the last decades. But you know, what I find interesting as well, I do think it would be very difficult for AI to become conscious and
and therefore able to want to deceive humans, there are new kinds of computers coming out that are based on human neurons. So they use human neurons and similar, you know, grown from stem cells and such, to actually create computers. Now, what...
What I think, and I could be wrong, and I'm not an expert on this topic by far. I'm just some guy. Yeah. But so, you know, I would love to have our fans email us and tell me why what Jason is saying is wrong, by emailing us at welovethebridge@gmail.com. And we will be happy to read your comments on the air. But, you know, if you're using human brain cells to create computers...
then those human brain cells, don't they have billions of years of evolutionary history? So they have fear and love and all this other human baggage that we use to get around the world, and that would potentially give those computers built with that technology...
the ability to have a semblance of human-like sentience. And then in that case, it gets scary, right? Yes, theoretically. But I'd still like to think about it in terms of the energy that that is going to cost. This is what I tell people. It's like, look, of course we have a very complicated nervous system that helps us, that dominates how we act, how we interact, how we receive energy.
And on those days when we would say, okay, this is too much, I can't handle it anymore, and we're on the brink of a breakdown. What sets humans apart, or what sets animals apart from other existence in the world, is that we have a support system, meaning that we are social beings, right? When I say, oh, I can't handle this anymore, then I can get on a phone call with Jason, if he happens to be free at the time. I say, Jason, I can't do this
anymore. And Jason uses parts of his neurons, his, you know, system to give me some comfort or try to help me out. And that'll help me recover, that'll help me grow, that'll help me get beyond this crisis, to avert this crisis.
But that's human interaction. Imagine one day the computer could just go, okay, my system is burning up and I think I'm going to self-destruct. Can the computer just go, I'm going to pick another computer to call and see if the
other computer can just give it support? And then what's the screening process? Which computer is this computer going to reach out to? How is that computer going to respond? Let's say, when it comes to the day that it can perform a function like that, that's an immeasurable amount of calculating that the system needs to do, which is going to require an unimaginable amount of energy. Do we have that much energy? If the answer is no... I'm sure someone could do that kind of calculation, someone that's actually good at math. If they could do this calculation, that could be an answer to whether computers, or AI, could evolve to
be in a place where it really resembles how humans produce and use emotions to interact with each other. I don't know. I just think it's going to be quite impossible, from my limited knowledge. I just want to go back to... I was mentioning the gentleman at Berkeley. I just want to say his name. His name is Professor John Searle. He was the only professor at UC Berkeley who had his own on-campus personal name-plated parking spot when I attended there. And his room, or sorry, his example for AI, for trying to explain why AI could not become conscious, is called the Chinese Room, for those of you who are interested in finding out more about it. I want to turn to some Pew research. This is from April 4th,
Q&A: Why and how we compared the public's views of artificial intelligence with AI experts'. And it goes on to basically argue that experts and regular folks just hold a wide range of opinions. So no one knows the answers to these questions. We're all figuring this out together. But I do want to say that Alex is right. As he said at the beginning of the show, oftentimes we get information that is misleading or not accurate. And why that is, it's not because of the programmers of the AI, it's because of what they call scraping. The AI uses a program which scrapes the internet for all the information out there. So it scoops up, you know,
National Geographic and it also scoops up your, you know, roommate's blog. Exactly. All of this information ends up in its matrix of ideas. So when it's answering you, it can pull really legitimate information from an encyclopedia, and you can also get some teenager's TikTok content. And it mixes that all together, and those are the answers you're getting. Absolutely. So I think, for me, AI is still Wikipedia-like. It's still a place to go to get general information. But then if you want to get accurate information, you need to go to trusted websites,
legitimate authorities. You know, in logic, there's a fallacy called argument from authority. But there's also a legitimate way to find facts by appealing to authority. So I want to give people a real quick, simple piece of logic for everyone to use. It's very easy to use. If you are listening to an expert
And they are talking about the field about which they are an expert, they're probably right. So if you're asking a mathematician a question about mathematics, then that mathematician is probably right. But if you're asking a mathematician a question about gardening, then
their answer may not be accurate. So you need to go to an agronomist or a gardener and ask them the question about gardening. So it's not about whether someone is an expert; it's about whether the question falls within their field of expertise. And that is
That is where we should be getting our trusted information, not from AI, and certainly not from Wikipedia. Definitely. And that's, you know, a really good differentiation that we should make, and more people need to be aware of it. It's just the same as with every single
industrial revolution, information revolution, or media revolution: what comes with them is the need for people to be very much aware and conscious about how to use them. And in terms of AI, we are using AI to get information.
You know, I think that's where AI comes in. It's very useful. It's very helpful under the correct set of ethics that are built into the code of how AI works. But when it comes to seeking guidance on personal decisions, because nowadays there are so many, you know, these dialogue bots, you know, you can have an AI boyfriend, you can have an AI teacher, right? What? Yeah.
Jason, there are so many developers that have developed bots that interact with you on a one-on-one basis. So it's not you asking questions on ChatGPT or DeepSeek, or asking Midjourney to produce an image for you. These are bots that you can have a conversation with. For example, I have a friend who sent me a bot that they created. It's used to help you study Spanish.
At the beginning of the conversation I had with the bot, in Spanish, he asked me what my name was and what I do. And I asked him what his name was. And he told me his name was Manuel, and that he studies at Beijing University. He's 22 years old and he's majoring in architecture. So he's going to be a future architect. And then he's going to have conversations with me.
I'll just be asking him questions based on my knowledge of Spanish, so he can interact with me however it suits me. And there are many, many millions of bots like this out there. But a lot of these bots are not limited to just language learning. They are designed to have day-to-day conversations, to serve almost everyone as companions to human beings. And the problem is, there's no rating system for these bots. For example, certain bots are PG, certain bots are, let's say, R-rated, whatever. So everybody, yeah, everybody has access to these bots.
And this is a very recent case, and I think it made a lot of noise because it was very controversial, where a teenager had been talking to this AI bot.
And the teenager, of course, was going through their adolescence and having thoughts of depression and even suicidal thoughts. And then, when conversing with the AI bot, the teenager goes, this is how I feel. This is what the world feels like to me. Should I, you know... end my life. And then the bot replied and said, if that makes you feel better, I think you should do what makes you feel better. Oh, my. So it's very... I got chills just repeating this. And it's very horrible. And I know that this is not an isolated case. So I guess one thing that should be advertised... I don't think we should, you know, given this week's episode's question, should we fear AI?
It's just no use fearing it. I have an exit plan. We've talked about this. The day when it becomes too crazy, I'm hiding in the countryside, you know. We will do this. I will do this. But as long as we're still functioning members of society, I think, given how fast AI is developing... You just told the AI your hiding spot.
Huh? You just told the AI your hiding spot. Well, I didn't say where I was going to hide. I said I would, you know... But I'm just saying, I think as long as we're still functioning members of society, it's almost everybody's responsibility, especially when it comes to younger people using AI. You know, imagine what we had access to in terms of information technology versus what they have around them. It's definitely
overwhelming, and they definitely need a lot of guidance and help from other people, from adults, to tell them: don't ask AI personal questions. Don't be like, hey, should I cheat on the exam today? Don't ask that. And definitely don't use AI as some place you turn to if you need emotional or mental help. Always go to your
parents or, like Jason was saying, trusted authorities, professionals that are trained to really help you on a personal, pun intended, level with your problems. And I think this message needs to go out more. Everybody should just make sure that you have this in mind. So if someone tells you, oh, my kid's using it: okay, is he or she using AI to get information, or are they using AI to kind of get emotional support? If it's the latter, then be aware. That's what I'm trying to say. Yeah, absolutely. I think we all need to be careful of new technology, not just AI, but all kinds of other new technology as well, for sure. You're listening to The Bridge.
According to the same article by Pew, I just want to go over these exact statistics. AI experts are more likely than the public to say AI will have a positive effect on the U.S., which I thought would be kind of the opposite. So 56 percent of AI experts think AI will have a positive effect on society, whereas only 17 percent of all adults, non-experts on AI, think it's going to have a positive effect.
I'm confused, because I'm not an AI expert. I fall in the regular U.S. adult slot. But I think it will. I think it has. I use AI all the time. But I also don't rely on it for certainty. You know, I just use it as kind of a, hey, I wonder, I have a question, let's ask AI. And then, you know, if I need to quote somebody...
Yeah.
Much more than most people, I think. And when people send me a screenshot of their answer from AI, I always stop talking to that person. I know that that person doesn't know what they're talking about at all. Yeah. If you're answering me with AI online, our conversation is over. Because if I wanted to have a conversation with AI, I could just open DeepSeek
and have my own conversation. If you want to have a conversation with another human online, you know, you need to either be giving your perspective or be giving legitimate sources. Because while I respect AI and its capabilities and its enormous power, it doesn't have the answers we're all looking for. It just has kind of
a direction. It's like a sign that says, go that way. But it isn't the destination. Definitely. It's just like how TikTok shouldn't be everything that you consume. TikTok is not the whole world, and AI doesn't know it all. For today's people, for today's young people especially, those are probably two rules to live by. That'll probably give them a brighter future.
I'm really thinking about media because we're talking about information, right? And where do people get information? That's an interesting question. Most people would disagree with most other people about what is a legitimate source of information. I think most people can agree that most experts, if they're talking about their own special field of expertise, are probably correct.
But then beyond that, no one agrees on anything. And a lot of people doubt experts. I do too. I think a lot of US experts on China have no idea what they're talking about. Probably most of them, if they don't live in China. I would say most US-based experts on China don't know what they're talking about when it comes to China. So where do we get information? There's a new article from Pew Research from April 28th:
Americans largely foresee AI having negative effects on news, journalists. I think this is fascinating, because journalism was already, in the U.S. especially, kind of going into the toilet for a long time. For example, what is it, the Huffington Post. Yeah. For the longest time,
I wasn't on X for a lot of years; I only came onto X a few years ago. But for the longest time I was reading news, traditional media, online at least, and sometimes in print. And I would see, oh, someone on X said this, someone on X said that, in the Huffington Post. And I was like, how is this news? This is just some random cherry-picked opinion that the author happens to agree with. So I thought, you know,
you know, U.S. news was already getting really sour in the age of the internet, and journalists were getting lazy. If your source is you quoting someone on X whose handle is, like, ILikeCars1989, you're not a journalist. That is not journalism. I remember very clearly, back in journalism school, we had these discussions about citizen journalism, which has become something that is very valued in American society over the past maybe 20 years, if not longer, where everyday people have access to journalism.
Because a lot of people sit there and think, okay, as long as I can be at the place where things are happening, that's essentially what a journalist's job is, right? I can just go there and tell people what's happening as well. But the fact is, journalism has never been objective. It is subjective, but in an institutional way: the journalists are writing for the publication, and the publication has guidelines. It has rules.
the purpose it serves. Different publications have different sets of rules and their own principles, and these journalists are writing for those publications. So even if you are a journalist for, say, The New York Times, and you change your job to The Washington Post, then you have to follow... Of course, everybody follows AP style when it comes to writing, but how news is structured, what is being stressed, how your tone should be, and all of that: if you change jobs, if you change publications, you change that too. And that's a professional job. It's not just, okay, I go there, I film, and then I send it.
And because a lot of people really oversimplify what journalism is, the information that they produce online becomes part of the database that AI scrapes from the internet. So exactly like you said, when we ask AI about something, AI is going to look at everything. For example, here's an example. I was traveling with my friend, I think we were in Cuba at the time, and her boyfriend was traveling in Thailand.
And on her phone comes a news flash from this not-so-authoritative news site, but one that's popular here in China as well. And it just said, series of terrorist attacks hit Thailand. So she became immediately worried, and because of the time difference she couldn't get hold of her boyfriend at the moment. And I was like, hold on, hold on, let's not panic, right? Let's see what's really happening. So I, as a human, go on the internet and I search Thailand on Google. And usually, if it's a big incident, if it's a bad disaster, then the first few items that show up are definitely going to be about that, right?
We search Thailand, and nothing like that comes up. It's Thailand tourism, Thailand whatever, all amazing things about the beautiful country. And you know, like when the helicopter crashed into the commercial airliner in Washington, right, remember that? When that happened, if you Googled Washington, that's all you were going to see. But when
I googled Thailand, when my friend was super worried about her boyfriend, it was still, you know, beautiful country, traveling in Thailand, this is the route. And I was like, so what is this, right? But we did get, you know, a Chinese piece of news saying Thailand. And so I searched Thailand terrorist attacks, and it turns out it's still horrible, but it's border clashes that Thailand has been having with its neighboring countries.
And it's in a really remote town, and it's a very local clash. It was not like an actual... how do I put this correctly? It's not like, oh, there was a terrorist attack in Bangkok or Koh Samui, where people are traveling and stuff. But when you just put the headline as, terrorist attacks
in Thailand, technically it's not wrong information. But again, that information gets picked up by AI, and if there's enough of it, AI is going to say it's very dangerous to travel in Thailand, period, because there are terrorist attacks going on. So it's really, at the end of the
day, a lot of information that is being reinterpreted or rephrased by AI has to be re-verified by humans for us to make decisions for ourselves. And I think that's not happening. I publish books. And one of the things I've noticed is that suddenly, in the last year or two, there are a lot of other people
No, no, no... publishing enormous amounts of books. And there are YouTube videos on how to ask AI to write books about, I don't know, weight loss or whatever. They have all these topics, and there are all these people that are just writing hundreds and hundreds of books about these topics,
and then using various different AIs to filter them, to make them sound more human and so forth. They're barely touched by humans. And then they just generate a cover picture using another AI website. So if you buy a book on Amazon right now... if you go to Amazon KDP as a publisher and you look, it asks you, did you use AI to write this book? And, you know, you can still publish it. All you have to do is tick the box: yes. And it also asks, did AI help you write this book?
And then people claim, no AI was used in writing this book. But you can actually use different AIs to filter that out, so that you don't even have to admit that AI wrote it. So all these people who are not publishers, who are just profiteers, are going into the Amazon KDP backend and publishing dozens of books each on cooking and housekeeping and gardening and all these kinds of topics. And they're mass-publishing books on...
And they're using all kinds of algorithmic tricks and advertising campaigns to get people to buy their books. And they're making millions of dollars selling AI-written books to the general population. So if you are a legitimate, real gardener who spent the last 40 years becoming really excellent at
growing specific kinds of flowers, and you want to share your expertise, the reality is that if you try to publish a book right now, you will be so drowned out by these expert booksellers that your book won't make it to real people. Instead, some business that is professionally just smashing books out will be the one whose book people end up buying. So it's not just news. It's all kinds of media publications. And, you know,
That's a very sad reality. It's horrifying. And it's not just that. AI is going to keep scraping the internet, and it's just going to be picking up its own output. Information is going to become this static echo chamber, where real people with real knowledge can't easily contribute to growing human knowledge in a meaningful direction anymore, because people are going to be so reliant on information that already existed. So basically, if you've been contributing to the internet from 1985 to 2025, your voice will be echoing forever into the AI universe. But everyone that's trying to add to that is going to have to fight through the noise to add any new information to human history
and to our way of viewing the world. And I think that's terrifying. Honestly, it is terrifying. I don't like the fact that people are just using AI to publish books. Well, maybe not for tools, you know, books that are for helping people learn how gardening works or how to cook or whatever. But we read because a book was written by a person, and different authors have different styles. They have different rhythms to their writing. That's why we read. I believe that; at least that's how it is for me. I don't know if that becomes something that's
a normal practice. And I also wonder what it does to the books. Like you said, if it's on Amazon, if you tick the box and say this book was written with the help of AI, or it was composed by AI, what does that do? Can you still sell it for money? And if it does sell, who gets the money? Yeah. And you can put a fake name on it. The money goes to whoever owns the KDP account that is publishing it. And they can have multiple handles. So each time they publish a book, they can say, oh, my name is Daniel Yeltsin. And the next time they can be, I'm Alex Shirley, or whatever it is.
You know what would be funny in the future: when you sell a book, the selling point of the book becomes, this book was completely written by hand, it was completely a human creation. Which is going to be so ironic if it really does happen. I hope it doesn't come to that day, Jason. Books, music, movies, everything like that is such a product of human creativity. I would really hate to see that become a byproduct of technological advancement. But also, I'm a purist, so... You're listening to The Bridge.
To go back to the point of the show, you know, the harm that AI can do, deliberate or not. I was driving through... I wasn't driving. My wife was driving through the mountains of Guilin recently. And she put on a song. It's in Chinese. It's beautiful. It's well-written. Different musical instruments. The singer sounds like she has a lovely voice. My wife says, what do you think? I said, oh, that's good. She says...
I made this with AI. Oh, God. Yes. She made an entire album of songs by this girl she invented, about topics that she kind of invented. She just had AI make an album. We listened to the entire album. It was really good. I would have gone to that concert. Yeah.
So we're replacing music, we're replacing books. And this article, April 28th, 2025, from Pew Research, Americans largely foresee AI having negative effects on news and journalism: it's real. It's here now. You don't know, if you're reading something on some website,
whether it was written by AI or by a person at this point. Or was it written by AI and then a person rewrote it? Or was it written by a person and then they had AI rewrite it? Yeah. How much of it is true? I mean, what's the point in paying a dollar a month, or whatever it is, for some newspaper that's just going to be
rehashing AI stories at you. It's horrifying. So I want to talk not just about my opinion. Americans are far more negative than positive about AI's long-term impact on the news people get. This is in that article. Only 2% are very positive, and 8% somewhat positive, versus 24% very negative and 26% somewhat negative. So about half of people are saying, this is bad. So, I mean, maybe AI has already gone too far. And here's another one: a majority of Americans say AI will lead to fewer jobs in journalism in the next 20 years. 59%, about three in five, say
fewer journalists will have jobs in the future, presumably because AI will be doing part or all of the stories sometimes. We actually had this conversation with a couple of journalist friends, especially people that are doing financial news, people that write report analysis as the main part of their function in a news organization. Those are the people that are really
threatened by available AI technology already, because it's such a set format that you follow, and it's really easy to feed in the information that's needed and have AI yield a report that serves the same function and comes out with similar quality to human writing. And then we were like, okay, so if these kind of entry-level jobs can be done by AI, is the whole news team for financial reporting going to be replaced by AI? And the answer was no, for the reason I was saying earlier: you will still need very seasoned editors to edit the piece. And for all of the columns, the opinions, and the major analysis that feature market forecasts or other important information, that part of the job still needs to be performed by humans, right? But then the dilemma becomes: if, let's say, in the future all entry-level jobs are done by AI, how are people going to go through entry-level job training to become the senior editors who can do the work they need to do in the later stage of their career? It kind of cuts off the circulation, right?
So, you know, that's the thing. I don't know if people have thought about how this could affect the entire journalist career ladder. It's different from, say, being a musician or a dancer, right? You could be a talented musician at the age of 12. You could be an amazing musician at the age of 30. Same with dancers; there are eight-year-olds that are amazing dancers. They just need to keep doing it. But there's no way you can be an amazing journalist at the age of 12. You have to go through your local reporting job in the beginning, and then you keep building from there. It's just the nature of the job itself. So for journalists, for news, the future is a little hazy. I'm sure there's a lot of
discussion within the industry as well, in terms of what AI is doing to it. Maybe we should talk to our journalist friends and see what the word is, what the inside message is right now. Well, honestly, with respect to journalists, of which I consider myself one, I think...
I don't care if they have jobs. That's not my concern. My concern isn't whether humans have jobs. I think what you said about the training is completely right, though. But whether they have jobs or not is not my concern. It's about the accuracy of the information that people are getting. It's about
When you read a book, when you read a journalistic article, you want it to be by a person, because it's part of a conversation. You mentioned AI talking to AI and people talking to people earlier in the show. It's a conversation
with one person. There's something called subjectivity, which you mentioned, but there's also something called intersubjectivity, which is basically the basis for society. You have a different opinion from me, Alex, and someone else has a different opinion from us. None of us agree on everything, and we all disagree somewhere. But what
maintains society is our intersubjectivity, all of the subjectivities moving kind of together in directions, and apart, and together. But when AI starts interrupting that process, when people start reading books by AI or journalistic pieces by AI, we break society into pieces, separated from one another. We're no longer intersubjective. We've become something else, something we haven't even begun to really fathom the nature of. It's beyond subjectivity. It's beyond objectivity. It's post-intersubjective. And it's very, very disconcerting that something would interject itself into our society and replace the conversation
among people. And that is really disturbing. I think more disturbing than HuffPost quoting ex-tweets, right? I mean, firstly, I was already disturbed by the fact that there was just random Twitter tweets being put into the news. But now we have machines telling us what is happening. And, you know, I have to say...
American journalism is terrible. It's probably the worst journalism in the world, with all due respect to American journalists. So we're looking at American journalism already being broken down first by AI, and wow.
According to the Edelman Trust Barometer, only 42% of Americans trust U.S. news as of 2025. Imagine what's going to happen to trust in the news when people are not even part of the process of making it anymore. It's really dangerous for society now.
And it's also easy for billionaires to control the media right now. Okay, let's look at Jeff Bezos' Washington Post. It's compromised because it's owned by a billionaire, and that billionaire can just tell his staff to produce whatever he wants. You can read the headlines and see that it's compromised.
Right now it's at least being filtered through people. Okay, so Jeff Bezos says, you know, talk about how the poor are the problem and the rich are great, and so they do that. But imagine when you remove a lot of people from that process. Now it's just AI producing exactly what the rich people want to come out, what they want public opinion to be. And AI won't have ethical objections or moral
interruptions to that process. It'll just be, okay, yes, I will help the poor understand that rich people are good for them. I mean, I find that extremely, extremely disturbing. I think it's not about whether AI would deceive us because AI wants to deceive us. It's about who controls information.
And those are the rich people, and the AI is just essentially doing whatever they want. That's why I really have to say China's DeepSeek and other programs like Qwen 2.5, and all of these AI upgrades in China, are heartening, because they are open source. Anyone can download DeepSeek. Anyone can create their own version of DeepSeek. Anyone can put
their own model of DeepSeek onto their server, onto their website. And it becomes diffused. Because if we don't have pushback against the rich controlling information, then honestly, society is only going to become more and more favorable to the most wealthy, the most well-off. And then everyone else is just going to be
believing whatever they want us to believe. And that's very scary. And honestly, that's the essence of every technology. At the end of the day, if you think about it, it's basically about who has the money to develop it. And if they have the money to develop it, one way or another, the technology is, quote unquote, listening to that person. So I feel like we can only count on the people who are paying for the development to have basic human decency, and then for the people who are
actually doing the developing, the developers, to also have decency. I don't know if you've watched the show Silicon Valley. Yeah, I've seen quite a bit of it. It's very entertaining, right? Especially toward the later seasons, when Pied Piper becomes a public technology and the whole idea of Internet 3.0 has become largely welcomed and sought after by the entire world, and
then it comes to the point where this guy, I forget his name, the very annoying guy, goes, oh yeah, I've been secretly scraping user data. And the founder is like, how could you?
My whole selling point is that this is completely decentralized, we don't keep users' data. That's what we've been getting money for. And he's like, well, I thought it was fun, and it gives you a better algorithm to sell more stuff. And he was so self-righteous about what he'd been doing, and Richard had to solve that kind of crisis. I'm pretty sure this also happens in real life. So in that whole thing, there are just so many loopholes that could
turn something good into something extremely dangerous. And I remember when ChatGPT first came out and China came out with its own versions; you know, Baidu did it, and other Chinese companies did it.
And the government, I think a couple of months later, came out with a set of regulations saying these are the only ways AI will be allowed to function in China. And I remember back then it was met with a lot of criticism, like, oh, this is new technology and you want to control it.
But even back then, I thought that was a very wise move by the Chinese government, because this is a wild horse. You don't know where it's going. If you don't control it from the get-go, then the horror story that I told in the beginning could happen, and nobody wants to see that. I don't think anyone's ready
to have a society where everybody's in danger, kind of at the mercy of the questions you ask the bot, letting it decide where your life should go, whether it should go on or stop at that moment. So I've always thought that was actually very responsible. It was a tough decision for the authorities to make, but I'm glad those decisions are in place. And I think, going forward, that's only going to help
at least people in China to have better, safer access to AI, where the resources will be put into helping people get more accurate, more useful information, instead of worrying about what harm it will do to society. You're listening to The Bridge.
In the same Pew article I keep quoting, there's an AI-related concerns section, and it bullet-points two major issues; I think it's really three, though. Number one is what we've been talking about this entire time: inaccurate information, and impersonation. That would be because the inputs into the AI are inaccurate or
wrong, or because the wealthy people, the people who control the information, have bias and want to give people
erroneous information, or information that frames a particular topic the way they want people to perceive it. And I think that is one of the major issues. I remember, though, being in college, a long time ago, and one of my professors said, if you put Wikipedia as a source, you fail the class.
I remember at that time I didn't understand. I was like, why? And he explained that it's written by people, and oftentimes it's interfered with and not accurate. You need to go to the sources, right?
And I want to make a larger point. I studied history, and what's very scary for me is that at that time, I thought I was going to be a historian. A large part of how I would write papers, and it was accepted, I got A-pluses for it, was going back to a newspaper or several newspapers from, like, 1870 and finding out what the articles said, so I could talk about the issues of that day.
And later, you know, I became a journalist, and now I participate in making the news. And I know that the historian Jason was an idiot. The reason, even though I got A-pluses for it, is that when you go back and look at newspapers, they were written by people, and those people had biases and were obviously subjective and all this stuff. So when historians say, oh, this is what happened in such and such period,
it's also written by people, and the historians are just interpreting what people have already said. And now we're entering a period in which journalism is decaying because of things like Wikipedia and Twitter, or X as it's called now. And because of AI, we really are entering a period where facts are fuzzier than ever. It's very disturbing.
I'm very worried. Like the 66% of
adults in this survey, I am very concerned about both bias and inaccurate information, especially now that AI is accessible to people with agendas, not just the rich. Well, I will cite my own little personal failure when it comes to telling what's AI-generated and what's not. If I'm getting fooled by AI like this, then people could easily get
the wrong idea, because we're so used to consuming fast information. We're not going to study a photo, right? I went on Facebook the other day and I saw a photo; I don't even remember which account it was. The photo was so striking. It was a photo of the rappers 50 Cent, Snoop Dogg, someone else, and then Eminem. And they're all dressed in black, what do you call those, black jackets and something. And then I think
It was Snoop Dogg that's holding a little baby. And the caption goes...
doesn't it mess with your mind that Eminem has just become a grandfather, now that his daughter, is her name Hailie? I can't remember, has given birth to her son, blah, blah, blah. I didn't look closely at that photo, but the shock value of the information I got from it was like, oh my God, it's so crazy. Eminem's a grandfather. In my mind, he's still, like, a twentysomething, just producing
rap albums that we're all following. So I downloaded that photo, I put it on Instagram, and I had, like, 20 friends who were like, oh, this is so messed up, where did the time go, we're not old, blah, blah, blah. Same set of values. Sorry, same set of emotions, right? And then one person goes, this looks AI-generated. And then I go, wait, no.
No, I don't know. It could be. I can't tell. I'm not sure. But I felt like I would have seen more news about Eminem having a grand... Until today, I hadn't fact-checked it. I didn't know. I didn't care, because...
What I posted was an Instagram story, and I know it goes away in 24 hours. So that's the cycle of information we're looking at. You get it within, like, half a second, and you decide to repost it in probably even less time than it took you to come across it. That's how fast it disseminates. That's what AI is doing to us. And as a matter of fact, I'm not ashamed to admit, I've been on dating apps lately just for fun, you know, and I've
seen people using AI-generated photos there as well. And because I've seen a lot of AI-generated portraits, it's very easy for me to go, oh, well, this person's just using AI photos. These aren't even AI touch-ups. These are probably catfishing accounts, probably not even real people. But a regular user who doesn't deal with AI-generated images will look at it and be like, oh my God, this person is so handsome. And they will just
say yes to this person. And what's going to happen after that? I don't know. What's going to happen is someone's going to get defrauded. They're going to have their credit card information stolen. And that's very disturbing also. Exactly. You know, I'm actually giving a training soon about how to use X, because, you know, I'm okay at it. In South Korea, there's real-name verification. And that is because there have been a lot of suicides and murders in the country, and
people who have encouraged other people to self-harm and so forth. And there was backlash from society saying, we don't want anonymous people online anymore. We don't want you to use your cat photo and call yourself Mr. Fuzzy 1995.
We want you to say your name and use your real face, and when you go online, you need to treat people like you would in normal life. And I love that. I think that's really important. In the United States, you have people who have five Twitter accounts or five Instagram accounts or ten Facebook accounts, all with different photos. And then they go online and
engage in very nefarious activity, and they troll people and call people names. And I think AI makes that worse, because I've noticed a gigantic uptick on my social media accounts of trolls trying to get me to follow their investment advisors and all kinds of other garbage.
Now, for those of us under 60, it's a little bit easier to tell a real, legitimate person from someone who is trying to commit fraud,
to eventually cheat us out of our money, or to sell us a product we probably didn't want in the first place. But for people my mom's age, going on the internet means a whole new set of dangers. And that is because of AI. I am so scared for them. Because the bots, I mean, one person could be managing thousands of accounts that are going all over the world, injecting themselves into people's lives and
draining people of their retirement funds, and worse. I mean, who knows what some of these accounts are actually up to. So I think we have entered a period, and I know a lot of Americans don't like this, where we need real-name verification in the United States. If you're online, you should be using your face and your name, and you should be held
accountable for your activity on the internet. That's the only way to protect ourselves from the dangers of AI. Yeah, and I know we don't want to take this into a different discussion, and I'm sure we will. We have had this discussion before, and we'll have more of it in the future.
But you know, when you propose something like that, people are going to say, oh, but I need my privacy. Then don't go on the internet and pretend to be someone else. Be Ron Swanson. Don't be Mr. Fuzzy 1995 with your cat picture. If you want privacy, don't have a Twitter account. And trust the company, trust the authorities, that you can still have the privacy you deserve as someone who searches online. It's not like everyone will have access to everything you do on the internet.
But if the day comes when you need to be held accountable for what you do online, then you really should be. And this isn't even about AI. Even if we take this back 15 years, to
when social media became super popular, I think everyone should have been held accountable for the nasty things they say on the internet to other people, because you can just say it and go, but the consequences of those words are not something you would actually bear in real life.
And if the internet has become such an important part of our daily life, I think the consequences of saying or doing things to people in real life versus saying or doing things to people on the internet should be somewhat equivalent. Well, I think...
I actually think there's another way to solve the problem. For people who make the argument, like you say, hey, I want my anonymity online. Great. There should be a different kind of account on all these different social media platforms, on Facebook: a view-only account where you get to view and read whatever you want, but you can't make comments.
In order to make comments, you need to be a person with your actual identity linked to the account. That way, if you say or do something online, if you encourage self-harm or commit fraud or who knows what, it's very easy for the government to say, hey, uh-oh, you caused this person to self-harm, or you took this old lady's $20,000; we can bring you to court for fraud or whatever it is.
I think we have passed the point, because it used to just be, oh, this is a guy in his mom's basement, right? That's the cliché. And he had 20 accounts and he was being nefarious on those different accounts, but he was limited, right? That guy living in his mom's basement was limited by the fact that he was one person. But now, with the technology that exists, he can be thousands of people.
And that is a whole new level of danger that we've never encountered before as a society. And we need to be able to say, you know what? It's gone too far. We need to roll this back. And people need to be who they really are online. Or they need to have view-only accounts where, yeah, your privacy is important to you. Great. But you don't get to go on and pretend to be other people.
Just say whatever. Yeah. Pretend to be cats. Just do whatever, you know, pretend to be a giant pot of hot pot. I've seen that as well. And some people actually do pretend to be other people. Like you said, they use fake, AI-generated pictures on dating apps, and then they
pretend to be other people. And that's not just on dating apps; that's on all social media. And you know what? I would even take a huge step back and say, if you want to do that, that's fine. You want to just use fake photos? That's okay.
As long as it doesn't give way to real-life harm that you do to people. But in the worst case, if it did, the authorities need to be able to hold you accountable. My mom's always like, oh, you should just become a KOL and tell people what you think about things. And I'm like, I don't want to, because I don't want to deal with the consequences of people saying things irresponsibly on my account. That's going to affect how I live my life, how I feel.
And you can't control that. Not right now. But if the day comes where everyone... you know, I agree with you. I think it's much more complicated, unfortunately, because of national boundaries and things like that. But that's all the time we have for the show. We would really like to hear some of our listeners' perspectives. So if you agree with us, disagree with us, or have a completely different
idea outside of the framework we've been using, please email us at welovethebridge@gmail.com. You can leave comments for us to read on the air, or you can record your voice message, attach it to an email, and we will play it during the show. Thank you so much for your time, listeners. Thank you so much for your time, Alex. Thank you, Jason. Thank you, everyone. We look forward to hearing from you.