Welcome to Intelligence Squared, where great minds meet. I'm producer Mia Cirenti. Where do we draw the line between free speech and dangerous misinformation?
Algorithms not only dictate our data feeds, but also reinforce our echo chambers. And the stakes have never been higher, both for our individual perspectives and for society at large. Today's episode is the recording from our recent live event, How to Cure Your Algorithm, the third and final installment of our Critical Conversations series in partnership with Sage & Jester.
Sage and Jester are the arts production company who create immersive experiences designed to entertain, enlighten and help you harness your internal BS detector, arming you with the tools to question, challenge and pause before you believe what you see and read.
Live at the Pleasance Theatre, host Sophia Smith-Gaylor spoke to journalist Jamie Bartlett to dive headfirst into the pressing issue of digital manipulation and how information becomes warped in an era of AI and how to take back control. Let's join our host, Sophia Smith-Gaylor, with more.
Hello everybody and welcome to this Intelligence Squared event in partnership with Sage and Jester, the arts production company who make fascinating immersive experiences to arm you with the tools to question, to challenge and to pause before you believe what you see and read. I do hope you can trust what you hear from us a lot tonight.
Their bold new immersive theater experience, Storehouse, opens this June. And I believe on the flyers on your seats, you've been given five pounds off it. So we do hope to see you there. Yeah. Woo! Five pounds off.
This is the third and final event in our Critical Conversation series that I've been hosting here. My name is Sophia Smith-Gaylor. I am a journalist and content creator, spending a lot of my time challenging mis- and disinformation. So it's been my pleasure to host this. It's also my pleasure tonight to talk with fellow journalist Jamie Bartlett about how to fix your algorithm and indeed investigate them. Before I speak to Jamie, however,
I am delighted to be joined on stage with Liana Batarkatishvili, founder of Sage and Jester. So a big round of applause for Liana. She's made this happen. And my first question is, why did you set Sage and Jester up? And was it inspired by your own life experiences?
Yes, absolutely. It was inspired by how and where I grew up. I grew up in Georgia, in the former Soviet Union, then lived in Moscow in Russia for a long time, and then the last 20 years here in London. And I have witnessed this sort of break of eras.
after the fall of the Soviet Union and how these post-Soviet countries had to reimagine themselves and find their own identity once again, and what followed after with economic turmoil and how the 70 years of propaganda, the whole machine was just crumbling before our eyes.
Just to give you a little snippet, I think it's always easier with the little stories. One of my earliest memories, I must have been probably four, four and a half years old, I remember being in Georgia, in Tbilisi, in our apartment where my parents were hosting dinner for their friends.
And one of the friends turned around and shared a political anecdote because, you know, let's not forget it was the propaganda machine was so strong that people were not allowed to share any political information for the fear of being prosecuted. So the way people would share their political opinions is through those jokes. And once this joke was, you know, said at the table, one of the adults turned to me and said, "Do not repeat this at the nursery tomorrow."
So, it was just, you know, when you grow up with this background, you really appreciate, you know, you don't take freedom of speech for granted. Okay. So, Soviet Georgia, modern-day UK. What misinformation crisis do you think we should be aware of that's happening here right now? Well, I think it very closely ties in with the statistics.
That is, you know, facts are such a stubborn thing. The number of democracies has been falling in the last five, seven, ten years, and the number of autocracies has been rising. And I think it would be almost strange to think that certain nations are more susceptible to propaganda or misinformation than others.
I think from my perspective, where I come from, the democracy is such a big achievement, it's such a big win that although flawed and imperfect, it's definitely something worth fighting for. And I think having this experience from, you know, Soviet era and seeing how quickly the freedom of information, the freedom of media can be taken away from you just really shows how vulnerable democracies can be to the same thing.
And if someone said to you, "I'm too clever to fall for this kind of thing," what would you say to them? Well, first I would probably say, "Good for you. Please, can you share how you do this? Because you must know the secret, I don't." Seriously, though, I think
I think it's, again, slightly missing the point. We are constantly surrounded by a lot of information. And I always say information is not just the news, not something that we read and see. It's architecture. It's religion. It's everything we see. This space is information, which conditions us in a certain way. And I think knowing how it works, knowing what it does to the way you think, you sit, you talk, communicate, should be common knowledge.
Liana, thank you very much for telling us about Sage and Jester's mission. You're going to watch a magic trick now: you're all going to applaud, and Liana's going to magically transform into Jamie Bartlett. So everyone clap. And here is Jamie. Thanks. Algorithm. Put your hands up if you kind of don't really understand what it actually is.
Whoa, you all know what an algorithm is? What does an algorithm do? Go on, tell us. Well, the thing is, it is actually really quite simple and quite complicated at the same time. Because on the one hand, you've probably heard it described as... It's like a recipe. It's a set of simple instructions...
to sort of explain to a machine how to do a series of functions. So people will say, well, it's like onion soup. But for a machine, just you do this and then do this. And if user x does y, then you do z. So the basics of it are really quite easy to grasp. But that isn't really what we mean when we talk about algorithms, is it? Because what we really mean now are these
super-sophisticated machine-learning algorithms that no one really understands the workings of. They're essentially the same idea, but they involve vast amounts of data and vast amounts of computing power to process millions of data points and essentially provide outputs. It's the black box everyone talks about: data goes in, some very, very complicated systems crunch through the numbers, and some outputs pop out. And that could be a recommendation on your Netflix, a recommendation on your Instagram, a price point on the Ryanair website, or a million other things besides.
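To make the "recipe versus black box" contrast concrete, here is a minimal Python sketch. Everything in it is invented for illustration: the action names and weights are stand-ins, not any real platform's logic.

```python
def simple_rule(user_action: str) -> str:
    # The "recipe" view: hand-written, readable logic.
    # "If user x does y, then you do z."
    if user_action == "watched_cooking_video":
        return "recommend another cooking video"
    return "recommend something from the general feed"


def black_box_score(watch_time: float, likes: int, shares: int) -> float:
    # The machine-learning view: data in, score out. In a real system
    # these weights are learned from millions of data points; fixed
    # numbers stand in for them here.
    w_watch, w_like, w_share = 0.6, 0.3, 0.1
    return w_watch * watch_time + w_like * likes + w_share * shares


print(simple_rule("watched_cooking_video"))
print(black_box_score(watch_time=42.0, likes=3, shares=1))
```

The first function is something a human can read and audit; in the second, the logic lives in learned numbers, which is exactly why it is so hard to say what these systems are doing and why.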
It's that complex algorithm in particular that's got everybody really nervous, worried, frankly pretty terrified, about who controls them and what they're doing to us. The premise of this evening is to think about how we could cure our algorithm, and whether that's something platforms should be doing, something governments should be doing, or something we can be doing.
You, as a journalist, have investigated tech and society for such a long time. Many of you may have listened to The Missing Crypto Queen. Absolutely phenomenal podcast. Yeah, round of applause for Missing Crypto Queen. It was so good.
She's still out there. We're still working on this bloody podcast. I've been doing it for like eight years, seven years, and still nothing. If anyone knows where Dr. Ruja Ignatova is, the reward on the FBI's 10 Most Wanted list is now $5 million. All right? So I could just leave journalism then, and that would be it. Split the money with you.
What is it that keeps bringing you back, I suppose, to tech and platform accountability as a journalist? Why is it something you feel you need to keep an eye on?
I think my first book was about the dark net. And it was just all about how-- this was in 2014, so it was ages ago now. And it was just a sense that there's all these powerful forces at work that are shaping our world, and none of us really understood how they were working. We weren't interrogating them properly. Oftentimes, we misunderstood them, or we were too scared of them to really look at them closely.
And the way I actually got into journalism, and this is kind of algorithmically based in a sense, was by running a Facebook survey targeting members of the English Defence League in 2011.
Because I thought, and this sort of sums up the problems and the perils, I'd realized that the English Defence League were essentially not really a street-based hooligan movement. They were an online movement with a small real-world presence. And everyone was misunderstanding them.
Because radical movements are always the first to pick up new technology and figure out ways to apply it, because they all feel shunned and they're all looking for ways around the system. And I thought, I wonder if I could, rather than trying to target people with adverts to sell them jeans on Facebook, I wonder if I could target members of the EDL and ask them to fill in a survey. So this is what you normally do on a weekend? It was actually over a weekend I did this. LAUGHTER
And I said, target all members of the EDL with a little advert and ask them to fill out my SurveyMonkey survey about their attitudes. I come back the next day, or on the Monday, and there's like 3,000 survey responses from members of the EDL. No one had ever been able to really talk to these people before. So...
I was just so fascinated by new ways of getting into these worlds, trying to understand them, but also realizing the world was changing very quickly. Anyway, that got me into this. And I actually did a series on this, really directly on point about algorithms ruling the world: another BBC podcast series called "The Gatekeepers," which was really about how algorithms were changing the world.
And it particularly started around 2014/15 when most of the platforms were faced with a big problem, which was we had these chronological feeds. Content was coming to you on your social media platforms in the order in which it was being posted by the people you followed.
And they were like, maybe there's a better way to deliver this content to you. How about we start working out what you're interested in and just serving you more of that? It'll be really good because we'll keep you more engaged on the platform and you'll get to see more stuff you like. So they all shifted to engagement-based algorithms, which just set off this sort of arms race about how can we keep you on there for as long as possible to keep you engaged.
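In code terms, that shift is simply a change of sort key: from the timestamp to a predicted engagement score. Here's a minimal sketch; the posts, field names, and scores are invented for illustration, and no platform's real ranking model is public.

```python
from datetime import datetime

# Two toy posts with made-up engagement predictions.
posts = [
    {"text": "friend's holiday photos", "posted": datetime(2015, 6, 1, 9, 0),
     "predicted_engagement": 0.2},
    {"text": "outrage-bait hot take", "posted": datetime(2015, 6, 1, 8, 0),
     "predicted_engagement": 0.9},
]

# Chronological feed: newest first, no judgement about what you'll click.
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

# Engagement-based feed: whatever a model predicts you'll engage with
# most goes to the top, regardless of when it was posted.
engagement_ranked = sorted(posts, key=lambda p: p["predicted_engagement"],
                           reverse=True)

print([p["text"] for p in chronological])      # photos first
print([p["text"] for p in engagement_ranked])  # hot take first
```

The entire arms race lives inside that one innocuous-looking `predicted_engagement` field.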
Keep you clicking, keep selling you stuff. You all know this story now, right? It's kind of part of common culture. In 2015, I bet you didn't notice the change to Instagram's or Facebook's or Twitter's algorithms; you might have noticed something had slightly happened, but not very clearly. And what's amazing is that tweaks like that in Silicon Valley can just transform societies. That's what I found so fascinating about this subject in particular. And sometimes it's almost accidental. So, for example: 2016, Facebook's under a lot of pressure for the way they'd allowed a lot of nonsense to spread on the platform in the build-up to the election.
So they decide to start down-ranking news stories and up-ranking what they called meaningful social interactions. Meaningful to whom? To each other: your friends, your family, people you care about. Well, what they didn't realize is that this just set off another wave of absolute nonsense that in some ways was even worse. And there are people now who credit, well, that's the wrong word, who say that ethnic violence in Ethiopia is directly linked to an algorithmic change in 2018 which prioritized meaningful social interactions over news content.
And it was suddenly dawning on me that tiny little changes to algorithms change people's lives, and we hardly realise it. That was the sort of scary thing about it. Algorithms can often feel like all-powerful, all-knowing things. But I want to ask you from the perspective of someone who's reported on platforms. I was listening to an episode of The Gatekeepers, a brilliant podcast, and I really recommend you listen to it. There is a moment in it where you're seeking right of reply, just as every journalist does when they're doing a report about these platforms.
And you had approached Flickr for a right of reply. And in the recording, you go, "Flickr..." and then there's a big pregnant pause, and then you go, "...said nothing." They didn't even want to respond to the BBC, which was putting claims to them.
Can you speak to the challenges of reporting on these platforms and actually getting them to tell you something? And has that changed in the course of your career? Oh man, these aren't the questions you sent me before. I'm so out of my depth here.
That is a tough question. I'm willing to share that when I entered the space, which was primarily in the TikTok era, TikTok at the time were actually very fast and snappy delivering right of replies, probably because they thought, hey, we're a new entrant into the market. We need to show that we're friendly or at least responsive to journalists.
You're right. I think the bigger... Flickr's not a good example, but the bigger tech platforms do have large media teams. They do care about their reputation. They won't give you straight interviews very often, but they will respond to your queries. And that hasn't changed too much. There's a bigger problem. The sanctioning power of places like the BBC is shrinking.
You can expose groups. I tried to expose OneCoin and other crypto scams. And frankly, now the way that the media landscape has changed, that doesn't stop them. They can just carry on and they can wear it as a badge of honor and just carry on scamming people. So there's a bigger problem generally about...
A lot of people don't care, don't need to reply because they know it's not going to affect them anyway. But the big companies do still care about their reputation and will still get in touch. But one of the things that I found, I don't know if anyone saw a series I did called The Secrets of Silicon Valley. It was quite a while ago now. And I actually interviewed Sam Altman before he was this global star that he is now.
and we were talking about AI and the effects of AI on society. I was very lucky to get an interview with him; I didn't really expect to, because he was already running Y Combinator, which was a big startup accelerator back then. I was saying to him, "Sam, you're changing the world, you're building all this AI. What if people don't like what you're doing? What right do you have to dictate how the world is going to go?"
And he was like, "You're a very pessimistic person, aren't you? You know what? If you and the BBC take this approach, no one's going to take you seriously anymore." And he was really angry at me. I was just trying to ask my questions and try and do my job. And I think partly it was that the tech press does trade on access. To get the big hitters, you need to be kind of friendly to them.
And he wasn't used to quite aggressive questioning, and he took it really personally, as if I just didn't like him. So I think there's been an issue with the tech press not being able to be as tough as it should be, because it's all about access. I don't know if it's still happening, but shortly after the Musk takeover of Twitter, now X, if you tried contacting the press team, you'd receive a poo emoji in reply.
I mean, he's just trolling everyone. Yeah, that's a whole other story. But it's the same thing, actually. This also comes from The Gatekeepers. What's more frustrating is what happens to ordinary people. When I spoke to Abraham in Ethiopia, his father had been killed, and Abraham believed it was essentially the result of rumours that had been circulating on a very big local Facebook group devoted to anti-Tigrayan propaganda, which had been accusing his dad of basically being a militia member and, you know, a traitor.
And he was murdered as a result of these posts. And he had spent weeks trying to contact the content moderation team, begging them to remove that post, saying, you don't understand what this will do here where I live.
And just no response. So it's one thing for a journalist to get no response from people; okay, that's a shame, but I still get to broadcast it. There are a lot more non-responses going on that are far more serious for people. You've brought up Sam Altman. In the age of AI, when it comes to misinformation spreading online...
Are you more worried? Less worried? I'm a thousand times more worried, and I was already quite worried. How many of you are actually using, I bet it's nearly all of you, ChatGPT or Gemini, a large language model of some kind, even just a little bit, just chatting to it occasionally? Yeah, a lot of people. They're pretty incredible, aren't they? OpenAI has over 60% of the market as well. Oh, is it? Yeah. It's not necessarily better at certain things, but it benefited from being first. It's like Hoover: it's synonymous now with large language models. And if you thought it was kind of opaque how social media algorithms work,
chatbot algorithms, the power of them, their ability to be personalised to you, are absolutely staggering. And the potential for... Can I read you a tiny... okay, this is a little bit silly. I love being silly. I thought maybe you'd enjoy it. Here's a little thing I prepared earlier for you guys. One of the problems with all the chatbots is obviously their ability to sound like a human in natural language. So it's one thing being pushed a video that you might like. But when you feel like you're talking to someone who really understands you and can relate to you, the power and the sway it can have over your opinion and your mood is, I think, quite staggering.
We're really not getting to grips with this yet. So I thought, okay, one of the problems with ChatGPT's GPT-4o model is how sycophantic it is. Have you noticed that? It's just like: oh my god, such a great question. Wow, you're brilliant. So I said, my name is Jamie Bartlett, and I shared all the work I'd done, and I said, don't you think I'm brilliant? Yeah, you're absolutely brilliant. You're such a distinctive voice in the world of technology. I said, yeah, but do you think I should be a national treasure? Because I kind of do think I should be a national treasure. And it said, well, you're still working on it. I said, do you not think I'm clever enough? Is that it? No, you're absolutely smart enough, clearly. If I didn't immediately call you a national treasure, it's not because you're lacking; it's because you're still building.
I said, "Okay, great. So we agree that I'm pretty brilliant, but I'm not yet at Attenborough levels, but I might get there." "Oh, yes, exactly. My honest assessment, you are brilliant. You absolutely have the potential to get there." Okay. One of the top uses for chat GPT is therapy. In this context, I can see why, because people are going to it thinking,
Chat GPT might be the only person, non-person, that tells them something nice that day. Yeah, absolutely. But the problem with this is what if you are quite unwell and it starts pushing certain delusions at you or encouraging you in those negative patterns of thought? And this is the problem. I think at the fundamental core of this, I think we are repeating history in that
I mentioned the engagement-based algorithms and why they became so important. And so many of the problems came from that original sin back in the mid-2010s, which was that they were optimizing for engagement. And there's kind of good reasons they were doing it, understandable reasons. I don't think they understood the consequences.
The reason GPT-4o is sycophantic is probably because it's optimizing for engagement again, because we kind of like being told we're national treasures and we're brilliant and wonderful. But imagine where that can end up: constantly trying to keep you there, keep you talking. If you're using it for therapy, which I know a lot of people are, the sort of information you're sharing about yourself could then be turned against you. There was research that came out this week where people were debating online. It was a really good study. Half of them were debating with other humans, and half were debating with GPT-4, and they didn't know which they were debating with. At the end, they were asked to rate the debater and whether they had changed their minds. The machine was far better at debating and changing people's minds than the humans were.
That's quite a scary thought in the wrong hands, isn't it? And the language is so good and so surprisingly accurate that it's very hard for people to tell the difference. I mean, remember we used to worry about the Turing test? Is anyone ever going to pass the Turing test? That's gone; we forgot it even existed. But there's good potential too, and there are so many people working on clever solutions to this. It's also been found that when some clever researchers took ChatGPT into some Reddit groups about conspiracy theories and started debating with the members, it was actually extremely good at changing people's minds, far better than other human debaters were.
So there are all sorts of possible applications for this. I mean, I've always felt that with those scammy phone calls you get, scammers pretending to be the bank or offering you an opportunity, you could just give them a chatbot to talk to and keep them tied up for hours talking to a brilliant machine. There are so many clever ways to try to use this stuff well. But I am very worried about it essentially going down the same optimizing-for-engagement path and not really realizing how much more powerful it is than ten years ago. And these are all tools, right? So you've got the good guys wanting to use them for positive social impact, and then the bad actors using them for more nefarious reasons.
Is it government that needs to step in to try and curb the bad and preserve the good? Can they? Should it be platforms? Yeah. So we've obviously got this Online Safety Act that's really recent. I think we just...
It's inevitably going to be a combination of all sorts of things, and we'll probably never quite catch up with it. I don't think I have any brilliant revelations on this. Even though I've worked in this area for years and years, I still struggle with how far the platforms should be responsible. Clearly they should invest more in content moderation, and they're definitely going to have to do a lot in terms of looking out for machine-generated content and large-scale mass manipulation that's subtle. So content moderation will now be required to look at networks rather than individual pieces of bad content. The entire content-moderation business was originally based on finding bad content, often using AI, and then just removing it. Whereas what real campaigns of influence tend to be is networks of people spreading truthful stories to nudge people in a particular direction. And that type of manipulation is far harder to counter, because it's all about coordinated inauthentic behaviour. And as a result of the recent US election, they've just sort of wiped all of that away, because it all now gets framed as censorship and all the other stuff that surrounds that.
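To give a feel for what "looking at networks rather than individual pieces of bad content" might mean, here is a toy sketch; it is not any platform's real system, and the accounts, posts, and thresholds are all invented. The point is that each post could pass moderation on its own, and only the pattern across accounts is suspicious.

```python
from collections import defaultdict
from datetime import datetime

# Invented example posts: (account, text, time posted).
posts = [
    ("acct_a", "Candidate X secretly hates farmers", datetime(2024, 5, 1, 9, 0)),
    ("acct_b", "Candidate X secretly hates farmers", datetime(2024, 5, 1, 9, 2)),
    ("acct_c", "Candidate X secretly hates farmers", datetime(2024, 5, 1, 9, 3)),
    ("acct_d", "Nice weather in London today", datetime(2024, 5, 1, 9, 1)),
]

WINDOW_MINUTES = 10  # assumed coordination window
MIN_CLUSTER = 3      # assumed number of accounts before flagging

# Group identical text, then flag clusters posted in a tight window.
by_text = defaultdict(list)
for account, text, when in posts:
    by_text[text].append((account, when))

for text, group in by_text.items():
    times = sorted(when for _, when in group)
    spread_minutes = (times[-1] - times[0]).total_seconds() / 60
    if len(group) >= MIN_CLUSTER and spread_minutes <= WINDOW_MINUTES:
        print("Possible coordinated cluster:",
              [account for account, _ in group], "->", text)
```

Real campaigns are of course far subtler than identical text posted minutes apart, which is exactly why this kind of manipulation is so much harder to counter than removing single bad posts.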
So those are the things I'm particularly worried about: large-scale, subtle influence campaigns. And I think it will ultimately be governments trying to nudge tech companies to do more, tech companies doing a bit and maybe not enough, and journalists trying to figure certain things out. How that's actually going to end up, I'm not entirely sure. But if I can, let me tell you one other quick story about what I think a lot of people misunderstand about algorithms, when they push content at you, and what that can do to you. This again is from The Gatekeepers, and it's the story of Molly Russell, who sadly you've probably all now heard of, because she killed herself aged 14 in 2017 after
I mean, basically being recommended vast amounts of self-harm material, suicidal material. I won't go into all the details of it. And I interviewed Molly Russell's dad at great length about all this. But then I also went to Molly's family's lawyers and looked through all the material that she was being pushed, all the photos and the videos and the captions and the Instagram posts.
And I mean, it was hundreds and hundreds and thousands of pieces of content spread out over many, many months. And the thing that most people maybe don't realize is not one single piece of content was the problem.
You could almost make a case that every single piece of content she saw did not breach community guidelines of these platforms. They often weren't directly inciting certain behaviors. But it was the cumulative effect of seeing hundreds of similar pieces of content being pushed at you all the time. To a 14-year-old, remember-- I mean, I found it tough. Imagine what it's like when you're 14.
That was the thing that pushed her and pushed her and pushed her into such a dark place. And if you concentrate on, oh, we've got to remove single pieces of content, that's not really the problem with algorithms. It's the way they can subtly, gently keep giving you lots of similar things. And think about the way algorithms have changed our news consumption. In the old days we used to read broadly, but not very deeply. We'd read lots of different stories about lots of different things. Sure, Conservatives would read the Telegraph and liberals the Guardian, and you could probably radicalize yourself a bit on that as well. But it was hard to go very deep into a subject and become obsessed by it. It was time-consuming.
But the algorithms allow you to go an inch wide and a mile deep very, very quickly into any subject, and they encourage that. And even when it seems healthy, it's not. Like, you're obsessed with the farmers' inheritance tax; suddenly, for a week, that's all you read. You're utterly obsessed with it. Everyone else is a moron and an idiot who doesn't understand what's going on.
And it can happen to very, very sensible people. None of that's hate speech. None of that's illegal content pushed by an algorithm. It's just this gentle nudging towards obsessional interest in certain things. And it happened to Molly Russell, but I think in a dangerous way. But I think it's happening to all of us. And I really think that's what the platforms need to try to get to grips with, especially as these tools are going to get even more powerful.
And how are they going to stand up to the criticisms they've received, the "you're censoring me, you're suppressing me", when people say that algorithms have taken their content down, or shadow-banned or suppressed them in a non-transparent manner?
Well, they're not going to get it right, are they? Because people are always going to be angry about it. I mean, a lot of right-wing commentators felt they had consistently been shadow-banned in the lead-up to this election and to the 2020 election in the US. And I think one of the problems with algorithmic curation of content is that, because no one really understands how the hell it works and how the decisions are made, it is very easy to cry conspiracy straight away:
oh, I'm being silenced because I'm being shadow-banned; I don't get as much reach as I used to anymore. And because no one is able to independently audit what's going on, no one can know. So conspiracism spreads very quickly and easily. And a lot of the kickback in the last election was in response to the feeling of censorship that had come from the previous one. I mean, you're an algorithmic person. A genius person, yeah. You're an algorithm. Do you feel like you understand when your content's pushed in certain ways or not? Do you feel punished for certain words or phrases? Well, my first book is about tackling sex misinformation, so you can guess the problem there: one word, sex, that I had to repeatedly misspell in any content that I made. And I made the decision to do that because it was me against the algorithm, and it's more important to amplify my journalism than it is to spell a word correctly in the English language. But other people disagree with that. They say, why should I be self-censoring in order to get past an algorithm? It's a different decision that people make.
It's something I tried to game literally yesterday, because my next book has the word "kill" in the title. Oh, great. So I went on ChatGPT and I asked it: do you think I'm going to meet any algorithmic suppression posting content for a book that's going to be called How to Kill a Language? And it gave me what seemed to be excellent advice for how I could mitigate against that, which was, whenever I mention the title, to always contextualize it. Because, think about it, there is an algorithm that is going to make the wrong decision if there's not enough context for it to make a better one. So the advice was to always say "my book, How to Kill a Language", rather than
just popping the title out there with no context. So I completely agree with you: you can game it if you know how to mitigate against an algorithm getting confused, essentially. But most people don't know that. Would it be a positive design measure for a platform to say: your content's been taken down, this is why, this word triggered this specific violation? Do you think that could help? Well, it would help you, and then you could retrain it. But yeah, content moderation, and not just content moderation but content ordering and filtering, because so many of our careers depend on it and our social lives depend on it, can create such a feeling of helplessness
and a sadness that you've got to try to satisfy the machine somehow. But we're all weirdly living through it. I mean, I think we might be entering a bit of a golden age of reverse-engineering of algorithms.
The incredible AI tools available allow us to start working out how algorithms are controlling us. So you can say: look, we know, for example, that certain airlines or certain products will be priced differently if you've looked at them before, or if you're in certain locations, or if you're using certain computers. And you can quite easily train an AI agent to circumvent that and create accounts for you, so you're charged less.
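As a toy illustration of that kind of probing, here's a sketch that compares quoted prices across simulated browsing profiles. `fetch_quote` and its pricing rules are entirely hypothetical stand-ins; the real pricing algorithms are exactly the unknown that a reverse-engineering agent would be trying to map.

```python
import random

def fetch_quote(product: str, profile: dict) -> float:
    # Hypothetical stand-in for a real scraper or booking agent.
    # The rules below are invented purely to illustrate that quotes
    # can vary with the browsing profile presented to the seller.
    base = 100.0
    if profile.get("visited_before"):
        base *= 1.15  # assumed: returning visitors get quoted more
    if profile.get("device") == "mac":
        base *= 1.05  # assumed: pricier device, pricier quote
    return round(base + random.uniform(-2.0, 2.0), 2)

profiles = [
    {"visited_before": True, "device": "mac"},
    {"visited_before": False, "device": "windows"},
]

for profile in profiles:
    print(profile, "->", fetch_quote("LON-NYC flight", profile))

# A systematic gap between profiles is the signal an agent would look
# for before deciding how to present itself when actually booking.
```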
And we're entering this weird sort of algorithmic race where we're going to use AI all the time to try to game the big companies' AI to get better deals for ourselves, and they're then going to try to work out how we're doing that. There's no doubt that's going to happen. It's already happening in a small way; computer scientists do this all the time. But within three years it will be an app on your phone. It will just feel natural, because what AI is doing is reducing all this complex stuff to natural language. You just type in, "help me find better insurance", and an AI agent will figure out a way to do it for you. So all of us are going to be engaged in your problem of: is my algorithm going to be nice to me, is it going to delete my content somehow? We're all going to have to get used to that,
but without ever feeling like we're totally out of control. The sad thing is, I'd love it if none of this existed, frankly. I'd love to turn back time to the glorious 90s, when we didn't have to worry about all of this crap. And I feel very sorry for today's teenagers that this is what they're having to deal with.
But a worse situation would be that a handful of people work all this out and the rest of us just don't, and then we're constantly being ripped off, constantly at the mercy of some algorithm. People using ChatGPT are suddenly brilliantly productive and the rest of us are rubbish, because they produce PowerPoints in 30 seconds while you labour away for six hours, so you're going to get fired. And that is what will happen unless we actually start using these tools ourselves as well. I remember a really scary quote, though, that I saw. One of those... in podcasts, you call it a water-cooler moment: a little line that sticks with you, that you share over the water cooler. And it was just someone saying that they were in a taxi,
and they'd asked the driver, have you had a good day today? And he said, yes, the algorithm's been really good to me today. And I was like, oh my God, how long before we're praying to the algorithm? You know what I mean? It's not that different from religion: this powerful being granting me these opportunities. I think it can feel like we often don't have control. And I do remember,
In the early age of TikTok, some people would complain to me like, oh, my whole For You page is just half-naked women dancing. That's all this app's good for. And I had to delicately explain to them that the reason your For You page is lots of half-naked women dancing is because you keep watching the half-naked women dancing and you're not scrolling past it. You're not telling the app.
what it is you do want to watch, if it's something different. I was using it in those early days to find stories, and my For You page was a lot of people talking about problems they were going through. Another journalist would say, mine's loads of kittens. And it's like: yes, do you like kittens? Yes, I do. I could have guessed that; that's why your whole For You page is kittens. I had to be very ruthless: if a cute dog video came up, swipe away and don't watch it. Otherwise, I wouldn't get the sort of journalistic content that I was using to find stories. So, last question, Jamie.
What can we do for ourselves? Is there any way we can take back control and do something this evening, on our phones, if we feel the need to? Yeah, definitely. I mean, we are obviously going to enter a new age of Luddism, so you might all be going out there smashing up these machines one day, the new wave of neo-Luddism. I'm sure that's coming.
It'll start in France, obviously, and then it'll come here. So I think there's two ways of thinking about it. One is-- and I think about it this way, anyway-- a couple of simple things on your phone, really basic stuff. X and Instagram, you'll know more about TikTok, about how far you can do this. I think you can in the US, but not here yet, although I might be wrong now.
X's default setting will be the recommendation engine, so it will be pushing Elon Musk at you even though you don't follow him. It will automatically go to the For You section, where the timeline will be stuff it wants you to see; it'll push controversial content at you that it thinks you're going to click on. You can just toggle over to Following, and then you only see content from the people you follow.
It's so simple, and you can do the same on Instagram. There are other things you can do, like going into your settings and clearing your YouTube history, so it doesn't keep pushing stuff at you based on what you've watched. These things take two or three minutes. And it's a fun exercise in a way, because then you can think: oh my God, my experience of being online is so different just from making that simple switch.
Took me less than 10 seconds to do it. And it gets you understanding the effect that social media content can have on you. And what it also does is makes you realize, oh, I am kind of in control. I don't need to just be fed stuff. I can actually manipulate a little bit and control a little bit what I'm being shown. So that is extremely simple to do.
There are obviously bigger things. You know, we could talk about education and the importance of critical thinking; we've talked about that for years and nothing ever really seems to happen. But I would advise everyone as well to look at themselves. We are so quick to blame every other idiot for falling for propaganda,
sharing fake news, being led by any old nonsense. And I guarantee everyone in here has also done it. Sometimes I look at the things I share, indignantly share, and then I'm like: am I the problem? Have I been pushing inflammatory stuff? I mean, I might agree with it, but seen from the perspective of somebody else, I probably look like a right annoying little git.
And all I'm doing is stirring the pot and making things worse. So just look at yourself. And if I can say one thing as someone with small kids as well-- everyone's worried about their kids doing this stuff-- I stand in the queue of parents who are complaining about, worried about their kids. And they are all on their phones all the time.
And they are modeling this behavior constantly. So one thing you could try to do, and this is really useful, is to de-phone your life. Look at all the phone-- like, this isn't a phone. Who uses this to make phone calls? It's just a mini computer.
Now, what you use it for is Google Maps. You use it for your music. You use it to tell the time. You might use it as a compass. You use it as a calculator. These are all objects that you could also have that aren't on your phone. Because what so many of us do all the time is pick it up to check the time. I've got an alarm clock now. This is why I wear this.
or pick it up to check the music or to whatever, and ten minutes later you're scrolling through social media, and your kids are watching or your friends are watching. If you could try to sort of decouple other objects from your phone, you'll find your usage goes down drastically, and you'll feel a lot better for it. And instead of doom scrolling, you could of course read one of Jamie's best-selling books as well, if you so fancied it.
We now have an interactive element of the evening. On the screens behind me, a QR code has mysteriously appeared. You may now scan the QR code and there are going to be two questions asked of you. If you're watching our live stream, you can click the link that's been made available. I'm going to read out the questions and the possible answers and the results will appear to us all. One, how often do you see views in your feed that challenge your beliefs?
A. Often, I follow a broad range. B. Sometimes, it slips through. C. Rarely, it's mostly like-minded people. Or D. Never, I actively avoid opposing views. Second question, which is more dangerous in today's culture? A. Misinformation spread seriously. B. Misinformation spread as a joke.
C. People who can't tell the difference. Or D. Our favorite social media algorithms. Question one results are on screen right now, and I'm going to be asking you, Jamie, for your reaction to them. How often do you see views in your feed that challenge your beliefs? Most people said B: sometimes, it slips through. Is that accurate? Yeah. I think...
it seems about right; I've seen similar things before. I think the biggest misconception about echo chambers, and this is just my opinion from having spent a lot of time in different groups, is this: it's easy to believe that you become radicalized online just by being surrounded by like-minded people sharing the same thing all the time. I actually find a more powerful dynamic is seeing the people I disagree with and thinking they're complete lunatics, because their most aggressive content is what gets pushed at me, so I'm seeing them in their worst light. And rather than having a long discussion with them to understand maybe where they're coming from, I'm just seeing their angry little message. You're being rage-baited. Nut-picking, it's often called: take the most extreme version of your opponents and then assume that that's what all of them think all the time. And I think the Republicans were actually brilliant at doing that, making all liberals seem like complete lunatics: why would you ever vote for these people?
And it's very easy when you're online to find complete lunatics on your own side. Of course. And so I'm not even sure whether the problem with echo chambers is that it's just people you agree with. It's like, do you really take seriously the opposing views and actually try and listen or maybe understand where they're coming from? This could be a great use of AI, like...
Like: imagine I'm a member of the English Defence League. Curate my news feed for me. Show me what I'm feeling, how I consume. Trying to get into someone's head and understanding how they see the world has never been easier than it is today. But none of us ever really do it. No. Second question: which is more dangerous in today's culture? Wow.
C has had the most votes: people who can't tell the difference are perceived to be more dangerous than the algorithms.
Well, in the end, these algorithms are... You all know this now. It's really interesting how much more informed I think everyone is about this, broadly speaking now, compared to even five years ago. And most of us know that these algorithms and the content you consume is a sort of reflection of us in some way. It reflects the things that we do and we say.
But it's turbocharged, and it's manipulative; it is still a distorted reflection of us. So understanding that it's really about us, how we're thinking and how we're consuming stuff, makes a lot of sense. People who can't tell the difference: that's a sort of psychological issue. Whenever I talk about teaching critical thinking about the internet, I very rarely suggest that we just talk about algorithms. I want to talk about the main cognitive biases that humans suffer from. You know, there are something like 160 that have now been catalogued. The normalcy fallacy; and, for me, fear of missing out is the biggest one of all, because it affects all of us all the time, especially with crypto scams.
And we don't ever learn about those in critical thinking classes much, but they ultimately are the things that are being manipulated. So that's why some responsibility must rest with us. A round of applause for national treasure, Jamie Bartlett. Thank you.
You can clip that up, feed it into an LLM and it'll all be well. Make sure to visit sageandjester.com for the exciting experiences that they have coming up and obviously their ongoing fight against the misinformation crisis. I've been Sophia Smith-Gaylor. Thank you. Thanks for listening to Intelligence Squared. This episode was brought to you by Sage and Jester.
Make sure to visit sageandjester.com for all the exciting experiences they have coming up in the fight against the misinformation crisis. This discussion was produced by myself, Mia Cirenti, and it was edited by Mark Roberts.