Imagine you're an immigrant, and a friendly mutual on social media or in a Discord server suddenly starts chatting with you. You become close, and maybe you make an offhand comment about politics. Before you know it, the police have a full record of your conversation and comments, and you're being deported. Massive Blue is a company that's working with police departments across America to bring this type of surveillance infrastructure to life. They're creating an endless stream of AI-generated people online to interact with and collect intelligence on college protesters, immigrants, political activists, and alleged criminals. 404 Media's Jason Koebler has been exposing all of this. Today he joins me to talk about digital surveillance, how military tech is increasingly being used on the American public, and
what activism might look like in the age of AI. Hi, Jason, welcome to Power User. Hey, thanks for having me back. So this story, when I read it, terrified me, because I immediately started to look at my social media feed and was like, oh my God, how many of these people in my replies are weird AI agents? So tell me what's going on here. Yeah, so basically a company has invented a tool for cops where they can create these AI-powered social media profiles that are intended
to collect evidence on any target that the cops want. And this includes not just criminals, right? It can include criminals, like child trafficking is a big use case that they cite, but it can also include, quote, college protesters, activists, things like that. When did this start happening? Cops have been going undercover for decades, but this sort of automated social media monitoring is really recent. The company that I wrote about started just in 2023.
I think that company that you covered is called Massive Blue, and its tool, Overwatch, is, I think, what you described as running all of these AI systems. Was there something that happened in 2023 that made this company come to fruition? I think the big thing is that AI just got a lot better, a lot more conversant. We started to see the rise of ChatGPT toward the end of 2022, I believe it was. And so we don't know exactly which large language models or which AI tools are being used. But I think before things like ChatGPT, it would have been really hard to make a believable AI bot
person. And so these are social media accounts that have a profile picture and that can interact with other users, both out in the wild, like on your timeline, but also in DMs and in Discord chats and things like that. You wrote that Massive Blue markets itself as, quote, an "AI-powered force multiplier for public safety" that deploys lifelike virtual agents that infiltrate and engage criminal networks across various channels.
And some of those channels, I think you mentioned, were text messages and social media. So how do these agents find people on social media and how do they even get them on text message? The way we reported this story was we got public records. So I filed a lot of public information requests with different police departments. And so I got emails between Massive Blue and these police departments. And in those emails,
Massive Blue, which is the company that makes these personas, was asking cops for trigger words, more or less. So a trigger word might be protest or it might be the name of a local university or it might be different slang words for drugs or what have you.
And so these bots will monitor, say, X or Instagram and look for different hashtags or different words that the police believe are associated with criminal activity, or protected activity that the police want to surveil. And then the bots will basically start responding. They're reply-guy bots, more or less. And I think that if they're able to start a conversation, you can go down the rabbit hole where maybe a conversation starts on Twitter and then moves to a private Discord or something like that. Yeah, you start tweeting at each other, maybe the bot DMs you, then suddenly you're talking to them on text message and Discord.
Massive Blue lists things like border security, school safety, and stopping human trafficking among this tool's use cases. Has this technology so far led to any arrests? As far as we're aware, it has led to zero arrests. We know that they're working with at least two police departments. And in those two counties, Yuma County and Pinal County in Arizona, both close to the border, there have been zero arrests associated with this. It's possible that they
are working with other police departments in other states. But the company is really, really secretive. And so we don't know exactly which police departments it's working with. But we know that it's been associated with zero arrests so far in about a year. So it's not really clear how effective this tool actually is. Okay, so you talk about how Massive Blue develops these personas. Can you run through some of the personas that they've deployed across social media so far?
Or at least ones that you found in the presentation. Basically, the way we reported the story is we got a presentation, like a pitch deck, that the company sent to police. And as part of this, they were like, here are examples of all the types of people, the personas, that we can make for you. One is
named Heidi, and she's called a "radicalized" AI persona; that's how the company pitches it. According to this slide, she's 36 years old, she's from Texas, she's divorced, she has no children, and she's really interested in body positivity, seeking meaning, and also baking. I would describe it as kind of a fever-dream stereotype of what the police imagine a leftist protester to be.
A blue-haired liberal who is divorced, a cat lady, all of that. That's exactly the vibe. And then there's another one who's a 25-year-old from Dearborn, Michigan, which is a heavily Muslim area. And it says that this woman's parents emigrated from Yemen, she speaks Arabic, and she's active on Telegram and Signal. So that's very pro-Palestinian-protester coded, which is sort of what I think they're going for here. And then they also have what they call a child trafficking persona, which is
a kid, a 14-year-old kid named Jason (not me). And they say that he's really into anime, gaming, and comic books; he's shy; he has difficulty interacting with girls. And then they have examples of conversations that he has, where he's obsessed with SpongeBob, for example. I thought this was one of the most ridiculous ones, just because it has a back-and-forth conversation between the bot and an imagined
child trafficker. It's pretty, pretty nuts. It's clearly concerning that this company is getting lots of contracts with police departments, maybe some that we don't even know about. But what is the real harm here? We're familiar with bot networks online. I feel like this was a big thing back in 2016, 2017, when people were really worried about bots
influencing our election or shaping discourse. And I feel like a lot of those concerns ultimately never came to fruition. So what is new about what's happening here, and why should people care about this? I mean, I think that in the 2016 election, for sure, there was a lot of fear about the Russian troll armies. And in some cases, those were bots; in other cases, it was actual people in Russia who were, you know, planning protests and sowing discord and fighting with other people online. In this case, it's the police, who are supposed to be keeping us safe. The use cases that they are describing, like
Child trafficking, definitely a crime. Drug trafficking, definitely a crime. But then they also say that they want to use this to catch college protesters. That's a protected activity under the First Amendment. And so you could easily imagine this being used not to catch criminals, but to trick people into
endangering themselves if they're trying to plan a protest at a college or if they're trying to plan an anti-Trump protest. One of the people who works for Massive Blue is this guy named Chris Clem, and he was a Border Patrol agent for like 27 years, I believe. And if you look at his LinkedIn, he's posting images of himself with
RFK Jr., Tulsi Gabbard, with like all of these Trump administration officials. And so it's clear that this technology is very like anti-immigration coded, anti-leftist coded, like that is kind of the point here.
One thing that immediately stood out, and I think you mentioned this in the story, is the potential to, like you said, trick students, and specifically foreign students. I mean, the Trump administration has revoked the visas of hundreds of foreign students at this point, mostly those who have protested against Israel's assault on Gaza. But it
seems like, especially for a non-native English speaker, or maybe somebody that's a little bit new to online spaces on the left, like they've just started college, maybe they're visiting, they could be very easily tricked by something like this, right? I mean, I could imagine a young person, again, not a native English speaker, starting to talk to one of these AI agents and saying the wrong thing about Israel and Gaza, or saying the wrong thing about some political matter, and then that being used to
revoke their visa or, yeah, punish them in some other way. I mean, absolutely. And that's where my head went immediately when we started doing this story, because protest is protected by the First Amendment if you're a US citizen. But what we've seen in recent weeks is people getting picked up off the street because they wrote a critical op-ed in a college newspaper. We've seen people held at the border because
they have these very minor visa overstays and things in their past. And so you could easily imagine this being used to find students who are guilty of thought crimes against the administration. I don't know how else to say it, but it's really, really concerning for that reason, I think.
How advanced do you think the tech behind all of this is? I mean, do you know anything about Overwatch's funders? Anything about the company? I know you mentioned that guy, Chris Clem, I think it was, who went on Theo Von recently. And Theo Von asked him, okay, what does your company do? And he says, well, I'm not going to get into it that much. So who's behind this secretive company, and what tech is underlying it?
That's the thing. Like we wrote in this article, with surveillance technology there's usually a big gulf between what a product promises and what it can actually do. And so if you imagine the scariest version of this technology, it would be an AI that's totally indistinguishable from a human being.
And what we know about AI right now is that it's not that good, but it still does trick lots of people, generally speaking. So I think it's really hard to know whether this is a company that's pitching snake oil that doesn't really work that well, or whether it's a sophisticated AI. Given that the people involved don't seem to be highly technical, I would bet that it's kind of off-the-shelf AI technology. I don't think that they're using technology that you or I couldn't use to create an AI persona, if we were doing it with something like ChatGPT. Right. I mean, I feel like this company might be a little
bit of a joke, or maybe hasn't really built the most effective tech yet. But then you guys have also done tons of reporting on Palantir, and I wrote about their efforts recently too, and their partnership with ICE, where they're seeking to develop more and more AI tools. And they've developed so many sophisticated AI tools, obviously, for use in places like Gaza. How likely do you think it is that similar tools will be replicated by more capable tech companies like Palantir or some of these other defense contractors?
Yeah, I mean, we saw that ICE is saying that it's going to scan the social media of people on student visas to determine whether they can stay in the country, which is really scary. And we know that the way that they'll be doing that is with AI enabled tools made by companies like Palantir, which is an incredibly sophisticated company that has been in the surveillance business for decades at this point. I think that cops, like a lot of other industries, are obsessed with AI and what it can do to kind of like
automate and empower their work. And so with things like facial recognition and license plate readers, we're seeing AI being integrated into both regular policing and immigration enforcement. I think AI tools are just very scary in this context, where things that you post on social media, people that you hang out with, tattoos that you have, can be used against you to deport you from this country. And so I think it's a huge, huge concern.
Yeah, and we've seen over and over again that these systems are flawed, right? These AI systems make mistakes, and there's this tendency to say, well, the AI found you guilty of X, Y, Z, or the AI determined X, Y, Z, as if it's this infallible tech. I mean, you mentioned the tattoo thing; I could see them scanning tattoos and getting stuff wrong. So much of this feels like military tech, too, that's being repurposed for citizens. We've seen a lot of these tools used abroad in foreign wars. And I just keep thinking back to that saying, or a tweet or something, that everything that's happening in Gaza, those technologies will be weaponized against citizens. And I think of that robot dog that we saw as well, that's now being deployed by police departments. So are there any other ways that you're seeing military or defense tech now being used domestically?
Yeah, I mean, Predator drones that, you know, flew over Afghanistan and over Yemen for years are now patrolling the border. As far as we know, they are not armed; we're not doing drone strikes in Mexico, although people have been worried about that. I think that with surveillance technology in general, it is usually used in foreign wars first, and then it comes back to the United States. And they often try to use it in these really high-profile cases with really unsympathetic people to start with. So
It might start with the Department of Homeland Security or the CIA or the NSA using a surveillance technology against someone that they consider to be a terrorist overseas. And then they might use it against a quote-unquote terrorist in the United States, or a mass shooter. But then that technology trickles down, and it's used against undocumented immigrants, it's used against protesters, and then it's just used as part of day-to-day police work. And that's something that we've seen over and over and over again: a technology that's used either in a war or in an emergency context in the US suddenly gets used against everyday citizens as a routine thing that happens all the time. And is there any sort of oversight of these contracts or this technology? I mean, it seems like the federal government is deploying it, and these local police departments deploy things like this. But is there any sort of oversight system for all of this tech that's being rolled out across the country?
They're supposed to follow the law. There are various privacy laws; it depends on what technology is being used. You know, there's privacy laws in this country. We have some. We have some. For example, there was a case just last week regarding a cell tower dump. And what that means is the cops were able to ask AT&T or Verizon for all of the phones that had connected to a specific cell phone tower, to try to solve a murder, more or less. And so the cops were able to identify the murderer, but they also identified thousands and thousands of innocent people. And a judge ruled that that was unconstitutional. And the way that something like that happens is you have to have people willing to fight this stuff in court. You have to have groups
like these civil liberties groups, or in many cases criminal defense attorneys, who are willing to fight these surveillance tools on constitutional grounds. And sometimes, you know, courts will say that this is illegal, but often that comes years after the technology is already being used against people.
And so the complicated answer to your question is that sometimes these technologies are ruled illegal, but it usually takes a really long time. And often their use is normalized before that actually happens. I think of this horrifying moment constantly. Last summer, I was at the White House for this content creators event and Neera Tanden, a Biden official, got up on stage in front of a crowd of hundreds of content creators and said, who wants to remove anonymity from the Internet? Don't you wish you could unmask every single troll?
And every single person in there raised their hand, even the journalists, these content creators who consider themselves journalists. And it was so scary. And then she did this whole Q&A about why we need to remove anonymity from the internet and why we need facial recognition and all this stuff. And this is from the Democrats. So it seems like both political parties are just completely aligned. You see this also with these moves to push age verification online, which would tie your offline identity to your online behavior. I mean, this is also
stuff that I feel like both political parties criticized China and authoritarian countries for, for years. And now they seem completely aligned in wanting to roll it out here. Why do you think there's no cohesive political effort on the left to fight back against this stuff? I mean, I think you're absolutely right. A lot of these technologies are built under Democratic administrations thinking, oh, we'll use them only against the worst people.
And then someone else gets in power, and it's used against everyone. Once the technology exists, it's really hard to put it back in the box, if you will. I mean, I think the reason that there's not a huge pushback against this sort of thing is because cops and the government are usually careful to use these technologies against really unsympathetic people to start. They're usually used
against mass shooters, white supremacist terrorists, people who are bombing buildings, and things like that. And then they sort of say, well, thank God we had this surveillance technology, we were able to catch this person. And then it gets rolled out really widely. And by the time it's used against college protesters, or journalists, or just random people in your neighborhood, it's too late; the technology is mature, it's proliferated all over the United States. And
I feel like Democrats really don't want to be seen as being soft on crime. That's something that they have shown over the years, where they're obsessed with not being soft on crime. And so they're like, well, we're developing these technologies. Yeah, they're obsessed with buying into this right-wing reactionary framework that
the US is being overrun with, you know, violent criminals from Mexico or whatever. And there are people fighting back against this, but it's usually nonprofit lawyers who will pick a few cases and then fight them in court. But again, those take a long time to run through the court system. By the time they have gotten a ruling on any given technology, there are five more technologies that have come out that are maybe even more invasive. And so it's really like playing whack-a-mole, I think.
Do the social media platforms have any culpability here to, like, identify AI users or automated bots? I know there was a lot of conversation about that a while ago, when Elon was taking over Twitter, about how much of these platforms, you know, are actually authentic behavior versus automated systems. So yeah, I mean, is there a way where,
instead of waiting for the courts to figure this out, that some of these companies could be pressured to put AI labels on some of these accounts.
Yeah, there was a period where Facebook and YouTube and even Twitter were doing these big reports where they said, oh, we identified this nefarious North Korean bot farm or we identified this Russian bot farm and we took it down. And I just feel like the social media companies have been...
under attack for this idea of censorship for a long time, and almost all of them have taken a step back from wanting to do really any content moderation whatsoever. And so almost every social media platform is kind of allowing anything to happen on there now. If you go to Instagram, if you go to Twitter, if you go to TikTok, there's tons and tons of AI-generated content on there, and almost none of them have done anything to really label it or to take it down.
And at that point, distinguishing between an AI-generated account that's operated by the cops and just random AI spam, I feel like it's really hard, because they've done a really bad job deleting this stuff so far. Yeah, it seems like the whole internet is already overrun with AI. Like I was mentioning before, I feel like maybe somebody that speaks English as their second language might fall for these bots, but I feel like I'm a pretty savvy internet user. I feel like most people at this point
understand the concept of like bots and like AI and like they'll do those snarky replies, like give me a recipe for, you know, a mango smoothie or something to see if the person's AI generated or a bot. How sophisticated do you think these systems could become? And do you think that we're approaching an era where maybe it's impossible to know who is a bot and who isn't?
Yeah, I mean, I have seen so much AI-generated content on Instagram and Facebook that people can't tell is AI-generated. So I think that a lot of people are pretty sophisticated at being able to tell what is a bot and what is not. But at the same time, I think there's a reason why all of these...
automated scams exist. Like my blue sky and my Instagram is full of people DMing me that are bot accounts trying to steal money from me. And I imagine that that must work with some people, otherwise they wouldn't exist. And so I don't think tons and tons of people are getting tricked, but I do think that, you know, it's concerning that this exists at all. And I do think that this technology is getting way more advanced. Like, I don't think that this could have existed before.
two years ago. And now I at least see the vision. Two years ago, that was the era when people had, like, seven fingers on their hands if you were generating an AI avatar. And now some of them are really hard for even me to tell whether it's a fake image or not.
Yeah, I was talking to a guy that runs one of those OnlyFans chat farms recently, and he was talking about AI and how they're developing AI tools to automate OnlyFans creators' chats. And they're already deploying it in a way where, you know, you can get really hyper-personalized stuff. The bot has somewhat of a memory, and a lot of people cannot even tell, especially if their guard isn't up.
That's where my head went to also, because I've talked to some people who run AI influencer accounts. And I guess, if your guard isn't up, I think that's the right way to think about it. If you're looking for it, then yes, you probably will be able to tell what's a bot and what's not. But if you're in this context where, let's say, you're already in a Discord chat with hundreds of different people, and they're all just chiming in here and there, maybe you're not going to be able to tell
which individual account is a bot. Whereas like if you're talking one-on-one with them, you might be able to. So I could imagine it being pretty scary. Like, and I could also imagine these bots being able to infiltrate a community and then just sit there and collect information versus walking up to someone saying, hello, fellow kids. Like, do you want to traffic some drugs? Like, I don't know if that's going to work. Do you guys want to protest the genocide at any point in the next six months? Yeah, exactly. Exactly.
So there are different ways that you can imagine this working, I guess. I also think that part of it is just the chilling effect that it'll have, even if it doesn't work, right? These days people's guards are up, and this will just make it harder to plan things like this. The beauty of social media, and I feel like why it was so liberatory for so many years, is because you could sort of plan a lot of progressive
activism out in the open. I mean, the Me Too movement, Black Lives Matter, all these things, but also just climate protests that are openly planned on TikTok and Instagram and things like that, school walkouts. Now I feel like a lot more activism is going to have to be maybe more undercover, more direct, more in person, because the surveillance potential is just so significant online.
I think you're absolutely right. And like, let's say you're here on a student visa and you sort of have a precarious situation. We've heard of students asking their college newspapers to delete all of their articles that they've ever written. We've heard of, you know, people trying to
delete their social media accounts. And like you said, it used to be the case that you could just join a Facebook group that was organizing a protest or say, I'm attending like this anti-fascist rally. And now that is a threat, like you can be deported for that. So I think that it's going to be a lot harder to organize these sorts of protests. And then also, if you are a person who's
in a precarious situation, maybe you're not going to join that telegram group or that signal group or that discord where people are talking about ways that they're pushing back because even just being in the group
could be detrimental to you. What do you think people can do to fight back against this massive surveillance ecosystem that's being built? Obviously, we need more reporting uncovering things like Massive Blue, but what can average people do to help fight back against this system? Because I feel like so much of it feels so futile. Yeah, I mean, it's a really scary time right now. I think I've been heartened to see a lot of the protests against
the administration more broadly. And I think that if you are someone who is a natural-born citizen, who has a little bit more privilege, you need to speak up, you need to be out in the streets, you need to be using that privilege to protest this sort of surveillance. And then if you are in a targeted population, it's really scary. I don't know what you're supposed to do. I think that's part of the point of all of this:
it's intended to have a chilling effect, it's intended to make people scared to speak out and scared to protest. And so I don't want to say that there's safety in numbers, but I think that if you have the ability to protest, to write your congressperson, to call them about the entirety of what's happening right now, with anti-immigration targeting, with people getting disappeared to El Salvador, and students having their visas revoked, we need to speak up
now, before it's too late. I would say we also need to fight back against this big moral panic about the internet that is, I think, driving the legislation that allows for these invasions of privacy. I mean, the facial recognition and the age verification stuff is what I'm thinking of right now. But obviously other bills
that are framed as protecting kids online, we know to be essentially censorship bills that allow for more surveillance. I mean, there's a reason that Meta is backing the age verification efforts; it mandates that they actually collect more sensitive data on users, although they do want it done at the device level. But it just seems kind of frustrating, I guess, when we look at the state of tech legislation, and people hear all these scary stories about tech companies like Massive Blue or Palantir or whatever.
And then they're sold this idea of, and that's why the internet's bad, and that's why we should just take everyone's cell phone away, and that's why we should ban students from even having smartphones, and things like that. And I guess I'm concerned about people missing the nuance that actually, we don't want to dismantle the whole internet; we want to be able to use the internet for its original purpose, which was liberatory. Yeah. I mean, I think about that anecdote you raised, about people raising their hands at the White House saying that anonymity online should be abolished. It's like people can't imagine these technologies being used against them until they actually are used against them. And I think that that's
One of the really scary things about the surveillance state is that first it comes for the most marginalized people in society, but then it just becomes really commonplace. And the idea that, well, I don't have anything to hide, so I shouldn't worry about it, that is a terrible argument, because what is being criminalized in these cases is just normal online activity. And I think the worst thing that we could say to anyone is, yeah, just get offline, because that's not an answer either.
Well, it also disempowers you, because you know who's not getting offline? The far right and ICE and all of these systems. These reactionary people are only getting more and more online, and their tools are getting more and more ingrained in our internet ecosystem. And I think so many people on the left have fallen for really reactionary rhetoric through this anti-capitalist lens, where they're like, oh, well,
yeah, the internet is bad because it's for profit, and so again, we should just regulate the internet. But they want to regulate the internet in a way that actually enables surveillance, enables top-down control, and is really nefarious. And I think what you said is so true as well about how these things start as
benign. After that event, I talked to so many content creators, and I asked them, why would you raise your hand for this? And they're like, well, I get so much online hate and toxicity. And when you see the way that these things are sold to the public, it's like, oh, these bullies, right? These bullies are harassing or grooming children, they're being toxic online, shouldn't we unmask the trolls? And it's like, yeah, I mean,
I get trolled more than anyone, right? I would love to unmask a lot of these freaks in my comments. But as you said, you need to think about the ways that would immediately be weaponized against you and others. I think you had such a great point about how the far right is not marginalizing themselves. They're not taking themselves out of these spaces. And a lot of online spaces are starting to be dominated by
that world. And I think that, I don't want to say anyone should go subject themselves to harassment, like go fight with people on X or whatever. At the same time, I think that kind of proactively and preemptively removing yourself from online spaces is
exactly what the right wants. It's a tricky situation, I guess I would say, and people need to make that calculation for themselves. But I don't think that deleting your Facebook, deleting your X, deleting all of your social media and just kind of receding from the internet is an option at this point, because there's very little difference between the real world and the online world, as
you know better than anyone. And so I think it's important that we make ourselves big and keep these conversations up and don't proactively minimize ourselves, I guess. Also, these people, these agents, can text you. They can message you on Discord. They're not just out there in public. They can infiltrate Telegram groups, group chats. I think that part is extra nefarious, too, because even if you quit social media, as you mentioned, we are all just more online and more interconnected than
ever, and more and more of our communication is digital. Yeah, they are trying to infiltrate all of these non-public spaces as well. They know that not everyone is on Twitter, that maybe a conversation might start there but then moves to somewhere more private. And they know that
as well. Jason, thank you so much for joining me today. Where can people find your work? Yeah, so I'm a co-founder of 404 Media, which is an independent, journalist-owned website. You can find us at 404media.co, and our podcast is the 404 Media Podcast. Yeah, thank you so much for having me.
Thanks again to DeleteMe for sponsoring this episode of Power User. To help get your data removed from the internet, check out DeleteMe via the link in the description and use code Taylor20 at checkout for 20% off consumer plans. All right, that's it for this episode. Thank you so much for watching. Don't forget to subscribe to my tech and online culture newsletter, UserMag, at usermag.co, to support my work and to keep this podcast going. Also, my bestselling book, Extremely Online, is finally out in paperback. It has a brand-new cover. It's
out now, available wherever books are sold. Pick it up. It is awesome, and I'm obsessed with the new cover. If you like this show, please don't forget to give us a rating and review on Apple Podcasts, Spotify, or wherever you listen. Thanks, and I'll see you next week.