What we're doing is handing a weapon to the government that can be whatever the government wants. And when that weapon targets speech and ideas, it's a real big problem. Welcome back to Free Speech Friday. Today, I want to talk to you guys about one of the most dangerous laws in Congress right now.
It's called the Kids Online Safety Act, and it was recently reintroduced by a bipartisan group of lawmakers, but it's being heavily pushed by the Democrats. It's a law that claims to make kids safer online, but please don't let the name fool you because this law would give the government unprecedented power to censor speech across the entire Internet in the name of protecting kids.
From mental health support communities to LGBTQ forums, KOSA endangers the very spaces that young people turn to for help, identity, and connection. The Heritage Foundation has come out and said that it plans to use KOSA to censor LGBTQ content off the internet. And Marsha Blackburn, the bill's co-sponsor, said that KOSA must be passed in order to, quote, protect minor children from the transgender in this culture. Today, to help me break the whole bill down is Ari Cohn from the Foundation for Individual Rights and Expression. He's a brilliant tech policy expert and one of the foremost free speech lawyers in the country. Ari, welcome to Free Speech Friday.
Thanks for having me. I feel like you and I have discussed KOSA before on this very podcast, I think. Too many times. Okay, so this law sounds pretty good, I think, to a lot of people that don't know what's going on. Why is it so dangerous? Walk me through kind of the biggest reason why KOSA is so harmful and will lead to such censorship.
So the law includes something that's called a duty of care, and it requires social media platforms basically to protect people against certain kinds of harms, like anxiety, suicidal ideation, eating disorders, things like that. That requires platforms to look at content and say, is this going to cause someone harm?
And if it is, what they're going to do is stop kids from seeing it, because they could get in trouble if it causes some kind of damage, or even if the government thinks that it might. So I want to give people a concrete example, because when you say it requires the platforms to prevent showing harmful content, that sounds good, right? Give me an example of how that might play out in a negative way, because I think the biggest issue here is what someone considers harmful content. Especially under the Trump administration, that's LGBTQ content, right?
That's exactly right. And this duty applies to the design features. So, say, the content recommendation system, the algorithm that provides you what's on your feed. Take the example you just gave, LGBT content. There is a contingent on the right that thinks
that kids are turning trans just because they read things on the internet. Say the Attorney General of the United States sends a letter to Facebook saying, "This is harmful material for children." What's Facebook going to do? They're going to tailor the content recommendation system so that it excludes anything related to being transgender whatsoever, because that is the safest thing for them to do, so nobody can say you're causing kids harm. Which means
You're not going to see any trans-related content if you're a minor, which means if you are trans, you might think you are all alone. You are not going to be able to find support, community, resources, or anything like that in the way that I, as a little gay kid in a religious Jewish community who had never even heard that there were other gay people, was able to find support and community on the internet. We're basically going backwards.
Yeah. And especially, I think, to do it at this time, when we know other stuff that's considered harmful content is reproductive justice content, information about women's health care, anything about women's issues, civil liberties, social justice issues. These are all considered sort of harmful, anxiety-inducing content. And what counts as harmful content will grow and change and morph with whatever the subjective values of the person doing the determination are at any given time. And Republicans should probably beware as well, because when Democrats get elected into office again, should they ever decide they want to win an election, what's going to happen? They're going to target stuff that the right likes and say, that's harmful to minors. What we're doing is handing a weapon to the government that can be whatever the government wants. And when that weapon targets speech and ideas, it's a real big problem.
Well, let's talk about some of those problems because everything you just said, I think, again, to people sounds great, right? Like all they hear about all day is the harms of these platforms. And obviously these platforms do do a lot of really nefarious bad things. But this law, I would argue, is really dangerous and bad.
Number one is the censorship, right? Like, this is essentially a coded censorship bill. So can you walk me through how that duty of care could be weaponized to silence speech? Quite easily, in fact. First of all, the original version of KOSA didn't revolve around quote-unquote design features. What it did was impose a duty of care to prevent and mitigate those classifications of harm. And that initially drew criticism, including from me,
because a duty of care means platforms, first of all, have to figure out how users will reasonably react to certain content. And the problem with all of these kids bills so far is that they treat minors as a monolith. They treat them as a big group that all have the same interests, the same mental state, the same capacity, and it requires platforms to treat them all the same. But different kids, particularly, say, a 14-year-old versus a 17-year-old, have different reactions, different experiences, different ways they manage and cope with different content. And even within the same age range, different kids are different. They have different life experiences. They don't all react to content in the same way. So how platforms are supposed to figure it out for
kids as a whole, I don't know. The alternative is that social media platforms have to be each kid's virtual psychiatrist. And if you don't trust social media companies to be taking care of kids in the first place, I don't know why you would trust them to do that. Either way, it's an impossibility. So what results from that is that
platforms basically have to restrict, for minors, everything but the safest content. - When you say safest, like, what does that even mean? - That's a great question. I don't know. Basically, the content that would not even set off a reaction in even the most sensitive kid.
And that's basically nothing. And we've been here before. Parents have tried to sue broadcasters, video game makers, the makers of Dungeons and Dragons for quote-unquote harming kids. And the courts have all rejected the duty of care on First Amendment grounds, because they've said that would basically restrict allowable speech to what's safe for the most sensitive members of society. So what did the KOSA sponsors decide to do?
They made it not directly about the content. Instead, they said, okay, this is only about design features. And to some people that would seem to make sense, but let me read you the text again: take reasonable care in the creation and implementation of any design feature to prevent and mitigate, say,
mental health disorders, including anxiety, depression, eating disorders, substance use disorders, and suicidal behavior. So take infinite scroll, because that's usually cited as one of these design features. Infinite scroll just means, basically, a feed that continues to refresh. Right. So I'm not sure how infinite scroll, if you don't consider the content at all, could possibly cause an eating disorder.
It would always have to do with the content that is displayed. And also, the fact of the matter is, Twitter's feed, Facebook's wall, what have you, those are all also design features. There is no way to mitigate against these harms, which are very content-based harms, without considering the content, because features don't do anything to kids at all. They're just there. Let's kind of put this in more layman's terms. What you're saying, and correct me if I'm wrong, is they've tried to reform this bill, which sort of fell off the map initially because it was very obviously a censorship bill. Now they're back saying, we're not regulating the content, we're regulating these dangerous design features,
like keeping kids scrolling forever. And this is what you hear also from these groups that I deeply disagree with, like Design It For Us. They're like, it's the features that are the problem, not the content. But as you're saying, you can't really decouple the two, because a feature in itself isn't inherently harmful. It's the content it's delivering that is being litigated.
I always tell people, if we were in a situation where social media presented nothing but educational content and funny cat videos, and kids were staying up late at night under their comforter, unable to sleep because they're too busy learning from social media, and they were sleeping during class because they were up all night learning, would we have this same panic? No, no, we wouldn't. It is all about the content, and they barely try to hide it. And now they're trying just a little bit, and it's still completely insufficient, because really what they're going after is the content. And if there's one thing that the First Amendment doesn't allow, it is the suppression of content just because the government finds it objectionable or harmful. And that's what this bill does. And it gives the government a
ton of power to decide for everyone what that harmful content is. I think that's actually such a good way to think about it, because maybe that's a good test for any of these internet laws: if the content on these apps was, as you said, the most wholesome educational videos ever, would this law still be necessary? And no, it wouldn't. There's also just a lot that's presupposed in this law. For instance, the idea that social media is inherently harmful and dangerous and it's leading to compulsive usage and all of these other things. The eating disorder stuff drives me crazy, because I grew up in the aughts,
where, when I was a kid, all we had were these magazines that promoted the most blatant eating disorder content. The entire media promoted eating disorders. And in fact, we have the body positivity movement because of social media, and young girls today especially are exposed to more diverse body types because of the internet. And yet we see this trope that the internet is leading kids into these things.
How soon we forget even very recent history. You know, there's also the quote-unquote addictive feeds issue. And I am not sure if these people are consumers of any kind of media, because the entire purpose of media is to keep your eyeballs and your attention. That's why we have cliffhangers at the end of episodes. That is why we have, say, commercials that tease episodes, and automatic play of the next episode. It's because the entire purpose of media is to keep you watching. And if you try and say that is inherently a bad thing, then where do we go from there? Because what you're saying, basically, is that addiction to ideas is a thing. I don't want to live in a world where that is a legally cognizable thing, because that is terrifying.
It's even worse than that, too, because they're not just saying that you're addicted to consuming information and entertainment or whatever, which is absurd on its face. Like you said, it's like saying you're addicted to reading. It's also addiction to communication. I think of Tristan Harris, who I deeply disagree with, talking about Snapchat's Streaks feature, which, for people who don't know, sort of encourages you to message somebody every day. If we message each other each day, each day that we continually message each other counts towards our streak. So if we've had a hundred-day streak, that means we've sent each other at least one message a day for a hundred days. Now, this is something that these people
that claim social media is isolating us and making us all miserable and all this stuff. You would think they would actually love this, right? Wow, it's encouraging you to keep in touch and directly communicate with your friends. But Tristan Harris described it as being chained to your best friend on a treadmill and being forced to run. And there were all these endless moral panic articles about the anxiety it gives kids to break their Snapchat streaks.
That's very old-man-yells-at-cloud territory. It's absurd, but it leads to legislation like this too, where we know that the primary use of Instagram and most other social media apps among teenagers is actually not even content consumption. It's messaging. It's DMs. Adam Mosseri said that the majority of teens spend most of their time on Instagram in DMs, communicating with each other, messaging each other, group chats, basically. And
that's what we're telling them is so negative. And it's like, you're addicted to communicating with your friends. It just reminds me of the moral panic over telephones and landlines, and how the woman talking on the phone for too long is going to destroy her marriage, or whatever nonsense they were pushing back in the 70s. As ever, we've been here before.
Where are these presupposed beliefs coming from? Why do you think there is so much stuff in this law that assumes social media is inherently addictive, that it is responsible for the decline of mental health? Like, where do you think this is all coming from? Well, you get a lot of it from the moral panic pieces, as you said, and a lot of it is also Jonathan Haidt. His book, The Anxious Generation, kind of
put it into the mainstream of pop psychology, basically, even though none of that evidence is causal; it's all correlational. And I'll take your listeners on a brief law tangent, real brief. When laws target speech based on the content of the speech, the courts apply a legal test called strict scrutiny. And one of the parts of that test is that the government has to prove that there is an actual problem in need of solving. When California tried to ban the sale of violent video games to minors, it put forth an expert whose study said that violent games correlate with higher mental health problems and acts of violence. And the court said, no, you have to have causal evidence. You have to have
actual data that shows this is a problem. And all of the studies so far have said there's a correlation, but there's not necessarily causation. In fact, some studies have found a positive effect on kids' mental health, and so the science is very much not there yet. But
Jonathan Haidt keeps pushing for this kind of legislation, saying it's the only thing we can do, the only reasonable response to this kind of data, which, of course, is still correlational. That's a lot of what's driving this. It is. It's a lot of nonsense. And this is a man, by the way, who's on the board of Bari Weiss's fake censorship university. So I don't think he cares much about the speech concerns here. Well, he's actually come out and said, oh, Elon Musk and Linda Yaccarino say it's
not a free speech problem, as if they are the two foremost experts on First Amendment law in the country. I was like, okay. Elon Musk, who has banned multiple journalists for reporting critically on him, and who is one of the most anti-free-speech tech CEOs ever. He's constantly threatening defamation suits against anyone who covers him critically.
My jaw dropped when I saw that. Do you know what a credible source versus a non-credible source is? Because this does not indicate that you do. He does not. But speaking of Elon, actually, I think another thing that you hear from people is, well, we've changed KOSA a little bit and now tech companies are on board. X actually supports this legislation. So does Apple. So if it is really so dangerous to these tech platforms and companies, why would Apple and X support these types of laws?
Well, I don't know that it's necessarily dangerous to the companies. The companies that operate globally are kind of already set up for compliance. And Apple probably sees itself, in a way, as already kind of doing all of this in a sense. Yeah. Apple also doesn't have social media. Right. And there's a side benefit for Apple: there's actually a war right now over, if there is going to be age verification, should it be done by the device manufacturer or the app store? And Apple certainly has an interest in that dispute and would much rather it be the social media platforms. The companies' job is to look after their bottom line. It's not their job to look after users' free speech, although everyone thinks they should.
So companies might support it, but you have to realize that the companies' interests and your interests are not necessarily aligned. In fact, the sponsors of KOSA will tell you that users and platforms do not have aligned values, because they are saying over and over and over again how platforms only follow their business interests. They only do what's going to protect their bottom line. Which, by the way, means that, yeah, of course they're going to over-censor, because their bottom line says, "You have to be cautious or you're going to get hit with a giant lawsuit by the attorney general or something." And so the sponsors are kind of speaking out of both sides of their mouth, saying platforms are only motivated by their business self-interest, so that's why we have to do this, and then saying, "Oh, but it's not a free speech problem. Platforms aren't going to over-censor content," even though their business interests will be to over-censor content.
Not that any of these people have particularly told the truth about this bill from the get-go, but... Right. I mean, I think this bill and others like it are framed as cracking down on big tech, but it's not cracking down on big tech. It's cracking down on the speech of users, because big tech, as you said, they don't really care that much about free speech. I just did another video on a law in Texas, for instance, that would censor content about abortion online. The platforms are like, "Fine, we'll go along with it. We'll censor it nationwide. We might as well, just to be compliant, like you said, and not jeopardize our business." And so there's not really any sort of advocacy group looking out for users in this situation. Well, FIRE is looking out for users. Other than FIRE, other than FIRE, our one savior here, there are some organizations.
There aren't any companies that are doing it. There aren't any companies, that's for sure. I want to just run through a few other ways that this bill is harmful. So you mentioned the duty of care. The duty of care is problematic. There's no science behind KOSA's primary claims. And there's this idea of the carve-outs, and they say that carve-outs fix the First Amendment problem. They even say, I guess, in the text of the bill, that it's not a free speech issue. Don't look here. The bill's stock language says that viewpoints are protected.
So can you talk a little bit about this? Yeah. So first of all, that's entirely a fig leaf, because the sponsors know damn well what the operation of the duty of care and the rest of the statute is. They know what platforms are going to do to cover their asses for business purposes. They're saying, oh, speech is protected and you don't have to take it down or censor it, knowing that the platforms are going to do that because it's in their best interest. It is entirely disingenuous.
So some of the other ways: you have the default settings. Minors have to have the most restrictive settings enabled by default. Of course, platforms have no way of really knowing which user is a minor or not, unless they do what? Age verification, which is a huge security and privacy nightmare, not to mention yet another free speech concern. But say platforms say, we're not going to do that. What are they going to do? Everyone's going to have to have those most restrictive default settings so that the platforms don't get in trouble. It might not seem like a big deal, but first of all, anyone who's tried to play with a social media platform's settings knows how much of a pain in the ass it is to actually find anything.
And second of all, there are a lot of these bills in the states, and I would not be surprised to see it introduced into KOSA at some point, that talk about how parents have to provide consent if a minor wants to change their settings. I think right now KOSA says users under 13 have to have parental consent, but most platforms don't allow users under 13 on the platform anyway because of COPPA, which is an entire other barrel of evil monkeys. So kids by default, or potentially everyone by default, are going to be restricted from contacting other people or posting publicly, things like that. Or platforms are going to have to infringe on all of our rights by requiring age verification.
KOSA also gives the FTC the power to issue guidance regarding best practices in providing minors and parents with the most protective level of control over safety, and to provide additional tools that allow parents to address the harms to minors the bill specifies. That gives the FTC a whole lot of power over what people get to see, read, and hear on social media,
because in practice, the FTC issues guidance saying, this is how you best protect children, which, first of all, is nowhere near the FTC's zone of expertise whatsoever. The FTC basically gets to be the internet censor. And wouldn't you know it, Andrew Ferguson, the chair of the FTC, is currently engaged in this fishing expedition. He asked for public comment regarding social media quote-unquote censorship by the platforms, basically looking for pretext to find some kind of competition claim where it really is a speech issue. And if you've looked at the past 100 days of the Trump administration and still think, oh, Andrew Ferguson's not going to use this to go after content he doesn't like on the internet, then I just don't know what to tell you. It's not even a question. It's not almost certain. It is certainly going to happen.
I think also of the Heritage Foundation, which is obviously the architect of Project 2025 and said outright, this was reported in Techdirt as well, we will use KOSA to censor content about reproductive justice and other things. They have told us who they are. We should believe them. And yet these Democrats are really helping to push this forward. I mean, Schumer is on board with KOSA now. So I guess I'm wondering, why is this such a bipartisan bill
when we know that the right has explicitly said, "We will weaponize this law in the worst way that you can imagine"? So first of all, there's a politics issue: it's very hard to say, "I oppose something," when it's quote-unquote "for the children." There's a reason why people do that, because it makes it politically costly to say, "No, this isn't the right way to do it." But I think more fundamentally, some people are deluded into thinking either, no, that could never happen, nobody would be brazen enough to do that,
or the courts will sort it all out and they'll say that the government can't do this. To which I say, first of all, again, have you been alive for the past hundred days? You must have been because there is a minimum age to be in the Senate. Second of all, if you don't think anyone is brazen enough to do that,
look at the entire history of the United States. Third, maybe the courts will say you can't do that. But two problems with that. First of all, that means people will have had to go through however long it takes for that case to wind through the system, during which time the platforms are going to continue doing all of these things because they want to cover their ass. So you have that period that it takes for the litigation to play out where the censorship is ongoing. But second of all, you trust that the Trump administration is going to listen to the courts? This is handing a loaded gun to somebody who says, "I want to commit murder," and thinking, oh, they just said that, there's no way they're actually going to do it. So to give you the short answer, I have no idea what's going on here, because it's so nonsensical. It doesn't make any sense at all.
It is a fundamentally terrible idea. I think also what we forget is how vast the internet is. And I mean, this has come up also in the discussions of Section 230. But when we think of social media, we think of communication platforms, we think of Instagram, TikTok, Twitter, you know, the main ones. But the internet is massive. And there are so many niche communities, especially recently, as we've seen the rise of alternative platforms that challenge this tech duopoly that we have with Meta and Google. How would KOSA affect them? Because I'm thinking of these small forums, for instance, for body positivity or substance abuse help, actually. It seems like they would be targeted in this legislation as well. Yeah. And there was a version, the House version last year, that tried to do
a thing where it applied only to extremely large platforms. That's not in the introduced text this time around. It creates compliance costs and risks. And like you said, as with Section 230, big platforms could live without it; small platforms would get absolutely demolished by the potential liability, and this bill creates a lot of liability. And you think of some of the niche communities that might be around
disfavored content, say a message board for LGBT people. If you don't think that those are big enough for people to go after, well, again, I've got a bridge in Brooklyn to sell you. They're going to go after it, because they've said they're going to go after it. And it's probably going to cause a bunch of sites to get shut down.
I think the LGBTQ thing is so important, and the mental health stuff. There's basically no way to host a community or a forum or support group online talking about this stuff without it being targeted by KOSA. When you look at what the government considers harmful to children, it's information about LGBTQ rights, as Marsha Blackburn said just last year about KOSA, right? We need this law to eradicate the transgender from the internet, or whatever.
She thinks Instagram's turning your kid gay. But there's no way to talk about mental health if your whole thing is to stop any sort of content that might be harmful for mental health. That includes support groups that actually help people with mental health, because you have to talk about some of these thorny issues. And there's no agreed-upon way, also, to handle some of that stuff. So it seems like the effects would be so wide-reaching in ways that I think people aren't even considering. Their hatred of Meta and Google is so strong that they're not even thinking about the rest of the internet, which is vast.
Two things to that. First of all, yes, it's so broad and vague that it can transmogrify into basically whatever the government wants it to be. But second of all, and related but also unrelated: if people are going to Meta for information about, say, wanting to be anorexic or something, they're looking for pro-anorexia content, then included in the content they find is going to be anti-eating disorder content, because the people who combat eating disorders specifically try to infiltrate, to use the same hashtags, to try to pull people away. In addition, Facebook is probably going to say, hey, you're looking at harmful content, you should think twice about this, and do little nudge interventions.
If Facebook is now saying, nope, we're just going to block all that content, what are people going to do? They're going to go into Google and search for it, and they're going to find a website that doesn't do that. And they're much more likely to find very harmful content there.
So we're pushing kids into the darker corners of the, as you said, very large internet, where, if you think Facebook is doing a bad job, I can promise you there are sites that are doing a worse job. And they're not necessarily social networks. They're just these websites that provide very dangerous information. You could be radicalized by them. I mean, I spent a lot of time on GeoCities when I was younger, and all the weird pro-ana circles back then. What a time to be alive. I know. But it was such a different internet.
But that whole infrastructure still exists. The eating disorder problem, in my opinion, is so much bigger than social media. This is a society-wide issue. And I feel like that's the problem with all of these things. We're trying to take shortcuts. Yes, they want to blame social media for these much bigger systemic issues without ever addressing the bigger systemic issues in society, because these issues existed long before social media ever did. The problem is humans. It's not social media. Right. And ultimately, the internet just is a connector of humans, right? And that seems to be what they want to target. As if talking to each other less is the thing that we need to do, right? Which, ironically, I think is what this will lead to. I think of it as stigmatizing communication. It's telling kids, you're getting addicted to communicating with your friends, at a time when they're also doing all this moral panicking about the loneliness epidemic
and how lonely children are. Kids are trying to- Pick one, you can't have both. Yeah, and kids are trying to connect online. I guess, what would you say to people who are like, well, this is all different? They've been hoovering up all the crazy conspiracy theories about Meta and Google and whatever, and they're like, well, these algorithms are fundamentally evil and bad and we must do something. And yeah, this law isn't perfect, but we've got to protect the kids somehow.
What I'd say, first of all, is that the algorithms are generally more of a mirror than anything. They show us what they think we want to see based on what we have seen. But more fundamentally, I would say to them: people thought television was different. People thought radio was different. People thought comic books were different. People thought video games were different. It always seems like it's different and scarier, but it's not.
When my parents were growing up, people were freaking out about the effects of a TV in the household. It'll rot kids' brains, they're going to spend all day in front of the boob tube, and whatever. By the time I was born, America had figured out a way to integrate television into the home in a way that more or less obviated the concerns people had previously had. We are in the period right now where we are still figuring out how to integrate social media into our lives. In a generation, we will look back and this will all seem very, very stupid. Just like it seems stupid that people tried to go after comic books and video games. We're going to look back and say, oh, well, now we know how to use this in a way that causes fewer problems. We're going to get there. We're in the process of getting there. Don't screw it up for the rest of us. You're more optimistic than me, Ari.
I have to be. Otherwise, what do I have? I know. I guess, you know, I've done a bunch of reporting on these past moral panics, and there is so much harm. I mean, the comic book comparison is a good one, right? They passed actual laws against comic books, and it forced one of the biggest comic book publishers out of business. It actually deplatformed a huge range of diverse comic books that had LGBTQ characters and things like that. So I worry about that. And same thing with, I mean, I grew up in the video game panic. A lot of millennials were there, like, you play a video game, you're going to be a school shooter. And
it cut off a lot of kids from community and connection. Well, that's why I think the response has to be, don't pass these laws. If we don't pass these laws, we will figure this out in a generation, maybe less. But if we do, we're going to experience the same harms that we experienced with the other moral panics.
all over again, and we will have done it to ourselves. And we will have real systemic harm. I mean, it really caused actual harm. And we know, too, that there was that recent study that came out that found that kids who have smartphones are actually happier, again because they're in communication with their friends. They're able to connect with their friends. And it's harder and harder, I feel like, for kids to find that connection. And so I just hate to see it stigmatized in that way. Awesome, Ari. Well, thank you so much for your time. Where can people continue to follow your work? You can find me at @AriCohn on Twitter, aricohn.com on Bluesky. You can follow the Foundation for Individual Rights and Expression, FIRE, at thefire.org, and @thefireorg on Twitter. Yeah, I think those are the main places. Thank you so much for chatting with me. Thank you. My pleasure. All right. That's it for this week's episode of
Free Speech Friday. I'd love to hear what you guys think about KOSA in the comments. If you have any other questions about this law, I want to hear them. I'm going to be responding and providing more information. I'll also link in the description a couple of great pieces on Techdirt that really outline how dangerous this piece of legislation is. If you liked this episode, please subscribe to my tech and online culture newsletter, usermag.co. That's usermag.co. It covers a lot of free speech issues and just broader online culture, tech news, and more. So thanks for subscribing, and see you next week.