Everything in this conversation has gone horribly wrong. We're not able to protect the kids. We're only able to protect some kids at the expense of others. All across the internet, parents and lawmakers are calling for tech companies to verify users' ages online. They frame these efforts as a smart, reasonable, and harmless way to keep kids safe.
That's not what age verification efforts really are. They're a threat to privacy, freedom, and the web. No one knows this better than my guest today. He is the associate dean of a university school of law.
He recently wrote a fantastic paper on age verification, soon to be published in the Stanford Technology Law Review. And today he's joining me to talk about how age verification works online and how these laws pushing it are all ultimately a Trojan horse for government control, surveillance, and censorship of online speech. Hi, Eric. Welcome to Free Speech Friday.
Thank you. Glad to be here. So you wrote this fantastic paper that I'm going to link in the description below about the problems with age verification online. To start off, for people who haven't been following all of this, can you explain what age verification online is? What's happening in the tech world that has suddenly pushed this issue to the forefront?
It's a pretty simple concept. The idea is to segregate children from adults so that children can be given some special regulatory treatment. That might be protection for privacy concerns, it might be restrictions on things that they could access that could harm them, it could be concerns that they're becoming addicted or overusing a particular service, whatever it is. The idea is that children have special needs online, so we need to segregate them from the adult population in order to provide some greater legal protections.
So age authentication, and that's the general term I use as opposed to age verification, though many people treat them as synonyms, is just the idea of let's ask or find out the age of the person who is presenting themselves to an online resource. And if they are identified as children, put them into the special track
where they'll get these greater legal protections. I think we've heard so much about this from high school teachers, parent groups, people like Jonathan Haidt, my nemesis, these people out in the media saying, "Hey, let's raise the age of using the internet."
Or let's make sure that we keep, for instance, kids under 16 off social media. This is something that actually Australia is trying to do right now. And so there's this sort of increased scrutiny, I think, on the age of users. And what should the internet experience look like for different ages? And should we tailor the open web and social media to different age groups?
We treat children differently in all kinds of respects, both from a legal standpoint, like as we get older, we get additional rights from the government, and also treat them differently in terms of their ability to navigate through the offline world. So there are certain items that are unavailable to children for purchase or certain experiences that children can't have. So it's really like
embedded in our society that children are different and they have different needs and different vulnerabilities, and therefore we should treat them differently. And so it's such a natural extension from the offline world to the online world to do exactly that, to say children are different, they have special needs, we should provide a special form of internet for them that is built for where they are in their development. Yeah. And I think
because of that, it seems so quote unquote reasonable. This is something that I feel like I'm constantly fighting with people online about because people are like, well, exactly like you said, we have all these other experiences that are different for children. Why is it such a problem to verify everyone's age online and tailor those experiences? And I think your paper really breaks these
five big problems down and I want to walk through each of them today because I really want people to understand that even though this sounds like many things that sound like such an easy fix and so wonderful, this is actually not something that we want to do. So to start off with, let's talk about the privacy implications. I think there are big privacy invasion implications with any sort of age
authentication scheme. Can you talk about some of those? So the idea is that we're going to do some form of checking an internet user's age in order to determine if they're eligible for this special treatment or subject to the special treatment. If they don't want it, it doesn't really matter. And there are a couple of primary ways to do this. There are
a bunch of other ways that are being explored, but the two primary ways are to either require internet users to present some form of government identification that includes their age so that the party that they're dealing with, let's call them a website for now, can determine their age and therefore decide if they're eligible to come in as an adult or as a child.
So it would involve some form of displaying government issued ID to someone or something that will be able to verify the age information that's contained on that ID. The other primary way that age might be determined online is through some form of visual inspection.
and the most popular mechanism today is to basically do a face scan so that someone or something looks at an Internet user's face, determines if they are an adult or a child, and then makes that classification and allows them to proceed according to the classification. In order for an Internet user to navigate either of those two systems, they have to provide something that's incredibly sensitive to them as part of the checking process.
That could be a government-issued ID, which now displays to a third party information that we might consider to be quite sensitive. Some of the information that's contained on a government-issued ID includes things like a user's height and weight, and those are also things that people might not want shared with a third party.
So when you add it all up, exposing a government-issued ID to a third party potentially exposes lots of additional private information that the user wouldn't want shared with a third party, and that might actually be quite sensitive to them. But even if it's not that,
sharing the age alone is itself a pretty sensitive piece of information. That's something that, in fact, we often don't necessarily want to reveal. And there are, in fact, situations where that can cause significant adverse consequences outside of this age authentication process.
Showing a face scan also involves disclosure of what we would consider to be highly sensitive information. What happens with a face scan, typically done automatically, is that some algorithm will compute a set of geometric measurements that it will then use to estimate the age of the person who's presenting their face.
Those biometric calculations are considered to be highly sensitive information because if they are recycled elsewhere, we now might be subject to being evaluated by a third party using data that we didn't provide. And as a result, once we are identified by our facial geometry in one place, we have the risk of it then being used elsewhere.
As a result, we provide great legal protection for biometric information. And the age authentication process requires us to essentially disclose that information to lots of other people, as opposed to us being able to keep it more restricted and potentially under our control.
The combination of all that is that in order for us to do something as simple as doing age authentication, we have to disclose highly sensitive personal information about us, whether it's a government ID or the biometric information. And as a result, that puts us at greater risk of privacy violations. And I think people today are so cavalier about their data. I work with a company called Delete Me, which helps get your data off the internet, which I've paid a lot of money to over the years to just kind of like scrape this information from the web. It's kind of
crazy because we have all of this panic about our data and we want to ban TikTok because our data might get out there. But it seems like these age verification schemes actually require tech companies to collect more data. Like you said, suddenly, not only are you collecting sort of basic user data that you can glean from usage of a social media app, but now you have young people's, children's personal information, their weight, their height, their detailed biometrics,
face scanning data. Like all of this just presents such a privacy violation and also makes me wonder if they would be victims of identity theft. I feel like young people, especially when their data does get out there and they are victims of identity theft, the consequences can be really significant for the rest of their life. So it seems like this is a huge liability.
You make a great point. And it really gets down to the fact that we have a duality about privacy in our society. On the one hand, younger people tend to be pretty public with a lot of information that my generation and older might not have chosen to share publicly. We're capturing more of our lives in digital format and sharing them globally. This entire generation has been out in public much more than prior generations. And yet there's also extreme concern about privacy. And
we're aware of the fact that our information can be captured, gathered, sorted, and then weaponized or used against us. And so there's a lot of times when even people who are otherwise public with a lot of their lives make great efforts to try not to share information with people they don't trust or in situations where they're nervous and for very good reason. And so the idea that the law would build an infrastructure that
increases the likelihood that people who don't want to collect your information are required to collect it just seems backwards to me. And because we're dealing with kids, to the extent that kids are being asked to make that choice, that's even more illogical. The whole reason why they need to be segregated in the first instance is because we think they're more vulnerable. And so asking them to make these very adult choices is like completely backwards.
It's so terrifying. And I think of also things like the rise of facial recognition used on student campus protesters, or all of the other ways kids are tracked and surveilled online and exploited for advertising. I mean, I know beauty companies and weight loss companies especially have been targeting younger and younger children. And, you know, if we create this infrastructure where suddenly you're gathering young kids' weight and height and highly detailed biometric information, you can see how that would be very valuable to advertisers. And we have such a leaky data structure in the U.S. We don't have tons of
data privacy law anyway, so I can't imagine that it wouldn't make it into the wrong hands. In addition to some of that stuff, which we're going to get into later, I want to dive into the second major concern with age verification schemes, and that is security risks. Can you talk about some of the security risks that age verification would present? We already mentioned that in order for users to navigate the age authentication process, they have to disclose sensitive personal information about themselves. That information
might very well be captured into a database. We sometimes call these honeypots. The idea is that it's a lure that attracts the bad people to come and try and grab that data for their own purposes. We can build a legal system that discourages that. We can say that anyone doing the authentication cannot store the data. We can say that
anyone storing the data has to deploy good security. We could say that anyone who's going and stealing or expropriating that data should be subject to severe sanctions. And yet we all know, it's intuitive to us, that if we're transmitting data over the internet, it's going to get captured somewhere, cached,
and then stolen. That's just not a weird happenstance. That's just the basic nature of the process. So as a result, we are subject to greater security risks with our data. Not only are we exposing our private information, but now it becomes potentially expropriated by third parties who could do a variety of bad things to us.
It could be identity theft. It could be data mining for adversarial purposes. It could be the government using that data in order to make adverse judgments about us. It doesn't really matter. Once the data is out of the control of the people we thought were going to possess it, we no longer can protect our own interests at the same level. Even if no data gets stored,
it still becomes an interception risk. There's a bunch of ways in which we can try to design systems to reduce that interception risk, but just by mere movement of the data from my computer to the central server in order to authenticate the information, it becomes vulnerable to interception, actually at multiple checkpoints along the way. Even if we discounted the storage and interception risks, a third risk is that bad actors
will create websites for the sole purpose of trying to gather authentication information. And we're talking about kids. They're not necessarily going to know that this site is set up solely for purposes of capturing information to use it for one of these adversarial consequences. So the entire infrastructure is built
that puts more of our data at risk at various checkpoints without our control, or encourages the bad guys to actually go out and try and gather it from us solely for the purpose of then using it against us. Yeah, it creates this whole legal framework to do that and to incentivize people to do that. I'm just thinking of so many people that create these pop-up social media sites that go viral. This guy, Nikita Bier, who's like a viral guy on Twitter, he's made a lot of quick apps
that are sort of like overnight confessional apps that go viral or whatever. You could imagine somebody creating one of these viral type apps or social media platforms or confessions platforms solely just to gather data. Because again, now you've created this permission structure to authenticate everyone's age, to gather all of this data. And there's actually no assurance that the company that's collecting all that data will protect it or not just sell it to the highest bidder. We've actually seen an example of that, which is that people were circulating
apps that were designed to show the user what they might look like X number of decades in the future. And people were so vain, like, oh, that sounds great. I want to know how great I'm going to look when I'm old. But that was actually getting people to expose their face, use the software to estimate what they might look like as they get older, and then feed it all back into a central resource.
That kind of attractive temptation is a security risk, and it would all be happening with the government's imprimatur. The government is saying you've got to present your face or you've got to present your ID if you want to go and take advantage of the internet.
As a result, the bad guys will capitalize on that government obligation, saying that this is what you just do as part of the internet. I want to get into the third issue with age verification, which you called speed bumps, basically putting up more roadblocks on the internet. Right now, I think we actually take
for granted how free and open our web is. You can pretty much visit any website. You don't have to enter lots of information just to view an article that somebody sends you. Maybe unless you hit a paywall, you enter your email or something. But for social media, it's all free and open. You can access any sort of content. So can you talk me through this third concern of roadblocks?
Let's think about how we use the internet today. And I'm going to talk about the web. Apps are a little different. But with the web, we're so used to the idea that we click on a link and we go to our destination. And we're actually quite impatient about the delays between when we click and when we arrive. If that delay is too long, we get impatient. We might turn around. We might cancel the search. We don't have a lot of tolerance for delays. The legal structure is being
built that will require it to be click, authenticate, then go, and insert into the process a mandatory set of interactions that we'll have to navigate before we can get to our end destination. This is not the internet that I'm used to. It's not the internet that we have today. It's
even more problematic than the cookie walls or the cookie disclosures that we see where we go to a website and there's some disclosures about what's going on with the cookies that we have to navigate, but often we can just ignore, click and go to where we want to go.
All of that is going to be an old model of the internet. The new model will be that we click, we have to navigate through a set of interactions that will take us time and expose us to the privacy and security risks that we just mentioned. We'll have to make a series of adult choices about whether or not we want to
provide the destination website or its authenticator with the sensitive information required. And then and only then will we get to our destination to figure out if all those steps were worth it. Was the payoff worth it in the end? As a practical matter, what's going to happen is that, knowing we're going to have to navigate what I described and you referenced as speed bumps, we're going to think before we click on a link.
It might not be worth even investigating the link in the first instance if we know that we're going to balk when we're presented with a speed bump. As a result, it's going to completely change the way we think about the internet. We're so used to click and go, and it might be, I'm not going to click, because if I go, I'm going to have to run through a gauntlet that's not worth it to me. It inverts the paradigm of the internet. We're so used to the internet being this seamless web that
we just navigate without really thinking about it to get where we want to go; it's going to become a place where each click is going to be thoughtful and slower and discouraged. And it will no longer be a single seamless web. It will be a very fractured web. Okay.
I can't even imagine. I mean, it's such a terrifying vision of the future of the web because I think we really take for granted, as you said, how easy it is to click. I've worked at publishers where the whole goal of the product team is just to get the page to load as fast as possible because like you said, even a fraction of a second delay makes people click off. And I think you can see a world where so-called adult content or information that is considered adult, which is often reproductive justice content, like providing really valuable information that
I think kids should have access to, but a lot of adults don't think they should have access to, would suddenly have all of these roadblocks that would make it harder for people to get to. And so they're just going to click away. And I think it's going to have sort of this chilling effect where people won't be able to just access information as freely and easily.
There's a certain presumption on the internet that it's all open to all of us. We can go wherever we want. We can explore our most niche or even embarrassing interests through a very elegant process. Just click and go. And breaking that paradigm, it will be just a massive loss. It's something that will change our lives on a day-to-day basis, how we think about the internet. We will
be always guarded in where we go. We'll always be thoughtful about the consequences. I think that we'll just be poorer for that. It will be a less vibrant and rich database of information for us as users because of these boundaries that are going to be established, even if we can navigate through them.
The second-order consequence of those speed bumps, and you mentioned it, your entire team whose job was just to make the page load faster: if it takes too long for users to navigate the speed bump, for whatever reason they're balking, then
the publisher makes less money, and it's just economics 101 what happens there. At that point, there are fewer publishers in the business, and/or they're going to start imposing paywalls to try to capture more revenue from each user who does show up, because they aren't getting as many users as they used to. So a speed-bump-based internet
is a less profitable internet for many publishers, which means that not only are we as users less likely to explore the internet, but there will just be less of the internet to explore. It literally shrinks the boundaries of the internet such that it's a less vibrant environment than we're used to today. Well, I think that feeds very well into the fourth big problem that you identify, which is the costs of all of
this, right? There's a lot of costs associated with these systems and these costs will be passed along to publishers of information. And like you said, it will make it harder for publishers of information considered adult, which is not just like adult material, but anything that I think children might not want to see and even non-adult content, right? Like just regular content. Everybody will have
to engage in age verification; it won't just be some websites, it'll have to be everyone. It's just easy to see how those costs would really skyrocket. So can you talk a little bit more about those challenges that you identified around costs? I do want to make clear: you were talking about adult services, adult content, adult websites. Many of the age authentication requirements can apply to any website that might have children showing up at it,
regardless of whether or not the content itself is things that we might object to children having access to. So for example, things like social media, if we're concerned about children being addicted to social media, we would say then that we have to determine adults versus children and screen out the children or restrict their functionality. And that means
all of the adults are being regulated as well. And it means that even if the social media itself had no content that was harmful to children, we would still require that process to take place. - Well, I think that's a really good point. And I just wanna drive that home for people because you hear a lot of people, I see this in the comments of like, well, they're just verifying the ages of children. You can't just verify the ages of children. You have to verify everyone's age
collect everyone's data, including adult biometric and highly detailed personal data in order to know who is a child. - Yeah, I can't stress that enough. An age authentication mandate affects every user of a site. If they knew who the kids were, they wouldn't have to go through the age authentication process. The whole point of the age authentication process is to try and identify which users are kids. And that means every adult is exposed to the privacy risks, the security risks, and the speed bumps that we've discussed so far.
Can you talk a little bit more about costs and where we might see costs come up in this equation? So when we were talking about speed bumps, I was laying out a concern about the revenue side from publishers' perspectives, that there will be fewer clicks that will be showing up. That means that either there'll be paywalls and those will further shrink audiences or the reduced audiences from the smaller number of clicks will lead to less revenue. On
top of the revenue problem, there's a cost problem for publishers. To implement age authentication requires money, and there are two main pots of money that are required. The first is to set up the system, to build it into the service, and then the second is to do the ongoing operations of that.
Now, it's likely that many publishers would outsource age authentication to a third-party authentication service of which there are dozens and all of them are dreaming of becoming the next big thing because they see huge potential dollars coming to them. So they're going to charge both some setup fee,
to establish the age authentication process for the particular publisher, and then they're going to charge a per authentication fee. So each time that the service is used to authenticate a user, the publisher is going to have to pay the authenticator to do that work. So when you add those costs up, it now creates this squeeze for publishers. They've already lost money because of the speed bumps,
And now they're going to have to pay more money in order to service this smaller user base. And we were talking earlier about the fact that you have to age authenticate everyone in order to figure out who the children are. What that means is that when there's a per authentication cost, the publishers have to pay for all of the users who are ultimately not regulated in order to identify that set of users who are.
And so there's a bunch of basically wasted cost in the process as well. Now, for some publishers, that won't be a big deal. For other publishers, assume that the ratio is 99% adults, 1% children. In order for them to identify that 1% of children, they have to incur the cost of the other 99% of adults. So the cost function actually could be wildly out of whack with the number of users who are actually going to benefit from any of the authentication. So when you put it together, it becomes this huge cost barrier to publishing for a smaller audience that's going to be
driven away by the speed bumps. And that's going to further compound the shrinkage of the internet and the shrinkage of the availability of information. It's just going to mean less information available on the internet. I think that's the key point that I want people to take away from this cost section is that it means less access to information. Because as you said, not only will many publishers not be able to bear these costs, people won't have as easy access to information. But
I can imagine so many niche sites that I've grown up using, or like you said, maybe you're interested in some niche thing, or you have a niche identity or a health problem, right? Where there's a forum or there's a community online that really speaks to a very specific health niche. They're not going to have the resources to implement these massive systems and expensive systems and data controls and all of that. And so I do believe that a lot of those smaller sites and publishers and communities will simply be removed from the internet.
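The per-authentication economics Eric describes can be made concrete with a quick sketch. All of the numbers below (the fee, the traffic, the share of minors) are hypothetical assumptions chosen only to illustrate the squeeze, not figures from the conversation:

```python
# Hypothetical illustration of the per-authentication cost squeeze.
# Every number here is an assumption for illustration purposes only.

visitors = 1_000_000     # assumed monthly visitors to a hypothetical site
fee_per_check = 0.10     # assumed per-authentication fee, in dollars
minor_share = 0.01       # assume 1% of visitors are children

# Every visitor must be checked to find out who the children are.
total_cost = visitors * fee_per_check

# Only this slice of users is actually subject to the regulation.
minors_found = int(visitors * minor_share)

# Cost incurred per regulated (child) user identified.
cost_per_regulated_user = total_cost / minors_found

print(f"Total authentication cost: ${total_cost:,.0f}")
print(f"Children identified: {minors_found:,}")
print(f"Cost per regulated user: ${cost_per_regulated_user:,.2f}")
```

Under those assumed numbers, the publisher pays for a million checks to identify ten thousand children, so each regulated user effectively costs one hundred times the per-check fee, which is the "wildly out of whack" ratio described above.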
And others will go to a paywall model. And there are circumstances where paywalls are a good thing and we should encourage them. But it's certainly contrary to the way that most of us use the Internet today, where we view paywalls as the exception to the rule.
In general, we expect to get free content that's ad supported and we're okay with whatever deal we're striking with our personal data or attention to get that ad supported content. But that option might go away. And it might be that we run into a very small number of paywall publishers and we're having to pay for access to every database that we're ultimately going to be allowed into. It seems like, again, the government is trying to push this model. And I've talked a lot about this in other videos where they want to just replicate old media.
Back in the 1980s or 90s, or whatever, I guess there was some internet in the 90s, but in the 70s and 80s you had to physically purchase a magazine to get information. You had to physically purchase media, and it wasn't so free and accessible. And like you said, it ultimately hurts these smaller publishers. So it's concerning. Yeah, it really was a weird
historical accident that we got to the point where most of the content on the internet was free. Back in the old days, and I'm going to go to about the mid-1990s for you here, we not only had to pay for content, we had to pay for newspaper subscriptions or magazine subscriptions. We had to pay for cable subscriptions. And so that was just the way of the world. You wanted access to a content database, you had to pay for access to it. And what we've learned over time with this weird
accident is that the internet has created an environment where much of the content that we can get, and that is incredibly valuable, is free. We just take it for granted now. This is the way of the world for us. And what it's done is it's enabled the availability of information to a larger audience than was able to get it in the old paywall world. I'm a privileged person. I can afford to pay subscriptions to content if I have to.
I have a university that will pay a bunch of subscriptions for me as part of my job. I can get access to information, no problem. But many people don't have the money to literally be able to afford to get paywall content. So the entire shift away from free content to paywall content has huge distributional effects. It basically further marginalizes communities that are already marginalized
And if people in the audience think, well, I can afford that, you should acknowledge your privilege, because that is not the majority. I think it's going to lead to a lot more ignorance, a lot less access to information for some of the most marginalized people in society. I want to get to the fifth point, because I think this fifth point is one of the most terrifying implications of these age verification schemes, and that is building this surveillance infrastructure. I know that we've talked about the privacy concerns and all of this stuff, but I really want you to
emphasize the consequences of building this type of structure. So can you talk about your fifth problem with age verification, how it works, and why this infrastructure is so dangerous? So I'm going to start with a premise, and this is just built up over decades of experience and research: the government always wants to know more about its constituents. If there's any opportunity for them to learn more about their constituents, they will take it.
They will then have the capacity to use that information in ways that are adverse to their constituents. They will use that to reinforce their power or they will use it to deprive people of privileges and rights that they have. This is not a weird accident again. This is just the way that governments work and they've worked this way for decades.
Now, when we build an infrastructure where people have to present either their face or their government ID in order to access a service, it creates a checkpoint that the government will be very interested in. They could say things like: if you see this face, tell them no, they're not allowed to get this resource, because we are taking that privilege away from them.
Or if you see this name on a government ID, tell them no. Block them because we're taking away that privilege or resource. And as a result, the infrastructure of age authentication becomes a co-optable infrastructure for other forms of government control of our movement over the Internet.
I'm petrified by that. I'm so concerned that our governments will not view this as an opportunity to help their constituents. They will view this as an opportunity to control their constituents. We're building the infrastructure that will let them do exactly that.
And so this is a little bit more speculative than some of the other concerns, but the stakes are so high here. The risk that the internet could be under the government's surveillance and control at all times, everywhere we go, is so great that we have to have this discussion as well. Also, some of it is not even hypothetical.
I had an experience last year that affected me so deeply. It was like one of the most impactful things that I've ever seen happen in my career. And I witnessed a lot of crazy stuff over my career. But I was at the White House for this Content Creators Day last summer. There were hundreds of big content creators in the audience, and Biden and a bunch of staffers were speaking to them.
Content Creator Summit. Neera Tanden, a Biden official, got up on stage with Jackie Aina, a big YouTuber. They were talking about online hate and they were saying, oh, the internet is such a dark place. Don't you wish you could unmask every troll? Raise your hand if you wish that we could remove anonymity from the internet. Don't you wish that we could see basically what everyone is doing online and under their real government name?
And all of the influencers, dumb, dumb, dumb, were cheering. They were like, yeah, we want to unmask every troll. And it made me sick to my stomach because especially as a journalist and, you know, I report on a lot of activism and stuff. I just think, God, this is terrifying. Here we have the government explicitly saying essentially to media that they want to track citizens online, that they want to monitor and remove anonymity from the Internet.
So I just think it's really scary. And I think a lot of young people are realizing this today because they're seeing crackdowns on campus protests, things like facial recognition being used to target activists. But this could go in a really scary way. And like you said, maybe some of it is slightly theoretical, but I don't think it's that theoretical when we're seeing even the Trump administration today cracking down so aggressively on online speech. People's online activity could be criminalized quite quickly.
Yes, we have a number of examples of this. I'll just point out a couple. One is how the Trump administration has specifically targeted individuals to remove their security clearances as retaliation for their speech. We've also seen students having their visas pulled for their online speech. And you could imagine a government that's willing to do that then also saying, we're going to blocklist some people from being able to access
online banking or from being able to access online content. And it's such a small leap from those current activities that we're doing to the ways in which you could weaponize control over people's movements online that I feel like that's part of the government's objective. That's just where the government is going to go. Even if we don't get there, in order for most websites to
confirm that they're dealing with an adult as opposed to a minor, they're going to create online profiles of users that you're going to have to log into with your login credentials. That way, it saves you from having to re-authenticate yourself each time you go. It also saves the publisher the cost of doing the re-authentication. So many sites that we go to today, we don't have to create an account or get past a registration wall.
We can just click and go. In the new model, even if we don't get to the government surveillance panopticon, we're still going to have situations where our account activity is going to be available to publishers and anyone else who can get them.
And we're going to have persistent identities at many of these places that we're used to being able to just dip into and dip out without identifying ourselves. And so in order to do the identity authentication, that's part of the age authentication piece, we're going to basically be forcing people to not be able to be anonymous, that they'll be attributable or even identified in
every place where they've done the age authentication. So there are so many different ways in which the internet looks different with an age authentication requirement, including the fact that we might have to do more registrations that make our actions more attributable. I just really want people to think about how anonymous the web currently is, which is a good thing.
The troll comment bothered me a lot because we have research on this. We know that actually removing anonymity doesn't even help cyberbullying. It doesn't even mitigate online harassment. It's just this sort of pretense. And it reminds me of a lot of the conversation around age verification and why this is being pushed. The government isn't...
overtly saying that stuff aside from maybe that Neera Tanden comment at that one time. What they're really saying is like, think of the children. We have to protect children from online harms, and that's why we have to pass all of this legislation. And so I just want people to understand that like, that's not really what it's probably about. Obviously, we don't know the government's full motivations, but like the ultimate consequence of age verification is going to be vast and broad, and it's going to affect everyone. It's not just going to be this thing where we can protect kids. So I want to get into one
thing that a lot of people suggest as a solution to some of this stuff. And that is this idea of parental consent where people say, well, all right, so age verification isn't great, but what if we just establish parental consent? So you can access the internet in all these certain ways, as long as your parent says so. What are the problems with that?
So we think about parental consent as such a natural part of child development: we want to allow parents to know what their kids are doing, make the choices for their children that reflect their knowledge about how their children will best flourish, and give the adult supervision that's required to help them make better choices. And so parental consent is ubiquitous, again, in the offline world, you know,
in order to do a bunch of things in high school, you had to go to your parents and get them to agree. It wasn't your choice as a child; it was a family decision that the parents were involved in. So an easy way of addressing any concerns about what kids might experience online is to say, let's just put the parents in charge,
let's not make the children make that choice. Let's give parents the choice. And if the parents consent to it, then it's a good decision for the family. The children will be free to do whatever the parents have allowed them to do. All we have to do is just get the parents to agree. Now, in order to get the parents to agree, there's a bunch of things that have to happen. And those things are both
incredibly onerous and costly and raise all of the other risks that we just referenced in terms of privacy and security and speed bumps. All of those problems get doubled or tripled when we involve parents in the process. The first thing we have to do in order to give parents consent over children's behavior is to figure out who the child is so that we now know that they need to go into the special queue for additional consent. So we already have to do age authentication. That's the first step.
Now, the second step is that we need to figure out who is the parent of the child. And there are no good solutions to do that today. We don't have any software that is automated for this. It's not as simple as just doing a face scan or doing a government ID check. I have no...
piece of paper from the government that establishes that today I am the parent of my children. I have birth certificates, but that was a long time ago. Lots could have changed since then. And otherwise, that's it. There's nothing I can show that establishes my status as a parent of my children today.
So we don't have a mechanism to confirm that whoever might be giving the consent is actually the parent. If we wanted to fix that problem and make sure that we were actually hearing from the parents and not just the friend of the kid who's pretending to be the parent, we would need the parents to present lots of
highly sensitive personal information, which they really don't want to do. They may not be able to figure out how to do it. And then they would be faced with all the other risks that we talked about. Not only would the children have the risk of privacy and security and all the other consequences we talked about, now this new information would also have privacy and security risk that now puts the
parents at greater risk as well, and by implication, also the children. And especially where the concerns are relatively minor, if we're trying to, for example, protect children from some kind of privacy harm that in the grand scheme of things is not super significant, if we require parental consent for that harm, we actually are putting the children at substantially greater risk in order to authenticate their status as parents and allow that decision to be made. So the net consequence of all that is that now we've hurt the child with the privacy and security risks of age authentication. We now have hurt the parents with the privacy and security risks of their authentication, and we've made the children more at risk because the information disclosed by the parent also puts the children at risk. And then finally, we need the parents
to consent to whatever behavior is being regulated. And so the parents have to figure out not only how to establish their identity as parents, but then also how to navigate whatever approval process is necessary. And the reality is, some of you are so digitally savvy you think, oh, people will figure that out.
They actually have done studies on this. And parents are like, I don't know how to figure that out. I'm a boomer. I can't navigate my phone. How am I supposed to figure out how to navigate this entire process? And so we have all the compounding problems of less revenue, higher costs. All of those get exacerbated with a parental consent requirement.
It goes back to this idea of, I think, people drawing too many comparisons between the offline world and the online. And I just want to address that as well. Because it's like, yeah, people are like, well, you have to get parental consent for a school trip. And it's like, okay, but that's very different. In the IRL world, your parent shows up at school, or the school sort of knows who the guardian is. It's a very different world, right? Or people say, well, you do have to show ID to buy a Playboy at the store. So why is it a big problem for you to show ID online? Can you talk about
the differences between harvesting this data online versus like showing your ID in the real world for something? This comes up all the time and it's such a natural approach to the conversation. We look at the offline analogs and we say, isn't it just like that? And it turns out it's not just like that. And the sooner we recognize that, I think the better our policymaking will be. So some of the differences between the
offline age authentication, like showing your driver's license to buy a six pack of beer, and the online age authentication, showing your face or showing your government ID in order to access some content online, include the fact that first, when you're doing it in physical space, the first thing that the authenticator, let's say the liquor store clerk,
is going to do is a visual inspection. But in order to do that visual inspection, they will capture nothing. Their eyes will scan your face and nothing gets saved in the process. There's no digital trail of that whatsoever. By doing that scan, they can then decide whether or not they need to do a secondary form of authentication, like the presentation of a driver's license,
or if they can just accept that the person is clearly an adult or clearly a child. And in my paper in a footnote, I drop a little snark. I say, I haven't been carded buying alcohol in decades because every single clerk when I'm buying alcohol is like, this guy is clearly a geezer.
I don't need to go through that secondary process. So the number of people who actually have to then show their government ID in the process is constrained quite a bit simply by the fact that the visual inspections were able to resolve many of the evaluations. Now, when I present my government ID, if I'm carded, again, no copy is made in that process. The clerk looks at it, they zero in on the age,
they say it's either above or below the cutoff, and then it's done. Now, in order to do that, they could in theory memorize some of the other information on the card, but that's just not likely. They're not likely to be able to do that, and we accept that as the risk of the information that flows past people. We don't really worry about that particular risk. When we do all that online, we have to create digital trails, and
there's the likelihood, even if the rules prohibit it, of information being stored and of bad guys gaining access to it through security breaches or through interception. There's no risk of interception by some electronic surveillance of a malefactor when I'm presenting my card to the grocery store or liquor store clerk. But there's absolutely that risk, which did not exist in the offline world, when I do the exact same process in the online world.
Now, when the clerk is evaluating whether or not I'm allowed to buy the six pack, there's a cost to that. There's some time that they're taking away from their other duties to do that, but there's no real marginal cost to do so. So it just gets built into the overall economics of running a store, whereas with the online
there's an actual cost for each and every authentication, even when it's not required. So for the adults, it's like the store would have to pay money out of their pocket to verify the government ID of an adult that never needed to be verified in the first place. It just raises the cost for everybody. Now, some laws require that authentication to take place online before you're allowed into the website whatsoever. So assume now it's not the clerk
who's evaluating the purchaser at point of purchase; assume you have to show your ID in order to get into the store. Now that raises the speed bump issues, it raises the privacy and security issues, and it raises the cost. Now the online publishers have to pay the cost before they even know if they're going to make any money from that person.
So the costs have gone up dramatically in the online world compared to the offline world. And that, of course, also chills people as well. The last thing I'll say is that the goal of the segregate-and-suppress process is to keep people from getting to content online. Most of the circumstances where we're required
to authenticate our age in the offline world, we're buying physical things that have no speech implications. But if we are restricted from accessing published content online, then the restriction is keeping us away from speech. And so all of those issues about the speed bumps causing people to bounce are keeping adults away from constitutionally protected material that they're completely entitled to get. And that has nothing in common with blocking an adult from buying a six pack of beer. Yeah, I think that's what's so important for people to understand. And like, I don't want to talk too much about the cost. I think we've made that clear, but I think it's just important for people to remember that this is speech. Most of what people are doing online is accessing information, researching something, looking into something, trying to find out a little bit more information about maybe a niche medical diagnosis, connecting with friends, learning, educating themselves,
entertaining themselves even, right? And I think that the comparisons to liquor or illicit materials or drugs, like, well, you've got to show your ID before buying alcohol, come from a lot of these lawmakers. And again, people like Jonathan Haidt are pushing this idea of the internet as a drug, or the internet as an illicit substance that needs to be regulated like an illicit substance. And the reality is that it's simply not. Not only does the internet itself and content online not have the same effect, there is no such thing as internet addiction the way that there is this physiological addiction to things like alcohol. But I think the downstream effect of correlating those two things is really dangerous, because it leads to regulatory frameworks that replicate the ones for illicit substances rather than regulatory frameworks around content and books and speech. Like, I don't think any of us would accept these sorts of roadblocks and privacy violations to buy a book at a bookstore, right? But that's how we should be thinking about it. Like, that's the type of interaction that we need to think about. And I think it's a better correlation than the way it's usually presented, which is like,
"Yeah, you're buying alcohol or buying something bad." You're absolutely right about this issue about speech because speech is just special. It has positive benefits or what we call an economic spillover effect for people that make our society better. And that just has nothing to do with alcohol or gambling or some of the other regulated activities where minors are prohibited from engaging in the offline world. Those don't raise the same speech issues.
In some cases we might take the position that those are categorically harmful to all consumers, whereas with speech, that's almost never categorically harmful to all consumers. In fact, usually it benefits not only the consumer but a bunch of other people, because whoever is consuming information becomes more knowledgeable and is able to do something more pro-social.
There's a reason why we have a First Amendment for speech and we don't have a First Amendment for liquor or a First Amendment for gambling. Equating the two is, to me, actually quite offensive. I think what you just said there, too, is such a good point about the harms. So often these stats are cited, right? And I know many other journalists, including Mike Masnick, hate this stat as well. You see it all over. One in five teens say that Instagram makes them more depressed.
That is used to then justify preventing most teens from accessing Instagram. And by the way, we know that what most teens are doing on Instagram is actually messaging their friends. And so we ignore the fact that actually far more teens reported that actually using Instagram made them happier. And we see this replicated a lot through these studies. I wrote about a study recently that was showing actually kids that had access to cell phones had better mental health outcomes the younger they got the cell phone. Again, probably because they're more likely to be able to be in touch with friends. And actually, I thought it was really surprising too. I just want to say
that the lower income kids were more likely to have cell phones than the upper income kids. So it didn't have anything to do with the money that their family had. Tangent aside, I feel like we're so comfortable on the internet saying that we're acting in the best interest of some kids while really like regulating the internet at the expense of other kids, like a greater amount of kids. We're hurting a lot of kids that would benefit from free and open access to information on the internet to ostensibly help this very small minority of kids that claim to be harmed by it.
Does that make sense? Well, it sure does as a statement. It doesn't make sense as a policy. And that's part of the reason why we object to the age authentication requirements: like every other government policy, this implicitly creates winners and losers. And we don't really like to talk about the idea that when we provide protections for kids, what we're really saying is we're protecting a class of kids
at the expense of another class of kids who might be harmed or suffer detriments due to that intervention. And the moment that we acknowledge that it's not a categorical win when we intervene with respect to children, but it's simply privileging certain groups at the expense of others,
we realize that everything in this conversation has gone horribly wrong. We're not able to protect the kids. We're only able to protect some kids at the expense of others. We're not even protecting them. We're passing these really dangerous laws claiming to protect them. But there isn't really evidence, by the way, that a lot of these laws even ultimately protect that minor group of kids. I just want to reiterate that.
Absolutely agree with that as well. But I'm going to give the benefit of the doubt that the policy intervention might benefit some classes of children, that there are in fact circumstances where the regulators could help some of the children. You were mentioning, for example, the Instagram scenario where 20 percent of the teenage girls said they felt harmed by Instagram (that they felt harmed, not that there's proof), whereas 40 percent said that they felt they were benefited by Instagram. And so just take that scenario. Imagine that we could do a regulatory intervention to protect the 20 percent of the Instagram users who feel like they were harmed by Instagram. If we could do that with no collateral damage, the answer is absolutely yes. Let's go protect those 20 percent. But if we say we will be able to benefit that 20 percent, while the 40 percent who said they were benefited by Instagram might no longer get those benefits and might
actually suffer harms, now we're having to pit the constituencies against each other and we're having to make trade-offs. And it's possible that those trade-offs result in a net negative to society by intervening. And that's what's happening with almost all of the laws designed to protect children online: they are making trade-offs where we don't even know that the net benefit is that children are winning. It is entirely possible that
the net outcome is that more children are harmed than are benefited. Now, that's only children. I will acknowledge and remind everyone, there's a whole bunch of adults who are going to be harmed in this process as well. Let's not even get that far. Let's just focus on whether or not we can help the kids. And we may not even be able to do that. And I have to reiterate, because I get so triggered by this: again, every ounce of data that we have shows that these policies don't actually help kids. Like, that's not what this is about.
And to your point, I think you're giving the government more credence than I do. I do think that it could maybe help in some ways, right? Banning the internet, maybe the way that banning books could help some children. Banning comic books: did that help some children? Maybe. I think ultimately the harm is so much greater, and
If you are having these conversations without centering the harm that we know of, right, to the greater number of people, then I don't actually think that you're concerned about children's harm. Because if you're really concerned about sort of minimizing harm, you would acknowledge the much greater harm that we know comes at the expense of these policies.
So I'm so sorry if I at all implied that the government was acting in good faith here. Because on that one, you and I are 100% on the same page. I assume bad faith on the part of all the people pushing laws that are framed as protecting children online. And yet I was allowing for the possibility that, in fact, some interventions might benefit some populations of children.
some interventions could. I think the current slate of interventions, we know that's not what they're about and that's not what they're going to do. And so I guess that brings me to my final question for you, which is how can we make it better? Because I think like nobody wants to see children harmed by online content. This is an issue that is ongoing and I feel like it feels so fruitless anytime I do these videos and people like, well, then what are
we supposed to do? And I'm like, well, we should start by not passing these really dangerous laws. But is there anything people can do to help make a safer internet? Or do you have any sort of policy suggestions that you think might be beneficial? I do have some, but I'm going to start with the acknowledgement that there's no magic wand here to wave that will just solve problems that are easy and that will have no collateral damage. And
Once we let that go, once we recognize that any policy intervention is going to create winners and losers and involve trade-offs, and that we might very well be harming children, which we try not to even admit, we realize that maybe a cost-free fix is not an option. Once we realize that there are only
difficult trade-offs here, I think we actually get to a more productive place. Now, at this point, we could talk about: are there interventions that will categorically help children? Let's talk about those. And otherwise, let's talk about the trade-offs, and let's start to really accept that we might have to make trade-offs in order to get the outcomes that we want in our society. But there are some things that we can do right away that I think can be helpful. And let's start with the premise that there's no one actor
in our society that can protect children from all the harms in society. It takes a whole-of-society approach to benefit children. And that means if we put the burden on any particular player to fix the problem, they're never going to be able to do it. There are so many other moving parts to this.
Part of why the efforts to regulate internet services and how they deal with children are so misguided is that they treat that like the magic wand, the lever that we can pull that will only lead to good outcomes and will categorically solve problems for children. It just doesn't work that way. So what do we need from a whole-of-society standpoint? Well, the first thing we need is to acknowledge that. The second thing we need is data. We need
more data about some of the trade-offs that we're making. We need to understand different populations, and we need to understand both the harms and risks that they're facing, but also the benefits that they're getting. And there are surprisingly few government efforts to try to systematically capture what benefits children are receiving online.
And we need that not just at the level of all children, we need it broken down demographically, and most importantly, to get to some of the niche communities that might have significant outlier outcomes compared to the normal population.
For example, neurodiverse users of the Internet might achieve very substantially different and better outcomes than the main population because the Internet might solve some of the problems neurodiverse people face in their society. Or we might talk about the disabled.
And there's a bunch of ways in which the Internet can solve problems for people with disabilities that they can't solve easily in the offline world. And if we take away tools from them, we might very well be exacerbating or compounding their harm. But we won't know that unless we have the data. And so that's like such a cheat for an academic to say, give us more data. But it's actually critical here to good policymaking to understand what's going on.
I think the data thing is such a good example because to me, the media plays a role in this too. I feel like there's such incentive to study the harms. And like I said, this study on cell phones, for instance, it was this big study that came out, didn't receive an ounce of press. I think one other outlet aside from myself wrote it up. Whereas every single even non-peer-reviewed paper that sort of puts forth harms about the internet gets
national press and the New York Times is doing a daily episode on it or whatever, you know, like there's just this sort of panic industrial complex built around manufacturing outrage about this stuff. And so I think just to add to your point, not only do we need data, I hope that the media can also cover the data responsibly and actually give a holistic picture instead of just feeding into moral panic headlines.
I love the idea of a panic industrial complex, and it's heartbreaking because it's literally true. And I was an OG internet user. So back in the 1990s, there was this euphoria about the ways in which the internet was benefiting us, and
somewhere along the line over the 2010s, the conversation went from celebrating the Internet to focusing only on the ways in which subpopulations were being harmed by the Internet. And we've lost so much from the conversation by not spotlighting and highlighting the benefits of the Internet, in ways that
make us think that there are no trade-offs if we intervene: there's only downsides, there's no upsides. And like, there's tons of upsides to the internet. We live that every day. But our regulators aren't giving credit to that. If you look at the bills that are passed, the child safety bills, they'll have recitals where the government will say, here's our findings. And they will only talk about the bad of the internet. They will never talk about the good of the internet. The conversation has gotten so lopsided. Data can help with that.
A second way in which the government can help is it can help train parents about how to teach their children how to use the Internet. As a parent, I was never taught how to help my children use the Internet. Now, as an Internet law guy, I've been thinking about this for a long time, but I have no special skills. I have no intuition that is based on data about what will help my children become better users of the Internet. And the expression is,
As parents, we are our children's first teacher, but as a parent, I don't know how to teach my children. And of course, many parents today are themselves very heavy internet users, and all the kids see is that all they should be doing is using the internet all day long. That's like the only role modeling that we can really give. But we could do so much better if parents better understood their role as teachers,
and government could help with that. A third thing that the government can do, which is already underway but not nearly enough is being done, is to teach children how to be better digital citizens: how to integrate the internet into their lives, and how to build some resiliency in order to deal with some of the adverse issues that they run into on the internet. I know this is happening. My kids got this, although they're now a little bit older, so I'm hoping that the teaching has gotten better. But there's so much more we need to do in order to help kids
not only understand the internet, but to become valuable, viable, engaged partners with the internet. They have to be taught how to do that. It's not intuitive. It's something that we need to show them and teach them. And so the government could do a lot on that front. And
it has to prioritize that. What we're seeing in practice is a retrenchment on critical thinking skills and a retrenchment on the free thinking of teachers, their ability to teach children to think for themselves. So like when we see those policies, we're moving further away from helping our children be digital citizens. We're actually making it harder for them to be prepared for the future they're gonna face.
And on that front, one of the worst things in my mind is the categorical bans that say children can't use the Internet. What we're really saying is, you're going to get the right to use the Internet at some future time, but you're not going to be taught or have any experience in how to use it. You're just going to have to figure it out as an adult. And to me, that is the worst possible response to a digital future. That is what our children are going to experience for the rest of their lives. And the idea is that we're not even going to teach them and help them, guide them through that process. We're just going to expect them to know how to do it as adults. Like, that's the worst.
I know. And I think this is what's so problematic about what people like Jonathan Haidt are pushing to Congress and these people like Mark Warner, Richard Blumenthal, Amy Klobuchar, these people that push these things where they want to age gate the Internet. And it's like, OK, so you prevent kids from accessing the online world until they're 16. And I know the ultimate goal is 18. And then what? You send them out into the world with absolutely zero media literacy, with zero ability to actually navigate these systems that, by the way, are extremely relevant for employment.
It's a farce. It's nonsensical. And it's this, again, this idea that like, oh, well, we'll just stop kids. We'll just prevent kids from being online. I mean, ultimately they want to prevent all adults from being online. They want to censor the internet. They want everyone to be less online. They want to dismantle the internet as we know it, which is their ultimate goal. But I just think that the premise of keeping kids offline is so problematic.
We want to expose kids to it. And I had an educator on here recently, too, who said the same thing. He was president of a local school board and talked about the importance of actually exposing kids to the internet very early, when they're young, so that they understand how to navigate information, how to vet information, how to know what's reputable and not reputable. So yeah, I'm totally with you on that. I think it's really dangerous. I would add to that simply that if you ask children what they want,
They're like, don't take tools out of my hands. Teach me to use them better. And so it's actually another form of erasing children from the equation to say, we're going to protect you by doing exactly what you don't want and telling you that you should really like this. So to me, it sends such a terrible message to children when politicians take that stance. They're basically not listening to children and what they really want from them. And they're ignoring, again, the primary
way kids use a lot of social media is to message their friends. It reminds me of a lot of the moral panic around even just the landline telephone and the idea of like, oh, everyone's on the phone talking all the time or whatever. It's like we want kids to not be isolated. These same lawmakers are out there yelling about the loneliness epidemic all day long and you're trying to criminalize kids messaging each other. It's absurd.
You know, Taylor, that's a great point. I want to make sure that we don't skip over that. I'm willing to accept the possibility that our children are under great psychological distress today, and we don't really know why. So that's led to some really bad, I think, arguments that do correlation studies and say, here's the introduction of social media, and it correlates with a mental health crisis. Here's the introduction of baggy pants, and it correlates with a mental health crisis. So therefore, we should outlaw baggy pants. Exactly.
One possibility that children are distressed today is that their politicians aren't caring about them. They're not actually listening to the children. They're not thinking about how the children are going to navigate the future and speaking to those issues and giving voice to those concerns.
And so in a sense, for many of our children, they're getting the message from the government, we don't really care about you. You're not important to us. I mean, there was no better example of that in my mind than the TikTok ban. This was just a situation where absolutely the government just did not care about the users of TikTok. And we're mad, right? When kids were calling in and saying like,
God, I want to kill myself. Like, this is terrible. And then lawmakers were mocking that and being like, this is why we need to ban it, because these kids care about it and love it so much. Or they blamed TikTok and said, well, you just spurred your users to do that. That wasn't their authentic belief. So what message are children going to get when they're saying, don't do that?
This is really not what I want you to do. And the government's like, this is in your interest. You should like this. I think that could be a major causal contributor to the kinds of distress we have. And until we are honest with ourselves about both the distress that our children are facing and the ways in which we might be causing that, we're not going to get it right. And we're going to blame a scapegoat like social media or the internet and maybe really miss the mark. Yeah. And
ultimately harm the internet for all of us again, because we all use it. And I think you did such a good job of outlining the harms to adults as well. So Eric, thank you so much for your time today. Oh, my pleasure. Thanks for spreading this conversation and thanks for all the great work you're doing to evangelize the issues. All right. That's it for this week's Free Speech Friday. Let me know what you think about this episode and age verification laws in the comments. Also, don't forget to subscribe to my tech and online culture newsletter, usermag.co. That's usermag.co, where I write about all of these issues and more.
You can watch full episodes of Power User and Free Speech Friday on my YouTube channel at Taylor Lorenz. And if you like the show, don't forget to leave us a rating and review on Apple Podcasts, Spotify, or wherever you listen. Also, my bestselling book, Extremely Online, is finally out on paperback. Go pick it up today, wherever books are sold. Thanks so much and see you next week.