Empowerment, safety and equity: children's visions of rights-respecting digital futures

2025/6/21
LSE: Public lectures and events

People
Adam Engel, Aisling O'Farrell, Dylan Yamada-Rice, Filippos, Michael Murray, Sakshi Ghai, Srishti
Topics
Sakshi Ghai: I believe digital safety interventions should pay particular attention to the majority of young people in the Global South, who are often excluded from the conversation. It is important to understand how to build locally embedded and culturally sensitive interventions for them, rather than relying only on top-down approaches. We also need to pay attention to culturally salient risks, such as online gambling and technology-facilitated child sexual abuse, which are amplified in some contexts. Technology companies should develop content moderation for different languages and adopt safety-by-design and algorithmic-justice approaches. Srishti: I really appreciate the cultural aspects you mentioned. Much of my family also lives in India, so I often see this in real time. We need to ensure that digital interventions reflect the needs of different cultural contexts.

Transcript

Welcome to the LSE Events podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences.

Hi everyone, welcome to LSE Today. This is an event which forms part of the LSE Festival, Visions for the Future, which is a series of events exploring the threats and opportunities of the near and distant future and what a better world could look like. My name is Dr. Kim Sylvander and I'm a researcher at the Digital Futures for Children Center at the Department of Media and Communications.

The Digital Futures for Children is a joint research center between LSE and Five Rights Foundation, which advances understanding of the challenges and opportunities presented by digital technologies for children's rights and needs. Our partner organization, Five Rights Foundation, is an international NGO dedicated to transforming the digital environment so that it respects and advances children's rights.

Together, we work with their Global Youth Ambassadors program, consisting of over 200 children and youth in over 50 countries, some of whom you'll meet today.

So today's event is called Empowerment, Safety and Equity: Children's Visions of Rights-Respecting Digital Futures. And the theme draws on a project we've done with children engaged in the Global Youth Ambassadors Program in 14 different countries from around the world. Through a series of foresight workshops held this spring, the youth ambassadors imagined what a rights-respecting and inclusive digital future could look like over the next 25 years.

We're joined today online, as you see here, by our brilliant youth ambassadors who've been working with us on this project, and they will be presenting their thoughts and reflections on this research project and our findings today. They will also lead the discussion and ask the questions to the panel.

So I would like to present the youth ambassadors that you see online. We have Srishti from the United States who's been involved with Five Rights for about a year. During this time, she's developed a range of new skills, including hosting a podcast episode on Tech This Out. She's also earned a digital citizenship certificate from Microsoft. Aisling from Ireland has been a Five Rights ambassador since 2024 and has been involved in youth activism and consultations since she was 13.

She has recently helped create the Irish LGBT plus strategy as a youth representative. She won a Young Economist of the Year award and is about to launch a nationwide workshop program that educates young people about gender-based violence called Asking for It.

Filippos has been a Five Rights Youth Ambassador since 2024 as well. Along with his advocacy work, he's a national team captain in science competitions, an award-winning STEM innovator, and an aspiring robotics engineer who hopes to help shape ethical technologies while pursuing studies in engineering, robotics, and applied sciences.

And our youth ambassadors will be engaging with our excellent expert panel today on the stage. We have one panelist who will be joining us shortly; he was stuck on a train that had been stopped. So hopefully he'll be popping in at any second. We'll be attentive to his arrival.

First, I'd like to introduce Dr. Sakshi Ghai, who is an assistant professor at the London School of Economics. Sakshi previously worked at the Oxford Internet Institute and is currently a junior research fellow at Linacre College, Oxford. Sakshi's research focuses on how technologies like social media, online gaming and AI impact the well-being and safety of vulnerable young people, with a strong emphasis on inclusive global research, particularly from the global south.

And we have Adam Engel, who is a global lead for digital policy at the Lego Group, where he ensures Lego's digital products uphold the highest standards for safety, privacy, and security. He also works to influence policies that empower children online. Before Lego, Adam led the Emerging Technology Unit at the UK Information Commissioner's Office, where he advised on the risks of technologies like AI and immersive worlds.

We have Dylan Yamada-Rice, who is a professor of immersive storytelling at the University of Plymouth. Dylan is a researcher and artist who explores how children and young people engage with play and storytelling across platforms like apps, augmented and virtual reality, and television.

Dylan's work sits at the intersection of experimental design and social science, always with a focus on children's media. And lastly, we have Michael who will be joining us soon, Michael Murray, who is head of regulatory policy at the UK's Information Commissioner's Office. He leads policy development related to the Children's Code,

and works on children's privacy issues with industry, regulators and civil society internationally and in the UK. His team also shapes policy on age assurance. Michael is a member of UNICEF's expert advisory group on the best interests of the child.

So that's our excellent expert panel. Welcome today and welcome to our youth ambassadors. So for those social media users in the audience, the hashtag for today's event is hashtag LSE Festival. Please do expose us on social media. We love that. I would ask you to please put your phones on silent so as not to disrupt the event. The event is also being recorded and will hopefully be made available as a podcast and on YouTube if there are no technical difficulties today. Today's event will start with our global youth ambassadors, who will lead today's discussion. The panel will then respond, and that's followed by a Q&A with the audience in the theater.

For our audience online, you can submit your questions via the Q&A feature. Please include your affiliation in that. And if you're a youth ambassador or a young person, please let us know. Because we would like to, of course, welcome you to pose questions to the panel and to our youth ambassadors.

For those of you here in the theater, I will let you know when we'll open the floor for questions. Please raise your hands and wait for the stewards, who will walk around and give you a microphone. And I will try to ensure that a range of questions from both our online audience and our audience here in the theater will be addressed. So now I would like to hand over to our youth ambassador, Srishti. Hi, Srishti. Thank you, Kim.

Hi everyone, my name is Srishti and I'm a 14-year-old youth ambassador from the United States. I'm absolutely thrilled to be here with all of you today, representing a generation that's growing up alongside rapid technological change. Over the past few months at the Digital Futures for Children Center, our global cohort dove into several creative exercises designed to stretch our imaginations and challenge our assumptions.

From role-playing scenarios to collaborative timelines envisioning the perfect digital world, each activity helped me see technology through fresh lenses. Together, we mapped actions for an ideal future, debated ethical dilemmas, and even discussed solutions for bridging the digital divide in underserved communities. Working alongside peers from 14 different countries, I was inspired by the diverse perspectives they brought.

A friend in Kenya shared her experiences with social media and misinformation. A peer in India spoke about the importance of legislation. And a fellow in Nigeria spoke about the importance of inclusion and fairness. Hearing these stories made me realize that while our contexts differ, our hopes are the same. We all want to see a safe digital environment with the opportunity to explore new frontiers.

Through our rich discussions, whether we were unpacking the pros and cons of AI in education, evaluating the impact of social media trends and peer pressure on mental health, or mapping access gaps in remote regions, we agreed on one core principle. Safety and empowerment must go hand in hand. A truly inclusive digital future safeguards children from harm while equipping them with the skills, confidence, and opportunities to innovate.

Looking ahead, I envision a world where every child can use social media responsibly, where AI tools amplify creativity instead of limiting it, and where no one is left behind because of where they live or what they can afford. Together, by listening to young voices, co-designing policies, and advocating for equitable policies, we can build a digital landscape that is as protective as it is empowering. Through our work, I can't wait to see what we'll create next.

So my first question is for Dr. Ghai. Based on your work uncovering North-South digital divides, how can safety interventions be shaped to empower children in very low resource contexts?

Thank you, Srishti. Really lovely to hear from you and to hear you reflect on your experiences. So I want to start off by really positioning the conversation around digital safety and digital safety interventions with a focus on the majority of young people, who naturally live in the global south yet tend to be left out of the conversation,

whether it's through research, through funding, or even through tech governance. So I want to pose three broader perspectives. The first perspective I want to highlight is who really gets left out of the conversations around digital safety, especially from an international governance perspective. Now, as I already alluded to a little bit, the majority of children live in the Global South. In particular, children in

Africa are the ones that are most left out on the global scientific stage. And these children are also some of the fastest growing cohorts of the digital generation entering

the digital world: in particular children who live in rural areas, children who live within cities in informal settlements, as well as children who live in peri-urban areas. You know, there's limited infrastructure,

and as digital access expands, it's really important to start understanding how we can build interventions for them that are locally embedded and culturally sensitive, and that are not just a top-down approach. Now, the second thing I want to highlight very quickly is that in a lot of these contexts,

yes, there are risks that youth face as they go online, but there are also culturally salient risks. So, for example, a lot of youth in parts of Kenya and parts of sub-Saharan Africa, particularly, say, boys,

face risks around online gambling, which often never gets talked about when we think about risks that have to do with young people on the internet, or risks like technology-facilitated child sexual abuse and exploitation, which is another risk that does not get that much attention when we think about the risks young people face. Now, naturally, these are not just local risks. These are risks that are present everywhere, but in some

contexts they get more amplified. And particularly in the work that we've been doing in Kenya with a participatory approach using citizen ethnography, which essentially trains youth to go into these contexts in informal settlements, I think we are starting to pick up on these emerging risks. So I just want to highlight that it's very important we start paying attention to these emerging risks, to

develop tailored interventions. And I think the third thing I want to say is that with interventions in low-resource contexts, of course, there are a multitude of challenges. But there are some things emerging from the research, especially at the individual level, that we can start to think about. So, for example, a lot of work has focused on positive parental mediation to help

bridge, or help offer a buffer, to help children navigate online environments safely. And that may be an individual factor that is relevant. But equally, we also need to start thinking about technology companies, and particularly technology companies that often develop

content moderation designed for English-speaking contexts. So a lot of harms that are online, particularly on social media,

in content that children access in different languages, often don't even get picked up. So just to say that while individual-level, family-level and cultural-level responses are really important, we also have to start to think about technology companies and the risks that they are not addressing,

or the interventions that they can then deploy at that system level, thinking about safety by design and algorithmic justice. We could spend hours on this, but I just wanted to leave it there. Okay, thank you. So, just to make sure that we have time for all our talks, I'm going to ask Srishti to ask her next question, and then we're going to move on to Aisling. We'll run through the youth ambassadors first and then go to the panel.

Sorry for the confusion. No, no worries. Go ahead, Srishti. All right, that works. Thank you so much for your answer. I really appreciated the cultural aspects that you mentioned. A lot of my family also lives in India, so I get to see that in real time a lot. So my next question is for Mr. Engel. As you develop AI-driven play experiences at Lego,

how do you navigate the tension between fostering creative exploration and guarding against bias or privacy risks? We're going to get all of their talks first. Sorry. We'll go through all your talks and then we'll come over to the panel again. Sorry. Thank you, Srishti. We'll move on to Aisling, please. Hello, I'm Aisling O'Farrell and I'm a 17-year-old Irish Five Rights youth ambassador. Today, it's a genuine privilege to bring you the voices of young people across the globe, empowered by Five Rights,

in this digital rights movement. As a youth leader throughout this process of consultation and facilitation, I had the opportunity to notice areas of similarity amongst the concerns of young people. It became apparent that fears about the future of AI and privacy/data protection were a shared global discomfort. But the digital divide was illustrated heavily also, with clear inequalities of access and education.

This was evident in technical difficulties and personal testimonies from the participants themselves. Equal access to devices like computers, and media literacy, were not universal aspects of school life. So we decided that all solutions we found had to meet each country where it currently is. One of our most energizing exercises, and my personal favorite, put participants into two futures: a hyper-connected world or a deeply divided one.

Seeing those twin paths side by side brought home both the challenges and the benefits that could lie ahead for our future digital ecosystem. Today, we're more linked than ever, yet we obsess over local impacts so much that we risk losing sight of the bigger picture.

It had not occurred to me that some groups in society may opt out of the digital world for religious or lifestyle reasons. It became discernible that we need action to ensure that, no matter where you were born or what groups you identify with, the future is fit for all. To reach our goals of safe online spaces that are accessible for all young people and that also bridge the digital divide, legislation and regulation need to happen rapidly. In LSE language, data equals capital.

If you don't own it, you can't leverage it. Our haste to adopt the latest technologies like AI needs to be met with equally fast policy, driven not by current problems but by future-proofing. AI is no longer science fiction. Today, it's algorithms deciding whether you get into college. There is a desperate need for progressive solutions to these future and present issues, and the only way to achieve such aspirations is with international cooperation and representation of all states on the world stage.

As Shami Chakrabarti said, we must champion the cause of those without a voice. And that is exactly what we concluded in this project. I have three questions. Sorry to Michael Murray, who had trouble on the train, but two of those questions are also for you. Sorry to double down on that. But the first one being: with the rise of anti-DEI sentiment in the US and growing anti-inclusion movements across Europe,

how can governments ensure that minority and vulnerable groups are protected online, while also drafting inclusive regulation and requiring companies to safeguard these communities as consumers? My second question being: in recent years, power has shifted from state actors to multinational corporations.

How can national and supranational regulators assert control over digital corporations, many of which are based in the US and operate beyond traditional borders? And then my last question is for Dr. Sakshi Ghai. Given the disparities in digital access and regulatory capacity between the global north and south, how can we ensure that both are held to the same progressive standards in regulation and legislation? Thank you so much. Thank you, Aisling. Filippos.

Welcome. So good evening, everybody. My name is Filippos. I am 17 years old and I am from Greece. With AI shaping what we see and do online and digital tools becoming part of everyday life, companies have a real impact on how young people grow up. It's important that innovation doesn't come at the cost of our privacy,

safety or sense of fairness. That's why youth ambassadors like us have an important role in making sure children's voices are included when decisions about technology are being made. During this consultation process, I noticed how often the same concerns came up from the other ambassadors, especially when it came to digital transformation and the role of AI. It's clear that these technologies are changing the way we live, learn and basically connect.

While there are real benefits, there are also serious worries about data being misused, about bias in algorithms, and about the growing feeling that young people don't really have control over the digital tools they use every day. One moment that stood out was when we were asked to imagine the ideal digital future. It brought up a lot of concern. Many of us wondered whether technology could end up being used by powerful groups

to control or divide people, and whether big companies would always put profit first

instead of doing what's right. As a youth leader, I felt this part of the process was really valuable. Thinking about the future helped us come up with ideas for change and ways we might respond if things don't go as they should. In the end, we concluded that digital transformation is reshaping what it means to grow up, from how we communicate and learn to how we build relationships and see ourselves. Technology is everywhere, basically. It can bring people closer,

but it can also make us feel more isolated. It's a big shift, and it's one we're living through. So it only makes sense that young people help shape where it's going. So that leads me to the question I want to address to Mr. Adam Engel.

Since LEGO focuses on digital innovation that is both child-friendly and rights-respecting, what specific measures do you believe companies should take to ensure that children are meaningfully included in shaping technologies like AI and digital play environments, not just as users, but as co-creators of their own digital future? And also to Professor Dylan Yamada-Rice:

When we imagine different digital futures, it helps us see both risks and possibilities more clearly. From your work with children, how can creativity and storytelling help make sure that their ideas actually shape how technology evolves?

And the second question, as part of this process, we imagine what different digital futures could look like. In your work with children, what are some of the most effective or inspiring ways you've seen them contribute to reimagining digital technologies creatively and meaningfully? Thank you. Thank you, Filippos.

Thank you so much to our global youth ambassadors. I'm going to go back to Sakshi to respond to Aisling's question, and then I'll ask Dylan. And then Adam and then Michael. Thank you.

I'll try to be brief. So thank you again for the question. When it comes to regulation, and particularly the progressive regulation you mentioned, I think one of the critical questions is how we define progressive. Much of what we call progressive regulation is very much designed from the standpoint of the global north, again assuming high connectivity,

individual device ownership, digital literacy, and institutional capacity. In many global south contexts, the reality is very different. Access is often shared, and there are gender differences in access; it's sometimes restricted, sometimes completely restricted.

Some children are completely left out of the digital world. Again, like I mentioned, platforms are not optimized for local languages, and that really puts content moderation at the systems level at risk. And then again, protective infrastructure and legal recourse are not as easily available. On top of that, we face very multi-layered social norms and gender norms around how digital media use is mediated,

by family hierarchies, community expectations, and safety concerns. Now, of course, these realities are not that easy to disentangle, but they must be factored in. So, to move quickly to answering your question:

we need a globally relevant but equally locally responsive approach. That means creating shared regulatory standards grounded in universal child rights, but also allowing for local adaptations so that they can be easily implemented and enforced. And that's not easy to do. So, for example,

harmful content that might be flagged or removed very quickly in English-speaking markets might remain online for weeks, months or years, or might not even be detected, in parts of the world with African or South Asian languages. And this is not a linguistic issue, it's an equity issue.

Yeah, so just to close, I think we need to think about regulation, particularly international regulation around these issues, from both a global and a local approach, as well as thinking a little more deeply about how we're going to integrate children's voices in shaping and understanding where regulation is most needed. Yeah, thank you. Thank you for the question. I'm going to step

back slightly. I work in an art school and I also do a lot of work with children in schools, and I think that I would try to empower children to demand their arts education back. I think without arts education you cannot fully learn how materials work; therefore you cannot understand how things can be made differently, how they can be used differently.

And I think it's a deliberate political ploy to separate fine arts and digital arts, to separate digital making into STEM and mathematics and engineering. Whereas actually what we need is for people to understand the full breadth of materials and making, whether that's analog or digital making.

So, first of all, at the art school end of things, when I work with higher education students, I see a lot of guys coming in wanting to make digital games based on things that they've seen and played online. And I see a lot of women coming in wanting to do fine arts and performance. So it's very, very gendered. I think the differences

that you're talking about are obviously driven by the tech companies. They don't want things to be fair and just. We're led by these billionaire tech bros. As we know, they're the ones that are developing AI. They're the ones that want us to remove arts, I think, and arts are a powerful means of critiquing these technologies.

So ask yourself: why did ChatGPT first come out and say, you know, I can generate an image in a few seconds, I can generate poetry?

And there's a study that came out earlier this year from MIT that looked at how ChatGPT is affecting us cognitively. For people who are using ChatGPT regularly, their brains are firing in a different way. They're making fewer connections. They're unable to recall the kinds of information that they use, for example, in their essays. You've got to ask yourself, this is a little bit political maybe, but...

if you're interested, read Atlas of AI, read More Than a Glitch, and ask yourself who is benefiting at the moment from AI. It's not children and it's not everyday people. And I would say to kids: reclaim that.

You know, you're in a system which is asking you to be assessed regularly rather than to think and be creative. Ask more from your education. I think that's my starting point. Yeah. Thank you.

Thank you for the questions, Filippos and Srishti. Just a reminder, the first question I'm going to answer is around youth participation and how companies can take on a role to make sure that children are co-creators of digital products. And the second question is about

how we can build AI that is creative and inclusive, but also mitigates privacy risks and risks of bias. On the first point about youth participation, I do really think we have a problem, in industry but also in policymaking circles, of patronizing children and assuming that their views and their perspectives can be predicted,

and that, as adults, we can understand exactly what they will prefer in a certain feature, or what level of safety is appropriate,

without actually engaging them and fundamentally understanding their views. So it's a really good problem that you've highlighted, because the processes for a lot of child participation are slightly broken, or at least sometimes they can be. It's not a real consultation; it's performative. We want to hear your voices, we care what you think; and then submissions are made, but nothing's actually taken on board. Minds aren't changed, and design decisions don't reflect

what children are actually wanting and seeking. And I think, to move past that patronizing nature that really does typify a lot of how companies and other entities think about children's needs, the language you used of co-creator I think is essential.

It needs to start from actually developing the evidence base, and also the expectation that, in order to rebuild trust in technology and move past the trust deficit that we've seen develop particularly in the past 10 years, people need to feel like

they are participating in the design process. Children need to feel like they're participating in the design process, and then the products need to reflect that participation. So at the Lego Group, we have a series of processes which we undertake

to try and achieve this exact purpose. First is being led by the evidence, and that means undertaking research projects which have child participation and children's use at their core. One is called Responsible Innovation in Technology for Children, a partnership with UNICEF where we

worked with children in 31 countries across the world; around 346 children of diverse backgrounds and diverse ages were involved in different workshops to basically help us understand what digital well-being looks like from children's perspective.

And then, what features of a digital product, whether it be an online platform that we're using or a game that we're designing, actually amplify that digital well-being? And by digital well-being I mean quite precise things: creativity as an expression of digital well-being, emotional regulation as an expression of digital well-being, competence and skills building, but also privacy and safety.

What do children think amplifies their digital well-being? What does the research show, and which features actually embed that in product design? So undertaking that research, and making those long-term research investments to get that full picture, is really important for companies to actually understand what meaningful child participation looks like.

Similarly, we sponsored the Children's AI Summit this year, which was at Queen Mary University: 150 children, barely any adults in the room. And they were basically telling us their hopes and their concerns about AI. And we brought

those children's perspectives to the AI summit in Paris, with a lot of decision makers. So it's making the investments to understand children's voices and then incorporating them into the design processes. That includes really empowering the teams within a corporation, like our child rights and safety team, who are child rights experts, experts in engaging children with safeguarding and proper processes, and making sure that they're part of design conversations. So it's not an afterthought; it's baked in from the start of the process.

And then obviously it's having a lot of these design choices set

by children themselves, because you're involving them as you're developing the products and you're constantly testing them. Not just so you can ask, is this safe from your perspective, but is this fun from your perspective? Will you enjoy this? So I think those are some of the key things that we try to do: build the research, build the evidence, get the processes in place to ensure that child rights and safety, but also children's perspectives, have a core role in design, and then bring children along for the ride. I'll quickly go to the AI question. I think I just want to

be clear that Lego isn't thinking about launching an AI product next week. Obviously we are interested in the opportunities that AI might bring for play. But we're quite a conservative company. We are not a first mover that's going to drop an AI product anywhere, because we want to understand: is this actually something that children want? Is it beneficial for children?

Do we know its risks before we can put it in front of them? Is the research there to fully understand that? So again, with this research-led approach, we've actually partnered with the Alan Turing Institute to understand this exact question. The research findings were actually released two weeks ago, unpacking what some of the impacts of generative AI on digital wellbeing are.

I encourage you to read that report. We're certainly ingesting it internally. And it shows that, yes, in some instances, AI can augment creativity. But in workshops that we did with children in that research, we found that bias, and the fact that some children could not represent themselves in image generators, could not represent themselves in some AI outputs,

was really distressing and upsetting. And that made them not want to use and adopt AI. And actually they preferred the art materials and all the tactile play materials that we provided them as an alternative. So I think bias is a real barrier to inclusion and adoption of AI for children, because I wouldn't want to use a system that didn't reflect me, and I don't think we should expect anyone to use those systems.

So I think that type of research is really helping us think about AI more holistically. On the privacy challenges, maybe Michael can touch on that, because he's the privacy expert. But it's certainly something that we're thinking about, and we're not rushing into this, because it's important to do it right. Otherwise, we're just going to repeat the next 10 to 15 years of a trust deficit in tech, and the concerns and harms around social media. And I do think AI represents an opportunity to maybe get this right,

if we can really unlock its benefits. So that's the kind of path that we want to be on.

Thank you. First of all, apologies for being late. The best-laid plans to be here early don't survive signal failures in Rugby. So I've got two questions: one is on the DEI and fairness element, and the other one was on transnational regulation. Before I answer those, I want to take a step back and make sure people are aware of a really important acronym I'm going to mention. The acronym is the AADC, or the Age Appropriate Design Code.

I started working at the ICO about five years ago, and I've had the privilege ever since of working on the Age Appropriate Design Code, or the Children's Code, as we call it. At that time, almost nobody talked about children's privacy other than Baroness Kidron, who was the founder or the brainchild behind this, and Sonia, I'm not sure if you can see her or not.

That's changed significantly over the last five years. It's at the top of every agenda. It's been discussed at the G7 this week, for example. And a lot of that comes through the success of the code and the success we've had in spreading the ecosystem of children's rights-based, age-appropriate design and privacy protections.

The code is a guide for industry about how to comply with data protection legislation in your country. In the UK, that's the GDPR and the Data Protection Act. It's not there to prevent children from accessing the web. It's there to protect them while they're in it, not to gate them out.

By conforming with the Age Appropriate Design Code, you should be able to demonstrate your conformity with data protection law in just about every country. Now, since I started, the UK was the first to do it. We were quickly joined by the Irish and the Swedes.

And part of the answer to this transnational regulation question, and the issues that you raised, is this progressive rolling out of localized age-appropriate design codes. One of the things the ICO has committed me to do in my role is to help countries and data protection regulators around the world roll out their own localized versions of the AADC.

So we've seen it rolled out in many countries in Europe, and in California, which then led to several other states in the US supporting it. It's now being developed in Australia, in Canada, in Indonesia. I've talked to Nigeria and Argentina. So there is that broadening internationally of a recognized principle-based approach to children's online data protection.

And because it's based in the local law, I'm hoping that's going to address some of the issues that you've heard about, that the law is a North-based approach. The principles shouldn't be North or South. The principles should be the same everywhere, and it's how you relate those principles back to local circumstances that matters,

which I think the globalization through local legislation that the AADC promises is a way to get around some of those problems. In the UK and in much of Europe, I think the GDPR is probably one of Europe's greatest export achievements of the last seven or eight years.

Even though it's a European law and UK law now as well, it has influenced the design and the application of data protection globally. Most services, as the astute young colleague said, are transnational. So I will be talking with

online services that are based in California, or based in Crete or wherever, and they're asking me how to comply with the AADC in the UK. So having that sort of international code is useful. Now, the GDPR, on which all this is based, is built around the principal features of fairness, transparency, and lawfulness. So,

if a service is trying to offer you something that's not fair because you are a different gender or sex, or whatever your diversity is, then that is counter to the core principles of the UK GDPR, and the processing of that data is likely unlawful. And that's the position that regulators in Europe, and increasingly globally, are taking. So we see

that spread of data protection, that code-based approach, trying to address those issues around lack of fairness and transparency.

But similarly, because it's trying to make services transparent, you as children and you as parents need to have the strength of will to say: I'm not going to use that service that I know is not fair and not transparent and is going to process my data poorly. As your colleague said, if you don't own your data, you can't leverage it.

Our research shows that kids feel that their data is their only leverage. It's the only tool they have to gain access to services, so they give it away in order to get access to the things they want to use. We as regulators have the role of trying to push and regulate age-appropriate design and safety by design, data protection by design.

Services should be designed in a way that children don't have to make that trade-off. But while it's still out there, I think there's still a role for parents and kids to have ownership of what they do, to be able to say no when they think something is not fair or is potentially dangerous or unlawful. So, to come back to the transnational regulation element, I think I've covered it to a degree, but

Also, what's happening in this space is we started with data protection, and now that protective regulatory environment has moved into safety by design. In the UK, we have Ofcom, who's now regulating the Online Safety Act. In Europe, the EU Commission and its regulators are increasingly regulating the Digital Services Act.

We have the eSafety Commissioner in Australia doing the same. So those online safety acts are spreading as well. This is really promising, and I'm really excited by it. In the UK, we at the ICO and our colleagues at Ofcom regularly talk to each other about how we can regulate collaboratively and in alignment to ensure that both online safety and data protection protocols,

safety by design and data protection by design, are built into services. So that will help. And as the DSA comes on board, the power of the EU will help progress that transnationally as well. So although things aren't perfect, I think they're heading in a direction where we see a lot more effective regulation, a lot more flexibility,

and the raising of the issue of children's safety by design and online protection by design onto the agendas of many countries. And if you look at the code itself, the very first principle is the best interests of the child.

All of these children's AADC codes around the world will have that fundamental best interests of the child as principle number one. And I'm really proud and pleased to see that. And that hopefully is our way forward to move this along. Thank you so much. Thank you to the panel and thank you to the youth ambassadors again. We're going to move into the Q&A. And I think we had a very good full-circle moment here, coming back to

how regulation can be localized, but also have kind of universal applicability. So now I'd like to open the floor for questions. I want to try to disperse the questions amongst the panel. So I might say, does anybody have a question for this person, just to make sure that we cover the panel. And also, of course, you may ask questions to the youth ambassadors.

And just as a reminder, if you're online, use the Q&A box and try to write a short question, and please do let us know who you are and to whom you want to pose the question. But if it's a general question to the panel, that's also fine. So I think we'll start in the room. I'll take about three questions before I let the panel speak.

Yes, we have. Our colleagues will come to you with a microphone so that our online audience can hear what you say. Thank you so much. My name is Mehdi Skarir from Oxford. I'm not quite sure if Michael just answered this question or not, but on the 20th of January 2025, which was the inauguration of President Trump's second term,

soon after that, the big companies relaxed their very first, I would call it, defensive line of protection with respect to hate speech and children's grooming. They've actually significantly reduced that, or perhaps even eliminated it altogether. What can we do

to bring pressure on the big tech companies to actually retract that step towards relaxation that they took? Thank you. So I'll take two more questions. Thank you. It's really interesting, all of the talks. When you talk about the universal principles and then thinking on a local scale, do you think that there's a role for regional actors, particularly in the global south, when the annual revenue of some of these tech companies is many times the GDP of a lot of the countries that would try and regulate them?

Thank you. One last question. Okay, we'll answer these two questions and we'll come back to the audience and then we'll also go to our online audience. So we have one question for Michael and one that was more maybe to everyone, anyone who feels compelled to answer. So I'll go to Michael first.

Good question. First of all, we probably can't do anything about content moderation in the US, although I think

the new leadership of the Federal Trade Commission in the United States is young. It is very interested in children's protection and data rights, or children's protection, I should say, although they focus a lot more on parental responsibility and parental power rather than the European focus on the best interests of the child and age-appropriate design. From a UK perspective or a European perspective,

In the UK, the Online Safety Act has its requirements around content moderation and online safety rules. So those big services will have to abide by the laws here in the UK or in Europe under the DSA.

It is quite possible that you will have different tiers of access and content moderation. It's still a bit early to know how that's going to go. But all of those services have offices in Europe. They would be within scope of the data protection and the online safety regulations.

in Europe, where they operate. So they would need to conform with the laws. I don't think we're going to push them to change that in America. It's not our responsibility to do that, but we can push them to conform and to be legal and lawful in the areas where we can regulate.

I think that probably answers the first one. Shall I start with the second one and then hand off? Yeah. Regional regulation. Well, the EU is a great example of regional regulation in action. I think the DSA

shows what can be done in that sense. I'm not aware of any other regional body that has the same legal basis in law that the EU operates under. So I think in most cases it's going to be national legislation that needs to be strengthened. And then again, I feel that

the more countries that have a code, an age-appropriate design code, the more it spreads the regulatory expectation and also spreads the regulatory burden across those countries. If the ICO tries to regulate all 100,000 companies that are in scope of the code, it's going to be really difficult.

But if our EU colleagues are also regulating, and our Australian colleagues and our Nigerian colleagues and our Indonesian colleagues are all regulating, then eventually you start getting that spread of, hopefully, a change of practice on a global level rather than just on a European or local level. It's difficult, but we do know the African Union is looking at this issue, for example. It's high on their agenda. It's high on the G7 agenda.

We have an international age assurance working group that has representatives from about 30 to 40 data protection organizations from every continent, all looking at how to do this better: how to, first of all, recognize who's a child, and then apply the protections that those codes provide to those children. So there is international cooperation, but probably not in the way that you're hoping for at the moment, just because we don't have the legal structures to do it outside of Europe.

Thank you. Sakshi and then Adam? Yeah, I can go. Maybe this speaks to both questions. But just to add to Michael's point, I think another massive challenge, whether it comes to regional cooperation or to online safety accountability for tech companies, is to a great extent resourcing

and funding, particularly in regional areas, particularly in the Global South, whether it comes to research or to building those communities of cooperation and putting it really on the agenda for policymakers, but also for researchers.

Just to give an example, going back to the question around holding tech companies accountable: children face various risks online, and one of the most pressing risks, in my opinion, and also because we do work on this issue, is technology-facilitated child sexual abuse and harms.

And the extent to which this affects children is not just local. So yes, there are a lot of children in the global south who get exposed to this kind of harmful content, whether it's unwanted pictures or grooming, but it's also

very much a high risk in the global north. But the research around this is not so mature, and not at the level where we can really start to account for it. I think we're spending a lot of time, and UNICEF in particular is doing great work, to really highlight

this issue and put it on both the scientific agenda and the policy agenda. And I know UNICEF in particular have done a lot of work around building regional cooperation and working with national governments. So there are actors that are putting a lot of focus on these issues and bringing them onto the policy and scientific agendas. However, I would argue that the research is really not

on par, and I think one of the big challenges really remains investment and funding, particularly in the context of the DEI question that was also raised. International funding at this time has been massively cut back,

and particularly in the areas that I focus on, which is children in Asia and children in Africa, it's not that easy to even get funding. And often participatory research and really innovative methodologies that would help us bridge and find solutions to these problems really require investment. So that's just a quick call for that. And then I'll hand it over to you.

Sure. So just on the question about the US policy framework: I actually see child protection as one of the really rare areas of bipartisan consensus in the US, at both a federal level and a state level. The wider content moderation and online safety debate vis-a-vis adults, certainly that's

quite politicized, and that varies. But there is some consensus around child protection, which is good to see and quite remarkable in some regards. So

I would expect maybe to see some regulatory moves there over the next year, and accountability towards US-based tech companies, vis-a-vis child protection at least. In terms of the regional picture, there's nothing that a company hates more than a fragmented regulatory environment. And

the more that key markets, and even markets which aren't major markets for a company, can adopt similar regulatory standards, the more likely it is that those interests will be reflected in company products. And there's a strength-in-numbers argument, as Michael was saying. I think companies often will accept a higher administrative and regulatory burden if it means consistency. So I think

the more people can align, look at what other key markets are doing, and then try to lift or modify their regimes to meet that consistency, the better. It's just a nightmare trying to take a different approach for every single jurisdiction. So consistency is key, and I think a lot of companies would welcome a more regional, more global approach. I would now like to take

one or two very, very short questions and very, very short answers from the online audience, especially if there's a question for Dylan. So, from the online audience, I'm not sure who I should be looking at. Yeah. Oh, there's one.

Is it on? Yes. Yes. Okay. Yeah, there's one to Dylan. Yeah, perfect. And it's a comment. It says, Bravo to Dylan. Couldn't agree more with what you said. Thanks.

Just to add to that, because I was listening: a lot of what we're talking about is still adults making decisions on behalf of kids. And I did a project a few years back called Countermeasures, which was to try to teach children about the sensors that exist in their devices, the kind of data they collect, and where it goes. And we were using art-based methods to do that. And

a lot of kids don't know, you know, and I myself had to really take lots of technology apart and play with it to understand the extent of those sensors that are in devices such as a Switch, such as an Alexa. And even the idea that if you want to stop Alexa listening to you, you can turn it off; nobody came up with that, out of all of the kids, you know, so...

At this level, yes, we can teach them, we can teach them in schools, but these are really complex things. And I believe that making with these technologies, taking them apart, hacking them, thinking differently with them actually gives scope to look at the bigger picture. And one of the things that I wanted to say in answer to the first question, which I forgot and which we haven't mentioned, is that all these technologies are entangled with the climate crisis. And

Every kid on every project that I've worked with for the last 10 years mentions the climate crisis and their concern for the climate. And these technologies are propelling the climate crisis. AI, you know, the mining for lithium, all of these things, the extent of water that is needed. And children know that.

Very young children know that. They're aware of it. They're more critical than a lot of adults are. And, coming back to that thing of listening to their voice, I think we can learn quite a lot from them as well. Just really quickly on that, it's interesting you say that, because in the work that we did examining children's attitudes towards generative AI, when they understood the environmental impact of AI, they actually rationed the prompts they would make, because they thought:

this is wasting this much energy, this is using this much water. They were hyper-conscious about it. It was remarkable. Well, you've probably seen the news that, because we're all so polite and say please and thank you to AI, that's actually using a substantial amount of natural resources. But kids were aware of those kinds of things, and they were suspicious right away that this was going to do climate damage. What is it? Tell me what it is. And you're kind of like, well, it's quite complicated. But we haven't touched on that yet today. Yeah.

Thank you. This is a very short session today. Unfortunately, it's only an hour. And I would like to give the last word to our three youth ambassadors. So I'm going to ask...

first Srishti and then Aisling and then Filippos, just to say one or two sentences as a last reflection and goodbye to the panel and to the audience today. So, Srishti. Thank you all so much for this. It was very enlightening and very inspiring. I will definitely be looking into more of what you all said, such as the design codes that Michael mentioned and the global south research from Dr. Ghai. And yeah,

more on privacy and bias and climate risks. I found this very interesting. Thank you all. Thank you. Aisling? I would just like to say I'm so honored to be given this opportunity and to hear from a panel so strongly opinionated on championing youth voices, as that is an issue quite important to me. And just on

the three questions I asked, I loved the link between principles being translated to locality and how that can be applicable in many different ways. Thank you. Thank you so much. Filippos? So, me personally, I'm very thankful for the discussion and also for the answers to my questions. It was very interesting to see how

the work in education and the work on developing policies for companies can actually be interconnected, and how companies, through policies, can take children into account in their actions and in their products. So I thought that was very interesting, and I was very thankful for the opportunity to be a part of this discussion. Thank you very much. Thank you.

A huge thank you to the panel for this very rich and dynamic discussion. It was great to have such a broad set of issues discussed in this session today. And thank you to the audience for coming and thank you to the online audience for coming.

Thank you for listening. You can subscribe to the LSE Events podcast on your favourite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk forward slash events to find out what's on next. We hope you join us at another LSE event soon.