Welcome to the LSE events podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences.
Welcome to the LSE for today's event, which is part of the LSE Festival: Visions for the Future. My name is Cosmina Dorobantu and I'm a senior advisor to the Data Science Institute here at the LSE. I'm absolutely delighted to welcome Lila Ibrahim to both our online audiences and to all of you here in the room.
Lila is Google DeepMind's Chief Operating Officer. In her role, she oversees the AI company's partnerships, social impact, external affairs, ethics and responsibility, legal, communications, and government relations. In 2023, Time Magazine recognized Lila on its inaugural list of the 100 most influential people in AI.
Lila has spent her career building effective organizations and pioneering products that benefit the world. In her current role, she's at the forefront of ensuring that AI develops in ways that truly benefit humanity. It is an absolute pleasure having you with us. Thank you. And thank you all for joining us today. Just some very brief housekeeping notes before we get started. To the social media users out there, the hashtag for today's event is #LSEFestival.
Please, please put your phones on silent so as not to disturb the event. I very much look forward to having a conversation with Lila, and then all of you will have a chance to ask her questions as well. For our online audience, you can submit your questions via the Q&A feature, and please do include your name and affiliation. Great. Well, look, I know this is Visions for the Future, and we'll most certainly talk about the future and about AI.
But I find it interesting to spend a little bit of time thinking about the past, understanding where we are today as a result of where we've been. Most people in the audience will know you as someone who is shaping the future of AI, but the decisions about AI are made by people like you, whose experiences and values shape their understanding of technology and its role in the world.
So before we dive into AI, I'd like for us to get to know you as a person and understand what experiences have shaped you. And perhaps a really good place to start is the United States. You find yourself, age two, the daughter of immigrant parents in Indiana. How did you go from that to becoming DeepMind's first COO? There are many decades in between.
I will try to be concise. Yes, it has been quite a journey and
I think at many points along those decades, I'm not sure I would have told you I'd be living in London for the past seven years as the first COO of an AI research company. But one thing that is consistent throughout, the through line, is this curiosity and drive to connect people and technology. So I studied electrical engineering and worked at Intel for 18 years, where, within a corporate environment during the rise of the personal computing industry, I really had a chance to not just develop the microprocessor, the brain of the computer, but eventually work with customers to design it in. And slowly I kept getting curious, and I found myself asking how computers and the internet actually fit into the world.
Being a part of that build-out shaped so much of my view of the work that I'm doing today. Because, you know, there was a time when we wondered what kinds of jobs computers would take. Did teachers still have a role? What did this mean for the global economy? So it was really an amazing opportunity. And then after 18 years, I had a chance to go into venture capital.
And after 18 years, which is a very long time, I thought, you know, I can either stay here for the rest of my career or really go. By that time I had worked and lived in four countries. So I thought, now's the time to see what it's like to go from being very entrepreneurial in a corporate environment to working with startups and entrepreneurs, sharing those best practices and also having a chance to learn.
And I was blown away by the audacity of these entrepreneurs, with these big ideas and what they could actually achieve. What they didn't have is a lot of what I had been raised with, which is, you know: how is this going to impact the world? How do we think about doing this collaboratively? How do you build organizations? And I eventually realized that actually I'm a builder.
And so I went into one of our portfolio companies that I helped make an investment in. Some of you may know Coursera.
I joined when it was just about 40 people, as the first president and COO, and really helped grow that business. And then, several decades into my career, I thought, I need a break. Everything had been following a passion and interest. I had lived in Japan and Hong Kong and Shanghai, seen the rise of personal computing, made investments in startups, built a startup. I thought, I just need a break. And then I ended up here.
You speak quite openly
about feeling different or being different throughout your life and your career. I'm wondering how did that experience of otherness influence who you are and what you do? - Oh, it's such a good question. I think throughout my life I've never quite felt like I fit in. I was the foreigner in Indiana, then when I lived in Japan as an exchange student in the 80s, I was the only foreigner in a city of 40,000.
And when I was in electrical engineering, I clearly looked very different from my classmates. And English was my second language. So I think I grew up needing to find non-obvious ways to connect with people, and ways to connect with different topics. But what that meant, in many ways, is that I saw things from a slightly different perspective.
During my time at Intel, what I found is I ended up in jobs no one had done before, and I didn't see that as a problem. I actually saw it as an opportunity, because I was so programmed and used to that, and instead of trying to solve it on my own, I would take a collaborative approach.
And I think that habit of seeking other inputs and bringing them into the problem definition and into the solution really steered my journey. It's really about how you build things together and build them with people, versus having the technology happen to people.
You mentioned your degree in electrical engineering a couple of times. And I'm wondering, look, there are probably a lot of students in the audience and online who are thinking, oh my god, my choice of degree is going to influence so much of what I do in life. So my question is, how did you pick electrical engineering, presumably at a time when it was even more male dominated?
It was even more male dominated than it is now. And how did it allow you to progress to where you're at? How did it help you? When I was growing up... so my father was orphaned at the age of five or six, and he grew up in an orphanage where they really encouraged his educational development. When I was growing up, I saw my dad get out these really beautiful colored pencils and draw these beautiful pictures that actually turned into tiny little computer chips that went into things like heart pacemakers. And so I grew up with this concept that electrical engineering solved problems and created solutions that improved everyday life. How could this little orphan boy from Lebanon end up building technology that would save people's lives? And so to me, it was really the combination of art, math, and science, and those were topics I was very passionate about.
And when I started at university, I had a chance to do some internships, and maybe if I had just done the theory it would have been different. I actually like to go back to some of the women-in-engineering seminar classes, because I tell them: don't drop out. What you're learning is the theory. What really matters is putting it into practice. And if you feel like you're not succeeding in some of your classes, that's OK. Learning how to struggle, and learning what you do like and what you don't like, is really important.
But what I also got out of engineering was this habit of asking why, and to what end, and really trying to get to the heart of a problem. And I think that served me well. Now, I have never had a formal business class other than through my Coursera work.
I've never taken a formal business class. Everything I learned was on the job: people asking questions, trying, succeeding, sometimes not succeeding as I had hoped, failing and really having to learn from that. And that grit, that resilience, that learning along the way really reinforced finding things that you're passionate about. Because here's a slight secret that I can't believe I'm now saying online: I actually thought I would go into electrical engineering without touching a computer.
And so when I interviewed with Intel for the first time, I just remember my dad saying, "Don't tell them that you don't like computers." Which is really ironic when you think about my role now. So I do think that so much is going to change over a career, and really embracing that change is so critical. And I think that's what my early experience in electrical engineering taught me.
No, thank you so much for that beautiful story. It sort of gets to the point of why it's so lovely to hear a little bit about the past. Because it influences so much of where we are today.
When we meet people like you, it seems so normal and seamless and almost easy that you have the job that you do. You know, it's almost like the laws of nature: the sun rises, the earth spins, and Lila is the COO of DeepMind.
And I think part of the problem is that we just look at people's CVs, or we meet them in their current role, and we don't see the hard work and the personal sacrifice that got you the job you have today. You had 50 hours, as in five-zero, of interviews to get an offer from DeepMind, and you did it a month into what was supposed to be your year off.
Why? What was it about DeepMind that made the effort worthwhile? Let's see. So, yeah, I was going to take a year off and reset and be really deliberate about what I wanted to do in this chapter of my life and my career.
And my mentor from my venture capital days, John Doerr, said, "You've just got to meet this guy, Demis." And I'm like, "Who is he? Why would I go to London to work on artificial intelligence when we're here in the heart of Silicon Valley?" There are plenty of opportunities here if I want to do it. And besides that, I'm not a machine learning expert.
And I finally decided that, as a favor to John, I would take the meeting. And the more I learned about AI, the more I realized this could be one of the most transformational technologies of our time. And there are a lot of things that could go wrong. I think those 50 hours were them testing: okay, we're going to bring our first chief operating officer in; what does this mean in a research organization? And for me, it was also a matter of:
I wanted to be very deliberate. Is this what I really want to be doing? And in the end, I decided that if I could bring all of my decades of experience taking technology to market with communities in a responsible way, if I could help build some of that at DeepMind, because I felt so values-aligned with the founders, Shane and Demis, then this would be a career chapter that was worth it.
And so I picked up the family and we moved here. My interviews were in January, which was also part of the 50 hours of asking, do I really want to put myself through this? But it's been an amazing, amazing opportunity. And, you know, my job every day is managing the risk while also making sure we're investing in the opportunity. So those 50 hours really made sure that when I joined, I knew what I was getting myself into. And every day, that's what keeps me up at night: are we doing enough to get the good stuff? Are we doing enough to mitigate the risks? And it has fulfilled my every hope and dream.
No, that's amazing. There's a quote in one of the interviews that you gave several years ago to the Financial Times. And what you said then really stayed with me. You said, "When I had my interview with Shane, who is one of DeepMind's co-founders, I went home and thought, 'Could I work at this company and put my twin daughters to sleep at night knowing what mommy worked on?'" And, one, I'm grateful that someone in your position
is asking themselves those questions. But two, you're now in a role in which you can make sure that you put the procedures and the guardrails in place to make sure that you're okay putting your daughters to sleep at night. And it's that bit of your role that I would like us to turn to next. What does it mean to build AI responsibly? How will we ensure that it actually benefits humanity?
Easy question. When I first started, like I said, I wasn't sure where and how I would be able to contribute to the mission of building AI responsibly to benefit humanity, but I knew it was something I really cared about, and I felt that values alignment with Demis and Shane.
The first thing I realized was that you can have all the best intentions at the beginning, but you can't wait until a technical paper is up for review to ask: are we going to release this? Have we thought enough about the downstream implications?
So it's really about shifting to asking: those values and ideals that we have, how do we actually build them in from the start of the research? Now, what's interesting about DeepMind is,
from my perspective, having been involved with a lot of technology companies, that from the very beginning in London it pulled in a very interdisciplinary team, where within a typical academic environment you might not have had people with those kinds of expertise interacting with each other: the social scientists with the technologists, the ethicists, the program managers. Within DeepMind, that was very much how the company was intentionally built. And my job was saying, okay, how do we actually leverage that as a strength? How do we build this in from the start, so that even when we're starting to define what a research idea is, we can do it in partnership with the researchers?
They were looking at it from a technical perspective or their area of expertise, but we pulled together this interdisciplinary group and had the open conversations. My job as a leader is to create the space to do that: to pull the people around the table, to create the safe space for those conversations, and to allow the psychological safety to explore what could go right, what could go wrong, what skills and voices we're missing, and how we augment that.
So, you know, that was very deliberate. I often talk about this as part of my job as COO. People may say that's not the sexy part of it, but it's actually one of the most critical things you can do as a leader.
And when something doesn't go right, you have to be willing to take that accountability as well, to create the space for those conversations to flourish. It's been seven years, and we have really robust systems now. We've done things like really invest in technical safety, which was a pretty nascent field, to try to build this capability. But we also built up our group of ethics researchers, and we strengthened our work around frontier safety, helped shape things like the Frontier Model Forum and a lot of our thinking around what the dangerous capabilities are, but also kept investing in the upsides
as well. Like, okay, great, we released a model, like AlphaFold, but now it needs to get into the hands of researchers worldwide. How are we going to do that? It's not just about making the paper or the model. So I would say a lot of this comes down to: do you have the right responsible research practices in place, the right governance structures, and the right path to impact? And you know what? We're still learning.
What's so important, the Japanese word is kaizen, right, is that you think about whether you're continuously improving. Are we making incremental progress? So it's not just releasing the models, but learning once they're out, not only through our trusted tester community but also more broadly in society, and being able to iterate as we learn and as the technology continues to develop.
I like what you said so much about psychological safety and enabling the conversation to take place. You know, those are human skills, and it's so important to bring them to the table. I think a lot of us in the audience have sat in rooms thinking, I so wish I felt comfortable saying this. Creating that space is actually quite complicated and quite complex, but it's so beautiful when it materializes, when it happens.
I want us to touch a little bit on the practice of building AI responsibly. So Google DeepMind has a set of AI principles that guide your work. But how does it work in practice? What does it mean in terms of the organizational structures that you have, in terms of the processes? How does the rubber hit the road, as it were? Yeah, and I think a lot of that, just as we were talking about, comes down to
how you invest your resources: your people, the talent you want to hire, the budget to be able to do things, the management air cover. So when you look through our organizational structure and what we have invested in, you'll see a continued investment in this space.
And it's not just within our own organization that we've done a lot. We realized, for example, that we didn't necessarily have all the right voices around our table. You can do the best you can within building an organization, but we've also done things like funding diversity scholarships to ensure that in the space of machine learning we're bringing in more voices and more fields.
You know, the intersection of different fields, agriculture and machine learning, or more in the social sciences as well, and global coverage. So I think we've done a lot to try to say, how do we help catalyze the work that's happening in this space, publishing even some of our own frameworks to share best practices with others in the community. We also have an opportunity to learn from others.
But I do think that as AI capabilities grow, this is an area where we're continuing to invest. - Last year, DeepMind published a paper on the ethics of advanced AI assistants. And there's one sentence in that paper that I would love to share with today's audience. It says, "Which path the technology develops along is in large part the product of the choices we make now,"
whether as researchers, developers, policymakers and legislators, or as members of the public. I want to pick researchers from that list of people whose choices can make a difference. We're sitting today within the home of the social sciences, and one of the things we sometimes forget about AI is that this is a technology about people, because the vast majority of the data that trains these models is about people.
And AI advancement itself will affect people. It already is affecting our economies and societies, our interactions, our institutions, our ways of living and working. If people are central to AI, the social sciences must play a key role in AI advancement. And that's because the social sciences offer us the academic lens through which we can understand humans and humanity.
And I think as AI systems increasingly learn from and interact with and impact people, this understanding becomes not just valuable but absolutely essential to AI advancement. Now LSE is one of the world's foremost social science institutions, and I firmly believe that it is our responsibility to ensure that we help shape an AI future that genuinely serves humanity.
We have very high ambitions at the LSE for where we can contribute, but it would be absolutely wonderful to hear your thoughts as well. And my question to you is, what is your wish list for the social sciences? What are the questions you come across, the areas of your work where you think it would be so brilliant to have social science research helping and involved? - Yeah,
I have a positive bias on this because even from my Intel days 15 years ago, I had a team of anthropologists and ethnographers as we were doing some of our work in developing countries. And I actually put them in charge of the architecture definition because they were thinking about how this was going to go out into the field.
And so that experience came very early in my career, and candidly, I didn't even know it was a field when I was in university. So I think one thing to remember as a social scientist is that everyone's not necessarily aware of what skills and potential you can bring into the conversations.
You mentioned the ethics of advanced AI assistants, which was written in partnership with a group of social scientists. I think that's such an important example of what this skill set can bring into the conversation. So my wish would be: help us remember the parts of what make us human.
What are the questions that we need to be asking, even when it may not be comfortable to do so? Because from my experience in the tech sector, it's easy to think you have a vision of how the technology might have an impact, but you need those different voices coming into the conversation to help shape it.
So the things we've been doing, for example, that I wish we could scale up, and I know this is not about any one company or any one country, it really takes a collaborative approach, are things like:
when we're developing the technology, we know we're coming at it from a technical perspective, but how do we bring the musicians along when we're doing generative music? How do we bring the movie producers along when we're doing generative video? The artists to think about things, the philosophers to help us ask the questions, and to ask them in a way that acknowledges we don't have the answers. That actually encourages us to expand the community of practice and the people who engage with us. And again, that's not just for us within Google DeepMind, but also more broadly for the sector. I think there are more questions than we have answers for as AI gets more general, more capable, more prevalent. And so I see the role of social scientists only increasing.
Yeah, no, absolutely. And look, you know, questions are what we want, right? In many ways, this is what research is about and what we thrive on: picking out the areas where we don't know and trying to shed light on them. So yeah, super interesting reflections there. I want us to touch a little bit on the science as well. In November of last year, DeepMind and the Royal Society organized the AI for Science Forum. Now, this was a one-day symposium of leading scientists and thought leaders, held here in London,
to herald a new era of AI-driven discovery.
There were four Nobel laureates present, two of whom are your own colleagues at DeepMind. Demis Hassabis and John Jumper won the 2024 Nobel Prize in Chemistry for their work developing AlphaFold. Can you tell us about AlphaFold and why it matters so much for scientific discovery, present and future?
Let's see. AlphaFold has been an amazing journey for us. I even think back to when I started. It seems now like it was always going to have a big impact, but when I started, it wasn't clear. And it's amazing that just seven years later, there's a Nobel Prize,
the first Nobel Prize for the application of AI, which is significant. So AlphaFold is an advanced AI system developed by DeepMind that helps predict the 3D structure of proteins.
What that really means is this: proteins are the basic building blocks of life. And if you think about diseases like Parkinson's or Alzheimer's, or malaria, these are all protein-based diseases. You may even remember COVID and the spike proteins. So understanding proteins can really help us do things like understand disease and come up with better therapeutics.
And the way this used to work is that you needed really fancy equipment, a lot of money, the right skill sets, and, let's say, a PhD of four to five years to understand how a single protein might fold.
What our advanced AI system, AlphaFold, did was predict the structures of all 250 million proteins known to humankind. And it's now available in a database. People thought this would take 50-plus years; we did it, like I said, within the past seven. And because we released it in a database, you can think of it as being as easy as a Google Maps search.
What that means is that researchers can now get to understanding a protein much more quickly and then do the research based on that. And they're using it for amazing things. Over 2.9 million researchers worldwide are using it, and candidly, we were surprised that there were that many researchers interested in proteins. They're asking things like: why are some crops more resilient to diseases than others?
The University of Portsmouth is doing work on plastic-eating enzymes, so even dealing with industrial waste. And there's work we've done with the Drugs for Neglected Diseases initiative to ensure more equitable access, because there are a lot of diseases that aren't necessarily being funded by pharmaceutical companies, so how do we ensure that we're also helping progress in that space? I think what's been really exciting is that we've now made a giant leap forward in using AlphaFold to advance scientific discovery. And the AI for Science Forum, as you mentioned, was to say:
The model is important, but what's really important is when it gets into the hands of the people who are experts and can go off and do amazing things with it. And so what can we learn from this that we might then be able to take into other fields? And how can AI become like a microscope or a telescope to help us understand the universe around us such that we can address some of society's biggest challenges?
No, it's super exciting work, and I think nobody could have predicted the impact it actually ended up having. Another area where you're making great progress is weather prediction, which sounds a bit unusual. You wouldn't think that this is where it's going, but tell us a little bit about that and why it matters. Yeah, and this is what's interesting, right? We did AlphaFold, but we're also doing work in materials science.
We're doing work in math, and in weather. And weather, as you said, feels very chaotic and unpredictable. It's something I'm learning to appreciate living here in the UK. But we have developed advanced weather forecasting: think of accurate predictions for a 15-day forecast. And we've done this in partnership with meteorologists worldwide. Having state-of-the-art weather forecasting for 15 days means that when a crisis or emergency hits, when significant weather patterns change, we can do a better job of predicting and help with emergency preparedness. Last week we released an interactive website called Weather Lab, and we're working with the US National Hurricane Center to try to help with predicting the hurricane season as it comes about. So being able to have a 15-day view, seeing about 50 different paths a storm might take, and watching it as it develops. Anyone can go and check this out and play with it. It's really fascinating. And you think, okay, we're at this point this early in AI's development; how might this be able to help us
as we deal with climate changes and other things happening in the world. No, that's fascinating. I want to ask you as well, because we're sitting within an educational institution and another area where AI is going to make a huge impact is education. Up in Scotland, I don't know how many of you are aware, we have an organization called the Children's Parliament. And it's an organization that
aims to bring children's voices and human rights to the center of important conversations. They've done a fair bit of work together with a group of AI researchers, led by Mhairi Aitken, to understand how children relate to AI and use AI.
And their latest study involves a series of workshops in state-funded schools in Scotland to understand the impact of generative AI on children. Forty kids took part, aged 9 to 11, and one of the activities they did was to use generative AI to create a piece of art. The report about those school engagements just came out, and I want to read you three quotes.
When asked about the experience of using generative AI to create art, one of the kids said, "I love the way you can type in anything you want, your craziest ideas, and it makes it real." Another one said,
"It kept coming out with a black-brown cow, but I asked for a rainbow cow. I felt annoyed." And a third kid said quite thoughtfully, "I like to feel the droplets of paint on my artwork. AI is just 2D. You cannot feel it." And I find it absolutely fascinating to read their thoughts and to see the diversity of feelings and opinions as they interact with this technology.
By the way, I have twin daughters. I always say I'm running my own A/B test with how they work.
One despises art and would probably fall into one of those categories, and the other loves art, so she would be in the camp of, let me use AI and bring my creative ideas into something, while the other would say, oh, but actually, no, I want to feel it. I think this is what's so beautiful about children, right? Having that ability to articulate points of view and showing how different
we all can be. No, absolutely. And being so straightforward and so honest about it as well. And I think what we're seeing in those quotes from those kids are their very first experiences with the use of AI in schools. We're seeing that technology can be a bit clunky at times. There are no rainbow cows. You can't really feel it or touch it, but we know that the potential is there.
So I wanted to ask you what do you think a classroom will look like in five, ten years' time?
You're full of really straightforward questions. No, you know, this is what's exciting: we actually have a chance to shape this together. I think it's fantastic that we're starting to have these types of experiments and conversations, so we can see what's landing and what doesn't work. And starting to form our own opinions, I think, is quite helpful.
Having been involved in education and tech, in ed tech, for the past 25 years and in many different ways, I think we always find that the ambition of how the classroom will change moves sometimes faster and sometimes slower than we imagine. What I do hope, and what I would love to have happen, is that every teacher ends up
with their own teaching assistant. Can you imagine if every teacher actually had someone to take on some of the administrative work and the grading, so that they could really show up for the reasons they probably got into teaching: less grading homework, more engaging with and supporting the learning process.
And then imagine if students could each have their own personalized tutor, to meet them where they're at, in the learning style that they like, so that when they show up in the classroom, the teacher's attention doesn't have to be on the catch-up, or on trying to explain something in a modality the learner doesn't respond to. You know, my twin daughters learn in very different ways. And so, you know, I'm fortunate that if they need a tutor, I can find one, et cetera, but not everybody is in that position.
Being able to find a tutor that can motivate each girl individually is a challenge. So imagine a classroom with 30 plus students. So this vision of a teaching assistant for every teacher and a tutor for every student is really something that we're striving towards and working in collaboration with communities worldwide to help envision what that might look like.
There's a lot of work that still needs to happen with AI and factuality, and making sure that we've got the right access, et cetera. And like I said, there could be changes there, so the classroom dynamic and the learning mode change.
And there are things that I don't think will change. I still truly believe that the magic in a classroom is the teacher: their ability to inspire, to engage, to bring those personal touches in, to share that kind of curiosity and foster the learning process. So I still believe a lot of that continues to exist,
even with AI coming in. You touched a little bit on access and we're seeing sort of a concerning disparity show up here in the UK. So recent research shows that 52% of students in private schools are using generative AI as opposed to 18% of students in state-funded schools. As AI is entering our classrooms, how do we make sure that we're not going to be leaving some students behind?
This is something I'm actually quite passionate about. Maybe I'll first give you something that we're doing within Google DeepMind, and then I'll tell you something a bit more personal. Within Google DeepMind, this is something we've always felt quite strongly about, which is part of the reason why I mentioned earlier,
when we were talking about how we make sure there are more diverse voices in the field, helping catalyze some of those fellowships at the university level to foster growth in this space. What we did came from realizing that the generation in school right now are the ones who are actually going to live with whatever we build, right? We have to get this into a good place and bring them in as pioneers and builders of the technology.
So we worked with the Raspberry Pi Foundation here in the UK, focused on developing a curriculum for community leaders, working with the age group 11 to 14, specifically in under-resourced areas here in the United Kingdom.
to bring learning about AI, and about the responsible use of AI, because our belief was that if we can help make this more accessible through community-based learning in schools, then it's something we can scale with the Raspberry Pi Foundation. It's been quite successful, and it's at least helped catalyze other conversations. You know, it's not just about us providing this; it's about others being able to do this work.
On my personal side, I actually founded a non-profit. In 2000, I built a computer lab at the orphanage my dad was raised in, as a way to say thank you for my engineering degree. And I had this experience of seeing what might be possible when you bring technology into a classroom, even though at the time they said, we don't need a computer lab at the orphanage, we need food, clothing, electricity. And, you know, my dad said, don't discourage the girl, let her show you what can happen. So in the year 2000 I built a computer lab in two months, hired a teacher, and then got to watch what happened. It's been 25 years since, so I've literally seen a lot of those kids grow up into careers. What that skill set did for them in terms of enabling job opportunities was really powerful. So I started a nonprofit called Team4Tech with my friend, because I'd had this great experience, but I thought: this is one orphanage, a thousand students; how do we scale this more globally?
And it's all about capacity building for nonprofits worldwide, so that the nonprofits can do it in the local context, with a local curriculum and their own teachers; they have the trust, and they understand how to get the connectivity. And one thing that's been really interesting on my personal side is seeing how
much AI has been seen as an opportunity in under-resourced communities worldwide, where they may not have the privilege of a private school, as you mentioned, and instead have to think about things like whether they have access to the technology at all. They need that ability to scale, that efficiency, to express themselves in different ways. That's especially where multi-modality becomes really important, because it's not always about writing; it's also about being able to visualize, to see your colored cows, or to hear the sounds. And I think this is what's really interesting to me:
remembering that with the billions of people on Earth and a technology like AI, it's no longer about building computers and putting them in classrooms; when you release a model, it's available globally. And the question then becomes:
because of that, no one company and no one country has the answer. We have to work collaboratively to think about how we're going to do this responsibly from the start, and bring communities along with us in a way that empowers them to be pioneers in how AI is used locally. No, that's super interesting, and I think those questions of equity and access have to be a priority for all of us.
Thinking of how this technology can increase productivity, equity should most certainly be one of the concerns that we have out there. Keeping an eye on the time, we have come to the part of the conversation where I gift you an imaginary crystal ball, and you will be able to see the future in it. So as you're looking at it, what advice do you have for all of us? What can each of us do to prepare for the future, as academics, students, COOs, curious minds in the audience? I think one of the things that we all need to do is engage with the technology now, in any way that makes sense for you. And I really do believe that
AI will develop based on engagement, on people having a voice in shaping it, on having the conversations. That's why things like this are so important, I think. We're still quite early in AI's development, and so I think that means we all get to be pioneers. Not just me as chief operating officer of Google DeepMind, but all of us.
And so, you know, how are you thinking about AI at the intersection of your day-to-day life, your career, how you work and play?
Or maybe, where is it not being useful? You get a chance to shape this and to help define the future. It's such a transformational technology, and it requires exceptional care. And because it requires exceptional care, it needs everyone's participation. One final question from me before we turn to the audience. It's a very short question: what is the future that you dream of?
Like I said, every day I think about what could go right and what could go wrong. And I feel like we need a very balanced approach. I really want us to get to the point of understanding. I think many of us have personal stories of knowing people who are dealing with chronic illnesses or who have passed away, and of thinking about what better solutions would be, right? How can we think about therapeutics? How can we think about living healthier lives? How do we think about food supply? These are the things I think about as the benefits of AI. And I have questions about the universe too; the science part really excites me. So I hope we can make the progress while also mitigating the risks. I don't think it's one or the other. It needs to go hand in hand.
It's a great note to finish our conversation on and thank you so, so much for an insightful discussion. It's on to you, our lovely audience. There are a ton of questions already. Look, what we're going to do is we're going to select questions in rounds and I'll pick two questions from the audience and I'll pick one question online as well.
Let's go with the person in the back as one of the questions in the audience and then one at the front here as well. Yeah, yeah. One, two.
Hi, Lila. Thank you so much for the conversation. My name is Camelia, and I'm a data science and public policy student here at the LSE. Really beautiful talk; a lot of what you said resonated. I come from a region similar to yours.
And I've been thinking a lot about transitioning to a career in tech, but one thing that's been on my mind is the concerns around tech companies. I'm a firm believer that tech can be very good, and AlphaFold was an incredible example; I've seen a lot of great outcomes of AI for humanity. But I've also seen increasing concerns about some of its use cases. Quite often, uses of AI that can be particularly harmful to other people might even occur without the knowledge or consent of those working on it. So it raises a concern and a question for me: how can I trust the tech companies that I want to work for? So my question would be,
what should I look out for when I look at these companies? What are the kinds of signals that would make me feel like, okay, maybe this is something I want to steer away from, or, this is a place where I can actually see and drive change and be sure that what I do is good rather than harmful? Thank you for that question.
I'll get two more questions and then we'll... While the microphone works its way to the audience member here, we have a question from Billy Wright, who's a Year 12 student from London. And Billy would like to know: how will we know the difference between human innovation and innovation through AI? Yeah, please. I'm Lugna Samara from Higher World. My question really hits on the last
thing you were talking about: our involvement. And we're at such an exciting stage of our involvement, really. I keep thinking about how, when cavemen started irrigating the land, they freed themselves; they had a constant food supply and started being able to have a lot more time to evolve, to learn more about the sciences and arts and cosmology and architecture, and to develop. And it feels like we're at the same kind of juncture. First of all, I'm wondering if that's a bit of a crazy thought. But if it's not, I'm wondering whether you're also planning for what's beyond, whether you're actively looking to architect that future landscape, or maybe you're leaving it to Gene Roddenberry or something. So, thank you.
Where do you want me to start? Wherever you want. I'll take them backwards. On the last question: yes, absolutely, we're thinking about it, and this is a critical role for social scientists that we're exploring with communities outside of our own. As AI becomes more general, more capable, more prevalent, what does that mean? And if certain roles do change, what does it mean for people wanting to learn and understand the universe? And as you said, how do we ensure that flourishing? So we are having conversations about it. There are probably more questions than answers, but I think that's part of the beauty of the process of exploring things together. So it's not a crazy idea. It's absolutely spot on.
The second question, from Billy in Year 12, was? Yes, the difference between human innovation and innovation through AI. I think his question was an "or" question, but there's also an "and" question, which is human and AI. As we've been working with artists, we've heard feedback from musicians, movie and film producers, and more traditional multimedia artists that they're using AI for the ideation process, that iteration of being able to test hypotheses and ideas. One of the things we've heard is that it's helping them with the creative process, thinking of new ways to bring their skills and capabilities into the work. But we do realize not everyone is going to have that same approach. And thinking of the consumers of a lot of that work too, we're developing technology like watermarking, so that there's some transparency in the system about when AI tools have been used.
It's not a silver bullet, but again, it's starting to make progress. And again, we're thinking about both the opportunity and how we mitigate the risk. So this is something I think is going to continue to develop, but we are seeing areas where human art and creativity are flourishing, some where AI is introducing new ways of doing things, and then some really amazing things in the middle where they come together. No, I definitely agree, and I think that's something we're seeing more and more as researchers: a collaboration between what we can do as humans and what AI can help us with.
Yeah, much like the move to digital photography, and then Photoshop and things like that, changed even how we take photos and capture moments and memories. The last question, about...
I think you don't have to be in a tech company to do technology work; that's number one. Number two: regardless of what organization you go to, you should interview them. They're not just interviewing you; you should interview them. Do you have the values alignment? Do you know what you're trying to get out of the role, out of the company? Is it a career there that you're looking for? Is it a specific job? So just remember that as
you go into the job search process, a lot of this is about you fitting with them as much as it is them deciding you're the right person. The other thing I always tell students going through this is to understand values. I actually really appreciated it in the interview process when people asked me and pushed on that: you've got to understand the values, and then you've got to understand the processes that keep the values in check.
And I think those are fair questions. But everybody has different motivations for what encourages them in a job. A career I think of as many different jobs together; it could be at the same organization or different organizations. For me, that's why I spent the 50 hours. I also had the luxury of being able to do 50 hours of interviewing before I took this job. But it's also part of what's made it such a good fit.
We're at the end of our time. I know there are so many questions still in the audience, and that just tells me we need to do more of this. But Lila, it has been a real joy, and thank you so, so much for joining us at the end of the season. And thank you all for participating.
Thank you for listening. You can subscribe to the LSE Events podcast on your favorite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk forward slash events to find out what's on next. We hope you join us at another LSE Events soon.