Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts.
How can open source technology platforms keep AI trustworthy and safe? Find out on today's episode. I'm Mark Surman from Mozilla, and you're listening to Me, Myself, and AI. Welcome to Me, Myself, and AI, a podcast on artificial intelligence and business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of analytics at Boston College. I'm also the AI and business strategy guest editor at MIT Sloan Management Review.
And I'm Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities and really transform the way organizations operate.
Hey, everyone. Today, Shervin and I are pleased to be joined by Mark Surman, president of the Mozilla Foundation. Mark, thanks for taking the time to talk with us. Let's get started. Thanks, Sam and Shervin. Mark, maybe let's start by hearing a little bit about the Mozilla Foundation and your role there. Could you describe that for us, please?
Mozilla has been around for 25 years now. It's our 25th anniversary. We're really making sure that the internet is in the hands of the public, that how we build the internet is something that balances not just commercial interests but also public interests and personal interests, so that humans are kept in mind as we design technology. And in the first era, we focused on the web. We built Firefox. And right now, we're really focused on making sure those values show up in the era of AI.
We really want things like human agency, accountability for how tech gets built to show up in the era of AI.
We talk a lot about that when we hear people talking about responsible AI, but it's not what we see in what gets built for us. Often there's just a rush to get stuff out the door. We've seen that a lot in the last year with all the GPT-X and everything else, these things rolling out to billions of people without a lot of consideration for how they might impact people. And so really what we're trying to do is make sure that that changes, and
trying to make sure it changes through advocacy, trying to make sure it changes through building new open source AI, and also slowly by building AI into things like Firefox, but in a way that actually keeps people in mind, keeps them safe, empowers them. I think that keeping people in mind is big. You use the phrase human agency a lot. One of my personal pet peeves is when we talk about artificial intelligence does X, artificial intelligence does Y.
People use these tools to do something. And, you know, when we use phrases like AI does something, I think we sort of abdicate any responsibility, as if, oh, no, it's the machine doing it. We've got to retain some of that agency here, that we are in charge, or we can be, at least for a while, in charge. So what are some of these initiatives that you're talking about? Well, for example, you've
got the foundation and trustworthy AI and the progress you've made, and Mozilla Ventures. Tell us about some of these initiatives. Well, you know, I think that's right. People need to be in charge. And it isn't AI that does things. Sometimes, actually, it is companies that own big pieces of AI that do things. And so a lot of what we are working on is how you put AI in the hands of people or smaller companies or developers. So one example of that is around open source large language models.
You're hearing a lot about open source large language models, but how many of us have actually used them for something? Used them to build a personal assistant, used them to help do sensitive research, used them in our work or our everyday lives. And so we've launched a company, Mozilla AI,
that's about taking open source large language models and making them user-friendly, making them trustworthy, letting you use them on your own personal data in a way that you control. You'll see things coming out of Mozilla AI early this year. One of the other things we're doing is looking at how you take the current wave of AI and roll it into something new
like in a browser, but not in a way that's about selling you something, actually in a way that helps protect you or helps you make better choices. So over the course of the next few months, we're rolling something called Fakespot, a company we bought, into Firefox; it uses AI to help you spot scams and fake ratings. And so it's those kinds of things, taking the current wave of technology and putting it into the hands of people to make decisions for themselves. So I like that, let's say, a push toward
making the plumbing of artificial intelligence easy for people to use. Otherwise it's going to be dominated by the large technology giants who can afford to put these models in place. And you mentioned the large language models. Obviously they're fascinating and they're amazing, but they're also developed and delivered by large technology companies that don't have my personal objective function in mind. And maybe with this model that you've talked about, when you get this plumbing available and open for other people, then we can start to have artificial intelligence whose optimization algorithm is about what Sam wants, not what some random technology company wants. How far are we from being able to do that?
I think we're a ways away from being able to optimize the AI for us, but not as far as we think. So we're at a spot that I think is similar to when Linux came out, which was an alternative to Microsoft, but it wasn't an alternative that most people could use. And then it got to the place where developers could use it. And then you started to see user-friendly desktop Linux.
And we're at a spot where, at the core of that, more and more people are coming out with open source large language models. How you deploy them, how you use them for what you want, that's hard right now. I would say over the course of this year, you're going to see more and more people come out with stuff that lets developers use open source large language models instead of turning to the big cloud-hosted models. And then over the next couple of years, those are going to turn into things that all of us can use for what we might start to call open source personal AI.
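To make that concrete for readers: here is a minimal sketch of what a developer reaching for an open source large language model instead of a cloud-hosted API can look like today. It assumes the Hugging Face transformers library and picks Mistral-7B-Instruct purely as an illustrative open-weights model; this is a generic local-inference example, not Mozilla AI's tooling.

```python
# A minimal sketch of local, open-weights LLM inference (assumes the Hugging Face
# `transformers` library is installed and you have enough RAM/GPU for a 7B model).
# The model name is an illustrative choice, not a Mozilla AI product.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # open-weights model, runs locally
)

# Your prompt (and any personal data in it) stays on your own machine.
prompt = "Draft a short, polite reply declining this meeting invitation."
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```

Swapping to a different open model is just a matter of changing the model string, which is the kind of developer-level choice being described here.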
The analogy to Linux, I hadn't really thought about that, but it seems interesting from a couple of perspectives. One, most of us don't actually even now use Linux on our desktops. So, you know, if that's the model, then I'm kind of worried that we're not going to have that penetration. On the other hand, if I switch around to think of how much Linux runs the whole of the Internet, then I'm wildly optimistic. And so maybe, to your point, if we can get the developers of tools out there, then
the market will decide about that objective function. If we have competing developers out there using these tools, then maybe the market will help with that. Well, you know, what happened with Linux was...
it became the underpinning for Web 2.0. Linux, Apache, Firefox, and the web stack, right? Open technology allowed developers to create alternatives to Microsoft, create whole new categories of software, of web services. I mean, you wouldn't have a lot of the social media that we use today had some web standards not emerged. Those companies wouldn't have got off the ground had they not been able to set up cheap Linux servers and so on. So I think that's exactly right.
We have an opportunity to create a much more decentralized digital economy than the one we see emerging around the big old companies and the new AI labs that they own. I think there's a chance for something that's much more rich and open than that.
The point you make, though, about Linux is a good one, which is it really shifted things for developers. What shifted things for people really was the web. And it was that, if you think back even further to the mid-90s, it's like all of a sudden anybody could create a web page. All of a sudden anybody could have a digital presence, when really that was something that felt like becoming a publisher and something only rich people could do.
And then we did swing back a little bit where Microsoft tried to vacuum the whole of the web back into Windows. And by the time you get to the end of the 90s, 98% of browsers are Internet Explorer, and they're all oriented towards kind of tying into the Microsoft and the Windows ecosystem. And then you had Firefox come along in 2003, 2004. And again, it kind of swings back to the people.
And I think that's what we'll see in this AI era is you move from a lot of open science, a lot of open research, like you think about the transformer paper that actually led to large language models. That was an era where there was a whole openness and people sharing what they were doing in AI. You now see a bunch of big companies, a bunch of big momentum trying to close it down and grab it all for themselves.
I don't think people are going to want to just live with that. Sure, those big companies will continue to exist, but I think you're going to see a swing back, like we did with Firefox and the open web, to more people wanting to control AI for themselves. I think that's an optimistic view, or one of ours, but I'm somewhat wary of coming across as too anti-big-technology-company, because I do think that there's a huge role for
all players in this ecosystem. But I think your final point there, worrying about some of the land grab, is important. Even if the smaller language models don't end up being huge or taking off, I think their presence really helps, too, because otherwise we have unchecked development around these large technology firms. They don't have to be dominant in the marketplace, I guess is what I'm saying, for there to be value from these sorts of initiatives.
Absolutely. There's a real connection in how I see it between open source, which, you know, if it's working out well, can give a lot of people building blocks to create their own things, and open markets and competition.
And really that's what you want: you just want diversity. You don't want things to close down so that the land grab turns into a few companies controlling how everything works. And it's just finding that balance. One of the things we were really happy to see in the US executive order on AI that came out late last year was this push to the FTC to think about competition in AI early on.
And I think that's the thing that we didn't think about in the Web 2.0 era. You know, there were a lot of land grabs, really arguably a lot of anti-competitive behavior. So it's good to be looking at that early on in the AI era. Maybe we did learn something from the last time.
I've seen some recent papers out there looking at how much of AI is coming out of industry versus academia. That's something else I wonder about. Who has the resources to pull together these models? And perhaps the Mozilla Foundation and others are necessary here because the idea that Sam alone at night in his dark room pulled together a competing large language model is really unlikely given the resources it takes. So what's the model for Mozilla to support these?
Public options is how I think about them, in terms of people experimenting and trying things. And as you say, not all of us have the resources. In fact, most of us don't have the resources to build our own AI systems or train our own models. And you see people like the Allen Institute out of Seattle talking about building a whole pool of open source large language models. You see community projects like EleutherAI, where people are pooling their resources to train things.
That's the kind of thing that Mozilla really wants to support and be a part of. So we're working with both Allen and Eleuther on this kind of stuff. And then you see governments, and I was really happy to see this coming out of both Europe and the US last year, saying we're going to build public infrastructure that researchers and others can use. That's a trend we hope to see continue.
And it's very much one that goes alongside of open source. Open source is about a set of public building blocks. We want to see those in AI. And then kind of shared infrastructure or publicly funded research infrastructure so that people can play with that open source, also critical. And together, those things can drive some innovation that's different and maybe differently interesting than what's going to come out of big companies.
You mentioned that you were supporting Eleuther and Allen. Is that through Mozilla Ventures? Is that how that's working or what's the mechanism for them?
That's a good question, Shervin. We're actually working with those kinds of community partners in a bunch of different ways. There's Mozilla AI, our R&D lab, which aims to take open source and turn it into stuff that people can use, basically, commercial stuff and non-commercial stuff that people can use to take control of AI themselves. We work closely with other open source projects, just like we did with Firefox in the past. So people like Eleuther and Allen are people we collaborate with.
We also, through Mozilla Ventures, fund a bunch of open source AI companies. There's one called Flower, which is working on standards for what's called differential privacy. And then we do a lot of grant making and fellowships for people who are in the AI space, the trustworthy AI space, the open source AI space. People like Deb Raji, who's a real pioneer in open source auditing of AI systems.
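For readers unfamiliar with the term, differential privacy is a way of releasing statistics with calibrated noise so that no single person's record can be inferred from the output. Here is a generic, minimal sketch of the classic Laplace mechanism; it is not Flower's API, and the epsilon value is just an assumed example.

```python
# A generic sketch of the Laplace mechanism for differential privacy
# (illustrative only; not Flower's API). For a counting query, adding or
# removing one person changes the result by at most 1, so noise is drawn
# with scale 1/epsilon.
import numpy as np

def private_count(records, epsilon=1.0):
    true_count = len(records)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

survey_responses = ["yes", "no", "yes", "yes", "no"]
print(private_count(survey_responses))  # close to 5, but randomized
```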
You've described, I think, a very market-oriented approach so far, but you also mentioned the regulatory part. What do you think the role is for regulation in all this?
We're early in the development of these technologies. And I don't just mean AI. I mean, the internet. I mean, like weaving the digital into our life. And we're probably going to be with the digital or what's next in terms of the digital for hundreds and hundreds and hundreds of years. I mean, I think it's a big shift in humanity. And when new things come, you always kind of start with this era where you're not regulating the stuff because you didn't even know it existed. And then you don't know what it is.
I think we've now lived with the digital long enough that everybody agrees it's time to figure out what's the balance between the public interest and private interest. I just see us in that phase. And that phase means doing tech regulation. It means doing tech regulation, in my view, not in a rushed way
and carefully. And so in the kind of wave that is coming out, you see the AI Act coming out of the EU, you see the executive order, which hopefully turns into action this year, coming out of Washington, you see stuff in really every country around the world. The key there is to tackle the big issues first, and then to learn. So I think those big issues are privacy,
competition, and really actually making sure that consumers are protected. I think if you get consumer protection, competition, and privacy right, you'll have the basis of what you need to govern AI.
And then I think there's a bunch of stuff that we're earlier on, which is making sure that we connect, say, what are human rights or civil rights and how do they connect to these technologies? I think that's something we want to make sure we put in policy frameworks. But I don't think we quite know yet how to build laws for that. Maybe what the EU has done around a risk framework is a good start. I think it's going to take us years, probably decades, to figure out how to do that right.
And the main thing is that we build the capacity inside of governments and the relationship between governments and industry and the public to negotiate that over time, to adapt it, to understand we're growing to live with something new. And as long as we balance the public interest and private interest, as I think we've tried to do in things like food safety or auto safety or things like that, we'll find the right path. That's optimistic. I was a little struck by you saying that.
oh, we're early in the internet days and, you know, it feels like it's been around forever. But no, we're still quite new on that. But then you paired it with privacy, which, I don't know, kind of bothered me a little bit as you said it. Because I think about how, maybe how poorly we're doing so far on that. And when we think about how much, let's say, lack of trustworthy infrastructure is in place in the internet to start with. I mean, we're just...
pulling our teeth out to get the world moved from HTTP to HTTPS, right? And while we didn't need that secure, that S on there when it was just 16 computers at DARPA hooked together, we do when we're connecting the whole world. But we find that once these things get entrenched, then they're just brutal to pull back out and retrofit.
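For readers who want the concrete difference that the "S" makes: over HTTPS the client verifies the server's certificate and encrypts the connection before sending anything, while plain HTTP does neither. A minimal sketch using Python's requests library (the URLs are placeholders):

```python
# A minimal sketch of what HTTPS adds over HTTP (URLs are placeholders).
import requests

# TLS handshake: the server's certificate is verified and traffic is encrypted.
secure = requests.get("https://example.org")

# Plain HTTP: no encryption and no proof you're talking to the right server.
plain = requests.get("http://example.org")

print(secure.status_code, plain.status_code)
```

Once a site and every client that talks to it assume plain HTTP, adding that verification step after the fact is exactly the kind of retrofit being described.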
So when you say privacy, then I started to get worried because if that's the analogy, then I'm worried that we're never going to be able to get back on top of this Pandora's box that we've opened. Give me more optimism there.
Well, you know, on the privacy one, I was specifically talking about privacy regulation and, maybe, consumer data protection regulation in the U.S. It's clear there are a lot of dimensions to privacy, and we aren't doing well, exactly as you said. Okay.
And I think as we know that we're not doing well, one of the things we need to do is develop good consumer privacy regulations as a baseline for our social contract in the digital era. And you saw a first shot at that in the EU with the GDPR. I don't think that's really worked out. It's not very pragmatic, although many of the principles are right.
You see an attempt at that in the last couple of years with the California Consumer Privacy Act, the CCPA, which is just one jurisdiction in the US, but actually has some better ideas, maybe ideas that haven't fully been picked up yet, like the idea of
us being able to have data intermediaries or data representatives who are out there acting on our behalf to protect our interests, because privacy is so complicated. Wouldn't you like to be able to delegate it to somebody you trust? Maybe Mozilla in the future? I think moving into that topic from a policy perspective feels important and urgent, and it feels like the bedrock we need as we go forward in this digital era. Yeah.
You're the president of the Mozilla Foundation. I'm guessing that that wasn't your starting job. Tell us a little about your history and your career and how you got interested in these topics and what your background is.
When people ask me about that, my answer always, because it's true, is punk rock and the peace movement. When I was a teenager in the 80s, I was very much a punk rock kid. And punk rock was really tied into, or at least a branch of punk rock was tied into the fact that we're in the middle of the Cold War. It was scary. And the peace movement was there saying like, less nukes. And so I was kind of into both of those things, cared about both those things. I cared about the music. I cared about the politics.
And I happened to live in a very small town where, in high school, I got to work at a network TV station at night, running the shows, running the commercials.
And they kind of had a rule that we could play our own public service announcements. Like, if the commercial time wasn't sold, you could decide to play a Red Cross commercial or one of the big PSAs of the time. And I thought, why don't I make a commercial for my peace group? And just, you know, I've got this empty time; I can play it. And I did. It was very corny. It was the first video I produced. I produced many more later.
And I came in one day to play that public service announcement and I couldn't find the tape. And I went to the station manager and I said, do you know where the tape is? And he said, oh, well, the station owner said we don't play local public service announcements.
And that was completely arbitrary, of course. And he didn't like my message of punk rock and the peace movement. And I guess that was an early, you know, pretty privileged, but early lesson in censorship and how media ownership ties to censorship. And really my whole career since then (you know, I went to film school) has been about focusing on people having their own voice through communications and through technology.
And when the internet came in the mid-90s, I was like, oh, you know, this kind of activist filmmaking stuff that I had started to do, I bet you can actually do a lot more with this internet thing. And, you know, I haven't turned back since. So when you think about how Mozilla is organized, are two people thinking about AI? Are seven people thinking about AI? Is somebody thinking about it in their lunch break? How are you getting these sorts of messages throughout the foundation?
Mozilla obviously started out by thinking about the web. And that was the technology that defined the moment, defined what was going to happen for a decade or two, starting in the mid-90s. And
a few years ago, a bunch of us came around to saying, "Look, we can't just think about the web. This AI thing, data-driven computing, that is going to define the next few decades. And we need to take our values, openness, people having agency, privacy, and make sure that those shape where the AI era goes."
And so we wrote a paper about three, four years ago on what we saw as a vision for trustworthy AI and slowly started giving more grants, doing the philanthropic side. But over time, kind of everybody across Mozilla has started to say, we need to do more to make sure that AI goes in a direction that somehow reflects the values that we have. And so we set up this AI R&D company, Mozilla AI. We set up Mozilla Ventures, about two-thirds of
the companies, and I think there are 30 companies in there now, are focused on trustworthy AI. And gradually, in our core products, including Firefox, we're looking at how we layer in trustworthy AI. So maybe it started four or five years ago with a few of us thinking about it on our lunch break.
And now we're at the spot where really everybody across Mozilla is starting to think about how do we play the role in the AI era that we played in the web era in terms of shifting the direction of things and decentralizing power. Mark, the Mozilla Foundation is about to release a report on trustworthy AI. Could you tell us a little bit about the focus of that? There were four things in that
initial paper that we looked at: if you want trustworthy AI, if you want more agency, if you want more accountability in the AI era, what are the things you look for? And we talked about shifting the industry norms, how stuff gets built; shifting what the technology is and the products are that people actually have available to them; shifting consumer demand; and then shifting the policy landscape.
And so we looked at all those things in this recent report and said, how are we doing? And interestingly enough, we're doing okay on some of them and horribly on others. I mean, maybe that shouldn't be surprising. On the policy front, it's better than we predicted. You know, three, four years ago when we wrote that paper, we talked about just making sure that policymakers had the expertise to write good AI regulation.
And you've really seen policymakers step to the fore. We haven't solved it all, but you see more capacity. You see things like the AI Act and the executive order that came out of the White House last year. So that's promising. It's not solved, but it's promising.
On the flip side, if you go to industry norms, we saw a trend a few years ago for more kind of AI ethics people inside of big companies. That's turned around. We've seen a lot of those teams get let go.
The flip side of that, though, is you see people through Mozilla Ventures saying, okay, if the big companies aren't going to do it, I'm going to start my own. And we see a lot more trustworthy AI startups focused on auditing or even focused on social media that has a more human dimension.
And then I would say that piece in the middle, consumer demand and whether the main products that we use and that we choose reflect a vision of AI that is more human, more trustworthy, is a real sweet spot to focus on in 2024, because you've got real public awareness that there's something to worry about, care about, in relation to AI,
but you don't yet have a way for people to act. Like, what are these products that are going to be different? And that's something where we hope to fill the gap and we hope that startups will fill the gap. And that kind of nascent consumer desire for something different, that nascent consumer worry about AI starts to get filled with products that people can trust. I like that. I mean, yeah, you're right. I do see the distinction between awareness and having those available and
certainly having them available and no one aware doesn't do any good. But like you say, I do think there's more awareness growing. But of course, we're a bit like carnival barkers, where we promised something and excited a need for something. And now I think there needs to be a rapid filling of that need, or else we're going to have another sort of backlash behind that. So was there anything in that first paper that you feel like you missed that you really want this new report to address?
I think we didn't put enough focus on open source in that first report, and it really has become both a key opportunity and a key battleground. The opportunity, as we've seen more concentration of power in AI, much more than we imagined three years ago because we weren't in the generative AI era yet,
is that open source could be one element in pushing back on that concentration of power and letting small players carve out a piece of the pie. And of course, it has to come with things like competition regulation and breaking down concentration of power more directly. But open source feels really critical, as the land grab happens, to counteract it. And we didn't talk about that the first time.
And at the same time, as the regulatory conversation gathers steam, you're seeing various players, and I think it's probably some pretty self-interested players, questioning open source and AI safety, saying, what if open source gets into the wrong hands and people use that open source AI in scary ways? And of course, that's a thing to worry about. Any technology can be used maliciously, and this likely is going to be very powerful technology.
But time and time again, we've seen that that is true of both proprietary and open source approaches, that they can be misused. And frankly, open source approaches give us a way to scrutinize what's going on and to fix it faster than proprietary technology in many cases. We've got a segment where we ask you a series of questions. These are rapid fire questions. So just think about the first thing that comes to your mind. What do you think is the biggest opportunity for artificial intelligence right now?
I think the biggest opportunity for artificial intelligence is to take the digital world that's still kind of complicated to navigate and make it disappear and feel natural and a part of our lives in a way that it isn't yet.
and do it in a way that we have control and agency. So maybe the answer is a lot of the web browsers, smartphones, interfaces we have today disappear and are replaced by personal agents, things that naturally we express ourselves through as we interact with all kinds of digital things, other people, other organizations.
I like the flavor of disappearing, and I hope it's a disappearing because we don't have to worry about it and not disappearing because we don't know that we need to worry about it. And I guess that's some of the awareness we were just talking about. What's the biggest misconception you think people have about artificial intelligence?
Certainly people think that AI is a thing. AI is not a thing. There isn't any artificial intelligence. It's just an era of computing, a set of disciplines that are about using data to allow computer systems to be adaptive, to predict things.
to make things feel like they're happening naturally. So I don't think we should look at these things as artificially intelligent or intelligent in any way, but rather as things that we should find ways to use and control and shape so that there's more ease in our lives. That sounds great to me. I'm ready for more ease. What was the first career you wanted after you finished your punk rock career and, I guess, your No Nukes career?
I definitely wanted to be a documentary filmmaker in the beginning and, you know, wanted to change hearts and minds by telling the truth. And maybe that's still what I'm trying to do in a different way. Perhaps more powerful with software than film these days, I guess. When do we put too much artificial intelligence in things? When is there too much AI? There's too much AI when we're talking about
things that make life and death decisions. There's too much AI when we're talking about stuff where we need a feeling of humanity, where we need kind of like human emotion in making good judgments or just actually in being connected to each other.
So, you know, we really shouldn't be using AI to do risky, dangerous things that somebody needs to be held accountable for. That's still a place for people. We shouldn't be using AI when we're trying to create deep human connection and have it simulate that. That's what we are for each other. So what's one thing you wish artificial intelligence could do right now that it currently can't? What's a limitation we have?
I wish artificial intelligence today could really just work for me. I think I could have an AI on my phone, on my laptop, even in the cloud that I knew was really accountable to my interests and had the capabilities to interact with all the other automated systems around us in ways that I kind of trained naturally and trusted over time. Through our discussion, you've talked a lot about this interplay between
the technology giants and the market forces and how these things come together. I think it's really interesting to think about. We can have too much market just the way we can have too much regulation. But one of the things that's really coming out from your discussion is this idea of balance and these forces working cohesively together to get us to a point that we want to be at versus dominated by one of those. I appreciate you taking the time to talk with us today. Thanks for talking to us. Thanks, Sam and Shervin.
Thanks for listening, everyone. We've just completed season eight of our podcast. We'll be back on March 19 with new episodes and have a couple of bonus episodes for you coming this winter. We hope you can join us.
Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders. And if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, learn more about AI,
and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com forward slash AI for Leaders. We'll put that link in the show notes, and we hope to see you there.