Open source software is a set of building blocks, like a Lego kit, that anyone can use to create something. It underpins most digital products today, enabling startups and creators to innovate quickly by leveraging freely available tools. Open source is defined by four principles: the software can be used without encumbrance, studied, modified, and shared, even for commercial purposes, as long as you aren't charging for the software itself.
Motivations vary widely. Early open source was driven by individuals scratching their own itch—modifying or creating software to meet personal needs and sharing it with others. Today, motivations include collective or corporate utility, such as lowering costs and increasing reliability for infrastructure like cloud computing. Some also build open source as a business strategy, offering support or services around free software.
Open source in AI ensures transparency, safety, and accessibility. It allows multiple players to contribute to safety measures and innovation, preventing a few vendors from monopolizing critical infrastructure. Open source also democratizes AI, enabling smaller players and researchers to build on shared resources, fostering a more inclusive and competitive ecosystem.
Public AI focuses on creating AI that serves humanity by ensuring public use, public orientation, and public goods. It complements commercial AI by addressing gaps the market won't fill, such as safety, transparency, and equitable access. Public AI aims to build infrastructure that benefits everyone, fostering innovation and trust in AI technologies.
Open source provides a foundation for nations like India and Greece to adopt AI for critical needs like healthcare and education. By leveraging open source AI models and infrastructure, these countries can build solutions tailored to their populations without relying on proprietary systems. This fosters a global commons where innovations in one region benefit others, creating a virtuous cycle of progress.
Privacy in the AI era is complex because AI systems rely heavily on data. Traditional privacy models, like minimizing data collection, are insufficient. Instead, privacy must evolve to include user control over data exposure, such as on-device processing and opt-in features. Open source solutions like Flower AI aim to provide privacy-focused alternatives to commercial AI systems.
Governments should focus on three key areas: updating antitrust laws to ensure competition, establishing clear AI and privacy guardrails, and directing innovation funding toward public goods. This includes supporting open source AI infrastructure and ensuring that public dollars spent on AI research produce open, accessible outcomes that benefit society as a whole.
The future of open source in AI will likely see a coexistence of open and proprietary systems. Open source will dominate the infrastructure layer, providing foundational tools that benefit safety, entrepreneurship, and innovation. Governments and public funding will play a crucial role in fueling this open source ecosystem, ensuring it remains a public good that supports global progress.
For someone who hasn't really captured the idea of open source, what does that mean? Most of the things you use today, whether that's made by Google, made by Amazon, made by Meta, or made by some startup, or made by some weird artist, or made by your cousin,
In most cases, there's a bunch of open source underneath it. What are the motivations for someone to build the open source? The mythological early open source motivation was scratch my own itch. I'm using a piece of software. It doesn't do the thing I want it to do. So I'm going to modify it or I'm going to build something and like I'm a nerd, I'm going to build the thing I want and share it with other nerds.
And I think that is still a part of it. Do you believe we actually have privacy? Oh, it's such a tricky question. We struggle with what privacy is now. What we think privacy means has gotta evolve.
Welcome to Moonshots. Today we're going to be talking about the open source movement with Mark Surman, who's the president of Mozilla Foundation. As president, he leads an effort to drive a more open, equitable, and trustworthy internet, focusing on advancing ethical AI. What does that mean? What does open source mean? That's our conversation.
We're going to be diving into a paper they just released called Public AI, Making AI Work for Everyone. I'm going to put the link in the show notes. And he also put out a paper recently on trustworthy AI. I'll put that in the show notes as well.
So if you've wondered what open source means, if you've wondered how it's going to impact AI, is Llama truly open source? You know, dive in with me here. All right, let's jump into the conversation with Mark. And if you like this podcast and the people I'm bringing to you, please subscribe. All right. Welcome, Mark Surman. Before we get started, I want to share with you the fact that there are incredible breakthroughs coming on the health span and longevity front.
These technologies are enabling us to extend how long we live, how long we're healthy. The truth is a lot of the approaches are in fact cheap or even free. And I want to share this with you. I just wrote a book called Longevity Guidebook that outlines what I've been doing to reverse my biological age, what I've been doing to increase my health, my strength, my energy. And I want to make this available to my community at cost. So longevityguidebook.com, you can get information or check out the link below.
All right, let's jump into this episode. Mark, good to meet you in person, sort of, in this virtual world we live in. Sort of in person, yeah. As in person as most life gets these days. It's crazy, right? We forget how convenient it is that we can live this virtualized life. And for me to have met you and have this conversation in the past...
I would have had to literally jump in an airplane, fly to Toronto, in this case, New York, where you are right now. But we're living in this extraordinary world. So this conversation for me is an important one. And I think...
I want to dive deep to understand what you think of as the open source movement. What does it mean? Why does it exist? What are its advantages and what are the trade-offs? Right. We hear about open source a lot. I've had conversations in the past with Emad Mostaque about this, with Elon Musk, with others.
And as president of Mozilla Foundation, it's an area that you are advocating for. Let's jump straight in. What is, for someone who hasn't really captured the idea of open source, what does that mean? I think at the very top level,
It means that for the last 20 years, 25 years, however you want to talk about it, as we've started to build out like billions of us, this digital world,
that there is a set of Lego blocks that anybody who's capable of using them or wants to learn how to use them can build something out of. I think that's a, I'll define open source in a second, but I think that's really the key piece is that underpinning most of the things you use today, whether that's, you know, made by Google, made by Amazon, made by Meta or made by some startup or made by some weird artists or made by your cousin is,
In most cases, there's a bunch of open source underneath it. And that open source underneath it is the Lego kit that has let so many creative people, so many startups go fast because they've got a bunch of stuff they can take for free to start building something. And then on top of it, they might build something that's closed or they sell to or whatever. So I think...
And that Lego kit, I mean, there was a recent Harvard study that talks about open source, over the last 20 years, creating $8 trillion in value. And so it's a critical part of the digital economy we're living in, with the creativity part and the money part of it coming from the fact that we have this Lego kit. It's like, what is... well, go ahead here. Yeah, I was gonna say, so you've got this... I think Lego kit's a great description of it: chunks of software that are available for anyone to use without cost, for free.
Is it always for free? There are four things that it means. And it's not always, like, free as in beer, to use an old open source term. It's free as in speech. And so the four things that make a piece of software open source are: you can use it without any encumbrance, you can use it without paying for it. You can study it. So that means you can look inside. It's transparent. You can see how it works.
you can modify it so you can turn something that you pick up into something else and you can share it again. So sharing it also means you can make money on top of it as long as you're not charging for the thing itself.
And so, you know, that's what makes something open source underneath. There's a lot of "how does that play out in AI" debate going on right now, and we can get to that too, but those are the basics. Mark, thanks. But let me dig a little bit deeper here, because I really want to understand the core motivations and the core principles of open source before we dive into AI, which is equally important there. So what are the motivations for someone to build the open source software?
Is it just curiosity and desire to help? And then one other part of the question I have, even the open source software that someone is building, those Lego blocks right now, those are built using non-open source elements, right? I mean, at some point you go down and you're saying someone's paying for the bandwidth, someone's paying for the compute, someone's paying for the stuff that DARPA does.
spun out, and then Microsoft took over, right? So it's not, as the old principle goes, turtles all the way down. It's not open source all the way down.
So there's some open source layers, right? So help me understand that and then the core motivations for people playing. Well, I think it's important, and we'll get into this, I'm sure, to distinguish between it costs money and it's open source. So all the way down, there's stuff that costs money. Our time is worth something, even if we're not getting compensated in cash or the bandwidth, the compute, the DARPA, all this stuff. And in fact, in some ways, that cost of...
In some ways, that cost of building it can either get privatized or can turn into public goods. And so that's the nice thing about the history of DARPA and the Internet, right? Is this big public investment, which turns into these open protocols, not open source, but open protocols, which run the Internet, which are public goods. And so they actually take public dollars, tax dollars, turn into something that actually is open that everybody gets to benefit from.
So I think we want to just be careful there. In terms of what are some of the motivations, they really vary. So if you think about what are some of those building blocks: Linux or Apache or Firefox, or Wikipedia, which is not software, but let's call it open source. I mean, it is open source, it's just not software. Or an open source AI model like OLMo from the Allen Institute.
There's a lot of motivations. I think the mythological early open source motivation was scratch my own itch. I'm using a piece of software. It doesn't do the thing I want it to do. So I'm going to modify it or I'm going to build something. And like, I'm a nerd. I'm going to build the thing I want, share it with other nerds. And I think that is still a part of it, right? Is like, I just need to make a thing for myself.
is really the heart of it. I think Linus Torvalds, when he started Linux, it was some combination of, I want a version of Unix that works the way that I want it to work, and I'm going to make it.
But then, as you go on, it absolutely is maybe a more collective or corporate or even commercially driven utility. It's the scratch my own itch, but at the level of a Meta or a Google or an AWS, which is I need a software stack that's gonna run my cloud computing platform, that I can build a social network on. And lowering the cost and increasing the reliability of that
is my absolute goal. I don't want to care about that thing. I just want to make sure it works and it costs me as little as possible and that it's as flexible as possible. So that's why, say, something like Linux, which is really under most of the things we use on the internet in terms of cloud computing,
All of those companies I just mentioned put huge amounts of money or engineers back into Linux because it's the plumbing. And if they all basically collectivize the cost of the plumbing, it becomes the standard. It just works. It's much cheaper than buying...
you know, Solaris or Windows NT, which don't exist anymore, the proprietary things from the past, or from building it themselves. So I think there's a next level of scratching the itch, which is: get the thing I want, cheaper, in a way that works for me. And collective effort is the way that works.
And then, you know, there is an end where somebody's really building open source as a business strategy on its own. And so, you know, I'm Red Hat and I take a lot of stuff that's already been built and leveraging everybody else scratching the itch. I'm building some other open source to glue it together so it's easier to use Linux and
And then I'm charging, but not for software. I'm not charging for software; I'm charging for support, I'm charging for other kinds of services that allow people to use what's free in a way that's more effective. So I think you kind of have the personal itch.
You have the collectivist or the kind of infrastructure play, and then you have an open source business. So it's going to be interesting as we dive into the conversation around AI. And I know I'm excited to hear your thoughts. You recently wrote a paper called Public AI, Making AI Work for Everyone. And I want to dive into that because there has been a lot of advocacy for open source and transparency in AI.
And at the same time, there's been a lot of, how do I put it, giga dollars put into the field. A lot of capital flowing in. It's an expensive, energy intensive field. So I want to understand how the two can coexist. You know, how do you make it happen fast enough, safe enough, with the proper motivations and
And how do you get the smartest people, sort of the meritocracy, to work on it with the right incentives? So this is going to be our conversation for those listening who want to understand where we're going to go. And I'm going to play both the fan and a skeptic on both sides of the equation, Mark, if you don't mind. Please. Yeah. So, you know, I've started, I don't know, seven or eight nonprofits over the years.
And I've sworn that I'm never going to start another nonprofit. And the reason for that is the inefficiency of what I see in nonprofits. And, you know, I'm executive chairman of the XPRIZE, which I think is a highly efficient, leveraged organization. We've launched 30 prizes over 30 years, $600 million in competitions. But we've got to struggle day in and day out to raise the money, to convince somebody, either with in-kind support or time or donations, to fund us, versus, you know, building a business that is churning revenues that I reinvest.
I've had an argument for years that Google, even though it's a for-profit, has done such extraordinary benefits for humanity, making knowledge accessible, leveling the playing field, democratizing and demonetizing it around the world, that if you tried to create that kind of capability in a nonprofit, I don't think it would have been possible,
given the amount of capital needed. So how do you, so by the way, you can make the point first off that open source doesn't mean nonprofit just to, just to begin there. You made it for me, Peter, you made it for me. So, I mean, open source doesn't mean nonprofits. We go back to your original question of like, you know, how does, I mean,
maybe I'll take as implicit, open source compete or provide value or whatnot in a world where there's so much money on compute, it's so expensive to do this, we need the brightest minds, all of that.
And I think the answer is twofold. One, our vision of open source and certainly public AI, which is this broader concept, which we can get to in a little bit, isn't that it's exclusive from proprietary or closed things. Open source has always done well in a world where it is a counterpoint, a complement, right?
or something actually that comes a little bit later and replaces some commercial innovation. So Linux really is the server operating system, and you don't have the Solarises and the Windows NTs. And that's because it just becomes such a commonplace thing that it kind of commoditizes as open source. So it's not to say open source exclusively.
And then on that question, like how does that get financed or how does it come to be? Because it's not going to just be somebody in their basement who's going to create the Linux of the AI era, although lots of those people are doing cool stuff.
You already see with Llama, for example, that Meta sees a very clear interest, and NVIDIA, who's helping them, and lots of other companies who are in the open source AI space, and IBM, and us, that the idea of...
moving early to commoditize and collectivize what is effectively becoming commoditized infrastructure. Like a lot of what, I mean, I think you see the core innovations of transformers and LLMs are going to become pretty commonplace. They become commoditized. Isn't it crazy that the world's most powerful technology is effectively free?
Yeah, I mean, that blows my mind. I mean, if you'd gone back a decade and said, listen, you're going to have these things called large language models, and you're going to be able to ask it any question, ask it to create video clips, images, and it's perfect, it's like the world's knowledge is at your fingertips, and a company like Meta is actually putting it out there for free, or the others are putting it out there for free.
How much would you have guessed a decade ago that a license to that for an individual would cost, right?
I mean, it's insane. It's free. Well, what's interesting actually is a decade ago, if I'd imagined this technology, I would have imagined it was free. Two years ago, I would not. So I think we were actually in a moment as ChatGPT exploded where it looked like all this stuff was going to be thrown behind APIs. Nobody was going to release it. Because there's a lot of open source, there still is today.
in AI all the way along, how the innovation moves so fast, is you have researchers, scientists talking to each other, writing papers, sharing their code. So 10 years ago, five years ago, I would have imagined there's a lot. And I think we got to a moment where really stuff got enclosed and locked down. You think about OpenAI taking the transformer paper, innovating, really boxing everything up, productizing it, which is good in terms of end user value, and
really moved quickly. But, you know, Llama was able to kind of get out there and, you know, provide basically the same thing, because I think you see that Meta is playing the Linux version of that, the Linux play. I was going to say, what's Zuck's motivation here? Is it catching up? Is it just taking a sort of adjacency that
gives him a larger fan base or a different user base? Why did he do that? I think Zuck's motivation is the same motivation as "I put money into Linux": if I'm Meta, I'm not in the LLM business. I'm not trying to be OpenAI. I'm in the metaverse business, or the social media business, or whatever business they're going to be in.
And I need this current generation of technology to work as reliably and as at a low cost as possible. So they put a ton of money in up front. It's a big risk.
to try to have Llama be the Linux of the LLM era. But then I just saw an interview with Jensen and Zuck where, you know, NVIDIA is supposedly putting 200 engineers into Llama. And so I think the idea is, if it becomes the standard, everybody pitches in, and it's a kind of Linux play. And so that's the motivation. I think the tricky thing about Llama and Meta with that is they've put in this thing that makes it
makes Llama, in our view, not open source, which is this weird license piece that says at 700 million users, it's no longer free for you anymore. It's no longer open source. And open source licenses don't have those caps. It kind of breaks the covenant of open source. Imagine if
If Zuck had built Facebook on Linux and when he hit 700 million users, Linus Torvalds shows up in Palo Alto, knocks on his door and says, "Sorry, but the money..." Giant bag. Right. It doesn't work that way. And so I actually do think other things will emerge, and this actually gets to your question about nonprofits in a second.
that will be pure play open source and that will actually become the dominant infrastructure. Did you see the movie Oppenheimer? If you did, did you know that besides building the atomic bomb at Los Alamos National Labs,
that they spent billions on biodefense weapons, the ability to accurately detect viruses and microbes by reading their RNA. Well, a company called Viome exclusively licensed the technology from Los Alamos Labs to build a platform that can measure your microbiome and the RNA in your blood. Now Viome has a product that I've personally used for years called Full Body Intelligence,
which collects a few drops of your blood, spit, and stool, and can tell you so much about your health. They've tested over 700,000 individuals and used their AI models to deliver members critical health guidance, like what foods you should eat, what foods you shouldn't eat, as well as your supplements and probiotics, your biological age, and other deep health insights.
And the results of the recommendations are nothing short of stellar. As reported in the American Journal of Lifestyle Medicine, after just six months of following Viome's recommendations, members reported the following: a 36% reduction in depression, a 40% reduction in anxiety, a 30% reduction in diabetes, and a 48% reduction in IBS. Listen, I've been using Viome for three years. I know that my oral and gut health is one of my highest priorities.
Best of all, Viome is affordable, which is part of my mission to democratize health. If you want to join me on this journey, go to Viome.com slash Peter. I've asked Naveen Jain, a friend of mine who's the founder and CEO of Viome, to give my listeners a special discount. You'll find it at Viome.com slash Peter.
Before we go any further, I'd love you to give us a little bit of a background on Mozilla Foundation. It's the context in which you come, it's the work that you're doing, and I think folks should have a little background here.
Please. Well, it's interesting because we get back to that question of motivation. Mozilla started in 1998. I wasn't around. I was a fanboy from the outside. As the open source project on top of the Netscape browser source code. So, you know, Netscape was losing to Internet Explorer. They'd been the first big browser after Mosaic. I remember well. And so they thought, well, if we put it out there, maybe other people will run with it. And you did have...
Red Hat and Sun and IBM, a bunch of other people contributing to the open source version of Netscape, which was called Mozilla. But what really drove it was for about five years before Mozilla Foundation even existed, it was just really a bunch of hackers around the world trying to beat Microsoft. And so actually the motivation question was, these guys stole the web from us. Bill Gates didn't even want...
to have the web. They didn't believe in it. And then now they've got 98% of browser market share. They've got ActiveX, so you can only use web pages really on Windows or inside of Internet Explorer. And, you know, so there was an army of very angry geeks that said, Microsoft, you don't own the internet. We're going to show you. And eventually, it takes them five years. They get from this clunky Mozilla browser that was just a kind of a
slight and evil version of Netscape to Firefox. And Firefox is the kind of breakout of like, let's make it small. Let's make it sexy for people. Let's put in pop-up blockers and let's make sure it does JavaScript really well, which sounds boring today, but was radical because it's,
What it became was the thing that allowed people to develop interactive web apps instead of dumb web pages. And the joke kind of goes, like, what's the best version of Firefox ever released?
Internet Explorer 7. Because that's the one that had JavaScript in it. And until you had all the browsers working with JavaScript and Ajax, you couldn't do Gmail, you couldn't do Facebook, you couldn't do Twitter. So that was the set of people that really wanted the web to be open. So Firefox makes an emergence and it takes on market share from Internet Explorer.
But then Chrome comes along and begins to dominate. And what was it that, why didn't Firefox dominate? And why did Chrome, what allowed Chrome to come in with such a fury?
Well, the cheeky second answer to the joke about Firefox is the best version of Firefox ever is Chrome. Okay. Because really the goal of Mozilla is that the whole system is open. And, of course, we need to have enough market share, which we don't today with Firefox, if you ask me, to be influential in our values. So the other thing about Mozilla is it wasn't just that kind of passionate set of people who wanted to counteract Microsoft.
that only focused on a browser. They had a bigger dream. They had the Mozilla manifesto. And that was really about the internet being in service of all humanity. And so, you know, the idea is we've got enough openness to shift the market towards being open and towards the tech working for people.
And I would say Chrome came as a third real platform after Internet Explorer and Firefox. That was a boon. There was a period where we really lost the ball in terms of keeping up with the tech. And it's a lesson that is if you want to have this mission, and it's going to be true now in AI, of making sure that the tech has certain values, you also have to be on top of the tech being great.
And we didn't always stay on top of that. Isn't it true you also have to be ready for verticalization? I mean, Chrome has succeeded, as has Gmail, as has Android and a thousand other things, Maps, because Google was large enough to verticalize and build interdependencies on these things that made it super convenient for folks to use.
And I can imagine we're going to see even more of that with AI in that verticalization, that deep stack up and down the user experience. It's an interesting question about where verticalization, which is a natural tendency and ultimately monopolization, which is something we don't want and is illegal in our society.
That's a tendency that will emerge from companies that are trying to get as much market share and as many things as possible. And disruption interacts because Microsoft also found to be a monopolist in its era had totally verticalized, right? They owned the server room, the databases, they're trying to own the content, they certainly owned the browser, they owned the office suite, all of that stuff. And they get disrupted by the web.
And so one of the interesting questions is, the web era and the smartphone era are verticalized as well. There's real tendencies towards that verticalization in the AI era. I mean, Gemini actually, even being late to the game, I think it has a real advantage being built into all of the Google suite.
What will come to disrupt that? And have we gotten so far along that disrupting the verticalization becomes much harder, or almost impossible? It's a really critical question right now. You know, call me cynical, but I think every single company and every single product eventually becomes disrupted, because that's how they become... comfortable, right? I'll never forget, about six years ago Jeff Bezos goes on an investor call and says, in 30 years Amazon will not exist anymore, or some quote like that. And I don't think he's trying to scare his employees or dock his stock price. But everything gets disrupted, right? FedEx, I mean, the dominant overnight carrier, as we see what Amazon has built.
Now, one of the questions is, there is a situation where a company, a for-profit company with a great leader and motivated employees, just does a better job, and they can reinvest, and they can, as a meritocracy, just continue to increase their capabilities. And, I mean, there's a difference between monopolistic behavior and being a monopoly, right? If you have the best product in the world and everyone loves it, and, guess what, you've got 99 percent market share, is that monopolistic behavior, or is it a monopoly, or are you just providing a fantastic product and service?
This is a tricky question before the courts on a number of topics right now. Right. It's a great product. So, I mean, I guess, as we evolve our monopoly laws... Ultimately, what you want from antitrust regulation is competition and the opportunity for people to come in and disrupt. Sure. And that's where, I know that you're being cynical, but I think you're being hopeful when you say ultimately every company is going to... No, I believe it, because every company gets fat, dumb, and happy to some degree.
And new technologies constantly, I mean, that's the laws of physics or technology. We just have constant, you know, we're in a super exponential period. And I'd say the day before something is truly a breakthrough, it's a crazy idea.
And we get disrupted by crazy ideas that a company that's reporting on a quarterly basis is unwilling to take. But some entrepreneur someplace is willing to, they have nothing to lose. So we'll take the bet. And oh my God, that's incredible.
Well, probably maybe five or 10 years before, not the day before. And that takes it from 2015 to... But I think your point is right. One of the things I want to pull out, and it goes back to a question I didn't answer earlier, but also on this, like companies are going to want to grow in this way, is there also is a question of disruption to one end, right? So some people will come in and disrupt because they've got a great idea. They want to build a company. And like...
capitalism and companies are great at solving certain problems and creating public good and even public goods, things that are shared in common, contributing to Linux, all that stuff. But there also are things to your question of not starting a nonprofit that companies are just never going to be good at.
And so that's where when we talk about public AI as a counterpoint, I don't think that a company is ever going to be good at what has happened with Linux, which is a collective public good that all the companies who build on it and researchers and everybody in governments build on top of. And for all it's a pain in the ass that Linux Foundation struggles with getting enough members.
that is a form of social organization that lends itself to being an independent third party.
And unless you want to get rid of governments altogether, I mean, that's another form of social innovation that has its purpose. I would call myself a libertarian capitalist, so that's where I bend towards. Yeah, and I'm an old-school anarchist, and so sometimes we'll have some common cause there. But I think the thing is, what social forms or social organizations are helpful to what innovation, and to accelerating what innovation? So are we happy that
in addition to NBC, ABC, CBS, we also had PBS in the broadcast era? I think it took on a role that the commercial players were never going to take on. And so to me, the question at any point is... But then innovation comes in, and YouTube comes online. Great. And you don't need PBS or BBC or CBS. And so when we have a government-formed platform
provider, right? I would rather have complete and total open access for anybody to provide whatever they want. But that comes later in the process. And I think the question to always be asking is what's not going to happen if you just leave it to the market? Sure.
I think that's a very fair and a very important question for humanity's benefit. Everybody, I want to take a short break from our episode to talk about a company that's very important to me and could actually save your life or the life of someone that you love. The company is called Fountain Life.
It's a company I started years ago with Tony Robbins and a group of very talented physicians. Most of us don't actually know what's going on inside our body. We're all optimists. Until that day when you have a pain in your side, you go to the physician in the emergency room and they say, listen, I'm sorry to tell you this, but you have...
a stage three or four cancer going on. And, you know, it didn't start that morning. It probably was a problem that had been going on for some time. But because we never look, we don't find out. So what we built at Fountain Life was the world's most advanced diagnostic centers. We have four across the U.S. today.
And we're building 20 around the world. These centers give you a full-body MRI, a brain MRI, brain vasculature imaging, an AI-enabled coronary CT looking for soft plaque, a DEXA scan, a Grail blood cancer test, and a full executive blood workup. It's the most advanced workup you'll ever receive: 150 gigabytes of data that then go to our AIs and our physicians to find any disease at the very beginning,
when it's solvable. You're going to find out eventually. Might as well find out when you can take action. Fountain Life also has an entire side of therapeutics. We look around the world for the most advanced therapeutics that can add 10, 20 healthy years to your life. And we provide them to you at our centers. So if this is of interest to you,
please go and check it out. Go to fountainlife.com/peter. When Tony and I wrote our New York Times bestseller Life Force, we had 30,000 people reach out to us for Fountain Life memberships. If you go to fountainlife.com/peter, we'll put you at the top of the list. Really, it's something that is, for me, one of the most important things I offer my entire family, the CEOs of my companies, my friends.
It's a chance to really add decades onto our healthy lifespans. Go to fountainlife.com/peter. It's one of the most important things I can offer to you as one of my listeners. All right, let's go back to our episode. So I want to jump into your public AI, making AI work for everyone. And I pulled out five points, and I'd love to dive into them a little bit. I think that
you know, for the entrepreneurs listening to our conversation here, there are a few different debates going on in the AI world. One is: will digital superintelligence destroy humanity? That's a great debate; not going to have that conversation right now. Will AI take our jobs? Will it help us create, you know,
longevity and fusion? The answer is yes, but we'll get back to that later. But the question of how do we assure safety and transparency and who's responsible for that? I mean, these are fundamental questions. So here's the first point I've written down and let's discuss what this means. So AI development is dominated by commercial interests. Why is that a bad thing or is that a good thing?
Well, I think the commercial interests being a part of driving the innovation is a great thing. And, you know, you often have these dances, if you think about the Internet overall, between
non-commercial research, like deep innovation, whether that's DARPA and the internet or CERN and the web. Then you actually figure out what to do with it, and there's a lot of commercial drivers of innovation from there. And then, later on, there's a role for what a commercial player is not doing,
where you take Firefox or Wikipedia or Linux as examples where the nonprofit players come and play a really hugely socially and economically beneficial role. So fast forward to today in terms of AI, I think it's great the amount of commercial innovation that is happening.
commercial dominance is a different thing than commercial innovation. And so it's where what we see and want to accelerate is that there also is a public option that complements it. Not, you know, just that kind of motherhood and apple pie, but because it is really important in doing the things we talked about before. What won't the market do on its own?
And so one of the things the market won't do on its own, I don't think, is pay attention to safety in a way that is actually broad enough for us to be safe. I think people trying to corner the market on safety is actually dangerous. I mean, you might get some good stuff out of it, right? But if we actually want to protect humanity, the idea that there are one or two vendors, or 10 vendors, cornering the market on safety
is a dangerous game for humanity. Having an open ecosystem where a lot of different players can pitch in on safety and see how stuff works under the hood, I don't think the market is going to drive that on its own. And that's a key piece. I agree with you, it's a key piece, but isn't that the role of the government, versus open source or any particular company?
You know, when I think about why governments should exist and where are they not overreaching, and this is a delicate balance, safety for the population it represents for me, whether it's safety from armed forces or police or regulation, is like the most fundamental thing I think a government should provide. Do you agree with that?
Absolutely. And we haven't figured out, although I think we're actually zooming towards it, if the American political system worked at all, it would be easier. Having that regulation is the role of government. What are the guardrails? And, you know, you take something like the evolution of transportation. I'm happy there are traffic laws and safety laws, and that is the proper role of government. And building cars is the proper role of the private sector. And
Public transportation is a thing that kind of sits in between. So I agree, providing safety, absolutely. Regulating the guardrails on safety, absolutely the role of government. But jump to AI, one of the critical things I believe and we believe in those regulations working
is transparency and the ability for people to collectively tackle the safety problems in order to comply with those regulations. And so the whole history of open science and everybody looking together to drive AI forward, you know, with enough eyes, all bugs are shallow, you know, that old open source principle, to me, that's actually the more likely going to produce an outcome that lines up with that regulation. It's not the government's job to do the implementation.
On the flip side, having a few people trying to corner the market in compliance with those regulations and what safety is and lock it all down, some of those could be useful players, but I don't think that's enough to keep us safe. My bent, again, is towards entrepreneurs to solve problems. You mentioned transportation. I think about in the space industry, which is my earlier part of my life,
You know, it was Lockheed and Boeing, the large defense contractors, that were launching humanity into space. And they were dominant by far, right? And the government was building the space shuttle under contractors. Then here comes Elon as a disrupter, and he captures 99% of the market, right? And the only reason the other companies still exist is the government likes having a second supplier in place. But we'll see Relativity Space and Bezos with Blue Origin come in, and those will become the second suppliers.
But I don't think you could have ever gotten to the level of innovation and brilliance in a governmental program or a...
governmentally driven program. It was like just getting the very best people on the planet and forcing them, or focusing them, to take huge risks and innovate. So the question is, can you get that speed and energy in open source? Or in some segments of open source and not others? Well, I think open source and governments are different. So I think the role of governments is either to create guardrails or to fund public goods.
And so certainly if you think about DARPA, funding public goods, I mean, you got a lot of innovation and to some degree speed, although it was a long game. And the government's playing that funding public goods role. You can get that in open source. I do think the role of open source often though is
after some of the high-speed entrepreneurial innovation happens. So, you know, you see Linux come like 10 years, half a generation, after Solaris or Windows NT. And it's like, we want a different thing; we want to collectivize this and make an infrastructure. Firefox comes 10 years after Netscape. Wikipedia comes 10 years after Encarta. And if you still have one of those CD-ROMs, you know, you get a prize. But, you know, I think the role of open source is actually to create,
often, the more malleable public-good version of what has been driven by commercial innovation. So on this first point, AI development is dominated by commercial interests. That is a truth, and it is continuing. Should that be dissuaded? So the answer to "so what, and what do we do about it?" is the question. Right.
Yeah, and the "so what" is, you won't get to the public goods you need, in terms of keeping the market open for smaller players, in terms of researchers, in terms of safety. I think you need open source, and truly open source, that operates in a kind of way that we all have access to and doesn't get cut off at 700 or 200. But if the open source movement doesn't... But let me say why. Yeah, please. So I think in the "so what"... And how, if you would, how that should work out.
So the why is things like, maybe I already talked about them: we have the Lego box for this era. We've got the ability to have transparency for safety. And, you know, maybe corporate players actually even drive that. I have a little two-by-two matrix, which is open versus closed, commercial versus public. And you see different people in different quadrants. Down in the commercial-but-open quadrant, you have
Meta trying to play. I think that's good. In the long run, I also want to make sure there's stuff that is kind of owned in common. So you have people like the Allen Institute for AI, which was set up by Paul Allen before he died. Which he made the money to fund from a monopolistic activity. You have a tax system that supports philanthropy in America, so that's what is supposed to happen. Yeah.
So, you know, he really believed in open AI; he really believed in open source AI. And you have an amazing guy, Ali Farhadi, leading that. I think that could become the Linux, or the Linux Foundation, of this era. And so, you know, both of those are useful players to kind of have out there.
I do think one of the critical things that we're missing right now: in every Western country, you're seeing huge amounts of money being thrown at compute, or huge efforts looking at how we're going to deal with energy and AI. And to me, we really should make sure that those government dollars go to public goods. So if I'm giving you huge amounts of compute as a researcher, or even as a company,
that should produce open source at the other end of the process. If public dollars pay for it, public goods should come out the other end. Yeah. Emad Mostaque made a statement which I really liked, which is: this is infrastructure. This is fundamental. The compute is fundamental infrastructure for every country, and every country should own its own models and its own infrastructure. I mean, it's going to become oxygen and electricity for a nation. Yeah.
I agree with him 100 percent that it's infrastructure, and it's not just the compute; it's the models, it's the whole stack. We did a big paper on that with a bunch of other people, including Yann LeCun, who was a part of it, at an event last year, or earlier this year.
And one of the things I would be really careful of is not to think of it purely as national sovereignty, but actually, you know, the democracies of the world building a system that is open and controlled by them. And you're going to have the Spanish-language large language model, or the Italian or the French or whatever, on top of a shared pool of infrastructure, which is effectively AI that is open source and built for democracy. But I actually think that that infrastructure is something that, as a set of
countries in the world that have a certain set of values, we want collectively, that we can all lean on. Where do you come out in the discussion that we need to go as rapidly as we can because we're in a fundamental race with China? And that "as fast as we can" is going to be, you know, government and investors pumping money into for-profit companies that have employed the smartest people on the planet. And this is a race
for the principles of freedom. So I agree, we got to go as fast as we can to build a technological society, including an AI stack that is driven by freedom and pluralism and values that I hold dear. And I think private companies are a part of it. I also think public AI is central to it. I mean, we talk about public orientation, public use, public goods, and that public orientation is
You know, how do you put the intent of pluralism and democracy into the design of these products and test against it over time? Which is about safety, which is about who gets to contribute, all those things. There's a great book, or, I mean, it's only a digital book, which maybe that's all that matters right now, by Audrey Tang, who was the digital minister for Taiwan, talking about how you actually
can build that. I mean, you basically have, you know, people who are very focused on private wealth, and China, which is very focused on a particular totalitarian approach, shaping AI in their own image. And what we don't have is a high-speed, fast approach to building democratic, pluralistic AI. And what I kind of buy into is: yes, go fast, and go fast not just to back,
you know, somebody owning the market in the West, but go fast towards a set of players building something that supports democracy and pluralism, and you make money off of it. Real quick, I've been getting the most unusual compliments lately on my skin.
The truth is, I use a lotion every morning and every night, religiously, called OneSkin. It was developed by four PhD women who determined a 10-amino-acid sequence that is a senolytic, that kills senescent cells in your skin. And this literally reverses the age of your skin. And I think it's one of the most incredible products. I use it all the time.
If you're interested, check out the show notes; I've asked my team to link to it below. All right, let's get back to the episode. I mean, there is a scenario where companies, the best meritocracies, are attracting the very best people with the most capital and the most compute, are building the best systems, and then the government becomes a user of those systems to support its people.
In the same way that NASA didn't build, you know, Starship (kudos to Elon for Flight 5) but is going to become one of the largest users of Starship. So can we deliver on the public good with private models and private companies? Yes, and...
There's always going to be stuff that private companies don't do. And there are private companies who play totally in a black box, like OpenAI, and private companies that play in a way that, for their own interest, also benefits the public good and creates public goods, like Meta is doing with Llama. And so I think you have to take a nuanced view of how you get to that stuff. And our view is you want both commercial companies
and government and nonprofit or open source community players, all pushing towards this kind of pluralistic, open, public AI option, alongside everything else. It's not exclusive. I get it. And I agree.
As long as, again, the government is not mandating but enabling the emergence of those resources, those open source teams and perspectives and so forth. I mean, that makes complete sense. Can I ask another question? Because I think privacy is a big driver for Mozilla. Yes.
Absolutely. Do you believe we actually have privacy? Oh, it's such a tricky question. So privacy is core; it's something like "an individual's privacy and security is sacrosanct." It doesn't quite say sacrosanct, but it's in the Mozilla Manifesto; it's one of the core principles. I know. And we struggle with what privacy is now, because for us, privacy was: make a browser that collects no data about anybody and minimizes data
as much as possible. And we all know you can't make digital things now without data. I mean, it's as fundamental as, more fundamental than, code. That's what AI is. Right. And so what is privacy in that context? That's a thing to work through. I do think, to go back to the question of public orientation, it's about building in values,
and looking at how you build AI technology that doesn't unduly expose information about you, or lets you opt into things that are more private. So you see that with Apple Intelligence, trying to do more stuff on device and lean in that direction, where it doesn't mean I'm a completely private individual, but there's some stuff I want to keep close to myself.
And we've actually funded, through Mozilla Ventures, a company that's building basically the open source equivalent of Apple Intelligence, called Flower AI. So we think what privacy means has got to evolve; it's got to have a lot more to do with sort of
how you think about privacy in your physical life, which is, oh, I'm going to close the blinds, or I'm going to talk a little quieter. Like, there have to be ways we can express a desire to be seen and less seen in the digital world we're building. You know, what hits me is I've got (whispered, so she doesn't come alive), you know, Alexa listening, and she's listening all the time.
Right? I've got Siri here listening all the time. You can have an AI with a camera read my lips from, you know, 100 meters away. You can shake my hand, grab a couple of skin cells, and sequence them. So, you know, I think to some degree privacy is an illusion that we like to believe in. And the question is... That's one of the reasons we talk about trust, and trustworthy AI as well. Because you can build into the technology that
the stuff is local, or the camera is turned off, or whatever. And do you trust, or have enough control yourself, to believe that that's true? So there are constraints that can be put on all the things you just talked about. And a lot of it is either that I fully control it, which is pretty hard in today's connected world unless you're super technical and willing to kind of put yourself on an island,
Or that you trust the parties that are providing it. And that's what Apple is good at. It's what Mozilla has been good at historically. I think we want to be good at in the AI era. We haven't talked a lot about where we're going with our own AI work, but it really is in the trustworthy space and in the open source space because we think there's a lot of desire for those things.
Let's say the next president of the United States comes to you and says: what are the policies you'd like the White House to enact? Do you have a clear set of recommendations? Yeah. I mean, at the high level, they're pretty clear. Would you share? What's interesting, actually, is that despite the fact that you can't get anything done (I'm Canadian, so I can make fun of the U.S. political system), despite that you can't pass laws, I do think you have a lot of bipartisan commonality on some of these topics. And so I think
one is, you know, how antitrust law works in today's era. We haven't figured out how that works so that there's space for entrepreneurs and space for innovation, and I do think that's a critical thing to figure out for this era. The second is figuring out the right AI and privacy guardrails. And that's where Europe hasn't gotten it
perfect, but they have a little bit right, in that they focused AI regulations on uses of AI. It's not "let's regulate all AI." It's: if I'm going to use this for something that might be sensitive or dangerous or harmful, let's have guardrails on that. So we've got to do a better version, and that's for humanity to figure out. But I think regulating that. And then the third
is making sure that that innovation funding is going to public goods, is going to stuff that everybody can use, and frankly also that it's going to regions other than just California and Washington State.
Because, you know, really one of the other things about open source is the idea that people can innovate from anywhere. And you see not just corporate concentration in a few companies, but geographic concentration in a few places. And I think it is an opportunity for government, if it's putting resources out there, to spread them widely. Yeah, but I do, again, go to my aerospace roots. I remember, you know, NASA's
space projects would be distributed among 30 congressional districts, and companies' strategic policy was to move into a congressional district that had no aerospace suppliers so they could get the contract, which makes for a lot of inefficiencies. Yeah, for sure. The good news is that we're talking about, you know, bits, not atoms. Yes. Yeah, for sure. A lot's changed
from there. What other policies? So the question ultimately is: how do we regulate it to make sure we don't have, you know, disastrous outcomes? And how do we sneak up on AGI, whatever that is, and maybe we've passed it already, or digital superintelligence, that disrupts the way of our lives so rapidly that it leaves our heads spinning?
Do you have any recommendations?
There are two answers to that. I mean, I think one is just keeping our eye on the ball. And, you know, you saw this in the debates around SB 1047, the AI safety law that Newsom vetoed recently: it's really to push towards building an evidence-based way to look at where the real risks are and where they are not. And it gets back to: we should be regulating against risks and not just generally against,
you know, amorphous fears. So it's urgent; it's important to have regulation that can do that, and it needs to be grounded in evidence. The thing that is actually much trickier to fix, and I don't know how to fix it, is that we have a bureaucracy, we have political systems, designed for the industrial or even the pre-industrial age. Let's, you know, wrap up with a conversation about what your prediction is on open source. What is your hope
on where this is going to go? Well, my prediction and my hope are the same, and I hope my prediction comes true. So, you know, my prediction is that there will be a public option, an open source option, and that you'll see both coexisting. But I also think that the infrastructure layer, that the people who are just trying to do the commoditized fundamental stuff
will not be the ones who win commercially. Netscape doesn't exist anymore. Sun, I don't know if they exist anymore, but certainly they're not the definer of the ecosystem that they were. So I do think that the infrastructure, the building blocks, to get back to that Lego kit, will be open. And I think that will benefit us all if that can be true,
from a safety perspective, from an entrepreneurship perspective, from a general creativity and innovation perspective. So that is both my hope and my prediction. I guess the hope I layer on top of it is that we can be smart enough that, if we're spending public dollars anyway on things like compute,
the government plays a fueling role in this, as it did in previous eras. And that's something it does with intent, as opposed to a kind of constraining role, trying to micromanage things. One last aside here: I just got back from India, which is, you know, a nation of 1.41 billion people, the largest on the planet, and a nation that I think needs AI for its survival. You know, you cannot provide education and health care to... you know, there are a hundred million people that are the tax base for the nation; the rest are in some degree of poverty. And the only way you possibly provide health and education to them at scale is going to be AI on top of, you know, the Jio 5G network that's there.
And then I go to Greece, and I meet with some of the leadership there, and they're like, you know, help us get AI going. So for a lot of nations, right, that are not AI-centric today, that are looking at this and feel and know that they need AI in their nation to compete and survive and thrive,
How does the open source movement support them? Well, I think that is actually the core of public AI and why we are talking about that beyond just open source, right? The three things are public use, public orientation, public goods. Public use is like use this stuff to deliver health care and education. We have to figure out how to do that well.
And that's where the public orientation comes in: build it in a way that is enabling of humans. And Daron Acemoglu, I never get his name right, who just won the Nobel Prize for Economics, talks about machine usefulness, right? So, like, how do we actually make this helpful to humans, to public ends?
And then the third piece is public goods, and that's where open source comes in. It's like: as we do these things, let's build into a commons. And certainly governments should do that, so that what India does helps Greece, and what America does helps India, and that virtuous cycle begins. And, you know, that's something where the best kind of human progress, I think, has come from,
you know, those things all kind of adding up together. Mark, before we sign off, where can people find you and follow you, and find the Mozilla Foundation? How can they be involved? How can they support your work? Easy to find Mozilla: @mozilla on X, Twitter, whatever you call it these days. You can find us on LinkedIn. You can find us at mozilla.org, where we've been for
25 years. And, you know, I'm just @msurman on Twitter and LinkedIn. I'm not in either of those places often, but you can find me there. And I think how you can get involved is really just looking at how public AI, this concept of AI in service of humanity, can
benefit you, even figuring it out selfishly, and then how you give back. If you're building software, can it be open source? If you're a policymaker, can we make sure that public dollars go to public goods? And so on. Amazing. Mark Surman, an open source warrior to benefit humanity. That's my new brand for you. Thank you. I'll get the t-shirt printed right now. Thank you for your work, Mark. A pleasure to meet you. Thanks for the conversation. Likewise. Thanks for having me on, Peter.