AI poses a really interesting challenge for us because it's becoming an operating system for a lot of things. And already there are a lot of industries where 20% or even 50% of people's time and action is in some way dictated by the computer. The computer, by seeing everything, by knowing what's going on, is very powerful, suggesting what to do next.
We are creating this control layer for our civilization that we are going to be following more. Is that authoritarian by its nature? Is there a way for that not to be authoritarian? How do we think about that? One of the most interesting questions. So there's a mathematician named Alfred North Whitehead who says that the measure of civilizational progress is the extent to which we can perform important operations of thought without thinking about them. So on the one hand, we should automate quite a lot. On the other hand, history gives us cases where we've automated too much.
Socrates said that the unexamined life is not worth living. Philosophy is important for our own lives, but it's not just about ourselves. Our world is in the midst of being transformed by AI. Our civilization has a lot of possible directions it could go over the next 20 years. And the very nature of our lives, our own realities and our own experiences, is going to be changed dramatically. And the people building this, a lot of them have no grounding in philosophy. There are a lot of really scary, really bad ideas that could lead to very dystopian, very authoritarian futures. At the same time, there are a lot of amazing things that could completely change the world for the better for all of us if we get the philosophy right, if we get the grounding of these ideas correct. My friend Brendan is a very successful entrepreneur who has worked a lot in AI, defense, insurance, and a bunch of other areas. And after having a lot of great success, he
really started spending time on philosophy. He started the Cosmos Institute to help create philosopher builders. We're gonna take some of our best builders in society, we're gonna take those who are planning on being builders, and make sure they're imbued with a philosophy to understand the challenges we're gonna face as a civilization, to understand the reasons our civilization thrives today, why the West has been so successful, and how it can keep being successful in a way that helps everyone thrive. Welcome to American Optimist. Really excited to have Brendan McCord with us here today. Brendan, thanks for joining. - Yeah, good to be here.
Brendan, you're the founder of the Cosmos Institute. You're an entrepreneur. You're a philosopher. Do you consider yourself a philosopher? Aspirationally. Aspirationally, I love it. And you went to MIT and HBS, Harvard Business School. So first, tell us about your background, your entrepreneurial journey. You built a couple of insurance companies real quick and sold them. And you bought a bunch of stuff and fixed it with AI. Yeah, that's right. So my career after MIT actually started on submarines, so it's a very non-traditional start. I was underwater for 610 days, mostly under the Arctic ice sheet. Oh, gosh.
doing stuff. 610 days, not all at once, but how many days in a row? So the longest from seeing the sunlight to seeing the sunlight was 66 days. The 610 is just all added up. That's rough. It's not as bad as some. I've heard there are five-month journeys for some guys, but 66 days is a lot. And that's from seeing the sun, so even through the periscope. I hear your Spotify turns off after 30 days. Spotify, and the lettuce goes bad. People don't realize that a lot of things go bad: eggs, milk. So you end up shifting to powdered stuff. It's
pretty rough. So you have kind of fresh-ish food for two or three weeks or something, and then you're crushed. Yeah. Precious 10 days. That's right. But this was, I think, because I had this big desire to do public service, to help humanity in some way. By going underwater in the ice sheet. By going underwater in the ice sheet. And that's how it translated. I got out
and then I started to think about startups. When I got out, it was 2012, so AI was just beating humans at ImageNet. There were a few important breakthroughs around then, about 12 years ago, long prior to ChatGPT, but it was a big inflection point. So, right place, right time, I started doing AI startup stuff. I ended up going back into the Department because I heard Ash Carter say DOD's got to get serious about AI. This was around that time. And so I did very interesting open source projects.
Very interesting use case stuff: Maven, the Google effort with Maven. I also applied the first AI to beat humans in poker, heads-up no-limit, to a very urgent problem on the Korean peninsula. And then eventually I got asked by the secretary and deputy secretary to write the strategy for AI for the department and to try to put together kind of an applied version of DARPA. So what years were you there at Maven? Yeah.
So that was 2017. That was after Google had said no? No, I got involved pretty close to the beginning. I remember the very first time I came to the Pentagon, a briefing was happening, and I think Eric Schmidt was learning about Maven for the first time. Eric, by the way, is all in on defense. Great guy, very pro-defense. But Google itself kind of pulled out. They did. So I would go down to Google and I would badge in almost like I was a Googler,
but I was working for the government. I was an HQE, a highly qualified expert, and I worked with their AI team on counter-ISIS stuff. And after I left, after I rolled off the project, we had deployed it in Afghanistan and all that, 4,000 people came together and protested Google's involvement. So it was right after I had rolled off that it became a huge deal. So you kind of missed that whole thing where Google went out and Palantir went in and all that stuff. Yeah, exactly. I mean, I saw it.
I witnessed it, but I was off the project at that point. And by that point, you're moving on, and then you're starting companies by like 2019? Yeah, exactly. So then I started companies, and because I had seen how good Google was internally at doing AI, at building AI systems and using them,
I thought, well, okay, most companies in most of the economy are not like that. And I thought the winning formula was: let's raise a bunch of capital, let's acquire these companies, and then let's transform them. So I teamed up with the person who had just sold Legendary. This is the Dark Knight, Hangover guy? Thomas Tull. Yeah, Thomas Tull.
and the outside counsel for Berkshire Hathaway. And that was a permanent capital vehicle. We bought companies and made them better, and we returned billions to investors. - Thomas Tull's a friend, he's a smart guy. - Yeah, yeah. And then after that, we were thinking about what to do in insurance, and it didn't make sense to buy a company. So we started two from scratch here in Austin
and sold those really quickly. As you mentioned, we just grew really quickly, we had incredible talent, and we sold them to a huge insurance platform. And then that's where the philosophy started. So up to that point, it wasn't really- - And now you've made a lot of money and you're stopping, you're thinking, how do I go back and fight for society again in different ways? - Yeah, and I had my second of two kids, and that was huge, because I was trying to think, what do I model for these little humans? What's the world they're entering? And so I started reading like 15 minutes a day after I put them down, and that grew to like three or four hours a day.
And I had a really good mentor, Michael Strong, who you know. He wrote me a 17-page annotated syllabus that started with the ancients and moved all the way forward. And it just kind of took over my life.
And the first insight I had is that there are more people like me, tech entrepreneurs who studied STEM or business or something like that, but who don't really have the philosophical grounding for what's going on in the world right now. Exactly. Exactly. And so somebody introduced me to Tyler Cowen, and we put together this fellowship program to try to get really good entrepreneurs to read Adam Smith, you know, and read Montesquieu. And it was broken down so they got to read about markets, about the nature and limits of government, and about how
bottom-up problem-solving works in society. This is so important. So I happen to have a background in a lot of those books and philosophy. If you look around the room, there's all this old stuff from the 18th-century Enlightenment I'm obsessed with. And it's the biggest frustration for me, because our friends will make a lot of money and they have no idea about these Enlightenment values. And then they try to apply things to fix our society that are completely missing the wisdom of why it works in the first place. Yeah. Well, I think if you're in an AI lab in particular, if you're at Anthropic or OpenAI,
increasingly the kinds of problems that you confront are not technical, not business. I mean, you definitely still have those problems. But Jack Clark, one of the founding fellows of Cosmos and a co-founder of Anthropic, writes in his newsletter that the more seriously you take, say, AI safety assumptions, the more willing you are to put in place drastic and dystopian measures, including bombing data centers, right? This is his statement.
And for me, that illustrates this kind of third problem, where that's not a technical problem, this tension between safety and progress, right? It's not a business problem. That problem is the realm of philosophy. And inquiring better about that is something that I think AI researchers increasingly need to be doing. And so you founded the Cosmos Institute. What's the Cosmos Institute? Yeah.
So we're an academy trying to cultivate these philosopher builders. We think that's a key archetype that we need. And here's the historical thing I think you'll appreciate, because you know and think about a lot of these characters. My contention is that whenever you have a technological revolution, you have two choices: you either build for freedom or you watch as others build for control. And I'll give you examples of this. So Benjamin Franklin in the U.S.,
he builds a network of libraries and independent publishers that takes the printing press innovation and realizes the democratic ideal of spreading knowledge. At the same time, you have Joseph Goebbels, who takes the printing press and turns it into a system of mass manipulation. Fast-forward to the Industrial Revolution: you have people like Adam Smith theorizing about markets, creating broad prosperity, and you have the Soviets taking industrial technology and twisting it into a system of economic domination. And then lastly, in the internet revolution, you have people like Tim Berners-Lee, who embed decentralization into the technical fabric of the World Wide Web and give us an expressive, open internet. And you have China creating the Great Firewall and the social credit system, which create this web of conformity and control. And this is a big challenge, obviously, today with China, potentially with AI, right? So AI, I guess you're positing, is the next big technological revolution. We're spending, I think, a trillion dollars on it over the next several years. And I guess there's a question of whether China's vision or another vision wins for this, right? Exactly. Yeah, I think it'll be the biggest infrastructure build that humanity has ever done. I think you could make that argument. I think we will hit a trillion dollars in the early 2030s.
You have China coming with its vision of techno-authoritarianism and control, and this is a very exportable vision. It could go to other places in the world. I guess the worry is there are people here who might actually go along with that as well, if we're not careful. People here, people in Africa. I mean, it's a very enticing vision, right? If you fuse AI with politics, then you have incredible tools for control. Mark Andreessen, if you look recently, has been out a lot talking about how it seems like the previous administration was trying to choose just a couple of winners and control it that way and centralize things. Yeah. Well, and I think what's interesting is that you have a lot of different impulses giving rise to the same form of top-down control. So some people are coming at it from different directions.
In the case of China, it is expressly a control goal. - Expressly authoritarian. - Yeah, but there are people who have genuine concerns about safety, who think that we're gonna destroy humanity, and their prescription based on that is that we should have a world government.
And this is the same solution set, a kind of vector for tyranny, that invites tyranny from totally different starting points. There are also people who really want to focus on a specific notion of fairness. So they want to focus on material equality, and they think that AI could throw that out of balance, and therefore we need a top-down kind of system to be able to redistribute wealth. - They want what is ultimately an authoritarian system, 'cause they want to impose their vision of the world top-down on everyone else, basically.
Let's go back to the world government thing for a bit, because this is a huge point of contention. I think with a lot of the left, a lot of people seem to want to impose this on us, seem to want to work towards it and force us into these global frameworks to take away our autonomy. And obviously, to a lot of us who believe society should be free and work bottom-up, this is like one of the worst things ever; we'd fight to stop it. What's the history, in the Enlightenment or more recently, with this world government idea? Because it comes up a lot in a lot of this work.
Yeah. So one example here is when Alexis de Tocqueville comes to America around 1830. He goes to New England townships and he sees something that's totally different from what he had seen in Paris. Paris is very hub-and-spoke; the government in Paris controls France at that time. And in America, in the New England townships, you see something totally different, which is bottom-up: people associating spontaneously, solving problems. They need a bridge, they go build a bridge.
And it's this incredible release of energy. And Tocqueville thinks that this is very important. The reason he thinks it's important is not only because of the vibrancy, but because he recognizes that in democracies in particular, the individual is weak. The individual versus the state is not a fair match. If you're a serious aristocrat, maybe you have a chance, but individuals in democracy are weak.
They're also weak relative to other forces, like the will of the majority. And so Tocqueville says it is essential that you have decentralization and a vibrancy that comes from the bottom, so that they can fend off these powerful forces. If they do not, they'll end up with encroachment; it'll end up with what he calls soft despotism.
We could talk about that. More recently, I think one of the most interesting debates on this issue is Leo Strauss versus Kojève, and they talk about this. I would just leave it to listeners to look this up. But if you want to look up this idea of world government, or the universal and homogeneous state as perpetual tyranny, the Strauss-Kojève debate is probably the single best
academic piece on that point. Because this thing keeps coming up as a default of the far left, in a way that's, I think, really terrifying and ties into a lot of these things. You know, I want to ask you about authoritarianism and control, since you've spent a lot of time on philosophy. AI poses a really interesting challenge for us because it's becoming an operating system for a lot of things. And already there are a lot of industries where like 20% or even 50% of people's
time and action is in some ways dictated by the computer. The computer, by seeing everything, by knowing what's going on, is very powerful, suggesting what to do next. There are some really advanced areas of healthcare where this is probably actually what you want: for the computer to be wiser and show what's... You can opt out, but you're probably going to go with what it tells you to do in lots of cases. And so in some sense, we are
creating this control layer for our civilization that we are going to be following more. And I mean, is that authoritarian by its nature? Is there a way for that not to be authoritarian? How do we think about that? Yeah. One of the most interesting questions. So there's a mathematician named Alfred North Whitehead who says that the measure of civilizational progress is the extent to which we can perform important operations of thought without thinking about them. So what is he saying there? Well, what he's saying is we should offload.
He's saying offloading is good. An example: I woke up this morning and jammed a toothbrush in my mouth, and it wasn't because I had reasoned from first principles about dental health. You might just do certain things by default. You do certain things by default. And moreover, that is what enables you to think about other things. It builds up this edifice of civilization. So on the one hand, we should automate quite a lot.
On the other hand, history gives us cases where we've automated too much. Normal people, people without great evil in them genetically, who weren't engaging in active malice, became concentration camp guards because they gradually outsourced, bit by bit, pieces of their moral deliberation. Just going along with what people were doing. Exactly. So letting people think for us
can be very bad. So this introduces a very big paradox. It's like the John Stuart Mill quote about correcting and completing his opinion by collating it with those of others. But there are certain people who just automatically default to thinking what everyone else thinks, right? Yeah. And Mill, I think, has a very keen observation that we tend towards passivity.
The human brain works on glucose, and there's only so much glucose to go around. We're biologically wired to be passive. So AI presents this very interesting tension: it's very good in the sense that it can plausibly offload everything, but it's sinister in the sense that it entices us to give up more and more. And so one thing I think about, to make this practical, is what are some patterns we want to avoid? And one is auto-complete for life.
Like you're familiar with autocomplete: you write, you get another word. I don't want that to show up in all of life. Which is the scary thing for some of this AI. It could just suggest what to do next, right? Yeah. And you have kids; I'm sure that your kids, when they use AI, engage with it actively. We don't actually use screens very much, because I'm a little bit afraid of the interactions with them. And I see a lot of this AI design, as it is right now, as
a form of drug in some sense, right? Because basically it's trying to trigger your brain with dopamine to do certain things that are the goal of the platform. And I'm willing to take risks with that form of AI drug myself. I'm not willing to take those risks with my kids yet, until I know more about it. Maybe that's a little bit too cautious, but that's how I see it. So I think everyone who has kids can really understand this. Another thing I sometimes think about is this:
let's say you go to a new city, okay? Today, you can get a recommendation for a restaurant, a neighborhood, a bar that fits your energy. It can do this today, or maybe next month, right? But take it up a notch. Let's say that it can recommend the perfect date based on her psychological profile, her family history, all this sort of stuff. And then take it up another notch and say it can script out when you should tell the childhood story, when you should show vulnerability, when you should lean in for the kiss. And then imagine that's your whole life.
And when I say this, it's terrifying. So yeah, that's it. You get this chill, right? You don't know exactly why you don't like it. Though some people may think that's optimal. I've met some people who just want to be told exactly what to do and when. I don't know if beta is the right word, but it's a very cautious way of living your life. I think it's NPCs or sheep, right? Non-player characters, the NPC thing, there's something like that.
But that's scary, because the average person maybe is that way, and they do want to be led. Well, so I'm hoping that examples like that can kind of shake us out of this and say, look, we need to hold on to this precious idea of autonomy, which is to say individuals deliberating on and acting upon their own version of the good and not being coerced by others. In other words, not being made the agent of another. Right.
AI is very good at controlling a choice architecture. The simple form right now that we're already dealing with in our world is social media, because a lot of people spend four, five, six, seven hours a day, even more, on these things. And, you know, another John Stuart Mill quote, sorry, I keep quoting him because I was just rereading him for these things: he who knows only his own side of the case knows little even of that. He actually channels that from Cicero, which is really cool. That makes sense.
And so in some ways, if AI is taking you along, it's also keeping you in a bubble, which is happening, I think, to tens of millions of people on both the left and the right right now, because basically you're happier in your bubble. Yeah. I'm not happy in my bubble, by the way. I mean, it freaks me out sometimes, but I go onto the far-left side of Reddit and I go to these other places, and I want to be challenged. I want to see what people are talking about. I want to understand the other ideas on Bluesky, as much as that's kind of a toxic place. But, you know, it's interesting to see,
even though I spend more time on Twitter, or X, obviously. But isn't AI naturally going to keep us in our bubbles based on these NPC preferences? Yeah, I think today that is the way it works. And as you said, you get kind of an impoverished echo chamber where you don't have access to these wide and diverse views. It's not truth-seeking, and it actually isn't good for the development of reason. I mean, Mill makes an incredible case, in chapter two of On Liberty,
that even the wrong opinion, even being exposed to things that are super wrong, is what's necessary to even have a lively impression of what's right. - Yeah, 'cause it helps you correct it and helps you understand. - Exactly. You know, if I say, hey, Joe, I think all values are relative, you would be hit with this and you'd have to formulate a response, and that's good for you to have. - You don't wanna protect people from dumb ideas. You wanna engage with them. - You don't. And the other thing that I think is concerning is,
it could create a hall-of-mirrors culture where everyone has a kind of self-curated reality. Very convenient, ideal even. The problem is AI lets you personalize a really complex reality just for yourself, one that just keeps you feeling safe. It does. I will say, though, there are other ways, right? Like a provocateur AI is a really interesting concept that we're thinking a little bit about now, which is: how about an AI that challenges you? How about an AI that
puts you out of your comfort zone in ways that you don't understand? What percent of humanity opts into that, though, versus opting into this really soft, nice world? Yeah, so it depends, right? When you use a Bloomberg terminal,
you actually want to optimize for decision quality. When you are scrolling at night, you're kind of in engagement mode, right? So what does that tell an entrepreneur? Well, maybe it tells you to look first at those markets where decision quality actually matters. Interesting. So there's actually a professional need to make a good decision, a need to understand the world. That's where you're going to get people to engage with much healthier areas.
I think that's right. And I have this optimistic hope that it can trickle down, that you can actually show better decision quality and we can work out the engagement. It doesn't have to feel terrible to do that. It doesn't have to feel painful.
Socratic dialectic actually feels quite fun. It feels hard, but if you do it right, it's a blissful experience. I'm hoping that can kind of trickle down. I want to jump to a high level for people: the various approaches right now that dominate the AI discourse. I think you define these: there's the doomsayers, there's the accelerationists,
there's the regulators, and there's the techno-authoritarians. That's the full list. So what do you think of each? What are the doomsayers, first of all? Okay, so that's the intellectual incumbent. I would say that's the dominant philosophy in the practitioner community. And it's three philosophies in a trench coat, all very entangled: one is effective altruism, one is rationalism, one is longtermism. You might hear these terms, but you can kind of put them together and say those are the doomsayers. And the gist of it is that
AI could pose a risk to humanity and we should pause. It's kind of like Oppenheimer, when he had this hubristic awe of his own creation. Yeah. Or you could say a tragic kind of Prometheanism, where we got fire from the gods and now we regret it, or something like this. And so that school tends to follow one of the main equations. People get mad at me for saying this, but this is actually the way a lot of folks think about it: even a tiny probability of doom multiplied
by negative infinity utils equals negative infinity utils. I feel like this is Elon's comment on it as well. Well, so Elon, I think, is an interesting person, because he
kind of started from this position. And then I think he's moved forward to a more positive view on, you know, interplanetary life and things like that. Well, he's naturally an accelerationist himself in everything he does, but then he also has in the back of his mind this awareness of that utility calculation, which is scary. And it's a very math-ready kind of proposition. It appeals to those who think that
reasoning is the same as Bayesian stats, or that morality can be reduced to a currency that can be optimized. Tell me more about that. What's immoral about the doomsayers?
Well, so the key premise is that moral questions can be reduced to a thing that's called the util, a common currency. This is something I would reject. In other words, I think that endemic to morality is the idea that you have a bunch of good things that are incommensurable. They can't be calculated: courage in battle, familial love, religious observance. How the hell do you compute those things? You can't. They're sharply heterogeneous goods.
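To spell out the expected-value arithmetic described a moment ago (a hedged reconstruction of the doomers' own framing, not Brendan's formula): if doom is assigned negatively infinite utility, then for any nonzero probability $p > 0$,

$$\mathbb{E}[U] = p \cdot (-\infty) + (1 - p) \cdot U_{\text{good}} = -\infty,$$

so the calculation recommends a pause no matter how small $p$ is. The incommensurability objection is that no single util scale $U$ exists on which goods like courage, familial love, and religious observance can be summed in the first place.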
And I think because you can't do that, the sidestepping move a lot of these folks make is to say, well, the thing we really care about is human life; we're just going to focus on human life. It turns out there are a lot of other human goods than mere life that we should be focused on. Yeah. We don't want to be in the Matrix. Yeah. So I reject those premises. The other thing is that we're very bad at predicting the future.
The car came around and then it sparked the sexual revolution. Who predicted this? When you have complex sociotechnical systems, it's very hard to predict. You can pretty reliably predict
along the lines of scaling laws, where you add some compute, you get some performance. That's a narrow technical prediction. But predicting within a broader sociotechnical system? Impossible. So this group has, I think, very low epistemic humility. And let's go to the accelerationists. This is generally seen as a positive thing, but I think you say these people are pushing towards AI supremacy and innovation without considering some of the nuances of human flourishing. What do you mean by that? Yeah, so I think
the role of optimism, the power of markets, I'm totally there with that. I think they get that right. And a lot of times this is a reaction to the doomsayers: this is broken, so let's just accelerate, let's not let these people hold us back and block us from fixing things. So that's where it comes from. But what are they missing? Well, I think they are taking this idea, and I'm referring now to the more orthodox accelerationists, that, hey, human life began a certain way. It began because
we spontaneously evolved some ability to absorb and harness energy. And I buy that; I think that's a plausible explanation for how human life started. Because it started this way, the logic goes, this is the ultimate measure: our ability to
absorb and dissipate energy as a civilization. It's a quirky thing to obsess over. Yeah. And so it becomes the ultimate goal; almost the ultimate end of human life is a kind of technological progress, or progress in harnessing energy.
I think that completely confuses the means with the ends. I think technology is incredible equipment for human flourishing, but, I don't know. We do want to be a higher-level Kardashev civilization. That's probably very great because it gives us greater ends. And we want better technology and better energy, and I agree with all that. But to say that that is the ultimate end of human life, that that's what we move towards, gives you these kinds of sinister conclusions, like: we should pass the baton
to this higher, better intelligence if we can. In other words- Yeah, that's a pretty scary- There's not a human good beneath it that we should intrinsically defend. You want to be pro-human as part of this. So what are the other ends, then, that we should be thinking about? So one is: what are you oriented towards? Is your North Star thermodynamics, or is your North Star
a notion of human flourishing, which I take to mean a discovery and development of one's unique gifts, and then the use of those to kind of
become the person you want to be. So it's part Aristotelian, part individualistic. That's a notion of human flourishing. You say Aristotelian; it reminds me of working on more complicated problems, using your higher capacities to engage and do good things. Using your higher capacities to do the things at the top of the hierarchy. This is a very helpful framework that people who don't do philosophy sometimes don't encounter:
a lot of times we do something for the sake of something else. We make money often because money is a means to some other end, right? - So the accelerationist is like the person who's making money for its own sake. It's almost like- - For its own sake would be, yeah. - If you're not careful. - But then you can walk all the way up this kind of causal chain, and at the top come the things that we do for their own sake. And this is stuff like contemplation,
right? Things that we do for their own sake, the highest goods. And what this definition of human flourishing does is recognize that you as an individual have unique gifts and your own definition of what that good may be, but the development and pursuit of excellence against that highest good of yours is something that we should promote. Okay. If you buy that, then there are some things that tend to conduce to that, that are causally efficacious. One is the use of reason.
You and I can disagree about what the ultimate good is, right? But both of us have to use reason to an extent to get there. We have to think about the alternatives. We have to think about what we do today, given our sense of what is good. So we have to preserve reason.
And as we talked about before, with our tendency towards passivity, this can atrophy. So that's one thing. Another is autonomy, which is to say we've got to be able to make this deliberation and then act upon it. And we can't be coerced by some government to do someone else's bidding. We have to be able to attempt the pursuit of happiness, you know, not necessarily to attain it.
And then the third is systems that resist centralized control. There has to be this decentralized release of energy to allow us to maintain those protected spheres. So I would say those are three examples of human goods that we need to preserve, and very few people are thinking about them in the AI sphere. - Decentralization especially, I think, is something the AI people tend to be very bad at understanding. - Yeah, I mean, I'm seeing interesting promise.
There's a huge decentralized training run that I think Prime Intellect did recently that shows that the assumptions everyone has about how AI is just gonna be a centralized thing could be wrong. And I'm hopeful that actually there are these alternatives that emerge architecturally.
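For a concrete picture of what a decentralized alternative can look like, here is a minimal sketch of federated averaging, the basic pattern behind the federated learning work that comes up next. This is an illustrative toy under simplified assumptions, not Prime Intellect's actual training method, and every name in it is made up:

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # Each client improves the shared weights using only its own data
    # (a single least-squares gradient step; real systems run many SGD steps).
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, client_datasets):
    # Clients train locally; only parameter updates, never raw data,
    # travel back to be averaged. No single party sees everything.
    updates = [local_update(weights.copy(), d) for d in client_datasets]
    return np.mean(updates, axis=0)

# Five synthetic "clients," each holding a private slice of data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, clients)
print(w)  # approaches true_w without ever pooling any client's data
```

The design point is that the data never has to live in one place: coordination happens through averaged updates, which is the architectural alternative to the centralized assumption.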
But I agree, for the most part, people have decided that AI is a thing that needs to be centralized. Yeah, Peter Thiel always saw it as a pro-authoritarian technology that's going to lead to more of that, which, I'm hoping that's not the case. I'm seeing some of the smartest people, like, Google had a federated learning team, I think they still do. There are these pockets of brilliance where people are thinking about alternative ways. And I kind of give the example from telecom:
it used to be the case that you had twisted copper wires that went into every house. And so in that era, people were like, oh, these are just gonna be natural monopolies, as they say, right? But then wireless comes along and changes everything. And so I caution people not to take
current architectural assumptions as a given; things could change. Well, it's interesting. You could have a natural monopoly without it being authoritarian too, though, right? 'Cause it's a question of whether or not this technology empowers pro-liberty or anti-liberty forces. Totally. And I even put it in air quotes, because a priori, you don't know how big a company should be. So I don't like when people talk about big companies as if they necessarily encroach on liberty.
Big companies are fine. I do think you have to counter the power of big government and big companies both. I was a libertarian kid, all worried about big government, but now I worry about both of them, because I think both are powerful. But I'm less focused on countering the mere possibility. I would rather look at where big companies actually do something, and make sure something bad is actually happening. What is it? I think the FTC just got changed up to focus more that way, versus harassing us preemptively, which is probably good. Speaking of harassing us preemptively, the regulators are the third group. So we have the doomsayers, we have the accelerationists; I think this third area is the lowest-IQ area of these four, the regulators, basically responding to every challenge with rules that just stifle new things way ahead of time.
So this is mostly a left thing, right? But there are actually, even in Texas, some populist-right people who really want to create a regulatory agency for AI, and it seems like none of them have read philosophy or understand the whole purpose of what we're trying to do on the right to deregulate right now and get rid of this stuff. But tell us about the regulators. Yeah. Regulating is what regulators do. But I
think so often it isn't even discussed whether something should be regulated at all. The mechanics, the specific prescriptions, do get debated. But what I mean by this is you could think about
different modes that you could undertake if you cared about, let's say, keeping us safe. The mode that always gets taken now is this thing called ex ante regulation. You say, before the event, I'm gonna lay down a set of rules that will prohibit bad things from happening or mitigate them. And so you say things like, well, if training compute exceeds 10 to the 26, you put some compute threshold on it and say, let's set a regulation there.
The other way you could do it is ex post, where you allow the system to contact the world, you accept a bit of risk, and then you learn from that. And to make it more clear, I would say common law works like this. The common law, which goes back to Magna Carta and is a big part of the UK tradition.
What happens there is that little disputes and conflicts happen, they get adjudicated, and they add on like a Scrabble board. So you have a core set of laws that came from experience, and then you're adding on, like a Scrabble board, creating more laws. And that means the legal system is very evolutionary and adaptive,
versus having statutory law, or civil law, that comes from the top and says, this is what we should do. That's more the French tradition; it comes from the Napoleonic code, and from the Romans as well. But yeah, it's very, very harmful in the case of complex
sociotechnical systems. Well, just because you don't know what's going to be possible. There are people literally on the Texas side running potential bills, and it's like, we're going to ban any sort of AI that manipulates your emotions or scores people. And I'm like, that sounds good at a high level, but you don't realize what scoring internally in the system even means. It needs to keep track of things about you in order to serve you better and to be smarter, right?
It's a complete misunderstanding of the way to engage with innovation. Yeah. And it gets it wrong, and it always will get it wrong. There's no point in time when we could have gone back and been capable of writing regulations that would have made sense. Exactly. Imagine trying to specify the things you don't want before the Industrial Revolution. We just wouldn't have the innovation here; you'd have it somewhere else. Yeah.
And so we used to be common law dominated until the 1920s, and then it shifted. Now we tend to use our administrative apparatus and our legislative system to write these rules, and that's where the entire discussion is. So I hope people will get in and shake that up.
Doug Ginsburg is a really good thinker on this topic. He was a DC Circuit judge. But I haven't seen a lot of people who have a principled stance towards the ex post style of adaptive regulation. I think this is what you'd want to get back to. I think we're going to get in and just get rid of a
ton of regulation that doesn't make any sense at all. There are a million commands at the federal level. I'm hoping we can get rid of a lot of it this next administration. Yeah, yeah. And if I could recommend one book for them, it's Richard Epstein's Simple Rules for a Complex World, because one of the things that our system does is try to micromanage every edge case with a law or
with a rule, whereas what Epstein says is that the best way in complex systems is to have a very simple, uniform set of rules, like property rights or contract law or tort liability. These things are much better at adapting than trying to come up with a patchwork of systems that just leads to incredible complexity. - Hopefully we can teach our Texas legislators to keep this framework as well. We're gonna make sure to give them the clip of that. Maybe send a book along with it. I appreciate it. And so we have the doomsayers, the accelerationists, we have the regulators,
such as the Europeans, who are banning OpenAI's latest launch, by the way, which is very funny. And then you have the techno-authoritarians. Who are these people? - So this is just the fusion of AI with government to be able to project power for authoritarians. - This is like the NGOs with Facebook and the Biden administration, trying to just create lots of rules for society. Is that- - It could be that. I mean, the clearest example is China, though. The Chinese Communist Party with AI
puts forth social credit systems, puts forth incredible systems of control, and leads to long-term conformity. It actually, I think, habituates people to be conformists. And so it's a means of retaining power. And by the way, if I were to try to steel-man the Texas regulators, they want to stop the techno-authoritarians. Maybe that's the good side of them: oh, screw these guys, they're going to try to do this, we have to somehow create something in Texas to push back on that. So if I wanted to steel-man these guys: instead of regulating innovation in dumb ways, say they want to fight the techno-authoritarians. How do you fight that?
As a legislator, sure. How do you fight that here in America? How do you fight techno-authoritarianism? I mean, I think not using AI as an opportunity to claim greater power for government is one big thing. Don't create more power for the regulatory state or government. Yeah, I mean,
AI touches everything, and so AI regulation is a unique vector for government power. Let's just say there's someone in DC at some point who's going to do something with a company and AI. How does the state push back on that? Have you thought about this issue? I haven't really thought about it, but I'm curious about your thoughts. I mean, I
haven't thought about the state and federal dynamic, other than that the vast majority of states are pretty regulation-happy and want to put up 700-plus different versions. There's this Biden-aligned NGO that got together 200 state legislators, including from red states. It was sponsored by the Gates Foundation, and by others like CZI, I believe. And they're trying to get them to create these AI regulators, which I think is tied to the safetyism, tied to the doomsayers. I think it has slight techno-authoritarian biases that these people don't realize.
It's everything wrong. And my view is that if the state wants to do something to push back on big tech and AI, you should force more things to be transparent. Because at the very least, if there's ever anything that any company working in Texas is forced to do by a government or by an NGO, they should have to tell us: we were forced to do this, here's an action we were told we had to take. And even here's an action we were merely suggested to take: anything you're even suggested to do by the government, by DC, I think we should all know about. A lot of times they don't force them, they just tell them, but then they kind of feel like they have to follow it. So I feel like radical transparency around
any connection at all, anything going back and forth between any government and these companies that operate here. To me, that's what I'd be doing if I were running the state government: forcing these guys to disclose. And I might even force them to disclose other things as well, such as if you're shadow-banning somebody, if you're somehow downplaying them and making them less viral, if you're turning them off in some way, if you're demonetizing them in some way, I'd probably force you to say why.
Why are you doing that? What was the process? Force transparency on that. And big tech would hate it, but I think that's the right thing for Texas to do. So don't create a regulator; just demand they do these things. You know, here's a local thing that I've been following, and I wonder if you've seen this.
This 40-year-old guy in Wyoming ran for mayor recently. His name is Victor, and he ran as the humble meat avatar of a ChatGPT-based AI system. So he says, I'm going to run because humans can run for mayor and AI can't, but really I'm just going to turn around and ask the AI for the answers, right? He didn't win, but
I think this is interesting for a couple of reasons. One is that it could be prophetic: we may have AI in positions to rule. There's a great Silicon Valley book about this happening in 2040 with the presidency. Yeah, to me, it's very scary in some ways. It's super scary. The other thing that's interesting is, why do we feel that AI can rule?
What is it about us psychologically that thinks AI has some title to rule, or some impartial expertise? 'Cause it does have that air, right? I even asked my kid: if you had a human and an AI and you asked them a question, which is right? She said, AI, of course. And I said, what if you have two AIs and they disagree? And she's like, well, I don't know. But basically we have decided that AI is just better
at some things. And I think this is a really problematic conclusion. Especially because it doesn't have the philosophy or the principles built into it at all right now. Absolutely. No, it absolutely doesn't. It doesn't reason. I mean, this is a contentious statement, but... You know, the three pillars of AI human flourishing, if we jump to those: it's reason, it's decentralization, and... And autonomy. Yeah, human autonomy. You usually hear autonomy in the context of autonomous systems, but...
The idea of AI ruling is to me very scary. But is there some possible future where it's built on all these principles and philosophy, and it's convinced you that it's this philosophically aware intelligence, it's just really impressive, and then you're like, actually, yeah, I wouldn't mind letting this thing, if we're careful, somehow have more authority or something? Is that even possible? Yeah, I mean, I think there are
interesting ways to include it in legal systems, in executive conduct in politics. Even in regulatory systems, potentially, where it's unbiased, if it's been built correctly or something. So I think there are interesting ways in which AI can review cases for systematic issues. I'd love to get my permits from the AI in like 10 seconds. That'd be great. And then you can always appeal or something if it somehow screws up. Yeah, absolutely. But I tend to think there are some realms
of contemplation in particular, about the fundamentals of the political system, that AI won't be able to do. - 'Cause it's just echoing us, basically. - It is. And I also think, in particular, take reason. And this maybe doesn't get at the question of whether it could ever do this, 'cause I think it probably can do this at some point.
But today, AI systems don't reason well. And I think this is necessary, and it's not something you see in politics a lot either. So in reason, truth emerges from the clash of diverse viewpoints; what is AI not doing with this?
So you have systems like o1, this latest thing from OpenAI, that talk about thinking, about reasoning as being a central property. But then you have Apple come out with a paper showing that if you make minor perturbations, you drop the reasoning score on these systems by like 65%. Wow. So they're not really robust to those things. And what is the reason for that? Well, I think the reason is that
AI researchers don't really know what reasoning is. So what they do is identify what it's not. They know where it fails: if it's finding little patterns in the text, or doing approximate retrieval, or doing any number of things that are not reason. And so they take those and have an inductive approach that says, hey, reasoning is a system that doesn't do that.
And so it's anecdotal, it's inductive. And at the same time, there's this general sense that it should do something like what Kahneman called System 2, though even he didn't think that metaphor held up totally; it should do some kind of deep thinking. I would say there needs to be a lot more work on positive theories of reason that allow AI systems to do things like this. This is something I would love to see it do: if I ask you a question and you generate an opinion,
then what you ideally are able to do is generate an opinion that is robust to other probing questions. In other words, I can't ask you this question and that question and have you switch every time, right? I ask you these questions and you've either thought about it or your position is robust to it.
AI cannot do that today. - It almost seems like it should have something where it automatically asks itself a bunch of questions before answering, so it's consistent or something. - Exactly. So what I would love for it to do is take a central question and map out these other questions around it, where it's like, yeah, you have to have an answer here, you have to have contended with this.
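As a rough illustration of that idea, here is a minimal sketch of a self-probing consistency loop. The `ask_model` function is a hypothetical placeholder for any LLM call, not a real API, and the contradiction check is deliberately naive:

```python
# Hypothetical sketch: probe an answer for consistency before returning it.

def ask_model(prompt: str) -> str:
    # Placeholder for an LLM call; wire up a real client here.
    raise NotImplementedError

def robust_answer(question: str, n_probes: int = 5) -> str:
    draft = ask_model(f"Answer this question: {question}")

    # Map out probing questions around the central one, as described above:
    # positions the answer must have contended with.
    probes = ask_model(
        f"List {n_probes} probing questions that test whether this answer "
        f"to '{question}' holds up:\n{draft}"
    ).splitlines()

    for probe in probes:
        reply = ask_model(f"Given your answer '{draft}', answer this: {probe}")
        verdict = ask_model(
            f"Does the reply contradict the original answer? Say yes or no.\n"
            f"Original: {draft}\nReply: {reply}"
        )
        if verdict.strip().lower().startswith("yes"):
            # The position switched under probing; revise rather than ignore.
            draft = ask_model(
                f"Revise your answer to '{question}' so it is consistent "
                f"with this consideration: {probe}"
            )
    return draft
```

The specific prompts here are invented; the point is the loop structure: answer, generate probes, check for switching, and only return a position that has survived the probing.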
That's how it works in the real world. Basically, there's lots of context, and a pairing of contexts, to make it work. So we have these pillars of human flourishing. What are you doing to apply these principles? What's the goal? How are you working on it? Okay, so the big thing here is that I talked about how Franklin and Smith and Tim Berners-Lee made a huge difference in having technology bend towards freedom and away from control.
And so the conclusion from that is we've got to cultivate people like that. I call them philosopher builders: people who are, on the one hand, willing and able to contemplate deeply about the ends of the technology, and on the other hand, can actually build stuff. And I think about: do our institutions
prepare those people today? And I say no. You start with universities, right? And I'm gonna make a general statement here; I'm a big supporter of the University of Austin, which I think is an exception, but universities train narrow technicians or conforming ideologues. Tech companies train people who are very good at optimizing the means, not very good at deliberating on the ends. And think tanks produce theorists who don't build.
And so I look at the major institutions that should be producing these kinds of people when we need them most, and I say they're not there. So then I say, okay, how can we do that? How can we cultivate these kinds of people?
And I think partly it's education. I think there needs to be a course, by whatever formulation, that fills the gap, where if you study computer science or you study business, this gives you a kind of scaffolding for some of these philosophic issues, so you understand and pick up that habit of mind. So there's some education there. We did a course at Oxford where, when we talked about collective intelligence, we read Mill,
and then we also had the head of collective intelligence at Midjourney. I think this format, where you're getting a little bit of the foundational stuff and a little bit of the bleeding edge, is really powerful. I mean, the philosopher builder concept, it's very similar to what we're going for at the university. I love it. And you know, I'm curious: Oxford, I would kind of think of it as the doomsayer and regulator place, with lots of nihilists and cynical people. You're kind of going into the heart of the beast there. You think there's a chance to turn around some of these people? Yeah. So Oxford has this thousand-year-old tradition, you know, people like
John Locke and Adam Smith and, you know, a lot of- - There were some really great people there in the 18th century. - Some good people there. - Gotta bring it back, huh? - But it's supremely good for philosophy generally. The second thing is that it's a federal system, kind of like the early US concept, where each college has a lot of independence. And so it's very hard to start things, there's a lot of distributed complexity, but once you start, there's a lot of freedom.
Then I think the other thing is, you're right, Oxford is the epicenter for existential risk, effective altruism. Effective altruism comes from Bentham through Singer and MacAskill. This is a rot in our civilization that's centered there. So it's problematic, but they've shown that it is possible for these academic theories to leap out of academia into tech.
I think there's an opportunity to do that, but with a much broader integration, not narrow utilitarian thought, something more human-flourishing oriented. I just see the people there as people who tear things down. You really can get this done without getting taken out by them there? I think so. I mean, I've gravitated toward that all my life. Setting up the AI stuff within DoD was the same kind of thing. That's fair. It's a tough place. But no, I have great hope. Yeah.
But it's gonna be radical and different. And I also think there's an Oxford-Austin axis that I have to work on, because what I like about Austin is it's entrepreneurs who are unafraid to build outside of the Silicon Valley orthodoxy. And I think that's really necessary right now. There are these strong mimetic bubbles in Silicon Valley where people don't even really think it through; they're busy, and they're like,
all my other friends care about existential risk, so I'm in. And I worry that unless you can get outside of that bubble, you don't really have much opportunity to build. You need an outsider mindset to confront and fix these problems, for sure. For me, Austin's perfect, 'cause in some ways it's the outsider city, but it also has this deeply technical culture and a lot of builders. So we've got to do more here. But the other things we're doing: Tyler Cowen's a founding fellow. I know, he's a legend. Yeah. And
his Fast Grants program is really good at giving young people and brilliant people moral support, financial support. So we launched something like that, called Cosmos Ventures. We back these radical prototypes. They're really cool; they're everything from games to simulations to, you know, market Turing tests.
And then we have this fellowship. Well, at Oxford, I should say, we have the newest AI lab there. It's a human-centered AI lab. - That's amazing. People can apply to that directly? - They can apply, yeah. We have a thing called the Cosmos Fellowship, and so you apply to that. Basically it's a philosophy-to-code model. So whereas most philosophy departments just produce papers, we're going from these ideas about
reason or autonomy all the way through to open-source software. So you want technical philosophers. You want both. I like that. We need more technical people to be philosophers. If they want to work with this, they apply online to Cosmos. Is this only at Oxford, or is there stuff here too? No, there's stuff here too. If you're a Cosmos fellow, you can be in many different places. You could be here in Austin doing things like that. And we're building out more of that in 2025.
Awesome. Well, we started American Optimist, Brendan, to push back on a lot of the cynicism and nihilism in our country. What's the best case scenario for an optimistic future with AI? So I think about our epistemic infrastructure, our
knowledge creation and inquiry in society. I think what AI can do to that will far surpass what Mill or the Enlightenment thinkers could have even imagined. So our ability to create knowledge together socially and advance things like science, or just move towards truth-seeking, AI could revolutionize that. What could the world look like in 10 or 20 years with the right policies and principles in place? Well, I think you could have AI systems generating interesting new experiments, creating
interesting explanations for experimental data that we obtain. I think we knock down problem after problem scientifically, and we just pick up the pace on things like curing cancer. Today, AI can't really do that, right? It can find a tumor in an image; that's something it can do. But to generate new experiments and new knowledge that help us actually cure cancer, I think that's something AI could help us do in the not-too-distant future.
I also think about mass education. So I do stuff with Alpha School; we try to experiment with these AI-based approaches for our own kids. And I think
AI has the potential to scale mass education at very high quality. Can we make people philosopher builders with it, not just teach them math? I think so. I think you can teach them the antecedents of reasoning. I agree. You don't want to just have people do formulaic math; you want to have their curiosity sparked, like a liberal education. I think AI could bring people a liberal education, and it could bring it all the way around the world. So I'm very optimistic about that.
And I think huge economic growth. I mean, we're barely scratching the surface of what we could do if we limited regulation and pulled back on some of the constraints, so that people have spontaneous growth and progress in society. I think AI could help us deliver that. A lot of the builders who I admire most, whether it's Elon Musk, Peter Thiel, or Charles Koch, are deeply enmeshed in philosophy. They're philosopher builders, and they've taken the time to work on society, to make sure our society stays something that's prosperous and innovative. So hopefully you can help us create some more of these people. Absolutely. That's the mission. All right. Thanks, Brendan. Thanks, Joe.