
How to Fight AI Doomers with Gill Verdon

2025/5/28

The Atlas Society Presents - The Atlas Society Asks

People
Gill Verdon
Topics
Gill Verdon: I grew up in Quebec, which is saturated with language laws; the government overreach and bureaucracy there felt suffocating, and that ultimately led me toward libertarianism. I yearned to escape those constraints and pursue unlimited upside. In Canada, once people become demoralized, a kind of "tall poppy syndrome" sets in that suppresses the freedom to pursue ambition. Those who are more ambitious, who want maximal freedom and the ability to capture the value they create for the world, tend to move to the United States. From a young age I distrusted authority and had to rethink everything from first principles, rebuilding my understanding of how the universe works. That experience shaped my deep appreciation for freedom and individual agency, and it drives me to advocate for technological progress that maximizes human potential.


Chapters
Gill Verdon discusses how growing up in Montreal, Quebec, with its high density of language laws and government overreach, influenced his views on regulation and bureaucracy. He felt a yearning for the United States' greater freedom and the ability to capture the upside of one's work, contrasting it with Canada's 'tall poppy syndrome'.
  • Montreal's high density of language laws
  • Government overreach in Canada
  • Tall poppy syndrome
  • Yearning for greater freedom in the United States

Transcript


Hey everyone, and welcome to the 255th episode of Objectively Speaking. I'm Jag, the CEO of the Atlas Society. I'm super excited to have Gil Verdon join us today. He is the founder of Extropic,

a startup tech company working on a new kind of computer chip to make artificial intelligence processing faster and more energy efficient. Also known as Based Beff Jezos on X, Gil is the founder of the Effective Accelerationism, or EAC, movement, which promotes rapid, unregulated acceleration of

technological progress, particularly in AI, to maximize human potential and economic growth. And I'm also very thrilled to announce that he is the keynote speaker at next week's Galt's Gulch in Austin. Gil, thanks for joining us. Yeah, thanks for having me. It's great to be here.

So you grew up in Canada, and historically, ex-Canadians are overrepresented among the ranks of Objectivists. Do you feel that growing up in a place infused with a kind of egalitarian ethic and an embrace of a bigger role for government influenced your later views on regulation and bureaucracy, or did those views evolve over time?

Certainly. I would say, growing up—you know, I grew up in Montreal, Quebec, which has

I guess one of the highest densities of language laws in the world. You know, for example, you're not supposed to learn English too early on. They want to keep you around paying taxes for life. And so they have all these extra laws to regulate literally how you speak or how you write—like what you can put on your menu, on your

packaging, and so on. And, you know, that's just the tip of the iceberg in terms of how you just feel the weight of all this government overreach and this bureaucracy hanging over your head growing up. And that sort of

creates a sort of back reaction eventually. I guess Atlas shrugging? Yeah, that's right, that's right. And so, you know, in my case, that was really feeling my whole life like I belonged more in the United States, you know, ideologically, and yearning to go south. And here I am.

But ultimately, at least trying to escape the sort of trap there in Canada. I would also say that it's not just top-down enforced. Eventually, once people become demoralized, there's sort of no freedom to capture infinite upside. There's a sort of tall poppy syndrome

that seeps in where, you know, if you try to be too ambitious and so on, people are like, "Aren't you happy? Why don't you just like have a median job?" Or, "Why are you too ambitious? Don't rock the boat." And that sort of mentality was definitely not a fit for me. And so it's sort of self-selecting, you know, the Canadians that are sort of

okay with that, they stay there. And then the ones that are more ambitious and want maximal freedom and to be able to capture the upside of what they create, the value they create for the world, they tend to move to the United States, which is the flagship when it comes to freedom and we should keep it that way.

Yeah, agreed. You know, actually, Peter Kopsis, who is one of our trustees here at the Atlas Society, he was one of the co-founders of the Apollo Fund. He and his wife were very ambitious. They wanted not to live mediocre lives. And after reading Atlas Shrugged, that is when they decided to also shrug and come to America.

So let's talk a little bit about quantum computing. How did you get into it, and even move beyond it, before most of us even knew it was a thing? Yeah, it's been quite the journey. So originally I was trying to understand the universe. You know, growing up, I guess,

In Quebec, there's sort of echoes of a Catholic authoritarianism. In general, they tell you how you're supposed to speak and what you're supposed to think. And for me, I think there was a big back reaction. I didn't trust authorities for answers, and I had to rethink everything from first principles. And I wanted to learn the first principles and reconstruct how the universe works

for myself, right, so that I can trust in my first principles inference. And so that sort of, again, I would say due to where I grew up and sort of my back reaction to sort of authoritarian thinking, I wanted to understand how the universe works. So I became

a theoretical physicist. I was working on black holes and quantum information, quantum cosmology. How did the universe begin? How may it end? What are sort of the limits of physics? What are the limits of space time? And naturally, I started trying to understand how nature computes and viewing the universe as a computer.

and to me, this is the most promising path to understand all of physics through a single lens. It's the "it from bit" school of thought, now called "it from qubit."

I think there, naturally, there was a bridge to try to reverse engineer nature and work with computers that are computing in a way that is physics-based. So a quantum computer is a computer that leverages quantum mechanics to do certain operations.

And essentially it allows us to understand pieces of nature that are operating quantum mechanically. So they're not deterministic, and they're not merely in a probability distribution over states—they're in a superposition. You could think of it as parallel universes.
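To make that distinction concrete, here is the standard textbook notation (added here for clarity; the formulas are not from the episode). A classical probabilistic bit is just a probability distribution over two states, while a qubit carries complex amplitudes that can interfere:

$$\text{p-bit: } (p_0, p_1), \qquad p_0 + p_1 = 1,\; p_i \ge 0$$

$$\text{qubit: } |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,\; \alpha, \beta \in \mathbb{C}$$

Measuring the qubit yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$, but before measurement the amplitudes, unlike classical probabilities, can cancel or reinforce each other.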

And so to me, more precisely, I was kind of a specialist in quantum AI, or quantum machine learning—arguably one of the pioneers of the field. I did some of the first algorithms, which later got me noticed by Google. And there, the idea was to take inspiration from how black holes compress information. They are the most efficient compression

in the universe, and essentially: could we inspire ourselves, in terms of algorithms we could run on a quantum mechanical computer, to have an optimal compression algorithm? And another name for AI is machine learning.

And you could phrase most of machine learning as learning compressions—compressed representations of the world, right? If you installed the ChatGPT or Grok sort of model, it would maybe, you know, fill up your hard drive, but essentially you'd have an approximate backup of the whole internet, right? And so that led me down the path to

pioneering AI on quantum computers and eventually getting approached by Google and going to build a product there known now as TensorFlow Quantum and later on working for Sergey Brin on all sorts of special projects, including quantum sensors and quantum internet. Over time, I realized that quantum technology was sort of

a technology similar to nuclear fusion. You know, there's a sort of break-even point we call fault tolerance, where what you get out is more than what you put in

in terms of computation. And I see a path for that, but that path was on a far too long time scale for my impatient self. And I ended up jumping to, if you will, detecting that there's an opportunity in something akin to nuclear fission versus nuclear fusion, something we can do right now that arguably is more scalable immediately.

And gets us a sort of energy density gain that is similar to the energy efficiency and density gain of going from TNT to the nuclear bomb. And so that is what I set out to do.

three years ago, approximately. And now we are having the first results and are scaling our approach of what we call thermodynamic computing with my company now, Extropic.

And I'm sure we'll go into more detail. We're going to get into that in a minute, but I don't want to leave Google, because you arrived at Google in 2019. That was about a year after James Damore became one of the first very high-profile victims of cancel culture,

after he was fired for writing the fateful Google memo, officially titled Google's Ideological Echo Chamber. Are you familiar with that episode? And how did it square with your experience at Google? Yeah, I mean, you know, Google is a great organization. I don't think it's homogenous ideologically, but certainly having an engineer try to point out

something in statistics and having an opinion, and him getting completely canceled, was sort of a warning shot for anybody else who would try to voice an opinion that wasn't the median or the mode of the population within the walls. I would say that just created a sort of shadow network of engineers who felt that

some things don't make sense—some things that are prescribed top-down don't make sense. And it's not just Google; I think there's kind of an ideological capture that we saw across big tech players, across the board, I think, in the mid-2010s. And again—

to me, it's just always suboptimal whenever there's one culture that captures everything and is sort of self-reinforcing, and there's no ability to discuss ideas. I personally was more interested in the discussion, internally and externally, of: should big tech and the top AI institutions in the world work with the government and defense sector

to put the right technology in the right hands so that we have national security. And that was also something where there was, again, inhomogeneity within the organization. There was Project Maven 1.0, where Google tried to work on AI for defense.

There was, you know, the sort of camp that cancels people; they tried to cancel Google itself, I guess, and sort of walked out. And to me, I thought it was very unfortunate, because the premier organization in the world for AI research—I would say, you know, the mid to end of the 2010s

was the golden era for Google. Everybody who mattered was there for research, and that's what brought me there. And I had a great time research-wise. They invented the transformer, which led to the prosperity we see today, amongst many other things. But to me, the fact that there was an ideological sort of echo chamber, and that, for example, it converged that we can't discuss

various social things or various national security interests, and we weren't mature enough to have an open discussion, and there was not a free market of ideas internally—I thought that was suboptimal for the growth of the company, right? And that's just things on the outside. I think on the inside, in general, this happens a lot: opinions sort of crystallize across allowed opinions, and it could be across technical opinions as well.

I would say big orgs tend to suppress ideas at scale, because if they went with the idea of every smart engineer they hire—they have 100,000-plus—they'd go in all sorts of directions. But then sometimes they overdo it, right? And sometimes they even miss

big opportunities like AI for coding, Google invented the transformer, they could have captured the market, they didn't, and that was an artifact of sort of a bubble of opinion. So it's not just social downside, there's actual shareholder value impact.

to having echo chambers. And so to me, it was just a lesson that actually having a free market of ideas and being very open and encouraging a variance of ideas to flourish and be considered was paramount to the functioning of any society or large organization, right? Google itself is a giant org, it's a sort of microcosm, it's a bubble in itself.

but for it to function well, it needs a free marketplace internally. And so certainly that episode and my experiences, again, more on the defense side, I think opened my eyes to the importance of culture. And that's kind of what led me to

start voicing initially opinions or ideas and prototyping them anonymously and eventually sort of starting this sort of movement for a free marketplace of ideas and freedom of speech, freedom of thought.

freedom of—and we'll get into the AI part as well. But that was the EAC movement, which, you know, I started pretty much after leaving Alphabet. Yeah. So you and I met last fall at the XPRIZE Visioneering Summit. Peter Diamandis, of course, has been a guest on this podcast and an honoree at our previous gala as someone who has

read Atlas Shrugged five times and considers it his Bible. Now, at the summit, you were pitching a prize to solve the challenge of making computing more efficient in order to meet the coming energy demands of AI. So tell us a little bit about your vision for that and how it ended up. Yeah, I mean, originally—

Again, I think even for technological progress, getting caught into

a local mode of thinking prescribed by authorities or establishments is a bad idea. And I think we've built all of this technology based on silicon operating deterministically for many decades now. And so there's a lot of inertia and skepticism that any massive disruption could just surprise us and be around the corner.

And so for me, it's been an exercise in having an extremely contrarian thesis, but again, derived from first principles of physics. And then, you know, taking a lot of heat for having such a contrarian thesis and then now starting to show we're correct. And it's been very validating and, you know, the benefits...

to society are going to be massive as long as we keep scaling. And so in our case, we looked at, essentially: what is a computation at the end of the day? It's a thermodynamic process. You have some distribution over inputs, and then you have a distribution over outputs. And you could phrase many, many algorithms that run the internet this way, including AI algorithms, right?

And because this process is between something that's probabilistic, that's not a deterministic input and a deterministic output, you can run computations probabilistically. And if you look at the

the physics of electrons and of matter, you know, when you go small enough, things are sort of jiggly and they're non-deterministic. You don't know where each particle is, right? And usually that's a problem for a deterministic computer. So we like to filter out that noise and that costs us a lot of energy. But instead we decided to start using that noise in order to run the algorithms more naturally.
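As a rough software illustration of that idea—a minimal sketch with a made-up two-bit energy function, not Extropic's design, which realizes this in physical circuits rather than code—here is a Gibbs sampler where injected randomness plays the role of thermal noise:

```python
import math
import random

# Toy Ising-style system of two "probabilistic bits" with spins in {-1, +1}.
# J and h are arbitrary illustration values, not parameters of any real chip.
J = 1.0   # coupling between the two bits
h = 0.5   # bias on each bit

def energy(s):
    """Energy of the two-bit state; lower energy means more probable."""
    return -J * s[0] * s[1] - h * (s[0] + s[1])

def gibbs_step(s, beta=1.0):
    """Resample one randomly chosen bit from its conditional Boltzmann
    distribution. The random draw stands in for physical noise."""
    i = random.randrange(len(s))
    s_up, s_dn = list(s), list(s)
    s_up[i], s_dn[i] = +1, -1
    # P(bit i = +1 | other bit) = 1 / (1 + exp(-beta * (E_down - E_up)))
    p_up = 1.0 / (1.0 + math.exp(-beta * (energy(s_dn) - energy(s_up))))
    s[i] = +1 if random.random() < p_up else -1
    return s

# The empirical distribution of visited states approaches the Boltzmann
# distribution exp(-beta * E) / Z: the noise itself does the sampling.
state = [random.choice((-1, +1)), random.choice((-1, +1))]
counts = {}
for _ in range(10_000):
    state = gibbs_step(state)
    counts[tuple(state)] = counts.get(tuple(state), 0) + 1
print(counts)  # the lowest-energy state (+1, +1) should dominate
```

The design point the analogy is meant to capture: a deterministic chip spends energy suppressing exactly the kind of fluctuation this sampler has to generate with a pseudo-random number generator, so hardware whose devices fluctuate natively can skip both costs.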

And the energy efficiency gain you get, and the density gain—even spatially—is over 10,000x, right? And that's pretty massive. Again, that's on the order of going from TNT to the nuclear bomb. So I'd like to say we're sort of a Manhattan Project for AI. But again, I thought that, you know,

I didn't think a big lab or a government lab would move fast enough to execute on this technology and decided to do it in the private sector. And I guess I can show our progress so far. It's been a few chips now, but we've now achieved... Oops, maybe... There you go.

We have a chip that is in silicon, operates at room temperature, and is a computer that's between 1,000 and 10,000x more energy efficient for these probabilistic algorithms that underlie so much of what we do. And so to us, this is the most important thing we could be working on, because

It's one thing to produce more energy, which is what I talk about with the EAC movement. We're climbing the Kardashev scale. We'll get into that. But it's another thing to make how you use the energy and turn the energy into intelligence or the energy into value more efficient. And that's what we're solving. And so... And if we don't solve this problem, like I remember there was a debate, you know, it's a competition. Somebody says we need to solve...

you know, ovarian cancer; somebody says we need to save the whales. And you were saying that, basically, unless we get this done right, we're going to be limited in how we're able to solve America's and the world's other great challenges to humanity. Really, it's kind of—we're solving the problem that solves other problems, right? If you have really potent artificial intelligence, and you have

a lot of intelligence per watt, then you can just apply that intelligence to solve your other problems. From a point of maximal leverage,

This is the problem we should focus on in order to solve the other ones, right? And that was my thesis. And, you know, I think I was very correct when pitching at the XPRIZE that the market pull is so strong for artificial intelligence that we were going to start building

massive nuclear-reactor-based buildouts for these AI clouds. And that's been the case. You just keep hearing news that more clouds with tons of reactors are getting built. And to me, it's good. It's good that we're creating more energy. But at the same time, it's not the most energy-efficient way to do it. And again, it's not just a question of, like, the environment. It's also, just from a

return-on-invested-capital standpoint, it just makes a lot more sense when you invest in buildouts: if you're spending a trillion dollars, you want it to last, and if, five years from now, there's a technology that makes your current technology somewhat irrelevant, then that was a risky investment. I think, both from a philanthropic standpoint—

making sure that, you know, intelligence is abundant and cheap for everyone and everyone has access—but also, and I'll get into it, I think, in order to avoid having an over-centralization of AI, and AI being controlled by very few parties that can then prescribe what we think and what we say, a densification of intelligence in terms of energy efficiency and spatial density is necessary in order to maintain individuality in the

era of AI—in the era where we augment our own cognition with external AI agents. We need to have personalized AIs that we own and control personally, that are an extension of ourselves, rather than one centralized cloud that runs all the AI for the world. And then whoever is in the background can,

you know, tilt the distribution of our thinking in much more subtle ways that are almost untraceable. And to me, that leads to sort of top-down authoritarian thinking. And, just from a pure power-seeking standpoint, whoever is in control of such a system will convince people,

you know, to converge onto sort of a collectivist mindset. And then, while everyone's kind of fighting for scraps, those in control would have a lot of power and wealth, and they'd suppress the free market of ideas, which is really the sort of error-correction mechanism against tyranny, and all that would be gone, right? And I think this is something that also resonates with Elon. It's why he

started OpenAI as open, and now has started xAI. We can get into that. But, you know, I really believe that we're solving the hardest part of the decentralization of AI, which is the densification of intelligence at the hardware layer.

Yeah, so Elon Musk interacts with your Based Beff Jezos account regularly. I don't know if yesterday was a record, but it was pretty intense. And then Marc Andreessen, my neighbor here in Malibu, is another account that regularly interacts

with you. He's also a big fan of Atlas Shrugged. He described this chilling meeting that he and his colleagues had with the Biden administration's AI advisors, in which the latter shared a vision of having only a few big AI players, in coordination with government, controlling the future of the industry—one

in which competing AI startups would be severely curtailed. He left that meeting and immediately decided to endorse Donald Trump. Do you share Andreessen's perspective on the dangers of this overly intrusive centralized government approach when it comes to a flourishing AI environment?

Yeah, absolutely. I would say that the previous administration's stance towards AI was kind of my personal key issue that drove me to leaning towards the current admin. And I would say that, yeah, I mean, that is one of the key issues that I've been fighting against with

EAC. It's essentially that we were seeing a sort of convergence of centralization of AI, and some corporations wanting to sort of merge with government. And that was very risky to me, because, again, if the only people allowed to have AI

were the government and a few corporations that are incorporated into it, then that leads to tyranny. It just does. And that's too great an opportunity for power-seeking folks. And then

the solution to that is to keep AI power decentralized, and for the power of the individual—how much AI an individual can purchase and control—to be sufficient, so that there's a sort of deterrence, right? Similar to the Second Amendment: there's no absolute monopoly on violence. There is a backup; if people own their own

weapons, they can defend themselves against tyranny. But I would say that violence in our age has been largely intellectualized or virtualized, and it's more how you can control people. And if you have an ability to predict people's behavior, you can steer them.

And you can engineer control signals, or things you tell them, to steer them towards a certain outcome. And if individuals didn't have AI to help augment their cognition and their individual skepticism, and you just had the augmented capabilities of centralized agencies and governments to subvert people—if those capabilities were jacked up to 11, and individual capability to resist,

cognitively were not, then that would lead to really bad outcomes. And I think Marc Andreessen was aware of this. I would say Elon was aware of this as well.

He instantly created a competitor to make sure that there's no one player that runs away with the whole pie. And hopefully that chapter of some of the AI companies trying to be crowned

kings too early in the game is over, and now it's just a free market, which honestly has been great so far, right? Essentially, our thesis won. There have not been only a few companies allowed to do AI; many companies are allowed to do so, again, because of our efforts. Elon and Marc are kind of those that spearheaded things in government. I've just been on more of the grassroots side.

But essentially we've had a free market competition and you have all sorts of AIs with different cultural biases and the technology has just been accelerating and getting cheaper for everyone to access and everybody benefits, right? Instead of only a few using the technology and centralizing it to consolidate their power. - Yeah, so let's get back to EAC and effective acceleration.

To what extent did you see this as a reaction to the effective altruism movement that Sam Bankman-Fried and others had started promoting? Yeah, so effective altruism is a sort of funny movement. It's essentially...

hedonic utilitarians. They try to maximize hedonism, how good people feel—or, well, not just people, sometimes shrimp as well, for some reason. But essentially they have this weird moral framework that yields really

suboptimal optima. Like, if you try to maximize hedonism, you can converge onto sort of wireheading, right? So you're just, you know, in a VR world or in a simulation, in this nirvana forever; you're not a productive member of society. But overall, I think EA was starting to find ways to capture capital in any way, shape, or form from the free market. So they're trying to deform the free market.

And then reallocate it to these sort of nonprofits. Again, it could be for mosquito nets or shrimp farms or weird stuff. But eventually they ended up all concentrating, you know, 95% plus, don't quote me on that figure, but like it was very, very much most of their portfolio. They concentrated it on AI safety and sort of regulatory capture for AI, which is really just

trying to put themselves in power. Their whole thing was essentially: AI is dangerous, we need responsible adults in the room to control it, we're going to be the responsible adults, put us in charge, right? And they would fund all these think-tank-like organizations that then would become arms of the big labs to sort of continue this

fearmongering and spread what is called AI doomerism, which is, I would say, a spin out of EA, effective altruists. And so we just saw this whole really well funded complex that was converging us onto a very bad outcome. And so we had to start the resistance to that movement. And that's originally how EAC started. It was antithetical to EA in some ways, but really that was kind of just

the first battle in general, right? Like, this sort of enemy, if you will, creeps up under a different name all the time. But essentially, you know, EAC was there to fight for freedom, freedom of speech, individuality, individual agency, celebrating individuals, and to make sure that there's no sort of

massively oppressive government or weird complex that restricts your ability to be productive and over-centralizes power. And so... Yeah. So how did you—you know, it's not just you, it's a crew of people and allies, some of whom have pseudonyms. I don't know if others have been outed as you were. So you might want to talk about how that

felt at the time and how you decided to lean into it. Yeah. So, I mean, you know, the movement gained quite a bit of influence, you know, in Silicon Valley, and it was starting to, in the shadows, get some influence in Washington. You know, for example, I think, you know, we were opposing the previous executive order on AI that was going to really kill open source AI.

But essentially, for some reason or another, I got doxed by reporters. And really, I think their goal—because I was becoming a problem, because I was kind of

getting people to rally around this cause, and that was an impediment to the over-concentration of power—I got doxed. And essentially, you know, there was the traditional media pile-on; they would fabricate and try to associate me with all sorts of movements I'm not associated with,

and/or deform the message, or try to clickbait—like, you know, "this man is building Skynet," or something ridiculous. And so that was a lot, right? That was a big change in my life, going from some scientist in a room advising all sorts of important people—but usually I'm kind of, you know,

the asset in the background to front and center and, you know, having my face plastered all over the timeline and getting, you know, 100 million views a month. I think, you know, that was a big change. But now it's been a year and a half since the doxing. And I guess, you know, I've gotten used to it. But essentially, I wanted to...

turn this attack against them. If you have an anti-fragile mindset, you can turn any sort of adversity into some upside. And so I used it as an opportunity to go on podcasts, spread the message further than it could have ever gone, and also leverage it to acquire talent for my company and raise awareness about this important challenge we're pursuing with Extropic.

That helped us hire some of the best, and the fact that we've achieved these results in such a short time since is a testament that sometimes getting more attention can be useful to get the best talent and to move the ball forward for civilization, technologically.

So here at the Atlas Society, we are huge fans of Andreessen's techno-optimist manifesto. Curious whether you had any role in that or was it maybe just indirect or?

What can you tell us? Yeah, I mean, so Mark was a big fan of EAC from the beginning or from, I guess, pretty early on. And essentially, you know, we were corresponding for quite a few months, very actively kind of fusing our

views on the world. And that was the time during which he was writing the manifesto, and I am, you know, one of the first cited influences there. So, more in the background; but I would say it's very much a version of the

manifesto that's maybe less cosmic-scale—as I tend to think as a physicist—and more practical, with more immediate policy prescriptions. And I fully endorse it. I consider myself a techno-optimist. And so, to me, it's what I saw in terms of

you know, how ideological capture happens: you have a meme or an idea that spreads, and then it just keeps mutating, and then it comes up under different names, and that's what we're fighting. And so the intent with EAC was always to

spread a sort of central meme, or complex of ideas, and for it to mutate into several forks, and then for those to have influence. And then it's much harder to take down something with several heads than just one. They did try to take me down and take down the central branch, but by that time it had already forked, right? And so it's sort of compartmentalizing memetic brand risk.

And so techno-optimism is an example of something very akin to EAC that's maybe more professional, less from some weird corner of the internet, and can be marketed in Washington. And now the Vice President has literally said he's a techno-optimist.

And so I think, in terms of influence, it's been really great. And I would say that this administration, at least from what they say, has been very supportive of our

requests for policies that maintain American competitiveness in AI, right? And maintain openness. So, speaking of the cosmic scale: tell us, explain in layman's terms, what is the Kardashev scale, and how can we climb it?

Yeah, so the Kardashev scale is a set of milestones that are on a log scale, so they're exponentially spaced. They're milestones to track how big our civilization is and how much energy we are producing and/or consuming. There are three big markers on that scale, and then you can interpolate

on the log scale, so it's a sort of linear interpolation. Kardashev Type I is essentially: we produce and leverage as much energy as there is impinging upon the Earth from the Sun.

And so that is a certain amount of watts that is pretty massive. I think we're at barely 1% of Kardashev Type I—not on the log scale, just in terms of raw power. Kardashev Type II would be leveraging the equivalent of all the energy that is being radiated by the Sun—again, it's not energy but wattage, so power. And then Type III would be the entire galaxy, right?
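For reference, the continuous interpolation he's describing is usually written with Carl Sagan's formula (the formula itself isn't stated in the episode):

$$K = \frac{\log_{10} P - 6}{10}$$

where $P$ is power in watts, putting Type I near $10^{16}$ W, Type II near $10^{26}$ W, and Type III near $10^{36}$ W. Humanity's roughly $2 \times 10^{13}$ W works out to $K \approx 0.73$—consistent with being a small fraction of the way to Type I in raw power.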

And so, essentially, in my studies of thermodynamics—and more precisely stochastic thermodynamics, which is the physics underlying life itself—I saw that actually there is a sort of

Darwinian-like natural selection over all of matter—I call it thermodynamic selection—and the sort of fitness function is whether or not the system has dissipated more heat,

which is really weird. But it also tells us that the odds that you fluctuate back to zero—like, the system completely dying—get much smaller as the system gets bigger.

And that makes sense. And so to me, I saw that as: oh, this is fundamental to life. If we get bigger as a civilization as measured by thermodynamics—so, our consumption of energy—that will ensure a lower likelihood of the destruction of this phase of matter, or the extinction of civilization. And so I felt like we had a responsibility to scale up the Kardashev scale. And to me, that's the key issue, and the one metric we should strive

to improve for our civilization, because unfortunately GDP and capital are hard to track, right? They're imperfect, and money sometimes gets inflated away or printed. It's not an objective metric, whereas energy—

you can't—yeah, a joule is a joule wherever you go in the universe, and so is a watt, and so on. To me, that was just a better thing to optimize than, you know, hedons, right, or hedonism, which is completely subjective and leads to weird optima. And so, you know, now, I guess—

I mean, this has been Elon's whole thing, but I guess we merged memetic complexes, and now he's very focused on climbing the Kardashev scale as the key issue. For me, what will accelerate our ascent is creating a way to convert energy into value as efficiently as possible. So you get more value per unit of energy, and that's going to increase the demand for energy,

and thus create a sort of positive pressure to climb the scale as a civilization. And so for me, creating this technology that's the most energy-efficient way to convert energy into intelligence—a sort of steam engine for intelligence, operating at the limits of thermodynamic efficiency—was the way to create that pressure to climb, but also creating this social movement to raise awareness that this is the key issue we should all be

aligned towards. And it's naturally something that free markets optimize for, right? Free markets select for organizations that utilize their resources in a way that maximizes growth. To me, it's like literally

a fundamental algorithm that leads to self-assembly of complex systems that have emergent properties that are optimal. So our bodies are kind of a free market

of cells and they all kind of have some coupling with one another. They have some interchange chemically, right? They have a sort of chemical and energetic economy, but then the emergent property is this functioning organism that is you. And I view sort of capitalism itself as an AI-like or physics-like algorithm that is far more

efficient at capital allocation and growth than any sort of top-down prescription or top-down control. Imagine a human trying to design every cell in your body—we wouldn't be able to do so. We wouldn't be able to design ourselves. And so it's a lesson in humility. I don't think any one committee can design a whole complex system, but a complex system can design itself

from self-assembly, and it does so by constantly competing, having the freedom to explore, and optimizing for growth. And so, yeah. I'm going to get a rebellion on my hands if I don't get to some of our audience questions. So let's try a few. My Modern Gal—always great to see you—is asking: Gil, what are your thoughts on the risk of disinformation and misinformation online today? Do you think AI opens up new risks

that we haven't yet accounted for, like AI-generated audio or video? Yeah, I would say, I mean, AI-generated audio and video is already here. I would say that, again, you know, if you have a sort of symmetry between the side generating and the side discerning in terms of capabilities, you know, if you had your own

AI assistant that you trust, and you own it, you control it, that tells you whether something is real or fake and can detect it and augments your own cognition instead of putting more cognitive load on you, then that's fine. You just want symmetry in terms of capabilities. I would say trying to suppress these capabilities is not the way forward. I think there's a lot of upside

left on the table. And really, you know, everything's always an acceleration; it's a race in terms of capabilities. And again, as long as it's not just the government that has access to these tools and can then generate propaganda that's so good you can't even discern if it's real, because you don't have access to AI you control—

you know, that would be really bad. That would be the main risk to me. But if, peer-to-peer, we have the ability to generate and discern—you know, just like humans: a smart human can tell you something that's completely false that updates your world model, and you either have the cognitive security, or you augment your cognition with a group of people you can talk to, and it's like, is this real? Is this correct?

There are sort of peer-to-peer ways to validate information. I think, if everybody has access to more intelligence, then we'll be able to sort of collaboratively filter things, either between us and our AIs, or collections of subgroups of AIs of people you trust and whose values you feel aligned with, right? And I think that's the future. That's how it's been

since the dawn of civilization. And I think there's been an increase in capabilities for generation, but

there are also increased capabilities for discernment of truth. Okay. M asks: when do you estimate launching or shipping the first commercial version of your product? Yeah. So this chip that I just showed, we're packaging into a development kit for sort of enterprises and innovative startups, and maybe a handful of individuals. It is a small batch. It is just our test chip

that we're aiming to put in the hands of the first customers by the end of summer, which is very exciting. In terms of timelines, to go from concept to a prototype delivered to customers' desks in three years is pretty great. Next year is when our million-probabilistic-bit chip launches and should be widely accessible.

And that chip is a proper product, not just an experimental development kit. For those trying to get ahead, though, depending on the org, already starting to experiment with thermodynamic computing is sort of essential, because the disruption is coming next year, and they need to get ready in terms of how this affects

their algorithmic stack. Whether you're in finance or defense, obviously it's mission-critical for you to have the most cutting-edge capabilities. And then if you're in general AI, obviously there's a free-market competition there that's heating up, and so any advantage you can get, you should take. And so,

yeah, if anybody wants to use the dev kit early, we have a signup form, and we're going to put out some software more in the open first. And then the hardware—because we just didn't do a very large run of these first chips—is for between 200 and 1,000 early customers. But you can apply on the website, extropic.ai; there's a signup form for those interested. Great.

Yeah, so I have another kind of comment here from Kingfisher. He says he's bullish for AI, but he thinks people are overhyping what AI is currently capable of. Too many think it's the be-all and end-all. How would you respond to that? Yeah, I would say the current AI capabilities are not the end game. And I think, you know, I think calling...

a human-like AI, or human-level AI, "AGI" is very short-sighted. I compare it to geocentrism, but in the space of intelligence. I think human intelligence is a mile marker on our ability to understand the complexity of the world

and predict it and steer it. I worked on AI for physics, which is much harder than emulating a human. So understanding matter, biology—I was working on generating quantum matter, superconductors, and esoteric materials using AI. I think it's going to keep going. I think current systems are not human-level, and they can

emulate what a human would respond, but they don't yet have agency: they don't have the ability to have curiosity, to seek out new information, to decide whether to explore or exploit. And so right now we just have sort of raw intelligence—raw compression ability to predict the next token in a sequence—but we don't have that sort of agency. And so right now, whoever

leverages AI and becomes sort of the source of agency for a fleet of AIs can create products that generate a lot of revenue and really impact the world in a really positive way. I encourage people to do that. And everybody has the ability to do that. Even if you literally don't know how to code, you can just ask the AI to code for you nowadays. And so really,

human agency plus artificial intelligence right now is sort of a golden period. Eventually, will we figure out agency for AIs? Yeah, probably. I think we're going to need way more compute than we have right now, and that's what I'm trying to bring forth. But really, my goal with this form of computing

is not an anthropocentric goal. I'm not just obsessed with trying to automate humans. That's not my goal. My goal is to understand the physical world in order to increase our ability to expand to the stars, right? And so I'm really targeting, can we use AI to understand our biology and control it, to simulate it, and eventually to help us with problems of material science and

all sorts of scientific breakthroughs, which I think would be beyond the reach of any human that's ever lived in terms of cognitive difficulty.

And so I think having the symbiosis between AI and humanity is going to be really important. But I think those that are closed minded to leveraging AI are going to be left behind. And those that are open minded and leverage it are going to do very well. And that's why I feel responsible to spread this message that you should

try to embrace AI because there's a lot of upside in it for you and your descendants.

So we are super excited to have you as our keynote next week, actually a week from tomorrow at Galt's Gulch in Austin. I know you've been busy changing the world and as we're hearing right now, getting your product ready to ship by the end of the summer. Have you had a chance to read any of Ayn Rand's literature? Because a lot of what you're describing seems pretty in line with some of her ideas.

Yeah, I really should finish what I started there. I think I started some of the audiobooks, but haven't quite had time to finish them. But there's something as well, I think, that's very validating about coming to it—I came at it from first principles, from my own path, of course, through, potentially, culture, and—

obviously, I've been exposed directly or indirectly to Ayn Rand's ideas. Maybe that seeded some of the ideas that then became EAC. It's hard to trace. Again, it goes back to what I was talking about with memetic complexes. But I think in my case, the fact that I came and converged onto similar principles from my own perspective and

journey is really validating for, you know, Objectivism and that set of principles. And again, for me, it came from a journey trying to understand the physics of the world, the physics of complex systems, the physics of capitalism, the physics of society at large. And to me, this seems, you know, frankly optimal. And so, you know, I guess,

there are kind of two schools of thought, let's say, in academia, right? One is you do the literature search first, and then you feel like everything has been done, and then maybe you get dissuaded from exploring a set of ideas. Another way is you go forth and build the idea out, and then you see after the fact if there's existing literature that's strongly overlapping. I think there's something about

just these sort of creative spurts: if you feel like it's an original idea, then you're going to be more excited. I would say there's probably a lot of overlap with Rand in what I've been a proponent of. But yeah, I definitely need to connect the dots, probably looking backwards. But I will work on reading Atlas Shrugged, definitely. Or even just Anthem, because that was all about—

She had a very unique vision. She actually published Anthem, which is her dystopian post-apocalyptic novella, 12 years before George Orwell's 1984. And while a lot of these dystopian writers of the time saw that this...

totalitarian future was going to be technologically advanced, in Rand's telling it actually became more medieval, more primitive, because they did not understand the value of individualism and freedom. And they started with, as you experienced in Quebec, control of the language. So the word "I" has been abolished and lost, and that's to control people's thoughts. So—

Well, we're also going to be doing mentoring roundtables at the conference. I'm not sure how much time you'll have with us, but maybe if you could just tell us now what kind of advice you would have for young people who want to live a life of achievement and productivity and meaning, with all of the changes that are coming at us as a society at warp speed.

Yeah, I would say, you know, you can learn from people that have done things, but no one is a central authority for everything. And you should, you know, pick and choose advice from several parties. But you should ideally derive your own worldviews from scratch, derive your own set of

of values. Obviously, you know, you can take inspiration from Objectivism and from what we've been saying, but you should converge onto a set of values that you convince yourself are your own, from scratch, ideally, because that's very robust to other people trying to influence you, right? Whereas I think those that converge onto collectivism tend to defer their cognition to the group,

And what they don't realize is that they're giving up a lot of power and control and agency and also cognitive security by doing so. And if you're just believing what is prescribed to you to believe, then you will likely not have a great life. Or if you do, maybe you'll be...

You don't know the life you're missing out on if you do so. And I would also say: don't take no for an answer. You know, we have this saying on Twitter: you can just do things. And it's really true. Some people will tell you you can't do it, but you can be like, really? Why not, though? And then you keep going, right? I think for me, it was—you know, I was in Quebec, and I was like, hey, I want to be a theoretical physicist, right,

in the best schools in the US. And then after that, I wanted to be a quantum computer scientist, and the physicists were telling me, what do you mean? What are you doing? And then when I was a quantum computer scientist, I said, I want to start a new paradigm of computing. They're like, you're crazy, you have a great thing going on. And so people will tell you—

whenever you have really high agency, they'll tell you you're crazy, or you're taking too much risk. But that's usually the direction you want to go, because people who want to keep you in their priors or keep you constrained will usually indicate the gradient of lower risk, and you should take more risk. I think the highest risk is to take

no risk. I mean, this is common advice, but I really try to live by that. Anyways, yeah, thanks so much for having me. Yeah, absolutely. It aligns very much with the kind of ethos here at the Atlas Society, of our open Objectivist approach, which, I remind people: no one can think for you, not even Ayn Rand. So thank you, Gil. I'll see you next week. Very much looking forward to it,

being back in person again. And thanks, everyone, for joining us today. Be sure to join us next week, when we'll be in Austin. I'm going to be interviewing author Jimmy Soni to talk about his book The Founders: The Story of PayPal and the Entrepreneurs Who Shaped Silicon Valley. We'll see you then.