The standards for software, even for a startup, are much higher now. You just can't have shitty software, even if you solve some unique, novel pain point. Customer expectations are very high, and that's going to pull the field forward. Welcome to Manifold. My guest today is Omar Shams. He is an AI founder who recently sold his company to Google. His position at Google currently is as a lead on AI software agents.
And I'm really looking forward to talking to him because he's not only a leading thinker and researcher in AI, but he has a background in theoretical physics. So, Omar, welcome to the podcast. Thank you for having me, Steve. And we are broadcasting from the Equinox Hotel.
in Hudson Yards in beautiful New York City. Omar splits his time between San Francisco and New York, and I'm here giving a talk on AI. So we're lucky to be able to get together. Thank you. Thank you. So Omar, let's start with your background. So you went to Carnegie Mellon, studied math and physics, and then you went to grad school at Rutgers to study string theory.
And so talk a little bit about what the world looked like to you when you were in your early 20s and when you actually started thinking about AI separately from physics. Yeah, great question. So I think much like you, my first love was definitely physics. I was obsessed with that. I remember...
reading about the twin paradox in high school, actually, when I was 15. And I remember looking at this little cartoon in the back of my physics book. And it had one twin go out and come back and then
his twin was much older. And I remember thinking like, this is made up, right? Like this is a joke or something. And then I went to my teacher and I was like, hey, you know what? This isn't real. And he's like, no, it's real. And I was like, what? And it was like finding out that magic really exists. That was my honest subjective perception. And I was like, okay, I must learn magic. And so I did that for a good 10 years.
My specialization at the time, and I was a Tom Banks student, and I ended up dropping out ABD, all but dissertation, was holography. And Tom, and I think Tom still works on this, is using non-commutative geometries to reconstruct things like space-time. Could you go from a non-commutative geometry where space-time becomes actually this emergent thing? And the specific work I did was
doing some stuff on, which I barely remember anymore, conformal Killing spinors. Spinors are kind of like the square root of vectors, right? And using that to construct the SUSY algebra. SUSY means supersymmetry for our listeners. Yeah, sorry, sorry. SUSY meaning supersymmetry, which is an extra symmetry that these spinors, which again are kind of like the square root of vectors, furnish.
And then kind of towards the end when I was about to drop out, I got really into biophysics. And people, I need to be careful because people mean a lot of things when they say biophysics. So lipid physics and kind of the structure of that. I was more actually on the genomic side. So I did a little bit of genomics. And this is kind of...
not that surprising anymore. But at the time, you know, one of the very first things I did was, okay, let me do a PCA on this human mitochondrial DNA. I think that was definitely a big part of it. Like seeing the power of, you know, machine learning techniques on these kinds of data. Then as an undergrad, I also did some lattice QCD with, I believe, Colin Morningstar. Yeah, I know Colin. Oh, you know Colin. So I spent- And Tom too. Yeah.
You know everyone. So I did that for a summer. I did a nice summer with Colin as an undergrad. So that was probably my introduction to the field. But really my main introduction to doing this was my first job where I built a music recommendation engine as part of this small company called Hi-Fi, which later was acquired by Block. Okay, let's... Before we...
let you leave physics. Let's dwell on physics just for a little bit. I'm really struck by the story that you just gave about special relativity in high school, because I've had the same thoughts that, you know, when you're a kid, if your brain is wired the right way, even just knowing a little bit of algebra, you can rederive the Lorentz transformation, special relativity. It doesn't require any more than simple algebra, algebra two level stuff. And
I actually have an old blog post where I say something like, how can a kid be well-educated in the United States and not have been exposed, at least briefly, to the ideas of special relativity? Because it's such a glamorous thing, like Albert Einstein. And then, like, how could you not suddenly get interested in physics when you realize some very simple –
empirical input, like, wow, the speed of light looks the same to anybody, no matter what speed that person, that observer, is moving at. That simple observation, with a little bit of logic and some equations, leads to, for example, the twin paradox that you just mentioned. So I'm just always amazed when I meet another smart, highly educated person, like, how come you
aren't more interested in physics. Like, didn't you see that part of the textbook you were forced to read in 11th grade or 12th grade where they explained all this to you? Like, didn't that strike like a light of fire in your mind? So you're like an illustration of the type of brain that I'm thinking of. And I don't understand all those other brains. So were you the only kid in your high school who cared about this? Yes. Yeah, I think so. Yeah. I think for me,
Just the pleasure of, you know, doing these math puzzles was kind of, you know, one level of like enjoyment or intellectual satisfaction. Yeah.
But there was something about physics where it lit up this fire where for me, it was very visual. Like there was this movie that was playing in my head when I do physics problems. That was very fun. That was almost like an action movie playing or some kind of thriller. I don't want to over-dramatize it, but really I'm getting, I get like visuals when I, when I was doing this stuff.
And I remember also in high school at the same time, I would catch clips, you know, on like whatever PBS or whatever of like, you know, uh, you know, a train moving with light and, you know, someone trying to like flash light in a bag and like, what are the properties of light versus, you know, uh,
you know, matter and so on. And I just remember it was like very like compelling to me. Like again, like discovering that magic is real. Yeah. This is a big difference between physicists and mathematicians because I think most physicists or physicists talk about something called physical intuition. And I think a big part of that is that part of your brain, I think for evolutionary reasons,
is wired to actually simulate, maybe visually, the real world. And if you can tap into that, you can do very powerful things. So some mathematicians I know just laugh at Einstein's special relativity because it's so simple. It's that the equations are so simple and it's like, how can this be a big deal? But the depth of it is not those equations. The depth is the philosophical thinking or the physical thinking of, wait a minute, if Michelson and Morley did this experiment where they found no matter what
frame they were in, they always measured the same value for the speed of light. That suddenly had incredible implications that you could actually get to with very simple math. And I think I'm sure Einstein was also just from the thought experiments, Gedanken experiments that he always talked about was very visual as well.
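(As an aside on the algebra-only claim above: the standard light-clock argument gets you time dilation, and hence the twin paradox, using nothing beyond the constancy of c and the Pythagorean theorem. A minimal version, for reference:)

```latex
% Light clock of height L moving at speed v past an observer.
% Rest frame: one tick takes \Delta t_0 = 2L/c.
% Observer's frame: the light traverses the hypotenuse, still at speed c, so
\[
  c\,\frac{\Delta t}{2} = \sqrt{L^2 + \Big(v\,\frac{\Delta t}{2}\Big)^2}
  \;\;\Longrightarrow\;\;
  \Delta t = \frac{2L/c}{\sqrt{1 - v^2/c^2}} = \gamma\,\Delta t_0,
  \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
\]
% The moving clock ticks slower by the factor gamma; the travelling twin
% accumulates less proper time, which is the twin paradox.
```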
So later in our conversation, we're going to get to whether that kind of powerful thinking that physicists have, which is to integrate that kind of intuition, like, oh, what is a little ball rolling down a surface during gradient descent? What do we know about the way the ball rolls down the surface?
How does that actually become useful in modern AI research? So I think that's something I want to try to get out of you. So let's go back to you decided that you wanted to get into technology. And so I think you just mentioned the first startup that you worked at. But I think you've worked at a succession of the top AI labs, right? You worked at DeepMind. Were you in London? I actually was in London. And I sat just a few feet across from Debra Sennman. She...
What year was this? This was 2018, 2019. Wow. I wonder if you went to my talk because I gave a talk in London on genomics and AI. I just missed it. I was really excited and I don't know, for whatever reason, I just missed it. I have to tell a funny story about that because I went to give the talk and I came to the building, you know the building in London, and is it near King's Cross? It is near King's Cross. Yeah. So I went to this...
Great building. Although I guess they later moved into an even cooler one. But I go to the building and they've got this whole itinerary for me. And I'm meeting with all these people. And it looks like a job interview. It looks like the kind of set of meetings that you have if you're interviewing for like a faculty job.
But it also could be just like people who are interested in talking to me about research or something. Right. So I wasn't really sure. But then like the very first meeting was with a guy called Pushmeet. Yeah. Who's now like the VP who runs like the whole AI research effort. Right. And everyone that I met in every meeting kept asking me, are you joining our protein folding project?
And I'm like, why do you think that? I'm here to convince you that you should all be working on genomics, not protein. So we had a little little clash, not clash, but just friendly. Like, I'm like, no, this is more interesting. And they're like, no, this is. And they kept saying, well, we thought you were. We all thought you were a job candidate because you're a physicist.
And you already work on DNA. So we thought you're joining the protein folding project. So that was a funny story. And like, now that I look back, I'm like, maybe I should have joined the protein folding project and been involved in a Nobel Prize or something. So, but too bad we missed each other. Yeah. Yeah. So what are your thoughts on DeepMind? Because, you know, as you know, OpenAI got started because, uh, when Demis was involved in selling it to Google, um,
Maybe this isn't that well-known a story, but Elon and a guy called Luke Nosek, who's another PayPal mafia guy. He's the founder of Giga Fund down in Austin. You know him. I know him. They tried to buy, you know the story, they tried to buy DeepMind because they didn't want Google to have it. So they were at a party and Luke and Elon actually hid in a closet to get away from the noise and they called Demis and they were like, Google's offering 600 million, we'll match that.
Right. And then Demis, I think, according to the story, said to them something like, well, you can maybe you can maybe raise that money, but you can't match the compute infrastructure that I'm going to get. And so he ended up going to.
Google, DeepMind became a subsidiary of Google, but then those guys were paranoid because they thought DeepMind was going to get way ahead and solve the AGI problem before everybody else. And that's why Elon backed the founding of OpenAI at the beginning to have an open, I'm putting it in air quotes because we all know what OpenAI is like now, but originally like to have an open lab that was doing AI so that Google couldn't just snatch the prize. Yeah.
Did you ever hear that story when you were working? I never heard that story. That is very interesting. I should say that...
Just because currently I'm part of the Alphabet family, I'm going to deliberately avoid it. Yes, yes. Don't say anything that will get yourself in trouble. Yeah, yeah. But yeah, it's an interesting story. I think that AI race is interesting for many reasons. I mean, so many reasons, philosophical reasons. It's forcing us to ask questions that we kind of were...
you know, put off for a long time, even existential questions, I think. Yep. But it's also interesting because I think there's legitimately two bottlenecks. Usually there's only one bottleneck, but there's two bottlenecks that I think vie for how big of a bottleneck they are. One of them, of course, you mentioned chips, but the other one is increasingly actually just energy, like just raw power, like how much power you have, right? Yeah.
Yeah. Yeah. Well, when I talk about the U.S.-China AI race, one of the issues that comes up, well, both of these come up. One is like NVIDIA versus Huawei for AI chips. But the other one is just like, how are you going to power these data centers? You know, like it's very tough to increase grid power supply in the U.S., whereas the first derivative of Chinese electricity production is just going gangbusters. They actually add power.
I think the correct statement is they add the equivalent of the power generation of like England or France every year. Every year. And the U.S. every seven years. Yeah, exactly. Yeah. And they're at 2x now. So it's like, how are you going to match that? If electricity turns out to be the fundamental thing that gets turned into intelligence, how are you going to match that?
Yeah, yeah. I mean, I don't want to go too far astray, but I wrote an essay titled The Moon Should Be a Computer, which I freely admit is speculative. There's a lot of assumptions I had to make in there. What's not as speculative is that, you know, if you increase the amount of energy consumption on Earth,
I mean, by a lot, by a lot. I'm not saying a little bit, but a lot, by two orders of magnitude, you do start to have thermal effects that affect the atmosphere. And I kind of use that as an excuse. But the real reason to do this, honestly, is because you just can't add base-load power supply to the grid in the U.S. fast enough, because, who knows why, regulations, or you just can't build it fast enough, or there's some kind of conflict,
there isn't the competency to do it, but I kind of speculate like, hey, can we do this in space or maybe in fact on the moon? And I thought this was a kind of idea that came about through chatting with one of my friends in San Francisco.
who's at Anthropic. But I thought, okay, this is a crazy idea. I knew people would say it's a crazy idea, but actually, I won't name names, but some really important people read it and thought that it was a good idea. I found out later that Eric Schmidt, of all people, is now, through some special financing, the CEO of Relativity Space, which is another YC company. My company was a YC company too. So he's now the CEO.
And one of his stated aims or reasons for doing this is he wants to put compute in space. I don't know that he wants to put it on the moon, but he wants to put it in space. And I think one of the reasons, again, is regulatory. Like you just literally cannot get the energy on Earth. Sorry, not on Earth, but in the U.S. In his space project,
would the energy actually be coming from solar panels, or is there a reactor in space that's orbiting? I believe... I don't know. I've tried to find out. And if someone knows, please reach out to me actually, because I'm very interested in this. Yeah. But I believe solar.
I don't think you can do nuclear in space because it violates so many treaties. If the rocket went off, it would technically be a dirty bomb and you don't want that. So I think it has to be solar. But wouldn't you need like a square kilometer of solar panels in space or something? It is pretty nuts. Like I think...
I tried doing the math on this, and I could be wrong, but I believe to get a gigawatt, you need... Is it like a square kilometer, or is it more like 10 square kilometers? I'd have to check the math, but my intuition is that it's a lot of lift to put all that stuff into orbit. It is. Yeah.
And to put it, and you can't put it in low Earth orbit because if it's 10, let's say it's 10 square kilometers. Yes. It would literally, I did the cross section at some point. Yeah, astronomers would be a little mad at you. Yeah. No, no. Everyone would be mad at you. That's what I'm saying. Yeah, yeah. So you'd have to put it in like an, you know, Lagrange point. Yes. Thankfully, there's a lot of space in space. Yes, yes.
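(For reference, a rough back-of-envelope check, assuming the solar constant above the atmosphere and an end-to-end efficiency of about 25 percent; both are round assumptions, not anyone's mission design:)

```python
# Back-of-envelope: solar array area needed for 1 GW in space.
solar_constant_kw_per_m2 = 1.36   # irradiance above the atmosphere, kW/m^2
efficiency = 0.25                 # assumed end-to-end panel + conversion efficiency

power_per_m2_kw = solar_constant_kw_per_m2 * efficiency   # ~0.34 kW/m^2
area_m2 = 1e6 / power_per_m2_kw                           # 1 GW = 1e6 kW
print(f"{area_m2 / 1e6:.1f} km^2 per gigawatt")           # roughly 3 km^2
```

So on those assumptions it lands at a few square kilometers per gigawatt, between the two guesses above.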
Okay. Let's go back though to your experience as a startup founder. So your company was called Mutable and you ran it for three years as founder and CEO. And I believe that it was in the space of basically AI tools for coding. Is that the right category? That's right. Yeah. So, um, I started the company November, uh, 2021. Um,
I quit my job and two weeks later, I was inducted into Y Combinator as solo founder. And that was a pretty brutal experience. Brutal, but amazing. I should say like one of the best experiences of my life. I basically did nothing but work. I didn't sleep.
So because I had nothing, I had nothing. I had no code, nothing. I just coded all day, went to the YC. You know, they have this curriculum, basically. And then pitched to investors. I managed to build something that, it doesn't matter too much, but it was something that cleaned up your Jupyter notebook code with AI.
I got one customer and I raised a seed round. Right. So that was a very intense three months of my life. Right. But yes, I was one of the first AI developer tool companies. We came out around the same time as Copilot. Right. And...
Yeah, the space, I mean, the space is bananas now. Like Cursor, which is an AI development tool, is making over 100 million ARR. A bunch of these companies now are making over 100 million ARR, and very quickly. Yeah.
Now, you're too modest to claim credit, but I know Mutable pioneered a few things now that are kind of common now. And I think Karpathy gave this keynote recently that was on YouTube where he talked about some of the ideas. I don't think he credited you, but I think you had these ideas, things like
you know, using the context in a certain way for software or actually creating much better documentation, Wikipedia style documentation from the code base at a company. Yeah, I think you guys did a lot of interesting things at Mutable. Maybe you want to talk about that. Yeah, yeah. Yeah, I'm not too modest to, you know, stand by the claim that we definitely invented a lot of the ideas behind probably the most popular manifestation of it today called DeepWiki.
from Devin, aka Cognition Labs. But basically, I had an idea, and at the time I was in Austin, of looking at all these open source code bases. Because I'm a very hyper-curious guy. I've always been that way. And I always come across papers and code that I'm interested in.
And, you know, you can get pretty far, you know, and you can develop the skill and muscle memory to onboard a new code base quickly. But it's always, you know, kind of a slog, right? So I was like, why doesn't the AI just help you with this? Like, why doesn't the AI, let's say, write something like a Wikipedia article to explain this code for me? And, oh, what's a good name for this? Oh, Autowikipedia.
So that's what we did. We do this big recursive summarization to explain your code. But the backstory is actually a little bit more interesting than that. And I think this might be useful for people who are founders or aspiring founders. So YC always likes to say, scratch your own itch. That might not be a direct quote, but something like that. So at least you know you have one user, which is you. So I built the initial version of AutoWiki
myself. And at the time, my team was doing something else. It doesn't matter too much, but we had another kind of product with an active pilot. And I looked at the first version and showed it to my team. And they were like, you know, this is whatever. And I remember looking at it. I was like, okay, this is not that good. So I kind of tabled it for a month or two. And then I had the same problem again. Okay, cool code base.
Cool new code base, I really would love to onboard this code base quickly. So I was like, you know what? Let me dust off my AutoWiki code. I don't even know if I'd called it AutoWiki at the time. And I did some improvements. I spent all day on it. I was just too curious. I had to. And
And lo and behold, at the end, I looked at the final kind of wiki and I was like, you know what? This actually is useful. And it helped me understand this code. So it's useful to me. It's probably useful to others. So I showed it around the team. The reception, I wouldn't say was very strong, but it was much more positive. And then, you know, at some point, you know, later, you know, and I kept chipping away at it myself. At some point later, we decided, you know what, let's commit to this. Let's actually commit to this.
And we launched it in January 2024, hit the front page of Hacker News. There were people reaching out from college, from my past. We went pretty viral. And yeah, we decided to put the full focus of the company behind it, 100%. We actually kind of fired our customer. We were very nice about it. They were gracious. We came to some kind of agreement, but we put our full focus behind that. And it turns out,
So that's kind of the backstory of the development of the product, the ups and downs. I hope that's an interesting intro for founders or aspiring founders, like I said. But the technical part that's most interesting actually is what Karpathy mentioned in his talk
to startup school, I believe, which is that this turns out to be a very useful context filter, because LLMs are... I've gotten so much mileage out of thinking about LLMs this way. Actually, you should anthropomorphize the LLMs because, in a way, they're trained in our image. They're trained on human data and human experiences. So it turns out having these cliff notes, essentially, of your code helps the LLMs
One for retrieval. So people talked about RAG, retrieval augmented generation, but also for the generation part, because having these summaries
In a way, we preceded or predicted in some ways the reasoning models, which do the chain of thought, reason first, and then answer your question. So having it write a book report first and then answer your question was very compelling. We got much better results out of code base chat. So we built this code base Q&A system that didn't require you to put in, oh, these are the files I want as context. You could just put in
The whole code base. In fact, we scaled it up to the Linux code base, and it could answer questions about the entire Linux code base. Let me ask you a few questions about that. So in the part where you build the AutoWiki, does a human have to go in and correct any problems before you then use it productively in further generation of code? We had a feature, which wasn't much used, where people could modify it, but you didn't have to. Look, there were hallucinations. Yes.
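(To make the mechanism concrete, here is a minimal sketch of the recursive-summarization idea, in Python; the `summarize` stub and the file-then-directory grouping are illustrative assumptions, not Mutable's actual AutoWiki pipeline.)

```python
from pathlib import Path

def summarize(text: str) -> str:
    # Stand-in for an LLM call; a real version would prompt a model
    # to write a short "wiki" summary of the text it is given.
    return text[:200]

def summarize_repo(root: str) -> dict[str, str]:
    """Recursively coarse-grain a code base: file summaries, then directory
    summaries built from child summaries, up to one repo-level page."""
    summaries: dict[str, str] = {}

    def visit(path: Path) -> str:
        if path.is_file():
            s = summarize(path.read_text(errors="ignore"))
        else:
            children = [visit(p) for p in sorted(path.iterdir())]
            # Summarize the concatenated child summaries, not the raw code.
            s = summarize("\n\n".join(children))
        summaries[str(path)] = s
        return s

    visit(Path(root))
    return summaries
```

The resulting summaries can then act as the context filter described above: retrieve the most relevant summaries for a question and hand only those, plus the matching source files, to the model, instead of the whole code base.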
And I think some of that is solved. You know, there's techniques to get around that, actually. But it turns even with that hallucination, it was better to have it than to not have it. It's more light than heat. Right. So so even as a fully automated process, you basically had the model look at the code base, think about it, generate a persistent document. And then so in a way, it's a little bit like reasoning. Right. So it's generating this stuff. But then as it does other things for you, it's able to consult.
that reasoning that it did earlier. Right. So, yeah. So I think it's great. And there's actually a deeper, I'm glad you're a physicist because there's actually a deeper physics analogy, which it's an imperfect analogy, but it was always in the back of my head. The renormalization. Yep.
So with the renormalization group, for those listeners who don't know, it's this machinery developed by Ken Wilson to explain critical phenomena in physical systems, you know, condensed matter systems, and you're way more on top of the physics than I am. I'm far from my physics training, but what I remember from my physics training, from Peskin and Schroeder, you know, QFT, quantum field theory, is that you have these systems,
it could be a quantum field theory, it could be a condensed matter system.
And you have the microscopic physics, but you want to get to the macroscopic physics. You want to predict a phase transition. So you do the successive coarsening, which in the AutoWiki scenario is successive summarization. Yep. And then you get to, you know, the critical phenomena, which is, hey, this code could be about X, or this could be about Y, or here's the answer to the question, and so on. So that was kind of always in the back of my mind as an inspiration for it.
And I know that was one of the topics you wanted to discuss. No, absolutely. I think it's, you know, the idea that you can depend on the model to start from the actual code base, but then describe it in progressively maybe more human-like terms and store that somewhere. And that's useful, right? That's an interesting idea. And it is like the renormalization group, different levels of description of the same stuff. I think other people who research neural networks in general have
often made this analogy as well that, so for example, the first layers maybe detect features and then next layers combine those features. And there's also a renormalization group flavor of how neural networks actually process information. But I met you in Austin and I think you did this, I don't want to necessarily call it a pivot, but I think when I first met you, you hadn't committed to the wiki thing.
And then, like, I think later when we became friends, you told me, hey, I made this pivot. And so very cool. And tell us, you know, how what's it like for an AI founder these days? What's what's the vibe like you were based at that time? You're based in Austin, but then you moved to San Francisco. Is that right? Yeah. And so what's the vibe like? Like you're out of it now because you're.
at Google. But when you were a founder, scrappy founder living in SF, I don't know if you're living in Bays Valley. What's the vibe like? How does it feel?
Yeah, the vibe in SF, there's this tremendous energy, I think. From what I hear, some people differ. Some people say, oh, it still hasn't fully bounced back from COVID. But I think at least in the AI field, there's this tremendous energy. I think SF is like the ancient Athens of the Western world. It's small. There's 700,000 people. I think ancient Athens was whatever, 250K maybe. Yeah.
But yeah, the smartest people are there, in my opinion, the most generative, more importantly, the most generative people. Agentic people. Agentic people. The agents or the human agents are there.
The player characters are all there. Yes, exactly. You know, and I love New York. I love Austin. I love SF. Those are my three favorite cities in the US. But I rarely learn new things, I have to say, you know, in Austin or New York. But I feel like I'm always learning new things in San Francisco. To the point where it's a little overwhelming? Yeah.
I don't know. I mean, I'm kind of a junkie, but yeah, it can be. I was there recently and there's five events, literally. There's this AI engineers conference that was an amazing industry conference, probably the best conference that had all these people like Greg Brockman and so on, but also just really good actual people doing the work. People doing the work.
And there's, you know, the YC reunion, there's a private event on, you know, AGI futures and so on. Oh, you were at that one too. I was at that one. That's great. I asked one of the people who spoke, and I love the person who answered this, personally, but I don't know if I agree with the answer.
"Hey, how have the empirical developments of AI changed your views of AGI or post-AGI futures?" And he said, "Not at all." And I'm like, "Oh, come on." - Yeah, and you were just like mentally, okay, discount this guy. - Yes, yes. - Reduce weight to close to zero for this guy.
But the reason I'm asking you about the SF vibe is because, you know, for people who are not in AI specifically or they don't live in that area, it's hard for people to really understand what the scene is like, right? It's how intense it is. Where, like, everywhere you go, there's people talking about AI models or, you know, post-training, RL, reasoning, you know, it's chips. You know, it's all-encompassing. And on top of that, there's a philosophical layer where people are talking about
AGI, what's this gonna mean for society? What's this gonna mean for the existence of the human race? It's all super concentrated there. And I think as you leave that area, people don't really get it. People are like, why are you guys so into this AI thing? I like ChatGPT.
You know, I use ChatGPT, but they don't really get the intensity of what's happening in the Bay Area. Yeah, yeah. Yeah, I think, you know, there's almost like this funnel where, you know, there's these public events that are hosted by different startups and VCs often who, you know, for obvious reasons, they're looking for companies to invest in and so on.
And then there's kind of another layer, which is, you know, those events are good. But there's another layer, which are like founder dinners, which I often get probably the most value out of where people host, you know, private dinners, again, sometimes VCs, sometimes different companies. And it's kind of a friendly thing, but also sometimes a recruiting thing or a funnel thing for VCs.
And those events are really good because people, you know, kind of share their war stories and then technical tips and tricks and so on. And then I think there's also kind of a vibrant, I wasn't as plugged into this, but there's almost like a party scene, actually, like SF house parties and just random, you know, billionaires. You get inured to it
at this point, but you really do get immune to it. Everyone's a billionaire, you know. But it's crazy, the level of people there and the kind of ideas. I also think there's a lab scene as well. Like if you're in one of the labs, you tend to socialize with people in the labs, but I think there's a lot of cross-pollination, and I'm not the first to make this observation, where people live in a house and there's literally, you know, one person from one lab,
living with another person from another lab, and you're like, hey, how is that expected to work? I mean, you know, I'm sure everyone's careful about their, you know, the confidentiality. And I'm personally very serious about, you know, any agreement like that. But it's hard not to
It's hard. I just, yeah, I can't imagine, you know, that knowledge doesn't get out somehow because humans, even without people violating their confidential agreements, like, you know, just subtle intonations or what have you. And also people job hop, right? Yeah.
And my impression, and again, for the purposes of our conversation, I'm excluding Google because I don't share opinions about Google, because they're my employer, and views here are my own. Sorry to make that incantation. Yeah, no worries. No worries. Like the orthodox creed. Yes.
Warding the evil spirits away. But I'm very much of the opinion that all the labs, and again I'm not sharing my opinion on Google, are basically doing the same things. I don't think that there's too much novel IP, or even if there is, it's something that you could write on a napkin, and how valuable is that? If it's something you can write on a napkin, it's going to get out. So I love that. Okay, I love where you're taking this conversation because obviously, yes, there's going to be tons of diffusion. And imagine you're
very curious or stuck on some particular issue in your own work, and say you're at Lab A, and your housemate, or some guy that you meet, or you're at a party with your housemate, and you meet your counterpart who's solving the same problem or in the same domain at Lab B.
How can you resist, like, actually just saying, like, are you guys finding X, you know, or have you looked at Y? Like, that stuff has to happen, right? I actually think they really read them their rights. I actually do think, and I don't know, maybe I'm just projecting how I behave. Like, I'm...
a consummate rule-follower, yeah, I really am, you know. Right, you stick to the rules. I stick to the rules. My impression of the people I talked to at the labs is they're also very careful, actually, because they really read them their rights. Okay. But so I don't think there's anything flagrant. I didn't mean flagrant, but I meant soft diffusion. Soft diffusion happens, and it could be not directly from a guy employed by A to B. Like, A tells his friend he went to grad school with, yeah, and the friend
tells somebody else, and it gets to, you know, somebody at Lab B. But even something as innocent as, oh, this is a cool paper. Right. Right. Oh, that doesn't work at scale. Yeah. You know, like, oh, how do you know that? Like, did you guys try it? Yeah. Yeah. That doesn't work when you scale it up.
But related to this, so I think your thesis was just primarily, though, that there aren't big secrets. I don't think so. So then what is Mark Zuckerberg doing paying $100 million for an individual? What is he getting from that individual, not secrets? Are there differences in capability that manifest at that scale?
Yeah, I can't speak for Mark, and I don't know how confirmed the hundred million stuff is. Some big number. I think they are reporting that he did. He did steal a handful of really good people from OpenAI. I think that is confirmed. OK, yeah, yeah. Anyway, yeah, I do think that, you know, company outcomes are probably power-law distributed. Yeah.
You know, I like to joke about my own case. I won't go into too much detail, but, you know, like in the movie, I always like bringing up this movie, Glengarry Glen Ross, just because it's a hilarious movie, right? Yes. And it's like, first prize is, you know, a new car. Second prize is a set of steak knives. Third prize is you're fired. Fourth prize is you're fired. And so on, right? So I got the steak knives, right? But I think there's something like that with people as well, actually. Yeah.
I don't think the normal distribution is actually a good predictor of outcomes. I think there's something about people's capabilities, kind of like an airplane, where maybe you have a good engine, but if you don't have the wings, or if you have the wings but you don't have the engine, you can't take off. So there's something like that with people. So maybe, to steelman Mark,
maybe there's something like that with just paying top dollar for people at these labs, maybe? You might be interested in, there's some articles, quasi-academic articles, with titles like
How normal distributions are transformed into power law outcomes. So even though individual people have normal distributions over their abilities, the way those abilities interact, or the nonlinear reinforcements in the system, means you end up with power law outcomes. And so I think what you're saying has some theoretical basis, actually.
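(One hedged illustration of that mechanism: if an outcome depends multiplicatively on several roughly normal abilities, the product is heavy-tailed even though each input is not. This is the standard lognormal, multiplicative-growth story, sketched in Python; the factor count and spreads are arbitrary assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_factors = 100_000, 6

# Each factor (engine, wings, ... in the airplane analogy) is roughly normal,
# clipped to stay positive so a product makes sense.
factors = np.clip(rng.normal(1.0, 0.5, size=(n_people, n_factors)), 0.05, None)

additive = factors.sum(axis=1)         # stays bell-shaped
multiplicative = factors.prod(axis=1)  # becomes heavy-tailed

for name, x in [("additive", additive), ("multiplicative", multiplicative)]:
    top1_share = np.sort(x)[-n_people // 100:].sum() / x.sum()
    print(f"{name:15s} top-1% share of total: {top1_share:.1%}")
# The top 1% captures a far larger share under the multiplicative model.
```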
Yeah, we see this. We definitely see this with founders, where you could be an amazing technical founder, right? But if you don't have good communication skills, it kind of kills you, honestly, especially, in my opinion, in the West. Yep. It just kills you because the VCs are not technical. No, usually it's all vibes. But is it all vibes with Mark and building his super team for superintelligence? I,
I, again, I don't want to opine. Look, I think Mark is a really strong founder. I think it's really gutsy. It's something only a founder CEO can do, with the super voting shares. I think the jury is still out. I'm optimistic actually, I would say, but. I'm not questioning his making the bet. Because, you know, his engine throws off so much cash from ads, yeah.
You know, compared to the metaverse, this is not the dumbest bet, not, you know, not the dumbest bet that he's made with his spare cash. Right. So just betting on ASI and just building the best team. And plus, I think he's just personally interested. Like, if I were Zuckerberg and I had his resources, I'd be like, well, why not assemble the best team that we can with our spare cash flow?
And like, why not have it here? You know, and so I'm not questioning that strategic decision. I'm questioning, like, if you're going to allocate 100 million to get, quote, the best hires, is that the strategy? Like, maybe you just have to do that, you could argue, because there's only a limited number of people who really know their shit. Yeah. But the opposite thesis is, no, there are a lot of people who know their shit. Yeah.
Yeah, yeah. So this is good. Yeah, because I didn't really answer your question. Like if I'm saying there's really no secrets, then why pay top dollar for these people? They're not completely contradictory, but there is a little tension there. There is a tension there. It's very hard for me to say, but I will say that even...
Even if there are secrets, or in my opinion at least not really compelling ones, I don't think you're paying for the secrets. You're probably paying for this tacit knowledge, where there are these subtleties that come up in building these things that you maybe don't want to wait for someone to ramp up on.
And maybe from his perspective, he tried that. And, you know, Llama 4, I don't want to speak ill of them because... Oh, you're so careful. You Alphabet dudes are so careful, but... It's not just because I'm so careful. It's because I know how hard it is to build something. And I don't like criticizing builders as a rule. Like, I think builders...
are like heroes. And like, I think it's just easy to criticize from the sidelines. But yeah, I think that even that team would say Llama 4 wasn't the best, you know, showing, right? So I think, for him, it is worth it. Like, this is serious stuff. Like, look, AGI is happening. That's what I believe, at least. It's happening. It's happening soon. So he can't miss out on this. Right. So he's better off overpaying than missing out. Right. You could just say, like, look, so what if he overpaid? He can afford to overpay. My interpretation would be
at the level of what is meant by there are no secrets, it means like A kinda knows what B is doing, or at least knows the range of things that B is doing. So there's no secrets in that sense, but some guy has exquisite taste.
And he just has a nose for, like, well, we should try this first. Or we've tried this for a while and it hasn't worked, but we should throw more resources at it. Those are all subtle, nuanced decisions, right? And it's not a secret. It's like a subtle judgment call that someone has to make. And maybe one guy makes that better than the other guy. A little bit like trading too. Like in finance, you'd have a similar situation. But I think that's a possible justification for the $100 million comp package. Yeah, yeah. So-
I mean, you'll see the same things at hedge funds, where somebody gets like an enormous comp package based on some past performance. But who knows how predictive. Or people say the same thing about CEO pay, right? Yes. In the more popular discourse. Right. Right. Right.
I guess on the one hand, I feel a little FOMO. But on the other hand, I feel like it's good that geeks are getting paid like that. Like, shouldn't a geek get paid more than Ichiro Suzuki or Tom Brady? They should, right? Give them the money. It'll be interesting to see what people do with their money. Yeah. Well, it all feeds back because a lot of billionaire type people...
are very willing to fund science philanthropy. Like for them, it's cooler to say, like, oh, I helped Caltech build this telescope than, like, oh, I just bought another thousand square meters worth of house, you know, in Miami or something. They're not that into that stuff. They're more into, like, putting something into something really cool. So I think, just as we were saying, these people are agentic and smart and
If you give them huge resources that they don't really need to live, I think they'll recycle them in interesting ways. They might fund biology research and they might fund some research that's really valuable. I hope so. I am a little skeptical. I think it's true in some cases, and I certainly have friends. You and I have friends. I think almost like we're...
I hate to psychoanalyze you on the spot, but I think you're going to tend to attract the most interesting segment of people who make tech money, who do do this kind of agentic and interesting stuff. But I actually think that's not the norm, unfortunately. I think people do just kind of retreat into their shells. There aren't enough Medicis. So I hope you don't mind, while I'm on your podcast, I will kind of opine that more people...
who make tech money should do more interesting stuff with their money. Like for example, the, you know, like the Vesuvius project, like it doesn't have to be philanthropy. You could just do interesting stuff. Yeah. Just do something interesting. You know, the, the guy, I forget his name, who confirmed that the site of the Iliad really existed, like ancient Troy really existed, just had an idea in his head. Like, I'm going to go to this,
you know, part of Turkey, I'm going to go dig out this site. And I think that's where Troy is based on this evidence, right? Like more people should do so. Well, I totally agree with you. I mean, also being a professor, I'm the guy with the begging cup trying to get the wealthy guy to like fund some academic science, right? So I 100% agree with your thesis. I do think that the kind of guy that Mark pays up 100 million for to help build AGI or ASI is probably the kind of guy who has these broader ideas.
like interest in science and stuff like that. So on average, compared to a guy who just like runs a macro hedge fund here in New York and what's he going to do with his money? His wife is going to collect a lot of art or something, right? So...
it's almost like we've lost the ability, you know. People talk about, not to go on too long, but people throw around all these buzzwords, and I'll do a little bit of buzzwording myself, but, you know, aristocratic tutoring, or taste, you mentioned taste earlier. We just don't develop this sense of, you know, adventure and
I don't know what it is, adventure or culture or, like, agency in people. You can be very successful in one domain, and then once you collect your earnings, you just don't use it, you know, at all to do something. Very typical. Yeah. Anyways, well, enough about that, but yeah. Well, I agree with you. Okay. So let's come back to one topic we said we were going to talk about, because this is part of your role now, agents. So if someone comes to you and says, hey,
I'm on social media. Every day I see some clip of some guy's agent that can do everything for me, but nobody I know actually gets much value out of agents right now. So where's the dividing line between hype and reality for what agents are good for? Yeah, no, good question. I think, look, it's just really, this field is moving very fast, and I think it takes time for these developments to spread. I don't know if I'm a full...
Tyler Cowen, you know. He has this whole thesis, which I'll explain briefly, about how this is akin to electrification, and it's going to take 100 years for AGI to diffuse into the economy. I think that's too slow. I don't fully believe that. I get where he's coming from. There's regulatory hurdles. There's people who just take time to change their minds and change habits and so on. Who was that physicist that said, you know, physics advances one funeral at a time? Maybe Heisenberg.
Was it? Are you sure? Well, yeah, because it's somebody in that group. Because the thing that historians never say is like all the old guys pre-quantum revolution didn't believe quantum mechanics. And like now we don't even think about that. But even that transition was a rough transition. Yeah, I think there's actually something similar happening with AI. You see this sometimes with some of the old school software engineers that just still don't believe in AI. They're like, really? Yeah.
Really? At any rate, so I think there's something like that. There's probably something like that that's going to happen with different industries, but I want to directly answer your question. I think undoubtedly today there are software agents, which I work on,
And again, sorry, I'm doing the Alphabet thing. I want to be very careful when I comment on my actual work, but I can talk about the field freely. I think that's clearly like a thing. People are using Cursor. People are using Claude Code and so on, all these other tools, right? It is making a big difference in their day-to-day, you know,
software development. People are seeing, and I saw this even in my time at Mutable, running Mutable, that the standards for software, even for a startup, are much higher now. You just can't have shitty software. Even if you solve some unique novel pain point, customer expectations are very high, and that's going to pull the field forward. So I think that is happening in software. Other domains...
I agree it hasn't fully happened, but we see a little bit in the legal domain, actually. Harvey, supposedly, and other companies like that, they're making a lot of money. There's a few other domains like that. And I think slowly but surely, or sorry, actually not even that slowly, we're going to see, you know, white-collar labor get these software agents and...
I don't know what it's going to do to people's jobs, but it's definitely going to help with the development. It can take over the development. Okay, sharp, sharp question. So I think it's reported that for the graduating class of 2025, people with computer science, software engineering training, the job market is poor. There's been a decline in offers or, you know, an increase in the unemployment rate.
How much of that is due to AI-driven improvements in productivity? How much of it is just like big tech not hiring as many people for some other reason? It's hard to say. I would bet currently it's more that the tech companies are not hiring as much. And I think there was this very ZIRP-y era where you would, you know, make a job offer to anyone, you know, basically anyone even halfway plausible. And I think that was...
that was unsustainable and probably, um, that led to at least a short term, uh,
belt tightening, because there was just an overhiring spree. And even if you have layoffs, people are reluctant, for very human reasons and other reasons, morale reasons, to do cuts that are too deep. So probably companies in general overhired and they're not hiring as much. I do think that there's something deeper going on, though. I think that is a cop-out a little bit when I say that. I think there's something going on with
A combination of AI and the computer science curriculum, where the computer science curriculum is kind of weird. I know they've changed it from what I hear, but basically they're learning, you know, discrete math, they're learning algorithms, they're not learning actual software engineering. So often a green CS grad won't actually be that effective as a software engineer.
he or she will just not be that useful to you. And that's kind of why I personally didn't hire that many people who are newer. I did hire, and I'll tell you the exception of who I hire. And maybe this is if there's younger listeners, because I am interested in helping out people, other founders, and perhaps if there's people who want their first job in the field, I'll try to give a little bit of advice here. I did hire a 19-year-old.
And that person, you know, didn't go to college, actually. They just had graduated from Princeton High School. And they did a bunch of robotics projects. They did a bunch of rust projects. And I could tell, you know, when you talk to someone, their power level, you're like, okay, wow, this person's power level is very high. So there was something about that person that I hired that I could tell was amazing. I wasn't sure. Of course, I interviewed them. And they...
Yeah.
at this point. And it shows agency more, you know, and I think agency is going to be more and more important. Right. So that's advice to young people. But just to pin you down on the answer to my question, I think you landed in the safe spot, which is
There are multiple explanations, multiple factors contributing to the decline of offers to software engineers in 2025. Part of it is post-Zerp era, the companies overhired and they realized they were bloated and they're shrinking in the higher interest rate environment.
But part of it is also a recognition that there's some increase in productivity from AI tools. Is that right? Yeah, I think so. I think there's definitely a part of that. If I had to guess, that's a double-digit effect, at least, because right now, you know, AI can do a lot of the work of a junior engineer, and the job is kind of moving toward being like a TLM or a TL, where you're managing these teams of agents. Team leader. Yeah. Team leader, or technical lead sometimes. So like, do you need a junior engineer? Yeah.
Anymore? Um, maybe not. And then, junior engineers were always probably actually a net negative in the short term. Yeah. And it was always something you did like, oh, I'm growing my pipeline, I'm hiring this person not because they're going to really actually help me that much, and actually they're going to maybe be a net negative in the beginning, maybe even in the first year or two, but I have to grow my pipeline, I need to keep up. So maybe there's a feeling like
You just can get by with much less. And I think I'll point out two people. One is more in agreement with what you're saying, Dario Amodei, who's saying, hey, there's going to be massive job loss in two years, actually, because of AI. I have a standing bet, actually, with a friend who works at Infopic. We put it on both of our calendars.
almost exactly two years from now, where he predicts there's going to be, you know, at least a 30% drop in headcount, layoffs at tech companies across the board, including, uh, companies like, uh,
that are leaner already, like Tesla, let's say. So we'll see. I mean, I made the other side of the bet. I think 30% is too high. But even less dramatic than that are people like Tobi Lütke, I believe the Shopify CEO, who didn't really opine on layoffs as much, but he did mention that he expected all of his teams to use AI more and to
see how much they can do with AI without hiring more people. And you hear about companies like Quora where like...
They're hiring someone just to automate as much as possible of what they do with AI. But there is a sense that AI will just make you more productive and you can get by with less and less people. And then they therefore don't need to hire as much. But then that goes back to what's that economic law again? The Jevons paradox people throw around. So who knows? Who knows? It's so hard to predict. I don't have a good answer. It's so hard to predict these things.
Okay. Now coming back to the word agent. So I think you mentioned a few cases where it is clear that there's a productivity gain from AI tools. But I want to differentiate between AI tools where you, you know, you send a query in to ChatGPT or, you know, you ask ChatGPT to revise something or write a first draft of it.
I don't really think of that as an agent. I think of an agent as something that's a little more autonomous and takes multiple steps autonomously without human supervision, as opposed to a one-shot or a few-shot tool where the human is looking carefully at everything that comes out. So what's an example where...
you know, I want to write some function in my code base and I just let the agent go hog wild and it does a bunch of non-trivial stuff and returns it. And, you know, is that a real thing now? Oh, absolutely. All the examples I mentioned have true agents, in my opinion. So like you can, in the settings of all of these tools, like Cursor or whatever,
honestly any of them, whatever level, you can put the setting so that you don't have to affirm, yeah, like confirm all the actions. Yep. And you can just let it go hog wild. And way more than a function, you can have it build like a feature, you can have it build a web app.
And I think it works decently well. I think the argument people usually make is that with increasing nines of accuracy, the set of things it can build that require more steps keeps growing. You know, as you're multiplying the 99.9s together, it just keeps improving the time horizon, the sequence horizon of actions it can take. And therefore it can build more sophisticated stuff.
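(To make the increasing-nines point concrete: if each atomic action succeeds independently with probability p, then an n-step task succeeds with probability p to the n, and the expected run of steps before the first failure scales like 1/(1-p). This is just the standard back-of-envelope form of the argument, not anyone's internal metric.)

```latex
\[
  P(\text{$n$-step task succeeds}) = p^{\,n},
  \qquad
  \mathbb{E}[\text{steps before first failure}] = \frac{p}{1-p} \approx \frac{1}{1-p}.
\]
% p = 0.99  -> horizon of roughly 100 steps
% p = 0.999 -> horizon of roughly 1000 steps
% Each extra "nine" of per-step reliability buys roughly a 10x longer
% sequence of actions the agent can be trusted to run unattended.
```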
And often that, by the way, is used as a justification for why people are spending so much on, you know, data centers, chips, energy. Because scaling laws...
People often portray it, maybe it's just my perception, but as like some kind of free lunch, like, oh my God, this is an amazing thing. But actually it's kind of terrible, right? It's a logarithm. Scaling laws are a logarithm. And I think the only way you can justify it is, you know, this kind of increasing nines argument, but also, and maybe this is a nice physics connection, the emergent abilities argument. That maybe there's a scale at which, going back to the airplane analogy, where it looks like an airplane, right?
It sounds like an airplane, but it doesn't take off, you know, and then you go up an order of magnitude and, oh, wow, it takes off. So I think there's going to be things like that. It's very hard to predict, though. And I think scaling laws, by the way, are one of those things where I have this, you know, hobby horse almost of, you know, scaling laws are like thermodynamics, right?
where we discovered thermodynamics and steam engines way before we discovered statistical mechanics. So as an open question to other AI researchers and people in the field, what is the statistical mechanics of scaling laws? Because I've seen stuff, and I've asked you this question before. I'm not quite satisfied with the answers. I think there's something deeper there, but I could be wrong. Yeah, we've talked about this before. I think, like,
For people who are in a little more relaxed situation where they're not trying to ship the next version of Claude or something, and they have a little bit more of a theoretical bent, I think you're going to see people coming up with more fundamental models that then explain
particular scaling laws that we're seeing. I think that's what you're searching for. But I haven't yet seen anything quite of that nature. But I do see papers where people are trying. So I think eventually there will be some better understanding of these scaling laws. Yeah, yeah. I think just to give maybe a more concrete example, um,
So Tim, I think Detmers? Detmers. Is he the quantization guy? Yeah. Yeah, I had him on the podcast. Oh, nice. Nice. Okay. So like that, you know, just to review some of his work, I believe he showed that after 7 billion parameters and many families of language models that there's these outliers that emerge. And it's this, there's a phase transition and kids are obsessed with phase transitions, right? Because they happen all over physics. They happen in particle physics. They happen in physics. They happen in quantum physics.
So, by the way, I heard an interesting rumor that one of the other labs was not actually able to confirm his results. Interesting. Interesting. Yeah. Not able to replicate. Yes. Interesting. So who knows? But more things like that:
coming up with quantitative measures of emergence, ideally not just based on benchmarks. Right. And then, could you predict those? Is there a stat mech for this, a way to predict these, or at least a relatively low-parameter way of predicting them? One drum you'll always hear me beating on X is about open-source models, because someone like you, who has access to the whole
Google/Alphabet infrastructure, can, if you get interested in something, code it up and run experiments quite easily. But for academics, the existence of these small open-source models like Qwen and DeepSeek, or distilled DeepSeek, is super important. When you look at the papers, someone has some theoretical idea, they do some theory, and then they do a bunch of calculations using open-source models to verify that the behavior they
theorize about is actually empirically observed. And that would be impossible if these models weren't open, because sometimes they have to modify the models in some way; they need them to be really open source. So I think for this kind of theoretical investigation, it's really important that academics have access to open-source models. Absolutely. Yeah. I've seen so much interesting work, especially recently, on RL, reinforcement learning, on these models, like Qwen.
Okay, we're near an hour. So what's a topic that I didn't ask you about that you'd like to opine on?
Yeah, maybe two. One we commented on briefly; the other is one of those topics I keep going back to. So one is: what is the role of physical intuition in AI? Oh, great. Yes, we said we were going to discuss this and I forgot, though we did touch upon it. Yeah. So I think there's something to be said here. And before I go into the direct arguments, let me give some of the, quote, social proof on this.
If people didn't know, a lot of the developments in the field have actually been made by ex-physicists. And, you know, Steve and I have to plug the physics. Was it Ilya or was it Karpathy who recently tweeted something like, theoretical physicists are like the stem cells? The baryonic stem cells. I've seen them become everything. Yeah, yeah. Who was it? It was Karpathy. Oh, it was Karpathy. But it's totally true.
I mean, just historically, it's true, right? That, okay, you can build bombs, you can design microchips, you can solve problems in biology. Theoretical physicists have done all that stuff in the past, so it's not surprising they could make some contribution in AI. But my question for you is, what is the special...
intuition or capability or advantage that theoretical physicists bring to this field in particular? And also, what are the blind spots? What are the things that theorists or physicists are weak at, where maybe the CS guys bring more strength? Yeah, I believe I can actually answer both. So, physics, unlike math: it's almost like you get everything that a mathematician gets, and mathematicians might not like to hear this, but it's true.
And you get this physical intuition, which some mathematicians have. I actually basically did a double major in math. But I took a bunch of real analysis classes in undergrad, and there is something that feels like physical intuition in the epsilon-delta proofs. That's the closest I think mathematicians get. Sometimes geometers too, though there are not that many geometers, in my opinion. I think a lot of mathematicians are
doing a lot of algebra, and topology doesn't really give you the physical intuition. But anyway, physical intuition is this movie that plays in your head, or at least there's a movie that plays in my head, that's super satisfying, that I see when I do physics problems or read physics textbooks.
That intuition carries over to AI research very directly, because a loss surface is like an energy landscape, basically, where you have this ball rolling down the hill and you're trying to optimize over the manifold, right? And there are so many analogies with information theory: the KL divergence looks like, again, a literal Hamiltonian, right? You have partition functions, you have all these things in physics that not only are analogous but basically carry over exactly... Well, after all, the word entropy does come from physics, right? So you open a theoretical AI paper and the word entropy is going to appear at some point, right? So, yeah. Yeah, I think there are just so many places where something is happening that looks like a physical process, and having a physical intuition for how that moves and unfolds is helpful. I also think familiarity with the kind of math matters: path integrals, partition functions, the stat mech I mentioned, more continuous math, applied math, dealing with approximations. I think that's all very helpful.
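To make one of those correspondences explicit (standard textbook material, added here as illustration rather than something spelled out in the conversation): the softmax a language model uses is literally a Boltzmann distribution, its normalizer is a partition function, and the cross-entropy training loss is an entropy plus a KL divergence.

```latex
% Softmax as a Boltzmann distribution: identify logits with negative energies.
\[
  p_i \;=\; \frac{e^{z_i}}{Z}, \qquad Z = \sum_j e^{z_j},
  \qquad z_i \leftrightarrow -E_i / T .
\]
% Cross-entropy against the data distribution q splits into entropy plus KL:
\[
  \mathcal{L} \;=\; -\sum_i q_i \log p_i \;=\; H(q) \;+\; D_{\mathrm{KL}}\!\left(q \,\middle\|\, p\right),
\]
% so entropy, partition functions, and free-energy-like objects show up in
% AI losses with the same algebraic roles they play in statistical mechanics.
```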
And that's why you have people like Jared Kaplan. I was reading some of his string theory papers, and then, I love telling this story, it's so funny, I was reading Jared Kaplan's papers in AI, and a year later I realized, wait, it's the same person. So there's something there. And in terms of the weaknesses, I experienced this firsthand, because I was really a physicist, a theorist. I think some of the
very algorithmic stuff, where there are subtleties with some of these algorithms, the for loops, you put a bit here and there, and then, oh, actually, you didn't account for the bit shifting this way, and
You know, you can learn that as a physicist, but it's not your bread and butter. So that's definitely a weakness. Some of the more traditional CS stuff, you're just not trained in it. I agree with you completely. The discrete algorithmic stuff is not necessarily our strength. But on the other hand, we're better at dealing with continuous systems. And I think when you get to enough parameters and enough...
quasi-continuous values of those parameters, it becomes a continuous optimization problem, not a discrete optimization problem. So it shifts things a little more toward physics intuition. Yeah, and not to open up another thread, but the tension between continuous and discrete math is super fascinating. I think it's a tension that's a rich source of mathematics and physics. And in physics, of course, we come across it all the time:
We have wave-particle duality, and we learn later, oh, everything's really quantum fields, and so on. But you have these manifestations where things are very discrete, like the Ising model, Ising systems, yet there are ways of thinking about these in limits where they become very continuous. So I think it's a very rich tension, and physicists do actually deal with discrete phenomena.
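As a concrete version of that discrete-to-continuous move (a textbook example, included only for illustration): the Ising model is defined on discrete spins, but coarse-graining near the critical point turns it into a continuous field theory.

```latex
% Ising model: discrete spins on a lattice, nearest-neighbor coupling J, field h.
\[
  H \;=\; -J \sum_{\langle i j \rangle} s_i s_j \;-\; h \sum_i s_i,
  \qquad s_i \in \{-1, +1\}.
\]
% Coarse-graining the spins into a local magnetization field phi(x) gives the
% continuum Landau-Ginzburg description of the same physics near criticality:
\[
  \mathcal{F}[\phi] \;=\; \int d^{d}x \left[
      \tfrac{1}{2}\,(\nabla \phi)^2 + \tfrac{r}{2}\,\phi^2 + \tfrac{u}{4}\,\phi^4 - h\,\phi
  \right].
\]
```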
Now, not to put you on the spot here, but as advice to Zuck, should we throw some of these hundred billion dollar packages at former theoretical physicists? What do you think? I think so. Yes. Can only help, right?
Yes, yes. I mean, there's a reason why at Anthropic so many people are ex-physicists. Even people like, I think, a mutual acquaintance or friend, John Schulman; I believe he was an undergrad physics major at Caltech. Yeah. Yeah. And Karpathy was an undergrad physics major. There are tons and tons of them. Yeah. Yeah. The other thing about physics is that
the math you learn is specific to the most useful, interesting problems that humans have encountered. So it biases you not toward the math that's most interesting to pure mathematicians, but toward the math that's most directly related to real-world systems that people have gotten control over. Right. And so it is the right curriculum for whatever other thing you're going to do in your life. Yeah. Yeah.
Great. Well, that's a wrap, I think. And I want to thank you again for being on the show. I'm sure there's some young guy out there who was really stimulated by, or will appreciate, your insights in this conversation. What's the best way for people to get in touch with you?
Yeah, so you can go to my website, omarshams.ai, and all my contact information is on there. You can follow me on X, where my handle is omarshams, the two words written together, and my email is on my website as well. So feel free to reach out. Okay, we'll put some of that in the show notes. Thanks again for having me. Yeah, we're good.