Akash Systems is a venture-backed company in the Bay Area that develops diamond-based cooling technology for semiconductors, focusing on space applications and AI data centers. Their technology uses synthetic diamonds to improve thermal efficiency and boost performance in computing platforms.
Diamond is the most thermally conductive material in nature, making it ideal for cooling semiconductors. Akash Systems leverages this property to significantly reduce heat in chips, enabling faster and more energy-efficient computing, particularly in space and AI applications.
Akash Systems' diamond cooling technology works alongside existing cooling methods like liquid cooling and fans. It targets the chip level, reducing temperatures by 10 to 40 degrees, which enhances performance and energy efficiency without replacing current cooling infrastructure.
In space, cooling is more challenging due to limited area, lack of airflow, and high power densities. Akash Systems has successfully cooled chips with power densities of up to 5,000 watts per square centimeter, a level far exceeding typical AI server requirements.
By reducing thermal throttling and enabling higher chip densities, Akash Systems' technology can accelerate AI performance, potentially hyper-accelerating Moore's Law. This could lead to significant improvements in tasks like AI model training and inference, reducing production times for complex tasks like generating feature-length films.
NextGen Data Center and Cloud Technologies, India's largest sovereign cloud provider, selected Akash Systems to address heat management challenges in their data centers. This collaboration highlights Akash's ability to scale and provide material science-based solutions to sovereign data concerns, which are increasingly important globally.
American manufacturing is crucial for national security and maintaining leadership in critical technologies like AI. The U.S. must invest in both large and small companies to strengthen the supply chain, ensure self-sufficiency, and avoid reliance on foreign manufacturers, especially for technologies with dual-use applications in defense.
Akash Systems sees AI as a powerful tool for accelerating material science research. AI can help researchers explore chemical space, optimize designs, and iterate faster, enabling breakthroughs in areas like battery technology, thermal management, and semiconductor materials.
Diamond cooling technology has the potential to revolutionize AI and space applications by enabling higher performance, energy efficiency, and reliability. It could unlock new possibilities in AI model training, satellite communications, and national security technologies like radar and satellite-based surveillance.
Welcome to No Priors. Today I'm chatting with Felix Ejeckam and Ty Mitchell, the founders of Akash Systems, which makes diamond-based cooling technology for computing platforms, from space satellites to AI data centers. Their innovation uses highly conductive diamond to help computers run cooler and faster while using less energy.
Felix and Ty, welcome to No Priors. Good to be here. Thank you, Sarah. I think we should start with just a quick introduction as to what Akash is.
Sure. Again, very good to be here with you, Sarah. Akash Systems: we are a venture-backed company based in the Bay Area that is starting from the ground up at the material science level. We use proprietary materials, specifically diamond that we grow in the lab, to make electronic systems that are disruptive in the world by an order of magnitude.
It's in contrast to how we often start companies, even in the hardware space, where we tend to inject ourselves into the middle of an existing supply chain. As material scientists, we come in at the periodic-table level, and we start there to build up chips, boards, and systems that ultimately change the lives of our society, whether you're in business or a consumer. And we do that in several ways. We change the structure of a basic material. The systems that we've chosen to affect: we started off in the space world, where we make some of the fastest satellite radios ever made by humans. And now we're moving over, as we are doing today, to AI, where we are able to cause compute, a GPU, to go faster than has ever been done before since the beginning of this new space, or to reduce energy consumption in the data center by a significant amount, all because of innovative material science that we've pioneered at the ground floor.
So maybe that's a good segue into how you got started working on this, because you've had this idea for a long time. As you said, you started on space applications earlier. Can you talk a little bit about your background and, you know, the original scientific idea and how you thought it would be applied?
Sure. So my background is in material science and electrical engineering. I obtained a PhD in electrical engineering with a minor in material science and device physics from Cornell. And in my PhD, I focused on bringing together very dissimilar materials in such a way that one plus one equals ten. Okay.
And, you know, for example, silicon, very well-known, ubiquitous material that's ushered in the current modern era that we have today. But then there are other materials, plastics, other types of semiconductors that don't actually do as well as silicon, but they have their own strengths.
And so for my PhD, I looked at ways of bringing together, say, the optics world with electronics, with silicon, and merging them such that the overall system is incredibly powerful. That philosophy I brought to Akash when I started it with Ty in 2017, to try to do the same thing. I've often found, personally, that it's actually a very good metaphor for how we humans interact. When you bring together different people who have different strengths,
the combination can be incredibly powerful in ways that exceed the simple summation of the parts. And that's exactly what we do at Akash, where we bring together artificial diamond, well known as the most thermally conductive material to occur in nature, and silicon, or gallium nitride, or frankly any other semiconductor, and amazing things happen. We're very happy and excited about doing that in the world of AI. We did it in space, where we have now made and launched the fastest radios ever built. And now with AI, we're able to achieve energy-efficiency levels and compute speeds never obtained before, simply by using these artificial materials that we've created in the lab. Ty, are you the silicon or are you the diamond here? I'm actually a little bit of both. I'm the silicon carbide guy. My PhD was on silicon carbide. What that taught me, when I went into the business world
working for Cree and Wolfspeed, is that Cree and Wolfspeed had developed a very good silicon carbide materials-level technology.
And in applying this technology to any sort of system, like radar systems, power electronics for EVs, or light-emitting diodes, one of the things you learn is that when you have a materials-level advancement, as the person who holds that advancement, you really have to make the system to convince people that you have the solution.
If you just go to someone who is making, let's say, a car, and you say, hey, I've got this great silicon carbide diode, a Schottky diode or a MOSFET, they'll say, all right, great, I'm already using a silicon IGBT; if you meet my price, I'll put you in.
And you say, look, if you put my part in, I can increase your range by 200 miles, I can increase it 40%. They'll say, OK, yeah, sure. However, if you make the car yourself, or you find a partner to get your part into the car, or you make the box, you actually make the MOSFET, you make the power module: the farther you go up in the system, the better chance you have of convincing the customer that you have the solution. So that's the approach we took at Akash: even though we had this materials-level technology, we would make the system, go directly to the customer, and prove our technology out that way.
Why did we go into AI? We were in space solving problems that were very difficult, actually more difficult than the AI problem we face today. Because in space, you have a limited area. You don't have any fluids that you can use to cool because there's no airflow in space.
And you have a bunch of other reliability and survivability requirements that are much more difficult to meet than in AI. So we thought, all right, AI has a very difficult problem: heat. It's a very difficult problem they have to solve, it's growing, and nobody really has the right approach to it. And we think we can help. So that's what caused us to dive in.
And just to give a couple of numbers to what Ty just said, comparing space to AI: the power densities that we cool in space are at the level of 10³ watts per square centimeter, so 4,000 to 5,000 watts per square centimeter for the chips that we cool in space.
On the ground, a typical AI server is a full order of magnitude less than that, a couple hundred watts per square centimeter. So that's what gave us the confidence that if we could address the problem in space, then the same technology, even backed off a little so that we can ramp rapidly, would be a home run when applied to a server.
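To put that comparison in rough numbers (using only the figures quoted in the conversation, not independent measurements):

```python
# Rough ratio of the power densities quoted above. The numbers come from
# the conversation itself and are order-of-magnitude figures, not specs.

space_w_per_cm2 = 5_000   # chips cooled in space, per the discussion
server_w_per_cm2 = 300    # "a couple hundred" W/cm^2 for a typical AI server

ratio = space_w_per_cm2 / server_w_per_cm2
print(f"space chips run at roughly {ratio:.0f}x a server chip's power density")
```

That ratio of roughly 17x is what "a full order of magnitude" refers to above.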
Yeah, I think you guys had a demo to just sort of explain the advantage in conductivity that diamond has specifically. We do. Thank you for making that very nice segue.
What I'm going to show you is how diamonds can very effectively cool down or rather melt ice. So this is an ice cube that you're looking at here in my little video. This is diamond, a little piece of diamond.
This is a diamond wafer that we grow in the lab. You can see the Akash name. And what I'm going to do is show you that heat from the ambient environment, and more specifically from my fingers, my body temperature, will flow through this diamond and into the ice cube rapidly enough to melt it as I touch it. It'll be just like butter. I wish you could touch it yourself; it feels cold to the fingers. So I'm just going to...
wedge it here and you will see, and there you have it going in. I'm feeling very cold right now. I don't know if you can see it, but there you go. - Very cool. - You can see it wedged in. - And cut ice. - Yeah, you can cut ice with your fingers. And now, rotate it so you can see.
And that's the property that we bring to bear with our chips, with the GPU. We're looking at reducing temperatures. Initially we're starting off with 10 degrees, which is already worth millions for any data center with even a small number of servers. But we're looking at further reductions, 20, 30, 40 degrees, down the line over the next 12 months. In space, we've already reduced temperatures by 80 to 90 degrees.
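One way to see why even a 10-degree drop is worth real money: a common rule of thumb for electronics reliability is that failure rates roughly double for every 10 °C rise in junction temperature. This is an Arrhenius-style approximation, used here purely as an illustration and not a claim about any specific hardware:

```python
# Arrhenius-style rule of thumb: failure rate roughly doubles per +10 C.
# A coarse approximation for intuition only, not a model of real GPUs.

def relative_failure_rate(delta_t_c, doubling_interval_c=10.0):
    """Failure rate relative to baseline after a junction-temp change (C)."""
    return 2.0 ** (delta_t_c / doubling_interval_c)

print(relative_failure_rate(-10))  # cooling by 10 C: rate roughly halves
print(relative_failure_rate(-40))  # cooling by 40 C: roughly 16x lower
```

Under this rule, the 10-to-40-degree reductions mentioned above would map to a 2x to 16x improvement in failure rate, which is where the economic value comes from.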
So the effects of this are quite significant, and the economic impact is far-reaching. Yeah, maybe this would be a good time to actually talk a little bit about why heat dissipation is a problem at all for AI chips, GPU servers, and data centers, right? So, you know, if you imagine
these chips in servers, and racks of servers, and big rows of them in the data center: we have cooled large data centers for a long time with fans. How does this fit into the alternative set of fans and liquid cooling, and how has large-scale AI training changed the game at all? So this technology that we have, this materials-level technology using synthetic diamond, actually fits with any other cooling technology that's used today. Today, the cooling techniques that are used are really at the data center level,
where they do airflow containment, keeping the hot and cold air from mixing. It's at the rack level, where you're using liquid cooling, you know, CDUs and manifolds, pumping liquids right to the devices to keep them cool, or using fans. And you have it at the chip and package level, where people are using techniques to either speed up the chips, right,
make more transactions on the chip so that you get more transactions per watt, or in packaging, doing things like fan-out packaging, where you're spreading out the heat as much as possible for any chip or group of chips. Also things like HBM, memory you can stack up right in the same package with the chip, giving it more efficiency and essentially more transactions per watt. So these are all techniques that are being used today. And the beauty of our approach is that it works with all of these. You can use diamond cooling by itself, or you can match it with anything else that you're doing, and give yourself additional operating margin, additional performance margin,
that you can then use to drop your temperature, run your system hotter, and give yourself the opportunity to perform more transactions. And that's something that's critically important because you see just in the last year,
when NVIDIA has introduced a couple of new chips. First, with the Samsung HBM, there were heating issues. Then with Blackwell, heating issues came up again, and that rollout was delayed because of them. And for the first time, if you listened to the last conference call with NVIDIA, heating issues were mentioned.
This is something that's going to increasingly be on the radar for companies, for investors. I think that Jensen was able to not directly answer the question. People are very skillful in these conference calls, but more and more of these questions are going to be asked and people are going to need approaches to address them. We have what we think is the most effective approach because it goes right to the heart of the device.
Yeah, it's interesting, because intuitively, not being from the data center management space, I get nervous when anything has a mechanical component, right? But I do have friends running large-scale data centers of this type. And even though there have been announcements of, for example, the liquid-cooled GB200 NVL72 system, and some companies' interest in adopting liquid cooling, fans and liquid cooling come with their own reliability issues, of course, right? It's just complex to implement that and keep it from leaking and breaking and all the things that movement requires. Yeah.
Yes, Sarah. Actually, the servers that we ship today, with the H200s from NVIDIA, for example, are both liquid-cooled and diamond-cooled. So just to illustrate Ty's point, our diamond technology layers on top of whatever technology you use, whether it's liquid, fans, or both. But to push on that point further, we believe at Akash Systems that if a material science approach, more specifically a physics or chemistry approach, to solving the heat problem is not used, then based on today's projections the needs of AI data centers around the world will crash the grid as we know it. And if it doesn't crash the grid, we believe the cost of electricity will be exorbitant. The path we're on today is not sustainable. And that's part of our inspiration for attacking these problems starting with physics and chemistry.
So take an example: your laptop. Your laptop has a whole bunch of chips inside of it. You put it on your lap and you feel the warmth of it. If you bring an ice pack and put your laptop on top of it, nothing will change. It will not speed up your CPU. You're not going to change the heat extraction of your CPU, because there's just such a great distance, a thermal barrier, between the ice pack and the CPU.
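The physics behind the ice-pack example is one-dimensional series thermal resistance: every layer between the chip and the outside world adds a temperature rise proportional to its thickness divided by its conductivity, so cooling far from the source accomplishes little. A minimal sketch, with layer values that are hypothetical placeholders rather than measurements of any real device:

```python
# 1-D conduction through a stack: each layer of thickness t (m) and
# conductivity k (W/m.K) adds R = t/k per unit area, so dT = q'' * t/k.
# All layer values below are hypothetical, for illustration only.

heat_flux = 1e6  # W/m^2, i.e. 100 W/cm^2, the order of a hot AI chip

layers = [
    # (name, thickness in m, conductivity in W/(m.K))
    ("silicon die",       500e-6, 150.0),
    ("AuSn solder",        50e-6,  57.0),
    ("copper heat sink",    3e-3, 400.0),
]

total_rise = 0.0
for name, t, k in layers:
    dT = heat_flux * t / k   # temperature rise across this layer
    total_rise += dT
    print(f"{name:17s} +{dT:5.2f} K")

# For comparison, a 500-micron diamond layer (k ~ 2000 W/m.K) would add
# only about 0.25 K at the same flux, an order of magnitude below silicon.
print(f"total rise through the stack: {total_rise:.1f} K")
```

The point of the sketch is that the rises add in series, so attacking the layers nearest the heat source (as diamond at the chip level does) matters far more than adding cooling at the outside of the stack.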
Going straight to the heart of the heat source, the chip, the material, with physical and chemical solutions allows you to make a difference. And that's what we're doing. We don't see a lot of approaches like that out there. We think this is going to be really key to curbing the consumption and actually allowing the AI vision that I think we all hope to see come to fruition. If I think about the complaints that people running large data centers have as blockers or issues to deal with, it's chip supply, GPU reliability, power supply, heat. How does heat relate to all of these other issues?
Everything you mentioned is heat. I mean, we see the same thing, by the way, in space. Every problem that one addresses in space, it's almost like a whack-a-mole. Unless you go to the source of the problem, which is the heat producer, you're really just playing whack-a-mole. You knock it down here, it'll show up elsewhere.
Everything you've just described is a heat problem. And by the way, the scale is enormous: there are billions of these chips. One simple server, the ones that we ship today, has eight GPUs in each single blade, and then scores more of other chips, equally heat-producing. So we at Akash are really just scratching the surface of
of this problem. It's a pervasive problem up and down the supply chain. You just mentioned the way it shows up in the world, the fact that we're trying to do more with what we have. It's breaking the bank, costing a tremendous amount of money, and leading to reliability issues. It's not uncommon for servers to suffer infant mortality: they show up and have problems right away. And when a chip gets up to 80, 90 degrees C, it starts to curb back performance, thermal throttling. This is something a lot of your listeners are going to be familiar with: the operating system has to back off the workloads on the GPU, which means slowing it down, so that it can still do the inferences that customers are asking of it. So it is a significant problem. And I think that
just attacking it at the network level, or putting band-aids on at the system level, will only kick the can down the road or add cost to the overall ecosystem. You have to attack it at the material science level, at the level of physics and chemistry. And Sarah, you used an interesting word there: blocker.
And we are getting to the point where people are starting to get stuck on this issue. We mentioned the issues with the device rollouts and the delays this has caused. You're going to see this happen more and more going forward: the drive to increase performance, you know, two to three times every two, three, four years, is going to stall, just because of what it takes to draw power and water to the site and then get the parts to run with this heat buildup. And you are stuck inside the server. Anything you want to do inside the server, because right now you've got layers. You've got your chassis, which is probably aluminum. You've got a copper heat sink.
Then you have some sort of epoxy bonding material. Then you've got your chip material, which is silicon. Then you've got some sort of solder, gold-tin. Then you've got FR4 or some other polymer board. Then you've got more gold bumps. Then you've got another board. So you've got this sandwich, and I'm just listing off some of the layers. And every time you have an interface in that sandwich, that's a thermal barrier. So what are you going to do about that? You really have to attack that, and right now it's not being attacked. And it's going to have to be attacked, because
while it may not outright block the creation of data centers, you look at the multiples in the market, you look at earnings. This is something the people at AMD and NVIDIA are probably thinking about. The results are growing now, right? But look what happened at Intel. You miss two, three quarters and you go from being at the top of the mountain to
you know, listening to those bells tolling. And I think this is going to be top of mind for any executive at any of these companies.
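The thermal throttling described earlier, where the operating system backs off GPU workloads once temperatures climb, can be sketched as a simple feedback rule. The threshold and back-off rate below are invented for illustration; real GPUs use vendor-specific throttle curves:

```python
# Toy throttling model: full throughput below a junction-temperature
# threshold, linear back-off above it. All numbers are illustrative,
# not the behavior of any particular GPU.

def throughput(temp_c, base=100.0, throttle_at=85.0, floor=0.5):
    """Delivered throughput (arbitrary units) at a given junction temp (C)."""
    if temp_c <= throttle_at:
        return base
    # lose 2% of base throughput per degree over the threshold,
    # never dropping below the floor fraction
    scale = max(floor, 1.0 - 0.02 * (temp_c - throttle_at))
    return base * scale

for t in (80, 90, 95):
    print(f"{t} C -> {throughput(t):.0f} units")
```

In this toy model, dropping a chip from 95 °C back under the 85 °C threshold recovers the full 20% of lost throughput, which is the mechanism by which chip-level cooling translates directly into performance.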
Yes, I understand. You've got to cool the sandwich down from the inside as the sandwich gets bigger and hotter. I think one of the things that really resonates with me, at least hearing from friends in the industry, is the thermal throttling that you described, the fact that you see and have to manage erratic behavior when these GPUs are at higher temperatures. For the vast majority of people working in machine learning right now, it's a very abstract software field, right? So the idea that you have these non-deterministic behaviors based on how the materials themselves are interacting, and that you have to account for them, is just a new domain to think about. Maybe, because it is your area of expertise: how do you fit into the partner ecosystem of the NVIDIAs of the world, the Supermicros of the world, other SIs, et cetera?
So just to be clear, we are buying chips. We're not making GPUs. We're taking the hottest chips in the world and cooling them down so that we can open up the performance envelope for the system architect. And so we fit in, we're coming into the world, as a server maker that is opening performance envelopes for folks doing inference work, folks training models,
data center operators, cloud service providers. Okay, that's our entry into the world. It makes sense that we would go to the most challenged parts of the market, the folks that are struggling most at that performance edge. So I'm going to go out on a limb here and say that
We think that with our diamond technology we will be able to hyper-accelerate Moore's law, so that in two years we will be achieving what previously folks had to wait six, seven years for in terms of performance. Because remember, Moore's law is about squeezing transistors closer and closer together. But you can only go so close before you have thermal crosstalk between those devices. And right now, the limits we see in AI, the pace at which we can do inference work, is set by that thermal crosstalk. If we remove that thermal crosstalk and allow greater densities, then all of a sudden we can create a feature-length film in seconds, rather than on a timescale of months or years if you're doing it offline, or probably days with the thermal limitations that exist in AI right now. We'd like to see seconds of production time for a full 90-minute feature-length film.
And that'll happen because of the unblocking of the thermal limitations inside the GPU. I'm going to ask a silly question, but you've mentioned it several times. You're growing diamonds. How does that process work for the form factor that you want? Assuming, you know, the vast majority of our audience has only ever heard of the concept of growing diamonds in the realm of like jewelry. Yeah.
It's really no different than growing other semiconductor materials. If you're growing silicon or silicon carbide or gallium arsenide or indium phosphide, any of these electronic substrates,
You start with a seed crystal, and then you use some sort of process, typically chemical vapor deposition, to grow perfect single-crystal material out from that crystal. That's the same way you grow diamond. Diamond is just carbon, right? So you take a seed crystal of perfect carbon, of diamond, and you use a plasma to grow the diamond in a reactor. It takes very high temperatures, very high pressures, to do this, but it's essentially a similar process to growing silicon or silicon carbide wafers.
And Sarah, our specialty, our secret sauce, lies not only in how we grow the diamond, but also in how we intimately couple that diamond with the semiconductor using physics and chemistry. So it's not a trivial process; it does take some work. You were asking why it takes so long: it does take time to do material science. But that intimate coupling of the atoms of diamond with the atoms of a semiconductor is what we understand, and that's what we bring to bear in both space and in AI. And that's what makes this not so easy.
But we're very excited about it. We're deploying it in the servers that we ship, and we see very strong market pull right now, given what's going on in the world.
You guys announced an exciting customer just this past week, NextGen Data Center and Cloud Technologies. Can you talk about why they're a good early customer? And they're also like a sovereign cloud player. So I want to talk about that as well. Sure. So NextGen is the largest sovereign cloud service provider in India.
They handle the country's data in a very careful way, making sure that it stays within the sovereign borders of the country. We see that requirement coming from countries all over the world. Nobody wants their people's data leaving the boundaries of their country. We see that, by the way, in space.
When data is coming down from a satellite or being pulled away, that satellite has an obligation to keep the data within the boundaries of that country. So that's an opportunity for us. It means that we're going to be able to address this issue with every country individually. So that's number one. Number two, they are the leader in that region, in India. And so we thought that that would be a very good
test case to show the world what is possible. The fact that we as a small growing company can scale to the kinds of volumes that they need and we can scale rapidly. You know, we got to ship all of this stuff within the next quarter. And a lot of small companies trip at that. NextGen selected us believing that we have that ability to scale. Thirdly is the fundamentals of the technology.
I think they saw very quickly, and they're led by some very innovative leaders, that this is a problem that will stay with us for a very long time unless we get to the very heart of it, the material science nature of a solution. Otherwise you're just going to be tiptoeing around the big elephant in the room forever. So we were very excited when they saw that opportunity, the fact that this is a company coming at the problem from the material science. And when they saw that we could scale, we jumped at that.
And then, you know, I think NextGen is positioning themselves and using our technology to not only scale within India, but potentially scaling around the world. Okay. Again, respecting the sovereignty of country data within that country. So we think that this is a very, very nice match.
And they opened with this size of order. We're excited about what's coming down the pike with them in just 2025; this is just the beginning. And these problems are faced by U.S. companies as well. India is not the only country dealing with this, and we're talking to them too. This is definitely a very important topic that you brought up, Sarah. Yeah. What do you think is the importance of American manufacturing of AI chips and data center capacity? This is especially relevant given you're one of the only small companies that is a CHIPS Act recipient, and, you know, Gelsinger just stepped down. What is your current view of American capacity and your outlook for it? My view is that the U.S. is not doing enough
and needs to do a lot more. This is a technology that the US needs to be the leader in, and it needs to lead all the way up and down the value chain. We can't just rely on what Nvidia or AMD has done to date. We have to continue to invest in not only the larger companies, but also the smaller companies like us, like others.
who are working on some of the very critical problems because as you know, AI is not only a critical technology for business, it's also a critical technology for national security.
And these are things that the US cannot rely on other countries to develop for it. And so we have to really drive technology development. We have to drive manufacturing all the way up and down the supply chain and put a lot of investment into this technology. It's going to be very important for the future of this country. And we're there to support that.
And that's what we're focused on. Let me add to that by saying that our receiving a CHIPS Act award is a testament to our support of the USA. We're all about doing the things that Ty mentioned: strengthening our supply chain, which is one of the key tenets of the CHIPS Act, and supporting national security. We supply to defense, right?
It's public that we work with Raytheon, iconic American defense company. This technology allows Raytheon and US Defense to maintain defense military supremacy around the world in a way that has never been done ever in the history of mankind.
This technology secures our commercial supply chain in a way where I think we had started to slip, and COVID laid that bare, when we saw that, oh wow, we're depending on others to backfill key chips that we used to be able to make ourselves. Now we can make them at home, right here in California and in Texas. We're going to be creating jobs in both California and Texas.
We have support from a broad spectrum of brilliant investors, Vinod Khosla and Peter Thiel among them. So I think the CHIPS Act is something that's going to enable us to fulfill all of its tenets and mandates, and to do things that everyone in the country can be very proud of.
You said you worked with Raytheon, you'd worked on space applications before. I think it may not be intuitive to every listener, like why, you know, satellites and radio communications are so important from a national security perspective. But I think increasingly you're going to see conflict and warfare defined by your understanding of the RF spectrum, be it space or other systems, right?
And I think the ability to support that is critically important. It's totally separate from any of the AI system work that you're doing.
100%. When radar was developed during World War II, that was a huge game changer. Without radar, it would have been very difficult for Britain to win the Battle of Britain because that early warning system was critical for them. Just on a personal note, my father-in-law was a radar operator in World War II, and he was one of the first people to get exposed to this technology.
And he said that they would chase German submarines off Florida and the submarines would go beneath the surface. They wouldn't know how they kept finding them. Okay. So it just goes to show that when you introduce these new technologies, they have outsized impact on the world. And yeah, it's true with RF and it's also true with AI.
Maybe, because you think broadly about this problem as a participant in the AI supply chain: you've said the U.S. is not doing enough and needs to do more, and there's increasing risk. What do you think the other critical problems are that are even feasible to take on? Is it credible that the U.S. is going to have fabs and lithography machines and these other core components in any near-term time window? Sure.
Yeah, the U.S. will do it. The only question is whether the U.S. will do it because it has to do it or because it wants to do it. The U.S. can accomplish anything. We have the people. We've got
tons of natural resources. We've got the capability. Part of it is that corporate culture is driven by earnings, right? And if we only focus on earnings, then if it's cheaper to make something in Asia, you make it in Asia. But there's a national security component to that, where
Maybe it's not best for the country if you make it in Asia. Maybe you need to make it here. So we need to find a way to bridge that gap, so that everybody's not just chasing that last penny of earnings and sending manufacturing of these critical technologies overseas instead of doing it here. This is a sector where we should start off
correctly and do all these things here, all the way up and down the supply chain: everything from the chips and the servers, the software, you know, to the frames, the housings, the racks, right, all the big dumb metal pieces, and then the data centers themselves. We can do it all here in the United States, and we should be doing that, focusing on
you know, federal programs and funding to make sure all of that happens here. I do think that AI will play a role in manufacturing and sort of 10x-ing or even 100x-ing manufacturing so that, you know,
we can actually outperform humans in other countries. I think that's the way it's going to look. Okay, so we will not have to reduce labor costs in order to compete with China. I think that will come through, you know, extraordinary feats in an AI-powered manufacturing industry.
You know, universities today. When I was in grad school, everyone had to get training in a machine shop. Cornell, I remember, had about four or five machine shops, and freshman year of engineering you had to get training in how to use those machines. Today, almost every professor has a 3D printer. So that's just
added jet fuel to the capacity of everyone on campus to manufacture whatever they want, whenever they want, and however they want, at very low cost. I think you're going to see the same thing with the use of AI in manufacturing, where
we will be able to make, per capita, a thousand, even a hundred thousand, times more components, more equipment, more parts, more chips than anyone else in the world. And that's what AI makes possible.
No, I was just going to say, I believe that is possible too. And it's a much more inspiring vision for the future than one where we completely cede the supply chain and are strategically at the mercy of others. Right. Yeah. You both have, you know, very esteemed backgrounds in material science.
One of the most interesting things, you know, coming from the software and computer science world, is the applicability of transformers, diffusion models, and the
effectiveness of deep learning overall as it scales. It's very interesting in that it applies to so many different domains, and there's increasing excitement about its applicability to material science. Do you guys think about that yourselves? 100%. Yeah.
And it's very applicable to research, to helping research go a lot faster. Because if you think about how AI can help you, how AI models can help you do work: personally, I've been using AI as an assistant that greatly increases your pace of research, because
A lot of trying to solve a problem in technology, especially core technologies like material science, is first figuring out what's been done today. Because there's a lot of smart people in the world and there always have been a lot of smart people in the world. And the key to your solution might be something that somebody did back in 1963, but never went anywhere because they didn't have the ability to do as many iterations as you can do now. So your ability to go back, find that information, and then
apply it to the problem that you can solve now. This is one of the things that I'm really excited about applying these models to, because I think it can really help drive innovation. And we're not all going to be robots and slaves to AI. AI is going to help us innovate even faster.
Do you feel optimistic about these ideas around using AI for inverse design, or better exploring chemical space, or accelerating DFT simulations, or more fundamentally in the process of discovery? Yeah. So, you know, chemical space, density functional theory, any of these topics that you're talking about, they're all just problems to be solved.
So no matter what topic or what approach, anything that can be solved through asking questions and iterating,
it can be done. So any of these approaches can apply to material science; it can apply to the medical field; it can apply to software, right, to coding. And we've already seen it at lower levels, but really we are limited by our ability to frame the questions.
That's really it. We're limited by our ability to frame the questions, and only by our own imagination, really. So I'm very excited to really get into this and figure out how we can use it, because there are problems we want to solve right now that we don't know how to approach, because we know it's going to take so many iterations, and there's so much information we need to find to take a good run at our hypothesis,
that it's difficult to get started. But if you've got somebody working for you and working with you who can do these iterations, a billion iterations in a second, I mean, it's really exciting to think about.
I've been at Khosla, our investor, and I would have these conversations about how, as entrepreneurs, we can sometimes be limited in how we imagine, because you're constantly guided by the boundaries, the constraints, of the world as it is today. Okay. And what AI does
is open up those boundaries so that we can begin to imagine the things that you're talking about. Inverse design, right? What if we could run processors a billion times faster than today because the thermal envelope is no longer there?
Could we accelerate the calculations, the modeling, that would have taken a hundred years but now takes a second? What does that even mean? What is not possible? I think that, yeah,
I think the greatest challenge is our own imagination. I think Ty hit the nail on the head: the difficulty is trying to ask the right questions, because now we have almost infinite processing capacity. The difficulty is how we get out of the way so that compute can try to solve these problems.
I think that in the biotech arena, getting drugs that are dialed in to every form of cancer is now well within reach. I think being able to have battery capacity so optimized, because we don't have the thermal constraints that we do today, that it can take us from SF to New York
on a single charge, that is well within reach. I think really the sky's the limit, and I'm excited about that future. On that note, Felix, Ty, this has been a wonderful conversation. Thanks so much, and congrats on the progress. Thank you, Sarah. Thank you.
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.