Renaissance Philanthropy, for my money the most exciting philanthropic venture in the U.S., is getting a one-year check-in. Kumar Garg was on right before I went on paternity leave, and now we're checking in. They recently released their annual report, having catalyzed over $200
million of very cool philanthropy. We'll talk about how they did that, what impact they hope it will have, how it stacks up against what the Trump administration is doing to the broader American science and technology research industrial complex, and what plans Kumar has for
year two. Before founding Renaissance Philanthropy, Kumar worked in the Obama White House Office of Science and Technology Policy and was at Schmidt Futures for a while. Kumar, welcome to ChinaTalk. It's great to be here. I like that this is becoming an annual tradition. Yeah, we've got to set goals this year so we can hold you to them in 2026. Great. Why don't you start off with the 101 of Renaissance Philanthropy and how the thesis has played out over the past year?
I'm grading myself, so this is a biased view, but it's been a very strong year one. Part of what I was trying to think about when we were launching the organization is that we were trying to do something different. And the different part was: most philanthropic organizations basically exist in a single model, which is they work for a single donor. That donor has a set of resources, whether they sit in a foundation, sit unallocated inside their DAF, or are just their personal wealth. And what those organizations do is work for the donor and ask, you know, how much money, on what topics, would you like to give? And then they run the philanthropic giving. Then there's another class of organizations that are basically the people spending the money. So that's a researcher running a lab doing high-quality research.
And the philanthropic system basically has mostly operated with givers and takers. So the folks who are operating these organizations and the folks who are doing high quality work. The idea behind Renaissance Philanthropy was to actually sit in the middle and to say, we want to style ourselves more like an investment fund, more like what happens in the world of finance where
the folks who actually hold the capital mostly don't spend their time trying to directly deploy that money. So if you work at a family office as an LP, you might have a team of 10, 20, 30 people and billions of dollars that you're supposed to deploy. Well, what do you do? You go out there and find intermediaries. Those might be private equity funds, hedge funds, venture capital funds, or other folks who are experts in particular sectors and areas, and you give them the money. They deploy it on your behalf and help you earn a return. Philanthropy has mostly operated differently. It's an odd thing, but it's very historically contingent: the investment world went in the direction of specialization from the '70s on.
And philanthropy went in the direction of direct giving. So you have really large philanthropic organizations, often really well staffed by experts that do the giving. The challenge is that there's a subset of donors that want to build large organizations, and there's a large set of donors that don't. And the ones that don't
have been sitting on the sidelines. And I think what ends up happening is, you know, maybe when they retire they build an organization, or maybe when they die they bequeath it to a nonprofit or a university. And that just leaves a lot of value on the table. And so the idea of Renaissance was: on various science and tech topics, can we actually do what an investment fund does? We write down a thesis, saying in three years, in five years, we want to hit this goal,
recruit a field leader to actually run that fund, and then treat the donors almost like LPs, where it's a philanthropic fund, so we're not giving them a return back, but they're putting the money to work against that strategy. And I have to say, like, a year ago when I sort of told this story to people, like, oh, I'm going to create an organization that does this,
I think the operative advice was, good luck. One, you're going to cover the waterfront, working across AI and climate and economic and social mobility. And two, you're going to take on this massive fundraising goal that seems like a very hard way to operate. And you have no natural advantages: you're not spending one person's resources; you have to raise the money and then deploy it. Seems doubly hard. And the thing that I was interested in is growing the pie,
which is, can we actually use this model to bring new donors in? So I think a year in, the early grade is strong. We've been able to stand up multiple philanthropic funds. We have a fund that is using AI to accelerate the pace of math research. We have another fund that's using AI to say, can we deliver public benefits better?
Just recently we launched a body of work that is on the world of climate emergencies. So can we solve for the fact that there might be runaway risks in climate and what are ways that we increase the technology readiness level of various climate technologies? So we have a bunch of different funds in various areas.
And then the idea is in each of them, they have this basic structure where they have a thesis that they're driving against. They have a field leader that's running it. And then we're recruiting donor money against that strategy. So, you know, I'm part of what I'm hoping for is that this starts to just become, you know, not that this is the only way philanthropic giving happens, but it just starts to become much more of a credible path.
And this allows more donors to be active without necessarily having to take on all the operational load themselves. Can you order-of-magnitude the hope of this new model you're trying to manifest against, I don't know, the NIH budget being cut by a third? I don't think there's any world in which philanthropy fills the gap, partly because if you take a step back and you say,
How has the U.S. built its lead? Well, the U.S. is spending on the order of $200 billion a year on R&D, once you include basic and applied, across DoD and civilian. That is just an order of magnitude more than philanthropy spends on research. I think the place where
these new models are going to get traction is that the question of how you organize scientific organizations has suddenly become much more of a jump ball. It used to be that the academic bundle, being at a top university, everything sort of stacked on top of itself. You could get really good talent that way, graduate talent and postdoc talent. You have great students. You build your lab there. You can do cutting-edge work. Usually the university actually gives you a lot of flexibility to do a bunch of things on top of it. So if you're an academic who's doing well at building things and doing cutting-edge research, you can do all that within the four corners of a university.
There are some researchers who have left the university and built what are basically academic research labs outside the university. You've got the work that Patrick Collison is supporting around the Arc Institute. You've got the Flatiron Institute that Simons is supporting. You've got the FROs that Convergent Research proposes. So you've seen some of these pop up. For a long time, that has been a very alternative path.
It's rare to do; you're often figuring out what happens to your university affiliation and how it changes your path. I think if you're a researcher who's ambitious and wants to do big projects, whether you do them within the four corners of the university or in your own nonprofit research lab that then partners with the university becomes more of an open question, especially in a world in which university funding might fluctuate based on what's happening politically.
I don't know how that will totally play out over time, but I do think we're three months into a deeper shift in how institutional financing will happen, and I could imagine that having a big implication. Generally, on net, if the federal government does not play its important role in funding this research, it's a net negative. But even if federal funding comes back to a healthy level, I still think researchers are going to take this as a wake-up call to think about how to structure their research organizations so that they don't face these sorts of systemic shocks. Yeah, more resilient. So there is this big debate. Well, maybe it wasn't a debate. I observed the EAs being very excited about how many lives they saved based on the bed nets they bought, and then you net that out against USAID no longer existing and all of the human suffering that's going to come from that. The correct calculation may have been to just spend all your money lobbying Congress to get people to focus on this. So, I mean,
I think both of us are pretty in line and we've done other shows on sort of immigration policy and university funding and, you know, what's happening to the NSF and NIH budgets going forward. But like, why do Renaissance philanthropy when Kumar can be spending, you know, 100 percent of his time in D.C. banging on doors and trying to like, you know, make it five percent more likely that we get an extra 10 billion dollars a year for this stuff?
Yeah, it's a great question. I mean, I think in general, being policy adjacent is very high ROI. And in general, no matter how you run the numbers, policy advocacy, especially on science and tech topics, punches above its weight, no matter what you're doing. I mean, it's partly why I spend a bunch of time in government as a policy staffer. It's partly why, no matter what I'm doing, I'm constantly interacting with policymakers and trying to make the case.
And it's also why, when funders ask to what extent advocating on behalf of the research community should be a part of their work, I'm strongly supportive. The reason we structured Renaissance the way we did is that, one, I wanted to specifically think about how we grow the pie of philanthropic funding, because I thought no one was doing it. There are organizations working on policy advocacy; very few organizations were trying to bring new donors into the mix. But I do think we would be failing as an organization if our work didn't end up having some impact on shaping the debate on the future of R&D funding. So we do think about that. We try to be in conversations with both Congress and the administration, but also with policymakers in general, up and down the ladder, to say, here's why this work matters. And part of
funding a bunch of these new models, whether it's things like FROs or AI accelerating science, is to also make the case for why investment should happen. So with a lot of the ideas that I've funded over the years, you can see echoes of them in the new Heinrich legislation around accelerating science through AI, where they're talking about
how we make sure these AI investments can actually accelerate the pace of science using new models. Philanthropy, when it's done well, opens the aperture for what funding could do. Hopefully, we're playing that role. One place I would like the conversation to end up: we're currently in this dialectic between science is important and science needs to be dismantled because it made mistakes.
And I would like to get us to a place where there are important things we can do to help reform the way we do science. We should bring more discipline to trying out new ideas, bringing in new ways of funding and new voices, and reflecting on past mistakes, while also remembering that the investment agenda around science is critical for its utility.
Hopefully we can be part of that dialogue. But in some ways you're pushing on something that I think about all the time, which is, you know, I am a policymaker at heart. So, not to forget the deep utility of that in my story.
All right. So the answer I would give you is that this is almost the federalist model of policymaking, where, as you said, the inventiveness you guys can bring, from a form-factor and discipline perspective, to how science and technology research gets done is the kind of thing that's weird enough that it's not happening in government, right? But also, two, three, four years down the line, once you guys have some really awesome case studies, those are the sorts of things that can then get 10x'd or 100x'd in our, you know, gorgeous NSF circa 2027 that has been remade to fully align with the Kumar vision of
how change gets made. So with that stance of optimism, let's talk a little more in detail about some of the projects you guys have stood up. Take us on a little tour, Kumar. Where do you want to start? Sure. Yeah. I'll go through a couple of the funds and projects we've landed just to sort of give people like a taste of how the model sort of works. So let's start with
sort of our work in AI. Our sort of operating theory in AI is that we're living through a period of a huge capability overhang. The idea just being that the core technology is rapidly developing, but the number of people, projects, and just work overall that actually is applying these tools towards actual hard problems in society is really small. So I'll give an example.
So we have an AI and education fund, which is specifically focused on how AI can accelerate learning outcomes. Now, if you just follow social media, there are so many people who write and talk about AI and education that
it would give you the sense that a lot of people are working on AI and education. But if you actually dig into the space, the number of actual technical experts who have knowledge of how education works and knowledge of how AI works is still shockingly small.
So we run this thing called the Learning Engineering Tools Competition. It's an annual competition that asks tool developers: do you have a cutting-edge idea that uses AI to actually advance learning outcomes? We've been running this competition for a couple of years. I started it even before Renaissance and then brought it into Renaissance. That competition is the only large-scale ed tech competition in the world, and
it still blows my mind. No one is out there in a systematic way asking for ideas from people who want to build AI for education. We have another part of our AI and education portfolio that specifically thinks about moonshots: what's a really hard problem in education that AI could solve? So we picked middle school math. It's really important for advancing to future degrees and advanced coursework, and students really struggle with it. So we said, okay, can you actually emulate the results of high-dosage tutoring, which studies from J-PAL and others show can really double the rate of learning for students in math? And can you do it for under $1,000 per kid?
Can you bring it under a cost that would make it possible to offer it to every kid? So we have that running as a program. We have seven teams in the program, and two teams that are actually on track to potentially accomplish this goal. Which is wild. Right. But when those teams are working on it and we ask them, who are you collecting lessons from? There's not a big field they can go out to.
When they go out and interview the AI labs, the ones that get written about every day, those labs talk about education, but they don't have in-house education teams that can actually help these teams. So the big thing I always say to people is that at the coalface, there's tons of room to do work, because when you actually start working on a problem, you realize the number of people actually working on it is shockingly small. So we're now starting to explore our next moonshot area, which is: should there be something at the intersection of AI and early learning? Can we actually build a universal screener to best-guess whether a child is off track on early language development, just from them speaking into a device? There's a bunch of interesting work happening in this area, but we don't actually have a way to diagnose early learning challenges like dyslexia just by having a student speak into a device. It could dramatically increase our ability to get them to a speech pathologist, get back on track, and be reading by third grade, which is critical to future reading and learning. So that's just one track, AI and education.
So that's just one compelling thesis; obviously AI is going to matter for education, and it's hard to find people to argue with that. Talk a little bit about finding the donors and finding the teams. What was the work you guys had to do to make and launch this? Yeah. So what's been interesting is,
It has been hard work for us to build out the team because the number of technical experts who actually know both things, AI and education, is small. So we have slowly built out a team of ML experts who have educational backgrounds, basically. So we call it sort of like a hub model. We basically have created an engineering hub.
And we recruit technical experts into it who specifically have this background. So I have somebody on my team, Ralph Abood. He has a machine learning PhD; he did his PhD in graph theory. He's not an education expert.
But we brought him onto the team. He has been now working with a lot of these educational teams that we brought in. And what's interesting is that like his ideas on what kind of language models they should be building are really good. Like it took him some time to level up on the education side, but now he is one of their highest value contributors, even though he sits on our team and he's contributing there.
So there is this transition where you can build up talent that sits across these two areas, but in AI and education we mostly had to build it; it was hard to find directly. And so now we have a constellation of these AI and education experts, some of whom sit on our direct staff, some of whom sit inside the teams we're betting on, and it's been great. Now we have, I think, a field team that can really go after more problems.
I think on the donor side, we've really lucked out: our core donor for a lot of this work has been the Walton Family Foundation. They have a long history of funding in education. What's been interesting is that they've been interested in investing more in what they call their innovation portfolio, but didn't necessarily know how to bridge the technical divide, which is: hey, if we're going to do more in this area, who are the technical experts who will actually do it? And that had kept them exploring but not executing. Their partnership with us has meant that they have become way more ambitious about how much investment they want to make in this sort of technical AI and education platform.
And so that's sort of our core thesis: can we be the permission structure for donors to go much bigger on innovation? We've seen that in other areas too, where slowly their support is causing other donors to come in as well.
And that's true whether you've been a longstanding donor but not active on science and tech topics, or you're an early donor altogether. And what's the RenPhil management fee? It's a good question. So we
build our cost recovery into each fund. Usually the way that works is that if we're operating multiple funds, each fund has, let's say, grants and money going out the door for the actual deployment, but then we're building in our costs for the staff operating the fund, whatever technical support services we're providing, the work we're doing to partner with various funders, as well as our overall studio support. It varies fund to fund, but donors, compared to having to try to do this themselves, have found it to be much more actionable. And for us, we want to build a thriving organization, so we don't want to cut corners. We want to build an organization that can both operate those funds and also be looking for the next ones. Does anyone complain about that?
Does anyone complain about it? The way I think it comes up is that there's a type of donor who actually has the answer in their mind. They're like, I think this needs to happen, and really what they're looking for is an operating partner to just do that: I want a conference, I want a workshop, I want to fund these three organizations. And our model is, we're the product. You're actually hiring us to build out the strategy, recruit the team, and deploy. And so if you already have the answer in your head, we often tell them, we're way too fussy for that model; there are much simpler ways you can operate. So I think that's where the delta comes in. If you already have the answer in your head and you're just looking for a partner to execute for you, we're probably not the right fit. You said this on another show, you were like, we take the cognitive load off for donors. The idea being, yes, if I have $10 billion, maybe I'll allocate
$1 billion to investing in stuff I know, where I have some subject matter expertise, but I still have to put the other $9 billion somewhere, probably not cash. And yes, I am comfortable paying a hedge fund or a financial advisor a management fee to do that. A big part of it is opt-in, which is: people don't know what journey they're on, but what they worry about is, am I going to feel stuck?
So I think a lot of folks end up not getting active philanthropically because the decision feels weighted by the fear of getting stuck. Okay, if I hire somebody and then six months from now I decide maybe I want to change direction, now I'm going to have to let someone go. People hate that. Or: I met a researcher, I liked their research, I gave them one grant. But now they've reached out and said, there's so much happening in the world, I've lost funding from the government, can you double the grant? I was just giving them a grant because I met them and thought they were great. But now they've sent me a note that they might have to let go of postdocs. Now I'm in this uncomfortable situation: if I say no, I feel like I'm hurting them; if I say yes... So people have all these experiences where they feel uncomfortable with the relationship they have around their resourcing, and rather than that causing them to work through it, they hold back. And one of the things we say to them is, our model is one where we're the ones making the decisions.
We're going out there, we're finding researchers, we're finding projects, we're developing strategies. You can be as involved as you want. You want to be meeting the researchers? That's great. You want to be learning from the strategy so that you can do direct giving down the road? That's great. But if you also took six months off and decided like, "That was great. I learned for a few months. Now I'm off doing something else."
Nothing will stop. We're a fully operational organization that will execute on everything that we said we were going to do, whether you're involved or not. And so it just takes the pressure off. You can opt in if you want to learn and you want to be involved, but you can also choose not to. And that actually, I think, frees them up to want to learn without the sort of like, am I about to get stuck? And that sounds very psychological, but I do think that
People forget how hard it is to get going on things: I'm going to start working out more, I'm going to start doing this. Starting is hard. And so we want to make starting easy by saying you can provide a lot of value into the system without necessarily having to own all of that execution. There are a lot of pieces of people's jobs that, more and more, AI can kind of chip away at or enable or launch or whatever. And it's interesting, because some of the things that you guys are doing, like these seven playbooks you have of ways you can tackle problems. I would love to upload seven of those to ChatGPT and say, here's my problem in the world, and have the AI help me pick through which one fits. But getting someone who's really rich, who's feeling kind of uncomfortable about giving money, to start donating philanthropically in a serious way for the first time seems like one of the more human things, where there's really going to need to be a friendly Kumar Garg, who now has a nice microphone he can do Zoom calls with, to, what did Derek Thompson say, whisper the dulcet tones of comfort and competence in their ear, in order to get them on this path. So I don't know, it just seems like a very human thing you're engaging in on the donor engagement side. I'm curious for any reflections you have on that. We are very curious about how much of our own internal processes we can automate,
because why not? We sit next to AI. We should be thinking, we should be dogfooding. I think the place where we've seen it sort of already provide some value is just what you would consider baseline automations. Like there's a lot of grantee reporting that you should be able to do automations on. We're like definitely interested in just, hey, we have a hunch around a thesis in this area. Can you do like a research report and tell me
what's the relevant stuff to know. Sort of like scoping. We've even used it for: hey, we might do an RFP on this topic, who are some researchers who should apply? And it's sometimes found interesting suggestions for researchers we should affirmatively reach out to. I will just say that we're still far away from it actually helping on anything that we would consider high stakes. As you're saying,
a huge amount of what we're doing is making something that feels like a trust fall. Hey, this is an important decision, but one where having people who take their job very seriously and put their own personal legitimacy behind the work is an important part of it. When we screw up, it's on us, right? We stand behind all of the work.
And people appreciate that: these are serious people who stand behind the work that they're putting before them, not some faceless intermediary. Maybe that will change, but that's an important part. Even on the information you should know about various people, I think current AI models are not that great. The place where I am interested: we have
this intuition that there should be parts of being a program leader that you should be able to have an AI assistant for, right? You take more and more of the tasks of being a program leader or a fund leader and are able to say, okay, I want to do a workshop on this topic, generate me an agenda for how you would run the day. And it takes a bunch of your past workshop flows and generates a sample workshop design. How much of that can we create so that
we really could get to a point where a program leader or a fund leader is basically able to operate without that much additional support. Obviously, we need to create some cross-cutting support. That I'm sort of interested in. But the chance that we're going to get to an AI advisor, I think we'll have to wait. And the trust fall works in lots of directions, right? Because you need the researchers to give up their PhD or leave their programs or spend half their time with you.
as well as the donors to give you their money, right? And having a face on the other side of that, a face with a track record and some skin in the game, seems like something that really is not going to go away anytime soon. Well, one thing we've debated internally is that a lot of my own workflow is tacit knowledge. Yeah. Like when I'm talking to somebody and they're telling me about their work, 20 minutes into the conversation I'm like, ooh, say more about that. Why is the field stuck on this point? And they start describing it, and I'm like, oh, so that feels like
if there was a canonical data set with this much dimensionality, would that solve it? And they're like, yes. And I'm like, well, why doesn't that exist? Well, it's locked up here. And there's a part of me that always is striving to say, if we could figure out what... Because when we recruit somebody new onto the team, they'll say, could I just sit on your calls and watch you sort of work through this with somebody?
And there's a part of me that thinks, are there ways we could be taking these things that feel like expert behavior, tacit knowledge, and making them more explicit? Because it feels wrong to just say, you just sort of get this feeling that that's the opportunity, let's pull on this thread. So the more we can go from tacit to explicit, the better. But right now we have an apprenticeship model: people learn by doing it and by being in these structures. But I don't know if that has to be the end point.
And a lot of what you do is this human matching, right? Putting people in touch with each other. Some of that an AI could pick up on if you fed it every call you've ever done. But I think there's an emotional and personality-matching piece of this, which you're doing as well, and that is still very much a human processing thing that the models aren't quite there on yet.
And it changes over time. But I think a lot of what people get when I connect them is that I took time out of my day to decide the two of them should know each other. Sure. Right? That's the actual signaling value: my time is precious. A slight non sequitur, but if people want to establish trust and rapport, the first thing you should do is spend $150 and buy a microphone for your Zoom calls. That's my recommendation for everyone out there. I do my calls and I sound the same as I do on my podcast, and people are like, oh, it's so nice, you feel like an embodied person, not this compressed-down AirPod sound. So, recommendation to everyone out there who wants to
make friends and raise money from billionaires on Zoom. Well, I'll echo this, which I don't think I've practiced, but there's an old observation, for someone who comes from politics, about microphone technology. If you go back and look at politicians, there was a time when microphones didn't pick up intonations really well. They were just projecting
really loud sound. Once microphones could pick up really subtle intonations, politicians who were good at that way of speaking started to take off. People point that out with President Clinton, who was really good at very subtle use of the microphone.
And then I remember a paper that said, well, that's because the technology had gotten better such that someone like him could do that. Politicians are not a bad way to pick up on this, because ultimately communication is a big part of trust-building with the electorate. Totally. I mean, if you listen to old clips of Warren Harding or Teddy Roosevelt speaking, they're basically screaming into a microphone, and that's fine. Teddy Roosevelt was really good at screaming, and you needed to be very loud to stand on a soapbox so enough people could hear you 20 rows back or whatever. But now it's the dulcet tones of microphones. Another fun fact, Kumar: the microphone I'm using has been made for like 60 years. So it's kind of remarkable that the technology of how well mics pick up your voice is basically maxed out. But I will try to go find this paper. I wonder if it's about mobile setups, like you're in some random union hall and you need to mic up and put a stand mic in front of a politician. Is that why microphone technology got better? That's interesting. I'm so curious about this. Okay, that's great.
I've been on the hunt for years for the perfect reader app that puts AI audio at the center of its design. Over the past few months, the ElevenReader app has made it onto my iPhone front page and is easily getting three minutes of use a day. I plow through articles using ElevenReader's beautiful voices and love having Richard Feynman read me AI news stories, as well as, you know, Matilda every once in a while too. I'm also a power user of its bookmark feature, which the ElevenReader team added after I asked for it on Twitter. ChinaTalk's newsletter content also comes preloaded into the feed. Check out the ElevenReader app if you're looking for the best mobile reader on the market. Oh, and by the way, if you ever need to transcribe things, ElevenReader's Scribe model has transformed our workflow for getting transcripts out to you all on the newsletter. It's crossed the threshold from where these models used to be, you know, 95% good, to 99.5%, wow, this is amazing, saving our production team hours every week.
So check it out the next time you need something transcribed. So Yascha Mounk recently said on Substack: over the last few months, I've attended a number of gatherings and conferences and dinners at which leaders of some of America's biggest foundations have sought to figure out a strategy for how to defend democracy. Few of them were as openly devoted to the most extreme forms of identitarian ideology as they might have been a few years ago. But the reigning worldview at the top of the philanthropic world is that little has changed since the summer of 2020. The general consensus holds that voters turned to Trump because American democracy did not deliver for the quote historically marginalized. And the solution supposedly revolves around quote mobilizing underrepresented communities. The most urgent imperative of the moment is to quote fight for equity and quote listen to the global majority.
I find this kind of wild. Kumar, can you interpret, as someone who's a new entrant to this world? I think in general a couple of different things are happening at the same time. There are some responses happening at philanthropies that are the equivalent of dinner-table conversation: people engaging in their hot takes about why the election happened the way it did, and here's my view on where America is or where the American people are. A lot of that sounds as random as it would feel if you were hosting a dinner party and people gave you their hot takes about American politics. Some of what is happening is a state of, what is happening? The first hundred days of the Trump administration have been very active, on a range of things that were unexpected. I think a lot of people expected it to feel similar to Trump's first term. So they looked at their portfolio of topics and said, oh, here are the sets of things that are going to happen. And that's not what's happened. And then there's this question around how much folks are actually rethinking. That is a good question. I think, you know,
the place where I think the most immediate rethink is happening is just around: what is it that we're missing? I'm seeing this a lot in the science community, which is waking up to these pretty disastrous across-the-board cuts, where researchers are losing funding, funding to various universities is being put on pause, and students in graduate school working on topics that matter to U.S. competitiveness are having visas pulled. And I think the community is asking, what exactly happened? We don't remember this being a big debate topic. What exactly happened that is turning the community into a political football? So I think the place where
there is a lot of open questioning is: what is it that we're missing? Was there a conversation we weren't invited to where we were getting talked about? Now, there are donors who have certain political stripes, and I don't think they're going to change those. But
the place where I've found the confusion is less about the American political scene and more about why certain issues, foreign aid is a good example. I don't know how much the U.S. posture on foreign aid, and how well the foreign aid system was being run, was a live topic during the campaign. And so people have readily asked: did we miss some big debate that said the United States should dismantle its leadership on a range of these topics overnight? What is the policy debate that we missed? So that's, I think, where a lot of the confusion is happening: donors being confused as to where some of these things are coming from.
I mean, there's also an interesting dichotomy between the foundations where the people are alive and the foundations where the money is dead. You had Gates recently say he was going to spend down all his money faster than he may have planned to. Yeah. Presumably in response to what's happened over the past few months. When you have foundations whose leadership is active and reading the news, they can be a little more responsive. I am conjecturing here, but when you have a flagship philanthropist who's been dead for 75 years and an organization that has hired and built programs around a worldview which is no longer relevant, or not necessarily meeting the demands of the times, it's much harder to turn and pivot, because you've got all these institutional blockers and a board, or whatever, as opposed to a person with the checkbook who says, no, we're going this way instead of that way. Yeah, I definitely think that's part of it. The piece I would add, which dovetails with the Renaissance model, is that I think people underestimate how much philanthropic organizations end up being tied to the programs they've created, because,
you know, not in a negative way, but if you spent two and a half years scoping a program, where you did a field search around topics, then a national search for the person who's going to lead the program, then you convinced that person to move across the country to take the job, gave them coaching to do test grants, and now they're six to twelve months into that grant cycle, if you then said, oh, the world is different, let's just cut it, after you've done a big press release announcing a big new strategic direction for your giving, it looks messy. And so people end up feeling this almost portfolio regret: if they could start from today, they would have a different set of programs than the ones they created. And so one of the arguments that we make to donors is,
if you structured yourself more like an LP, where you're deploying money into funds, then at any given moment all your capital is fresh. You don't have the incumbency problem where everyone on your team looks at you like, what are you doing to me, every time you try to pivot, because they all have things you hired them to do. And so flexibility is a bit of a mindset, but it's also a structure. Donors sometimes create a lot of built-in structure and costs to pivoting when, on paper, they could keep themselves quite light and flexible if they chose to.
That's so wild, because you'd think, this is giving money away, of course you can give the money away the way you want to. The emotional sunk cost around philanthropy was not something I had necessarily priced in. And it also just means that people spend a lot of time, it's like a duck swimming on the water, where the feet are moving rapidly underneath. People are trying to keep the strategy looking the same above the water while changing all the content of it below to pivot to the moment. And it leads to a lot of conceptual confusion, because you're like, we have always had this program, but no, actually, under the hood the program is totally different because the situation's changed. And part of the reason I like the philanthropic fund model
is, you know, it's like what's on the cereal box. It's a three-year fund. It does this. It will begin and end. Maybe it's not of the moment. That's fine. But like your new program can be of the moment. But this idea of like constantly kind of having these really broad things that then you're constantly reworking under the hood means that if I said, well, has this program been successful?
People will say, well, the program's kind of been changing. So then it's really hard to evaluate it as this focused thing that ran for this time, that had this goal. Did we achieve the goal? So there's very basic things that people just don't do. So for example, I was talking to one donor and I said, in the investment world,
you have, you know, people who put it in their Twitter bios: I was the first check into this now-major company. Right. People say, I'm really good at betting. So I asked, who are the ten best program officers in America? Who on the philanthropic side has been the best check writer? And they're like, well, how would we even know that? I said, I don't know, but, even if it was qualitative, don't you want to know who the best check writers are? So in my view, a fund model, even if it's a philanthropic goal, allows you to be more honest: that fund paid out, that fund was half good, that one blew up. Fine. But the person who led that fund gets to take that to the next job as an actual career step. And I think we deny that to people when we engage in this, like, we-have-always-had-these-programs thing, where they're just run by different people with slightly different strategies. It obscures rather than clarifies. And I'm like, we could just be honest: we did that fund, now we're doing this new thing. It has a beginning. It has an end. Yeah.
It's such a wild thing. You know, I've spent a fair amount of time on these websites, hunting for whether or not the Ford Foundation will give money to ChinaTalk. And, yeah, the mission is something like, we're democratizing equity. And it's like, okay, that's awesome. I agree, we should further democracy and make the world have more opportunities for people. But the problem is, when you're only a taker of pitches, you are letting the grantees kind of define what success is, right? And the counterfactual is very difficult, because they're probably going to be there whether you give them $100,000 or $500,000 or not. Versus if you start out with, we are trying to achieve X thing by Y timeline, and then go from that goal orientation to finding the people and the organizations who can take your money and give you the highest percentage chance of achieving it. I don't know, it makes me very frustrated. And it's not even from a politics lens of how they're setting their goals. It's just, you guys have got to get in the game a little bit more. Why don't you and I play this out? Let's say you and I were thinking about
a fund model versus a program model for increasing our collective intelligence on what's happening in U.S.-China, right? Sure. There's a way we could write it that is vague: this is the U.S.-China program.
It is going to have three tracks. It's going to fund scholars who are studying China. It's going to talk to policymakers about those insights. And it's going to warehouse data and research publications on these topics, right? Those are the three tracks. And that's how a lot of programs look: they have basically a frame, they've got a couple of tracks, and people apply under those tracks, right?
But then if I said, like, what is winning? How do I know this program is successful? They're like, well, like people applied. We gave out grants. Right. We're like, I think you could do that same thing, but just be a lot sharper. You could say in three years, what would success look like?
Yeah, it's like: I want ten books written that are so thoughtful and essential to the future of U.S.-China relations and American policy that Ezra Klein would be crazy not to book their authors and feature the thinking these grants created. Right. So then we back out the money: the dollar amount, where I estimate, okay, ten books, let's say it's a one-in-five chance that someone with a good proposal is good enough to execute on it, so it's that many people, and here's the pipeline. I can come to a number that will actually give me 75 percent confidence of having those ten books written by, you know, 2029. Right. Exactly. So in my mind, that is a good way of thinking about an attack strategy.
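To make that back-of-the-envelope concrete, here's a minimal sketch of the sizing math being gestured at, assuming each funded proposal independently has the one-in-five success odds and the 75 percent confidence target named in the conversation. All numbers are illustrative, not real program parameters:

```python
# Back-of-the-envelope portfolio sizing: how many book projects to fund so that
# P(at least TARGET succeed) >= CONFIDENCE, if each funded proposal
# independently has a one-in-five chance of becoming a finished book.
from scipy.stats import binom

P_SUCCESS = 0.2    # hypothetical per-grant success rate from the conversation
TARGET = 10        # books we want written by the deadline
CONFIDENCE = 0.75  # desired confidence of hitting the target

n = TARGET
# binom.sf(k - 1, n, p) gives P(X >= k) for X ~ Binomial(n, p)
while binom.sf(TARGET - 1, n, P_SUCCESS) < CONFIDENCE:
    n += 1

print(f"fund {n} projects")  # roughly 58 with these illustrative numbers
print(f"P(>= {TARGET} books) = {binom.sf(TARGET - 1, n, P_SUCCESS):.2f}")
# Multiply n by the expected cost per grant to back out the total budget.
```

The point of the exercise is that the grant count, and therefore the budget, falls out of the goal, rather than the goal being retrofitted to whatever budget exists.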
It's built on what feels like a very tight OKR, which you may totally fail at. You might be wrong. Right. But it's tight. And then you say, okay, let's stitch our strategy back to it. Then you find someone to run it, and it's the Jordan Fund. If you pull it off, people are like, Jordan, how did you pull that off? You wrote down this goal, you built a strategy on it, and you pulled it off. Clearly you're good at doing this. When you're doing your next thing, you can say, well, I ran this side fund called the Jordan Fund, we set this crazy goal to have ten bestsellers on U.S.-China, and we pulled it off. And people are like, oh, how did you? It just feels a lot more real to me as an actionable strategy, a thing the field leader can walk around with, that actually does stuff in the world even if it fails. Let's say you only get partway there; you still have lessons. Versus: the U.S.-China program makes some awards, does some things, but then how do I know whether it's working? And it's so funny, because
the market is so powerful, and you can't get away with this bullshit if you're trying to run and scale a business, taking other people's capital and trying to turn it into a positive return for them. Right. But I'm curious, Kumar, because there are these very capitalist people who get very touchy-feely when it comes to giving, where there's an emotional layer of, oh, this is philanthropy, we shouldn't be using these mindsets, it feels dirty. And I'm like, give me a break. But here's the part that I think is just important, which is we have to distinguish between donors today and potential donors. People spend a lot of time on the people who are active today. But if you actually look at the stats on potential giving,
today's active donors might represent one to two percent of the actual possible addressable universe. So one question is: would we actually attract a new class of donors? Totally. If we actually brought this level of rigor and this sense of betting, precision, and targeting, because that feels much more like their day job.
Right. But then there's this question: why don't they demand it? I actually think there is a lot of pent-up interest in this sort of thing. But people go back and forth: well, since this isn't about making money, we're going to substitute a really complicated theory of action. That's where we're going to use our brain power. And I often say, just because it has a bunch of boxes and a bunch of slides, that is not a substitute for having an attack vector. Rigorous thinking. Yeah. And media nonprofit workers' likelihood of being socialist is like five times higher than your average person's. So maybe the people who are more attracted to the touchy-feely type of logic are just kind of already in the current organizations. Sometimes I think the nonprofit sector or the philanthropy sector spends a lot of time
engaging in this feeling that we're all collectively on the same mission, and that's enough, you know? But actually we have roles to play, and there's a role to be played in making high-quality decisions about where to deploy the money, because the money is finite. You actually have to make decisions. So you need a strategy, you need to make bets. That can feel a little reductive,
but that's actually what the responsibility part of this job is. You have to be a responsible steward, because high-quality decisions actually do more good. I think sometimes people struggle with that. And by the way, if you don't make high-quality decisions, you get USAID canceled. That's the world we're in, right? You did not have an evidence-based organization that could do a really great job of justifying itself. You had a handful of good projects and a handful of bad projects. And you
had a small but loud movement, with organizations like Unlock Aid, whose founder we had on a few years ago and are going to have on again, talking about how you needed to bring more rigor to this stuff because there was a lot of fat and inefficiency. And if you let this stuff fester for too long, I don't want to say the universities, the NIH, or the NSF had it coming, but one of the best antibodies you can have is a tight operation that can really stand up for itself. I don't want to engage in victim blaming, and I don't want to let the conversation off the hook for what I think is sometimes bad-faith behavior. But I do think your point stands: there are all these systemic advantages to caring a lot about the systemic impact you're having and bringing that rigor constantly. It's useful for the work, but it's also useful for when those fights come, so you can say, look, we are building out this thing. In some of these cases, who knows what impact it could have had; we're living through odd times. But part of the reason I think we're getting traction is that there is a lot of pent-up demand. Good.
All right. That makes me feel a little better, I guess. I don't know. What should donors know about China? That's my question for you.
If the impetus of ChinaTalk is thinking about long-term national strategic competition and competitiveness from an industrial, systems, and technology perspective, what are things people could do to nudge that in liberal democracies' favor? In the Biden era, the things I saw were errors that legislation and executive action could fix, 5% here, 10% there, and a sophisticated understanding of what is happening in China could meaningfully help you squeeze that extra 10% out of this or that decision. But the policy changes we've seen over the past few months, on the things that matter for long-term strategic competition, how the U.S. is going to relate to its allies, how we think about global nuclearization, how we think about science and technology funding, immigration, are much bigger. And getting to a better place now does not really require you to understand what made BYD successful, or how Huawei is thinking about developing its chips, or even what China's new AI policy is going to be. It's much more fundamental stuff. The thesis I was operating under in the Biden era, that a deeper, considered understanding of China leads you to pursue smarter policies, now feels
like a sideshow relative to: okay, if we take the base case that science is important and immigrants are important to making better science, then let's just do that thing. I would take that ten out of ten times. Coming back to the order-of-magnitude question I asked you at the very beginning, I would take a NATO that is a real thing ten times out of ten over the right tariff level to set on Chinese electric vehicles or batteries or whatever. So that's why I'd rather hand out "be nice to allies" bumper stickers, or, for you, the NSF funding thing, rather than any tightly nuanced we-need-to-do-a-better-job-of-understanding-China-type questions, if we're coming back to ChinaTalk's decade-old competition mission set. Well, one thing I have been thinking about, which I don't know the answer to yet, is what new institutions we need. Sure. And part of that is just that so much of what I care about has been turned over, when it comes to how science operates in this country, that
the idea that we're going to get through this period with the same exact institutions seems unlikely, whether it's who's making the case for science, who the messengers for science are, how we do science, all of it. So there's this question around not just what the policies are, but what the institutions are. Obviously Renaissance is part of that, but I have this bigger question: we're probably going to need new institutions. Who our players on the field are is going to have to change, because a systemic change this big requires everything else to engage in quite a bit of adaptive change for us to get through it. So that's a big thing I've been asking the team: we're not going to be able to do it all directly, but what are the institutions that would put us back on a better footing? Do we have them? Do we need to create them? Yeah. I mean, I think,
Yeah. Where I'm starting to spend more of my energy: I don't know if the extra marginal podcast ChinaTalk records about how allies are important is going to do much. But one of the constants you can bet on over the next four years is the AI stuff and rapid technological change. Regardless of the crazy things Trump does, the Defense Department is still going to be there, and America is still going to need to protect itself. America fights wars every three years or so; we're going to do that again at some point.
I don't know if this is just Jordan going into Chinese monk mode after the Ming dynasty fell, but I've been reading a lot of military history and thinking about times of rapid technological change and what it means to actually use these tools better than your adversaries do. So I'm not really answering your question; I'm going on intellectual journeys as opposed to policy ones.
Here's what I would say, and I think this dovetails with the role you're playing, the role ChinaTalk is playing. One of the things I was telling Jordan before we started is this:
Often, a decent share of the time when people reach out to me, they'll say, oh, I heard your great ChinaTalk episode. So it may be, not that you set out to do this, but that you're playing a useful role in shaping how other people, especially technical folks, find their way onto problems worth solving, and what their mental frameworks are for the age we're living in. People are looking for understanding, for meaning, in all this. So the question is not what the marginal extra podcast does, but whether you're giving people new vectors for what their lives can be and what careers they can have. Because you yourself, I remember when you and I first met, you said, well, I'm a nerd on all these topics, I don't know exactly where I'm going to channel all that energy. And you've done something quite distinct. I think we might be living in an age of oddly shaped careers, and we need to give people more room for that.
I think that's fair. And I kind of forget that some of this stuff feels obvious to me but may not be obvious to people who don't live and breathe it, right? But it's weird, because I don't feel like I'm part of the resistance. I just feel like a guy who has some takes on things. Some days I wake up and feel helpless, and other days I wake up and feel really empowered.
This is not a direct response to your question, but I think you're reasoning in public. You're widening people's sense of how to think through this stuff, and widening their sense of who does this work. What I keep finding is that people are highly siloed. They'll say, well, I know this, and they'll name-check someone, and I'm like, oh, I know exactly who you're reading or listening to. So if you can widen their sense, and then give them a next step, what are things to do, that matters. That's part of my goal, always: there are lots of hard, interesting problems to solve. There is no arena that is the wrong arena. People who are dismissive of politics are missing it; we're all living in it, whether you're dismissive of it or not. So don't be dismissive of the arena. Understand that there are many different dimensions to it, that there are hard problems to solve, and that nobody benefits if you just live in the cheap seats. Yeah.
All right. Maybe the way I justify all my World War One reading is that no one else is doing it, and I'm bringing it to you as someone who also reads the news and has the freedom to spend ten hours a week going on weird journeys through things I think are relevant today.
Well, we'll do an update in a year. Hopefully the Republic stands. Kumar, you've got to set a goal for yourself. You've got to do so much stuff that I have to have you back on in six months.
Yes.
One of our goals is to become even more international. We have a partnership with the British government to build out their R&D ecosystem, and we want to do that in more places. Science and tech is international, so the organization should keep becoming more international too. I'm also hoping that with this fund model, we're able to bring on new donors who have never done this before. And I'm hoping we're not just talking about the doing, but that the work actually starts to play out in the world and we start to see results. Yeah.
Reach out to info@renphil.org if you're rich, if you've got a good idea, or if there's some science and technology itch you need to scratch and could use a little help getting there.
Yeah, and I'm on LinkedIn. Reach out. We consider ourselves, at core, a talent network, so I'm always eager to chat with people with ideas. Awesome.
All right. We should do a little parent corner; we'll keep this part of the annual check-in. We talked about slime last year, I think.
We did. Well, here's a question we started talking about before we hit record: I asked you about sleep training, and you told me that you were shy about pushing sleep training on others.
And I will just say that I'm strongly of the view that sleep training is a gift you give your children. We had twins, we sleep trained them, and they're 11 years old and great sleepers today, which we attribute to the sleep training back then. So for any parent who's on the edge and just wants random advice from somebody they're listening to: I can't offer this to everyone listening, because I usually offer it to people I know, but I'm always happy to be your texting buddy, giving you the extra mettle to last through those terrible first few days when it feels like you've made a horrible mistake. Because on the other end, you have kids who can sleep, and that's good for everybody. Yeah.
Sure, I'm with you. I outsourced this to my mother, and it was maybe one of the best decisions of my life.
What's a cute thing we've learned today?
Oh, I bought a ukulele two years ago thinking it would be nice to play with my kid. What has been very cute, now that my daughter is nine months old, is that there was a point where her manual dexterity was such that all she could do was grab a string and pull it. But one day she figured out plucking, and now she's actually plucking the strings. It's such a cool activation for her: oh, I make the sound now, instead of just dragging this thing around the room.
This period from nine months to 18 months is a wonder, right? You go to walking, then words, and then the words take off. It's just crazy. So I'm very excited for you.
Okay, good. All right, let's call it there. Kumar, thank you so much for being a part of ChinaTalk.
Thank you for everything you're doing. I'm excited to be on, and I look forward to chatting in the future.
You know, sometimes we're not prepared for adversity. When it happens, sometimes we're caught short. We don't know exactly how to handle it when it comes up. Sometimes we don't know just what to do when adversity takes over. And I have advice for all of us. I got it from our pianist, Joe Zawinul, who wrote this tune, and it sounds like what you're supposed to say when you have that kind of problem. It's called "Mercy, Mercy, Mercy."
Thank you.