Since 2012, there's been no increase in American life expectancy.
From 2012 to 2019, literally, it was almost entirely flat life expectancy, whereas the European countries had advances in life expectancy during that period. During the pandemic, life expectancy dropped very sharply in the United States, and only just last year did it come back up to 2019 levels. In Sweden, the life expectancy dropped in 2020 and then came right back up by 2021, 2022 to the previous trend of increasing life expectancy.
Whatever investments we're making as a nation in research are not actually translating into meeting the mission of the NIH, which is to advance the health and longevity of the American people. Because they kept saying, we don't care. And so it's almost like big segments of the public feel like they caught us in something as scientists and we won't admit it. And they're not just pissed off. They're kind of like, darn.
I hear it all the time. And again, this isn't the health and wellness supplement taking, you know, anti-woke crowd. This is a big segment of the population that is like, I don't want to hear about it. I don't care if labs get funded. I want to know why we were lied to or the scientific community can't admit fault. I just want to land that message for them because in part I'm here for them and get your thoughts on what you think about
Let's start with lockdowns, masks, and vaccines, just to keep it easy. And what do you think the scientific community needs to say in light of those to restore trust? So first, let me just say, I don't think I'm the NIH director unless what you said is true. Otherwise, I'm not the NIH director. So I was a very vocal advocate
against the lockdowns, against the mask mandates, against the vaccine mandates, and against the sort of anti-scientific bent of public health throughout the pandemic. I've also argued that the scientific institutions of this country should come clean about our involvement in very dangerous research that potentially caused the pandemic. The so-called lab leak hypothesis. Welcome to the Huberman Lab Podcast, where we discuss science and science-based tools for everyday life.
I'm Andrew Huberman, and I'm a professor of neurobiology and ophthalmology at Stanford School of Medicine. My guest today is Dr. Jay Bhattacharya. Dr. Jay Bhattacharya is a medical doctor and a PhD and the director of the National Institutes of Health. Prior to that, he was a professor of medicine at Stanford University. And I should mention that he did all of his formal academic training at Stanford, his undergraduate, master's, PhD, and medical school training.
Today, we discuss the past, the present and the future of publicly funded research in the United States. The National Institutes of Health is considered throughout the world the crown jewel of basic and medical research, explicitly because the basic and clinical research that it has funded has led to more treatments and cures for disease than any other scientific enterprise. Basic research is focused on making discoveries without any particular treatment or disease in mind when that work is done.
It is absolutely clear, however, that basic research provides the knowledge base from which all treatments and cures for diseases are eventually made. Today, Dr. Bhattacharya shares his vision of which aspects of NIH are especially effective and which need revising and improvement.
We discuss how scientific ideas are evaluated for funding and what can be done to create more funding for more ambitious projects leading to treatments and cures. This is a very timely issue because despite its strengths, the NIH has gained a reputation over the last two decades for favoring safer and less bold work and therefore leading to fewer discoveries. We also discuss what will be done about the so-called replication crisis.
The replication crisis is, as the name suggests, the inability for certain findings to be replicated. Dr. Bhattacharya shares with us new initiatives soon to take place that are designed to verify findings early and to incentivize replication so the knowledge base built by NIH science is accurate. As some of you may know, Dr. Bhattacharya stepped into a very public role during the COVID-19 pandemic when he co-authored the so-called Great Barrington Declaration, which argued against lockdowns.
He was also quite vocal against mask mandates, and he addressed vaccine efficacy versus safety, especially for young people.
Those stances, of course, were very controversial, and he explains the logic for his stance on those topics. That discussion leads into a very direct conversation about vaccines more generally, not just COVID-19 vaccines, but also measles, mumps, rubella vaccines and the very public and controversial issue taking place right now about vaccines and autism. We also discuss drug prices and why Americans pay 10 times or more for the same prescription drugs sold in other countries.
and the relationship of that to public health. I want to emphasize that the issues we discussed today will impact everybody. If you're a scientist, they certainly impact you. If you're a physician, they impact you. And if you're young, if you're old, if you're a patient, if you're healthy, if you're American or if you're outside the United States, they will impact you. Dr. Bhattacharya was incredibly generous with his time and his answers, directly answering every single question I asked, nothing was cut. As a consequence, it's a lengthy podcast,
But I felt it was very important to get into the nuance of these issues so that you, the listener, can get real clarity on where things stand and where they are headed. As a final point, my graduate student training, my postdoctoral training, and my laboratory, first at the University of California, San Diego, and then at Stanford, where it is now, were funded by the NIH. So you'll notice throughout today's episode that I'm very impassioned by the issues at hand.
At the same time, I strive to include questions that I keep hearing from my followers on social media and from listeners of the Huberman Lab podcast. Some of those come from ardent supporters of the NIH and others, as you'll see, are more skeptical or even critical of the NIH. I strive to represent all those voices during today's conversation. I certainly have my own opinions and stance on many of those issues, and I do voice some of those throughout today's episode. But again, I try to be thorough and broad encompassing.
As you'll see, Dr. Bhattacharya cares deeply about basic science and the future of medicine and health in this country and throughout the world. He is our appointed leader in the science, discovery, and public health enterprise, and I'm grateful to him for taking the time to share his vision and for his willingness to listen to the many and wide-ranging voices, including critical ones, on these literally life-sustaining topics. Before we begin, I'd like to emphasize that this podcast is separate from my teaching and research roles at Stanford.
It is, however, part of my desire and effort to bring zero cost to consumer information about science and science related tools to the general public. In keeping with that theme, this episode does include sponsors. And now for my discussion with Dr. Jay Bhattacharya. Dr. Jay Bhattacharya, welcome. Thank you for having me, Andrew. I've been wanting to do this for a very long time. We are colleagues at Stanford, although now you've formally moved to Washington to be the director of the National Institutes of Health.
But you've played such an essential role in shining a light on certain aspects of public health, mostly that happened during the time of the pandemic, related to lockdowns, vaccines, et cetera. We'll talk about that. But now you are in the chief position of directing research dollars and the initiatives of what is arguably the most important health organization in the entire world, not just in the United States.
So thank you for taking the position. Thank you for being here. And the first question I have is, for those that are not familiar, what is the, not just stated mission of the NIH, but what is the really essential mission of the National Institutes of Health? So let me start with the stated mission, because the stated mission is something entirely worthwhile. Anyone who listens to it should say, yeah, we should do this. It is that to...
support research that advances the health and longevity of the American people. And of course, the research that we do doesn't just advance American health, it advances the health of the entire world. For a very long time, the NIH, the National Institutes of Health, has been the premier biomedical organization supporting research that translates into
Almost every drug that you take, the NIH has had some role in developing. Almost every, you know, all the fights over what's the right thing to do to get good sleep, what's the right thing to do for your diet, the NIH has played some role. And for American biomedicine, it's the essential
institution. It supports the careers of a very large number of biomedical scientists around the world, and specifically me. I mean, I got NIH funding for most of my career. I was a reviewer for the NIH, a scientific reviewer for grants. It's an absolutely essential organization. Yeah, I agree. My lab...
ran on NIH money primarily. So thank you, taxpayers, American taxpayers. And I think for most people, when they hear that word health and what you just said about the mission statement for NIH, there is this assumption that most of the work being done at or funded by NIH is human clinical studies or even mouse studies that are testing a particular drug, a dose response curve. What's the lethal dose of this? What's the half-life of that?
But as you and I both know, much of what NIH does is fund basic research, research for which we don't have any clear idea, maybe even the foggiest of ideas, that there could be a potential upside for human health. Things like what controls the pigmentation patterns of the noses of
Doberman Pinscher dogs. I bet you we could find that grant. So when we, maybe not anymore, but when we step back and we look at basic versus applied, aka clinical research, what percentage of the NIH budget, which we'll talk about in a moment, is directed toward basic research and what percentage is directed toward clinical research?
clinical studies or the testing of some drug, what we call preclinical trials, testing in mice or non-human primates, et cetera? So there are big fights over exactly what that demarcation line is. So I'm not going to commit to a single number. What I will say is that a substantial part of the NIH portfolio appropriately focuses on basic science. Basic science meaning fundamental biological
facts that can be used in many, many, many drug studies, other research where you don't necessarily know specifically in advance when you're doing it what the applications are going to be.
The NIH very appropriately funds that work, especially work that's not patentable, right? Because no drug company has an incentive to do that work, and yet it's vital. Let me give an example just to—
put some meat on the bone of it, of something that the NIH didn't fund, but actually is within the mission of the NIH to have funded if it had. Let's just take the research that led to the understanding of the structure of DNA as a double helix.
right, Watson, Crick, Rosalind Franklin, all those folks in England in like 1950s. Well, that work is not patentable. It's hard to imagine like someone like trying to patent the double helix structure of DNA, right? So that means that it's not going to be in the interest of any specific company to support those scientists that discovered that. And yet it's vital to almost everything we do in biology.
The NIH very appropriately funds that kind of work, the work that is not in the interest of any particular company to do. It solves a market failure if you think like an economist. The market failure is there's no incentive of the private sector to do that kind of basic work. And yet that basic work really advances human health in ways that are sometimes unpredictable.
And so it's correct and right that the NIH continues to fund that kind of basic science work, as well as the applied work where you take the advances and say, okay, well, here's a drug that might work to treat this disease, right? That kind of work also is appropriate for the NIH to fund. There's an interesting...
dividing line where the question is like, what should be left to the private sector to do? So the private sector tends to fund large-scale clinical trials at sort of the tail end of the development process. Sometimes they'll fund earlier clinical trials. But the private sector has an incentive to
to fund those kinds of studies because that gives them exclusivity, patents, things. So why should the taxpayer pay for that when there's already private actors that are willing to pay for that? So there's this interesting dividing line. You want the NIH work to be translated so that patients can have it. That means the private sector has to be involved to some degree, certainly has to be using the products of the NIH research.
but that dividing line is fuzzy and controversial. Same thing between basic and applied. As I said earlier, there are huge, almost religious wars over where that dividing line is. Are you a basic scientist or are you an applied scientist? So all the numbers don't make sense to me exactly, given that religious war, but the fundamental thing, which is that we have to fund basic work, I believe in pretty strongly. Well, as a
basic scientist, I'm not a clinician, but I worked on clinically relevant issues in my lab related to restoration of vision in blinding diseases like glaucoma, things like related to anxiety, et cetera. I also know that we have some beautiful cases, as you pointed out, of basic research leading to important, I will say cures to serious diseases. And there was no thought at the beginning of that basic research
that the outcome would be related to human health. I'll just briefly mention a couple. I wanna ask more questions than I wanna speak, but my scientific great-grandparents, David Hubel and Torsten Wiesel,
did the early work defining the structure and function of the visual system, first in cats, then in monkeys, and eventually it was clear the same was true of their findings in humans, and early plasticity, changes in the visual system if, say, there was a cataract or a droopy eyelid or divergent or convergent eyes, strabismus, you know, what we call cross-eyedness and things of that sort. And we know on the basis of that work that
Children need corrective surgeries early or else the brain is forever blind to the perfectly fine eyeball if the eyes aren't correctly aligned. Okay. In other words, the old practice of, oh, you don't want to put kids under anesthesia, it's too risky, etc.,
The work of Hubel and Wiesel saved the vision of millions and millions of children in the U.S. and abroad. People with cataracts have those cataracts removed early, and on and on. And I would also say, as a second example, that much of the basic work on cell biology that took place in the second half of the last century, you know, where are the mitochondria? What's in the mitochondria? Electron microscopy lights. Let's talk about all the folds in the mitochondria. Let's talk about the Golgi. All that basic cellular biology that is the stuff of textbooks was,
as we say, necessary, perhaps not sufficient, but necessary for the development of essentially every existing cancer treatment.
But the cell biologists that did that work weren't thinking about cancer until much later in that work. So those are just two examples that I would argue NIH had funded a tremendous amount of. And the reason I'm setting it up this way is because I think nowadays, part of the reason you're here, is that we are potentially looking at a redirecting of a significant amount of the research dollars that taxpayers provide to the NIH and the NIH to labs away from basic research, which
understandably has some people concerned. That said, in order to translate things from the lab to the clinic, we also need to think about translational work. So I just put that out as kind of an offering to elaborate on. Andrew, I have no intention of implementing that, of shifting the balance. I think, as I said, basic science work and applied work
are both tremendously important parts of the NIH portfolio. And the question to me is, what's scientifically important and interesting in terms of accomplishing the NIH mission, which is, again, advancing the health and longevity of the American people? Both basic work and applied work
can contribute to that mission. And in fact, I think any large-scale scientific institution that seeks to support the mission that the NIH has, has to have both in it. So I don't have any intention of gutting basic science. I mean, I personally, I do epidemiology, health policies, health economics, statistics.
That's very, very applied. But I have great admiration for my colleagues like you who do basic science work. I think it's what advances and fuels the next generation of advances.
So it's going to stay part of the NIH mission as long as I'm the director. Thank you. I and many others will be very relieved to hear that answer. I think there is this fear that the new administration is going to eliminate basic research somehow and replace it with only applied research and clinical studies, and that somehow this
(and this is not my belief) is going to serve some private interest and get co-opted in some kind of cloudy way. What I'm hearing from you is that that is not the direction the NIH is going to take. In fact, I've not heard anyone inside the administration tell me to do that or suggest that as the appropriate path. I just, I mean, everyone I've spoken to about my vision has said, yes, that makes sense. Great.
I'd like to take a quick break and acknowledge one of our sponsors, David. David makes a protein bar unlike any other. It has 28 grams of protein, only 150 calories and zero grams of sugar. That's right, 28 grams of protein and 75% of its calories come from protein. This is 50% higher than the next closest protein bar. David protein bars also taste amazing. Even the texture is amazing.
My favorite bar is the chocolate chip cookie dough. But then again, I also like the new chocolate peanut butter flavor and the chocolate brownie flavored. Basically, I like all the flavors a lot. They're all incredibly delicious. In fact, the toughest challenge is knowing which ones to eat on which days and how many times per day.
I limit myself to two per day, but I absolutely love them. With David, I'm able to get 28 grams of protein in the calories of a snack, which makes it easy to hit my protein goals of one gram of protein per pound of body weight per day, and it allows me to do so without ingesting too many calories. I'll eat a David protein bar most afternoons as a snack, and I always keep one with me when I'm out of the house or traveling.
They're incredibly delicious and given that they have 28 grams of protein, they're really satisfying for having just 150 calories. If you'd like to try David, you can go to davidprotein.com/huberman. Again, that's davidprotein.com/huberman. Today's episode is also brought to us by Eight Sleep. Eight Sleep makes smart mattress covers with cooling, heating, and sleep tracking capacity. One of the best ways to ensure a great night's sleep is to make sure that the temperature of your sleeping environment is correct.
And that's because in order to fall and stay deeply asleep, your body temperature actually has to drop by about 1 to 3 degrees. And in order to wake up feeling refreshed and energized, your body temperature actually has to increase by about 1 to 3 degrees. Eight Sleep automatically regulates the temperature of your bed throughout the night according to your unique needs. Eight Sleep has just launched their latest model, the Pod 5, and the Pod 5 has several new important features. One of these new features is called Autopilot.
Autopilot is an AI engine that learns your sleep patterns to adjust the temperature of your sleeping environment across different sleep stages. It also elevates your head if you're snoring, and it makes other shifts to optimize your sleep. The base on the Pod 5 also has an integrated speaker that syncs to the 8Sleep app and can play audio to support relaxation and recovery. The audio catalog includes several NSDR, non-sleep deep rest scripts,
that I worked on with Eight Sleep to record. If you're not familiar, NSDR involves listening to an audio script that walks you through a deep body relaxation combined with some very simple breathing exercises. NSDR can help offset some of the negative effects of slight sleep deprivation.
And NSDR gets you better at falling back asleep should you wake up in the middle of the night. It's an extremely powerful tool that anyone can benefit from the first time and every time. If you'd like to try 8sleep, go to 8sleep.com slash Huberman to get up to $350 off the new Pod 5. 8sleep ships to many countries worldwide, including Mexico and the UAE. Again, that's 8sleep.com slash Huberman to save up to $350.
I'd like to talk a little bit about something that
most people perhaps are not familiar with in terms of its acronym, but it's a very important issue, which is this notion of IDC, indirect costs. So my lab ran on NIH grants for many years, and my lab and other labs would apply for grants. If we were fortunate enough to get one of those grants funded, we might receive, let's say, a typical grant would be a million dollars over the course of four years, so $250,000 a year for four years. But then in addition to that,
My home university, Stanford, would get some percentage above that, not a percentage of that million. I would still get the million to spend on mice, antibodies, graduate student salaries, et cetera. But some percentage of...
That $1 million, and I think at Stanford it's roughly 55 percent, so let's say another $500,000 or so would be given to the university for so-called indirect costs. This is not something that just happens at Stanford. This is typical of every single NIH grant that I'm aware of.
And the indirect costs pay in principle for administrative handling of the grant and the, you know, the various infrastructure things related to the mouse care, keeping the lights on, having a janitor empty the trash at night, these sorts of things. IDC, as it's called, has become a hot button issue for two reasons. One, as soon as the new administration came in, the Trump administration came in just this last year.
They cut the IDC rate across the board, whether it was, say, 55% at Stanford, 75% at other places, or as low as 30% at some. They said, nope, we're not paying this stuff anymore. The National Institutes of Health, in other words the taxpayers, will pay up to but no more than 15 percent above any given grant.
I'd like your thoughts on that because this weaves into some bigger issues that relate to a lot of the sentiment of, you know, why should taxpayers be paying for these universities to run, especially when universities, some, not all, have large endowments? Right. So let me just preface my remarks by saying that there was litigation against that 15 percent, which essentially said the government couldn't
impose that 15%. So it's been blocked? Yes. So right now, the rates are whatever they were. They're not the 15%, based on that court order. I can't comment on the litigation, as a result of now being a member of the government; I'm not allowed to do that. But I do want to talk about the broader issues related to indirect costs. And I want to put it in a broader context, right? So the context is this, right? So in the
mid-40s, Vannevar Bush, who was one of the main science administrators in the United States, made an argument that the federal government should partner with universities in organizing the scientific infrastructure of the United States.
The universities were tremendously important parts of the scientific infrastructure, and the federal government had an appropriate role in supporting the universities of the country to do scientific research of interest to the American people.
So the indirect costs kind of structure came out of that commitment. And frankly, it makes sense to me. It's appropriate that the federal government have some role in deciding how to support the universities of the country to be organized around research that is in the American interest.
The question is how much should it be? How should it be structured? You know, in what way? Those are the key policy issues that we're really talking about. We're not talking about whether there should be some federal support for the universities. Let me just step back and talk about the current structure, the way it works, because it's really non-intuitive, right? So first –
You're a brilliant scientist. You apply to the NIH. You get a grant that gives you a million dollars a year. I'll just make a clean number, right? So a million dollars for the next five years. The federal government is going to give you money to run your lab and do all this kind of stuff. You work at Stanford. Stanford has a 55% indirect rate. So that's on top of the million dollars a year. The administrators at Stanford then will get $550,000. Right.
So for your million dollars of work, the taxpayers will pay one and a half million dollars roughly to Stanford a year. Now, as you said correctly, that half a million dollars will go to the fixed cost of doing research, the stuff that's not specific to the lab you're running, the people you have to hire to do the work that you propose, but the fixed cost, the building, the
the maintenance, you know, all the stuff. You've got to take the biohazard stuff away, all that stuff. And it's not just you; there are other folks who are using the same materials, like radioactive materials, and so it can support many, many research projects, not just one, right? So it's funding that kind of work. And again, that's a legitimate use of that money.
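To make the arithmetic just described concrete, here is a minimal sketch in Python. The figures, a hypothetical $1M-per-year direct award, a 55% negotiated indirect rate, and the proposed 15% cap, are just the illustrative numbers used in this conversation, not official NIH accounting, and the function name is invented for illustration.

```python
# Minimal sketch (not official NIH accounting) of the indirect-cost arithmetic
# discussed above: a hypothetical $1M/year direct award, a 55% negotiated
# indirect (IDC) rate, and the proposed 15% cap for comparison.

def grant_cost(direct_per_year: float, idc_rate: float, years: int) -> dict:
    """Return what the lab spends, what the university receives, and the total."""
    indirect_per_year = direct_per_year * idc_rate
    return {
        "direct_total": direct_per_year * years,          # spent by the lab itself
        "indirect_total": indirect_per_year * years,      # goes to the university
        "taxpayer_total": (direct_per_year + indirect_per_year) * years,
    }

if __name__ == "__main__":
    print(grant_cost(1_000_000, 0.55, 5))  # ~$550k/year on top of the $1M direct
    print(grant_cost(1_000_000, 0.15, 5))  # the same grant under a 15% cap
```

The point of the sketch is simply that the indirect payment scales with the direct award, which is why the rate itself is what the policy fight is about.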
So, here's the way the economics of this works. In order to get fixed-cost support, you have to have brilliant scientists like you that can win NIH grants. If you don't win NIH grants, Stanford doesn't get the $550,000. But in order to attract brilliant scientists, you have to have the infrastructure where the scientists can do their work.
So it's a ratchet, right? So in order to have the money, the infrastructure support, fixed cost support, you have to have scientists. In order to have the scientists, you have to have the infrastructure. It's a ratchet that essentially makes it so that we concentrate the federal support for the money to a select few universities. They're winners and losers.
And so the scientific infrastructure of the country is concentrated in relatively few universities, mainly on the coasts.
And there are brilliant scientists in other places, not at those select few universities, that have trouble getting NIH grants even though they're brilliant scientists. It draws the federal support away in a structure that essentially says lots and lots of states, lots and lots of institutions are going to have trouble getting the infrastructure support that they need in order to have the scientists come there.
So that's the basic economics of the way indirect costs actually work. And so the question is, is that the right structure? There's also questions about, you know, like, so for instance,
Your science involves – your basic science involves lots and lots of fixed costs, right? Radioactive disposal, all the stuff. The research I did, epidemiology, health policy, statistics, it's basically a computer. Me with a data set and a computer, I can hire some biostatisticians to help me or – We call that a carpet lab. Yeah. And so like –
Does the university need the same indirect cost support to support my fixed costs as it does yours? And the answer is obviously no. And yet that's the structure we currently have. So there are policy questions to be answered about are we –
Have we structured the indirect cost support in the right way? Are we inducing the right incentives? Can the American taxpayer be sure that we're auditing the use of the indirect costs appropriately? Those are the policy questions I think that are at issue in the indirect cost fight. Again, I won't get into the litigation. I'm not allowed to actually comment on that. So I wanted to abstract it to a higher level because I think the policy question is,
not whether the federal government should support universities to do this kind of research, to have sort of the facilities. The question is how it should be distributed across the country. To what extent should the researchers get it versus the administrators? And then on the back of that, there are also other research funders that have very different indirect cost recovery rates for the same university.
So I think Gates Foundation is, I don't know the exact number, like 15%, something on that order, whereas the NIH is 50% to the same university. That looks funny. The question is, I mean, sometimes I've heard, well, the Gates Foundation puts more of the money into the directs, right? So that maybe they'll charge you for the...
the rental cost of the building or something. I don't know exactly, but I'm very familiar with foundation versus NIH money, and it differs by foundation. But typically a university, and I've been at two, I'm tenured at Stanford, but my lab started off at the University of California, San Diego, a public university. Typically when foundation money comes in,
the university imposes a minimum of about 8% administrative costs just for handling, just to do the paperwork, to pay the admins that do the handling. There's something very important in what you're bringing up. There are actually two issues. So I want to backtrack to one issue to make sure that people really understand this, because I realize that some of this might sound a little bit down in the weeds, but it's just so important. The first thing that I really want to
draw out from earlier in our conversation is that you pointed out that the current model of the NIH is that taxpayer dollars pay for the basic research and for the exploration of whether or not the findings from that basic research will benefit disease, and then any technology, device, or drug is brought to the public through the private sector. Put differently, the taxpayers fund the research and development,
but they don't capture any of the upside from the private companies that make money selling you the SSRI, selling you the hopefully-someday novel Alzheimer's treatment. We don't yet have a satisfactory treatment for Alzheimer's, as we'll get into. So the general public who are not basic scientists, in other words, if I take off my hat as a basic scientist and I say, yeah, I'm a taxpayer, I give a significant amount of my income to the state of California and to the federal government, and
I like science. I certainly would like to live a long, healthy life. And I hope some of that science helps me do that. But I'm going to have to buy back the results of what I paid for. That's where I think a lot of the general public sit. And I'm not saying they don't like, appreciate, and respect science and scientists. But to any rational person, you don't need a degree in economics to say that kind of sucks: I'm paying. And it's made worse.
If I want to read a paper that was published with the work that my tax dollars provided for, I have to buy that from the journal. By the way, that changes in July. Okay. Yeah. I mean, this is a huge issue. That's one of the decisions I made. Yeah, it's $34. Not anymore. Listen, I've been grateful to publish in Nature and Science. You know, these are like Super Bowl rings for scientists. I'm sure it's part of the reason I got tenure at Stanford, and I had great fun doing the work and I believe in the work. It stood the test of time. But
were I not an employee of Stanford, which pays for the subscriptions to those journals, I would have to buy the work back using my tax dollars that funded the work. This is crazy. This is like me giving you the money for the supplies to build a home. You get to live in the home. I don't even get to see the home. I have to purchase a ticket to see the home.
That's how irrational it is from the perspective of somebody who's just not understanding the pipeline of basic applied research. So let's just – I want to return to that briefly because this relates, in my opinion, directly to IDC. So that's a crazy picture for anyone that doesn't understand how one piece relates to the next, relates to the next. And now that I'm in public –
I'm in media, I'm public facing. What I've come to learn is that the general public is very smart. Max Delbruck was right. Assume infinite intelligence and zero knowledge, but it's very hard for people to connect more than two or three dots. They're busy. So we could talk all day about how this leads to that, leads to this, the brick on the wall model, and then there's this treatment. And they're like, I'm paying for this stuff and I can't even read the paper about it, let alone glean the positive benefits without paying out the nose.
Yeah. So a couple of things. Let me go backwards because you had two major issues you brought up. So first, the journal thing. My predecessor, Monica Bertagnolli, who was the NIH director, the National Institutes of Health director, before me, made a decision, a really great decision, essentially to say if the NIH supports a scientist's work and then that work leads to a journal publication, that publication ought to be available free to the public
immediately upon publication. You're not allowed as an NIH-funded scientist to publish in a journal that doesn't have that as a policy. That policy was due to go into effect in December of this year. I think it's a great policy because I agree with your analysis entirely. If the American taxpayer pays for the research, why shouldn't the American taxpayer be able to read the research for free?
Because they already paid for it. Why should they pay a second time on the back end after the research is published? And it's not like it's free if you're a university employee. The university has to purchase a very costly subscription to the journal in order for a faculty member to read the papers. Now, I'm lucky enough that I can access pretty much any paper in the world, but that's because Stanford spends millions and millions of dollars. And it's made worse. I forgot the one real stinger in this.
When you publish a paper, you use taxpayer dollars to pay the journal. That's correct. Thousands of dollars to publish it. Then they sell it back to the general public. Nature charges $12,000 for, like, the major pieces. So that's a racket. Right. Sorry. I realize I'm talking more than I'm asking questions. No, no. I mean, I'm agreeing with you. So Monica Bertagnolli, the previous NIH director, in December of this year –
was the deadline. She made a policy that those papers have to be available to the public for free. I made a decision, one of the first things I did: I said, why wait till December? Let's just do it in July. Great, thank you. And so starting in July, what you just said will no longer be the case. Americans, and everybody, will have access to the papers that Americans have already paid for, if they're NIH-funded, for free. Thank you, on behalf of
literally, and this isn't a political statement, myself and every other American citizen. Thank you. We've been paying for this research forever.
And I've had to pay to get it back. I mean, it's not like journal editors make that much money, but the journals make a fortune. Macmillan Press, Elsevier, I've done my homework on this. We're talking billions of dollars in income. And the marginal cost of publishing now is effectively zero. You just put it online, right? And there's, I mean, yeah, there's some cost for maintaining the webpage and all that, and there's some editorial staff. But
the level of investment the public has been making in the NIH, to then be asked to pay $30, $50, $100 for the papers themselves that are published, I mean, it's just insulting. And actually, it impedes the progress of science, because it makes it so that there's this barrier where regular people can't get access to the things that scientists are talking about, right? There's this public transparency aspect of it,
where the scientists ought to be engaging with the public about their ideas, right? The idea that we are just living in this ivory tower and only we get to decide what's true and false and then we impose it on the public: during the pandemic, we saw the folly of that model, right?
So it's, I think, a small step forward, but an important one. - I think you're being humble. And I'd like to point out that I think it's a big step forward because it's not just a token to the public for all their dollars over the last, how old is the NIH? - 100 and some years. - 100 and some years. It's really what should have happened a long time ago. So thank you very much. And I guess thank you to Monica as well for initiating this, but thanks for accelerating that.
I think when people start to understand how the NIH works a bit and they understand this IDC thing, this indirect cost thing, the question comes to mind, you know, how much of the cost of running science at a university, public or private university, should the public be responsible for? I mean, that's a kind of really interesting question. Yeah. I mean, I think –
So let me tie it back. As you said, these are all interlinked topics. Let me tie it back to something else you just said earlier, which is, okay, so the NIH funds your work.
Your work then results in, maybe not necessarily you, but somebody else who uses your work to create a product that they patent and make a lot of money off of. They sell it to the public, at least indirectly or sometimes directly. Those patents are funded by American taxpayers.
Well, the NIH also has a big intramural program; that's scientists who work directly for the NIH. They make some advances, and sometimes those advances result in patents. And those patents then result in products that are sold above marginal cost, paid for, again, by American taxpayers,
because the patent protects entry into those markets. So the question is, how much should the American taxpayer be funding for this kind of work? Should private actors be allowed to make money off of this resource the American taxpayer funded? And the question, as an economist I'll say, is complicated. And the reason it's complicated is you might say, okay, well, there should not be a patent at all.
It shouldn't be patented at all. There was a law called the Bayh-Dole Act in the mid-'80s, I forget the exact date, that essentially said that NIH-funded work ought to be patentable. And the reason was the last mile problem. You have some fantastic basic science research that has some fantastic biomedical results, but if there's no way to patent it,
then there's no interest to develop it into a product that then advances health. The wisdom of the Bayh-Dole Act was to say, well, look, if you allow there to be a patent on the last mile, then now we've created a commercial interest to take the basic science advances and translate them into something that actually benefits people.
Now, the price is going to be higher, at least while the patent is still in place, but then eventually the patent will go away and the thing will be available to the public at large, to accelerate the transition from the basic science investments we make to things that actually benefit the public very directly. So in a sense, there's a trade-off there, right? You're trading off the fact that for a while there are products, funded by the American taxpayers, that are at higher prices than they would be in a purely competitive market,
for the fact that you get more rapid access to the benefits of that investment. So that's the basic trade-off at play. And that's why I say it's complicated. Well, when I joined UCSD and when I joined Stanford, I signed something saying, if I make a discovery here that translates to an important device or drug, the university is going to capture some of that upside. And Stanford is a place where there's,
let's just say, a history of people going into biotech and neurotech, partly because of the influence of the engineering school. There's actually a great joke about Stanford that a former president of Stanford told me, which is that there are only two kinds of Stanford faculty: Stanford faculty with companies and Stanford faculty with successful companies. Discussion for another time. But it's commonplace for faculty at Stanford to have companies,
to split their time between the university and their companies. But most places, like most of the NIH grants that I reviewed when I was on study section reviewing grants,
Most of the great work I would hear about at meetings came from people at universities who were really focused on charting the cell types in the retina, understanding the activity patterns in the brain during sleep and how it relates to neuroplasticity. Very few of them were involved with companies in a serious way, let alone had their own companies. So for the taxpayers who make up the majority of our listenership,
they're giving money to universities, and the universities are spending that money making discoveries. I think most of the time the university and the scientists who do that work are not capturing the upside. The general public isn't capturing the upside. They're actually paying for the upside. So it's a little bit like the journal situation. That's why I brought that up. It's a little bit like the journal situation all over again, where we as taxpayers are funding a lot of this and then have to buy it back over and over again.
Okay, so there's one other complication about the United States versus the rest of the world. So let's just put that aside for just a second; let's get back to that. Before I get there, I want to say in response that, in fact, when you take a medication or when you have some health advice that actually works, often NIH research was involved somewhere in the path leading up to it.
And there are huge returns to that, right? If you have a drug that treats your disease well, you know, congestive heart failure, and now you have a drug that allows you to live longer, in better health, in a way that allows you to live more fully. Or if you have, you know, diabetes and you slow the progress of the disease so it doesn't result in your kidneys failing or your going blind or whatnot. Right.
Those are advances that are really worthwhile. And even if the price is higher than marginal cost, it still could be very worthwhile, right? So you take metformin. It's a very cheap drug now, but once upon a time it was a patented drug. And you prevent the progress of type 2 diabetes. That's a big advance, right? I agree. So the value that you get from the NIH-sponsored research then is potentially very, very high in terms of improving your health.
even more than the marginal price for the drugs that you end up paying, or the products or the advice or whatever it is. So you're saying it was a good investment for the taxpayer. Yeah, even for the taxpayer, right? Now, I wanted to put aside the business about the international, the U.S. versus the rest of the world. Now I want to bring that to the forefront. It is also true that American taxpayers, Americans, pay somewhere between two and ten times more for the same product,
the same drug, as people in Europe pay. Why is that? There are, again, a lot of complicated reasons to do with that. But it's a very, very simple observation. There's something in economics called the law of one price, right? When you have a market in one country where the price is 10 times more than in another country, what you'd expect is somebody to go buy the goods from the other country, from the cheap country,
pay the cheap price, then go resell it in the country that has the high price. And what would end up happening is that you'd get an equalization of the price. So, as long as there's the capacity to move across and essentially close this arbitrage opportunity through competition –
you'd see those price differences collapse. And yet for decades, Americans have paid two to ten times more for the same product, often made in the same manufacturing facility, than Europeans do. There are, again, complicated reasons why, but it has to do partly with the way that American health insurers interact with drug companies.
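As a toy illustration of the law-of-one-price logic described here: if reimportation were frictionless, an arbitrageur buying at the low European price and reselling in the US would push the two prices toward each other. This is a hypothetical sketch with invented numbers and a made-up adjustment rate; real drug markets have import restrictions, rebates, and insurer negotiations that block exactly this mechanism, which is the point being made.

```python
# Toy sketch of the "law of one price": with frictionless reimportation, each
# round of arbitrage narrows the gap between a high-price and a low-price market.
# All numbers here are hypothetical.

def simulate_reimportation(price_us: float, price_eu: float,
                           adjustment: float = 0.25, rounds: int = 8):
    """Each round, resold imports undercut the high price and extra demand
    bids up the low price, by a fraction of the current gap."""
    history = [(price_us, price_eu)]
    for _ in range(rounds):
        gap = price_us - price_eu
        price_us -= adjustment * gap   # cheap imports undercut the US price
        price_eu += adjustment * gap   # extra demand bids up the EU price
        history.append((round(price_us, 2), round(price_eu, 2)))
    return history

if __name__ == "__main__":
    # e.g. a drug priced 10x higher in the US than in Europe
    for us, eu in simulate_reimportation(100.0, 10.0):
        print(f"US: ${us:6.2f}   EU: ${eu:6.2f}")
```

In this sketch the two prices converge toward a common level; the persistent US/EU gap in reality reflects the frictions and negotiating arrangements discussed next.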
Drug companies essentially use Americans as a way to fund their research and development efforts. That's what they say. The higher prices that we pay fund the last mile research that the drug companies do to test the new products. Are you saying the last mile research is the most expensive because it's the stage four clinical? Yeah, the safety. Right before we go into humans at large. Yes.
We want to know if anyone's going to drop dead. That's the argument that they make, that the drug companies make: well, yes, the Americans are paying this high price, but it's really worth it to do that. And then they go to Europe, and Europe says, well, we're not going to pay those high prices. If you're going to market the drug in France and Belgium and in Germany or wherever, you can do it, but you're going to have to charge us essentially marginal cost. So
If I understand correctly, the United States taxpayer is funding the late stage and most expensive research and development that the drug companies do.
They sell the drugs to us at a premium and they use the difference between the real cost and the sort of allowed cost abroad to make it very cheap overseas. In other words, we are paying for the insurance, so to speak, that the drugs that are marketed in Europe and elsewhere are safe. Yes. So the taxpayers in the United States are funding the basic research and the
the clinical late stage research for the entire world.
Yes, in large part. I mean, like Europe does have some institutions that invest in basic research. So it's not entirely zero. And there are, of course, private foundations that do it. But through the NIH, that's the single largest investment in basic science research in the world and also applied research. And also by higher drug prices in the United States relative to the rest of the world, we are funding the –
the phase three trials, all the research and development efforts that happen at the tail end of the research pipeline that the drug companies do. So essentially American taxpayers are the piggy bank for the world for almost all of this research pipeline. Wow. Okay.
What is being done to bring drug prices down in the United States? I heard this recently as a press release from President Trump that drug prices in the United States are soon to come down.
Knowing what I know now, based on what you just told us, the immediate question becomes who's going to pay for that late-stage safety research? I mean, it's not expensive because it's fun to do expensive research. It's not expensive because they're still exploring the basic chemistry of these molecules or the functioning of the devices. It's expensive because you have to make sure that people aren't going to drop dead or develop some other, worse pattern of illness through the use of these drugs. And that means a lot of human subjects
and many, many measures. It's not just one endpoint, like, did it lower blood sugar? It's, did it lower blood sugar, and also, did you blow a gasket in, you know, some capillary in a critical part of your brain? So, I mean, this is very expensive work. So it still needs to be done is what I'm saying. Who's going to pay for it? Okay. So let me just take a couple of cuts at this. So first, that phase four surveillance, that happens after the drug's been marketed. That's typically the FDA that conducts that work.
NIH can fund some of it, but it's mostly the FDA that tracks the safety and efficacy of drugs in broader populations after the drug has been approved for use. So again, American taxpayers are paying for that.
The phase three studies, the studies of large-scale clinical studies to check the effectiveness of a drug, check, again, the safety profiles from larger populations, that's typically the drug companies paying for that, right, in principle. But then American taxpayers pay for that with higher drug costs. President Trump, in the last couple of weeks, issued an executive order essentially saying we have to make the other countries of the world pay their fair share.
So he put an executive order in place with various mechanisms. If you want, we can talk about some of those mechanisms that will reduce the difference in price between what the U.S. pays and what the rest of the world pays. What will likely happen is that Europe will pay a slightly higher price, again, funding the research and development efforts to do that last mile of research. The U.S. will pay a lower price. And so the world will share that R&D burden more equally than we currently do.
Currently, it's American taxpayers on whose shoulders that burden of R&D falls. What President Trump has said is that that's not an equilibrium that should hold, that there ought to be policies that allow us to equalize those prices. And the kinds of mechanisms used include things like drug
price discussions in trade negotiations, linking it to the tariff policies he's implemented, allowing reimportation of drugs. So the idea is that, let's say I'm in Europe and I'm charging basically nothing for some drug and you're the United States. Someone can come to me, buy the drugs from Europe or Canada or wherever, bring them to the United States, resell them at a much cheaper price.
and make a little bit of money, but that then would equalize the price. And various other mechanisms try to bring the United States much closer to the prices of the rest of the world. It's not that the R&D won't happen. It's just that the prices everywhere will be more equal, so that the burden of R&D is shared more equally across the developed world.
What is to say that these other countries won't simply say, no, we're not going to absorb more of the cost? People don't like to see prices go up. They're comfortable with seeing prices go down, for obvious reasons. And I can think of one example, maybe not the most critically important example in most people's minds. There's a class of drugs that was released last year or about last year called the DORAs. These are drugs that encourage sleep by
suppressing the wakefulness mechanism as opposed to promoting the sleepiness mechanism, in loose terms. They have much lower abuse potential than a lot of other sleep medications. And given the essential role of sleep in mental and physical health, you know,
And I'm a strong believer that behavioral tools, sunlight, et cetera, are critical, but some people truly struggle with clinical grade insomnia and it's extremely detrimental, it's widespread. These drugs are very expensive, $300 a month or more in the United States.
Knowing what I know now, just the idea that some of that $300, let's say, let's make up a number, 200 of those dollars is to cover the research costs so that in Northern Europe, it can be available for $50 a month. I mean-
That borders on upsetting for me. Yeah, it is upsetting. And I think I understand why President Trump issued that executive order. It's upsetting for me too. It makes no sense that the American taxpayer should bear the burden of these R&D expenditures when there are lots of rich countries in the world. Why shouldn't it be more equally distributed? The question is what will happen. How the drug companies respond to the executive order and how
our allied nations respond to the executive order is still open. I don't know what it's going to look like. But what I can say is that the current equilibrium is not sustainable, right? The American taxpayers, once they understand what's actually been happening, and this is decades long, they're going to say no.
And so the way that it plays itself out, it's hard to project exactly. But what I do know is that every effort – the government currently is making every effort to make sure that those prices get more equalized. I think –
Just take it from the perspective of a European citizen, right? Someone's a French citizen or a Spanish or Portuguese or English citizen, right? Or, you know, a citizen of Great Britain. For them, allowing these prices to be more equalized, so that they share the burden, essentially creates an interest for the drug companies to focus on the kinds of health conditions that they have.
Most of the research now, since it's paid for by Americans, the drug companies are focused on problems that Americans have. It aligns the interests of the drug companies to think more broadly about what they should be investing in to include the health problems that Europe has. Is it true that – I've heard this before – 90 percent of the psychoactive drugs, like the antidepressants, the SSRIs and related things in the world –
are prescribed and consumed in the United States? I don't have a specific number, but it is pretty substantial. I think as far as drug profits go, I think it's like
two-thirds or three-quarters of all drug profits are made in the United States. And are most of those for the sort of Adderall and psychotropic-type stuff? Sorry, I don't know if psychotropic is the correct term. I'm going to get beaten up by people if I don't get this right. Let's just say psychoactive, excuse me, I meant to say psychoactive drugs like
SSRIs, which, by the way, in my view of the literature, are not always bad. We hear that they are bad in some instances or many instances, but for the treatment of clinical-grade OCD, the SSRIs have been a tremendous tool. They haven't cured OCD in every case, but they've been a tremendous tool. So I want to make sure not to demonize them. So I don't know the specific numbers for psychoactive drugs, but
as an industry as a whole, it's the United States that drives drug company profits, that pays for drug company profits. I think it's like two-thirds or three-quarters; I forget the exact number. And so what are these American problems? So is it obesity? Are they obesity-related issues? Yes, obesity, depression. I mean, those are a lot of them. I mean, the United States, I think Mexico is now above us, but for a long time was the most obese nation in the world. Right.
You know, of the big nations in the world. So, the diseases related to obesity. Now, admittedly, the European countries have those problems too, but just to a lesser degree. The drug companies, their research and development efforts, naturally go to where they're making the most money.
And so what this will end up doing is it'll align the drug company incentives to focus on the problems that Europeans have at higher levels than Americans have, relatively. Now, these are all rich countries, so it's not like there are unique diseases that happen in Europe that don't also happen in the US. It's a question of relative levels of investment, right? And so
I don't think that's necessarily bad. An excessive investment in just the things that Americans have at scale doesn't necessarily translate to better health for Americans. So you can see this: since 2012, there's been no increase in American life expectancy.
From 2012 to 2019, literally, it was almost entirely flat life expectancy, whereas the European countries had advances in life expectancy during that period. During the pandemic, life expectancy dropped very sharply in the United States, and only just last year did it come back up to 2019 levels.
In Sweden, life expectancy dropped in 2020 and then came right back up by 2021, 2022 to the previous trend of increasing life expectancy. Whatever investments we're making as a nation in research are not actually translating into meeting the mission of the NIH, which is to advance the health and longevity of the American people.
Right. We've had some tremendous biomedical advances that now allow us to treat diseases that were previously untreatable, which is great. That's a good thing. But as far as the broad health of the American public goes, it hasn't addressed the chronic disease crisis that we face or the crisis in longevity that we face. The next generation of kids, our kids, are likely to live shorter, less healthy lives than we have lived as parents,
as American parents. And that, I think, is an indictment of this entire industry. We've focused on
managing and treating illnesses, trying to hold on, especially with chronic diseases, and we're failing at it. Europe, on the other hand, is seeing expanding life expectancy. This change of trying to equalize drug prices, aligning our portfolio of NIH investments to meet the health needs of the American people, it's a long-needed corrective.
You asked if it will succeed. I hope so. That's the reason I took this job. I'd like to take a quick break and acknowledge our sponsor, AG1. AG1 is a vitamin, mineral, probiotic drink that also includes prebiotics and adaptogens. As somebody who's been involved in research science for almost three decades and in health and fitness for equally as long, I'm constantly looking for the best tools to improve my mental health, physical health, and performance.
I discovered AG1 back in 2012, long before I ever had a podcast, and I've been taking it every day since. I find it improves all aspects of my health, my energy, my focus, and I simply feel much better when I take it.
AG1 uses the highest quality ingredients in the right combinations, and they're constantly improving their formulas without increasing the cost. In fact, AG1 just launched their latest formula upgrade. This next-gen formula is based on exciting new research on the effects of probiotics on the gut microbiome, and it now includes several clinically studied probiotic strains shown to support both digestive health and immune system health, as well as to improve bowel regularity and to reduce bloating.
Whenever I'm asked if I could take just one supplement, what that supplement would be, I always say AG1. If you'd like to try AG1, you can go to drinkag1.com slash Huberman. For a limited time, AG1 is giving away a free one-month supply of omega-3 fish oil along with a bottle of vitamin D3 plus K2.
As I've highlighted before on this podcast, omega-3 fish oil and vitamin D3K2 have been shown to help with everything from mood and brain health to heart health to healthy hormone status and much more. Again, that's drinkag1.com slash Huberman to get a free one month supply of omega-3 fish oil plus a bottle of vitamin D3 plus K2 with your subscription. Today's episode is also brought to us by Levels.
Levels is a program that lets you see how different foods affect your health by giving you real-time feedback on your diet using a continuous glucose monitor. One of the most important factors in both short and long-term health is your body's ability to manage glucose.
This is something I've discussed in depth on this podcast with experts such as Dr. Chris Palmer, Dr. Robert Lustig and Dr. Casey Means. One thing that's abundantly clear is that to maintain energy and focus throughout the day, you want to keep your blood glucose relatively steady without any big spikes or crashes. I first started using levels about three years ago as a way to try and understand how different foods impact my blood glucose levels.
Levels has proven to be incredibly informative for helping me determine what food choices I should make and when best to eat relative to things like exercise, sleep, and work. Indeed, using Levels has helped me shape my entire schedule. I now have more energy than ever and I sleep better than ever. And I attribute that largely to understanding how different foods and behaviors impact my blood glucose.
So if you're interested in learning more about Levels and trying a CGM yourself, go to levels dot link slash Huberman. Right now, Levels is offering an additional two free months of membership when signing up. Again, that's levels dot link, spelled, of course, L-I-N-K, slash Huberman to get the additional two free months of membership.
Well, I really appreciate that you explained so clearly what's going on with this drug price differential and who's paying for it. I was not aware of that. Perhaps I should have been, but I was not aware of that. And as we talked about a little bit earlier,
Most of the general public, even those trained in science, engineering, and mathematics, can connect two or three dots, but they're also very busy. And the general public, like I said, I believe are smart, but it has to be spelled out very clearly, the way you did, for people to really understand. I'm a health economist. Right. Well, I mentioned that in my introduction, but I think it is very important for people to understand that you look at things through the lens of
science and medicine, but also epidemiology and economics. There's a saying in laboratories, which is that just adding more money doesn't improve the science, but it certainly allows you to take bigger risks.
in service to health and discovery. And without money, no science gets done. I mean, no money, no science. You can't pay graduate students, postdocs, et cetera. I don't want to spend too much time on the structure of basic laboratories, although that's my leaning. I could spend hours talking to you about what's going to happen with the universities, et cetera. We'll come back to that. But there is one piece that we opened up earlier that I think it's important that we close the hatch on, which is the notion of indirect costs being now, well,
It's pending litigation, but leveled to a lower number, 15 percent, if the administration has its way, or back to the variable rates, depending on the university, if this lawsuit has its way. And here's what I hear a lot, just to put it in the simplest of terms. Stanford, Harvard, UT Austin.
Big universities, often the private universities, have big endowments. So money that's been given by donors, some might have come in through tuition, it's been invested, they sometimes will spend the interest. But as you and I both know, no university likes to spend the endowment. Just like no one really likes to spend their savings, right? People like to spend the interest they make on their investments from their savings. Nobody likes to spend their savings, universities included.
The general public tells me all the time, not just on X, but on all platforms and whenever I interact with the public, why should we pay for research at these universities that have these large endowments? To which I say, now it's true Stanford has a very large endowment, Harvard as well, UT Austin and other places. But many universities, fine universities, superb universities throughout the United States do not have extremely large endowments. And as you pointed out, there's excellent work, important work, actually.
I should say, being done at those places. So to cut the IDC to 15% for everybody, I can see where people would say, well, why don't they just dip into their savings, the endowment? But if you're, I'm not going to name names, but if you're at a smaller public university, in particular in certain areas of the country, not on the coasts, unless you're at a WashU in St. Louis or a UT Southwestern, and they're rich, honestly, they have a lot of money, there isn't a savings account to go into.
The buildings don't look the way they do at these other universities. You don't have these impressive lawns and thousands of gardeners, which we're so blessed to have at places like Stanford and Caltech that have tons of money. So cutting the IDC across the board for everybody isn't just a corrective aimed at the rich universities. I do think it potentially punishes the less wealthy universities and important research. I say that in service to them, and frankly, just
being at Stanford, it wouldn't be right for me to be like, oh yeah, 15%, we'll dip into the savings. It doesn't quite work that way if you're at a public university. Well, I think you're hitting on the exact policy question, the right policy question. The question is, how should the federal investment in the fixed cost of research be distributed?
Right now it's distributed in a very unequal way where the top universities have access to that money because they have scientists that can win NIH grants. It's a funny thing because like if you think of it as like a fixed support for the fixed cost of research, you have to have scientists who are good at getting support for the marginal cost of research in order to get the fixed cost of research. But if they're fixed, why?
Why would you do that? Why wouldn't you have the money go more equally spread across, right? The endowment money is another more complicated question. I think that the endowment monies often are like focused on particular projects. There are restrictions on it. But you're absolutely right. It does make a buffer for some of the bigger universities.
that allows them to survive the vicissitudes of NIH funding or the economy more so than universities that don't have that endowment. But from the federal perspective, the key thing is how the funds should be distributed across universities. There's a program called the IDeA program that the NIH, the National Institutes of Health, has, right?
And I apologize that I don't remember the acronym, but I'll tell you what it does. It says for research institutions in the 25 states that are in the bottom half of the distribution of NIH funding, it gives them a leg up in being able to get access to this federal funding for the fixed cost of research. I think that's a great program because what it does, it says, look, the federal government shouldn't just be funding the top universities.
It doesn't make sense from the point of view of trying to get the biggest bang for the buck in scientific knowledge. Just like a very narrow – this isn't a narrow thing. It's like an important thing. I think scientific groupthink happens when scientists are all just on the coast and the only scientists you interact with are scientists that already agree with you. Geographic dispersion of scientific support allows more –
richer conversations about science that allow different scientific ideas to develop, simply because support is more geographically dispersed. It combats scientific groupthink. There are other reasons too. As you said, there are excellent scientists at universities that aren't, you know, the Stanfords or Harvards or whatever, and if you gave them an environment where they could do their work, they would
make tremendous advances, right? So I think for lots of reasons, it makes sense to do that.
I don't want to comment on the specific 15%, which is subject to litigation. I will say that the key policy issue is exactly the thing you said: how should the money for the fixed cost of research be distributed across universities? One system you could imagine would be where different universities compete on costs, right? So a university that's able to more inexpensively provide a square foot of lab space,
fully supported with radioactive waste disposal and all the other stuff, maybe the NIH ought to be giving money to that university more than to a university that has to provide it at much more expensive rates, right? That's not the current system, but you can imagine a system like that, right? So I think this fight over this 15%, I think it's a great
time now to rethink how the NIH and the federal government supports the research infrastructure of the country. It's for the first time in, like I think in 40 years, it's now part of the public consciousness, this thought. And I don't think, I've not seen anybody who says that we shouldn't have federal support for universities. The question is how should it be structured and to what extent?
Those are, I think, legitimate questions for public policy debate. Yeah. Well, before moving on from funding and the relationship between tax dollars and universities, I want to ask one more question. Then we'll move into issues of public health specifically.
But having been on study section, I realize I never explained what study section is. Study section is when a group of scientists convene, it used to be in different cities or virtually, and they review grants. Typically, the people who review the grants are expert or near expert in a given area, typically three primary reviewers, and a bunch of people vote on the grant. And to make a long story short, whether you get money to do research from the federal government, a.k.a. the taxpayers,
is voted on by a jury of your peers. This has distinct advantages, in my opinion, because real experts or close to experts are evaluating your work and they either have to advocate for it or they actively try and kill it. From the perspective of a reviewer, you're given 12 grants and you know that only three of those can be funded.
And so you literally have to advocate for the one or two that you feel most strongly about. And you find ways to legitimately make sure that the other grants are not scored as well. And you evaluate each one on the basis of its merits. But you go into those study sections knowing like, goodness, like this grant, I sure would like to see this one. And this other work is kind of pedestrian. It's kind of like all the others. Now, this is a great model in principle, right?
However, you talked about groupthink. It lends itself very well to people who are very good at grant writing, which is important. Grantsmanship is important for continuing to get money. But it keeps new ideas, ideas that are outside the vein of what a researcher has been doing for the last five or ten years,
from being promoted; it discourages chasing new concepts, new hypotheses. It tends to make science move very slowly and very incrementally. So that's one issue. However, I realize I'm weaving two questions together. There's what you described before, that the majority of science that's funded at these universities on the coasts has this geographic groupthink effect. What about the rest of the country, in these other places?
The study sections, the people who review the grants intentionally include people from throughout the country. It's related, in fact, I think, to the distribution of the electoral bodies and people who lobby in Congress. So in other words, there's no study section on a given topic, say Alzheimer's, where you don't see people from the coasts.
but where you also don't see somebody from the Midwest, somebody from the desert Southwest. There's always been geographic coverage among the people who decide which grants get funded. So I'm just adding that historical component here. But the question is a very straightforward one, which is, given that a jury of peers decides what gets funded, that checks off the box of, are they experts? Yes, more or less.
But it also means that nothing really that new can get funded. Yeah, I mean, I think you've hit on a real problem. Let me contrast it with Silicon Valley, right? So in Silicon Valley, you're an angel investor or a venture capitalist, and you invest in a portfolio of 50 projects, right?
and 49 of them fail and the 50th succeeds, it becomes Google or Apple or something. That's a very successful portfolio. The process of how we, the NIH, review grants embeds in it a certain conservatism, a desire to make sure that every grant that's funded succeeds. You can have a portfolio where every grant succeeds, but then the portfolio as a whole is not as productive as it ought to be.
Because how do you make every grant succeed? Well, you just fund incremental work that you know will work. We call that turning the crank. There was a professor at the Salk Institute, a superb institution down in San Diego, who
said to me, you know, there are two kinds of science. There's a kind of science where you test a really bold hypothesis, and most of the time it will be wrong. But if you hit something, it's apt to be spectacular, maybe even open up an entire field, maybe cure a disease. This has happened before many times over. Or there's the science that will get you funded, where you turn the crank. You look at a different protein in a pathway that
is marginally interesting but is predictable in terms of its ability to create papers. Students need papers, postdocs need papers; most of them don't want to go on to be lab heads, so they just need papers and a PhD. And you learn something along the way, and hey, you might stumble on something really interesting, but it's kind of like: stand on one foot, stand on the other, spin around. And
without money, there is no science. So you could understand why people would be incentivized to do this kind of more incremental, I'll just call it pedestrian, science. Like, really? They're showing this again? You go to the meetings and it's like, they've been doing this stuff for 15 years, but they keep their NIH grants. And then at the end they go, we were funded for 30 years. When people brag about having the same grant for 30 years, I just go, oh my goodness, you should be embarrassed. Now, how about...
Yeah.
And this is coming from a tenured professor. Yeah. So, like, what gives? Well, I was formerly a tenured professor until recently. But you gave it up by choice. I did, yes. Okay.
Okay, so before the pandemic in 2020, actually for a decade before, I'd been working on measuring the innovativeness of scientific portfolios. I had a paper that was published on the eve of the pandemic asking how innovative the NIH portfolio in particular is. And so let me just describe the methodology, because it's easy to understand, right? So take every single published paper
published in Biomedicine in 1940. Take all the words and word combinations in it and just list them, okay? Then you do the same thing for all the papers published in 1941 and subtract off all the 1940 words and word combinations. What you're left with are the unique words that were introduced into the biomedical literature in 1941. You do this for 42, 43, 44, into 2020, and what you get is a history of biomedicine.
It comes right out of the words that were actually published. You can do this because computers, right? And so you have an age for every single idea that was introduced in biomedicine that just comes out of this automatic process. You go back to the papers and ask, how new are the newest ideas in the papers when they were published?
So just to take a concrete example, polymerase chain reaction in 1982, 83 was a new idea. And so if you were Kary Mullis, publishing a paper with the words polymerase chain reaction in 1982, that's a paper that's relying on new ideas. If the newest idea in your paper in 2020 is
polymerase chain reaction, well, that's an idea that's, you know, almost 40 years old, 40-plus years old, right? And now it's in the methods section. Barely. Right. Because, you know, it's just like Xerox, right? You barely mention it, right? So the point is that you can use this method to ask how new the ideas are in every single biomedical paper that's ever been published.
So we did that. Me and my colleague, Mikko Packalen at the University of Waterloo, we asked, for NIH-funded papers, has the age of the ideas in the papers shifted over time? And the answer is yes. Papers that were published in the 1980s with NIH support tended to work on ideas that were one, two, three years old. Papers published in the 2010s were working on ideas that were seven, eight years old.
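To make the text-mining methodology concrete, here is a minimal sketch in Python. This is my own illustration, not the authors' actual code or corpus; it uses a tiny made-up set of paper texts, and the published analysis involves additional steps (for example, restricting attention to terms that later become widely used) that this toy version omits.

```python
def ngrams(text):
    """Unigrams and bigrams from a (pre-cleaned) paper title or abstract."""
    words = text.lower().split()
    grams = set(words)
    grams.update(" ".join(words[i:i + 2]) for i in range(len(words) - 1))
    return grams

def debut_years(papers_by_year):
    """Map each word/word-combination to the first year it appears in the corpus."""
    debut = {}
    for year in sorted(papers_by_year):
        for paper in papers_by_year[year]:
            for gram in ngrams(paper):
                debut.setdefault(gram, year)
    return debut

def newest_idea_age(paper, pub_year, debut):
    """Age of the newest idea in a paper: publication year minus the debut year
    of the most recently introduced word/word-combination it uses."""
    years = [debut[g] for g in ngrams(paper) if g in debut]
    return pub_year - max(years) if years else None

# Toy, made-up corpus: "polymerase chain reaction" debuts in 1982, so a 2020
# paper whose newest idea is still PCR scores an idea age of 38 years.
corpus = {
    1982: ["polymerase chain reaction amplification of dna"],
    2020: ["polymerase chain reaction amplification of dna"],
}
debut = debut_years(corpus)
print(newest_idea_age(corpus[2020][0], 2020, debut))  # -> 38
```

The design choice is simply that vocabulary is used as a proxy for ideas, so the "age" of a paper's newest idea falls out of nothing more than the publication years of the terms it contains.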
At the same time, in the 1980s, the age at which you could win a large grant at the NIH, they're called R01s. I mean, you know all about that. But the reason why these large grants are important is because they are the ticket first to getting funding so that you can actually test your ideas and do the experiments you want to do. But also, they're the ticket to getting tenure at fancy universities. In part, I should say.
Because R01s, these large grants, carry large amounts of IDC, indirect costs. Let me put it differently. If a professor comes to a university and does absolutely groundbreaking work, but does it entirely on foundation money, which carries very little in indirect funds to provide to the university, there's a chance they'll get tenure, but a very small chance.
Professors who have R01s stand a much higher probability of getting permanent employment at that university, called tenure. There are ways to lose tenure, but in principle it's academic freedom. Tenure was never really about a job for life; it was really about the freedom to explore ideas. It turns out there are some subtleties in that, but I think it's so important for people to understand. So much so that when I heard about this
perhaps reduction in IDC to 15%. My first thought was, whoa, that's a big cut. My second thought was, who will get tenure and who won't get tenure? Now it will have to be based on the merits of the work. Now there is a correlation, right? People who do spectacular work tend to get grants. People who get grants tend to get more money and then you can explore more, et cetera. And the dirty secret in all the R01 stuff is that everybody knows that the R01s are used to
to fund the next bout of research, but what you propose in an R01, sorry to break it to you, it's work that's already completed. This is the inside secret of every scientist. Every scientist. Because you want to say, look, I can do this. I've had R01 support also. You show them the preliminary data. This is what I did.
this is what I'm going to do for the next five years. But the dirty secret is, this is what I already did for the past five years. I get the money. I do the next thing. This is the shell game that every scientist learns to play, because otherwise, as you say, you get it in the neck, which is grant-speak for you're done. You
can't take students or postdocs. You've got to fire your technicians. You close your lab and you become what's called dead wood. So there's a game that's being played, and it's not a dirty game, but it's this kind of don't-ask-don't-tell game. Everyone knows that people are doing this, and look, scientists are good people. I want to be very clear. They're just trying to survive. Most scientists, I think,
I believe most scientists are trying to get it right. I think the local culture can contaminate things, this need to be funded. I'll grant you most of them. Okay. Yeah. And you know, I'm here in part as an advocate for the public and in part as an advocate for the scientific community. I can't split myself. So am I, Andrew. But with lower IDC, who will get tenure? I mean, who will get tenure? What's it going to be based on? Yes. I mean, that background is really helpful, but here's a fact.
In the 1980s, the age at which scientists won their first large grant, R01, was mid-30s. Okay. I got mine – let's see. I started my lab when I was 35. I got mine – my first R01 I got when I was 37. But I started my lab in 2011. Right. Right.
From 2011 to 2020, you were young for an R01. I was, yeah. Right? The typical scientist was in their mid-40s before they got their first R01. I didn't have a family. I worked 90 hours a week. Right. So the point is that young, early-career scientists take much longer now to be able to get support to test their ideas out than they did in the 1980s.
This is important for innovation because it turns out, and this is another paper that I published before the pandemic, that it's early-career scientists who are most likely to try out new ideas in their work, in their published work. So in fact, this is depressing for me as a man with gray hair, but it's monotonic. The first year after your PhD is when you're most likely to have newer ideas in your papers.
And then every year after that, for every single year of chronological age, the age of the ideas you tend to work on tends to increase by about a year. Well, the late Ben Barres, my postdoc advisor and beloved colleague at Stanford who unfortunately passed away in 2017, he used to say, he was roughly 60 when he died, he used to say, nobody does anything after they get full professor. I was like, that's crazy. We have Howard Hughes investigators, people that, you know. He goes, oh, the
work is done early. I said, what about you, Ben? You're there. He's like, oh yeah, I'm done. You know, this was before he knew he was dying. I mean, this is the dirty secret, because when you're young, you're hungry. Given the space from your previous mentors, you're going to go for it because you have to go for it. And if nothing else comes of today's discussion, and already a lot has come of today's discussion, I want to put in a really strong vote for encouraging, I'm going to catch so much heat for this, but encouraging...
The older labs talk about funding the next generation of science while taking most of the pie for themselves. I really believe, like, if I could just, I'm not going to beg, but I am going to be emphatic. No, you don't have to beg. We need young labs to be funded. This is an open door. Thank you. In my Senate testimony, before I became the NIH director, I said this is a major initiative that I want. I mean, I think that early career –
Let me put probably too sharp a point on it, right? So right now what we do is we take the careers of young scientists and effectively put them at the service of older, more established scientists. So the early-career scientists are essentially doing the work of the older-career scientists, right? So you have to have postdoc one, postdoc two, postdoc three before you have any chance of getting an assistant professor job where you could test your own ideas out.
Essentially, the labor of young scientists is devoted to the ideas of older scientists in the current system. That wasn't always true. And the NIH has played a role in that. And it's part of the reason why we have had essentially this sort of more incremental progress than I would have hoped for. You know, when I...
did my PhD and did my MD in the early 90s and then into the mid 90s, I envisioned a career where there'd be huge advances in science that I would spend my entire career thinking about and chasing. And there have been some huge advances,
But frankly, I have this sense that there have been fewer of them than I would have expected as the 1990 version of me. Especially in the biomedical sciences because I think we see the expansion of AI. We see the expansion of computer science, etc.,
I could not agree more. I actually think some of the programs, like the post-bac programs at NIH, and I don't want to destroy this program by saying this, but these are where people finish college and do two years of research before they decide to go to graduate school. This, in my mind, delays and kind of drains the initiative of a lot of young scientists. Look, there's nothing more beautiful than someone graduating college who's still excited about biomedical science. Take that energy, and usually they don't have a lot of other commitments yet.
I think we should fund them so they can have a healthy life. They don't need to have a lavish lifestyle, but a healthy life and spend as many hours as is reasonable in the lab making discoveries to get through their PhD, do like it used to be a short postdoc, start a lab and hit the ground running in their 30s and get major funding to be able to test new ideas.
it's not just the Silicon Valley model. It captures everything we know about brain plasticity. Their brains are still plastic. They're full of energy. They're full of dopamine, naturally. And I'm not saying that everyone past 60 is dead wood, old wood. There's some amazing work being done at that age, but it's very top-heavy. And of course, no one wants to give up their lab. I know people in their seventies and eighties who don't know what they'd do if they retired. I don't care. Get a hobby. Let the next generation in. Actually, there's one good result, one result
that made me a little bit comforted, in this paper that I did with Mikko Packalen on age and the trying out of new ideas. That is that teams of young scientists, a relatively young first author teaming with a mid-career or later-career scientist as the senior author, that combination is most likely to try out newer ideas in their work.
It's like you kind of need the... So keep the old folks around. By the way, I'm turning 50 in September, so I'm nearing these numbers. You're still a young man. All right. Well, I'm very passionate about this, in part because some of my former graduate students and postdocs are now
professors at universities working extremely hard on extremely interesting questions. But I know they would be pursuing even bolder questions related to immune system function and autism, related to, you know, visual repair to cure blindness. I mean, these are not trivial issues that they're trying to pursue. They deserve, and their peers deserve, the majority of the taxpayer dollars for discovery, because I think that therein lie the discoveries.
And there is this culture in academia of people kind of pinning awards on each other as you go up the ladder. Some of those awards are nice. A good friend of mine was just made a member of the National Academy of Sciences. He called me. I said, congratulations. I was like, this is fantastic. And he said, it feels good, but, you know, I want to be in the lab. I want to be in the clinic. I mean, that's what's important. The titles, in the end, are meaningless. I've seen so many colleagues die. Their offices get cleaned out within a week.
They're gone. And so the discoveries that young scientists make with tax dollars are, to me, the most important and beautiful thing that can happen. We will soon migrate into a discussion about public health, but I'm so relieved to hear, A, that journals are going to be accessible to the public, and B, that you feel this way about young scientists, because I've got nothing against the old. I'm not an ageist, but let's face it.
Youth is when discovery happens. But I think, let's bring this back to something you brought up earlier and I haven't yet addressed, which is how we evaluate science at the NIH, right? These study sections. They're inherently, and you alluded to this, they're inherently conservative.
Right. So just to put a real fine point on it: I think in the 2010s, there was a policy that in order to be an active member of a study section, a standing member of these grant review panels, you had to have an active R01, an active large grant.
Think about that, right? So I am a scientist. I'm really well accomplished in my field. I have a large grant. By every measure of scientific success, I'm a success. And now I'm sitting judging young scientists pitching their ideas, some of which, if they turn out to be true, maybe undermine my ideas. I mean, it's really hard to like open your brain and say, oh, okay, I'm going to support this.
project that might undermine my entire career. I mean, everything we know about cognitive bias supports what you're saying. There's another aspect too, which is, you know, letting go of one's own ideas, especially if your funding and your ability to pay your people depend on them, is tricky. There's another aspect, and this is not just inside baseball: if you're on study section, your grants are evaluated differently.
A lot of people are on study section because you get what's called a special, where your grant is reviewed by a small team of people you know, and you know who they are, people who generally like you and whom you like. You can even suggest names for who's going to review your grant. Being on study section helps you get grants.
You have to get one first in the open water of a regular study section. But I hope what people are starting to understand is that the system isn't corrupt. It's just structured in a way that doesn't favor bold, innovative change. And those words, bold, innovative change, are thrown around a lot. I was part of the National Eye Institute's Audacious Goals Initiative. We'd get into a room every year. We'd sit around. How are we going to cure blindness? What are we going to do about retinitis pigmentosa, macular degeneration?
And then everyone went back to doing the same work they were doing before. And so a lot of times these phrases get thrown out there, websites get put up, and like nothing changes. When I talk to the public about science, like there's a couple of modes. Like now post-pandemic, a lot of it is just purely cynical. But like there's another mode of like,
thinking of scientists as just sitting around thinking deep thoughts, making big advances. But in fact, what you're saying, and I agree, is true. It's not entirely cynical, but the fact is that there's a sociology to science, right? There's a sort of careerism inside science. And sometimes it can lead to good, right? Your competition with other scientists to make the next big advance. But I think in the current way we structure incentives in biomedicine,
very often we discourage that kind of sharp innovation. We encourage essentially incremental advances so I have a safe scientific career for the rest of my life rather than take a big scientific risk where I might fail, but if I succeed, I cure macular degeneration, I cure type 2 diabetes or whatever, right?
The structure of this, essentially, if you want to put it down as the key problem, is that in biomedicine, academic biomedicine, we are too intolerant of failure. If you have a big idea that doesn't work, essentially you're out.
That's not true in Silicon Valley, right? In Silicon Valley, a failed startup doesn't mean that you can't get another shot at trying to make a successful startup, right? Silicon Valley does not punish failure that sharply. And that is the key to its success. Whereas in biomedicine, the current version of it we have now, we punish failure way too sharply.
I completely agree. And I should definitely point out, I never had trouble getting grants, so I'm not coming to this with any cynicism. I moved on to podcasting, and I still teach; I closed my lab out of joy for what I'm currently doing. It wasn't that I couldn't fund myself. I did see excellent grants get killed. I also saw some excellent work progress. I definitely agree with this analysis that you did. Thanks for doing that paper. I'll take a look at it. We'll put a link to it, the finding that
work early in one's career tends to be the really innovative stuff. There's just something about the younger brain that is more ambitious, higher risk-taking. And unfortunately, now there's so much pressure to get funding, for IDC reasons and to get tenure, that oftentimes young investigators will lean toward the more pedestrian, turn-the-crank type of science, get tenure, and then think they're going to go do something bigger. But typically...
I am very relieved to hear that young investigators, young scientists, new ideas are going to be prioritized, hopefully through where it really matters, like brass tacks, like
I think early-career R01s should be bigger than late-career R01s. Funding should be inversely related to the size of a laboratory. I think smaller universities should get a bigger piece of the pie, I do, if the work is up to par, right? You don't want to give them money just because. But I imagine if R01s were, I don't know, 50 to 75% bigger for new investigators, and maybe they weren't four or five years, maybe they were six years, you could really take a run at something, or multiple things.
And then maybe older investigators who have had grants for a while, you don't want to turn them out to pasture too fast. You want to pivot them slowly. I'm kind of joking. But maybe their R01s should be smaller, and they should be more selective about what they're doing, because with a lot of grants top-heavy in the older generation, they can kind of just spread it around. Well, that postdoc went back overseas and that didn't work out. I hear about a lot more kind of –
quiet-exit-type failures, as opposed to: we tried really hard, we thought this signaling pathway was going to be the thing, it wasn't, close that hatch, pivot quickly to the next thing. There are a few things we could do. I mean, one of the nice things of being the NIH director is there are lots of smart people who've given me fantastic suggestions, especially for this specific problem, which I think is
probably the most important thing I'm going to be dealing with, that plus the replication crisis we talked about. And I'm not sure exactly what portfolio of things we do will fix this, but we have to support young scientists, early-career scientists. We have to punish failure less, and we have to change the incentives around so that people want to test the big thing,
the big thing that translates into advances for some of the most intractable health problems we face. And if we don't do that, we at the NIH are going to look back and say, well, the NIH portfolio of investments the American taxpayer made has not paid off, just on a macro scale. I mean, you can frankly say this for at least the period since 2012: we have had no increase in life expectancy in the United States. The NIH portfolio in that sense did not pay off.
I've heard, and I think it was the former director of NIH in a public forum at the end of last year, it was November of last year, I tuned in for that, said that we've developed more treatments to extend the life of older people, or at least to limit their suffering somewhat. So cerebrovascular disease, cardiovascular disease, things related to dementia, small differences to keep them alive longer, but the real dearth of meaningful treatments is
sits around younger populations who are dying deaths of despair or whose health is in really just in dire condition due to obesity, diabetes, and mental health issues. So in other words, young people are getting sicker earlier and staying sicker, and older people are getting sick but holding on to some remnants of health longer.
And most of the treatments are geared toward the older population. Is that true? Yeah, that's true. That's exactly right. That's a terrible situation, because it essentially is not preparing for the future. Right. So what we have is a sick care system. The advances we've made have allowed people to stay sick longer. It hasn't translated to a longer life, right? There was a hope, I think, when I first started doing research in 2001 in population aging.
There was this idea of a compression of morbidity. That is, you live a long life and the time you spend really sick and disabled is compressed at the very end of your life. Rather than spending a long time disabled and sick and dying after having spent a decade or more very sick, the idea was that the advances in our culture would produce results so that you
live a long life and only spend a few months really sick at the end of your life. That hasn't panned out. In fact, we have very little increase in life expectancy. And many, many people, unfortunately, spend a very long period of time
in a state where the quality of life is not that high, not that good, right? Dementia, chronic disease; say, diabetes leading to all kinds of kidney failure, macular degeneration, you name it, peripheral vascular disease, heart disease. You end up with a situation where all of these amazing biomedical advances that we've had over the last decades have not translated into actually improving the health, wellbeing,
and longevity of the American people. I think that the biomedical infrastructure, research infrastructure of the country has to translate over for results for real people, for the American people. Otherwise, people can ask us, why are we doing what we're doing?
It can't just be that we're doing cool things. I mean, not that we're not doing cool things; a lot of cool things are getting done. But if they don't somehow eventually translate over, and again, I don't mean to diminish basic science work, I think basic science work is really important, but eventually it has to translate over, or else people will say, well, why have we made these vast investments? The key thing is, if we're not actually improving health as a result of the research we do, then we haven't accomplished our mission, right? That's the...
The research agenda of the NIH, as we've talked about, it's like we talked about with international relations determining in part what scientists work on, you know, in drug pricing. We talked about how politics determines the agenda that scientists work on, right? So you talked about HIV, right? The political focus on HIV led to the vast investments the NIH has made in HIV, with some positive effect, actually a lot of positive effect.
And then there's also the sociology of the scientific profession. These are all complicated things that result in the portfolio. But if the portfolio ultimately doesn't meet the health needs of the American people, then it's not doing what it's supposed to be doing. Part of my job is to make sure that it does meet those health needs. The Make America Healthy Again movement, that's what it's asking for: that the health institutions of this country actually meet the health needs of the people where they are.
And in large part, we've not successfully done that in this country for decades. Otherwise, we wouldn't have this major chronic disease crisis we're currently facing. And so it's a complicated question. It's not just solved by funding one grant or making specific decisions. It's about the incentives of the system at large,
creating incentives so that scientists turn their ingenuity toward those health needs rather than just advancing their careers incrementally.
I'd like to take a quick break and acknowledge one of our sponsors, Element. Element is an electrolyte drink that has everything you need and nothing you don't. That means the electrolytes, sodium, magnesium, and potassium in the correct amounts, but no sugar. Proper hydration is critical for optimal brain and body function. Even a slight degree of dehydration can diminish cognitive and physical performance.
It's also important that you get adequate electrolytes. The electrolytes, sodium, magnesium, and potassium, are vital for functioning of all the cells in your body, especially your neurons or your nerve cells. Drinking element dissolved in water makes it very easy to ensure that you're getting adequate hydration and adequate electrolytes. To make sure that I'm getting proper amounts of hydration and electrolytes, I dissolve one packet of element in about 16 to 32 ounces of water when I first wake up in the morning, and I drink that basically first thing in the morning.
I'll also drink Element dissolved in water during any kind of physical exercise that I'm doing, especially on hot days when I'm sweating a lot and losing water and electrolytes. Element has a bunch of great tasting flavors. I love the raspberry. I love the citrus flavor. Right now, Element has a limited edition lemonade flavor that is absolutely delicious. I hate to say that I love one more than all the others, but this lemonade flavor is right up there with my favorite other one, which is raspberry or watermelon. Again, I can't pick just one flavor. I love them all.
If you'd like to try Element, you can go to drinkelement.com slash Huberman, spelled drinkelement.com slash Huberman, to claim a free Element sample pack with a purchase of any Element drink mix. Again, that's drinkelement.com slash Huberman to claim a free sample pack. This is a perfect segue for a discussion about the replication crisis. It's a perfect segue because up until now, and still now, the independent investigator model, for those that aren't familiar, is...
Andrew Huberman gets hired as an assistant professor who might get tenure at a university. And then the so-called Huberman Lab, which before it was a podcast was also an actual laboratory, physical space, has to come up with a set of ideas that hopefully pan out. You get funded, you get tenure, and then you can pursue new ideas, but it's an independent kind of startup of its own. My neighbor two doors down the hallway works on something else.
One of the major issues, I believe, that led to the so-called replication crisis is that it is very difficult, even with the best of intentions, for two laboratories to do the same work in an identical way.
Five minutes longer on a countertop at room temperature might change an antibody that could lead to a different outcome. I mean, there are so many variables. The solution to this is collaboration. Instead of having independent investigators, you have clusters of laboratories, hopefully distributed throughout the country, working on the same problems, collaborating. There are grants of this sort, but here's the problem. As you point out, it's a sociological issue. The graduate student in my lab needs a first author paper if they want to eventually get their own lab.
The postdoc in another laboratory doesn't want to be a middle author with 20 other authors. To continue to flesh out the world of science with scientists, the independent investigator model works. Those independent laboratories are naturally going to come up with different answers, talk about them at meetings, and maybe there'll be some convergence of ideas. But
Wouldn't it be beautiful if laboratories collaborated to try to solve important problems related to public health and everyone was incentivized through perhaps not easier but more plentiful funding to do the research, salaries that these people can live on reasonably while they're graduate students and postdocs, and maybe even laboratories that are more structured around a problem so it's not called the Huberman lab?
It's called the Laboratory for Curing Blindness. And there's another laboratory for curing blindness at WashU and another one in University of Illinois. And we all collaborate and we try and cure blindness as opposed to making it all about the principal investigator, the independent investigator. The rock star model of science is
kind of works, and it kind of is part of the problem, in my opinion. I agree with you about collaboration in the following sense. Science is a collaborative process, but the incentives within science for individual advancement can often lead to a structure that elevates careers without necessarily producing truth.
Let me flesh this out. Very tactfully put. Okay, so there's a colleague of ours at Stanford named John Ioannidis. Absolutely brilliant scientist, I think among the most highly cited living scientists in the world, right? He wrote a paper in 2005 with the title, Why Most Published Research Findings Are False. I mean, when you make a title like that for a scientific paper, it better be convincing. And in just a few pages, it's an utterly convincing paper.
And it's not because scientists commit fraud. That's not the reasoning behind it. Because science is hard. It's hard in exactly the way you just said, Andrew. So you publish a result. You believe it to be true. You have some statistically significant result at some level.
We say p equals 0.05. What does that mean? That some percentage of the time the result will be false, even though you believe it is true and it's been peer-reviewed by your colleagues. Peer review actually doesn't involve, as you know, the peer reviewers
taking your data and rerunning your experiments. It doesn't mean any of that. They just read your paper, looked for logical flaws, didn't find any, and then recommended to the editor that it be published. So peer review is not a guarantee that it's true. You have some significance threshold that your data meet. Even with that, some percentage of the time, the published result is going to be false. Now, if you consider that science a priori is hard, any result
that you publish is most likely going to be a false positive result. So-called negative results aren't incentivized. Yes. It's very hard to get a good paper published showing that something isn't true. It happens. I had a paper published in Science which argued that at least one aspect of a theory was not true.
It was a very prominent theory. It turns out other aspects of that theory were true. So sometimes it happens, but no self-respecting graduate student or postdoc who values their life is going to say, hey, I want to go in and try and disprove the hypothesis of one of the more famous people in the field. In fact, I didn't set out to do that. It just so happened that's the way it landed.
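To put rough numbers on the point about p-values and false positives, here is a back-of-the-envelope sketch in Python. The figures are assumptions chosen purely for illustration, not data from the conversation or from Ioannidis's paper; the point is only that when few tested hypotheses are true, a meaningful share of statistically significant findings will be false.

```python
def positive_predictive_value(prior_true, power, alpha):
    """Fraction of statistically significant results that reflect a true effect,
    given the share of tested hypotheses that are actually true (prior_true),
    the power to detect a true effect, and the false-positive rate alpha."""
    true_positives = prior_true * power
    false_positives = (1 - prior_true) * alpha
    return true_positives / (true_positives + false_positives)

# Assumed, illustrative numbers: 1 in 10 tested hypotheses is true,
# 50% power, and the conventional alpha of 0.05.
ppv = positive_predictive_value(prior_true=0.10, power=0.50, alpha=0.05)
print(round(ppv, 2))  # -> 0.53: roughly half of "significant" findings are true
                      #    even before selective reporting is considered
```

Under these assumed inputs, nearly half of published positive results would be false with everyone acting in good faith, which is the structural point being made here.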
And no one shows up in graduate school and says, you know, I love these papers. Let's replicate them. Yeah. Right. So let's get back to that, because you're absolutely right about the incentives. But before we get to that and analyzing the incentives, let's just put a fine point on the nature of the problem. The published biomedical literature, something that I've searched basically every day for the last 30 years, 40 years –
Oh my God, 40 years. That published biomedical literature: most of the time that I'm reading papers in that literature, the papers I'm reading, even though they say their result is true, are likely not true. Look, I had a professor in medical school who once told me, he was one of my favorite professors, he told me, look, half of what we're teaching you is false. Well, okay, so I'm glad you're pointing this out. I asked a very prominent neurosurgeon,
perhaps one of the most prominent neurosurgeons in the world. I said, what percentage? Someone else asked him, but I was right there. What percentage of information in medical school textbooks do you think is false? And he said, half. And then the second question was, what do you think the implication is for people, for human health? And he said, incalculable. Right, exactly. And that's true of the biomedical literature as well, right? So the published peer-reviewed biomedical literature is not reliable.
That's the bottom line. So a lot of the things that we think we know, even with some fair degree of certainty, are probably not true. And the question is, which half? Well, we don't know the answer to that question. It's probably a mix. Parts of papers are probably true and other parts are not.
Right. It's not like all the papers from one lab... well, there are those labs, but they don't last long. And this happens even with pure goodwill and no fraud at all. And the reason is a combination of the fact that science is hard and the incentives we've created for publication. Those two together mean that the scientific literature, the biomedical scientific literature, is not reliable. I've talked with drug developers who tell me that,
before they make vast investments in a phase three randomized trial or even phase one or phase two trials, studies, they conduct independent replication efforts of the basic biomedical literature to see if it actually is true. Now, those are private replication efforts so that the drug developers know which parts of the literature are true and false, but the scientific community at large doesn't know.
We've set up a publication system that guarantees that much of what we think is true is not true. That's a major problem for science.
And it's linked to this idea that you have to publish or you're out. It's linked to this idea that if you fail, if you publish failure, you're out. It's linked to this sort of reward that we give to scientific volume, like the number of papers we publish, and scientific influence. That's what citation counts are.
There's a number of, I'm sure you know this, Andrew, so I'm explaining it to the folks who are listening, something called the H-index.
So if you go to a site called Google Scholar, every scientist listening to this, I'm sure, has gone and looked at their Google Scholar page. They have a little card at the top right that essentially looks like a baseball card to me. And it has a few statistics. And if you're not a scientist, you won't necessarily know what those statistics are. But they are things like the H-index. Okay. So if you have an H-index
of 10, that means you have at least 10 papers published in peer-reviewed journals that have at least 10 citations each, but you don't have 11 papers with at least 11 citations each. So in order to get a high H-index, you have to have both a lot of papers and a lot of citations to those papers.
It's a funny number, because, just to bring back Watson and Crick, imagine the only paper they ever published was the structure of DNA. Good paper. The double helix. Let's say it has a million citations. Not peer-reviewed, but a good paper. That's a fantastic paper. And it was never peer-reviewed. Right. But a million citations. Imagine it was their only paper. Well, they have one paper with at least one citation, but they don't have two papers with two citations. So their H-index is one.
Or you could have a million papers in the Journal of Irreproducible Results, each with one citation, and you have at least one paper with one citation, but you don't have two papers with two citations, so your H-index is also one. Or you could write a lot of reviews, because reviews get cited like crazy. Yes. Okay. So what you have then is an
incentive for scientists embedded in Google Scholar that says, look, you have to publish a lot of papers, you have to have a lot of influence, because that's what a citation is. It's a measure of influence. You go to scientific meetings in order to sort of shop your ideas around, right? And so we reward scientists for the influence that they have, and we reward scientists for the volume of papers they publish.
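For anyone who wants the definition in computable form, here is a minimal sketch in Python. It is my own illustration of the standard definition, not Google Scholar's actual implementation, using made-up citation counts in the spirit of the thought experiments above.

```python
def h_index(citation_counts):
    """H-index: the largest h such that the author has at least h papers
    with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Made-up records echoing the examples in the conversation:
print(h_index([1_000_000]))                    # one hugely cited paper      -> 1
print(h_index([1] * 1000))                     # a thousand once-cited papers -> 1
print(h_index([12, 11, 10, 10, 10, 9, 3, 1]))  # mixed record                 -> 6
```

The examples make the incentive plain: the metric only climbs when both the paper count and the per-paper citations climb together, which is exactly the volume-plus-influence reward structure being criticized here.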
What we don't reward scientists for is honesty about their failures. We don't reward scientists for pro-social behavior, like the sort you suggested, where you collaborate, you share your data openly and honestly. In fact, we punish scientists for that. So right now, if somebody comes to me and says, Jay, I want to replicate your work. I've trained myself not to think this way, but it's really hard not to given the structure we're in. I'm going to think of that as a threat.
What if they don't find what I found? Now I'm a failure, right? The failure to replicate is seen as a failure of the scientist rather than the fact that science is hard and it's difficult to get results that are true even with the best of will. And we punish scientists for that.
So we essentially reward scientists for a set of things that creates incentives for the replication crisis to happen. I see. Right? So the solution to the replication crisis is to address those things, measure the pro-social things that scientists could do.
Reorient the incentives away from simply influence and volume. I'm not saying you shouldn't reward influence and volume. I'm saying you should reward a fuller set of things. It's like in baseball: if you reward a hitter for home runs but you don't also measure strikeouts, well, you're going to get a lot of strikeouts. You may get a lot of home runs, but that may be bad for the team in total, right? So you want a full set of statistics
measuring the things you actually want scientists to do in order to solve the problem. So let's say we had statistics that said, look, do you share data with others in your published research work?
And we have that as among the baseball statistics we put in Google Scholar, right? Let's say we ask, is your work subject to replication? Actually, if your work is subject to replication, you have ideas that are worth looking at by other scientists. That's a success no matter what they find. How frequently do you publish your false results or results that turn out to be not true, right? Imagine we had those statistics. We would have a fuller picture of what scientists –
like the capabilities of scientists, the outcomes of scientific work, and we would reward the pro-social things that would solve the replication crisis. And so what you have now is a real problem that's not been addressed. We've known about this now for decades, but it's not been addressed adequately. There have been a number of efforts by the NIH over the last couple of decades to try to address it, but it hasn't solved the problem. Well, I feel like the issue that really cracked this open, the reason the general public might have heard of
the so-called replication crisis, is this idea that there were some findings in the field of Alzheimer's research that were false, but they were wrong potentially for the wrong reasons.
As a scientist, you learn it's okay to be wrong for the right reasons, meaning your measurement tool was inaccurate, but it was the best you had at the time and you thought it was accurate. But a better tool comes along, you get a different measurement, a new result, because you were wrong for the right reasons. But you're not fudging data. You're not hiding data. There is this idea that in the field of Alzheimer's research somebody might have fudged data, made up data, and that the field kind of went along with it. That's not my understanding of what happened. My understanding is that somebody...
fudged data, and then nobody went back to check the primary data in that paper. And as a consequence, many years down the line, a number of subsequent findings were nested on a false finding and the whole thing tumbled like a house of cards, more or less. The process you just described is the replication crisis playing itself out, right? So you make investments built on a foundation of sand, and you eventually get fancy drugs
that are supposed to prevent you from getting the disease that you're trying to prevent, in this case, prevent you from progressing to where you can't remember the name of your kids and you can't live a normal, full life as your memory goes away. The drugs don't work for those things. And your question is why? They're built on the best science going all the way down. It turns out the best science all the way down is not replicable.
The fraud aspect of it is actually not even the most – it's important, but it's not the most important part of it. Like, it's almost just an afterthought, right? Ask yourself, why have there been so many scandals, like the one that brought down the former Stanford president? At the NIH, again, just within Alzheimer's, there was a director of neuroscience who had
apparently 100 or more papers with this kind of Photoshop fraud. So the question is, why have so many prominent scientists been brought down, where their work has been shown to be fraudulent? It's not a moral failure on the part of any individual scientist. The structure of incentives we've created produced those behaviors. We created them is what you're saying. Yes.
We said you will get advances in your career if you publish a lot of papers that have a lot of influence. And if you admit that you were wrong about something, your life is over. Your career is over. Yes. I think one of the most beautiful things in science was when Linda Buck, co-recipient of the Nobel Prize with Richard Axel for the discovery of the molecular structure of olfactory receptors, retracted, I think it was three papers from her laboratory. A postdoc
had either been sloppy or had fudged data. She retracted the papers because the papers were wrong. People told her this stuff doesn't replicate. Not only did it not hurt her career, it helped her career. She was right about the olfaction work that got her the Nobel Prize, but she was willing to admit a mistake. Someone in her laboratory made a mistake; ergo, she needed to retract those papers. What happened in the case of our former colleague – still, well, you're not at Stanford anymore – was,
Let's just put it this way. In every major laboratory that's publishing at a phenomenal rate, inside the field, there is always discussion. Postdocs talk. Graduate students talk constantly. And people know...
that some work is solid and other work – like, there's something that just gets said at meetings: no, nobody believes that. And that gets passed around, so then no one follows up on it. But it's rare that somebody goes and whistleblows the way that those papers got whistleblown. And then the right thing to do, in my opinion, is you correct or retract the paper. If you make a mistake, you correct the mistake. There are ways to do that. People publish corrections all the time.
Or you retract the paper if it's wrong. I think that the system, as you pointed out, has made it feel very dangerous for scientists that are approaching the pinnacle of science, like within reach of Nobel Prizes, winning Laskers, winning international awards, as was the case in all these instances.
That they could admit that they were wrong. Andrew, it's all up and down the system. Imagine you're a postdoc and you have to get your paper – you retract your paper. You're essentially starting over or leaving science. Yeah, you're leaving science. It's existential. Like the structure – so the problem of fraud in science then is a symptom of the broader problem of the replication crisis.
rather than the main driver of it. So the right solution then is not root out the fraud. The right solution is change the incentives of science so that we, as scientists, engage in pro-social behavior. Pro-social in this case meaning behavior that rewards truth rather than rewards volume and influence alone.
Music to my ears. How is NIH going to do that? So we were talking about the innovation crisis. That's a much more complicated crisis. This one actually I think is doable within the context of the NIH. I think you have to do three things. So first, you have to make it a viable career path to engage in replication work in creative ways. To some extent, there's some of this with like meta-analysis. Meta-analysis is the science of
analyzing the scientific literature to ask what the scientific literature as a whole says about a particular question, right? That's what meta-analysis is. And so there are people who make careers on meta-analysis. And so that's, in a sense, a kind of replication work. Studying studies. Yes, studying studies, right? But it's really difficult to make a career out of doing replication work as a general matter within science. You can't
win a large grant at the NIH currently where you say, "Oh, I'm going to do meta-analysis. I'm going to do replication work," which means you're not going to get tenure at a top university, because you can't win the large grant that you're required to get in order to get tenure. So you're not going to focus on replication work as a young scientist, even if you are very good at it, even if you could think creatively of how to do it at scale.
But it is discovery, right? Like I think, you know, we need to reframe it, right? Replication is kind of a dirty word. It shouldn't be. But, you know, years ago when gene arrays first became available where you could look at gene expression in cells or tissues. Now you do, you know, single cell sequencing and you can do deep sequencing and this has really evolved. None of those –
dare I say, are experiments in the classic sense. You're not testing a hypothesis. They are hypothesis-generating experiments. You get a bunch of genes and you go, well, that one's much higher in the cancer cell and that one's much lower in a non-cancer cell. I think I'm going to go do, like, a knockout of that gene or overexpress that gene. I mean, that's testing hypotheses, but there is work that's necessary but not sufficient. And what you're describing in terms of
meta-analyses, aka replication, maybe should be recast simply because branding matters. It shouldn't, but it does. And incentivize it as...
discovering whether or not discoveries are actually discoveries. What's more important than that? Yeah, like essentially saying, is the scientific literature true? Like assessing the truth of the scientific literature, right? That's what that is. And that's a real fundamental actual advance, right? Exactly the way you say it.
But we don't reward it. The NIH doesn't reward it. That will change. Well, drug companies, it occurs to me, should be incentivized to do it because it will save them perhaps – They do it.
They'd have increased confidence that molecule A, B, or C does what it's claimed to do. Sure, they'll test it again because they're about to put dollars behind it. No one wants to put dollars behind something that they aren't absolutely sure
is true, but you'd like the funnel to be narrower. Yeah, I mean, and right now they test it. They do the replication work. Drug companies, before they make those investments, do the replication work, but it's private so that only they know which results are true and false in the literature. I see. Right? So if the NIH does it,
The knowledge about which results are true becomes public, which makes the entire scientific literature much more reliable as a basis not just for drug discovery but also for individual behavior. Which health behaviors should I – I mean what food should I eat?
to make myself healthier. Well, that- No one can agree on that. I know, but the reason why- We can only agree on what you shouldn't eat. And even there, you know? I mean, I shouldn't eat the Skittles with the- I heard that processed foods are bad, but the other day I saw, you're not going to believe this, but there's a kind of emerging movement in one sector of the media that the demonization of highly processed foods is a conspiracy theory, which is like,
Like if that – but that's a perfect example of sort of what we're talking about more generally, which is that language matters. You can throw something in the trash bin very quickly by calling it a conspiracy theory until somebody makes – or a group makes the effort to bring it out over and over again and determine if it indeed is. You can also throw something in the trash bin very quickly if you just call it just a replication study or a so-called negative result.
A negative result says this particular pathway, molecule, mechanism, et cetera, is not doing what we hypothesized it would. That's a real advance in our scientific knowledge. Absolutely. Without question. So the reason why we don't have consensus on what the right thing to eat is because the scientific literature – well, first, it's a more complicated question than just science. But like part of it is that the scientific literature around it is not replicable. And those studies are really hard.
Yeah. Get people to eat the same things. No, I agree. People are probably sneaking Skittles. People lie about what they eat. By the way, I don't like Skittles. I was more of an M&M person just for the record. Whatever. Let's just leave that aside. Well, it's clear that the new administration both champions healthy, unprocessed foods, but every once in a while, you'll see one of them consume it. I've cut down the Skittles since I've joined the Maha movement. Whatever. I mean, or the M&Ms. Okay, so...
Let's go back to what we were talking about. You asked, how do you fix this, right? So one is you give large grants to people in the scientific community
who do replication work in creative, important ways, scalable ways. You farm out to the scientific community the question of what results in scientific literature really need replication? The key sort of rate-limiting step kind of results that we need to know if they're true to advance science and advance human knowledge on questions of health.
So you reward that with large-scale grants for those scientists. So now all of a sudden their status is lifted
compared to where they were before, which is down in the basement. Will there soon be an institute or a set of grants set aside specifically for meta-analyses, to help resolve some of this so-called replication crisis? Yes, I plan to do that. Fantastic. I don't think you'll get any pushback on that. However, every dollar spent one place is a dollar not spent elsewhere. Yes, but at the same time, making the entire scientific literature more reliable is money well spent. That is my belief as well.
Second, you have to have a place where you can publish this work. Right now, if you send your replication result to a New England Journal of Medicine or Science or Cell or Nature, they will not look at it at all.
The NIH can stand up and will stand up a journal where these replication results can be published and made searchable in an easy way so that you have some scientific paper you ask yourself, is this something that other people have found? You can go to the scientific journal that we're going to stand up. You can search it very easily and ask,
Where are the other papers that look at the same question, and what do they find, and get a summary of it? This is a little bit like Community Notes on X.
In a way, but it's the scientific literature producing the community notes. Right. These are formal papers with methods sections and credentials. Yeah. Not just anyone doing this work. It's part of the community of people that are looking at this question in rigorous ways, right? So the point is that you'll have kind of a Cochrane Collaboration. Cochrane is this group in the UK that grades scientific evidence on a whole bunch of different health questions in a way where they elevate rigor: randomized controlled studies are the highest level of evidence, N-of-1 kind of studies are the lowest level, and there's a whole bunch of things in between. And they'll produce reports that say, well, there's weak evidence suggesting this is true, there's excellent evidence suggesting this is true, there's no evidence suggesting one way or the other on this. They're very, very nuanced summaries. You should be able to do that, but with the published replication work as the core of it.
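As a rough illustration of that kind of graded-evidence summary, here is a small sketch in Python; the level names, ordering, and wording are simplified assumptions of mine, not Cochrane's actual grading scheme.

```python
from enum import IntEnum

# Purely illustrative hierarchy, from weakest to strongest evidence.
class EvidenceLevel(IntEnum):
    N_OF_1 = 1            # single-subject reports: lowest level
    OBSERVATIONAL = 2     # cohort / case-control studies
    SINGLE_RCT = 3        # one randomized controlled trial
    REPLICATED_RCTS = 4   # independent randomized trials that agree: highest level

WORDING = {
    EvidenceLevel.N_OF_1: "There is essentially no evidence one way or the other on",
    EvidenceLevel.OBSERVATIONAL: "There is weak evidence suggesting",
    EvidenceLevel.SINGLE_RCT: "There is moderate evidence suggesting",
    EvidenceLevel.REPLICATED_RCTS: "There is excellent evidence suggesting",
}

def summarize(claim: str, level: EvidenceLevel) -> str:
    """Produce a nuanced, one-line summary for a claim at a given evidence level."""
    return f"{WORDING[level]}: {claim}"

print(summarize("drug X reduces blood pressure", EvidenceLevel.SINGLE_RCT))
```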
And a scientific journal put out by the NIH, a high-profile journal, will then make publishing replication work a high-profile scientific, high-prestige scientific activity.
And the journal could also publish negative results. I tested this idea out. It didn't work. Published in the journal and now it's discoverable. It's no longer the threshold of you have to have a significant result in order to get your result published. You just publish the result because it's interesting and true even if it was a negative result, right? The journal then –
that the NIH will stand up, will plug a hole in the literature where we don't reward, where we punish failure. Instead, we would reward it where the constructive failures are published and communicated to the public, to the scientific community at large. We reward replication work, right? So fund replication work, create a place where it's publishable and essentially rewarded. And then third, this is probably the most important,
measure pro-social behavior by scientists. Make it part of the suite of statistics we use to measure scientific productivity: not just publication, not just influence, but also, do you share your data? Has your work been subject to replication? Do you cooperate with those replication efforts? Do you yourself engage in replication efforts of others? And make that part of the suite of statistics we use to measure scientists' productivity.
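To illustrate what such a fuller suite of statistics might look like, here is a hypothetical sketch in Python; every field name and formula is an assumption for illustration, not an existing NIH or Google Scholar metric.

```python
from dataclasses import dataclass

@dataclass
class ScientistRecord:
    # what the current system already rewards
    papers: int
    citations: int
    # hypothetical pro-social statistics, per the proposal above
    papers_with_shared_data: int        # publications with openly shared data
    replications_run: int               # replication studies this scientist performed
    replication_attempts_received: int  # independent attempts to replicate their work
    replications_confirmed: int         # of those attempts, how many held up
    negative_results_published: int

    def data_sharing_rate(self) -> float:
        return self.papers_with_shared_data / self.papers if self.papers else 0.0

    def confirmation_rate(self) -> float:
        # being subject to replication at all is already a mark of relevance,
        # whatever the outcome turns out to be
        if not self.replication_attempts_received:
            return 0.0
        return self.replications_confirmed / self.replication_attempts_received

# Example: a hypothetical scientist whose work is widely shared and mostly holds up
r = ScientistRecord(papers=40, citations=2500, papers_with_shared_data=30,
                    replications_run=5, replication_attempts_received=12,
                    replications_confirmed=10, negative_results_published=4)
print(round(r.data_sharing_rate(), 2), round(r.confirmation_rate(), 2))
```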
And now all of a sudden replication becomes something you want to participate in, even if you yourself are not doing it. It fundamentally alters the culture of science so that it rewards truth. Scientific truth is determined by replication, right? By independent research teams, rather than by influence. It's hard to think about as scientists. We think about scientific truth as: were you published in the New England Journal, or did you publish in Science, Cell, or Nature, or whatever. That's truth, peer-reviewed papers.
But in fact, the ground truth of science is determined by something really much more humble than that. It's by replication.
We need to reward the things that produce the ground truth rather than the things that reward just pure influence. And we don't do that. It's hard and it's almost impossible as scientists that have grown up in a community of people that reward influence as the primary measure of success to think what it would be like if we were to reward truth. But I think if we do these three things, it'll completely transform the nature of science. Why would you want to commit any fraud?
You're not going to get a reward for it. Yeah, you get a published paper, it might even be in a top journal, but it's not going to replicate. You won't want to share your data with people, because they'll find out you committed some fraud. All the incentives to commit fraud will just dissipate.
It'll be liberating for scientists to be able to focus on the things we actually care about, which is learning true things about the world, the reason why we went into science to begin with, rather than this sort of competitive process of trying to climb up a ladder that doesn't necessarily produce any truth. Amen to all of that. I feel very blessed that I had a graduate advisor who said –
She – it was wild; she unfortunately passed away young as well – but she said, you know, why would any scientist make up data? It's crazy, right? You're trying to figure out what's true. So that essentially means they're willing to lie to other people about their data, and to themselves in some sense, right? The other thing is, I'll never forget revising a paper with her and saying,
I remember thinking like, oh, well, we have this, this. And she said, whatever we do, we can't give the reviewers what they want. And I thought, that's a weird statement. All you ever hear is, you know, you got to give the reviewers what they want. But it's a very dangerous statement. And the reason she was saying don't give the reviewers what they want is you have to stay, you know, wholeheartedly committed to what you know and observe to be true. And you were closest to the data, so you would know. The other thing that I learned from her, and this relates to what you're saying, is that
It not only is okay, but it should be encouraged to publish papers in an array of journals.
I think the pressure to publish in high-profile journals in order to get a really great job is so great that it leads some postdocs, as it did in some of the cases we were talking about earlier, to either make up data or to throw away data that didn't fit in order to please the boss. Then the boss gets pulled into it. Then the boss tries to dissociate. This has been going on for so long. I feel very blessed that I was encouraged to publish some papers if they had a chance in Science and Nature, but other papers in fine journals like the Journal of Neuroscience
where the accuracy and in some sense, the volume of data was also encouraged. You could put a lot more data there. But now with online publishing and electronic formats, there's no limit to the amount of data you can put. So you can no longer use the excuse, well, you know, the high profile journals, you only can have four figures. So I think everything you're saying is very reassuring and should be reassuring to people. It's music to my ears, frankly, and I think it will be music to
to the ears of graduate students and postdocs who feel this immense pressure to make a major discovery, to make the lab head happy so that then they can get promoted to getting a job. Because most of the job process is powerful PIs picking up the phone and saying, I've got this postdoc, you should hire them. That's like, it's a lot of it. It's not all of it, but that's a lot of it. So having an elder that supports you is huge. The other thing that I just am so relieved to hear is that
The system has been around a long time, and it sounds like, from what you described, it worked really well up until about the mid-'80s or '90s, and that at some point something happened, something changed. And I don't doubt that scientific fraud took place a long time ago, and there wasn't replication then either, but I feel like some of the pure essence of science that you were alluding to earlier, people tackling new issues, that there isn't really –
It's more survivalist, careerist now than it is about the spirit of discovery, which is really about the spirit of finding out the truth. So, any reflections on this notion that we're in a more careerist mode of science nowadays? I think part of it is just that the sheer funding levels have been so high. I think over the time period you're talking about, there was a doubling of the NIH budget. There were all kinds of increases in the sheer volume of research done.
I think those were worthwhile investments, right? But we now have such high volume relative to what we had in the '80s, such high levels of funding relative to what we had in the '80s. Are you saying that we have too many scientists? No, I'm not saying that. What I'm saying is that we have to create structures that are appropriate for the volume that we have,
so that what we produce at this volume is reliable. It's a fundamentally different problem than we had in the '80s. So the structures that we had in the '80s, where we rewarded publication in peer-reviewed journals as the measure of success,
might have worked to create incentives for pro-social behavior in the '80s, but they don't work at the volume and the funding levels that we have now. And so we have to change the structures we have so that, given this volume of investment, people have the right incentives to behave pro-socially. We have to change how we structure the incentives in science to create the kind of pro-social incentives that we once had.
All right. Now that we're through the easy stuff, let's get to some of the harder stuff. I'm just kidding. You have a tough job, my friend. Let's talk about some of the recent changes in NIH funding that most people have heard about. And then we will segue to the barbed wire topics of vaccines and lockdowns. But before we do that, I heard, or at least my understanding was that when the new administration came in, they essentially went through and looked for the letters DEI,
and for the word transgender and basically halted or eliminated some lines of funding to particular labs. I also saw on social media, and I didn't validate this, that some studies that were focused on transgenic, not transgender, but transgenic mice, which is a very common tool in biomedical research,
got flushed in that process, so that maybe it wasn't a clean vetting of transgender versus transgenic. Look, every administration, every person makes mistakes, so I'm not trying to highlight mistakes, but I think this blew up. And it would be great, because you have an opportunity here to reach a lot of people, to just sort of clarify
what the rationale of eliminating grants that had a DEI or transgender component was. And then we can talk about this, what appeared to be a mistake.
Yeah, so first let me just talk about the mistake. First, most of this happened before I became NIH director. It was like early April is when I started. I think much of the – When did you start? April 2nd, I think. All right, so don't come after Jay for anything that happened prior to that. And it was actually quite frustrating to be on the outside looking in going, I can – Yeah, they were waiting for you to step in so you could take responsibility. Yeah, anyway. So they could blame you for something you didn't do. I don't mean to say like I – I'm still like responsible for like –
addressing this going forward. So I actually don't know specifically about transgenic. That's obviously a mistake. Transgenic mice are a key tool for discovery. If that was cut, I think we... Maybe it was a wording issue in a public address from the president. I don't know that they actually eliminated grants simply for studying transgenic mice. I know that grants focused on... look, years ago, I studied sex differentiation in the brain and body. So
not all studies where you give a male rodent estrogen or a female rodent testosterone are studies of transgender biology. Those hormones are active in both sexes. There are a lot of grants that you can imagine got flushed that were studying hormones and sex differences. My sense is there were some false positives like this. And I've worked very hard to make sure that those are corrected. There's an appeals process that I've set up, so that researchers who were stuck in this
with the false positive, we've restored a whole bunch of grants like this. Great. Where it's good science, but it got caught up in this DEI kind of focus on like refocusing the NIH portfolio away from sort of politicized ideologies and more toward things that actually advance health. So let me just address DEI specifically, okay? Sure.
First, this is really important to me. In my own research, I focused a lot on the health and well-being of vulnerable populations. A lot of my research is focused on the health of minority populations. And there are legitimate scientific questions where somebody's race or sex matters pretty fundamentally to the biology.
And so, of course, as the NIH, we have to be able to look at that. Yeah, some mutations only exist in certain races, or, I mean, breast cancer and the BRCA mutation, much more common in women. I mean, you can't pretend this stuff doesn't exist. Correct. And so, like, that's part of science. And the NIH absolutely supports that kind of research still, despite all of the changes in DEI. So I want to give you another example of an NIH success story, which
is the research on sickle cell anemia, right? So the strategy – it's a gene-editing strategy, essentially – is to switch the cells so that they express fetal hemoglobin rather than the adult hemoglobin that has the problem that causes sickling.
That's a fantastic result that's going to, I think, result essentially in a cure for sickle cell anemia. Amazing. Right? Amazing. And it's a thing that affects African Americans much more frequently than it does white Americans, just based on the genetics of the thing. So the NIH...
has in the past, and will continue in the future, to focus on research that advances the health and well-being of minority populations. It absolutely must. If the mission is to improve the health and longevity of the American people, that includes African-Americans, it includes Native Americans, it includes women, it includes minorities, it includes people of all different sexual orientations. All of that is still part of the portfolio of the NIH.
I want to distinguish that from DEI. DEI, I think, is something where – just to give you a sense of this, right? So in 2020, I was quite upset with Stanford, with the way that it was – we can talk about this maybe later in the podcast or a different podcast – but I'd grown disillusioned with the kind of academic freedom that I as a scientist enjoyed at Stanford, despite being a tenured professor.
And so I applied for a job outside of Stanford. I applied to a university. And one of the things they had me fill out was essentially a DEI loyalty oath, right, where you had to state essentially your commitment to the DEI ideology. Which was – I mean, just maybe we put, as you would say, a finer point on it, because I think these words, diversity, equity, and inclusion, right?
They're words, but what are they really talking about? That you're committed to having a lab where you include a certain number of people of different backgrounds? Or is it just sort of saying, I care about these groups? The key thing is race essentialism. That what makes you you is your race.
First and foremost, there may be other things about you that matter, but the most important thing about you is your race, and nothing else matters on the same scale. Right. That essentially is the heart and soul of DEI. So just to give you another, again, a concrete thing: the idea that structural racism is primarily responsible for the health outcomes of minority populations.
Right. Now, if you think about that, you say, OK, well, you know, maybe true. You may think it's true. You may think it's not true, depending on who you are, what you're listening to. But all I'll say is that I cannot think of a scientific experiment to do that would in principle falsify that idea.
Now, I can think of experiments to do that would say, okay, well, look, minorities are more likely to live in food deserts. So the food they get access to easily makes their health worse. That's a scientific hypothesis. You can test it. You can imagine the result being not true or true depending on the data you find, right? That's a scientific question.
That's not DEI. That's a scientific question about the health outcomes of minority populations that you can test scientifically. Whereas the idea that structural racism is responsible for the health outcomes of the minority populations of the country, that's not actually scientific in the same sense.
You mean there isn't a clear variable to focus on? Well, there are a lot of variables that could support or refute that idea. I don't think so. I think the problem is one of demarcation between what is science and not science. I see. I think it's like a structural...
So, like, you know, Karl Popper, a philosopher of science in the 20th century, probably one of the most important philosophers of science in the 20th century, had this demarcation criterion that said: look, is your scientific hypothesis in principle falsifiable?
So the structure of the atom involves certain hypotheses about what you can and can't observe about the momentum and the position of an electron at a particular time. Those are falsifiable questions. You can do an experiment that in principle could have falsified the Heisenberg idea, right? Yeah.
versus, for instance, Freudian psychology. He made the point that there was in principle no scientific experiment that was outside the system so that you could falsify the Freudian idea. Everything inside the system was – so it's not scientific. Yeah, I see exactly where you're coming from. I will just push back a little bit in service to the conversation, which is for descriptive work in science, there's no hypothesis. Right.
Billions of dollars of NIH money went to gene array, single cell sequencing. Those were hypothesis generating experiments. Could you falsify those experiments? Okay, a given cell, let's say a cancer cell and a non-cancerous cell from the same tissue express gene list A and gene list B.
Could you falsify those lists? Well, you could run it again and get a different list, but at some point you're running statistics on those. And did you falsify the first one? Not really. So just anything descriptive, like an electron micrograph, for instance, of a nerve cell, you see lots of stuff. Wow, the mitochondria are there. The vesicles are there. Now I get a more powerful microscope.
And I look and I go, oh, what I thought was one thing is actually two things. Did I falsify it? In some sense, yes, but I actually just separate it with a better tool. So a lot of descriptive science upon which like many of the great truths rest, including the double helix, right? Crystallography to find the double helix structure.
It's still a double helix thing. Thank goodness as of this morning, I think it's still a double helix. No one's proposed different yet, but most science isn't subject to this idea that you could like just falsify it with a counter hypothesis, or I would say a lot of science doesn't quite work that way. Now, what you're describing is a merge of sociological phenomena and scientific principles. And so maybe I'll just pose the question a little bit differently in an area that falls squarely in your court.
Up until I think pretty recently, maybe still now, but I think this was eliminated. If I had a grant from the NIH and someone was potentially coming to my lab who was an underrepresented minority, I could call up my program officer. That's not a parole officer, by the way, but they're kind of similar in that they control a lot of your life. And I could say, hey, listen, I've got a terrific young scientist coming to my lab.
I don't even need to say that. I'd say, hey, I've got a scientist who wants to come to my lab that's an underrepresented minority. And they would say, great, we will now add funding to your grant specifically to fund that person. I mean, they have to be what we call above the bar. They have to be capable of doing the work, et cetera. That has been eliminated.
I'm neither advocating for that nor fighting against it, but that's something that lands squarely in your camp. And it is clearly DEI. It's not a question of whether or not they're the best person for it. It's just more taxpayer money specifically to fund a researcher who would not otherwise have the opportunity. That's key because they are an underrepresented minority.
Okay, so let me—you have two items there. Let me address them both. Per usual, yeah. So the question about, like, hypothesis-driven science, right? So, like, inductive versus deductive science. The NIH funds both, and it should fund both, right? So the idea of a scientific project demonstrating differences based on race or some other variable that's biologically relevant—
for some health outcome without necessarily having a hypothesis, that is good science often. Women get breast cancer more often than men. So there's nothing wrong with that. There's no policy at the NIH not to fund that now. In fact, the NIH still funds and will continue to fund exactly that kind of science, right? Because it's still science. It's part of the scientific method. Whereas like
the claim that structural racism purely causes the health problems of minorities – I don't believe that is science. That's more of a psychology question than a biosciences question, right? I don't even think it's a... If it's a psychology question, it's not a scientific psychology question. I don't think it's science. I think it fails the demarcation test. Again, that's a hypothesis, right? So there's no problem then with hypothesis-driven science
if it's actually, you know, sort of focused on health problems that matter rather than just purely trying to demonstrate, you know, sort of sociological outcomes that are outside the purview of the NIH to try to address, right? Okay, so let's leave that aside. Before we do, there's an old saying that I learned from a very famous, excellent scientist, also deceased.
He used to say- You know a lot of great dead scientists. All my advisors are dead. So the joke in my field is you don't want me to work for you. Oh my gosh, okay. But I didn't have to deal with competing with my mentors and I did not have to deal with disappointing them or pleasing them.
So, you know, but I would do anything to have them back. Truly, they were wonderful people. I was very blessed. But there's a saying, which is a drug is a substance that when injected into an animal or person produces a scientific paper.
Which is basically to say that there are many studies where, when you introduce a variable, you're sure to get a difference. Like, if I want a paper, I give a drug to a person and I measure the amount of rapid eye movement sleep, because basically every compound alters rapid eye movement sleep, usually for the worse. It's kind of wild. An aspirin will do it.
I don't want to discourage anyone from taking aspirin, but it's so easy to tease out effects when you just introduce a dramatic variable. So I think that's what you're referring to. And it's not junk science, but it's not great science. Yeah. I mean, so like, right, for instance, that you don't have a control group. You're like, okay, what's the... You're just looking for differences so you can publish a paper. Yeah. Okay. So let's just leave that aside. So some of it's good science. Some of it's not good science. Some of it's not science. The DEI shift...
in terms of funded science, has been to try to excise from the portfolio things that are purely ideological boondoggles. Can you give me an example of some of these grant titles that no longer exist? I don't want to single anybody out, so I don't want to do that. But just sort of a general flavor. I mean, I'm having a hard time kind of— I mean, like, structural racism is the cause of worse—
cardiovascular disease in African-American populations. So something like that. That would be an example. It's not actually a specific example. Again, I don't want to point to any specific thing. No, it's a thematic example. Yeah, exactly. So that would be an example, right? So now let's talk about the support for underrepresented minorities and the set-asides. The position of the administration is that we should follow the civil rights laws of the country. Civil rights laws of the country say that we shouldn't be discriminating against people based on race.
when you have an institution like the NIH that essentially says, "We're going to consider your race when we decide whether we're going to give you support."
You can understand why for a large part of the American public, they say, well, why are you doing that? With their tax dollars. With their tax dollars, right? And actually, I should say, like, from the perspective of a minority student, it's actually quite condescending. Like, I believe very fundamentally based on lots and lots of experience with some excellent students I've had that minority students are –
often, if they make the right investments in the time and effort they put in, they can become excellent scientists. Sure. There's no barrier to that in the scientific... The only barriers are the structural problems with the incentives scientists have to make those investments in young careers and so on. But those are common across race. I think that if you solve those problems so that we invest in young scientists...
And not just at the level where they're, like, competing for NIH dollars, but even before, where everyone has access to those kinds of resources that the URM scientists used to differentially have. First, you're going to end up with a set of scientists that are actually more capable.
And you're also going to have minority scientists represented proportionally to the kind of desire that people have to become scientists. There's no –
Like, there's no field of human endeavor where you say, well, I have to have exactly the right proportion of every race. I mean, if that's the truth, then what you have to have is Indians and Chinese represented all the time – that's almost three billion of the eight billion people on Earth. Justice isn't that kind of race-essentialist representation. Justice is
that people who want to make the investments to become scientists have the capacity, the resources that we as a society are providing, so they can become excellent scientists, right? That has to be the case, right? And we're not – by shifting the investment portfolio toward this race-essentialist thing,
all that matters is whether you're a URM, an underrepresented minority. It doesn't matter if you're an excellent scientist. It doesn't matter so much. It may matter some, but that's not the key thing. It doesn't matter if you have a fantastic idea that challenges entire fields. All that matters is, what's your race?
It moves the emphasis in science away from what really matters in science. Like, what are your ideas? Are they advancing human knowledge? Are they translating into health for large populations? Are they true? Are you working in things that advance our knowledge and reliability of the entire scientific literature? I mean, those are the things that matter really for scientists, right? Right.
Why are we caught up, then, in this idea that somehow we can address all that? I mean, I just want to be very, very clear. There are real problems that minority populations have faced based on the history of the country. There are real injustices that happened as a consequence. But we're essentially asking the scientific institutions of the country to somehow solve these deeper problems of essentially cosmic injustice
in ways that we don't actually have the capacity to do and in some ways, A, distort the investments we make and B, cause large chunks of the American people to distrust us. Say, look, you're not really focused on the things that really will improve my life. You're interested in sort of cosmic justice rather than actual science. I think it's the right thing to do to say let's focus on the mission.
The mission is, how do we make investments in research that advance the health and longevity of the American people? And I don't believe there's any place for this sort of race essentialism in it. So you've talked about the DEI topic slash issue from the perspective of which science does or does not get funded. Okay. So, like, testing a non-falsifiable theory about race is not something that the NIH is going to continue to support. We are also discussing DEI in terms of which scientists get to be called scientists and which ones get funded. I suppose the universities decide who they hire, and then NIH plays a major role in deciding who gets funded.
So if I understand correctly, as of now, the funding of a given grant can't have anything to do with somebody's race or background. To which I say, why not just make it blind to who the investigator actually is? Now, I realize when people write grants, they say, previously we've shown, or, my lab does this. But why not just eliminate identity entirely and just say, what are the best proposals on the table? Let's fund
those proposals. When we talked earlier about early career scientists and providing support to them, that's essentially along the same lines, right? So we're saying we're going to de-emphasize the track record of scientists in deciding which scientific projects to fund. That's essentially what you're saying when you say we're going to fund early career scientists, because early career scientists tend to have less of a track record. I agree with that. I think the key thing is the ideas.
Are the ideas powerful? Are they promising? Are they worthwhile in terms of being able to translate to improved health for populations, right? So...
I don't know that it's possible to get rid of some elements of identity. Like, you know, you kind of want to make sure that they've had training as a scientist. Sure. Well, they could check some boxes. I'm not here to solve every aspect of the mechanics. But I guess it has to be relevant identity, like relevant. Like your race is not relevant to whether you have excellent scientific ideas. I've learned from people of all races, scientific ideas that have changed how I think about the world.
And race was not the key element in deciding whether they had a great idea or not. What really mattered was the idea. Now, it may be the case that some people, based on their background, will be more likely to have an idea in a particular field than people with a different background, right? So allowing people of lots of different backgrounds to have their say
matters, right? But rather than focusing on the race, focus on the idea. Is the idea important? Is it likely to translate to improved health for populations? Well, having sat on
a fair number of study sections over the course of more than 10 years, either as an ad hoc or regular member, I don't recall ever feeling in the room, or anyone explicitly saying, we need to fund this grant because it comes from somebody who's an underrepresented minority. There were grants that came from underrepresented minorities, some of which were terrific grants and some of which didn't get funded because they weren't as terrific. So are you telling me – and it's been a little while for me, not a long while – that there has been a recent pattern? I'm not trying to, you know, seed the question, but are you telling me that some grants were getting funded specifically because of the identity of the person writing the grant? I always thought grants were funded or not funded on the basis of the science in them. And I never saw that to not be the case.
I mean, I think there are markers of that that were increasingly emphasized. You already mentioned one, actually, Andrew. You said like you could call up your program officer and say, look, I've got a great postdoc who's a URM, which essentially means a minority, and would you like to fund him? And the answer would be yes. No, there was a pool of money. It was always a – it was a – no, it actually ran in the other direction.
It was well communicated from NIH that if we had someone who was an underrepresented minority who wanted to join on our grant, there was additional money to be had. Yeah, that was a stated policy. I think there was a website. It told us this. And, you know, okay, well, it's clear where NIH as it stands now, in the new administration, is on DEI. I am relieved to hear that grants that might have been caught in the filter
of this recent change – grants that did not actually qualify for what you're describing – that there's an appeals process, because I think that shocked some of us in the science community. We're like, oh my goodness, there could be terrific grants that just got the ax. Yeah, so there's an appeals process to fix that. Let me just make an analogy to something that happened during my career, I think it was around 2010.
The NIH put out a priority statement that said they were not going to fund health economics research, more or less. It was in the wake of Obamacare. There was a whole fight over cost-effectiveness research, and cost-effectiveness research became this like political football where – and the NIH said, look, we're not going to fund this kind of work anymore.
It impacted my career. Some of the work I'd done previously had to do with the relative cost-effectiveness of various drugs. And so I had to pivot away from that research if I wanted research support from the NIH. It actually impacted my career quite negatively. There are priorities.
The thing is, I don't want to argue the wisdom of whether that was right or wrong to do. I personally think it was wrong, but let's just leave that aside. I think the thing is, it's normal for the NIH to put out priorities that reflect the sort of social circumstances that are around us. Here, I think what we have is a shift to priorities that focus on the quality of the ideas and the science being done, rather than the racial identity of the people doing the science.
And I think fundamentally it's healthier, both because we'll end up having a set of scientific ideas that are more likely to replicate and more likely to translate into advances for health, and also because it's better from a sort of
racial, social point of view, because it de-emphasizes things that are irrelevant to the progress of, mostly irrelevant in terms of the progress of science, right? It shouldn't matter if you're a minority student, a very promising minority student, or if you're a very promising non-minority student for the NIH to support you. Both should get support. It shouldn't make any difference whether you're minority or not. And for the American public at large,
I mean, there's a sense of unfairness, right? Like, why are you – let's move aside from the NIH and move to, like, Harvard University and the case that it lost over admissions, right? I'm sure you remember this case, right? Where Asian students were found to be at a disadvantage in admissions into Harvard. Actually, the facts of this case are really shocking, right? So what happened was,
Asian students who applied to Harvard and non-Asian students would be evaluated by alumni interviews, where the alumni would evaluate their personality. Asian and African-American kids both had roughly the same average personality score as evaluated by an interview with alumni. Then the Harvard admissions officers would assign their own scores based essentially on personality.
But the admissions officers had never met the kids. And Asian kids routinely had much lower personality scores than African-American kids that applied.
That's what led the Supreme Court to say that was an illegal act of discrimination by Harvard against Asian kids. I think this focus on race, I can understand it because we have a history where race has been the crux of so much pain and suffering and injustice in this country. We have a legacy of slavery that goes back centuries. We have racism.
You know, laws that discriminated against African-Americans, like the Jim Crow laws. We have this painful legacy of slow progress in civil rights that goes back, you know, generations, centuries. So I understand that that's the backdrop. I'm not naive about that.
What I'm saying is that using the NIH to solve that problem is an inappropriate use of taxpayer funds. And actually, I think it makes those problems worse rather than better.
And in particular, and for me as the director of the NIH this is the most important thing, it doesn't allow me to meet my mission. The mission is to support research that advances the health and longevity of the American people, all of the American people. Whether you're a minority, whether you're an American Indian, no matter who you are, we should be doing research that advances your well-being. And that means, to me, I shouldn't be using the NIH for these sort of cosmic
justice purposes for which the NIH is poorly suited. But instead, we should be using the NIH for the purpose it is well-suited, which is to advance science that advances the health and well-being of the American people. Yeah, I can see the parallels to something like, you know, the space program where, you know, the space program is incentivized to try and figure out the best way to meet the specific goals of the program that year and in subsequent years. And if the public thought that
taxpayer dollars were being diverted according to a social justice issue in order to try and advance the space program in that way, as opposed to getting onto Mars or whatever it is. Maybe that's a bad example. It's so specific to Elon, but you get the idea. So it's very clear based on what you've said that you believe that the best way to serve everybody in the country in terms of health and longevity is to make the discoveries
verify those discoveries, and then distribute the devices, therapeutics, and behavioral tools from those discoveries in a way that will allow for the health of all Americans. And anytime someone says all Americans, it sounds like a political statement, I realize that. And to leave aside social justice issues en route to that goal. That's what I'm hearing. Yeah, I mean, except to the extent that there are social justice issues that
can be articulated as clean scientific hypotheses that actually matter, right? So, you know, race differences in biological variables, it's a fact that matters. Sure. Certain mutations run in certain populations. Certain advantages run in certain populations. So the NIH still supports that kind of research, but again, that's in service of the scientific goal, not in service of some social justice goal that the NIH is ill-suited to achieve.
- Yeah, as somebody who worked on vision science for many years, glaucoma is much, much more common in darker-skinned races. There are certain areas of the world where glaucoma affects an outrageously high percentage of the population, and it's not lost on people that there's a genetic, inheritable component, and some of the treatments might need to be tailored to those specific populations. - My grandfather went blind from glaucoma, so-- - Yeah, so get your pressures checked, everybody. Take your drops, get your pressures checked.
I'd like to pivot slightly to some issues related directly to public health. We have a kind of fork in the road here as to whether or not we focus on issues of public health from the recent past for which you became best known, aka COVID and the lockdowns, or whether or not we focus on public health issues that are more relevant now. I was told by many, many people who are not scientists but care a lot about science that
quote, until the scientific community acknowledges two things, they don't want to give another dollar to science. Those two things are, one, the replication crisis. We talked about this. And by the way, I think your plans to deal with that are fantastic. I love this idea. And I think many students and postdocs will be excited to be part of the correction process that will evolve science.
And the second one is an admission of error about our past. I want to be very clear, not to protect myself, I have plenty of work to do no matter what, but these are not my words. The words were: the scientific community did us wrong. The lockdowns were unfair to, in particular, working-class populations. We were told one thing about masks, then told another.
We got a kind of loop-de-loop of foggy-speak politico messaging about vaccines and what they did do or wouldn't do. And basically, I hear from a lot of the general population, not just people on the MAGA, MAHA, whatever you want to call it side, but also a lot of stated Democrats and people who are truly in the center, that they lost trust in science and scientists, and they will not –
consider restoring that trust until scientists admit that they made some mistakes. And it took me a while to hear that message because I'm like, hey, listen, I have friends trying to cure blindness, cure Alzheimer's, use brain machine interface to cure epilepsy and get paralyzed people to walk. And you're talking to me about something that happened, but I finally had to just stop and listen because they kept saying, we don't care. And so it's almost like
Big segments of the public feel like they caught us in something as scientists and we won't admit it. And they're not just pissed off. They're kind of like done. I hear it all the time. And again, this isn't the health and wellness supplement taking, you know, anti-woke crowd. This is a big segment of the population that is like, I don't want to hear about it. I don't care if labs get funded. I want to know why we were lied to or the scientific community can't admit fault.
I just want to land that message for them because in part I'm here for them and get your thoughts on what you think about, let's start with lockdowns, masks and vaccines, just to keep it easy. And what do you think the scientific community needs to say in light of those to restore trust? So first, let me just say, I don't think I'm the NIH director unless that were true.
Unless what you said is true, otherwise I'm not the NIH director. So I was a very vocal advocate against the lockdowns, against the mask mandates, against the vaccine mandates, and against the sort of anti-scientific bent of public health throughout the pandemic. I've also argued that the scientific institutions of this country should come clean about our involvement in very dangerous research that potentially caused the pandemic. The so-called lab leak hypothesis. Yeah.
Right. So let's just stay focused on lockdowns. And I want to make the scientific case that they were a tremendous mistake, and that that was known at the time. Let me just focus on one aspect of it – we can broaden out to other lockdown measures later – just the school closures. So what the public at large now sees is that American kids, especially minority kids, are two years or more behind in their schooling.
We decided during the pandemic that children ought to learn to read as five-year-olds or six-year-olds remotely on Zoom. We decided that in-person schooling didn't matter anymore. My kids in California were kept out of school, public school, for
a year and a half. If they saw the inside of a classroom, it was with plexiglass, separated from their friends, eating lunch, isolated alone, right? The message to American school kids was essentially your school doesn't matter. Your future doesn't matter. American public health embraced that entirely. In Sweden, they didn't close schools for kids under 16 at all.
That was a deliberate Swedish policy; Anders Tegnell, the head of Swedish public health, explicitly made that a priority. In the summer of 2020, the Finns and the Swedes compared their results.
The Finns had closed schools in the spring of 2020 and the Swedes had not. And they found there was no difference in health outcomes for COVID. The teachers in the schools, in Swedish schools, actually, they had no worse outcomes than other workers in the population.
And on the basis of that evidence, and the fact that we know closing schools harms the future health and well-being of kids, even short interruptions in schooling, which we knew for a fact based on a vast literature that existed before the pandemic, many schools around Europe opened up in the fall of 2020. The scientific evidence was abundant and clear, even by late spring 2020, that the closure of schools was a tremendous mistake for kids.
And yet, when I wrote the Great Barrington Declaration with Sunetra Gupta of Oxford University and Martin Kulldorff of Harvard University in October 2020, I faced vicious attacks by the scientific community and the medical community for being unscientific about school closures. Were there threats to your job at Stanford? Yes.
And were those real threats, or just people saying, we're going to take away your job? Okay. In March of 2021, I was part of a round table with Governor DeSantis, a policy round table, where he asked me whether there was any evidence that masking children had any effect on the spread of the disease. And the answer is there's not a single randomized study that looked at kids. The U.S. was an outlier in recommending that kids as young as two years old get masked. In Europe, 12 was the typical age cutoff. There were no studies. In response to that, a hundred of our colleagues signed a secret petition effectively asking the president of the university to silence me. Were you contacted by the university administration? No, I found out about the petition from a couple of my friends who leaked it to me.
And then I went to the press and said, look, this is – you should go ask the president about this. And then he had to say that – he had this mealy-mouthed statement about academic freedom but also essentially that we – that it's really important that we obey public health authorities or something.
Like political, like boilerplate speak. Yeah. And in 2020, I'd been subject to all kinds of sort of attacks on me. I mean, just, I don't want to relitigate this history, but I'll just say that Stanford failed the academic freedom test. It didn't hold a scientific conference on COVID with alternative viewpoints, with viewpoints that were anti-lockdown until 2024 when I organized it.
Even though I asked to have a conference in 2021 and 2022. But your job security wasn't threatened in a direct sense. No, that's not true. Like, no one came along and said, hey, quiet down or else you're going to lose your job. So in that sense, you had academic freedom from the top. That's not true. I was asked to stop talking to the press in 2020, by the dean of the medical school.
My academic freedom was pretty directly threatened. I wrote and published a study on measuring antibodies in the population, a study that was replicated dozens of times around the world. And I was essentially ordered to redo that study. They interfered even before I had sent the paper in for publication.
When I say they, I mean the administration of the medical school. My academic freedom was pretty directly attacked. And I wrote a piece about how Stanford failed the academic freedom test; folks can go read it if they want.
I don't want to relitigate the past. No, listen, I'm not trying to dig for dirt. I ask because, well, I never saw a petition cross my email path. I did see a petition pass my email path about Scott Atlas, who was in our Department of Radiology as a physician, as you know, and was appointed to Trump's Coronavirus Task Force. And then there was a petition basically asking that his job be taken away. I don't know exactly what it was, but that passed through. But a lot of petitions pass through my email, and as everybody knows, and the press has pointed out, I'm not great at email and communications. But I guess the reason I ask is academic freedom means many things. Like, can you tweet what you want to tweet? I guess we don't call them tweets anymore, but at the time, could you tweet what you wanted to tweet? Could you continue to do the science that you were doing?
Could you, did you continue to collect a salary? It sounds like you were able to keep your job, but there was some pressure to not communicate your ideas. Is that about right? Yeah. I mean, there was a threat to my job as well. I think the issue here is, OK, there's a sense of positive and negative academic freedom. Negative academic freedom means
There's no active attack on me and my capacity to do work. I think Stanford failed that as well. Like there was an active attack on me. So, for instance, there was a poster campaign all around campus with my face on it, essentially accusing me of killing people in Florida for advising Governor DeSantis that there was no evidence that masking children benefited anybody.
Right? And essentially it was a threat. At the same time, I was getting death threats from people. The former head of the NIH wrote an email to Tony Fauci four days after the Great Barrington Declaration calling for a devastating takedown of the premises of the declaration. And that resulted in essentially press propaganda pieces, in the New York Times and elsewhere, mischaracterizing what the Great Barrington Declaration said, which was to protect older people better and open schools, let kids go to school. Mischaracterizing it, in a propagandist way, as if we were saying we wanted to let the virus rip. And that led to death threats against me.
At the same time, there was this poster campaign all around campus. I called the campus police. I told the folks in my department at the medical school that this was happening. And their response was to send me to a counselor and to suggest I reduce my online presence.
So Stanford absolutely failed during the pandemic. In 2020, the former president, John Hennessy, approached me wanting to organize a discussion, some sort of panel where different perspectives on how to manage the pandemic, the lockdowns and elsewhere could be had.
And even he couldn't get this organized. Hennessy couldn't? No. Hennessy is one of the most beloved presidents of Stanford ever. I have tremendous admiration for him, but the pressure was absolutely enormous. The fact that he approached me at all was actually a credit to him. He's one of the few officials at Stanford who approached me during the pandemic to try to give me a hearing. I mean, I might have been right or wrong, it turns out I was right, but in principle, Stanford should have had those debates back in 2020. We had prominent faculty, people like John Ioannidis, Scott Atlas, Michael Levitt, and others, who were opposed to the lockdowns, and yet we couldn't get a hearing. Yeah, Levitt reached out to me at one point. As I've been criticized for before with this podcast, I mainly focused at that time, we launched in 2021, on ways to deal with anxiety, circadian rhythm, and sleep, because people were dealing with those issues. I'm not a virologist, so I couldn't talk about virology or epidemiology, but...
Andrew, it wasn't on you to put us on a platform. It was on the Stanford University administration to organize discussions and debates on the most important topics of the day. And that included, in 2020, were school closures the right approach? I read enough comments and get enough calls and emails to know that when people hear this, their minds will go to questions about what the incentive was, financial or otherwise, for Stanford, or to broaden it, for any university for that matter, to not allow you to have these discussions. Right. I mean, Stanford's not the only university on the planet,
For, you know, a panel, a discussion about these issues to be held. Well, we have a health policy department. What's the purpose of it if not to like impanel the most important debates about health policy of the day? So what do you think was going on? I mean, the vaccine technology was developed at multiple sites, right? I think Stanford had something to do with the development of the technology.
There were other universities that were involved in the development of the technology as well, right? And I think, in the back of this conversation, I know what's buzzing, but let's just be direct here. For you and me, there was a vaccine mandate. This was 2020. Well, eventually there was a vaccine mandate: if you wanted to keep your job, unless you had a religious or medical exemption, you were told you had to take the vaccine.
People did what they did. Some people did. Some people, I know colleagues that falsified cards. I know colleagues who got nine vaccines, everything in between, right? But there were mandates. So to be clear, you were opposed to the lockdowns. Yes.
And you were opposed to vaccine mandates. Were you also vocal about that? Yes. Because that's even, I mean, that's even touchier. I was an expert witness in a number of cases on the vaccine mandates, including one that reached the Supreme Court and overturned the OSHA vaccine mandate. So, yeah, I mean, I was vocally opposed to the vaccine mandates. I was vocally opposed to the mask mandates.
On the lockdowns, I was vocally opposed to school closures. I emphasized the harm that the lockdowns did to the world's poor, right? So in April of 2020, there was a UN report that calculated that 100 million people would be subject to starvation as a consequence of the economic dislocation caused by the lockdowns. I was opposed to that.
As for the idea that the lockdowns were the right strategy, well, lockdowns at the scale we had are unique in world history. Lockdowns of that length and that scale were no part of any previous pandemic plan or any previous pandemic management experience. And it was very clear to me, with my background
in health policy, that we were going to harm the poor, we were going to harm children, and we were going to harm the working class at scale. The lockdowns were a luxury of the laptop class. And that's what I was advocating at the time. The university issue wasn't just Stanford, you're right. But in fact, there were almost no universities that impaneled these kinds of discussions until well into 2022. So what do you think happened? I'm not seeding the question or leading the witness, but do you think that there was a fear among the academic and science community that if it were allowed for people to speak out or consider different aspects, positive or negative, of lockdowns or vaccine mandates, that somehow their existence would be at risk? Like this got to an issue bigger than the lockdowns and bigger than vaccines? Because I think this whole issue was really a question of whether or not we consider scientists experts. The word expert has become a very touchy thing. Who gets to be called an expert? Who designates which experts are really the experts? I mean,
all you have to do is accuse someone of misinformation and suddenly their expert card is taken away, even if they hold a position in the very area in question. I've been a tenured faculty member at Stanford School of Medicine for decades, right? I've been a full professor with a long scientific history of published papers in some of the top medical journals, the top statistics journals, health policy journals, economics journals, and so on. And that wasn't enough, right?
Right. OK, let me just give you one version of this. There were other aspects at play. For instance, I think people were genuinely scared. Scientists were genuinely scared for their own mortality, especially in the early days of the pandemic, and that clouded the way they thought about it.
Especially since there are a lot of older scientists. I'm not trying to pick on older scientists, but there are a lot of them. Yeah. And older people were dying more, correct? Yeah. I mean, that was actually the most important epidemiological fact about COVID: there was this very steep age gradient in the mortality profile. Young people, very low mortality risk. Older people, much higher mortality risk. What was the rate of mortality among people 70 to 85 years old, roughly? Five to 7%, somewhere in there. Okay. So not a trivial number. No, it's huge. Like one in 20 to one in 14. And that was death directly from COVID itself, not some confounding variable? Yeah, especially early in the pandemic, right?
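As a quick illustration of that arithmetic, a fatality rate quoted as a percentage converts to the "one in N" framing like this (the 5 to 7 percent range is the rough figure recalled above, not a precise estimate):

```python
# Convert a rough fatality rate, quoted as a percentage, into "1 in N" odds.
# The 5-7% range is the ballpark figure recalled in the conversation above,
# not a precise estimate.
for rate_percent in (5.0, 7.0):
    one_in_n = 100.0 / rate_percent
    print(f"{rate_percent:.0f}% fatality rate ≈ 1 in {one_in_n:.0f}")

# Prints:
# 5% fatality rate ≈ 1 in 20
# 7% fatality rate ≈ 1 in 14
```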
Okay, so I want to leave aside the personal fear, although I do think that played a tremendously important role in the thinking of scientists, especially since scientists as a class tend to be part of the laptop class, right? People who have the economic resources to shield themselves for extended periods of time without any threat to their livelihood. That's not true for most of the world, but it's true for scientists.
So let's leave that aside and let's just focus on what I think was a core dynamic, right? So there's two norms, two ethical norms in science, and they competed with each other. In science, free speech is an absolute must. If you have an idea that's different from mine, you should be able to express it.
And then we can, you know, we can test each other's ideas out. We can maybe devise an experiment to decide between us. And whatever the experiment says, we'll say, OK, you're right and I'm wrong. And, you know, I'll buy you dinner or something. Right. That's good. That's how science advances, like through this kind of like this process of people talking to each other and having free speech, the ability to come up with ideas and articulate them, defend them is absolutely fundamental to the progress of science. Public health has a different ethical norm.
Public health has an ethical norm of unanimity of messaging. This ethical norm has as its moral basis that the communications public health puts out are grounded in consensus science, right? So, for instance, if I, a former professor at Stanford and now the director of the NIH, were to go out and say smoking is good for you, well, I've committed an ethical sin.
I've done something really deeply wrong because the scientific basis for the idea that smoking is a terrible thing for you, it really harms your health in concrete ways, that's, I mean, that's like rock solid in science. So the idea that I, as a person who works in public health, shouldn't go out and say smoking is good for you, that has a good ethical basis rooted in science. The idea that closing schools is good for you,
The idea that wearing a cloth mask prevents you from getting COVID. The idea that immunity after COVID recovery doesn't exist. The idea that the vaccine will protect you from getting and spreading COVID forever. None of that was rooted in science.
And yet, the public health authorities of this country decided that they were going to enforce the same kind of ethical approach, the same sort of ethical strictures, on those topics as they do on smoking. When you say none of it was rooted in science, are you saying the science was mixed, or that there was literally no evidence?
There were a dozen randomized trials on flu before the pandemic, and there was a Cochrane report looking at the literature on masking and influenza. And they concluded that the evidence was weak at best that these kinds of cloth masking in population settings actually prevent the spread of influenza.
I heard a number of people say, what's the big deal about wearing a mask? There was also that argument: it's not the same thing as a vaccine, it's just a mask. You could argue over inhaling excess carbon dioxide, not seeing smiles, losing social interaction. Listen, I'm just opening this up for the sake of consideration. So why did masks become such an issue? Was it because it was a mandate? The mandate mattered, but I'll also say there were harms, some of which were recognized, some of which were not. For instance, I heard from parents of hearing-impaired kids that mask wearing impaired the ability of their kids to learn to lip read, right? But it's also true that if you adopt and embrace public health messaging that's self-evidently not rooted in science, you're going to undermine the public trust in science and in public health. I will say, based on these voices that I hear from a lot of people, that's what they're asking for. They're asking for the exact message that you're delivering now, which is,
I'll say it differently. They want to hear the scientific community say we messed up. Yeah, and we should. We should absolutely say that. So, for instance, you wear a mask while you walk into the restaurant. You sit down to eat and you take your mask off. And that protects you from getting and spreading COVID how?
Like everyone could see that. You don't need to be a scientist to see that that was obviously ridiculous public health messaging. It was a weird time. And let's just say, could this public health messaging be dangerous? Well, yeah. Imagine someone who's like 80 years old. They have a lot of chronic conditions. It's the height of the pandemic, like July 2020 or something, or June 2020. And they're told, if you wear a cloth mask, you're safe.
They go out in public and take risks that they otherwise would not have taken on the idea that they're safe wearing a cloth mask and they get COVID. The recommendation, not rooted in science, actually could end up killing people. It probably did.
Right. So none of these things were just, well, low cost. It may be low cost to somebody who's not particularly bothered by mask wearing, but it can still nevertheless end up causing harm. And I think it did. Why weren't there panels of scientists, as opposed to one individual? Yeah.
Tony Fauci. By the way, I invited him on the podcast. Did not get a response. This was a long time ago. I thought if I was going to hear about it, you know, these issues from anybody at that time, it made sense to contact him. And he apparently wasn't interested. We would have, of course, done it remotely. Why wasn't there a panel? So my feeling is when you have an individual, it changes the whole discussion.
But when you have a panel that looks kind of like the United States, and this isn't for diversity reasons per se. This is about just a collection of smart people is way better than one person always, in my opinion. And they could come to some sort of consensus or maybe even disagree publicly. I think panels would have been better.
Well, I think – let's leave aside Tony Fauci because I think he was a very important figure and of course was basically a major spokesperson for the public health point of view. But there was essentially a groupthink at scale. It was impossible to organize a panel with the kind of diversity of opinion that was needed.
There were a million or more, I know this from the set of people who signed the Great Barrington Declaration, tens of thousands of scientists and doctors who disagreed, but they were afraid to stick their heads up for fear of getting them chopped off. It's not an accident that Stanford didn't allow a scientific panel that included my point of view about the efficacy of lockdowns until 2024.
The idea was that we needed to have unanimity of messaging. If you had prominent professors at Stanford or Harvard, Oxford or elsewhere saying that the lockdowns were a bad idea, which they were, then you're going to undermine public compliance with the orders that were being put out. Just a quick diversion, how do I know that the lockdowns are a bad idea?
If you ask which country had the lowest all-cause excess deaths in all of Europe, all-cause meaning deaths from all causes, and excess meaning, given the age structure of the population, how many deaths you would have expected even if there hadn't been a pandemic versus how many there actually were. Yeah.
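To make that metric concrete, here is a minimal sketch of an all-cause excess-deaths calculation. The numbers are purely illustrative placeholders, not Swedish or any other country's actual data, and a real analysis would build the expected baseline from pre-pandemic trends adjusted for the age structure of the population, as described above.

```python
# Minimal sketch of an all-cause excess-deaths calculation.
# All numbers are illustrative placeholders, not real data.

# Expected deaths: what you would project for each year from the pre-pandemic
# trend (a real analysis adjusts for age structure and population size).
expected_deaths = {2020: 91_000, 2021: 92_000, 2022: 93_000}

# Observed deaths from all causes, whatever the cause.
observed_deaths = {2020: 97_000, 2021: 95_000, 2022: 94_000}

for year in expected_deaths:
    excess = observed_deaths[year] - expected_deaths[year]
    pct = 100 * excess / expected_deaths[year]
    print(f"{year}: excess deaths = {excess:+,} ({pct:+.1f}% above expected)")
```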
Which country in Europe had the lowest all-cause excess deaths? It turns out it's Sweden, which didn't follow the lockdown. So the lockdowns were not a necessary policy in order to protect human life, and they weren't sufficient to protect human life either, right? So you had sharply locked-down countries like Peru that had tremendous deaths,
So the lockdowns were neither necessary nor sufficient, and they caused collateral harm at scale to the poor, to the working class, to children, harm that we're still paying for, that people are still suffering from, the long tail of the lockdowns. For years in the United States, in 2020, 2021, 2022, the deaths from drug overdoses were something like 100,000 people a year.
This past year, it was 80,000. We declared success. We went down 20,000. Before the lockdowns, it was maybe 20,000 deaths a year. And that was a catastrophic failure, right? So the problem here is that the scientific community embraced an ethical norm about unanimity of messaging, and then it forced it on fellow scientists. And then it cooperated with the Biden administration to put in place a censorship regime that
that made it impossible even for legitimate conversations to happen. So after the COVID vaccines came out, there were a community of people who were legitimately vaccine injured. The Biden administration went to Facebook and told them, essentially ordered them, that you need to shut down the patient groups that are discussing the vaccine injuries. Or else what? The threat was usually implied or else essentially destruction of your company.
President Biden goes on national TV, says – and he has a complete right to do this. He has the right to do this as president, say, look, Mark Zuckerberg is killing people. He did that. He actually did that. And then he – and then quietly behind the scenes, they pressured Facebook to censor
patient groups that were discussing their vaccine injuries, even in private groups. And no one was putting their stuff out on X, then called Twitter? X did the same thing, right? So I joined Twitter in August of 2021. The first thing I posted was the Great Barrington Declaration. The day I joined Twitter, I was put on a blacklist to suppress the spread of my ideas on Twitter. And almost certainly— That's confirmed? I mean, I'm not questioning the validity of what you're saying. I saw it with my own eyes. But that was confirmed by the so-called Twitter files? Yeah. When Elon bought Twitter, he opened up the databases and invited me to go see them at the Twitter headquarters. I saw it with my own eyes. I saw my face, and it said the word blacklist on it. Which meant what, that when you would post, no one would see your posts? Well, it was a shadow ban, a trends blacklist. Yeah, it was a shadow ban, and I didn't know I was on it. It just made sure that only my strict followers would see the posts and nobody else had any chance of seeing them. I mean, the whole reason I joined Twitter in the first place was to engage with people that didn't know my ideas, and the blacklist made sure that my ideas were not seen by those people. So this is
Part of the reason why I think podcasts like the Joe Rogan podcast became such a lightning rod for this discussion. What's interesting is that
Remember they used to put a little tag on podcasts, it would say this may contain misinformation. What they forgot, whoever was imposing that, because I don't think it was from the podcast houses themselves, but whoever directed that. The federal government. Yeah, they forgot about the 90s, when albums had explicit lyrics and they would put a warning, contains explicit lyrics, and everyone would go and click on those or listen to those. They sort of forgot human psychology. That's the beauty of the American people: we like rebels. Yeah, exactly. It's so pinheaded it's almost unimaginable. Basically, the public health authorities of the country, and the government around them, decided that they knew best,
that they were going to control the conversations of the public at large, essentially propagandize them. The real question is why. And, you know, people are probably thinking, ask him about big pharma, ask him about the amount of money that Tony Fauci made. You hear these theories, right? But why?
Most biomedical scientists running labs at universities aren't going to make a dime from pharma. Most, if you saw their salaries, most people would be unimpressed by those salaries. If you look at the salaries relative to their hours worked, you would be even less impressed. So sure, some people stood to get really rich, but I can't imagine that's the reason.
So the question becomes why? Why all this suppression? Why all this groupthink? What were people so darn afraid of? I think, let's put yourself back in 2020, 2021. I think that while, again, I'm not naive, I do think monetary factors played a tremendously important role. I don't think that they were the central reason.
Okay. I agree with you about that. I think the central reason is that the scientists that supported the censorship efforts, the scientists that embraced the sort of omerta around opposing lockdown, that supported that, essentially the vilification of fellow scientists who disagreed with them, were doing it because they thought they were doing good. They thought they were doing good. Yes. Yes.
I think essentially what happened was that rather than thinking like scientists, they were thinking like propagandists. And in this case, they were public health propagandists. They thought that their job as scientists was to echo public health propaganda rather than act like scientists and ask questions about the messages that the public health authorities were putting forward. I'm going to push back a little bit. In fairness, that's a perfectly valid hypothesis, and you were at the center of this and I wasn't. But many of these people are very, very smart people. We can talk about universities as these abstract places, but these are places made up of people. And while not everyone at these places is brilliant, some of them are truly brilliant people, and enough of them, dare I say, are of a sort of left-brained-ish, spectrum-y type phenotype where they're not pulled into emotional issues the same way that we might think they are. And so it's hard for me to imagine that really smart people would join a dialogue that didn't consider all aspects. And yet that's exactly what happened, Andrew. Think about that, right? I mean, I've thought about that quite a bit. I don't think it had anything to do with being smart or not smart.
There were a lot of really smart biologists in the Soviet Union. When Lysenko told Stalin that Mendelian genetics was a capitalist plot and that Lysenko's approach was the way forward, a lot of excellent biologists, for fear of being sent to Siberia, kept their heads down and said nothing, even in areas that were directly in their field. So it was fear of being ostracized and shamed by one's community. And it took just a few examples. So, you mentioned, I think I mentioned earlier, Scott Atlas, who was a colleague of mine and a friend. In 2020, the Faculty Senate of Stanford
voted to censure him. Stanford has censured only three professors in its entire history. One was a man named Edward Ross, who was a eugenicist in the early part of the 20th century. He was one of the leading eugenicists in the country, and Jane Stanford hated him and worked to get rid of him from the faculty. He was fired? He was, or resigned or left. I'm not sure exactly, but he was let go. I think he was an assistant professor. Then there was Bruce Franklin, who was an English professor at Stanford, I think he worked on science fiction. But he was an anti-Vietnam War activist, and he brought essentially a terrorist group to campus. There had been massive public focus on it, so he was given a chance to defend his points of view, and he eventually was censured by Stanford. For being anti-Vietnam War or for bringing – For bringing the terrorists onto the campus. Yeah. I mean, bringing terrorists onto campus is bad.
Well, in any case, there was kind of due process around both of those things. Like they got their say. Scott, his major sin was he advised President Trump during the pandemic. And he advocated for keeping schools open, again, consistent with what was happening in Sweden, and for protecting older people better because they were at higher risk of dying if they got COVID. That was his sin. He was seen next to President Trump. And that led...
the Faculty Senate of Stanford to issue a censure of him, something they still haven't taken back, that has, if you look at it, religious language. They declared him anathema. They effectively excommunicated him. His family was essentially ostracized by their neighbors; he lives on campus. It was an absolutely disgusting act. And it was aimed not just at Scott, but more generally at sending a signal to anyone who agreed with Scott to keep their head down.
And it succeeded. Not entirely. He's at Hoover, right? Yeah, he's at Hoover. But he was formerly at the medical school as the head of neuroradiology. He's a very accomplished scientist and wrote a leading textbook on neuroradiology. For a decade, he'd been an advisor to presidential candidates on health policy, so he understood it from a broader point of view. He also comes from a working class background. So it was guilt by adjacency. Yeah. But it was aimed at silencing opposition, right?
to the lockdowns. And it worked in large part. I lost count of the people from inside Stanford and around the country who would write to me saying, I'm glad you're speaking up on these issues, please keep it up, I don't want to do this because I don't want to risk my job. Well, you weren't completely alone. Levitt has a Nobel prize, and you had some buddies who were pretty smart and pretty powerful. I mean, they don't give Nobel prizes to just anybody. No, Mike is incredible. He's a very brilliant man. But Stanford in that sense was better off, right? We had a sort of underground that opposed the lockdowns, very prominent scientists like John Ioannidis, Mike Levitt, Scott. There were people at places like Harvard and Oxford. At Harvard, there was Martin Kulldorff; at Oxford, there was Sunetra Gupta. There were folks all over the world. But institutionally, the universities of the world made it almost impossible to speak. You had to essentially decide, and this is what I decided in 2020, that I did not care about my career anymore, that I owed it to the people who were being harmed by the lockdowns to speak up more than I owed it to myself to preserve my career.
And that's why I continued to speak, even with the death threats, even with the vilification, and even with, essentially, the failure of my own institution to protect my academic freedom. I decided I was willing to give all of that up. That's why I kept speaking. So given your experience, and given this experience,
One thing that I hear is that people want to hear scientists admit that they are at least sometimes wrong, maybe not even about a specific instance in which they were wrong. Will the NIH, perhaps you, be making a statement on behalf of scientists? I mean, you have the opportunity to address the entire world. Here, you're doing some of this, obviously, but will this be part of the messaging of the NIH? Like, we need to revise what we think of when we talk about academic freedom. We need to revise what we actually do. And, you know, God forbid there's another pandemic,
We need to really be ready for the kind of discourse that is going to unify people as opposed to divide people. You know, after a patient dies, often in a hospital, there'll be a conference, you know, where the doctors who manage the patient will say, will bluntly say to each other, often behind closed doors, what went wrong.
And the goal isn't to point fingers. The goal is to figure out what happened so that you don't make the same mistakes. We haven't really had that conversation as a country, or as a world, about the pandemic. And yet the harms from it still persist. What I would love to do as NIH director is reform the scientific community so that the values I thought it had, the values of free discourse and academic curiosity, are central to the way we function going forward. We want to make sure that those values are at the center, because you can't do science if you don't have that. Just think about science in the Soviet Union under Lysenko.
There was no real biology going on if you couldn't say Mendelian genetics was real. No, I actually can imagine that. The small-scale example that I'm familiar with, a laboratory meeting where you discuss someone's data,
is the perfect microcosm for what we're talking about, where you sit back, someone presents their data, and the idea is to challenge the data. The idea is for everybody to try and punch holes in it, make helpful suggestions. And sometimes, sadly, at the end of that meeting, you end up sitting there with a postdoc or graduate student, and you're discussing what the next project ought to be, because that one is just an utter failure.
Or you're discussing something much more interesting than you ever thought was possible in the data set, something neither of you could have thought of because you needed some fresh eyes on it. But you can't have a culture in a laboratory where people can't oppose the person quote-unquote in charge. I mean, this is so important. If you can't tell the lab head, no, I think you're wrong, the lab can't progress. The culture of American science has gotten away from that ideal. In fact, it has this ironically weird thing where on small matters you can have that kind of discussion, but on large matters you cannot. And that actually is anathema to science. It means that we cannot, as scientists, address the most important questions of the day without fear of essentially getting our heads cut off. We had this conversation about DEI earlier. Wasn't it uncomfortable? I felt myself being uncomfortable saying what I believe is true, because I know this is one of those issues where, as a scientist, if you start talking about it, you better talk a particular way or else you're going to get your head chopped off. Yeah. I mean, all these topics are uncomfortable, frankly, in part because I see them through a lot of different lenses.
The audience lens, my role as a basic scientist, my role as a podcaster. The quote-unquote field of podcasting completely transformed this kind of discussion in public health. It's really healthy. We can have these conversations openly in a public environment.
I mean, maybe I'll get my head chopped off again, but like, you know, once you've had it once. I think you're safe. I mean, maybe I have to remind you, you are the director. It is an incredible thing if you really think about it, right? Given your position in 2020 and 2021, 22, 23, et cetera, you're now...
At the top of the pyramid, and it is hierarchical. And I believe you, I believe your intentions are pure and good. I do. I think it's important to have checks and balances, but I really believe that you want to do right by people. That's a felt thing. But yeah, it's a remarkable arc that you're now in the position to make major decisions for the entire enterprise of science. What I would love to do is make the lives of scientists who disagree with me easier.
I want them to be able to disagree with me. I want to create a culture of science focused on developing truth rather than obeying whoever sits at the top of the hierarchy. If I can accomplish that, that would be a major thing in my view. Well, I think that's a magnificent sub-vision for the NIH.
I think it's super important that all voices are heard. It's kind of interesting, we have these discussions about diversity and inclusion, but all voices need to be heard in the context of analyzing data. And certainly the revision of the entire structure of the science enterprise, as you point out, is sociological, it's financial; there are a lot of different aspects to this. Vaccines are a very hot button issue these days, in part because Bobby Kennedy has been associated with the anti-vax movement. I've heard him say in his own words that he's not anti-vax, but that he's suspicious of or very concerned about certain vaccines. Let's just start with a very basic question. You're an MD. Do you believe that there are any vaccines that are useful? Yes. Okay. Let's build up from there. Do you believe that some vaccines save lives? Yes. Yes.
Many vaccines save lives. Do you believe that some vaccines that are given to children save lives? Yes. Do you believe that some vaccines are known to be harmful and yet still given? Let me say the specific one. I think the COVID vaccine for children in particular, I don't think is net beneficial for kids. But you said not net beneficial. Does that mean it's harmful? Net harmful. Yeah.
You believe that the COVID vaccine is net harmful for- Especially for young men. Can you define the age cutoff there? We can argue about this, there's a scientific debate, but I think it's pretty clear that, I don't know, between age 12 and 30 or something, for boys and young men, the COVID vaccine is probably net harmful. Again, for boys who have no other underlying conditions and all that. Not obese, no heart condition. Well, even obese, you have to look at the numbers. There are lots of debates and fights over this in the scientific literature, so I hesitate to give you a specific age threshold. I think, just as a general matter, there exist groups for whom the COVID vaccine was net harmful, specifically young men. Do you think there's any reason to think that
The adjuvants, essentially what the vaccines are suspended in, not the vaccines themselves, are potentially harmful. I've heard this. I am personally not aware of any strong evidence for it. I think these are the kind of things that ought to be investigated, but it's very difficult to investigate just because of the sort of like political –
aura around vaccines, where if you ask, if you really do investigate it and find something that the public authorities don't like, you're going to have trouble. I think there, I don't know the answer to that question from a scientific point of view. Let's start with COVID vaccine and dig a little further into that.
The COVID vaccine was promoted slash mandated, certainly was mandated at Stanford, but was promoted as the best line of defense for avoiding infection and reducing the symptoms of infection and reducing the probability of death. That's what I heard. What is the evidence for or against that statement now, given what we know about
about who took it, who didn't take it, and transmission and death rates. Okay. So can we go back to December 2020? Sure. Because then I'll answer your question, I promise, and all the other questions you have. So...
In December of 2020, there were a couple of really important randomized trials published regarding the COVID mRNA vaccines. Could you describe what one of these looks like? I'm not trying to slow your roll here, but some people get the vaccine, some people don't, and you look at who gets sick and who lives and who dies? Yeah, basically. The large-scale randomized trials flipped a coin and said, these 20,000 people, I forget the exact numbers, get the vaccine, and these 20,000 people get a placebo or something placebo-like. Then you follow them for a certain number of months and ask which group is more likely to get a diagnosed case of COVID, which group is more likely to die, which group is more likely to be hospitalized. If the vaccinated group is less likely to get COVID, you report that. If not, then you report that.
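For concreteness, this is roughly the calculation that comes out of a two-arm trial like the one just described: compare the rate of symptomatic COVID in each arm over the follow-up window and report the relative risk reduction. The counts below are made up for illustration; they are not the actual Pfizer or Moderna trial numbers.

```python
# Sketch of the primary efficacy analysis from a two-arm randomized trial.
# Counts are illustrative placeholders, not actual trial data.
vaccine_n, vaccine_cases = 20_000, 10    # participants and symptomatic cases, vaccine arm
placebo_n, placebo_cases = 20_000, 100   # participants and symptomatic cases, placebo arm

risk_vaccine = vaccine_cases / vaccine_n
risk_placebo = placebo_cases / placebo_n

relative_risk = risk_vaccine / risk_placebo
vaccine_efficacy = 1 - relative_risk     # relative risk reduction over the follow-up window

print(f"Attack rate, vaccine arm: {risk_vaccine:.2%}")
print(f"Attack rate, placebo arm: {risk_placebo:.2%}")
print(f"Estimated vaccine efficacy over the follow-up window: {vaccine_efficacy:.0%}")
```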
There were randomized trials then published, in December of 2020, I guess November of 2020, for several of the high-profile vaccines that were used during the pandemic: the mRNA vaccines from Moderna and Pfizer, the Johnson & Johnson vaccine, and the AstraZeneca vaccine, probably the four most important ones used in Great Britain, the United States, and Europe. Okay, so what did those studies show? For the mRNA vaccines,
and in fact for all of these studies, they were, again, randomized, high-quality studies with large numbers of patients. But patients were tracked for only about two months, right? So you can't say from the randomized trials in December of 2020 what's going to happen after two months, because the trials themselves only tracked patients for about two months, right?
What they showed was that among patients who had never had COVID before, because people with prior COVID were excluded from the analysis of efficacy, the patients who were randomized to the vaccine had lower rates of getting COVID, I'm sorry, symptomatic COVID, in those two months than the people who were randomly assigned a placebo. Okay. The mRNA vaccines had more deaths in the treatment arm than in the placebo arm, but the sizes of the samples were such that you couldn't say that was a statistically meaningful result. Okay. You couldn't say it, right? And that made sense. The death rate from COVID was something like three or four out of a thousand. You would have had to enroll populations in the hundreds of thousands or millions in order to get a significant result about deaths. And age range really matters here. Yeah.
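To see why trials of roughly 20,000 per arm cannot resolve a difference in deaths, here is a back-of-the-envelope sample-size calculation. The per-participant risk of a COVID death during a two-month follow-up is roughly the attack rate times the fatality rate; the attack rate, the assumed 50 percent risk reduction, and the fatality rate used below are assumptions chosen only to illustrate the order of magnitude.

```python
# Back-of-the-envelope sample size needed to detect a reduction in COVID deaths.
# All inputs are illustrative assumptions.
attack_rate = 0.01        # share of participants infected during ~2 months of follow-up
fatality_rate = 0.004     # deaths per infection (the "three or four out of a thousand" above)
assumed_reduction = 0.5   # suppose the vaccine halves the risk of a COVID death

p_placebo = attack_rate * fatality_rate          # per-participant death risk, placebo arm
p_vaccine = p_placebo * (1 - assumed_reduction)  # per-participant death risk, vaccine arm

# Standard two-proportion sample-size formula:
# n per arm = (z_alpha/2 + z_beta)^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2
z_alpha, z_beta = 1.96, 0.84    # 5% two-sided significance, 80% power
variance = p_placebo * (1 - p_placebo) + p_vaccine * (1 - p_vaccine)
n_per_arm = (z_alpha + z_beta) ** 2 * variance / (p_placebo - p_vaccine) ** 2

print(f"Death risk per participant, placebo arm: {p_placebo:.5f}")
print(f"Participants needed per arm: {n_per_arm:,.0f}")  # on the order of a million
```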
So the vaccine trials tended to focus more on younger people. They had some older people in them, but if I had designed the trial, what I would have argued for is to have the older population more represented, because that's who was dying from COVID, and then to make the prevention of death or hospitalization the primary endpoint. Instead, the endpoint was prevention of symptomatic COVID for two months. Now, they didn't ask whether you got COVID at all, actually, because there are people who got COVID and never had any symptoms,
right? So they didn't ask in the trial about prevention of transmission. They could have, right? So for instance, the people who were in the placebo arm, you could ask whether their household members had COVID at higher rates than the household members of the people who were in the treatment arm. Compare the household members and ask. They didn't ask that. So what could you infer from the trial? You could infer that for two months,
people who had the vaccine were much less likely to get symptomatic COVID for those two months. That's all you could say. You couldn't say the vaccines reduced death rates, because they didn't in the point estimate, and there was not, again, any statistically significant difference. For the AstraZeneca and the J&J vaccines, if you combine those, it turns out that you actually did get lower death rates in the vaccinated arm than in the placebo arm.
J&J vaccine had lower death rates, statistically significant once you combine the trials. Was the J&J vaccine an mRNA vaccine as well? No, it was an adenovirus vector vaccine. And it was the single shot? Yeah. And it was like the AstraZeneca vaccine, similar technology, adenovirus vector vaccine. Okay. So, but again, those were only two months long.
And the death rate difference, you know, the trials were not statistically powered to find one, although one did show up in the adenovirus vector vaccines. For the mRNA vaccines, you couldn't say from the randomized trials one way or the other. Okay, so that's the information base we had in December of 2020.
I wrote an op-ed in December 2020 with Sunetra Gupta where I argued that this was sufficient to recommend that older people get the vaccine, but that we shouldn't necessarily give it to young people. The reason was that young people died at very low rates relative to older people when they got COVID. And so the thing you're protecting them from was less of a risk to them than it was for older people.
And so the benefit-harm calculation tilts: if something is a big threat to you and you have something that prevents symptomatic infection, then it probably prevents death in the older population. I can't say that for sure from the trial, but I can extrapolate. It's an extrapolation, right? It seemed like a reasonable extrapolation in December 2020. Then it makes sense to give it, even if there are side effects that are not known from the trial. The trial is only tens of thousands of people; if you give it to billions of people, you're going to find out about side effects you didn't know about, right? But based on the benefit-harm expectation, for older people it makes more sense to give it. Whereas for younger people, the benefit-harm calculation runs in the opposite direction. There are unknown harms, some harms you actually saw in the trial itself, and others you don't know about until you give it to billions, and the benefit is small. So what I wrote is you should recommend it for older people and then lift the lockdowns. That's the op-ed I wrote and published in The Wall Street Journal.
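A stylized version of that benefit-harm reasoning, with every input an assumption chosen purely for illustration: the benefit side scales with the age-specific risk of dying if infected, while the harm side (largely unknown at the time) is assumed to be roughly constant across ages, so the ratio flips as the fatality rate falls.

```python
# Stylized benefit-harm comparison per 100,000 vaccinated people.
# Every number here is an illustrative assumption, not an estimate from real data.
infection_risk = 0.10       # assumed chance of getting infected over some horizon
efficacy_vs_death = 0.9     # assumed reduction in death risk if infected
serious_harm_rate = 2e-5    # assumed rate of serious vaccine harm, any age

fatality_rate_by_age = {"older adult": 0.05, "young adult": 0.0001}  # illustrative only

for group, ifr in fatality_rate_by_age.items():
    deaths_prevented = 100_000 * infection_risk * ifr * efficacy_vs_death
    serious_harms = 100_000 * serious_harm_rate
    print(f"{group}: ~{deaths_prevented:.1f} deaths prevented vs "
          f"~{serious_harms:.1f} assumed serious harms per 100,000 vaccinated")
```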
Instead, what public health authorities decided to do was to take the vaccine and say that we could use it to eradicate COVID. They implied it. They didn't exactly say that, but they would say things like, well, if 70, 80, 90 percent of the population gets the vaccine, then we will achieve herd immunity, as if it were some permanent state rather than a transitory state having to do with the fraction of the population that is currently immune. Herd immunity is a clear mathematical construct in epidemiological models of disease spread. The public health authorities talking about 70, 80, 90 percent were using it essentially as a synonym for disease eradication, which it is not.
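The construct being referenced is the classic herd-immunity threshold from simple epidemic models: roughly 1 - 1/R0 of the population must be currently immune for cases to decline, and the threshold shifts as R0 changes (new variants) and as immunity wanes, which is why it describes a transitory condition rather than eradication. A minimal sketch:

```python
# Herd-immunity threshold from the simple SIR-style relation: HIT = 1 - 1/R0.
# R0 values are illustrative; crossing the threshold makes cases decline,
# it does not eradicate the disease, and the threshold moves if R0 or immunity changes.
for r0 in (2.5, 4.0, 6.0):
    hit = 1 - 1 / r0
    print(f"R0 = {r0}: roughly {hit:.0%} of the population must be currently immune")
```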
Was this message only in the United States, or was it kind of uniform across the world? Yeah. Well, I don't know if it was uniform; for instance, I don't think Sweden ever mandated the vaccine. Right. With the exception of Sweden. A few other places, yeah. Because for one public health system to kind of collaborate in this, let's assume the public messaging was a bit out over their skis, so to speak,
But for Northern Europe to do that, and for Brazil to do that, and for Australia to do that, it sounds like there had to have been a collaboration of massive scale. It's a little hard to imagine everyone collaborating in some sort of secret agenda that extends across international borders. Well, it just goes back to November, December 2020, when the news about the vaccine came out. There was a sense of joy that we'd been liberated, like science had delivered us from this deadly plague. It was definitely exciting. Yeah. And there was this sense of hope, right? One that large numbers of people around the world, I think, shared. Public health authorities shared that sense of hope, but I think that partly led them to extrapolate far beyond what the data actually showed.
and make promises to the public that were not in the randomized data that were available at the time. The companies that made these vaccines, are they American-based companies?
I think AstraZeneca is a UK company. J&J is an American. Pfizer, I think, is an American company. Yeah, Merck. For some reason, I thought Pfizer was overseas. Moderna has German roots, I think. I'm not sure. BioNTech is German. Moderna is American. I'm not sure exactly. Because many of the people that are suspicious about vaccines or skeptical about vaccines argue that it's all financial incentives. I mean, was a lot of money generated from that? Billionaires were created.
out of this. And in fact, the NIH is collecting patent royalties from licensing the technology that went into the vaccines. Still now. Yeah. But the development of the vaccine, Operation Warp Speed, was a Trump program, right? Yeah. President Trump authorized the program in order to
accelerate the development and testing of the vaccines. I remember seeing him getting the injection on the news. So I think people forget that, because of MAHA and this sort of assumption that vaccines and MAHA are diametrically opposed. In some sense, MAHA and Bobby Kennedy are, to my knowledge, the first time that anyone's forcing a look at vaccines with the kind of level of detail that they are doing it, or are going to do it. People assume that the Trump administration is not aligned with vaccines, but the Trump administration initiated Operation Warp Speed, correct? Yes. Yeah. The idea that Bobby or President Trump is anti-vax is ridiculous. It's frankly at odds with what the data actually show.
Okay, let's go back to the COVID vaccine because I think the story is really important. Public health authorities, on the basis of an extrapolation that they should not have made, decided to essentially promise the public that if they got the COVID vaccine, they would not ever get COVID again. That was the implicit public health messaging. You can become free
Just take the shot and you become free. You no longer have to worry about lockdowns and mask mandates or whatnot. It very quickly became clear that that was not true. I remember seeing the outbreak of cases in Gibraltar, which was 90-plus percent vaccinated, and looking at it going, why is Gibraltar, I think they were using the AstraZeneca vaccine, why are they seeing this huge spread of COVID?
I saw data from, I forget which country, one that was mostly using the Chinese vaccine, Sinopharm, which had a more traditional technology, again with a huge outbreak of cases in February or March of 2021. That was real. Country after country that had been heavily vaccinated was seeing large outbreaks of cases. And that meant the extrapolation, that the vaccine was going to stop you, after two months, from getting COVID and spreading COVID, was false. Instead of acknowledging that fact, public health officials decided that the problem was the unvaccinated. And they embraced the idea that you have to force people to get vaccinated for the public good. So they doubled down on their hypothesis. It was around July, August 2021 that the Biden administration decided to...
They used OSHA and CMS, OSHA being the Occupational Safety and Health Administration and CMS the Centers for Medicare and Medicaid Services, to mandate the vaccine for populations that they had control over. And when we talk about mandates, were there criminal charges or civil charges if somebody didn't get it? You just lose your job. Yeah, you just lose your job. I recall at Stanford, there was an insistence that everyone get vaccinated.
But that if people had religious reasons to not get vaccinated or some special health reason that they could essentially not get it. Stanford made it difficult to not get vaccinated, but possible. Like if you had religious exemptions, they made it possible. Other universities made it much more difficult. So, for instance, my colleague and friend, Martin Kulldorff, was a tenured faculty member at Harvard University, got fired.
Because he didn't take the COVID vaccine, even though he'd already had COVID and recovered. He is currently still fired? Yeah. So there were consequences for not getting it. Yes. Because we hear this word mandates, right? But I don't recall anyone coming around to my house and insisting. I just recall that if I needed to go certain places, I needed a signed vaccine card. I mean, essentially, it was a widespread restriction on your basic liberties, civil liberties. That was the consequence, including potentially your employment. And other countries were even worse. In Canada, you couldn't go on public transportation, you couldn't fly, if you weren't vaccinated. You couldn't go to a restaurant if you weren't vaccinated. That's true in New York City, by the way. You had to bring a vaccine card. Yeah. And if you didn't have one, you couldn't go in. Essentially,
the regime was to ostracize people who decided that they didn't want or need the COVID vaccine, even though there was no scientific evidence demonstrating that if you had the COVID vaccine, you were less of a threat to other people as far as spreading COVID than if you hadn't. Specifically, for people who had already had COVID, recovered, and weren't vaccinated, there was actually quite good evidence, from studies in Israel especially, that they were less of a threat than someone who had never had COVID and had been vaccinated three or four or five months earlier. Evidence out of Qatar showed a pretty sharp reduction in the efficacy of the vaccine against getting COVID by four, five, six months after the vaccination. And what evidence, if any, was there that the COVID vaccine, any of them,
caused any specific harm in adults. Right, so in young men specifically, like adults as old as 35, 40 years old, there was evidence of heart inflammation, myocarditis. Transient myocarditis?
Yes, but also more severe myocarditis post the vaccine. I mean, that was clear, clear evidence. Why just boys? Do we know? I don't fully understand the biology of that. A reason to do sex-specific studies. And I'm in favor of that. Can't pass up the opportunity. Interesting. So was there any evidence that the vaccine had...
long-term detrimental effects that we're still looking at now. You know, you hear this stuff, you see it circulating, you hear more about long COVID. We should talk about long COVID. But is there any evidence that the vaccine caused long-term issues for people? I think it's likely that there are some people who have particular immunological responses. There's also some evidence that the production process for some of the vaccines involved using DNA plasmids, which may persist in producing some of the products of the vaccine. I'm, frankly, not sure. I've looked at the literature, there's a lot of controversy around it, and I have not made up my mind fully on the extent of it. What I will say is that it's very difficult to ask questions about long-term effects of a vaccine just generally.
You can't run a randomized trial; that's done, right? The vaccine trials were effectively terminated when the placebo arm was vaccinated in January of 2021. So you're not going to learn about long-term effects from the randomized studies. Now you're left with observational studies, where you need to have a real control group constructed properly.
And it's been difficult to get the public health authorities who are supposed to do this to actually do this.
I've seen some of this. I think the FDA put out a report in 2022 of babies getting the vaccine having epilepsy or seizures at slightly higher rates. There are claims online I've seen about cancer, but I haven't seen anything where people have done very careful control groups. I don't know. I'm not ruling out the possibility. I'm just saying that the kind of studies that I would like to see done,
studies that have control groups, even in observational settings, are hard to find in the literature. And whenever they are in the literature, they seem to get attacked, sometimes for reasons that make sense and sometimes for reasons that don't. It's very difficult to address this from a purely scientific point of view because the literature itself seems like it's poisoned. - Do you believe long COVID is a real thing, or is this something that people have constructed?
No, I think it's real. The extent of it is, again, unclear, but it's very clear that there are some real cases. So, for instance, I saw a study, I think it was in 2021, from France, where they looked at people who had previously had COVID and people who had never had COVID, among kids.
And then they measured subsequent long COVID rates after the infection, comparing matched people who had previously had COVID versus those who hadn't. In that study, long COVID was measured as something like, did you have one of some number of symptoms on the WHO list of long COVID symptoms
three months after the COVID infection. In the matched comparison, the rates were roughly the same for kids. But for adults, it was higher for the people who'd had COVID before. So I don't know the exact rate, but it's certainly a real phenomenon. I mean, I've met people who've had it. Same thing with vaccine injuries. I've met people who report having had concrete, discrete injuries after they were vaccinated.
And I believe them. I generally tend to believe patients when they say things about themselves, especially when they have no incentive to dissemble about it. So I think these are real phenomena that we need to address with open minds. Will the NIH and/or CDC be making public statements about some of what you just described, that the messaging around vaccines was,
in your view, inaccurate? Well, I'm still saying this. And I've been saying this. I think that... But in your new role, I mean, you're saying it here and we hear you, but at the level of a country of 300-plus million people, like, "Hey folks, we've looked at this, and I wasn't in charge then, but here's the deal." I mean, in my role, I have to focus on stuff going forward more than the past. The past, I think, is worth addressing, but it has to be a broader look than just me coming out and saying my opinion about it. This podcast is fun, but that's not the purpose. So I'll just give you a specific thing.
My colleague Marty Makary, who is now the commissioner of the FDA, has issued a new framework for evaluating COVID booster shots.
So rather than just requiring manufacturers to show that the COVID booster, the new variant booster, whatever it is in the future, produces antibodies either in lab animals or in humans in order to approve the vaccine for use, going forward the boosters have to show some efficacy in preventing COVID and preventing deaths and hospitalizations in order to get approved.
That's an evidence-based framework to essentially say, if you're going to sell the vaccines, at least show in humans that it actually works for something we care about. If you produce...
antibodies and it doesn't translate into a reduction in morbidity or mortality, then why recommend it or why approve it? Some people might want to take the vaccine to reduce symptom severity, not just to avoid death. At this point, if you've already had COVID and recovered, there's no evidence that it would do that.
For the boosters. I mean, again, I want to distinguish what we knew and didn't know; that's why I wanted to start with summer 2020, when we had those large-scale studies of the vaccines that were new. The boosters are a different vaccine, and they don't have the same large-scale studies behind them. They've been approved on the basis of relatively small-scale studies asking whether they produce antibodies,
not things that clinically matter to people. Is it going to prevent me from getting sick? Is it going to prevent me from being hospitalized? Is it going to prevent me from dying? The boosters don't have that kind of evidence behind them. And so, I think it was just a couple weeks ago, the FDA decided that it was going to ask the manufacturers to produce much better evidence for the boosters before it was going to approve them. It shouldn't just be a routine thing. This is not a flu shot.
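To make concrete what tying approval to clinical endpoints means arithmetically, here is a small sketch of the standard efficacy calculation (efficacy equals one minus the risk ratio) against an outcome people actually care about, such as hospitalization. The trial counts are invented purely for illustration, and the confidence interval is a rough Wald approximation, not any regulator's actual method.

```python
# A hedged sketch of clinical-endpoint efficacy: 1 - risk ratio.
# All counts below are hypothetical, for illustration only.
import math

def vaccine_efficacy(cases_vax: int, n_vax: int, cases_ctrl: int, n_ctrl: int):
    """Return efficacy = 1 - risk ratio, with a rough 95% CI on the log scale."""
    risk_vax, risk_ctrl = cases_vax / n_vax, cases_ctrl / n_ctrl
    rr = risk_vax / risk_ctrl
    # Wald standard error of log(risk ratio)
    se = math.sqrt(1/cases_vax - 1/n_vax + 1/cases_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return 1 - rr, (1 - hi, 1 - lo)   # efficacy and its 95% CI

# Hypothetical trial: 20 hospitalizations among 10,000 vaccinated,
# 80 hospitalizations among 10,000 in the control arm.
ve, ci = vaccine_efficacy(20, 10_000, 80, 10_000)
print(f"efficacy against hospitalization: {ve:.0%}, 95% CI {ci[0]:.0%} to {ci[1]:.0%}")
```

The contrast with an antibody-titer endpoint is the whole point: this calculation only exists if the trial counted clinical events in both arms.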
The regulatory framework that governs flu shots is based on decades of experience with flu vaccines. Are you a fan of the flu shot? I mean, I've had lots and lots of flu shots in my life. Really? Yeah. Do you get it every year? Generally, yeah. And it's designed to guard against most of the most common strains of flu that year? Yeah. Sometimes they guess wrong and it doesn't do much, and sometimes they get it right and it does better. But I've generally gotten it. I don't think I got it last year,
too busy, I guess. But it sounds like you don't have any specific safety concerns about the flu shot for otherwise healthy adults. Is that right? Yeah. I mean, as a scientist, I want the safety of these vaccines evaluated in a rigorous way, and I wholeheartedly support that. And if the data show that there are
bad outcomes, I'm going to say so, right? But as a general matter, the technology used for the flu shot is a traditional one with a long history behind it. And as for the regulatory framework, the production of antibodies is, I think, still the standard for the flu shot, and that makes some sense, right? The flu strain that circulates is a different one every year.
And if you required this long-term clinical trial for the flu strain that's currently circulating, by the time you actually recommend it, it would be useless. Now, you can say that's true for COVID as well.
But we don't have decades-long experience with the safety profiles and the efficacy profiles of the COVID vaccines. And the flu shot is hit or miss, right? Sometimes it works and sometimes it doesn't. What we need is an excellent universal flu vaccine, which there's still a lot of research trying to get to. The key thing I want to convey is: if you are in favor of vaccines, you should not be treating this as a religious matter,
where if you believe the vaccine is good, therefore you're a good person, and if you believe the vaccine is bad, therefore you're a bad person. You should be treating this the same way we treat other drugs that we recommend to the population at large. Evaluate the benefits, evaluate the harms in rigorous ways, including randomized studies. Understand
patient nuances. It might be right for some patients and wrong for others. If you're going to say something, don't extrapolate beyond what the evidence actually shows, right? Or else you risk losing the trust of the public, especially the public that would potentially most benefit from the thing. What I'm arguing for is an actual, honest, evidence-based evaluation of vaccines. And that's essentially what Bobby Kennedy's asking for, right? So that's what he's asked me to do.
Not for vaccines generally, but for the COVID vaccine, that's essentially the policy. Now, here's the problem we have in public health. You asked me earlier whether I think there are certain vaccines that are worthwhile, and the answer is yes, I do think that. But if we have a public health authority that's gotten it so deeply wrong about this one vaccine,
where people lost their jobs over it, people got injured and were silenced over it, people were made to feel, you remember in 2021, when people would disinvite family members from Thanksgiving if they weren't vaccinated? Yeah, or worse. People were kind of excommunicated from families and workplaces. Yeah, essentially we created a class of unclean people as a matter of public policy.
You can understand why people who went through that would say, given that the vaccine didn't turn out to stop you from getting and spreading COVID, why should I trust you on anything else? That's where we currently are. The way forward isn't to force people to say, look, you must acknowledge how great science is on these other things. The way forward is to be utterly honest about what we know and don't know.
and treat people as partners rather than as subjects. So in keeping with that, there's perhaps no issue more sensitive than the vaccine-autism issue. My understanding of the current literature as it stands is that the Andrew Wakefield data,
from the British physician who was really the first to popularize the idea that vaccines could, in his words, cause autism or were highly correlated with autism, those data were essentially retracted by the journals. He lost his medical license. And my understanding is there was evidence of fraud, that he either made up data or contorted data. I've had guests on this podcast, including a colleague from Stanford, Karen Parker, who works on autism,
who verified that indeed the frequency of autism has vastly increased in recent years in ways that cannot just be attributed to improved sensitivity of tests, et cetera. One in 32 births is the current number. And so you can understand why parents, who love their kids more than anything and would do anything for their kids,
are understandably concerned about any possibility that vaccines could increase the probability of autism. My stance as a scientist is, well, if the data are robust that vaccines don't cause autism, then run a proper trial. The Wakefield data are clearly contaminated, if not by outright fraud then certainly by story and narrative. I mean, there's just no way those data are going to be resurrected, and I don't think they should be. Right. I mean, unless there's something I'm not aware of. He said too many things that weren't true. And whatever happened is, you know, history. So what is
the evidence, if any, that a vaccine, some specific vaccine, causes autism? And are the NIH, the CDC, and the new administration going to take a serious second look at this?
- Yeah, so I don't wanna comment on the Wakefield situation because I don't know the ins and outs of it. - All we know is what happened. He lost his medical license. - And I should say, we're talking about one study, right? I believe that replication matters. And so there are, I think, on the MMR vaccine, some excellent studies that failed to find a correlation or a causal link
between MMR vaccination (measles, mumps, rubella, a vaccine that I think is really important for kids) and autism. There's a massive Danish study that tracks kids who are vaccinated, matched with similar kids who are not, follows them for years, and finds no difference,
or rather fails to find a difference, in autism rates. If you look online and elsewhere, there are all kinds of fights over that. But to me, that's pretty good evidence for the MMR vaccine. For some of the other vaccines, there's been less focus on asking whether autism correlates with them. Such as the polio vaccine?
I don't know this literature, so I shouldn't comment, but I don't remember seeing a study specifically asking whether the polio vaccine is linked to autism. When I was growing up, every kid got the polio vaccine, measles, mumps, rubella, and... I think there's a DPT, yeah. Yeah, and a couple others. There were probably four or five vaccines, as I recall. I think there's good evidence on the MMR vaccine failing to find a link with autism.
And I don't know the full extent of this literature, so I shouldn't comment too much, but when I've looked, I haven't seen quite the same level of evidence for some of the other vaccines. Again, researchers just haven't looked.
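For a sense of the arithmetic behind that kind of cohort comparison, here is a brief sketch, which is not the Danish study's actual analysis: an incidence rate ratio computed from diagnosis counts and person-years of follow-up, with a rough confidence interval. All numbers are invented for illustration.

```python
# A sketch of a cohort-style comparison: incidence rate ratio with person-years.
# Hypothetical numbers only; real analyses adjust for many more factors.
import math

def incidence_rate_ratio(cases_a: int, py_a: float, cases_b: int, py_b: float):
    """Rate ratio (group A vs. group B) with a rough 95% CI on the log scale."""
    irr = (cases_a / py_a) / (cases_b / py_b)
    se = math.sqrt(1/cases_a + 1/cases_b)        # SE of log(rate ratio)
    lo, hi = (math.exp(math.log(irr) + z * se) for z in (-1.96, 1.96))
    return irr, (lo, hi)

# Hypothetical cohort: 500 diagnoses over 600,000 child-years among vaccinated
# children vs. 90 diagnoses over 100,000 child-years among unvaccinated children.
irr, (lo, hi) = incidence_rate_ratio(500, 600_000, 90, 100_000)
print(f"rate ratio {irr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# A CI that spans 1.0 is what "fails to find a difference" looks like numerically.
```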
As a general matter, just from a biological point of view, I think vaccines are unlikely to be the main reason why the rise in autism, which is now well documented, you talked about it, has occurred. So to me, the question then is what you want to answer for parents: well, what does cause it? What has led to the rise in the prevalence of autism?
The honest answer is I don't know. We're focused now in this conversation on just one potential cause, vaccines. To me, it's unlikely that they are the reason for the rise in the prevalence of autism. But there are many other potential hypotheses for that rise that I've seen:
alterations of the gut microbiome, retinoids. There was a paper out of Pasko Rakic's lab at Yale years ago looking at the migration of cells in the cerebral cortex in developing fetuses, primate fetuses, but it's a great model.
And he was exploring the idea that ultrasound was altering cell migration, which may lead to changes in circuit connectivity. It never really got followed up on, because that would be wild. It would be wild. I'm not suggesting that ultrasound causes autism, but there were a lot of interesting ideas early on that I thought ought to be explored. Well, so the point is that unless you know the etiology, it's very difficult to talk about the treatment. Now, of course, autism
has a very wide range of clinical presentations, right? You have kids who have some social awkwardness, but otherwise are well-adjusted, have no problems. Think Sheldon from Big Bang Theory or something, right?
- Or many of our colleagues. - Yeah, our colleagues. Maybe me, I don't know. And then you also have kids who have very severe disabilities, a lot of biologically driven co-occurring conditions, apraxia, difficulty with toilet training. - They will never live on their own. - Right. And so you have a very wide range of outcomes. It's very possible that the biology is very different for folks
along the spectrum. And unless you understand the etiology, and it might be a different etiology for kids in different parts of the spectrum, you're never going to have good answers, both for prevention and for therapies, right? So it's that question that Bobby Kennedy has asked me to answer, or try to get an answer to,
and that President Trump has asked to get an answer to. And I think it's appropriate, because, I mean, we just talked about vaccines as a potential cause. I think they're unlikely to be the cause, but you can see my mind is open, depending on the levels of evidence I've seen. Now, this is not my area, right? I should say that. I'm saying this as someone who has now tried to wade into it some, just to get a sense of it.
But as I've waded into it, it's very, very clear that there is not a scientific consensus answering the question of what causes the rise in autism, or what the etiology of autism is. But it seems that encouraging a spirit of open discourse
about these other potential causes, right? And I'm not suggesting, by the way, that ultrasound causes autism. I want to be very clear. But if you read scientific papers focused on brain wiring and you make the not so outrageous leap that autism has something to do with brain wiring, maybe gut and brain and a bunch of other things, but
you come across a number of very interesting preclinical model hypotheses that hopefully will be tested at some point. Well, there's like environmental exposures to various kinds of chemicals, tens of thousands of chemicals in the environment. There's...
things that happen in utero potentially, nutritional issues potentially. You name the hypothesis, I've seen it. I've been trying to wade through this literature somewhat from the outside, and it's just bewildering. And I can't even imagine what it's like for a parent looking at this. Oh, it's got to be devastating. And to me, when there is no scientific answer to an important question that actually impacts health, the answer is: let's do excellent science on it. Now, I've seen a lot of excellent science about how to manage autism.
Right, lots of fights over whether psychotherapy or behavioral modification is the right approach; lots of fights over how we address the co-occurring biological conditions. I mean, I've seen lots of literature around that, which strikes me as more advanced
And sort of closer to the right answers, although again, there's lots of controversies even there. On the etiology of autism, it strikes me that the literature is not all that far advanced, that there's lots and lots of competing hypotheses. The data are...
conflicting on many of them. You know, I could give you my most promising one, but it would mean nothing, really. The right thing to do in that setting is to have an open-minded investigation to try to address this problem. And the question is, why haven't we had that so far? And I'll tell you, I think the reason we have not had the kind of open-minded, deep investigation by the scientific community at large
into the etiology that the parents deserve, that the kids deserve, is because it's dangerous to ask that question if you're a scientist. All of a sudden, you're going to be accused, often incorrectly, of being an anti-vaxxer. And that's the end of your scientific career. That kind of suppression of scientific curiosity means that we won't have an answer to this.
Right. So what I've done is I've organized an initiative inside the NIH to address this question of the etiology of autism. Not limited to vaccines? No.
It's wide-ranging. It includes basic science work, epidemiological work, environmental exposure work, all of it, and it will bring together data sets
that we'll make available to the researchers. We'll have a competition among scientists, just like the normal NIH way, with peer review panels to decide who should get the awards. We'll have a dozen or more scientific teams asking the question: what is the etiology of autism? Normally it takes a year or longer to set up a thing like this. Well, by September, we'll have an open competition for
these scientific projects. And, you know, you can't rush science, but I'm hoping that within a relatively short period of time, who knows how long exactly, it depends on how the science goes, we'll have a much better understanding of the etiology of autism than we have at this current moment.
Fantastic. I mean, just fantastic, regardless of where one sits on the vaccine discussion. On vaccines, can I say one thing? As the NIH director, I don't want to put my thumb on the scale on any of these potential etiologies, right? As I already said, I'm not particularly an expert in this area. And so if I were to put my thumb on the scale, it would not be from the point of view of expertise. It would just be from the point of view of, I happened to read the literature and I was impressed by X, Y, or Z.
But if I were to put my thumb on the scale, I think it would make it more difficult, A, for scientists to ask the question honestly, because they'd want to impress the NIH director or something, and then, B, for the public to trust the result at the end. I want an open-minded process. So this is why, when I was asked, well, if you don't believe that these vaccines cause autism, why would you allow people to ask that as part of
the research agenda, my answer is that a lot of people, especially in the public, and even some scientists, disagree with me, and I want them to have their say. I want an honest conversation. I think that if you have an honest evaluation, you're not going to find that the vaccines are the primary reason for the rise in autism. It's going to be something much more fundamental and complicated. But I don't want the results to be
disbelieved because I put my thumb on the scale. I eagerly await the results of the unbiased studies. Yeah. I really do. And thank you for spending that time explaining what that initiative is going to look like. I'm delighted to hear that it's not emphasizing one particular hypothesis. The other thing about the initiative that's very important to understand: we're working with
parents of autistic kids. We're working with the autism community, right? A lot of times when scientists study things, we put ourselves above it, like we're examining amoebas on a slide. When you do population research, you have to work with the communities that you're trying to help. And that's exactly the spirit of this. We're going to work with communities of autistic kids and parents, and we're going to apply
rigorous research methods with control groups, just the normal high-quality approach; the term of art nowadays is gold-standard science. We're going to apply gold-standard science to this and subject it to the same kind of replicability standards I want all science subjected to.
Can we expect that the National Institutes of Health, which indeed is plural, institutes, the National Institute of Mental Health, the National Eye Institute, et cetera, will be restructured in some way, in part to reflect the MAHA movement, Make America Healthy Again? And by the way, no one told me to ask that question. I'm asking out of genuine curiosity. There are these theories. I'm politically a free agent. And I ask because
the budget is limited. It's not an infinite budget. Depending on how the IDC (indirect cost) question goes, there may be more or less money to devote directly to the laboratories around the country. And given that fixed amount of money,
you can't do everything. I love the way you're encouraging innovative, exploratory science that's rigorous, with open discourse. But can we expect that the institutes of the National Institutes of Health will take on some new names, maybe a new institute starting to emerge? I mean, it's really Congress that determines that. There's a process: the administration has put forward its suggestion for a reorganization, which I think takes it down to eight institutes from the current 27 institutes and centers. Congress, over the past decades, has had several suggestions for how to do this.
It's one of these things where I could focus my efforts on things that I think are going to make big changes, or I could focus my efforts on reorganization. I'll do what Congress or the administration asks of me. But from my point of view, we'll let that fight happen as it happens, and we'll respond to it as it happens, rather than me being an active participant in it. To me, the key thing is not the structure of the institutes. The key thing is the content of the research
and the standards we hold ourselves to in the research. Those are the things I want restructured. That's really the fundamental question for me as an NIH director. If I can accomplish...
some of the things we've talked about during this podcast, having replicability be the core of deciding what scientific truth is, refocusing the portfolio so that we enable early-career scientists to test their ideas out, aiming big, and addressing the key health problems that Americans face, then I'll consider myself a success. Well, Dr. Bhattacharya,
you have a tall task and you're clearly ready for it. I want to thank you for taking time out of your extremely busy schedule, and those aren't just words, you are extremely busy, to come here and have this discussion and to tackle head-on questions that were not all easy questions. Some of them were quite difficult, actually, because there's a lot of nuance, a lot of different lenses one can look through. It's clear to me that you're a data guy. You love data.
And it's also clear to me that you like dissent, maybe because you've been in the position of... That's always been true, actually. Okay, well, it sounds like it's in your nature. I didn't know the younger you, but I love that you encourage dissent. I do believe that great science emerges from discourse, sometimes even outright arguments, provided it doesn't get physical or cruel, aimed at getting at the truth, if that's possible.
And it's also very clear that you care about exploration. And I am, I must say, especially warmed by your enthusiasm for protecting and promoting the science of young investigators, meaning scientists in the first 10 years of having their labs, as well as trainees. I'm not just speaking in niceties; this is so important. It's vital for the future of science. And yes, there are some older labs doing wonderful work, but even they will eventually retire and die. We all do. The younger generation of scientists in this country is so key. And so
I just really appreciate you coming here to share. I do want to check back with you in a year or two to see how things are going. Science and public health really need you to get behind discovery and the mission statement of the NIH. So thank you for coming here today. You didn't have to do it. And I look forward to more discussion. Andrew, thank you so much for having me. Really a pleasure.
Thank you for joining me for today's discussion with Dr. Jay Bhattacharya. To learn more about Jay's previous work and to find links to his current post at the NIH, please see the show note captions. If you're learning from and or enjoying this podcast, please subscribe to our YouTube channel. That's a terrific zero cost way to support us. In addition, please follow the podcast by clicking the follow button on both Spotify and Apple. And on both Spotify and Apple, you can leave us up to a five-star review. And you can now leave us comments at both Spotify and Apple.
Please also check out the sponsors mentioned at the beginning and throughout today's episode. That's the best way to support this podcast.
If you have questions for me or comments about the podcast or guests or topics that you'd like me to consider for the Huberman Lab podcast, please put those in the comment section on YouTube. I do read all the comments. For those of you that haven't heard, I have a new book coming out. It's my very first book. It's entitled Protocols, an Operating Manual for the Human Body. This is a book that I've been working on for more than five years and that's based on more than 30 years of research and experience. And it covers protocols for everything from sleep to
to exercise, to stress control, protocols related to focus and motivation. And of course, I provide the scientific substantiation for the protocols that are included. The book is now available by presale at protocolsbook.com. There you can find links to various vendors. You can pick the one that you like best. Again, the book is called Protocols, an Operating Manual for the Human Body.
And if you're not already following me on social media, I am Huberman Lab on all social media platforms. So that's Instagram, X, Threads, Facebook, and LinkedIn. And on all those platforms, I discuss science and science-related tools, some of which overlaps with the content of the Huberman Lab podcast, but much of which is distinct from the information on the Huberman Lab podcast. Again, it's Huberman Lab on all social media platforms.
And if you haven't already subscribed to our Neural Network Newsletter, the Neural Network Newsletter is a zero cost monthly newsletter that includes podcast summaries, as well as what we call protocols in the form of one to three page PDFs that cover everything from how to optimize your sleep, how to optimize dopamine, deliberate cold exposure. We have a foundational fitness protocol that covers cardiovascular training and resistance training.
All of that is available completely zero cost. You simply go to hubermanlab.com, go to the menu tab in the top right corner, scroll down to newsletter and enter your email. And I should emphasize that we do not share your email with anybody. Thank you once again for joining me for today's discussion with Dr. Jay Bhattacharya. And last but certainly not least, thank you for your interest in science.