This week's episode is brought to you in part by Science Careers. Looking for some career advice? Wondering how to get ahead or how to strike a better work-life balance? Visit our site to read how others are doing it. Use our individual development plan tool, access topic-specific article collections, or search for an exciting new job.
Science Careers, produced by Science and AAAS, is a free website full of resources to help get the most out of your career. Visit sciencecareers.org today to get started.
This is the Science Podcast for March 14, 2025. I'm Sarah Crespi. First this week, science policy writer and editor Jocelyn Kaiser is here to give us an update on the latest news from NIH. Next, producer Megan Cantwell talks with contributing correspondent Kathleen O'Grady about why some think that using sign language with kids with cochlear implants gives the kids the best chance at communicating fully and fluently.
Finally, researcher Francesca Fardo talks with me about using a pain illusion to understand the role of learning and uncertainty in how we feel pain. We're continuing our coverage of changes in the U.S. government affecting science. Things are moving fast and there is much to talk about. This week, we're going to catch up on the news with Jocelyn Kaiser. She's a writer and editor focusing on policy and NIH. Hi, Jocelyn. Welcome back to the podcast. Hey, Sarah.
So the timing, we'll just put that out here. It's Tuesday, March 11th. This episode will come out Thursday, March 13th, in case anything changes between when we record and when it goes live.
The story we're going to talk about first actually just came out yesterday. It's by Sarah Reardon. And it talks about the cancellation of funding for vaccine hesitancy studies at NIH. They're losing funding. What happened? So yesterday we heard news that NIH was canceling a big batch of grants, at least 33 grants, that are studying vaccine hesitancy, why people are reluctant to get vaccinated,
or ways to encourage them to get vaccinated. And we learned that these grants were being canceled, or in some cases, maybe parts of them eliminated. Can you give us some examples of the kinds of projects that are canceled? So these grants are looking at things like a gonorrhea vaccine and the attitudes that patients might have about getting this vaccine. This isn't actually a hypothetical vaccine. Also, vaccines for diseases like mpox,
human papillomavirus, chickenpox, and COVID. These are about, you know, whether or not they'd be accepted by the public, how to encourage uptake. Are they going to cut vaccine research, testing vaccines, trying to determine their efficacy? So we think they probably are. We've also learned that NIH is also compiling a list of mRNA vaccines, messenger RNA vaccines, which is
the type of vaccine that was used for COVID. At least one of the COVID vaccines was an mRNA vaccine, and it's considered a really promising technology for other vaccines. But we think this is all stemming from Robert F. Kennedy Jr.'s vaccine skepticism. He's now the Secretary of Health and Human Services in the Trump administration. We haven't seen the list, so we don't know how much of it is clinical trials and how much of it is more basic research or testing in animals. But so far, we know that the ones that
concern vaccine hesitancy have been canceled, and they're asking for a list of other things that, I guess, they're going to take a look at again. That's right. Yeah. So another change happening at NIH is how review panels for grants are organized. Instead of having panels at these different institutes within NIH, they'll all be run by the Center for Scientific Review, CSR, which is
part of NIH. Is this a big change for the organization? Well, it kind of depends on who you ask. Right now, 78% of these reviews are done by that Center for Scientific Review, but the institutes, NIH has, I think, 23 institutes that do these reviews themselves for more specialized programs like
when an institute puts out an initiative to encourage research in a certain area and they need a specific sort of skill set on the panel, that institute may run that panel. And so now NIH is going to get rid of all those institute panels and have all the reviews done by the Center for Scientific Review, which NIH says does it much more efficiently. So the goal here is to save money?
Yeah, the goal is efficiency, saving money, and also they said something about getting rid of the appearance of conflicts of interest at the institutes both funding a program and also conducting the review of it. The money-saving will come from consolidating the grant reviews in one place, but are they going to eliminate jobs as well from NIH?
Yeah, they are going to eliminate, we've heard, as much as half of the staff, called scientific review officers, who now run these panels at the institutes. And that's something like 300 people. There's around 300 at the Center for Scientific Review. So I guess they'd be bringing the total number of those people down by about a quarter from what
we've heard. Yeah. Are there concerns that this might make the process for reviewing grants slower or less effective? Yeah, there have been concerns, kind of a mixed reaction on Bluesky to this plan. Some people said, well, the Center for Scientific Review is really the expert on this. I'm okay with them doing all the reviews.
But there are concerns that it's going to slow things down or that there's not going to be enough staff to run these panels efficiently in the way they should be run. Some people have worried that this is part of Trump's kind of clampdown on NIH. But the director of the Center for Scientific Review has made the point that it was actually under discussion last summer. And so it didn't come from the Trump administration.
How does this interact with the freeze that's been happening at NIH? All these panels that normally would be convening throughout the past few months have basically been on hold because they can't add things to the Federal Register. That's right. And so the panels run by that Center for Scientific Review are actually starting to be rescheduled now because they're able to post the Federal Register notices immediately.
And for some reason, the health department had not allowed that to happen for the institute panels. But I actually heard this morning that those panels should also be able to be rescheduled pretty soon. So that means that people who have their proposals at the panels are not permanently in limbo until this consolidation happens.
That's right. I think the consolidation may not take place until like October. So I think they're going to keep going with the panels as they are now for the time being. The one big holdup still, though, is that these grants have to go through a second level of review by councils run by each of the institutes. And those meetings have been on hold.
And so the community is waiting to see if those can finally be scheduled so that big batches of grants that are just in limbo now can be approved. The last thing I want to talk about with regard to NIH is the Senate confirmation hearings held last week for a new proposed head of NIH. This is Jay Bhattacharya from Stanford. What came out of those hearings with respect to his goals for the agency?
He laid out five goals, and they include tackling chronic diseases, which is one of Robert F. Kennedy Jr.'s goals, and encouraging the replication of scientific studies to make sure they're robust. He wants to fund more cutting-edge, kind of risky research. And he wants free speech at NIH, he says. And he also wants to stop risky virus research. What were people, mostly Democrats, concerned about with this appointment?
Yeah, the Democrats, many of them asked Bhattacharya his views on what's happening at NIH, where there have been staff layoffs, like at many federal agencies, and the holdup in awarding grants. And also, there's been a proposal to cut funding for kind of administrative costs of grants that could make a huge dent in NIH's budget. It's been stopped by the courts.
So he was asked about all that. And he basically said that he doesn't want to fire people at NIH. He wasn't part of these decisions, but he will look at them. But he did say that he wants to take a new look at these costs, called indirect costs. He's concerned about where that money's going. And he talked about a couple other things that were pretty controversial. Like even though there's no demonstrated link between autism and vaccines, he said he's open to studying that.
What are the next steps for his confirmation? He appeared before the Senate Health Committee, and they will vote on his nomination this Thursday, and then it will go to the full Senate. And I think that could happen within a few days. So he could be NIH director pretty soon. All right. Anything else you're keeping your eye on at NIH? Do we know anything about how the U.S. budget, which is also heading for limbo, might impact what's going on at the science agencies?
Yeah, so Congress has still not approved the 2025 budget for the federal government, and they had sort of like a stopgap measure in place that expires on Friday. So there's a deadline to get that budget approved. The most likely scenario right now looks like continuing the 2024 funding levels through the end of the fiscal year, which is the end of September.
And those would not make big changes at science agencies. There is a cut for a couple of NIH programs in that bill, but for the most part, science agencies are not going to change a whole lot in that bill if it's passed. Okay, so one other thing that has started bubbling up in the last few days is this request for
the agencies to come up with a reduction in force plan. How is that impacting NIH? At NIH, and I think other federal agencies, they are throwing together these plans for how they would lay people off. I don't know if there's an actual target or if it's just how you would conduct a RIF. They're scrambling to put those plans together. I think NIH's were due to the central office at NIH yesterday and they're due to the White House on Thursday. But yeah, as far as I know, they're trying to do
the best they can to try to protect programs instead of just kind of arbitrarily laying off, as Trump did a few weeks ago, newer employees regardless of their performance level or what they did, which was extremely disruptive. They're trying to do it in a more rational way. And I think they're also trying, the best they can, to take into account people who have already taken voluntary retirement.
There was a new incentive this week. You get $25,000 if you agree to retire, things like that. All right. Thank you so much, Jocelyn. I really appreciate you coming on. Jocelyn Kaiser is a writer and editor who focuses on policy and the National Institutes of Health. You can read all of our policy coverage at science.org slash science insider. Stay tuned for a conversation about cochlear implants and sign language with producer Megan Cantwell and contributing correspondent Kathleen O'Grady.
Hi, Science Podcast listeners. This is Kevin MacLean. I'm one of the producers on the show. I just wanted to hop in here before we get started to ask you to consider subscribing to News from Science. Every week on the podcast, we bring you one of the stories that the News from Science team has published, but there's so much more than what we can cover on our show here.
For only about 50 cents a week, the money from subscriptions goes directly to supporting nonprofit science journalism, reporting on science policy, investigations, international news, and the latest breakthroughs from all around the world of science. Support nonprofit science journalism with your subscription at science.org slash news. You have to scroll down and click subscribe on the right side. That's science.org slash news.
When a child is deaf, some parents choose for their kid to be fitted with medical implants that help them hear, called cochlear implants. But after this decision, they're faced with another choice: whether to use spoken language alone or to also communicate using sign language.
Contributing correspondent Kathleen O'Grady wrote this week about the changing landscape of evidence for these different paths. Thank you so much for joining me, Kathleen. Thanks so much for having me. In broad strokes, what is the underlying reasoning for these different approaches when it comes to guiding language development in children who have cochlear implants? For a long time, there has been a recommendation that parents focus on spoken language,
with the idea that their kids need to maximize how much time they spend listening to spoken language in order to use their implants. But
in 2023, the American Academy of Pediatrics for the first time published updated guidelines that recommend that parents use sign language to give their kids unrestricted and early access to a language from day one, whether or not they have cochlear implants. But after these guidelines were published, the American Cochlear Implant Alliance, a nonprofit advocacy group that includes clinicians, researchers, but also cochlear implant manufacturers,
published a response to the American Academy of Pediatrics saying that the recommendations were inaccurate and biased.
And they said that the authors had ignored research that reports good speech outcomes for kids with cochlear implants and also ignored how difficult it can sometimes be for parents to learn a signed language. So you really have this tension where some experts are advising parents to use spoken language only and other bodies and advocates are recommending that parents
go with bilingualism. Before we get more into the reporting that you did for this piece, I thought it could be a little bit helpful to lay out exactly how these cochlear implants work and why it's a little bit different from biological hearing.
Cochlear implants actually send sound directly into the inner ear via electrodes that are implanted into the cochlea, stimulating the auditory nerve. So they can provide access to sound even for people who are profoundly deaf. Somebody with biological hearing is hearing a whole sound wave, which has like a really smooth
curve to it. They'll be hearing different frequencies all across the frequency range. But a cochlear implant is going to kind of chop that sound up into little blocks. So the quality of sound that comes through a cochlear implant is very, very different. And it's kind of degraded compared to biological hearing, although the experience of people with cochlear implants will be quite different.
And we have a few examples of what speech sounds like to some people through a cochlear implant. Could you kind of set up exactly how researchers were able to get this recreation? Researchers found a way to try to figure out what it is that deaf people hear through their cochlear implants. They found a group of people who are deaf in one ear and hearing in the other ear. They played them a little phrase or a series of little phrases through their cochlear implants. The sun is finally shining.
and then took a clean signal, did various kinds of sound processing to it, played it for them, and asked them how closely that matched what they heard through their cochlear implants. Patient number one. The sun is finally shining. Patient number four. The sun is finally shining.
Patient number eight. The sun is finally shining. These examples, they rated them out of 10, right? Most of them rated it as pretty close to what they were hearing through their cochlear implant. It was either like a 9, 9.5, or even 10 out of 10. I was just really surprised at
how different the distortions were between the different sounds that they were hearing, which kind of made me think: how exactly do clinicians even know whether the implant is working as intended, especially in people who are bilaterally deaf and don't have one hearing ear as that reference point? Yes, it's true. It can be very difficult to establish what the hearing experience is like for somebody who's wearing cochlear implants, especially when that person is very, very young and possibly can't
give any verbal feedback. And one of the things that can go wrong with cochlear implants is that they can be misprogrammed, and the children don't actually have very good access to sound through them. How would one find out if it is misprogrammed, especially if kids are getting this as young as nine months old? So in theory, what audiologists are relying on is
babies or young children having a kind of behavioral response. When they play a certain sound at a certain frequency at a certain volume, they're relying on being able to tell from the baby whether they're hearing that sound. But that's not always possible, and it can be quite difficult to interpret a baby's behavior. One of the biggest indicators that something has gone wrong can be that a child is not developing language along the timeline that would be expected. But
What sometimes happens in that case is that the child is diagnosed with something in addition to their deafness. So they might be diagnosed with autism or another sensory processing disorder, or parents in some cases are blamed for not using enough language with their kids, when in fact what's been happening is that the kid just can't hear. So it's not the case that every single deaf child has an equal chance of success at being able to
learn to understand and interpret speech through the cochlear implant? No, that's absolutely not the case. Research has found some factors that are associated with a greater chance of success. So the younger the child is when implanted, the better the chance seems to be of speaking on par with hearing peers. Maternal education also seems to play a role. How much intensive therapy they get at a young age plays a role. But ultimately, there's no way to predict
whether any individual child is going to do well with a cochlear implant or whether they're going to struggle to develop spoken language. After they're fitted with the implant at varying kind of ages in childhood, what exactly does that process of speech therapy look like? I know it could be very different from kid to kid. One of the main schools of therapy that's recommended by cochlear implant manufacturers and a lot of other specialists is called auditory verbal therapy. One of the things that it's most famous for is
Parents, caregivers, therapists will cover their mouths while they're speaking so that children have to really process the sound without any additional visual cues.
And this is the kind of therapy that often requires parents to commit to not using any sign language with their children, because the theory is that in order for the child to make the most sense of spoken language, they need to maximize their spoken language input and not have any other language input that could, in theory, quote unquote, confuse them. Have there been studies comparing the difference in how
their understanding of speech progresses with or without sign language? A lot of the time these studies are comparing speech with a number of different systems that involve some visual language. So sign language or signed languages are natural human languages that evolve or emerge through generations of signers.
But sometimes signs from these languages get used and kind of adapted to go alongside speech in a way that isn't really a natural human language. So you might speak in English, but use a number of signs from a sign language at the same time. Often these studies are comparing speech with various visual language systems, including natural sign languages.
There was a paper in 2016 in Pediatrics that reviewed a number of studies like this up to that point, and it concluded that they were generally too weak or flawed to draw any strong conclusions about whether speech or sign language led to better outcomes for deaf kids.
There was a bigger study done the next year, also published in Pediatrics, and it found that those who had not been exposed to any kind of visual language did better with spoken language. But that study came in for quite a lot of criticism. Other researchers pointed out that if you're looking at all of these different kinds of visual language systems, you're not really studying the effects of a natural human-signed language. So you can't really draw any conclusions about that.
And they also pointed out that the study was only correlational. So you couldn't really make any causal claims and say using sign languages with kids resulted in worse speech outcomes. You could only say that those two things were linked. And it could be the case that children with worse speech outcomes were more likely to have parents who were saying, well, then we need to sign with them and use sign language to kind of make up some of the ground with kids who are struggling with speech.
What has it been like for parents to navigate that decision of whether to use only spoken language with their child versus spoken language and also sign language, when it seems like the field itself isn't totally at a consensus? Some parents who I've spoken to were told in the hospital when their kid had just been born and identified as deaf.
They were told about American Sign Language and where to get resources for American Sign Language. Other people have had the experience of just not hearing about it at all or actively being steered away from it by professionals. There was a survey published in the Journal of Speech, Language and Hearing Research that found that 105 families with a deaf child reported that
nearly half of all of the different professionals that they saw, pediatricians, ENTs, audiologists, and speech language pathologists, had advised families to use spoken language only with their children and told them that they shouldn't use sign language at all. One of the audiologists that I spoke to for this story pointed out to me that none of these people are actually language
development experts or language acquisition experts. They're medical experts, they're experts on hearing. Speech language pathologists are the closest to being people who have expertise in language development. And this is actually the group that was least likely to advise using spoken language only. But a lot of parents just don't get advice on the full range of options available to them. And it's super variable depending on where they are and just luck of the draw with which professionals they're put in touch with.
Most deaf children are born to hearing parents. If they do choose to communicate with their kid using sign language as well, that's something that they have to learn themselves. Does that end up impacting how fluent the child actually is in sign language? The evidence on this is new. A lot of the studies are quite small, but there are studies finding that, yes, hearing parents learning sign language who are novice signers can provide their kids with good enough input.
to give them good language access. What also seems to be really important is involvement of the wider deaf community. So there's research finding that for deaf kids of hearing parents who go to a school that uses sign language, that kind of makes up some of the ground. But also a lot of parents that I spoke to talked about the importance of finding deaf mentors who would be able to sign with their kids and make sure that they're getting really high quality sign language input. But learning a sign language is a
time commitment, as is the difficulty of intensive speech therapy. And so these are not necessarily all that different. For some parents, they preferred learning the language to the speech therapy. They found it easier and more fun and more rewarding. So either way, there's a time commitment from parents. And I think
Some parents are going to want to make one choice and some parents are going to want to make another. What are the consequences developmentally if a child doesn't have access to any language that they can really communicate in? They can be devastating. Up to the age of about three, it's likely that a child will make good progress when they do eventually get access to language. From three to five, it's going to start getting a little shakier. Beyond five, it seems like they really are going to be permanently on the back foot.
And it can lead to all kinds of consequences. We know that people who get access to language late process language differently in their brains and might never develop full fluency. So that has all kinds of impacts you can imagine on education, memory, numeracy, sequential processing, other cognitive skills, and just reading. The educational outcomes, all of those things can be affected by getting late access to language.
The first year of life is full of milestones towards that first word. So children who don't get access from day one are always getting a little bit on the back foot and they're going to have more ground to catch up when they do eventually get access to language. In terms of kids who are fitted for implants very early on in life, what is the comparison with how they develop spoken language versus people who are hearing?
is it pretty comparable or is there still a bit of a gap between those two, even if it happens very early on? So some of the research on very early implanted kids has found that if they're implanted before nine months, their outcomes are much better than even kids who are implanted at like nine to 12 months, and better than kids who are implanted at 12 months or older. The kids who are implanted before nine months on average in some studies look
roughly equivalent to their hearing peers on certain language tests, but it really depends on the kid and the language test. In some research and on some of the tests,
fewer of the kids even in the early implanted group are in the range of their hearing peers. And there are also kids even in the early implanted group who are not in the range of their hearing peers and who are still behind them on various spoken language outcomes. You've talked about the range of different opinions and advice that people give about whether to choose one path or the other. Do you feel like through your reporting that we're kind of at a turning point where there
is movement towards one consensus versus the other? Or do you get the sense that this is always going to be a tough decision that parents are going to have to make? I think what's interesting is both spoken language advocates and sign language advocates say that they think the wind is shifting in their favor. So I think we're not approaching anything like consensus. Parent choice is very heavily emphasized
by a lot of medical professionals and other providers. But we know that parents are not always being informed about bilingualism, language deprivation, and sign languages. So for parents making the decision, you know, the question of what the evidence shows is one thing, but the question of whether they're really getting all the information they need is another. And in terms of evidence, there are studies finding
that spoken language outcomes for kids with cochlear implants can be pretty close to or match those of their hearing peers if they are implanted early enough. But
we know that that's not all kids, and it's difficult to predict which kids. And we also know that there's this growing body of evidence finding that bilingualism doesn't hurt their spoken language outcomes, and it may help their spoken language outcomes as well as guarding against language deprivation. So in terms of the evidence, there's still more research needed. But I think the question of whether parents are getting good access to information is a much more important one. Thank you so much for talking to me, Kathleen. Thanks so much for having me.
Kathleen O'Grady is a contributing correspondent at Science. You can find a link to her story at science.org slash podcasts. Up next, how fooling the body into feeling pain can give insight into chronic pain conditions. ♪
Illusions have been so useful to science, helping researchers to understand perception. What inputs matter to our sensors? And how do our brains interpret things that maybe aren't really there but are giving all the right signals? We've seen this probably most with visual illusions, but there are auditory ones too. I actually hadn't heard of a pain illusion before, but I think I just missed a trick.
This week in Science Advances, Francesca Fardo wrote about how a pain illusion tricks our brains. Hi, Francesca. Welcome to the Science Podcast. Hi, Sarah. Thank you for having me here today. Sure. Francesca, how does a pain illusion work? Just give me the basic mechanics of what you do to someone to make them imagine pain.
I don't think they are actually imagining pain. I think they truly feel pain. It's just that it's from an unusual type of stimulus. So in the laboratory, what we do is present an innocuous cold and an innocuous warm in alternation, alternating on the skin. And for some unknown reason, this gives a sensation of burning pain and intense heat. Right. So you're not giving them super hot and super cold.
innocuous, just normal, just a slightly elevated or slightly lowered
temperature. But if you alternate them in time or in space, it works both ways, but it works better in space. When there is this spatial alternation, neither stimulus is painful by itself. It's the spatial configuration that creates the pain. One other thing that I can mention, because the name is interesting: the thermal grill illusion. So it's a grill, like a grill?
It's a grill. So this is a phenomenon that was first demonstrated in 1896 by the Swedish physician Torsten Thunberg, where he used a spiral of copper tubes with cold water and warm water passing through. And then there were many other devices that were used to elicit this same sensation. And the most popular device is a grill of copper
cold and warm bars. The grill is not necessary for this illusion. Not if it's temporal, right? Not if you're alternating in time. That's not a grill. But I get it now. That makes sense. It's not super painful, right? This is something you could kind of be like, ow, and then get over it. Yes. As soon as you take away your hand from this set of stimuli, you stop feeling pain almost immediately. What were you trying to learn kind of broadly about the pain illusion with the work that we're going to talk about today?
So my lab is interested in understanding how the brain constructs an experience of pain, even when there is no physical damage to the body. We got interested in the thermal grill illusion, in this phenomenon, because it's an interesting case where we can feel a sensation that
is not really justified by the physical properties of what's going on. So there is a clear disconnect between the subjective experience of pain, what we feel, and what goes on the skin, which is simply innocuous temperatures. So let's go through the setup a little bit here. You have participants that are going to get trained on a tone and a temperature. They're paired together. This sound with hot and this sound with cold. But it doesn't always stay the same. The pairing does get switched up.
Why did you switch things up like that? That's a very interesting point. We were interested in the concept of uncertainty. Uncertainty with respect to the stimulus, and that's why we chose this thermal grill illusion, because it's an ambiguous stimulus that the brain needs to interpret. But also uncertainty with respect to the expectations.
So we designed this task that resembles a guessing game. Participants heard a tone and then they had to make a guess and then they received a stimulus. Meaning like they were just choosing randomly, they were just guessing, but they learned at a certain point that there was an association between a tone and a stimulus and then this association changed. Sometimes it was
completely unpredictable, sometimes it was reversed. We did this because we wanted to engage the participants in learning the cue-stimulus association, and we wanted to create periods of time where the participants felt confident about what to expect and other periods where they felt highly uncertain. I just want to flag here, I realized this reading the paper, that expectation is
kind of a simple word for top-down processing. So instead of saying, give me all the inputs and I'll make a decision, the brain is already starting to set things up for something to happen. And that gives you the other half of perception. Having expectations involved in this was really important because you want to know more about what the brain is doing, not just what those little sensors on your fingertips are doing. Exactly. Yeah.
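As a rough illustration of the guessing-game design described above, here is a small simulation. The block structure, probabilities, and learning rate are my own illustrative assumptions, not the study's actual parameters; the delta-rule learner simply stands in for a participant tracking the cue-stimulus association.

```python
import random

def simulate(n_trials=120, alpha=0.2, seed=1):
    """Sketch of the guessing game: a tone predicts cold or warm with a
    probability that is stable, then reversed, then uninformative.
    A simple delta-rule learner tracks the tone->cold probability;
    its uncertainty peaks when the estimate sits near 0.5."""
    rng = random.Random(seed)
    estimate = 0.5            # learner's belief that the tone predicts cold
    log = []
    for t in range(n_trials):
        # Three 40-trial blocks: stable, reversed, unpredictable (assumed).
        p_cold = (0.9, 0.1, 0.5)[t // 40]
        outcome = 1.0 if rng.random() < p_cold else 0.0   # 1 = cold stimulus
        guess = "cold" if estimate >= 0.5 else "warm"      # participant's guess
        estimate += alpha * (outcome - estimate)           # delta-rule update
        uncertainty = 1.0 - abs(estimate - 0.5) * 2.0      # 1 = maximal
        log.append((t, guess, outcome, round(estimate, 3), round(uncertainty, 3)))
    return log

log = simulate()
```

Under these assumptions, uncertainty is low late in the stable and reversed blocks and stays high in the unpredictable block, which is the contrast the experiment exploits.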
Now you've got this training, this scenario that involves learning and uncertainty and expectation about cold and hot and tones. When does the pain illusion come into this for the participants? Yeah, that's an extra layer of complexity. So here, after a while, while the participant was learning this task, we started occasionally introducing combined cold and warm at the same time to trigger this illusion of pain,
because we wanted to test the hypothesis that when we do not know what to expect, the brain may intensify the pain. The pain would be stronger. Exactly. Were they reporting pain or were they reporting hot or cold? How were they supposed to respond? So after like the stimulus, they were occasionally asked to report how cold, how warm, how painful the stimulus felt. How did uncertainty affect just the regular old hot or cold?
The expectations, they affect cold and warm perception. You basically feel more of what you expect. So if you expect cold, you feel more cold. If you expect warm, you feel more warm. If you don't know what to expect, that doesn't really change your experience of cold and warm, but it does change your experience of pain.
Okay, so how was pain different? Pain was more intense when the participants didn't have any clear expectation about what was going to happen. They didn't know what signal was arriving after a specific cue.
Okay, so you saw this difference when it came to uncertainty. If you hear a hot tone, you feel hot. It gets hotter. If you hear a cold tone and you get the cold, it feels colder. But if you're not certain what temperature is going to come next, it doesn't really accentuate the hot or the cold sensation.
But with pain, if there was uncertainty, you felt more pain. This was the opposite when it comes to uncertainty. Was this what you expected to happen? We did expect that. That's why we wanted to do this experiment. Okay.
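The asymmetry just described, where expectations shift perceived temperature while uncertainty specifically amplifies pain, can be caricatured in a few lines. The functional forms and numbers below are my own toy assumptions, not the paper's model.

```python
def perceived_temperature(sensed, expected, weight=0.3):
    """Percept is pulled toward the expectation (assimilation):
    expecting cold makes the same stimulus feel colder."""
    return (1 - weight) * sensed + weight * expected

def perceived_pain(base_pain, uncertainty, gain=0.8):
    """Under uncertainty the brain 'errs on the side of caution'
    and amplifies the pain signal (illustrative linear gain)."""
    return base_pain * (1 + gain * uncertainty)

# Expecting cold (18 C) while sensing 20 C: feels colder than 20 C.
temp = perceived_temperature(sensed=20.0, expected=18.0)

# Same mildly painful grill stimulus under low vs. high cue uncertainty.
pain_certain = perceived_pain(base_pain=3.0, uncertainty=0.1)
pain_uncertain = perceived_pain(base_pain=3.0, uncertainty=0.9)
```

In this sketch, uncertainty leaves the temperature rule untouched but scales the pain rating up, mirroring the dissociation the study reports.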
And it was so rewarding that our hypothesis was actually right. We did demonstrate what we set the experiment up to test. It's really a very good moment in the life of a scientist. So why did you think hot and cold would be affected by uncertainty one way, basically not at all, while
pain is affected by uncertainty? When we don't know what to expect, we might tend to be more conservative, to err on the side of caution and amplify things, because we don't truly know what's going on. Because if you think it hurts really bad, then you're going to react as if you're in a lot of pain. In the bigger picture, the idea came from thinking about the fact that
some chronic pain conditions may depend on the brain getting stuck predicting pain even after the body has healed and there is no more injury. It's a learned association that the brain keeps going even if it's not really necessary. And so you feel like this is a parallel to what you see in this pain illusion experiment? Yes. Okay. Would this work if you were actually using painful stimuli instead of confusing stimuli?
Yeah, it would still work. There is a lot of literature on placebo and nocebo effects, where pain is amplified if you expect to feel more pain, or reduced if you expect mild pain, for example. Does this tell you something about our processing of pain or our
expectations about pain? We set up this experiment with the idea in mind that the brain is a predictive machine that makes educated guesses about what's going on and what's going to happen next to form perception and guide actions. These are educated guesses we have created based on our past experiences.
and they are updated based on new information as it becomes available. In this specific experiment, this helps us conceptualize pain
not just as a response to bodily injury, but as an interpretation, an interpretation that the brain creates based on past experiences, based on uncertainty and our own expectation of what is an appropriate response in that context. You also used MRI to look at the brains of people in your study. What were you looking for?
We wanted to understand how the brain is biologically organized to construct the experience of pain. So each participant underwent a brain scan and we looked at specific microstructure properties of the brain.
We looked at myelination. Myelin is this fatty tissue that wraps around neurons and helps transmit signals efficiently. And iron content. Iron is something that we know from our everyday life. And it's also present in neurons and it's fundamental for brain function.
So we wanted to link individual differences in the behavior of the participants during the task, so how they learned and how they perceive pain, and individual differences in these brain properties. Oh, okay. This wasn't functional MRI, so they weren't doing these tasks in the MRI machine, but you were looking at the structures in your participants' brains, and they are different person to person, and then you want to correlate those with
you know, how their behaviors were different person to person. What behavioral differences did you see? There were differences in learning. Participants were more flexible or more conservative in the way they learned and made decisions about the way they expressed their guesses.
And there were differences in how intensely they perceived the thermal grill illusion and how the illusion was modulated by uncertainty, so whether they felt much stronger pain when they were uncertain or uncertainty didn't really affect them. And then we found that there were differences in specific brain regions. These are areas that we know are involved in sensory processing, attention processing,
and self-awareness. Other regions are involved in learning, especially in prediction errors, when there is a mismatch between expectation and reality. Another region was the cerebellum, located at the back of the brain. The cerebellum is best known for coordinating movement, but it's also involved in learning and in cognitive processes, especially when we need to adjust our responses to context and experience. And finally, we found regions in the brainstem. The brainstem is just below the cerebellum and is the bridge between the brain and the spinal cord. And this is quite an exciting finding for pain scientists because
in the brainstem we have the central hub of the descending modulatory system, a system that helps amplify or suppress pain signals. So how do you interpret all of these things together? The fact that there are differences between people, the fact that there's this uncertainty effect on the perception of pain,
and the brain regions that are involved. It gives us a way to explain why pain is experienced differently among individuals. One possibility is that some people are more prone to experience pain and persistent pain after an injury because of something that is biological and is related to some fundamental principles in the brain.
and the way they process uncertainty and whether they can unlearn their pain experience. Will you look at how people with chronic pain or persistent pain are different, or whether any of these aspects are different in those people? So the next logical step is to extend our experiment to involve individuals with persistent pain. We also would like to look at brain function. So we looked at brain structure, but that's not the full story.
So we want to understand how these different brain regions that we have identified communicate with one another at rest and while processing pain-related signals. And also we want to look into the role of neurotransmitters like dopamine. We found that many of the regions that were identified are rich in dopaminergic neurons.
So we want to understand how we can harness this information to understand how pain works in the brain and how we can help people who struggle with pain in their everyday life. That's great. Thanks, Francesca. I really appreciate you coming on. Thank you, Sarah. Francesca Fardo is an associate professor in the Department of Clinical Medicine at Aarhus University. You can find a link to the Science Advances paper we discussed at science.org slash podcast.
And that concludes this edition of the Science Podcast. If you have any comments or suggestions, write to us at [email protected]. To find us on podcast apps, search for Science Magazine or listen on our website, science.org/podcast. This show was edited by me, Sarah Crespi, Megan Cantwell, and Kevin MacLean. Special thanks to Kathleen O'Grady and Shraddha Chakridhar for their careful review of the cochlear implant segment.
We had production help from Megan Tuck at Podigy. Our music is by Jeffrey Cook and Wenkui Wen. On behalf of Science and its publisher, AAAS, thanks for joining us.
From April 25th to 30th, the American Association for Cancer Research will host the AACR Annual Meeting 2025 in Chicago. This meeting is the critical driver of progress against cancer, the place where scientists, clinicians, and other healthcare professionals, survivors, patients, and advocates gather to share and discuss the latest breakthroughs in cancer research. Both in-person and virtual registration options include access to live sessions,
Q&A, networking, CME and MOC credits, on-demand access to sessions after the meeting, access to the e-poster platform, and more. Learn more about the AACR Annual Meeting 2025 at aacr.org slash aacr2025 and register for the world's most important cancer meeting.