This is the Science Podcast for February 21st, 2025. I'm Sarah Crespi. First up this week, Deputy News Editor Martin Enserink joins me to talk about how organizers of USAID-funded studies are grappling with ethical responsibilities to trial participants and collaborators as funding, supplies, and workers go away. Next, freelance science writer Sandeep Ravindran talks about creating tiny ML devices. Those are tiny machine learning devices for use in the global south.
Farmers are using low-cost, low-power devices for spotting fungal infections in tree plantations. Clinics could use them for listening for the buzz of mosquitoes that might be carrying malaria. Finally, researcher Michael Barnett is here to discuss evolving evolvability. His team demonstrated a way for microorganisms to become more evolvable in response to repeated swings in the environment.
In January, the Trump administration signed an executive order that stopped the U.S. Agency for International Development, USAID, from doing work or making payments. And now, as of last week, in the middle of February, the U.S. courts have put this executive order on hold. But between the end of January, when the executive order was signed, and the middle of February, a lot has happened with USAID. Martin Enserink is a deputy news editor here at Science, specializing in global health. He edited a piece this week on some of the impossible choices that researchers and public health workers have had to face in light of the stoppage. Hi, Martin. Welcome to the Science Podcast.
Hi, Sarah. Can you give us just a brief rundown of what USAID does? It's a lot of things, but maybe we could just talk about the global health part. They do a lot of stuff. They're probably best known for distributing food and medicines and, for instance, running a very big HIV treatment program called PEPFAR. They do a lot in malaria, but they also do a lot of research, some clinical trials in Africa, for instance.
And some of our reporting is focused on the impacts on science that this freeze has had.
Let's place ourselves in time. This is always important, at least for the month of February. So we're talking today, Tuesday, February 18th. This will come out on Thursday. What is the status of this executive order that was first put out in January? Donald Trump signed his executive order on Inauguration Day on January 20th, and it basically froze all foreign aid from the United States.
A couple days later, the agency basically issued a stop work order to all of its partners around the world. That has really sent shockwaves through the world of development. The team reported on this earlier this month, and they were talking about people
just having to come back home, scientists or researchers that were abroad coming back to the U.S. That has happened as well. The government has basically ordered everybody to return home in 30 days, which has caused a tremendous amount of problems for people. People have lives and families in foreign countries, and all of a sudden they're ordered to come home. There have been some dramatic stories of people who felt abandoned in a dangerous situation in the Congo. So that has caused a lot of problems as well. The freeze was lifted on a temporary basis on Thursday, at least for the time being. And on paper, the aid will resume.
Now, whether that will really happen is unclear. And in some cases, the damage has already been done, of course. Yeah. So that's kind of the focus of this story: these downstream consequences of the stop-work order, of the funding freeze, of people returning to the United States. This has led to some real ethical dilemmas for researchers who were engaged in clinical trials and all different kinds of work with participants. Can you talk about some of what's in the story on that?
Basically, what you saw is that a lot of scientific studies immediately ground to a halt, and that has caused chaos and confusion around the world. And maybe the most dramatic thing that happened, I think, were these clinical trials that were halted. I mean, to give you one example, in South Africa, 17 women have been enrolled in a study of a ring that is inserted into the vagina aimed at preventing HIV.
Now, that study was halted, and those women had to come into the clinic to have these rings removed. The scientists that we talked to described it as really heartbreaking to have to tell them, "We're done." And it raised questions with these women about, was there something wrong with these rings? And they had to tell them, no, it's just that our funder doesn't want to continue this study. Other studies that were in the planning stages or were about to start have been canceled.
And many of these studies, they're not just very expensive, but they're also big operations that involve many local collaborators. And so scientists are saying it's not just a waste of money, but this destroys the confidence that people have in the United States as a reliable partner. It's a blow to U.S. soft power. And people told us we're just embarrassed by this, embarrassed to have to tell people that it's over.
It might not be easy to pick the pieces back up because of this broken trust in these different communities. Yeah, that's right. Can you tell me more about this HIV prevention study that got stopped?
It seems like it has more widespread consequences than not getting that scientific information or breaking trust with the community. That's another example where participants could really be harmed by stopping the trial. It's an HIV prevention study in several African countries that compares different strategies of giving anti-HIV drugs to prevent infection. Now, some women in this trial have received an injectable drug that has a long-lasting effect.
And if you end those injections, the drug slowly disappears from the body, but it lingers for a long time. And during that time, you're at a higher risk of becoming infected. And also, those low doses of the drug in your blood can give the virus a chance to evolve resistance.
So those people could end up with a resistant strain of HIV, which they then could pass on to others. So that's a risk not only to the participants in the trial, but also to the rest of the population. And again, people are telling us that's a very unethical thing to do. So now something's happened in the courts that means that money can start to flow again.
There were two lawsuits by two groups that received USAID money. They basically said this is causing enormous damage, and the judge essentially agreed. He said, sure, the government can review these programs, but to stop them immediately, there's just no reason for that. On paper, at least, that means that the money will start flowing again. But there have been lots of reports that USAID's payment system is also blocked, is offline.
So whether these trials, for instance, can resume is very much in question. And some have simply been ended, for instance, because people were already dismissed from the studies or because the vaccine that was going to be used in the trial perished during the three months the freeze was supposed to last. A lot of things may not get back on track at all. So what are the next steps? This is a temporary restraining order on the executive order.
Yes, these cases will now be handled by the courts and we'll see what the result will be. I mean, there are legal experts who say that this whole dismantling of USAID and basically ending all these programs is illegal because Congress hasn't been involved. We'll see what happens there. But like I said, in many cases, the damage is done already. And I should say, it's not just these clinical trials that I mentioned. There's many other
types of research that USAID supports. For instance, they're a big funder of the Demographic and Health Surveys program, which collects data from developing countries.
Those data are no longer being collected. That is a problem. There's a website of an early warning network for famine that is offline. There's a journal that USAID funded that is no longer taking manuscripts. So the impact is very widespread. Yeah. Now, I mean, it kind of seems...
ridiculous, because this is such a large organization with so much funding. But are there alternative sources of funding that some of these efforts, some of these projects, can look to? Perhaps. Some people hope that, for instance, Europe or China will pick up some of the burden. There's also, for instance, the Wellcome Trust in Britain, a large charity that already funds a lot of research. They may be able to step in. At the same time, there are so many groups and organizations now clamoring for these funds to be replaced that...
it's unlikely that anybody can pick up the tab. Another issue with that, of course, is if the U.S. leaves this void, it's also giving a lot of influence and a lot of soft power to, for instance, China. And that could hurt U.S. interests in the long run.
Thanks, Martin. It's been really informative. You're welcome, Sarah. Martin Enserink is a deputy news editor at Science. You can find all our policy coverage at science.org slash scienceinsider. Stay tuned for a story on using tiny microcontrollers to bring machine learning to fields and farms.
Before we get back to the show, I'd like to ask you to consider subscribing to News From Science. You've heard from some of our editors on here, David Grimm, Mike Price. They handle the latest scientific news with accuracy and good cheer, which is pretty amazing considering it can sometimes be over 20 articles a week. And you hear from our journalists. They're all over the world writing on every topic under the sun, and they come on here to share their stories. The money from subscriptions, which is about 50 cents a week,
goes directly to supporting non-profit science journalism, tracking science policy, our investigations, international news, and yes, when we find out new mummy secrets, we report on that too. Support non-profit science journalism with your subscription at science.org slash news. Scroll down and click subscribe on the right side. That's science.org slash news. Click subscribe.
We've been talking about AI so much lately, mostly the industrial kind, where there's a big training set of data, very expensive chips, intense power usage, water for cooling. But this week, we're talking about tiny ML, small machine learning, that does the computing in place, in a small way, with low power. Freelancer Sandeep Ravindran wrote about it this week in Science. Hi, welcome back to the show. Hi, Sarah. We're going to talk about tiny ML, tiny machine learning. What makes it tiny? Is there like a cutoff point? The ML stands for machine learning, which is a type of AI. And the original definition of tiny ML was any machine learning that runs on devices that use less than one milliwatt of power, which is about the same as a laser pointer. Right, that's tiny. How much do these big...
AI platforms, do we know how much energy they're consuming? ChatGPT is estimated to consume 600 megawatt hours of energy per day. So mega instead of milli; it's many orders of magnitude more power per day. The power usage and the size of the device are kind of what define tiny ML. But it's also often not connected to the internet, so it's kind of a discrete device use case. Why would you want those features in a device that has machine learning going on as well? So a lot of the world doesn't have ready access to electricity and internet connectivity. So they've been kind of shut out from these larger AI models that require internet access to connect to these giant data centers in the cloud
to be able to use these huge AI models. So basically the idea is that once these models have been trained, if they can run on these tiny devices without needing internet access, then you can use them pretty much anywhere. Yeah. What are some applications that you came across when you were looking into TinyML?
Yeah, no, the applications are fascinating. So many of the applications are really homegrown. So because these devices are relatively low cost, anything from a few dollars to $50, they're really small. So they're pretty easy to sort of put in all kinds of places. And because they can run on batteries, like regular AA batteries often, or, you know, run for a long time on solar power and they don't require internet.
People have been coming up with all kinds of uses for these. One common use, very popular, especially in many areas of the global south, is in agriculture. I talked to researchers in India, in Benin, in Brazil, all of whom are using these devices to identify plant diseases. Oh, so it's relying on its ability to capture visual information and discriminate or compare it against like a large data set.
Exactly. So I should also mention that many of these devices, even these low-cost devices, are available with cameras on them. So it's not just the chip.
They have a camera, they have microphones, so you can capture photos or videos or sounds, and then, if you're running an AI model on these, you can use that model to classify and identify particular sounds or images. For plant diseases, what they do is train the AI model to differentiate between a healthy leaf and a diseased one. A lot of plant diseases show up in the leaves, as you might imagine. There's, you know, sort of
a lot of brown spots, yellow spots, red spots. This is what we hear about with rusts and smuts. Exactly. Yeah. If you show the model thousands of photos of healthy leaves and then thousands of photos of leaves with various diseases, you can use these tiny ML devices to tell farmers, oh, this leaf has this particular disease. Now, is this useful because they won't be able to do the identification themselves, or because it does it independently of them? How would it actually work? So there's a couple of advantages. So one, right now, you know, the farmer has to go manually through their entire plantation. For example, one researcher I talked to in India, in South India, works with these farmers
who grow cashew trees on their plantations. So right now, if you want to identify whether a particular cashew tree has a disease, you have to walk around the entire plantation looking at each of the leaves. Often what happens is instead of doing that, the farmers will actually just spray pesticides on all of the plants. You're going to do the walk anyway. Why not just spray pesticide as you're walking, right?
Right. And just not bother identifying which plants are, you know, healthy or diseased. That has a couple of issues. One, pesticides are expensive, you know, for a lot of these small farmers.
It's a huge waste in terms of money, but also the pesticides hurt the environment and hurt the farmer's own health. I mean, a lot of them are pretty poisonous. So one of the researchers I talked to is basically putting these tiny ML devices that have been trained to identify this disease called cashew anthracnose on these tiny drones that then
can fly around these plantations. The drones have these cameras, and the tiny ML device sees what the drones are seeing and can tell you this plant has cashew anthracnose and this plant is healthy. What he plans to do is then have these drones talk to other, bigger drones that actually carry pesticides. Wow. So that's actually already been happening in India, having these bigger drones carry pesticides so you can spray the plants using drones. So the idea is you'll have this tiny drone that is detecting which plants have the disease and then talking to these other drones that carry the pesticides and telling them, go to this plant, this one has the disease, spray pesticides only on these plants.
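To make the on-device classification step concrete, here is a minimal sketch of how a trained, quantized image model can be run with the TensorFlow Lite Python runtime on a small single-board computer. The model file, label names, and input size are hypothetical stand-ins, not the researchers' actual system, and a true microcontroller deployment would use TensorFlow Lite Micro in C++ rather than Python.

```python
# A rough sketch of on-device leaf classification with a pre-trained, quantized
# TensorFlow Lite model. The model file, labels, and input size are hypothetical
# stand-ins; a true microcontroller build would use TensorFlow Lite Micro in C++.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

LABELS = ["healthy", "cashew_anthracnose"]   # assumed two-class model

interpreter = Interpreter(model_path="leaf_disease_int8.tflite")  # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(image_path):
    # Resize the photo to the model's expected input, e.g. 96x96 pixels.
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    img = Image.open(image_path).convert("RGB").resize((w, h))
    x = np.expand_dims(np.asarray(img, dtype=inp["dtype"]), axis=0)
    # (A fully int8-quantized model may also need the input rescaled using
    # the scale and zero point in inp["quantization"].)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()                     # run the tiny model on the device
    scores = interpreter.get_tensor(out["index"])[0]
    return LABELS[int(np.argmax(scores))]

print(classify("leaf_0001.jpg"))             # e.g. "cashew_anthracnose"
```

On a real microcontroller the same loop is written in C++ against a statically allocated tensor arena, which is what keeps memory use predictable at a few hundred kilobytes.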
Yeah, this is so interesting because it's so specific that you can shrink down the machine learning part of it, right? That's kind of the key here: it just needs to do one thing really well, and you can use cheap components to accomplish that. Yes, that's right.
These devices are small in every way, right? They also have less processing power. They also have less memory. So there's a definite limit to the kinds of models you can run on them. You just cannot run something like ChatGPT or these very large models that require huge data sets and you're not going to be able to run like universal translator on one of these. It's not going to plan your vacation and find your cashew tree with a fungus on it. It's just, it can only do one thing. Yeah.
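A quick back-of-the-envelope calculation shows why the memory and power budgets rule out large models. The parameter counts below are illustrative assumptions; the energy figures are the 1 milliwatt and 600 megawatt-hour numbers quoted in the conversation.

```python
# Back-of-the-envelope numbers behind "tiny": weights times bytes-per-weight must
# fit in a microcontroller's memory, and the power budget is about a milliwatt.
# The parameter counts are illustrative assumptions, not figures from the article.

def model_size_kb(n_weights, bytes_per_weight):
    return n_weights * bytes_per_weight / 1024

tiny_cnn = 250_000          # a small image classifier (assumed)
big_model = 7_000_000_000   # a modern large language model (assumed)

print(f"tiny CNN, 8-bit weights:  {model_size_kb(tiny_cnn, 1):,.0f} KB")    # ~244 KB
print(f"tiny CNN, 32-bit floats:  {model_size_kb(tiny_cnn, 4):,.0f} KB")    # ~977 KB
print(f"big model, 8-bit weights: {model_size_kb(big_model, 1) / 1024 / 1024:,.1f} GB")

# Energy: a 1 mW device running all day versus the ~600 MWh/day figure quoted
# above for ChatGPT.
device_wh_per_day = 0.001 * 24      # 0.024 watt-hours
chatgpt_wh_per_day = 600e6          # 600 megawatt-hours
print(f"ratio: ~{chatgpt_wh_per_day / device_wh_per_day:.1e}x")             # ~2.5e+10
```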
That's very cool. So that's an agricultural example. You also have an example for human health. How can you use tiny ML? Because I feel like human health and AI is kind of a contentious area, but maybe if you strip it down to like one task, it wouldn't be so problematic. Yeah. So how can you use this with medical interventions? Tiny ML is good at doing these very specific tasks, and it can do them at much lower cost than some of the medical devices that are being used, particularly in the global south. So I talked to researchers in Brazil who are trying to train tiny ML devices to detect atrial fibrillation, which is this type of abnormal heart rhythm. And you can do this much more cheaply than a typical medical device.
An Apple Watch, I think, can detect some of the same things, and medical devices can do that. But if you're in Brazil, you know, you want this technology to be useful in areas where people can't afford an Apple Watch. Apple Watches are not cheap. Exactly. So these technologies can really bring those kinds of AI models into places where they're just not available currently.
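The story doesn't spell out the Brazilian group's model, so purely as an illustration of how narrow the task is, here is a toy check that flags irregular beat-to-beat (RR) intervals, the kind of signal feature an atrial fibrillation detector keys on. The threshold is an arbitrary assumption, not a clinical criterion, and a real TinyML detector would run a trained model on the device.

```python
# Toy illustration only: atrial fibrillation tends to show up as highly irregular
# beat-to-beat (RR) intervals. A real TinyML detector would run a trained model
# on-device; this just shows the flavor of the narrow task. The threshold is an
# arbitrary assumption, not a clinical criterion.
import numpy as np

def rr_irregularity(rr_intervals_ms):
    """Variability of successive RR-interval differences, normalized by mean RR."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.abs(np.diff(rr))
    return diffs.std() / rr.mean()

def looks_irregular(rr_intervals_ms, threshold=0.08):
    return rr_irregularity(rr_intervals_ms) > threshold

steady = [800, 810, 795, 805, 800, 798]     # ~75 beats per minute, regular
erratic = [620, 910, 540, 1020, 700, 860]   # wildly varying intervals
print(looks_irregular(steady))    # False
print(looks_irregular(erratic))   # True
```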
Okay, third example I thought was really cool was this is an environmental intervention. So monitoring different kinds of pollution. How does TinyML, how is it able to contribute to that effort? Yeah, so TinyML, again, in the global south has been sort of very popular for...
both wildlife tracking and sensing environmental litter and pollution. So in Malaysia, I talked to a researcher who has basically been using tiny ML devices to detect plastic trash. So the plastic trash is a huge problem in Malaysia, as in many other places in the world, but particularly they were concerned about plastic trash accumulating in areas where you have these very young, delicate
mangroves, and the trash can really stunt these delicate ecosystems. There are already volunteer groups that are going around and helping to clean up this plastic trash. And basically, this researcher teamed up with these volunteers to take photos of this plastic trash that
can then be used to train a TinyML device. And the idea was instead of needing volunteers to go out constantly to monitor, you could put a TinyML device there that can constantly gather this data. What the device was able to do based on the training was to identify different types of plastic trash. So you could say, oh, these are plastic bottles, these are plastic bags.
And having this data sort of publicly available on a website could then help both the volunteer group, but also the local administration, work out how to clean up this trash. That is very cool. So again, it's acting as a very specific kind of sensor. It has vision, but it just needs to detect a few things out there. What are some of the approaches being used on the technology side to keep this small and low power? You know, how do you keep tiny ML tiny? The breakthrough was really figuring out that you could shrink some of these models down so that they use very little memory and little processing power without losing too much of their accuracy. And that was the real trick. They did that by figuring out that you could take the so-called model weights, which are the numerical values that sort of tell the model how to work, and reduce the precision of those weights, make them smaller, without affecting the accuracy of the overall model too much. And that allows you to put these really powerful models into these really small, low power, low cost devices, and then you can use them for all kinds of applications. You're kind of taking a bunch of specialized knowledge, like how to train a model,
how to like get it into the actuators, whatever they are, how to program it all to work together. Who is doing that? Because it's not just like one skill. Yeah. So that's actually one of the difficulties or one of the challenges with TinyML is that it's pretty complicated. There's been a concerted effort by various TinyML groups and institutions to
train people in the ability to use TinyML, particularly in areas of the global south. For example, there's this program by Harvard and the International Centre for Theoretical Physics in Italy called TinyML for Development. They basically run training programs and workshops for TinyML all around the world, but they've also teamed up with various partners to send tiny ML devices to more than 50 academic institutions all over the world, from Morocco to Brazil, Nigeria, South Africa, and Malaysia. Part of the reason was that they realized that even though these things are cheap by U.S. standards, you know, like a few dollars to $50.
In a lot of other countries, that's still relatively expensive. So that was one big barrier to using TinyML. But they realized that once these devices were available to people, they still needed some sort of hands-on training. But what's happened is once they started running these workshops all over the world,
a lot of the researchers in many of these countries started taking on the responsibility of running their own workshops. And so now there's this community of researchers all around the world that are both using and teaching tiny ML with these devices. Very cool.
We've talked about cost. We've talked about kind of how to program and deploy these. What about the technology? What's changing there? What barriers are there still to this becoming more widespread, and what might come on board in the next couple of years? So I should mention, you know, with TinyML devices, the breakthrough was really running these machine learning models on what are called microcontroller chips, which are literally the same kinds of chips that run in your washing machine or in your car airbags. These are very low cost, not specialized. And people realized that they're so low cost and so small and they're all over the place, and if you can basically run machine learning models on them, you can really power all kinds of new applications. The problem is you just can't run, you know, particularly advanced machine learning models on these yet. You can't use a model that uses, like, millions of pieces of data. And what's been happening is that
These microcontroller chips have been getting more powerful while staying at the same cost and still using relatively low energy. And now that TinyML has become more prevalent, a lot of the manufacturers of these devices are making TinyML chips that are specialized for running AI. And so you're going to be able to run more and more powerful AI models on these chips while still maintaining the advantages of low power and low cost. Anything else that you thought was particularly cool that you ran into when reporting on this? I think one of my favorite uses of TinyML, just because these devices are so small, is where you can put them, including on the tops of tortoises in Argentina.
So there are these tortoises in Argentina called Chaco tortoises that are a threatened species. But because they're small and they live in this sort of desert and shrubland, they're kind of hard to track, and people don't really know what their life cycles are like.
And there are commercially available wildlife trackers, but they're expensive. So this group in Argentina used TinyML to create these really tiny tracking devices, about the same weight as a golf ball, that they could then stick on top of the tortoise shells and be able to track them, you know, and communicate to the researchers when the tortoises were moving and when they were resting. It was just a really cool use of this technology. Yeah, and I think it really points out this thing that you mentioned in your story, which is it's a tiny use. It's a specific use. And maybe big AI companies aren't interested in solving these very local problems. Yeah, absolutely. These really bespoke, specific, homegrown solutions are where TinyML really shines.
All right, Sandeep, thank you so much. This is really fun to learn about. Thank you, Sarah. This was great. Sandeep Ravindran is a freelance science writer based in Maryland. You can find a link to the feature we discussed at science.org slash podcast. Don't touch that dial. Up next, we're going to talk about inducing the ability to evolve. So evolving, you know, in general means adapting to the environment. There's genetic changes and some of them are adaptive and they're selected and you get more of those guys.
But can an organism get better at evolving? Can it be more evolvable? And can being evolvable, can that be a beneficial trait for selection to operate on?
Now we have Michael Barnett. He and his colleagues wrote about mechanisms for evolving evolvability this week in Science. Hi, Michael. Welcome to the Science Podcast. Hi. Thank you. Okay. So I don't know if everyone is aware of this debate. It's actually kind of a meta question: is it possible to become more evolvable, or less evolvable, through natural selection?
If you think about just mutation rates, you have these tiny genetic changes going on and you have natural selection acting on those things. And this rate of mutation, that can be changed. If the mutation rate just goes up, up, up, that's not good for you. That's not good for your population. But how about being more selective about where the mutations are occurring? What would be a mechanism, and what are the constraints on that? It's a strange idea because it seems to imply some sort of foresight to evolution, right? We know it's this blind process driven by random variation. Whatever works will be selected. So how can you sort of prepare for some future contingency? Well, the solution is that the environment isn't random. There are recurring selective challenges. And that recurrence sets the scene to actually learn about that regularity, to evolve to embody it somehow. The capacity to generate adaptive variation, evolvability, can itself evolve. Because most mutations have a deleterious or neutral effect, a better solution from an evolvability perspective would be to increase variation along particular dimensions or at a particular locus of the genome, where it's more likely to generate
adaptive variation. So you get the adaptive variant, but you don't get the deleterious background. And so that's kind of what we show here with this localized hypermutability. So if you think about organism time, for example, bacteria, things are turning over very quickly. But on the scale of the planet and your environment, swinging between different temperature regimes, swinging between different water availabilities, that is something the species or the organism has had to adapt to over this very long timescale, and that can actually set it up for success. Is that what you're saying? Yes. Well, basically, through this
recurring selection in the past, you've selected things in the present that were good at going through those past selection pressures. The future often resembles the past. That's the basis of learning, right? You learn some regularity, even a regularity in environmental change. That's the beautiful thing about this experiment. If we go into the details, it's like, oh, this is how that works.
Right. So you're like, oh, how does a bacterium embody the history of the changing environment? And so you figured out a way to show a mechanism for this. You used these bacteria and switched them between conditions. And basically it was mutate or die. They had to mutate to survive. Yes. They had to generate a particular adaptive variant that the environment in which we put them demanded. It was to produce an exopolymer that formed the basis of a biofilm, which enabled them to survive in this little environment. But then there would be selection against that type. So basically it's selection for a particular phenotype and then selection against that same phenotype. And that was the repeating cycle. Yeah. Right. That was the recurring selection pressure.
You're in an environment that favors this thing. You're in an environment that disfavors this thing. You're in an environment that favors this thing. And you go on, and the game is to keep playing. You stop playing, you go extinct, when you fail to generate the target phenotype within the given time. So I can see in the abstract how that would generate evolvability. If you have to survive these switches over time, then you need to have this flexibility.
But what does that look like? Let's get into the genes here. The game was to turn on a phenotype and then turn it off and turn it on and turn it off. At the genetic level, this meant activating a gene, a regulator that then turned on a structural gene that would then generate this polymer that made the biofilm. We did that across a number of replicate populations. So they all took different paths to reach the phenotype.
Right. And so we've got variation between these different lineages, and in some of these paths you can turn off that phenotype by breaking the regulator that you just turned on, or you could break the structural gene. For example, if you made a big deletion in the structural gene, you would now be the correct phenotype, but you would never be able to switch it back on again because you've destroyed the gene that's necessary. So that's a dead end, right? And so many lineages took these dead ends. They went extinct. And that gave a lineage that successfully generated the phenotype the opportunity to spread and replace them. So we get this dynamic of lineages dying and spreading according to their ability to generate these phenotypes repeatedly. Right. So they're switching back and forth. So if they've deleted the gene, if they've interrupted it in such a way that it's irretrievably lost to the lineage. Yes.
Game over. And you can also get better at switching. For example, we had mutations that increased transcription of a particular gene. If you ramp up transcription, you can turn that gene on. But there's a secondary effect, because transcription is mutagenic as well. So a lineage that turned on this gene through promoter activation, which elevates transcription, also had a higher probability of further mutations in that gene, further mutations that can turn it on and off. That was one of the very early stages of how this, what we call a contingency locus, this localized hypermutable site, came to be. I want to focus on this for one second. Your environment is saying,
we really care about this, we really care about making this thing. So the focus of the mutations and the accumulation of these changes in that region are going to lead to more or less success. Also, a bunch of different approaches were taken. The different lineages build up this history of mutation. They turn on and off these different genes, and there are many different regulators you can do this with. We went through this convoluted process where many of these lineages just went down dead ends, others survived just by the skin of their teeth, and then some lineages actually started getting better. And how did they get better? Well, in one case they built
a hyper-mutable sequence. So some DNA sequences are more mutable than others. And in this case, you can imagine if you want to turn off a gene, you could insert a sequence. You could insert a small sequence of nucleotides, like a small duplication. And in this case, what exactly happened was we turned off the gene by adding 11 base pairs. So now you've got a duplication. You have two copies, right? So one way to turn a gene off is to insert a sequence that pushes the gene out of frame. So, you know, three base pairs, that's the codon, that's how the genetic code is read. If you add one base pair, you're out of frame. The whole protein's screwed up. But it's still there. The code for it is still there. It can be redeemed. Yes. And if you remove that inserted sequence, you get back to a functional protein. And so these insertions and deletions are really good ways of turning a gene on and off reversibly, compared to a deletion that removes all that genetic information and leads you to a dead end. And is that what you saw in your more successful
lineages? Exactly. The other thing that's going on here is that when you have these duplications, it's more likely that you'll get another duplication at that sequence, because there's a phenomenon called slipped-strand mispairing. It's a universal type of mutational instability. It causes disease in humans, Huntington's and things like this, but it's been co-opted by bacteria as these useful little mutational switches. And so once you get an insertion, you've created a tandem repeat that increases the chance you're going to get another mutation at that sequence. Now you've got the potential to build up a longer and longer sequence of repeats and increase your mutation rate further and further. And this will allow you to go in-frame, out-of-frame, in-frame, out-of-frame, jump on and off. So that's exactly what the bacterium did in our experiment. It's exactly what you see in
pathogenic bacteria. Right. So there is a parallel in the natural world. Yes. This is how a pathogen goes, oh, a whole new immune system I've never been exposed to before. In the real world, they seem to evolve in response to the immune system. So these repeat sequences are often found in genes encoding structures that are on the outside of the bacterial cell. These structures are going to be detected by the immune system, and anything with that structure will be destroyed. So you might want to evade the immune system by turning off that structure. But that structure is also probably useful to you, so you want to turn it on again. And so that's the parallel between our experiment and what's going on in nature. There are other ways to turn on and off genes that are going to be transcribed into structural proteins.
There are ways to turn them on and off with a transcription factor that says transcribe more, transcribe less. There's all these different mechanisms that we have to regulate this kind of thing instead of waiting around for a mutation to bump things in and out of frame.
Is this just early days, and then maybe someday the microbes would evolve a transcription factor or something else that could do this for them? Yeah, why wouldn't the solution be a normal kind of regulation? Yeah, why aren't they doing a normal kind of regulation? Well, it's because this process of responding to the environment at the individual level, becoming a phenotype that increases your growth rate, here requires mutation. And all these on-off mutations become the substrate for what can be built at the longer timescale. So you don't get the sort of mutations that allow you to have this normal sort of regulation. Also, normal regulation is generally coupled to some environmental input. Or it can be stochastic, but that requires some quite elaborate genetic architecture as well. We don't have time to make a new protein that sits down exactly where we want it to turn things on and off in response to environmental stimuli. We've got to go quick and dirty, take a mutation and hope we survive, and then build up from there. Yeah.
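As a small sketch of the in-frame/out-of-frame switching Barnett describes, the snippet below inserts an 11-base run into an invented open reading frame, which scrambles every downstream codon, and then removes it to restore the original frame. The sequence is made up; only the codon arithmetic is the point.

```python
# A minimal sketch of the "in frame / out of frame" switch described above.
# The gene sequence is invented; the point is just the codon arithmetic: an
# insertion whose length is not a multiple of 3 scrambles every downstream
# codon, and removing it restores the original reading frame.

def codons(seq):
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

gene = "ATGGCTGGTACCGATAAGCTTTAA"   # made-up open reading frame
insert = "GTTGTTGTTGT"              # an 11-base insertion (11 % 3 != 0)
site = 9                            # insertion point, after the third codon

broken = gene[:site] + insert + gene[site:]
restored = broken.replace(insert, "", 1)    # the reversible "switch back"

print(codons(gene))     # ['ATG', 'GCT', 'GGT', 'ACC', 'GAT', 'AAG', 'CTT', 'TAA']
print(codons(broken))   # downstream codons all shifted: frameshift, broken protein
print(codons(restored) == codons(gene))     # True: frame, and protein, restored
```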
It is really interesting to see, because I think a lot of weight has been put on individual variation and selection on the individual when it comes to understanding the history of the evolution of all these different species. But from this, it seems like it's going to be really important to think about lineages and generations. Yeah, for sure. The evolution of evolvability, I think this is a very small example, but that's a powerful idea that is going to be relevant in other systems, particularly early life, where this fundamental architecture of the cell was being built. And it's just something we don't consider so often. There are arguments for why higher-level selection is sort of impotent, and our experiment shows that it's not, necessarily. There are actually adaptations in nature that use this process. The other thing I want to add, that we didn't talk about, is that there was an additional advantage to having this hyper-mutable locus. Once the lineages got this going, they were switching between the target phenotypes more rapidly. And because they were reaching the target phenotype more rapidly, they actually had extra time to generate additional adaptive mutations to other aspects of the environment. So now we've got this lineage-level adaptive trait
speeding up evolution and actually facilitating further adaptive evolution. So you really get the self-facilitating dynamic that you'd expect from the evolution of evolvability. That was a surprise to us, because we were thinking, oh, maybe we'll get some mechanism that allows them to switch between two states. But we didn't anticipate that more rapid switching would actually give them more time to adapt to other aspects of the environment.
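Here is a deliberately simplified simulation of the dynamic described in the interview, with all numbers invented for illustration: the environment alternates which phenotype it favors, a lineage goes extinct if it fails to produce the newly favored phenotype in time, and a lineage whose switch locus mutates at a higher local rate (a stand-in for the contingency locus) persists through many more reversals than one relying on ordinary mutation rates.

```python
# A deliberately simplified simulation of the lineage dynamic described above.
# All numbers are invented for illustration; this is not the authors' model.
import random

random.seed(0)
ROUNDS = 40   # environmental reversals: favor ON, then OFF, then ON, ...
TRIES = 20    # mutational opportunities per round to hit the new target

def survives_all_rounds(p_switch):
    """One lineage: each round it must toggle to the newly favored phenotype."""
    state = False
    for r in range(ROUNDS):
        target = (r % 2 == 0)          # the environment flips its demand
        for _ in range(TRIES):
            if random.random() < p_switch:
                state = not state      # a mutation that toggles the phenotype
            if state == target:
                break                  # made it in time: survive this round
        else:
            return False               # missed the window: the lineage goes extinct
    return True

def survival_rate(p_switch, n=500):
    return sum(survives_all_rounds(p_switch) for _ in range(n)) / n

# An ordinary locus versus a hypermutable "contingency" locus (rates invented).
print("ordinary locus    (p = 0.02):", survival_rate(0.02))   # close to 0
print("contingency locus (p = 0.30):", survival_rate(0.30))   # most lineages persist
```

What the toy leaves out is the genome-wide cost a global hypermutator would pay, which is exactly the trade-off raised next.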
So just like with a global hypermutator, evolution was sped up, but without the cost of the deleterious variation across the genome that comes with being a hypermutator. Yeah, that's super interesting. Is this something that's likely to happen in eukaryotes like us, in complex multicellular organisms? So sexual reproduction is...
what I would call maybe the greatest lineage level adaptive trait of all time. This is a trait that allows adaptive evolution to increase in speed by bringing adaptive alleles together and purging deleterious ones. So you actually increase the efficiency of adaptive evolution. It seems to be maintained by something like lineage level selection, where species that lose sexual reproduction, become asexual, tend to go extinct.
Thank you so much, Michael. This has been fascinating. Yeah, thank you. That was fun. Michael Barnett is a postdoctoral researcher at the Max Planck Institute for Evolutionary Biology. You can find a link to the paper we discussed and a related perspective at science.org slash podcast.
And that concludes this edition of the Science Podcast. If you have any comments or suggestions, write to us at [email protected]. To find us on podcasting apps, search for Science Magazine or listen on our website, science.org/podcast.
This show was edited by me, Sarah Crespi, Megan Cantwell, and Kevin McLean. We have production help from Megan Tuck at Podigy. Our show music is by Jeffrey Cook and Nguyen Khoi Nguyen. On behalf of Science and its publisher, AAAS, thanks for joining us.
From February 23rd to 26th, the American Association for Cancer Research will host their inaugural AACR IO conference in Los Angeles. Covering the full spectrum of immuno-oncology, this major conference will encompass the very best of basic, translational, and clinical research in immunology, inflammation, and immunotherapies for cancer, including immuno-oncology drugs, inflammatory modulators, vaccines, and cellular therapies.
Featuring the biggest names in immuno-oncology, this new conference will bring together basic scientists, translational researchers, clinical investigators, regulators, industry investors, and press. Learn more about the AACR IO conference at aacr.org/aacrio and register today to secure your spot at the must-attend immuno-oncology event of the year.