Breast cancer cells are more susceptible to chemotherapy during the estrous phase (low progesterone levels) due to hormonal changes that impact the tumor microenvironment, particularly macrophage activity.
Hormones like estrogen and progesterone indirectly affect chemotherapy efficacy by altering the tumor microenvironment, particularly the influx of macrophages, which can reduce chemotherapy effectiveness.
The first chemotherapy treatment appears to 'freeze' the tumor's state, so subsequent treatments maintain the same efficacy as the initial one, regardless of cycle disruption.
Approximately 30% of breast cancer patients are under 50 and premenopausal, making them candidates for cycle-based chemotherapy timing.
World models are mental representations of the environment that enable humans to plan and reason. Researchers believe AI needs similar models to achieve human-level intelligence.
Current AI systems, like OpenAI's O1, are improving but still lack certain human-like capabilities, such as building accurate world models or generalizing problem-solving across diverse contexts.
The discovery of amber in Antarctica suggests the region once hosted a temperate rainforest, supporting evidence that Antarctica was warmer and more habitable in the past.
Centenarian stem cells provide insights into genetic and cellular mechanisms that enable long, healthy lives, such as enhanced protein quality control and resistance to neurodegenerative diseases.
Footprints from Homo erectus and Paranthropus boisei found in close proximity in Kenya suggest the two species may have shared the same environment, though direct interaction is uncertain.
Coffee drinkers have higher levels of certain bacteria, like Lawsonibacter asaccharolyticus, which thrive on coffee and are associated with health benefits such as virus resistance and reduced diabetes risk.
Welcome back to The Nature Podcast. This week, how the time of the month changes the effectiveness of chemotherapy. And how close is AI to human-level intelligence? I'm Nick Petrić Howe. And I'm Emily Bates. Breast cancer is more susceptible to chemotherapy at certain points in the menstrual cycle, according to new research in Nature.
The team behind the work looked at the equivalent cycle in mice, known as the estrous cycle, to initially find this out, later finding similar results in humans. I reached out to one of the authors of the new work, Colinda Scheele, and she told me how this research got started. Well, this story actually, it started a bit out of frustration, because initially we were studying chemo-resistance in breast cancer.
And we tried to find out how breast cancer cells could become resistant to traditional chemotherapies. And the results that we had, they were very heterogeneous. Sometimes we had very clear resistance, and other times, doing the exact same experiment, we didn't get any resistance at all.
So we were very puzzled by this. And then at some point during a brainstorm session, we figured that potentially this cycle, this estrous cycle in the mice, could play a role in this. So we started to time the treatments based on the cycle. And then all of a sudden it all made sense, because there we saw that actually by chance we sometimes had the mice in one part of the cycle and by chance sometimes in the other part.
And then one of the two cycle stages, it worked much better than the other stage. So at certain points in the cycle, the breast cancer was much more susceptible to the chemotherapy. Yeah. So we found that actually in the first half of the cycle, which we call the estrous stage in the mice...
We see better response to the chemotherapy compared to the second part of the cycle, in this case the diestrus stage in the mice, and there we see less response to the chemotherapy, so more resistance to the treatment. And to understand this as well, obviously hormones play a big role in this cycle. And so you looked at the roles of the different receptors there
that could be triggered by these hormones. What did you find here? Yeah, so we found that this effect seems to be independent of the hormone receptor status of the cancer. So we tested this both in hormone receptor positive breast cancers that express the receptors for estrogen or progesterone, but also in triple negative breast cancer models where actually there is no expression of these hormone receptors. And in both cases, we found the same effect.
Initially, this was quite surprising to us, but then when we looked further into the biology, we actually found that it may not be so much a direct impact of the hormones on the cancer cells, but more an indirect impact, in that the hormones impact the tumor microenvironment,
which then in turn impacts on the cancer cells. And so you mentioned that you looked at the environment around the tumours. Did you find any changes there that might explain why there is this different sensitivity to the chemotherapy? We found several changes, but one of the changes that we really followed up on was the change in the macrophage abundance.
So in the diestrus phase, the second part of the estrous cycle, we see a large influx of macrophages. This is also known in the normal cycle, in the normal tissue, and we see it also in the tumors. And we saw that these macrophages really impact chemotherapy response, because when we depleted the macrophages, the chemotherapy response in the diestrus phase became much better, to the same levels as in the estrous phase. So it seems that definitely the macrophages play an important role in the resistance to the chemotherapy in the diestrus phase. And do you have an idea of why it might be that they're making the tumour, I guess, less susceptible to the chemotherapy? Because I would think, you know, immune cells are there to help and fight the cancer. So why is it that it's less susceptible to the treatment? Yeah, so the exact mechanism, we did not figure it out in this paper.
But of course, I can speculate a bit. And so it could be that the macrophages, they take up a lot of the chemotherapy themselves because they engulf substances, they engulf cells. So they may actually take up a lot of the chemotherapy molecules so that just less of it reaches the cancer cells. That could be one way.
But this is speculation. We have not yet done the actual experiments to really find out what is the exact molecular mechanism there. And your experiments, they were done in mouse models, but also there was some amount of human data as well that you looked at for this. What did you find here? Yes, so the human data, I have to stress that it was a retrospective study. So in hindsight, we determined
in which phase the patient was when the first chemotherapy was given. And then we looked at tumor response to see if there was this correlation. But this was retrospective in historical data and also in a small patient cohort.
But still, when we looked at this data, we found exactly the same. Again, we saw that when the patients were treated in the first phase of the menstrual cycle, so the progesterone low phase, the treatment was always more effective compared to patients that were treated in the progesterone high phase, so the second phase of the cycle. So these findings seem to play out at least in these small bits of data that you've looked at in humans.
Yes, for now, they seem to confirm indeed what we found in the preclinical models. But it will, of course, be very important to now do a prospective study where we really enroll patients, assign them to one or the other phase, and then follow them up over time.
to really confirm our findings in a larger patient cohort. One interesting aspect of this as well is, as I understand it, chemotherapy in humans can disrupt the menstrual cycle. And doing this sort of chemotherapy, I assume you'd want to do several such treatments. So what happens if you give them the therapy once? Does it all just get messed up as the cycle sort of gets disrupted? Yes, so indeed, after one cycle of chemotherapy, both in our preclinical mouse models and in human patients, often after the first round, the whole menstrual cycle or estrous cycle is disrupted and it kind of stops.
We tested it in mice and we indeed also see that the cycle stops after the first round of chemotherapy. But still, we did the experiments to see what happens if we give multiple cycles. And we just gave treatment once a week with the chemotherapy. So the first chemotherapy was timed to either the estrous phase or the diestrus phase. And then afterwards, when the cycle was halted or disrupted, we just gave it every week.
And still over these prolonged periods of time, we would see that we still had a difference in response. So what we think is that after this first chemotherapy, the cancer or the tumor is actually somehow frozen in time. So it remains in the state as it was at the first chemotherapy treatment. So that first treatment, the timing of that is really the key? Yes, we think this is really the key to time the first treatment.
And then afterwards, we think it doesn't matter so much any longer. Now, I think people often consider breast cancer to be a disease of older people, but actually around 30% of breast cancer patients are under the age of 50 and are premenopausal. So they're still having menstrual cycles.
And so do you think what you found here could be really useful to help those 30%? So if we can confirm this finding in our prospective study, then this is very easy to implement in clinical practice because it's just changing the timing of the first chemotherapy by a maximum two weeks. But as I said, it will be important to first have this prospective study done
to be sure that what we find really holds true also in larger patient populations. And if that is the case, I think it's very easy to implement this in clinical practice. That was Colinda Scheele from KU Leuven in Belgium. For more on this story, check out the show notes for some links. Coming up, what researchers are doing to try and make AIs more human. Right now though, it's time for the research highlights, with Dan Fox.
Want to change your gut bacteria? Try another cup of coffee. Researchers studying the dietary habits and genomic sequences of gut microorganisms in nearly 23,000 people have found 115 bacterial species associated with coffee drinking.
One particular microbe, Lawsonibacter asaccharolyticus, was up to eight times more abundant in coffee drinkers than in non-drinkers, and in a culture dish it grew faster when fed coffee of any kind: brewed or instant, caffeinated or even decaf.
Coffee drinkers also had higher levels of certain metabolites associated with some of the health benefits of coffee, such as fighting off viruses and reducing the risk of type 2 diabetes and cancer. Pour yourself a cup and read that research in full in Nature Microbiology. Amber deposits have been discovered in Antarctica for the first time, meaning that the substance has now been found on every continent.
Amber is produced when plant resins are fossilised and had previously been found only as far south as New Zealand. But now researchers have uncovered amber in sediment drilled from the seafloor in West Antarctica's Pine Island Bay. The tiny fragments even contain pieces of what is suspected to be preserved tree bark. Analysis suggests this amber might be between 83 and 92 million years old,
and the team that found it attribute its survival and good preservation to the environment around the South Pole at the time. The location once hosted a swampy rainforest of conifers, which have resins that fossilize well. The high water levels of the time could have also protected the material from oxidation.
The discovery indicates that Antarctica was once warm enough to support resin-producing trees, backing up previous findings that the region once had a temperate climate. Drill down into that paper in Antarctic Science. Next up on the show, we're discussing a question that I'm sure many of you have been asking. Just how close is AI to human-level intelligence?
It's the subject of a feature in this week's Nature, and so joining me to discuss it is Chief News and Features Editor, Celeste Biever. Celeste, hi, how's it going? Very well, thanks, Nick. I'm glad to hear it. And you've been editing this article, so I'm really interested to dive into the details with you. And I said human-level intelligence there, but in the field this is often referred to as artificial general intelligence.
And now both of those, I think, are quite hard to define. So what do we actually mean when we're talking about these things? Yeah, we do mean a machine that would have all the cognitive capabilities that a human does. So it wouldn't necessarily be physically embodied like we are. But in terms of planning, reasoning, the ability to form abstract concepts and
to apply things learnt in a specific situation to lots of other situations, so to generalise from one to another, which is something we do very, very easily. All of that is basically what is meant by artificial general intelligence, or AGI, sometimes also called superintelligence by people who don't like the term AGI because they think either it's a bit imprecise or it has a lot of baggage from science fiction. But whatever you want to call it, it's basically human-level intelligence.
I mean, a machine with human-level intelligence does sound like something out of science fiction. But this is actually something that some researchers are taking a bit more seriously now. Why is that? Yeah, that's right. The phrase started being used around 2007, but for a long time it was regarded pretty much as science fiction. In fact, one of the researchers in the feature says that, until recently, the only people who talked about AGI were crackpots. But the thing that took it from those more fringe elements into everyone's thinking was the arrival of large language models, the most famous example being ChatGPT, which is a bot powered by a large language model. And that was sort of the moment that
really brought these large language models into the wider consciousness. And they're the thing that's kind of caused people to reevaluate, are we actually really close to an AGI, a human level intelligence in a machine, because they have so many capabilities that previous AI systems lacked. And these large language models have been advancing at an incredible pace. And
Just recently, OpenAI launched its most recent model and the other makers of large language models have done similar things. So can you tell me what are the capabilities of this latest suite of large language models? So the most recent...
high-profile release was something called O1, by OpenAI. And that really amazed people even further, so the AGI debate kicked in again, because even some problems that required several steps of reasoning, which had seemed out of reach of LLMs, O1 could do. But it seems there are still some pieces missing. There are visual puzzles where you look at a couple of examples and the person or machine taking the test needs to figure out the abstract rule that takes you from the first image to the second. People can do this with relative ease, but O1 is no better at this, suggesting that it's just more of the same. So it's not a sort of paradigm shift that would take us to AGI.
It's just even better at what it was already doing, but we aren't seeing some of the capabilities that have always been lacking in LLMs. They're still not there. So what is it that researchers believe needs to be achieved in order to actually get to this level of intelligence? What are they lacking? A lot of researchers are looking to neuroscience to answer that question.
And so neuroscience does offer some clues, both to what's going wrong and therefore to what could be needed to actually build an AGI. And one thing that's attracting a lot of attention right now is something called world models, which are models of the environment that
people build in their minds and that are thought to be really key to some of these things that LLMs can't quite do. So world models are thought to enable us to plan ahead. We can build this model of our environment, and then, when we're making a decision about what to do next, we can
sort of simulate what it is we're going to do, look at the outcome, compare that to what we're trying to achieve, and then decide if this is the right course of action. And so how is it that they hope to be able to try and achieve this with AIs? Could they do this with LLMs? Or would it be a different kind of AI altogether? So some people have asked the question, are LLMs building some form of world model?
And some people have claimed that they are. So there's some argument about whether they are or they aren't. But a really interesting experiment trained an AI system with the same architecture as an LLM. It wasn't trained on language, though. Instead, it was trained to predict the next route that a taxi driver in Manhattan was going to take. So it was fed all this data, years and years of taxi rides and the turns that taxi drivers made. The model was given all that and
trained to predict the next turn. And it was able to do that really well. And then the researchers said, okay, let's see whether we can figure out if it's built its own map of Manhattan. Is there a map of Manhattan inside this AI that it is using to come up with these answers? They determined that it had built a kind of map, but
a lot of the map was wrong. It contained a lot of the real streets, but it also contained all kinds of what they called impossible streets. If you look at it, it looks like a bunch of scribbles and you can immediately tell that this is not a map of Manhattan and it doesn't really make sense. And when they tested it in new situations that were not in the training data, it couldn't predict the next turn. So it also revealed not just
that it wasn't building a good world model, but also that this had functional consequences and led to a sort of collapse of its abilities when it was in a new situation, which, again, you would not see with a person. And listeners, I will mention that if you want to see a picture of this world model, check out the feature, because it's very funny to see a spaghettified version of Manhattan, I would say.
But coming back to how to build these human-level intelligences, is there anything beyond the world model that researchers want to try and implement, or any other ways of trying to improve these models? So a lot of people think that we may need a different architecture for an AI, perhaps one that mixes an LLM with some other technology, in order to actually build an AI that can build world models and maybe do some of the other things that we do.
I mean, there's a little bit of debate about this, but most people think we're going to need something else. The LLM isn't going to take us all the way there, because the way they're built is quite limited. They're trained to predict the next token, as it's called. A token could be a word or a character or a grouping of characters; it's the way that the text they digest is broken up to make it something that they can work with. And it's thought that that
kind of focus on just predicting the next token may just be too limited to actually unlock more of these cognitive abilities. So people are looking at architectures that don't work in that way, that somehow understand a problem more in its entirety, like we do. But it's all quite in its infancy. One other thing I could mention: some researchers think that another thing that's going to be really key to a human-level intelligence would be a greater level of autonomy. So right now with an LLM, if you feed it data,
it digests that data and it trains itself on that data. It doesn't make choices about which data to ingest and which not to, but we do. And some people think the only way to build an AI that is like us would be to do that, because anything else just becomes hugely inefficient if you're learning from everything; you just need this huge bandwidth. And to get at the interesting stuff for any given situation, the AI may need to become much better at making those choices. And then it would be more autonomous, which then raises all these other questions of, would it be safe? And do we want an AI that's autonomous and is making those kinds of choices? And then to bring us back around to the initial question, how close is AI to human-level intelligence? And
I guess, would we even know it if we saw it? So the estimates for when AGI might be created range quite widely. Some of the CEOs of the tech companies who are developing these AIs tend to be quite bullish. So those can be as soon as a couple of years. And then there's other people that think it's more than 10 years out or, you know, kind of basically who knows.
And whether we know it when we see it, again, one school of thought is we'll just know. Like, it will be obvious because it's so fundamental to the way that it would operate. But also, it doesn't necessarily mean it would be a sort of sudden thing. Certainly one researcher in the feature describes it as creeping up on us, because it will take time for us to use a technology and to understand its true capability and sort of optimize how we interact with it. So while we might know it when it's here, it might also take us a while to get a handle on it.
So basically, it's all really quite hand-wavy in answer to your question of when it will be here and whether we'll know it. Celeste, thank you so much for joining me. Yeah, it's been a great pleasure, Nick. Thank you. Nature's Celeste Biever there. For more information on this topic, check out the show notes for a link to the feature. Finally on the show, it's time for the briefing chat, where we discuss a couple of articles that have been highlighted in the Nature Briefing.
So, Nick, what have you been reading this time? Well, I've been reading an article in Nature about an effort to create a stem cell bank of cells from centenarians to better understand ageing. Ah, so centenarians, that's people over the age of 100. Yeah, people 100 or over. And I don't know if you know this about people that age, but there's not many of them. So it's actually quite difficult for researchers to study them.
which is why a group of researchers in Boston have made reprogrammed stem cells from the blood of some of these centenarians so that they can use them in different experiments and, you know, keep reproducing those cells forever. Oh, fantastic. So reprogrammed, meaning they are taking these blood cells and they are turning them back
to the stem cell stage. That's right. So these cells are known as induced pluripotent stem cells, where you introduce some factors to the cell and it basically turns it back into a stem cell so it can become any kind of cell that you want to study. And researchers are hoping that this will help them answer some key questions about ageing. Because
The idea has been that if you live to 100 or more, you probably have some pretty nifty genetics in order to live that long because chances are you've encountered a lot of disease, a lot of challenges throughout your life, and so probably you have some genes that might allow you to better resist disease or better bounce back after disease. But
doing those sorts of genetic studies has been difficult. And so the hope is that these cells will allow researchers to understand especially the genetics, because while making induced pluripotent stem cells makes the cell sort of 'younger', in inverted quotes, the genetic information remains the same. So if you try different things, you can see what the genes are doing. And
Why do we need these stem cells to get that genetic information from? Why couldn't we, say, look at the genetic profile of someone just from, for example, the blood cell? Well, you could do that as well. And that is something researchers certainly have been doing. But with these stem cells, you can do all sorts of fun experiments. So, for example, they've turned some of these stem cells into neurons to then get an idea of
what is going on in the brain of these centenarians to allow them to withstand aging. So, for example, and this is unpublished results, so take it with a pinch of salt, they found that there's something odd going on with the quality control of
proteins in centenarian neurons. So it's understood that as we age, some of the quality control processes, so sorting good proteins from bad proteins in neurons, start to break down. And so when they looked at this with the neurons grown from centenarian stem cells,
they found that their neurons were actually not doing very much of this quality control. But what happened was, when they challenged the stem cells and gave them some sort of stressor, then suddenly this process went into overdrive and they were very quickly and efficiently sorting the bad from the good. So this is sort of some of the insights they can get from these stem cells. Fascinating. Is there anything else that's been sort of preliminarily shared?
The other preliminary thing that some researchers found was that the brain cells derived from these stem cells have high expression of genes associated with protection from Alzheimer's disease. And that's maybe not too surprising, because when they were looking for centenarians, many of them were healthy, both cognitively and in that they were able to live independently and that sort of thing. So chances are they won't be as affected by Alzheimer's or something like that. But now researchers can start to try and probe the mechanisms of that. Oh, that's fascinating. So what is the hope for the future, that we will all be able to live longer by uncovering the secrets of these genetics?
I think that is the ultimate aim, but for now the more immediate aim is to use these cells to make other kinds of cells that are relevant to ageing. So maybe liver, muscle, gut cells, and this is something we've talked about on the podcast before as well, maybe making organoids, which are basically a group of tissues that are like a mini organ that researchers can then use to probe for more details on what's going on.
Well, centenarians obviously have done a fantastic job living to 100 years, but I've got a finding from 1.5 million years ago. I've been reading about it in Nature. They found footprints made by two species of ancient human relatives in Kenya.
And it seems to provide the first evidence that these archaic hominin species might have coexisted in the same place at the same time. Wow, so I'm imagining that these footprints are just of the two different species together, and then from the way they look, I guess, they're able to be like, oh, those guys were together at the same time. Is that right? It's close to that. So in 2021, researchers found multiple sets of ancient footprints at the Koobi Fora site in the East Turkana area of Kenya.
And these included one continuous path of impressions left by one particular hominin and also some isolated prints of three others. And the surface of this area they knew was around 1.5 million years old. And with the impressions around the area of sort of like sand and reed beds, it suggests that the area was probably a lake.
And when they started analysing these prints, they realised that these two individuals came from different hominin species and also probably walked through the lake area within hours or days of each other. And this is the first direct record of different species coexisting in the same place. So maybe not hand-in-hand walking down this lake bed, but certainly in the same sort of area at the same time. Yeah, they can't say for certain. It doesn't look like the prints were together as such, but...
You know, it does seem like they would have been in the area. So they were probably at least aware of each other; whether they were interacting or not, we don't know. So the two species were Homo erectus, which is sort of the forebear of modern-day humans, and Paranthropus boisei, which is more of a distant relative of ours, shall we say.
And there have been previous studies which suggested that different hominin species lived alongside each other, but the fossils are often separated by large areas and the estimated dates can be thousands of years apart. This is the first time that
we sort of have proof that they probably were very close to each other and were aware of each other. So I'm guessing evidence like this is quite rare. Is there anything else that these fossils can sort of show us? Yeah, so there were some really fascinating things. I mean, these footprints, if you do go and have a look, there are some amazing pictures. And Paranthropus boisei also walked upright, but they had a flatter foot,
And what I thought was particularly fascinating was that the position of their big toe moved from step to step. So it has a much greater range of motion than anything we as modern-day humans would have. Ours extend outwards maybe to about 10 degrees; those of Paranthropus boisei went round to about 19 degrees on one of the feet, on one of the prints. Oh, wow. And alongside the hominin footprints, the site also had tracks from cattle, other animals and lots of different birds, including a giant extinct stork. That's really cool. I like these studies that sort of paint a picture of what it would have been like 1.5 million years ago. That's super interesting. Exactly, and they're hoping that future studies will focus in a bit more on the animals and the birds and really sort of bring to life
this whole landscape from 1.5 million years ago. Well, that's a super interesting story, Emily. Thanks. And listeners, for more on those stories, check out the show notes for a link and for a link where you can sign up to the Nature Briefing to get more like them. That's all we've got time for this week. As always, you can keep in touch with us on X, we're at Nature Podcast, or you can send an email to podcast at nature.com. I'm Emily Bates. And I'm Nick Petrić Howe. Thanks for listening.