Nature.
Welcome back to The Nature Podcast. This week, using AI to detect cardiac arrests via a smartwatch, and the latest from the Nature Briefing. I'm Benjamin Thompson. And I'm Nick Petrić Howe. When it comes to treating a cardiac arrest, every second counts. This week, researchers demonstrate a way to quickly detect this life-threatening condition and automatically call the emergency services, all using a person's smartwatch. Cardiac arrests occur when the heart suddenly stops beating, perhaps caused by irregular heart rhythms, known as arrhythmias, or by heart attacks, where key blood vessels get blocked.
And if nobody is around to see someone having a cardiac arrest and offer immediate aid, the chances are it will be lethal, as every minute without intervention increases the risk of death. Each year, millions of people experience cardiac arrest, with somewhere between 50% and 75% of these events going unwitnessed.
But these days, many people wear smartwatches that can detect their pulse. A sudden loss of pulse is a key sign of cardiac arrest, so researchers have wondered if this wearable tech could be used to signal that somebody is in trouble. The problem with that is making sure the watches only call for help when someone is really having a cardiac arrest; otherwise, it could risk overwhelming health services with false calls.
This has not been easy, as it's hard to get data on cardiac arrests. Now though, a team from Google have developed a machine learning algorithm that can detect these deadly events. I called up one of the researchers, Jake Sunshine, and he laid out the key challenges to making this work. It's a really challenging area to study because these events...
are individually rare. And so assembling cohorts to follow people waiting for events to happen is really, really challenging. It's also a challenge because it's a life-threatening condition. And so you can't take healthy volunteers and then sort of induce this life-threatening state in order to, say, develop an algorithm. And so...
That brings us to your new paper. And so this is a way to have automated detection with a smartwatch. How did you go about doing this, given the challenges that you just mentioned? So a watch is a common device that uses something called photoplethysmography to measure changes in your vasculature when the heart beats. And that allows these watches to, say, measure your pulse and your heart rate.
We know that when people experience a cardiac arrest, they have a pulseless arrhythmia. And so what we did was work with cardiologists and their patients who had previously had a cardiac defibrillator implanted. Some people require these to live their lives because they're at such high risk of these events. Sometimes those devices are tested as part of their routine care. And so we collaborated with these cardiologists and with patients who were already scheduled to have these procedures done, and had them wear a watch to transiently capture what it looks like when someone's heart stops. The arrhythmia is induced to test the implanted device, and the watch measures how the pulse changes using its optical sensors. And how closely would this pulseless arrhythmia
replicate what it's like to have a cardiac arrest? Thankfully, in the safety of a procedural suite, their heart is shocked back into rhythm immediately. That doesn't happen in an out-of-hospital environment, especially if no one is around.
And so there are definitely some key differences, but it does capture that underlying rhythm that is commonly seen in people who experience a cardiac arrest. So you're using these people going through this procedure as, sort of, the positive cases to train this algorithm, to show it: this is what a cardiac arrest looks like.
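To make the signal-processing idea concrete, here is a minimal sketch of flagging a sustained loss of pulsatile signal in a photoplethysmography (PPG) trace. This is not Google's algorithm: the sampling rate, thresholds, window length, consecutive-miss rule, and function names below are all invented for illustration.

```python
# A toy pulse-loss detector over a PPG trace. Every numeric choice here
# (sampling rate, thresholds, window sizes) is an illustrative assumption.
import numpy as np
from scipy.signal import find_peaks

FS = 50  # assumed PPG sampling rate in Hz; real watches vary

def pulse_present(window: np.ndarray) -> bool:
    """Heuristic: does this short PPG window contain a plausible pulse?"""
    window = window - window.mean()
    # Peaks must stand out from the noise floor and be spaced no closer
    # than the beat interval of a 220 bpm heart rate.
    peaks, _ = find_peaks(window, height=window.std(),
                          distance=int(FS * 60 / 220))
    if len(peaks) < 3:
        return False
    intervals = np.diff(peaks) / FS            # seconds between beats
    rate = 60.0 / intervals.mean()             # beats per minute
    regular = intervals.std() / intervals.mean() < 0.25
    return regular and 30.0 <= rate <= 220.0

def pulseless(ppg: np.ndarray, window_s: float = 5.0,
              consecutive: int = 4) -> bool:
    """Flag a loss of pulse only after several consecutive empty windows,
    so a brief motion artefact or a loose watch doesn't trigger it."""
    n = int(FS * window_s)
    misses = 0
    for start in range(0, len(ppg) - n + 1, n):
        if pulse_present(ppg[start:start + n]):
            misses = 0
        else:
            misses += 1
            if misses >= consecutive:
                return True   # a real system would run further checks first
    return False
```

A deployed system would presumably layer many more safeguards on top of anything like this, such as motion-artefact rejection and on-body detection, before ever escalating to a call.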
In the paper, you also did something called arterial occlusion, where you temporarily blocked people's arteries, to broaden that out and test this on other people as well, because that produces a signal very similar to the procedure. But what did you do for the opposite? How did you show this machine learning approach, this algorithm, what it looks like when people are not having a cardiac arrest, if you know what I mean? So once you have that algorithm, you're talking about minimizing false positives, which is a really, really big consideration. You need a lot of negative data to make sure that whatever you're developing doesn't lead to excessive false triggers. Yeah.
And so we were able to take the algorithm that we developed in this way and run it back retrospectively over hundreds of thousands of hours of data where we know no one had one of these cardiac arrest events. And that helped us understand how likely the algorithm was to produce false positives.
And then we ran prospective validation studies with diverse cohorts to understand how frequently an errant classification escalating to a phone call occurred. And how accurate was this approach at determining that someone was having a cardiac arrest? In the paper, it says it has a 67% sensitivity. Does that mean it gets it right 67% of the time? So yeah, that...
requires a lot of context. So if you can imagine a wearable device that is connected to an emergency response system like 999 or 911, that system is a public resource that we all rely on.
And so we really had to be thoughtful in designing a system that respects that public resource. So we really optimized to do something that is clinically meaningful in a situation that's nearly unsurvivable if it goes unwitnessed, while balancing the need not to call emergency services unnecessarily.
And in our validation study, that's what we found: about 67% of the true cardiac arrest events in that simulated setting were detected by the watch. And then it had a specificity that is implementable at a societal scale, one that's not going to overwhelm 911 systems.
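To make those two numbers concrete, here is a quick back-of-envelope sketch. The 67% sensitivity is from the paper; the counts of negative windows and false alarms below are invented purely to show how the two quantities are computed.

```python
# Sensitivity: of the true cardiac-arrest events, what fraction did the
# watch catch? Specificity: of the pulse-present data, what fraction was
# correctly left alone? Only the 67% figure comes from the paper; the
# other numbers are made-up illustrations.
true_events = 100                # simulated cardiac-arrest (positive) cases
detected = 67                    # cases the watch escalated

negative_windows = 1_000_000     # hypothetical pulse-present windows screened
false_alarms = 2                 # hypothetical windows wrongly escalated

sensitivity = detected / true_events
specificity = (negative_windows - false_alarms) / negative_windows

print(f"sensitivity = {sensitivity:.0%}")    # 67%
print(f"specificity = {specificity:.4%}")    # 99.9998%
```

The asymmetry is the point: even a tiny false-alarm fraction matters enormously when the negative class is hundreds of thousands of hours of ordinary wear.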
And were you still getting some of these false positives? Because in the paper it says that for every 21 user-years there was an unintentional call. So yeah, if a user had a watch, on average it might make an errant call once every 21 years.
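For a sense of what that rate means at scale, here is the arithmetic applied to a hypothetical large user base; only the one-call-per-21-user-years rate comes from the paper, the user count is invented.

```python
# Scaling the reported errant-call rate to a large, hypothetical user base.
calls_per_user_year = 1 / 21      # errant-call rate reported in the paper
users = 10_000_000                # hypothetical number of watch wearers

calls_per_year = users * calls_per_user_year
print(f"~{calls_per_year:,.0f} errant calls/year "
      f"(~{calls_per_year / 365:,.0f}/day) across all emergency services")
```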
And so that's not that often, right? You know, you do need to account for large numbers of people wearing these, but even when you do, it amounts to something that can be responsibly deployed at a societal scale. And is this just an algorithm that can be rolled out onto any smartwatch, or does it need this particular kind? How do you see this progressing? So the paper is focused on one type of watch, the Google Pixel Watch.
This, in theory, could be done with other devices. There are lots of nuances and validations required, you know, going through regulatory reviews and so on. But in general, the purpose of this paper is to show that it can be done on a consumer wearable device.
And our hope is that as these capabilities expand, it provides a new way to keep people safer from these events, which have really high mortality if they happen in unwitnessed circumstances. That was Jake Sunshine from Google Research and the University of Washington in the US. For more on that story, check out the show notes for some links. Coming up, research shows that dogs blink more when they see another dog doing the same.
Right now, though, it's time for the research highlights, with Shamini Bundell. When vaccines are in short supply, could a fraction of a dose be enough? Researchers working in Uganda and Kenya wanted to know how effective a low dose of yellow fever vaccine would be in unvaccinated adults. They compared the effects of a full dose of the vaccine, which is nearly 14,000 international units, with lower doses. They found that a dose of only 500 IU was enough to promote similar antibody production to the full dose. This could help battle the mosquito-borne disease during vaccine shortages. You can find that study in the New England Journal of Medicine.
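A quick bit of arithmetic on what that dose sparing would mean for supply, using only the two doses mentioned above; the conclusion is a simple ratio, not a claim from the study itself.

```python
# How far a fixed vaccine supply could stretch if the fractional dose
# proves sufficient. Both dose values come from the study as described.
full_dose_iu = 14_000   # approximate full yellow fever dose (IU)
low_dose_iu = 500       # lowest dose matching the full dose's response

stretch = full_dose_iu / low_dose_iu
print(f"One full dose could, in principle, be split into ~{stretch:.0f} "
      f"fractional doses during a shortage")   # ~28
```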
An umbrella-shaped fossil tree shows that early forests were unexpectedly complex.
Fossils of early trees are rare, and fossils that preserve the leafy crowns on top of the trunks are even rarer. A new fossil from Canada is one such rarity, revealing a tree from the early Carboniferous, less than three metres tall, with a slender trunk topped by a dense mop of long, thin leaves. The leaves stick straight out from the trunk, giving the tree the appearance of a giant umbrella, with an estimated radius of 5.5 metres.
The team think this was one of the first sub-canopy trees, adapted to live in the dappled shade of taller trees, which suggests that early Carboniferous vegetation was more complicated than previously thought. Branch out and read that research in Current Biology.
Finally on the show, it's time for the briefing chat, where we discuss a couple of articles from the Nature Briefing. Nick, why don't you go first this week? What have you been reading about? So this week I was reading an article in Nature about an AI tool, developed by Microsoft, that can create...
impressive video game worlds. Right, you and I are obviously big video game fans, and it takes a lot of work to invent an entire world. So what's going on here? Well, what is happening here is that this is a generative AI tool, like the kind that powers things like ChatGPT, which you may be familiar with.
And so this is one that has been developed by Microsoft to help game designers come up with new ideas. And in principle, it works very similarly to something like ChatGPT. You give it a frame, a video clip, or, you know, an image from a game, and then it generates what it thinks will happen next.
So the idea could be that you say, okay, this is something I want to happen in the video game, and then the AI will generate what it thinks would happen next. And the other thing it does is include the inputs from the player. So if you say, this is what the scene is going to look like, and this is the button that the player has pressed, it will come up with a plausible next scenario.
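As a sketch of the loop being described, frame plus player input in, next frame out, fed back in autoregressively, here is a toy version. The linear "model" below is a random stand-in purely so the loop runs; it is not WHAM/Muse's real architecture or API, and the dimensions are arbitrary.

```python
# A toy action-conditioned world-model loop: predict the next frame from
# the current frame plus the player's input, then feed the prediction
# back in, the same autoregressive pattern as a text model like ChatGPT.
import numpy as np

rng = np.random.default_rng(0)
FRAME_DIM, ACTION_DIM = 64, 8          # toy sizes, not real resolutions

# Stand-in "weights": next_frame = tanh(W @ [frame; action])
W = rng.normal(scale=0.1, size=(FRAME_DIM, FRAME_DIM + ACTION_DIM))

def predict_next(frame: np.ndarray, action: np.ndarray) -> np.ndarray:
    """One autoregressive step: next frame from current frame + input."""
    return np.tanh(W @ np.concatenate([frame, action]))

def rollout(start_frame: np.ndarray,
            actions: list[np.ndarray]) -> list[np.ndarray]:
    """Generate one frame per controller input, feeding predictions back."""
    frames, frame = [], start_frame
    for action in actions:             # e.g. encoded button presses
        frame = predict_next(frame, action)
        frames.append(frame)
    return frames

# Example: ten steps of "gameplay" from a blank frame and random inputs.
clip = rollout(np.zeros(FRAME_DIM),
               [rng.normal(size=ACTION_DIM) for _ in range(10)])
print(len(clip), "predicted frames")
```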
So a little bit like those AI art tools where you can drag the size of the canvas and it fills in the rest based on what's already there? Yeah, exactly that. Now, this one, I must say, is very, very specific. This particular tool, which Microsoft have two names for, they call it WHAM and they also call it Muse, so whichever you prefer, has been trained on a video game called Bleeding Edge.
And it's taken seven years of gameplay data to generate this thing. But it only works for this particular game. Right. And I guess...
The proof is in the pudding, right? Do people who've played those games like what the machines come up with? Because if they don't, it's all for naught, I suppose. So in the article, a few video games researchers were interviewed. And on the one hand, they applauded Microsoft for making a developer-focused tool, because one of the things Microsoft did here is speak to developers about what sort of things would be useful for them, and that shaped what they built. And one of the things they found particularly neat
was the fact that you could drag and drop different assets in. So if you had an image from a video game, you could drag in like a new enemy or a new player or something like that, and then it would create a new world around that thing. So they applauded that. But on the other hand,
As I said, it is very, very specific, so it's unclear how useful this would be for other video games, for example, or for developers who may not have years' worth of gameplay data to plug into a model. Right, so applying it to a totally different genre might not work, as things stand. As things stand. I mean, the Microsoft team are hopeful that this could be useful for things beyond the training data, coming up with new scenarios and levels for different games. But it is challenging. Modern games are very, very complex, way beyond Pong or Pac-Man or anything like that, where you have very simple rules. Video games have different levels, they can have different gameplay mechanics, and those things can also change as you move through the game. So it's unclear how easy it would be to apply this to different games without having all of this training data to build the model. And the other thing that needs to be balanced is that AI tools like this use a lot of energy and have impacts on the environment. So, you know, you need to balance those needs. But
In principle, it could be a useful tool to help developers come up with new ideas. An interesting one: helping developers rather than replacing developers. Yeah, and I think that is something that concerns people in this field, because one of the researchers who was interviewed for the article worries about such generative AI systems
enabling studios to lay off staff or create low-quality, AI-generated content. So he said that we need to work together, as consumers, designers, researchers and game-studio owners, to make a better future. Right. Well, looking forward to seeing what that future might look like. But let's move on to our second story
this week, and it's something I read about in Science, based on a paper in Royal Society Open Science. It's evidence of a potential way that dogs can communicate with each other, and that way is blinking. So not barking, not sniffing each other's
rear areas, but more blinking at one another. I mean, those are, I guess, important ways for dogs to communicate. But yeah, blinking does seem to be quite an important non-verbal communication strategy. When you, a human, talk to another human, if you blink, they will as well, subconsciously, without realising. And it seems that this is quite a subtle, as I say, non-verbal form of communication, and it is seen in other primates too. So there is this mimicry aspect
between individuals. Now, dogs, domestic dogs at least, are known to blink more around other dogs. And it's also thought that blinking may play a role in keeping the peace with other dogs, and perhaps humans as well. And dogs will mimic other things, like yawning and facial expressions, apparently. So there's been quite a lot of research done there. But I think the researchers here wanted to know
Is blinking something they do as well? Is mimicry seen in this situation? And that's what they set out to test. And so how did they go about that? Did they just get dogs to blink at one another? Pretty much, actually. So they created a variety of short videos of a terrier, a cocker spaniel, and a border collie looking at a camera. Okay, they were looking at something behind the camera, a toy or a bit of food, apparently. And they edited...
these dogs looking into the camera into three sets of videos: ones where the dogs were blinking; ones where the dogs were not blinking, just sort of staring down the lens; and ones where the dogs were licking their nose, which is believed to be a sign of eagerness or frustration. The blinking and the nose licking happened every four seconds in these videos. They then showed these to 54 adult pet dogs of various breeds who didn't know the dogs in the videos, so they weren't poochy pals. And these 54 dogs wore heart monitors and were filmed to, you know, check how often
they blinked. So the idea was just to see how often they were blinking; they weren't trying to work out what's being communicated here at this point, I think, and we can get to that in a minute. But this is where things get brilliant, Nick. So the article says that a few of the study dogs just got bored and went to sleep, which is something I think many of us can relate to. But those that didn't,
they blinked around 16% more on average when they watched the other dogs blinking in the videos, compared with the other videos. And the nose-licking videos didn't elicit an increase in nose licking. Okay, so it does seem like blinking begets blinking in dogs, which implies that this mimicry does exist in them. Yeah, and there are some interesting questions here, because they don't know whether the blinks were synchronized to the ones in the videos, or how quickly the dogs
responded with their blinks. Okay, so the timing, I think, is important to understand. But the researchers say that this increase does suggest that this could be mimicry. And what was interesting is that
the dogs didn't show any increase in their heart rates. They weren't stressed; this wasn't a direct response they were thinking about, right? This is presumably subconscious, like it is if you or I were doing it. As for why they're doing it, which I know you want to ask, the answer at the moment is a shrug, I think. But it does add some grist to the mill that this is a form of non-verbal communication that is important in a bunch of
different animals. Now, I covered a book on the show a little while ago about vocal communication between animals, and the fact that researchers are trying to unpick what sounds mean. And I think this shows that there is an entire extra, complex level of non-verbal communication that,
I mean, we've no idea what it means or what it signifies. I think in the article they say that maybe it's a way of saying, hey, I'm relaxed. I'm blinking back at you. Don't sweat it. We're going to be friends. But that, I think, is speculation at the moment.
Well, listeners can't see, but I'm blinking encouragingly at Ben to tell him it's time for the end of the briefing chat. Thanks so much for that one, very interesting story. And listeners, for more on those stories, and for where you can sign up to get more like them, check out the show notes for a link. As always, you can keep in touch with us on X or Bluesky, or you can send an email to podcast@nature.com. I'm Benjamin Thompson. And I'm Nick Petrić Howe. Thanks for listening.