This week's episode is brought to you in part by Science Careers. Looking for some career advice? Wondering how to get ahead or how to strike a better work-life balance? Visit our site to read how others are doing it. Use our individual development plan tool, access topic-specific article collections, or search for an exciting new job.
Science Careers, produced by Science and AAAS, is a free website full of resources to help get the most out of your career. Visit sciencecareers.org today to get started.
This is the Science Podcast for February 14th, 2025. I'm Sarah Crespi. First up this week, we have international news editor David Malakoff. He joins us to discuss a possible big change in NIH's funding policy. We talk about the outrage from the biomedical community over the potential cuts and the lawsuits filed in response. Next, freelance journalist Christa Lesté-Lasserre talks about training artificial intelligence on animal facial expressions.
These days, this approach can be used to find a farm animal in distress. One day, it could help veterinarians and pet owners better connect with their animal friends. Finally, we have researcher Kea Ganoski. She's here to chat about the case against machine vision for the control of wearable robotics.
It turns out that adding cameras to exoskeletons, to things that we wear, may cost too much, like in loss of privacy. And that could outweigh the benefits of having robotic helpers on our arms and legs.
Now we have Science's international news editor, David Malakoff. He's an international editor, but in the past he was our policy editor and has covered the U.S. government for a very long time. He wrote about a recent announcement from the Trump administration changing how NIH grants money to researchers. Hi, David. Thanks for coming on the show. Yeah, thanks, Sarah. Good to be here. So let's start with the timing. We are recording this on Tuesday, February 11th. The change was announced on Friday by NIH right before the weekend.
What did it say? What did they announce on that Friday? Yeah, well, it's been a head snapping three or four days for the research community. So what happened was late on Friday, Washington, D.C. time, the NIH dropped a memo. And that memo basically said that they were unilaterally changing the amount of money that research institutions received.
could be reimbursed for research expenses. These are things like facilities, the administrative staff that administers grants, and other odd expenses that are not directly related to the research being done. And that's why they're called indirect costs. And so they said they're going to cap that at 15%.
It's a little complicated to explain how indirect costs work, but the bottom line is that universities can get reimbursed for these expenses in proportion to the size of the grant. And it's a lot of money. When a researcher working at a university gets a grant, some of it goes directly to the cost of the experiment, you know, the purchases that they have to make to make that experiment happen. But some of it has to do with the building. Keeping the lights on. Exactly. Keeping all the infrastructure that allows a giant institution that has multiple labs going. Exactly. And in 2023, NIH said they sent about $35 billion out to researchers. $26 billion of that was direct costs.
But $9 billion was these indirect costs. And they claim that by changing the formula, they can save about $4 billion of that $9 billion. But what do researchers and their institutions say? OK, but how do we keep running? Like, yes, you're paying for mice, but you're not paying for the building with the mice in it. Right. So I think the short answer is they were apoplectic.
They were upset that this was done so abruptly with no discussion. And they also said it is simply not tenable to have a world-class research institution without significant indirect cost payments. There's a lot of history here, Sarah, but the short version is various administrations, including the first Trump administration, have tried to cut indirect costs.
Universities have almost always argued that the indirect cost payments that the federal government makes are actually insufficient. They don't actually cover the whole cost of the research that they do. But at the end of the day, this is a huge budget cut for research institutions, for research universities.
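To make the arithmetic behind those figures concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the rough 2023 totals cited above ($26 billion direct, $9 billion indirect) and the flat 15% cap from the memo; actual negotiated indirect rates vary widely by institution, so this is purely illustrative.

```python
# Rough sketch of the indirect cost math, using the approximate 2023
# totals mentioned above. Illustrative only; real rates vary by institution.
direct_costs = 26e9     # dollars paid out as direct research costs
indirect_costs = 9e9    # dollars paid out as indirect (facilities & admin) costs

# Average indirect rate implied by those totals, relative to direct costs
implied_rate = indirect_costs / direct_costs
print(f"Implied average indirect rate: {implied_rate:.0%}")          # ~35%

# What the same direct spending would carry under a flat 15% cap
capped_indirect = 0.15 * direct_costs
reduction = indirect_costs - capped_indirect
print(f"Indirect payments under a 15% cap: ${capped_indirect / 1e9:.1f} billion")
print(f"Rough reduction: ${reduction / 1e9:.1f} billion")             # ~$5 billion
```

That back-of-the-envelope reduction lands in the $4 billion to $5 billion range discussed here.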
That's what it looks like. But one of the questions is, if NIH was actually able to go through with this and save, say, $4 or $5 billion, quote unquote save, not spend it on indirect costs, what would they do with it? For example, this year, that money is already appropriated.
So would it go back to direct costs for basic research? Would they do something else with it? So that's just one of many questions that came up. And this really runs into who makes the budget? Who decides how much money all these different agencies of government get? In previous administrations, it was brought up as
a change in legislation, right? As opposed to a change in policy. That's right. So for example, in the first Trump administration, they proposed cutting the indirect cost rate to 10% (this time they've proposed cutting it to 15%), but then they did it as part of their budget request to Congress.
And basically what happened is Congress said, uh-uh, we're not doing that. And in fact, they went even further. They said, not only are we not going to do that, but we're going to pass a provision that says, hey, NIH, you can't make this change unilaterally. You're basically going to have to come to Congress. OK, let me just recap real quick. Trump in his first term, they tried this as a budget proposal. Congress said, no, you cannot limit the cap like that. You have to actually...
get a piece of legislation passed. They made that the law. You have to pass legislation to reduce the cap at NIH. This latest effort by the Trump administration isn't doing that. It's changing the cap by releasing a memo from NIH. And that brings us to the courts, right? They're going to have to suss this out. Exactly. That change was announced on Friday night. By Saturday morning, very knowledgeable folks in the biomedical research community were saying, not only is this a really bad idea, it's flat out illegal.
For one thing, it violates this provision that Congress passed in 2018 and has stayed on the books ever since that says NIH can't do this unilaterally. And secondly, they said it violates this thing called the Administrative Procedure Act, which is this very important law with a very boring name, but it basically determines how federal agencies are supposed to go about making changes in policy.
This is like if you've ever seen, oh, this is now on, it's open for public comment, right? That's what I think about when I hear that. Like there's a change that's being proposed by X agency and it's open for public comment. We report on this all the time through Science News.
That's right. And that's a classic example of the kind of process that is governed by things like the Administrative Procedure Act. And in two different lawsuits, one filed by 22 states that have Democratic attorneys general, another one by university and research groups, including groups that represent the largest research universities, the largest public universities,
The big heavy hitters in the research university world, they both went to federal court, they both argued that this was illegal, and they both asked a judge, "Hey, issue a temporary restraining order while we can sort this out." And they both won. The judge on Monday night issued essentially a nationwide temporary restraining order and said, "Hey, come back to court on 21 February."
And let's talk about this. And we don't know what's going to happen at that point. We're going to be back at the judge's bench and we have to hear arguments from both sides. Right. So what's going to happen between now and the 21st is each side will file their briefs, their arguments. The judge will read those. They'll come into court and answer questions. After that, we don't know how long it might take the judge to make a decision. And after that, even once a decision is made, either side can appeal. So this could be a very long process.
Now, I just want to address the restraining order here for one second, because what we've seen with the National Science Foundation, NSF, there is a restraining order, but it's not necessarily being applied across the board as expected. Are you worried or researchers worried that this is going to happen at NIH as well? There's not a lot of clarity about what is going on with these judges' orders.
There are some indications that the administration may have decided that they're going to buck these judges' orders, but in some cases, the situation is very complicated. But the bottom line is researchers are worried that the judges' orders will not be followed and the spigots of money will not be turned back on. I've seen a lot of reaction to this in the
science press and among researchers that I know. But there's just been a crazy level of concern about this. Yeah. I mean, you know, Sarah, the reaction on that Friday night was just amazing on social media and people putting out press releases. One researcher I spoke to compared this memo to dropping a nuclear bomb on universities.
Another one said, one of the big research university groups said, this was going to delight America's competitors because it was going to hollow out our research establishment and make us much less competitive. So people were really angry. Thanks, David. This has been really helpful for me to understand what is going on right now. And hopefully, you know, we'll learn more soon. Well, this is a story we're certainly going to be following. Thanks, Sarah.
David Malakoff is Science's international news editor. You can find a link to the story we discussed at science.org slash podcast. And to keep up with all of our science policy coverage, you can go to science.org slash science insider, all one word. Stay tuned for a story on using machine learning to better understand animals.
Before the show starts, I'd like to ask you to consider subscribing to News from Science. You've heard from some of our editors on here, David Grimm, Mike Price. They handle the latest scientific news with accuracy and good cheer, which is pretty amazing considering it can sometimes be over 20 articles a week. And you hear from our journalists. They're all over the world writing on every topic under the sun, and they come on here to share their stories. The money from subscriptions, which is about 50 cents a week,
goes directly to supporting non-profit science journalism, tracking science policy, our investigations, international news, and yes, when we find out new mummy secrets, we report on that too. Support non-profit science journalism with your subscription at science.org slash news. Scroll down and click subscribe on the right side. That's science.org slash news. Click subscribe.
Artificial intelligence really shines at visual pattern finding, ingesting many, many examples and finding those patterns in the data. This approach has been tried on many scientific problems. We have talked on the podcast about finding fences from satellite imagery or distinguishing tuberculosis coughs by analyzing audio spectrum, just to name a few. Now researchers are
starting to use AI to look into the faces of animals. So far, not only can the AI tell pigs apart, but it's starting to learn to tell us what their facial expressions might mean. This week in Science, freelance science writer Christa Lesté-Lasserre wrote about using AI to understand animals. Hi, Christa. Welcome back to the Science Podcast.
Hi, Sarah. You know, what made you decide to write this kind of larger piece looking at animals and machine learning and artificial intelligence? I kept writing about animals' facial expressions, mostly in horses. There's so much work on horses' facial expressions. You know, they're a prey species. They don't want to attract attention from predators. So they've developed this ability to really communicate with each other through really subtle facial expressions. So
It just kind of fell together, really, that I just kept seeing again and again that there was this interest in reading animals' facial expressions. Right. What are some examples here? You know, why is it helpful to be reading animal facial expressions or, you know, their emotions? And why have horses, like you said, have so much attention? Well, first and foremost, it's really helpful with regard to pain. And that's where it all kind of began, you know, is to find out, like, in a veterinary situation, a lot of times they're not going to tell you that
They need more pain medication. But their facial expressions will show that they are not comfortable. There's also this issue of, for example, horses can have something wrong with their legs, their feet, or even their back that makes them lame, which means that they look like they're limping, essentially. But sometimes that can be even really subtle. And we don't want horses to be ridden or especially not competed.
if they're suffering in any kind of way. So their facial expressions can be actually the very first look into how they're feeling, even before they show anything in their body movement that we can kind of pick up on what's going on. So it's important for welfare and just for the good relationship. So one of the projects you talk about is called IntelliPig. What are they trying to do with this kind of technology?
This is part of the idea of what they call smart farms, which is where we're taking what we know about machine learning to be able to put it into a situation where we can improve animals' lives on farms. Because you've got a farmer, he knows his animals really well, but how on earth can he look at a thousand animals a day to make sure that they're okay? But he can ask the AI to just scan the faces of each pig individually.
And they start with just the facial identity of the pig. And I don't know if you've seen, Sarah,
a great white pig, but no, I mean, they all kind of look alike. You'd be hard pressed to tell the difference between pig A, pig B, C and pig, you know, 355. But this AI can really do a good job of like 97% accuracy in detecting one pig that looks identical to us from another pig. So they start out with just the facial ID, but then they also look at it and say, how's this pig doing today? They can
pick up on facial expressions that show pain, any kind of stress. And they also keep a record of the pictures that they took last week and last month and even earlier. And they can compare them and see how that pig is doing across time and adjust things according to the needs of that particular pig. Does that pig need veterinary care? Maybe a different kind of feed. Does that pig need a different social setting to make the animal more comfortable? The input for these models is just masses of visual data. Where are they getting it, for IntelliPig or for these horse expressions that we have a really nice graphic about? Where does the data come from? How are the models trained? The models are trained based on, first and foremost, information directly from animal behavior experts. So the animal behavior experts are able to say, okay, this animal is in pain.
or this animal is stressed or this animal is probably not in pain or in stress. It's very hard to say definitely because we can't ask them, but in their best guesses, this is a photograph or a video of the face of a horse that is just fine. And this is a video or a picture of a horse that just went through castration surgery, which is, you know, the animal's probably in pain.
or a pig who has just been exposed to very dominant older sows who are bothering her. And so we know that she's also stressed. Now, for years and years, animal behavior experts have been doing what they call facial action coding, where they are picking up on these particular aspects of animals' facial expressions to say the eye is moving up, the mouth is moving down, the ears are moving back. This is a sign of
pain or stress or something like that. The original idea was that the researchers were teaching the machines to recognize these different aspects of facial expressions that humans had already picked up on. And then over time, the researchers realized that, you know, they could actually just say to the machine, okay, look, this is a rabbit that is before surgery and probably a happy rabbit.
And this is a rabbit just after having surgery and very, very likely to be in pain. We haven't given it pain medication yet. Machine, please tell us what difference do you see? Right. Right. So kind of letting it take over the learning process to pick up on stuff that humans have not defined so explicitly. Yeah.
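As a rough illustration of the training setup Christa describes, labeling photos only by context (before surgery versus after surgery) and letting the model find the differences, here is a minimal sketch that fine-tunes a pretrained image classifier on two folders of photos. The folder names, label scheme, and hyperparameters are hypothetical; this is not the researchers' actual code.

```python
# Minimal sketch: fine-tune a pretrained CNN to separate "comfortable" vs.
# "likely in pain" face photos, labeled only by context (e.g. before vs.
# after surgery). Folder names and settings are hypothetical.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects photos/comfortable/*.jpg and photos/likely_pain/*.jpg
train_set = datasets.ImageFolder("photos", transform=tfm)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two context labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```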
Exactly. And this is the sort of thing where we're actually seeing that AI is beating humans in being able to recognize this, especially in a very recent study on sheep pain, where humans were able to pick up on 60, 67%, something like that, of sheep that are in pain, but the AI is able to pick up on nearly 90% of the differences. And we're kind of left scratching our heads and saying, wait, wait, how did you see that? Because I missed it.
I totally missed it. Can people go in and figure out, like, what is being picked up on by these machines? Yeah, absolutely. And that's kind of a big issue at the moment, what they call the black box: what on earth is the machine finding? But fortunately, there is a software program that allows us to see it, and it creates what they call heat maps, where they're looking at what area of the face, where do they pick up on the biggest differences. Most of the time it's actually around the eye, and often the ears as well.
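Heat maps like these are typically produced with attribution methods such as Grad-CAM; the story does not say which specific software the researchers use, so the sketch below is just one common way to ask "where was the model looking?". It hooks the last convolutional block of a ResNet-style classifier (like the one sketched earlier) and turns the gradients into a coarse face-region map.

```python
# Sketch of a Grad-CAM-style heat map for the classifier above. A generic
# attribution technique, not necessarily the researchers' actual tool.
import torch
import torch.nn.functional as F

def gradcam_heatmap(model, image, target_class):
    """image: (1, 3, H, W) tensor; returns an (H, W) map scaled to [0, 1]."""
    feats, grads = {}, {}
    layer = model.layer4[-1]  # last conv block of a ResNet-18

    def fwd_hook(_, __, output):
        feats["act"] = output          # feature maps from the forward pass

    def bwd_hook(_, grad_in, grad_out):
        grads["grad"] = grad_out[0]    # gradients flowing back into them

    h1 = layer.register_forward_hook(fwd_hook)
    h2 = layer.register_full_backward_hook(bwd_hook)

    model.eval()
    score = model(image)[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    # Weight each feature map by its average gradient, then combine.
    weights = grads["grad"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The returned map can be overlaid on the original photo to highlight the regions that drove the prediction.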
That's so interesting. The theme so far really has been pain or discomfort. I think this is something that's easier to annotate in the data, right? You can be clear and say, I know that given the context, this is an animal in pain or a stressed animal. But what about the more subtle, complex feelings that maybe a pet owner or a horse owner wants to read? And ultimately, that's what we want, right? We need to know how they're feeling in order to tell the machine. We need to annotate those pictures and say, this is a confused horse. This is a frustrated cat. So how do we get that part? How does that...
work? What you're talking about is really what we call the ground truth. We really need to make sure that if we're saying that this is a happy cat or a frustrated dog, we know for sure that that animal does feel that emotion. And to start with, I mean, most researchers don't even want to use those
words in the first place. They don't want to say that an animal is happy or sad because we just don't know what exactly they're feeling, but we do know that they can have positive versus negative emotions. So for the sake of this discussion, let's say that they're happy or sad, or we can kind of project a little bit. So they have been able to create some of those situations
There was a study that researchers did recently where they made dogs frustrated. They taught them that if they sat in a particular place and waited a minute, they were going to get a treat. Then they had to wait for the treat and they were frustrated. So they were able to compare between dogs that were eager, like, oh, yay, we're going to get a treat, and then suddenly the treat wasn't coming. And so we, you know, can assume that they were probably feeling frustrated or something, some emotion similar to frustration. And they did something similar with horses.
The researchers were able to film the animals and give the AI those images and say, this was the animal when it was eager. And this was the animal when it was frustrated. And this was the animal when it was neither. It was just, you know, normal status. But what was even more intriguing was with the horse, they added a different element where the horse just didn't get the treat at all. It just disappeared.
And so the horse was either frustrated or at some point became what we call disappointed, the equivalent, at least the horse equivalent, of being disappointed, right? And this is really, really new, that the AI is able to pick up at least some differences between being frustrated and being disappointed, which is really exciting, because this isn't just a question of a negative emotion. This is distinguishing between two different kinds of negative emotions. There's not a lot of this kind of data. I don't know why, but this really struck me in your story. And I can't even estimate the number of cat and dog photos there are on the internet. I know, I know. There's so many. But there's no data around them, right? You can't just, like, download all of Flickr and say, find me the cats, because you don't know what the cat is experiencing. So you kind of have to gin this up, and it sounds like it's going to take a while to do these more complex potential emotions in animals.
The other thing that came up at the end of your story, which I thought was really interesting, was that chickens don't really move their faces that much. So how are we going to understand our chickens? Can we fold in other kinds of signals from these animals to more holistically read them? Yeah, a lot of the facial expression work has been done on mammals, you know, because mammals share a lot of the same facial muscles that we do. But when you come to other kinds of animals, other groups of animals, birds, chickens,
reptiles or whatever, they have different ways of expressing their emotions. Obviously, a bird is not going to be able to, you know, move its beak in a way that could help us understand facial expressions. So there are other ways to do it. I mean, animals also are very vocal. They don't have vocal language the way that we do, but they do have grunts. They have squeals. And those are very important. They're usually jam-packed with emotional information. Their body language in general, we have to look at the body language over time. So not just a snapshot, but what is the series of movements that they're doing that could give us more information about how they're feeling. And also, there may be many other things as well, including their body temperature. The heat that they're emitting can also give a lot of information about what they're doing. So AI has a long way to go in helping us pull all of this together. It's a lot of work. It's a lot of information, but this is something where AI can be our partner in helping us understand what animals are feeling. Yeah, absolutely. When it comes to farms or veterinarians, they're not going to have a close connection with their animal that they can rely on to make these judgments. They're going to be able to use this as a tool. And my own personal, you know, like my cats are really hard to read. I have three, which is nuts, but they're all very different from each other. You know, I would love a little AI that just says,
She's not happy. And something that's really important with that, with your three cats, is also that you could really benefit from having AI that says what your cats are saying to each other. Oh, my gosh. Not just what they're communicating to you, but also what they're saying to each other to make sure that this is a moment where they're happy together. Or maybe this is a moment when Sarah should probably intervene and say, OK, time out for one of you guys. Calm down. That is the dream. All right, Christa, thank you so much for talking with me. Thanks, Sarah. Christa Lesté-Lasserre
is a freelance science writer based in Paris. You can find a link to the story we discussed at science.org slash podcast. Don't touch that dial. Up next, we're going to talk about why robotic pants should probably not have onboard cameras.
Science Robotics papers have a lot of videos. I look through them, almost all of them, the ones that are getting published, and you just see these incredible variations on the theme of robotics. You know, we see magnetically controlled microbots cruising along inside of blood vessels, these buzzing drones that are navigating through forests at high speeds. And many of the ones that I see are actually wearables designed to stay on the body and assist in different ways.
I've seen a third thumb for additional grasping options, an extra arm that reaches over the shoulder from a back mount for heavy lifting.
And these kinds of helpful pants that capture and use walking energy more efficiently, why don't we have these things on us now? It seems attainable. Recently in Science Robotics, Kea Ganoski and colleagues wrote about the slow adoption of wearables and why it might not be a good idea to include computer vision in these technologies. Hi, Kea. Welcome to the Science Podcast. Hi. Thanks for having me. Oh, sure. I'm sure there are a lot of factors that have made the adoption of wearable technology slower. What are some of the key issues? In our paper, we specifically talk about challenges in usability, reliability, privacy, and costs. The biggest one in my mind is that of usability. The benefit that we get from it, does that offset the cost or the challenges associated with the use of a device? Exoskeletons, especially lower limb exoskeletons, are an especially problematic application for these types of technology.
Let me just add that when we say lower limb exoskeletons or prostheses, these are basically things that are going to help with mobility, help you walk or gain more power from your own walking. A lot of people, for example, don't want to have something that is really visible when they're out and about in the world. They don't want to draw attention to themselves. So these devices have to be small and innocuous. Specifically with computer vision, if you have something like a camera and you're wearing clothing on top of
a wearable device that has a camera on it, then either that device becomes unusable because the camera can't see because there's clothing on top of it, or the camera is now on top of the clothing, which is again, drawing attention to the fact that you're wearing something. Why would you have a camera on a lower limb exoskeleton? So there are some use cases for sure where...
having a camera and having vision is an important aspect of controlling a device. That is how we control how we walk: we observe where we're going. And yes, there's a lot of automation to it. So we know how to move and we don't necessarily look where we're stepping, but that is a big part of how we move. And that is the motivation for using computer vision.
In a lower limb device, for example, with a lower limb prosthesis or an exoskeleton, it can be helpful to know when we're coming up on stairs or how we're going to be changing what surface we're walking on. And that is a really important use case. But again, it's not necessarily going to be as useful if you don't have light or if your clothing is obscuring the camera.
Another important use case that a lot of individuals, especially those with impairments, want to use exoskeletons for is daily activities, specifically in the bathroom, right? You want to be able to stand up comfortably and sit down comfortably, independently. You don't want to have a person in there. You're not going to want a camera in there either.
Exactly. This goes towards that privacy aspect, but I think that each of those, the reliability, privacy, and cost, those relate back to whether the device ultimately is usable enough that people will want to continue using it. Well, let's talk a little bit about some of these other use cases where you have a wearable that may or may not benefit from vision, robotic vision. Can you talk about some of the other ways this might apply and what the considerations there are for whether or not to have computer vision involved?
Let me maybe start by talking about a context where I think computer vision or machine vision can be really helpful. And that is in upper body prosthesis. And this has been something people have studied for a really long time. They've looked at how can we identify what object a person is about to grasp and how should that interaction happen? That said, while we can use machine vision, there are also a lot of other less intrusive, more private ways of doing some of that.
For example, we can use muscle activation measurements, or electromyography, to do that sort of sensing and prediction of what kind of grasp, for example, a person might want to have. This is not necessarily the best solution, but it does work really well. And yet we don't really see it adopted as much in the real world. And part of that is that even with that level of awareness of what the person might want to do, the way that the device behaves feels alien to the person. If the person doesn't feel like they know what the device is doing or can predict what it's about to do, they don't want to continue using it. This is an interface problem then. So how you know what your device is doing and how your device knows what you're doing, you know, that could be aided by a camera, but that's not necessarily the best solution. Yes. I have something to add there, but before I go into that,
I want to mention another application that specifically in our research group we've seen has been an issue. And that is, we like to use these exoskeletons, especially legged exos, for assisting in high-intensity tasks with applications to the Department of Energy, applications to the Army, and so on. And in a lot of these really sensitive application domains,
we can't have cameras, again. And so privacy becomes another issue. Yes, there are solutions, like you can have everything happen on board and treat the data as really private. And while you can mitigate a lot of those challenges, in order to do that mitigation, you're adding more weight to the device, more compute to the device locally. And that,
again, is now taking away from that usability, of what the device can actually help you with. Because if it's heavier, then it's not helping you as much as it would if it were lighter and still able to do the same. Yeah, I can see these ratios, like, is the cost going up, is the weight going up, is the usability going up or down? And all of those things are kind of fluctuating. And having a camera, having vision, is going to impact how those things work for you. And going back to your previous point about how it's about the shared interface between the human and the robot, one of the ways that we should think of this is, yes, humans use vision when they're interacting with their environment. But that vision is something that we are then using to make our own decisions with. Whereas if a robot is making decisions based on its vision, they may not necessarily correlate exactly with what the human is trying to do.
Instead, there might be an argument there for saying we should instead be looking at signals that we're getting from the human themselves and using those to drive what the robot should be doing. Yeah. We're also integrating a bunch of other things, sound and tactile feedback and proprioception, the position of our limbs. All that stuff integrates into our decision making, even if we don't notice it. That's right. The robot's just saying, I see stairs. And you're like, but I'm running. Right.
Exactly. And what if you, you know, you walk up to the stairs, the robot thinks you want to go up the stairs, but you don't. And so you want to stop, but now the robot wants you to move. And so you create a really unsafe environment. A lot of these are more exacerbated in the lower limb space because there's much more scope for catastrophic issues. Absolutely.
And I think this is a really interesting point I read in your piece, which was that, you know, we use vision, but it can fail on us, whether it's dark out or there are things intervening between us and what we want to see. And we have all these parallel systems that can help us in that situation. And if those same failures can affect your prostheses, then you're going to have to have other systems on there as well to compensate for the failure of the vision. The argument there is also that if we rely entirely on vision, then we would need to have backup systems. And we have really good systems that work and could work as a backup there. But in that situation, what really is the vision adding? If we can do all of that with the systems we already have, is vision giving us that much more to offset the costs and weight that we're now adding to the system? Yeah, so what are some examples of other kinds of sensors or other kinds of processes that would substitute for computer vision in some of the scenarios we've talked about, or other devices? The sensors specifically that we like to use are IMUs, inertial measurement units. You know, they give us acceleration and angular information about each of the joints on the human body. And so they tell us essentially, how are you moving? How fast are you moving? How are your joints moving relative to each other? And that can give us a lot of information. We can even use that to identify, you know, are you walking? Are you running? And so on. It's not as precise as vision would be, but it's still a long way there. Another one that I briefly talked about is muscle activation, which we measure using electromyography. And that tells us, you know, how much are you activating your muscles? How is your brain essentially telling your body to move? And how are those signals being transmitted through your muscles to the exoskeleton, let's say? Sounds more like mind reading. So kind of really keying more into the intention of the wearer.
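As a sketch of how IMU signals can stand in for cameras in this kind of controller, here is a minimal example that windows six-axis accelerometer and gyroscope streams, extracts a few summary features per window, and trains a classifier to label each window as standing, walking, or running. The data is synthetic and the feature set is only illustrative, not the authors' actual pipeline.

```python
# Minimal sketch: classify locomotion mode from IMU windows instead of
# camera input. Synthetic data and a toy feature set, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS, WINDOW = 100, 200          # 100 Hz IMU, 2-second windows

def synth_window(mode):
    """Fake 6-axis IMU window (accel xyz + gyro xyz) for a given mode."""
    t = np.arange(WINDOW) / FS
    freq, amp = {"stand": (0.0, 0.1), "walk": (1.8, 1.0), "run": (2.8, 2.5)}[mode]
    base = amp * np.sin(2 * np.pi * freq * t)
    return np.stack([base + 0.2 * rng.standard_normal(WINDOW) for _ in range(6)])

def features(window):
    """Per-axis mean, standard deviation, and peak-to-peak range."""
    return np.concatenate([window.mean(axis=1),
                           window.std(axis=1),
                           np.ptp(window, axis=1)])

modes = ["stand", "walk", "run"]
X = np.array([features(synth_window(m)) for m in modes for _ in range(200)])
y = np.array([m for m in modes for _ in range(200)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
```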
Researchers have definitely looked at using electroencephalography or EEG brain waves to try and predict what the person might be doing and have used those to try and control exoskeletons as well. Yeah. So what do you think
kind of in the big picture, what is going to end up coming into our lives from the wearables category? Do you have any predictions on that? Or do you sense that something is going to succeed and become kind of a commonplace everyday item for us? I think we need to see a breakthrough in terms of the weight and usability of the device. Specifically, some of the challenges that we're facing are
What is the minimum number of sensors that we can have? Can we reduce the weight of the device itself, the weight of the batteries? Batteries are a huge problem in wearable devices. Can we get rid of a lot of these extraneous things that are probably making the device not be as effective as it could be?
So that's one thing that needs to happen. And then another is we still don't necessarily know, especially in the context of exoskeletons, how a device should behave so that a person is able to best benefit from it. And part of that is because if you've interacted with an exoskeleton, it feels really alien, especially in the beginning.
And then over time, you get used to it. But if you change how the exo is behaving, your behavior in response to it will change, and so on. And so there's some back and forth that needs to happen. Having the exoskeleton learn and understand what the human wants is, I think, a big part of what needs to happen in order for these devices to succeed. Yeah. So the user interface. Yes. Is there an everyday use case for these technologies?
like the limb support and stuff like that? Yeah, I'm glad you asked that. So we actually have the use case where we want to help with high energy tasks. So thinking of cases where you might be working in a warehouse and there are back exoskeletons or knee exoskeletons that help with lifting tasks. That's an example of something that's already happening. There are passive devices like that that exist and we're working on similar powered systems.
A different example that I think is really interesting is wearable devices to help individuals hike or walk longer and more comfortably. There are some companies now that are looking at a sort of e-bike-style set of pants that help you hike a little bit more comfortably, a little bit more easily. Love it.
The underlying theme, I would say, of this paper has been, yes, machine vision would probably be amazing in the future when we have devices that already work extremely well. The problem is, especially for commercial adoption, we're not
quite there yet. And you'll see that a bunch of our co-authors are actually individuals that are in the industry and are working with these devices. And they're seeing a lack of adoption, whether it's with powered exoskeletons or powered prostheses. And that's because of this mismatch to a large extent. They spend all of this money and then they get a device and either it doesn't do for them what they thought it was going to do, or it behaves in a way that they don't feel comfortable with.
And so they end up not using it. And that abandonment is really the problem that we need to address for commercial adoption. In terms of wearable devices, we still don't necessarily have a good way of physically interacting with the human. And so even if the computer vision is able to figure out exactly what the person wants to do, that doesn't necessarily translate to the device behaving the way that the person wants. I see a lot of haptic feedback stuff happening in the pages of Science Robotics, and that's the ability for the robot to tell you, or in this case, I guess, the prosthesis to tell you, what it's touching. It's so complicated. Yeah. And that actually touches back to the point that I mentioned earlier, too: we need to understand the human's perspective and what they want to do. But at the same time, the human has to understand the device, and that bidirectionality is something that needs to be set up before we can go for more complicated or higher-level sensing, whether with simpler or harder devices. We first need to understand what our devices are doing in the actual interaction between the human and the robot.
So basically our ability to understand what the device is going to do and the ability of the device to tell us what it's going to do is still limited and we're still learning how to make that work. Computer vision can only help so much with that and it actually interferes with a lot of other things we want from our devices. And so it's better to leave it out until we are much better at making these devices work for us and figuring out use cases where vision will really optimize them. Yes, that sounds exactly right.
Thank you so much, Kea. This has been great. Thank you so much for having me. Kea Ganoski is a postdoctoral fellow at the Georgia Institute of Technology. She'll be joining Rice University this summer as an assistant professor. You can find the Science Robotics paper we discussed at science.org slash podcast.
And that concludes this edition of the Science Podcast. If you have any comments or suggestions, write to us at sciencepodcast at aaas.org. To find us on podcasting apps, search for Science Magazine or listen on our website, science.org slash podcast. This show was edited by me, Sarah Crespi, and Kevin McLean. We had production help from Megan Tuck at Podigy. Our show music is by Jeffrey Cook and Nguyen Khoi Nguyen.
on behalf of Science and its publisher, AAAS. Thanks for joining us. You listen to us to hear about new discoveries in science. But did you know we're a part of the American Association for the Advancement of Science? AAAS is a nonprofit publisher and a science society. When you join AAAS, you help support our mission to advance science for the benefit of all.
Become an AAAS member at the silver level or above to receive a year's subscription to Science and an exclusive gift. Join today by visiting AAAS.org slash join. That's A-A-A-S dot O-R-G slash join.