See You Now is a podcast highlighting the innovative and human-centered solutions that nurses are coming up with to solve today's most challenging healthcare problems. It's created in collaboration with Johnson & Johnson and the American Nurses Association and hosted by nurse economist and health tech specialist Shawna Butler.
People don't realize how much artificial intelligence is already embedded in the things that they use every day, to order something or to resolve something. AI is not a technology. AI is a collection of many different technologies. But when you add AI in healthcare, it's a game changer.
We don't want to take the human element out of care. Care is a very personal and intimate and human act. There's no replacement for your own skills, your own clinical decision making. I think it's important for people who are developing the artificial intelligence tools that incorporate the AI algorithms
to be talking to the end users right from the beginning. We know the majority of information that is put into our health system, especially our documentation, comes from nursing. You need to be prepared to keep learning, keep thinking about what are the newest tools and technologies, and be creative. Welcome to See You Now. I'm Shawna Butler.
AI is everywhere, and it's changing how we do everything, from recommending tacos, running shoes, and romantic partners to guiding our navigation systems, investment portfolios, and clinical trials. And while AI-based technologies are busy solving complex problems, they're also generating big waves, loud headlines, and intrigue in every industry, every workforce, and everyone's life and future.
And in healthcare, artificial intelligence is a game changer. AI-based technologies do many things faster and better than humans can alone, with the promise of so much more ahead. And with that comes a lot of questions, possibilities, and well-founded concerns. How can we ensure that AI is beneficial, not detrimental, to care and caring, to clinicians and communities?
What unintended consequences are we overlooking by developing technology that can perpetuate bias, override clinical insight and instincts, or undervalue human presence? And nurses are leading the way in asking and helping to answer these questions. In this first episode of many to come on AI's role in transforming healthcare,
we're starting with the basics and the big picture. What is artificial intelligence? How can we be sure AI is safe and equitable for everyone? And what's the best way of thinking about and working with AI in clinical settings, no matter how we use it?
To explore these questions and more, we reached out to a nurse attorney with a specialization in bioethics and a veteran researcher and leading expert in computer science, AI, and machine learning to give us an overview of AI, the game changer it will be for healthcare, and how we can ensure AI is safe and beneficial for everyone.
My name is Peter Stone. I am a professor of computer science at the University of Texas at Austin. I'm also the director of Texas Robotics here at UT Austin. And I'm also the chief scientist at Sony AI, where we think about bringing large, ambitious projects into the creative process. And then I have many external roles as well. I have been the
lead author on the first study panel report of the 100-year study of artificial intelligence. I also, until recently, was the president of the International Robot Soccer Federation, a group that has the ambition of trying to create humanoid robots that can beat the best World Cup soccer team on a real soccer field by the year 2050.
I'm very excited about the possibility of artificial intelligence programs being developed in conjunction with nurses to free people up to give the very necessary human element of care and improve health care for everybody. So, Peter, McKinsey released a report in October of 2024.
And it's titled The Pulse of Nurses' Perspectives on AI in Healthcare Delivery. And what it suggests is there's optimism and excitement about AI-powered tools, but it is tempered by this desire and concern, a little bit of reticence, about ensuring that the quality of care isn't sacrificed.
And so, you know, to safeguard this balance, nurses are responding in a lot of different ways, but they have a real keen desire to better understand how AI works, what AI is, and to provide input and guidance on how best to use the technology in our clinical environments when we're taking care of people. And while we're in these early days of AI-driven care transformation, you know, what do nurses need to know? What do they need to do? What do they need to be a part of and to lead? So we are opening it up with you, Peter, as a veteran researcher, you know, somebody in computer science, but really a leading expert across academia and the private sector, and a leader in machine learning, AI, multi-agent systems, and robotics.
We're looking for an overview of AI, the implications and the effects of this really powerful, provocative, pivotal technology, and the impact that it's having on all of our lives, our societies, our planet, and making sure that it's safe, you know, safe and beneficial for everyone. So that's where I want to start with you: by describing what AI is, what it isn't, and what is really important to understand about it at this moment.
Thanks, Shawna. Yeah, and these are great questions. And I think the most important starting point is to recognize that AI is not a technology. AI is a collection of many different technologies. And that's super important to keep in mind, because there are artificial intelligence programs like ChatGPT that can help you write a letter or something like that.
It's a completely different AI tool that would help somebody read a mammogram or help diagnose somebody, or even, there's robots now trying to help nurses with parts of their job. Keep in mind that artificial intelligence is lots of different technologies; it's not just something you can sprinkle on a profession and all of a sudden change it. A little bit of AI magic here and there. Yeah, it doesn't work that way. You need to embed the artificial intelligence algorithms in special-purpose tools or programs that are designed for whichever industry you're interested in, in this case, healthcare. And it takes a combination of the capabilities of the algorithms with the domain experts to figure out what the best avenues are for helping people reduce the mundane aspects of their job and allow them to do what people are needed for best.
Like most professions, there are some parts that feel a little bit like drudgery. People go into care to provide the human element, to interact with the patients, and not to carry the linens from here to there, right? That's not what you go home saying: I'm really glad I spent time walking from one place to another. And so you actually have more capacity; you can be in the flow of the creative aspects of the job more. And this is true of many different professions. So the sort of optimistic view of the way AI tools will be used is that they reduce or eliminate the drudgery aspects of a job that don't really need a human touch.
And they leave time and space for people to bring what they're best at and magnify that. Now, of course, the flip side could happen, where the AI programs are doing the part that people want to do and leaving them to do just the drudgery. That would be the worst-case outcome. And so I think it's important for people who are developing the artificial intelligence tools that incorporate the AI algorithms to be talking to the end users right from the beginning, to understand what are the parts that should and could be automated and what are the parts that should be left to the human element. And there's a section in the 100-year study on AI, the AI100 report that I helped lead back in 2016. We had a workshop called Coding and Caring, so there's a section of the report on that.
And one of the conclusions was that we don't want to take the human element out of care, that care is a very personal and human act. And it's not something that should be fully automated. It's something that we should be bringing AI tools to help magnify the humanity and the human touch in care. I would add to that: what we don't see oftentimes in our science reports is the sacredness. There is the humanity, and there is the sacredness.
You said that AI is a set of technologies. And so when we talk about AI as a set of technologies, what is that set of technologies that comprise AI? It's clear many people have learned about AI from science fiction, from movies, from books. And nowadays, everybody sort of is an expert in AI. People say, oh, I've been using ChatGPT since it was released. So I've been doing AI since the beginning. But of course, the field of AI was introduced 75 years ago.
Historically, it came out of computer science, but it has grown to be much more cross-disciplinary and there's bona fide experts in the social sciences and humanities as well. And that's very important. And you asked about the definition.
There is actually no generally accepted definition of artificial intelligence, but the one that we put together... Good to know, good to know, yeah. I mean, just because nobody can really even define intelligence, right? Is intelligence what people do? Or, you know, are dogs also intelligent? Is a calculator intelligent? There isn't really a generally accepted definition of intelligence, so by extension, there isn't one of artificial intelligence either.
But the definition that we put together in the AI100 study was that it's a science and a set of computational technologies that are inspired by, but typically operate quite differently from, the ways people use their nervous systems and bodies to sense, learn, reason, and take action. So the idea is that it's not totally replicating people.
But it's creating computer programs that can do the kinds of things that people do when we consider people to be acting intelligently. And it is more than just deep learning or large language models. There are areas such as computer vision, planning, symbolic reasoning. There's algorithmic game theory, computational social choice.
human computation, reinforcement learning. There's all kinds of sub-disciplines, and if you go to an artificial intelligence conference, you'll see sessions on each of these topics, each getting at one component or one aspect of intelligence. And the long-term, grand, ambitious goal of artificial intelligence is to put these all together into something that is more generally intelligent and robust: able to perceive the world like people do, through language and vision and touch; able to reason and think deeply like people do when they're playing chess or planning a route from home to the office; and able to actually act and execute their actions in the world and then see how that changes their perceptions. One other term that's sometimes bandied around is autonomous agents, or artificial agents. An agent could be a robot, but it could also just be a computer program, something that has to sense, decide, and act
in what we call a closed loop. In other words, without a person having to do the action for it or provide the sensing for it, it can sense, decide, and act, get a new sensation, decide again, act again, and just continue on like that. That's what an AI agent is doing. Gosh, that sounds so much like the nursing process.
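To make Peter's closed loop concrete, here is a minimal sketch in Python. It's entirely hypothetical, not from the episode: a toy thermostat agent that senses, decides, and acts, then senses the changed world again, with no person inside the loop.

```python
import random

def sense(world):
    # Read the current state of the (toy) environment.
    return world["temperature"]

def decide(temperature, target=21.0):
    # Choose an action based on what was sensed.
    if temperature < target:
        return "heat"
    if temperature > target:
        return "cool"
    return "idle"

def act(world, action):
    # Acting changes the world, which changes the next sensation.
    if action == "heat":
        world["temperature"] += 1.0
    elif action == "cool":
        world["temperature"] -= 1.0

world = {"temperature": random.uniform(15.0, 27.0)}
for step in range(10):
    reading = sense(world)    # sense
    action = decide(reading)  # decide
    act(world, action)        # act, then loop back to sensing
    print(f"step {step}: sensed {reading:.1f} C, chose {action}")
```

A real agent would have far richer sensing and decision-making, but the sense-decide-act cycle with no human in the middle is the defining shape.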
Everything that you just described, all these different types of technology: there's not a single place in healthcare where our healthcare professional teams, our nurses, wouldn't be using, needing, or activating something you mentioned. So it's a big, huge sandbox for us to be able to play in and work with. And increasingly, we're hearing about generative AI.
The way we're hearing it in the headlines and feeling it in our lives, and seeing the investment, you would think it's just this really new thing when, in fact, it's already embedded into so much of our lives, into things that we do and appreciate and rely on, so much so that we've stopped thinking about it as being AI. We're in this transition moment where we feel like there's a lot of change happening.
You know, what is the public sentiment towards AI? How has it evolved? And then, how do we prepare the workforces, you know, the healthcare workforces, who are facing profound changes, opportunities, and concerns as it feels like this big, huge wave of AI is descending upon them?
Yeah, there's a lot of different things to respond to there. But I guess the first is, you know, you say that people sort of don't realize how much artificial intelligence is already embedded into things that they use every day. And that's very true. Another definition, more of a tongue in cheek definition that sometimes people use of AI is trying to get computers to do the things they can't do yet.
But then once they can do them, it's engineering. It's not AI anymore. And that's very much the case. When I was a PhD student, it was a very cutting-edge AI problem to try to figure out how to get computers to listen to you speaking your telephone number or your credit card number and understand what digits were being said. The technology behind it was called hidden Markov models, and it
was developed and refined. Nowadays, people don't even think of that as AI anymore. You just sort of expect that you can call up your credit card company and speak your number into the phone and it's going to recognize that. And that's been true of many different technologies. If you have a microwave and you just press the heat-potato button,
that's also an AI technology that has sort of now been taken for granted. And I think public sentiment shifts with what people consider to be AI. So nowadays, if you ask people, you know, what is AI, they think of ChatGPT or they think of language models.
But I think I've personally been quite pleased at the level of nuance that's started to come into the public perceptions of artificial intelligence. Like I said, it used to be that people either thought of it as all good or all bad. And that was through science fiction, through the press. Now, I'm seeing many more newspaper articles, television segments that are taking a much more balanced view, that are saying, look, there are some really exciting possibilities
of automating some aspects of your healthcare, for example, but there are also some risks and dangers that we need to keep in mind, and if we don't do it correctly, things could get worse, not better. And then actually articulating what some of those are. And that's very encouraging to me.
It seems like the level of discourse, the level of nuance, the level of understanding is only going up. Everybody these days needs to become literate in artificial intelligence, just like we want people to be literate in the sense that they need to know how to read, and numerate in the sense that they need to understand how numbers work. We teach these basic concepts in elementary school and high school. I think we're getting to a point where AI literacy
is rising to that level of a necessary basic skill for everybody who's coming out of certainly university, probably also high school, maybe even middle school and elementary school now: to have some education of what artificial intelligence really is, what it isn't, what it can do, what it can't do, what's realistic, what's science fiction; to learn a little bit about how to use the tools that exist out there; and also to
have the confidence to continually refresh your skills and understanding. Because it's going to continue to be the case that technology is going to advance, and whatever you learn in 2024 is going to be out of date in 2029, maybe even in 2026, who knows? You need to be prepared to keep learning, keep thinking about what the newest tools and technologies are,
and to be creative about how you can use them to make your job better and make you more productive, because then you've spent as much of your time as possible doing the things you love about your job. So when you think about...
the current state of play, like where we are with the capabilities and the limitations of AI. I'm asking you this question at a moment in time when our healthcare systems around the globe are facing significant pressures: an aging population, an aging workforce, projected and current workforce shortages, overwhelming mental health crises, disparities and inequities in access, the increasing cost of care, region-specific pressures like natural disasters, conflict zones. Okay, that's the healthcare that we have today, right?
What do you think healthcare should be most excited about and most cautious about? I think what to be most excited about is, at the very general level, being able to make many processes more efficient. And often it's that the specialists don't have a full communication channel with one another. And I think AI tools
can be helpful for that. If you have an AI assistant that's aware of the conversation you had with your general practitioner, and is also aware of the conversation you had with your podiatrist or your endocrinologist, there may be connections that could be made that none of them are going to be able to make if they're not coordinating by talking to one another directly, which they just don't have the time to do. And so I think there are all kinds of possibilities in terms of logistics and coordination and back end. And those are
opportunities for full automation, things where you could actually take an AI program and possibly just let it do them. I think there's also opportunities for human-AI partnerships in places where you would not trust an AI system to work on its own, because there's a couple of risks. There's, you know, hallucination. They're not always trustworthy. They do make things up. That's really important to recognize. They sound so authoritative, though. They really sound authoritative.
They make things up with confidence, right? But actually, you know, if you understand how they're trained, that's what they're trained to do. They're trained to sound plausible. They're not trained to be correct. People are working on trying to make the tools more correct, and I'm optimistic that things will get better. But it shouldn't come as a surprise that they are sometimes going to be incorrect. They can be fantastic
for brainstorming. They might come up with something that you wouldn't have thought of. And that could be because it's incorrect, in which case you should discard it. And it could be because it's something that is correct but that you hadn't thought of, and if you can verify it, then that could actually be a really big aid. So people who understand
what the strengths and weaknesses of the tools are can use them to magnify even aspects of care where it's still clear that people alone are better than computers alone, but you might get computers and people together being better than either. And diagnostics may be one of those examples, right? I wouldn't trust an AI model to do diagnostics by itself. If it's up to me, I want a person making that decision. But if that person is somebody who's
capable of using tools to make their decisions better, and they're still taking the final responsibility and still making the final decision, I think that can be a huge win. The other question that's worth asking is, sometimes we do have the luxury of having personalized care from a nurse or a doctor, but there are many people who may not have that access. And then the question is, is it better than nothing? AI tools are going to be, in many cases, worse than a person who's devoting their full attention to you for half an hour or for an appointment.
But maybe very much better than not having any access to anything. And so these are other things we have to take into account. The danger, of course, is that if health care providers opt for less expensive solutions that are lower quality, that's probably not a tradeoff we want to make.
And so we have to be guarding against that. And then another question is privacy and security. And who are we trusting with our information? And when you're starting to use AI tools, it's still murky. Who are you giving access to your information? And what are the implications of that? That's still something that people are working through. And that's not just a technical question. That's also a legal question. It's an ethical and moral question.
You are part of this pretty interesting group, the 100-Year Study of Artificial Intelligence, better known as AI100. And it's bringing experts together, academic and industry experts, researchers.
One of the things that I found important about this AI100 group is the statement that we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children's children. How do we document? How do we set some goals? How do we put some safeguards in place, and help us define and understand what AI is and how it's touching our lives? The title of one of your very first articles, I love this:
permeating influences of AI in everyday life, hopes, concerns, and directions. It feels very mindful and attentive and aware that this is pretty powerful stuff. We need to proceed with hope and with caution. So AI100 is designed to sort of bring together people who are
AI insiders or experts, who are working across the different disciplinary perspectives but who have been thinking deeply about the technologies and the societal implications for a long time, and to give this sort of sober view. Because it does tend to get exaggerated in fiction that, you know, AI is either going to destroy the world or it's going to create a utopia and save everything and create a life of leisure for everybody. And it's neither. It's somewhere in the middle. And so we're trying to be grounded.
So I'm going to put this to you on the very personal level, Peter, as a patient. So knowing what you know as a researcher, and where we are in the state of play, what parts of the healthcare service do you think, yeah, I'm really comfortable with an AI managing that? And where are we not there yet? Where does it not yet have the level of reliability, accuracy, personalization, sensitivity, discretion
for it to be involved in this part of our healthcare service delivery? So yeah, like I said: Peter the patient, Peter the dad who's taking his child in. This is a good question. It's changing over time. I mean, I trust the doctor or the nurse that I'm interacting with to know what they are able to do and what's able to be automated. When there's, like, a sensitive procedure that's going to be difficult, I want to be able to talk to a person. I want to be able to ask questions. And
when I'm being prescribed some routine medication, and I want to know all of the side effects and how many times a day I should take it and whether it should be with food or something, I'm okay getting that information from a program and just looking it up in my automated chart. But then if I want to say, you know, what are the risks? Or what are the long-term implications? Or how does this interact with this other medicine I'm taking, or something like that, I want to be able to talk to a person. And part of that is
trust and whether the information is going to be correct. And part of it is, as you've said, sort of the sacred aspect of care being a human thing. I want to know that there is a human being who's thinking about my health, thinking about what's best for me, and thinking of me as a holistic person.
not just as a data point. But look, people are also not infallible. There's all kinds of studies showing that people make different decisions when they're hungry versus not hungry. There's biases that creep in. People of different backgrounds get different care, get different recommendations, when maybe they shouldn't. I think there are deficiencies of human judgment and human reasoning that can be corrected by artificial intelligence programs.
And there are deficiencies and weaknesses of artificial intelligence programs that can be corrected by humans. It's something that there's been tons and tons of data on. And it might even be that an AI system is going to be able to read the x-ray statistically better than a person could. And I would actually personally...
find it at that point negligent of the person not to use the program that is able to detect small differences that the human eye isn't able to. But I also then trust the person to know for this particular type of image, for this x-ray, is this something that the program is good at? And how good is it? How much should it be trusted? And to bring in that human judgment. I don't want as a patient to be the one that makes the call, should it be a doctor or an AI system? I want the doctor to know that.
Researcher and computer science expert Peter Stone stays busy with his many roles that include founder and director of the Learning Agents Research Group within the Artificial Intelligence Laboratory at the University of Texas at Austin, director of Texas Robotics, and the chief scientist of Sony AI. ♪
As Peter mentioned, AI offers a whole lot of promise about making processes more efficient, removing drudgery from work, and automating error-prone tasks. But he also names a concern that many of us share when he says, I want to know that there is a human being who is thinking about my health, thinking about what's best for me, and thinking about me as a holistic person.
That's one among many concerns and goals that our next guest thinks a lot about as AI automates certain tasks, creates new possibilities, and surfaces enduring ethical questions. Hi, I'm Liz Stokes. I am the Director of the American Nurses Association Center for Ethics and Human Rights.
I identify as a nurse attorney bioethicist and I am so excited about artificial intelligence and how nurses can really be impactful in moving AI forward. When I think about where we are, the potential is vast and the intersection of AI and healthcare is huge. Nurses are in the lead, we should be in the lead and really help to continue to drive change.
So, Liz Stokes, you must be very busy these days, because I read the headlines every single day and there is nothing out there that doesn't include the words AI, artificial intelligence, generative AI, and they run the full range. They're hitting every single sector. I don't think I'm over-exaggerating when I say that there is very heightened attention on AI
in health, healthcare delivery, healthcare education, healthcare practice. And the reason I wanted to come to you is we've got to talk about the ethics of all of this. Exactly, exactly. I think you're spot on, Shawna. When I think about AI, it's been around for ages.
It's been in the public for ages. Think about the AI that we use when it's convenient for us to order something or to resolve something. But when you add AI in healthcare, it's a game changer. The fear factor also enters the picture of like, okay, what's going to happen? Things are moving so fast. What should I do? Healthcare clinicians are really, really concerned as they should be.
People really have their eyes open and are asking questions. And I think it's great. It's a time to have the conversation. And I think we're having it in so many different disciplines. And then we're having it collectively, which is also a benefit. And I feel like that's different from in the past. We've often been siloed. Physicians are talking about something here, and nurses are here. Social workers are here. But now it's affecting everyone all at the same time. And we're all at the table.
In a recent report that McKinsey & Company performed, The Pulse of Nurses' Perspectives on AI in Healthcare Delivery, what they found, you know, with a pretty good-sized group of respondents, was an optimism and an excitement about AI-powered tools,
tempered by a desire to ensure the quality of the care is not being sacrificed. Why I wanted to have this conversation with you is that in all of this rush to understand and to build and co-create and commercialize and have these really powerful tools, we can't do this without a very serious, hard pause about ethics,
and about the guidelines. So that was why I thought we would come to you, as an ethicist and a nurse and somebody who's thinking about equality, rights, social justice. How are you thinking about the definition of AI? Because what we're calling AI today isn't the same thing that we were calling it, you know, two years ago, five years ago, and probably won't be in two and five years forward. That's so funny you ask that, because when we started
having the conversation about ethics and AI, it was before COVID. And we wrote a position statement, you know, hit the highlights about AI. COVID hit, so we paused everything. And then we published it after COVID. And about four months after we published it, generative AI hit, ChatGPT hit. And it's like, okay, great.
So we're already outdated. And so, you know, one of the things that we really focused on when we talked about what the definition of AI is: it's got to be a broad term, because it is evolving so rapidly. There's so many different types of AI, so many different interpretations of what AI is. And we did not want to get down into the weeds of wordsmithing. Nurses know, at a basic level,
what AI is, right? They know that there's some algorithm or something that is happening, that it is not a human person doing something. And so we really were intentional about being broad, but also talking about where you'll encounter it:
it could be the decision-making support you might see in your electronic health record, or something that's part of the robotics you might see going down the hallway in your unit. So we were trying to help nurses see the practicality of AI without getting lost in, do you understand what machine learning is and deep learning? Because we really have heard from a lot of nurses who've said,
Help me understand what this is so that I can make sure that I am using it safely, that my patients are using it safely, that I even know that it's being used in my facility, my institution. So we were intentionally broad in defining AI and really helping people understand there's a predictive piece to it.
that it helps predict, you know, whatever an outcome may be, and that it is based on the information that is input. So that is key. That's where nursing really has such an impact because we know the majority of information that is put into our health system, especially our documentation, comes from nursing. And so it's critical that nurses understand what they write down
may be used, could be used later on in some type of data output that has significant consequences on decision-making for either themselves or their patients. So let's talk about the fear piece. What are you hearing? What kinds of fears are you hearing? I presented on AI and ethics to a nursing conference of oncology nurses specifically.
And I was walking off the stage and a nurse came up to me and she said, I'm really worried about AI in my institution, because it is being used in our decision-making algorithms for our patients. And the algorithm may say one thing, but I know from my many years of practice that this is not what's happening with my patient. And Shawna, she was so emotional, because she felt like
the algorithm was making clinical decisions for her, on her behalf, and she felt extremely threatened by that. And when she raised these concerns in her institution, she shared with me that the nurses were at threat of being disciplined, of being sent to the Board of Nursing, because they were challenging the algorithm. And it just struck me,
because this is the hallmark of ethics: we say that there is no technology, no replacement for your own skills, your own clinical decision-making that we've learned through our education, that we've learned over our years of experience. And to think that there's something not human that's challenging that.
And in order to speak up about it, this nurse did not feel like she was supported and felt like she might be disciplined. And then, of course, we hear the fear of economic displacement. So, you know, we hear that and there's data to speak to that. Again, we really... Just to dispel this. Yes. AI won't replace nurses. AI will replace what nurses do not need to be doing.
I've heard that AI is going to replace the nurse or the radiologist or whoever isn't using AI, but that's still kind of feeding into the fear of being replaced. The more positive framing is that we have a better, smarter way to work, which means you can do the things that only you're capable of doing. And this is one of these core technologies that just continues to upskill everybody
in the care participation chain. And Shawna, what I hear you describing is trust-building within the nurses. Nurses have to be able to trust the algorithm, trust the AI, so that they feel safe using it, so that they feel safe that their patients are going to benefit from it.
And the balance, because that's what ethics is, is a balance: how efficient is this AI making your life? Have you ever seen those maps of a nurse's footprints in a shift, when they're all over the unit and they're getting supplies and they're going...
And then we have robots, AI robots now that will help and get the supplies. So there is a great efficiency, right? So you're spending more of that face-to-face time with your patient, with your family. AI is helping you in that situation, right, as a nurse.
But if you think about the nurses who are so fearful of, "I'm not trusting it; what if it brings me the wrong supplies?", they're really questioning whether or not this is going to help them. And this is not just for nurses. It's helping people understand that AI is not 100%, but the goal is that it is making our lives more efficient, and that it is going to have an output that is a higher percentage of accuracy than humans.
We've seen this in radiology, where it's able to identify things that the human eye is not catching. There's great data to really show the power of that positive impact that AI is having on healthcare. So it's balance, and it's time. I think nurses are just going to have to continue using it, understanding it, experiencing it
before we get to that place where we're comfortable. It takes time and it takes trust, as we continue to experience and engage with technology, to get to the place where we're comfortable using it. The current ANA ethical guidelines:
it's written very clearly in the code that, one, nurses must ensure that these technologies do not compromise the nature of the human interaction and the experience; that nurses in all roles are accountable for decisions made and actions taken in the course of nursing practice; and that they also have the responsibility to provide leadership in the development and the implementation of these technologies.
Nurses are responsible for being informed, for ensuring that there's appropriate use of AI, and for understanding it well enough that we can explain it to the patients that we're taking care of. So that is a lot of ethical guidelines and requirements and responsibilities for a technology that many, many of us are... I still don't understand it well enough for me to feel like
I can accept responsibility for it. It's funny. People always say, I don't see artificial intelligence in the code of ethics. And we say the word is not there.
But the sentiment is there. It is talking about advanced technologies. And I always say this: fill in the word. The ethics don't change. The technology changes, but the ethics are still the same. And it's really important for nurses to understand, again, that that decision-making piece cannot be replaced by anything: by AI, by the monitors on the machine. If for some reason your machine output is telling you that the blood pressure is not what you think it is,
that technology does not replace your critical thinking. And so we really help nurses understand the ethics of making sure something is efficient for your practice, that it is not taking away the time that you need for nursing care, your time with patients, your time with families, your time for assessment, the skills and the science and the art of nursing that we all know.
AI should be helping utilize those things and not taking away from it. One of the things that we often hear too is that nurses say, oh, well, we have this AI, so we don't need as many nurses. And it's like, no, never, ever, ever. That's not the right answer. And so it's important to speak up. It's important to really speak up. And the question I'm hearing now is, well, who do I speak up to?
When I have this issue, you know, a lot of organizations don't have a technology committee, though some do. Some have an AI technology committee, or they have someone who is a representative for nursing and technology in their region or in their institution, but many do not.
So we're helping nurses realize this is an ethical issue. Go to your ethics committee, go to your board, go to your shared decision-making governance committee, because this is not just an issue for you. So I'm focusing my comments on acute care, but I'm thinking about even in the home health communities. I think about a case. This was a few years ago in Arkansas. There was a 90-year-old woman who was a Medicaid recipient. She was receiving home health care.
I think she was getting around 45 hours a week. There was a decision-making algorithm applied to her case, and her home healthcare hours got reduced down to thirty-something. And like I said, she's 90 years old. She's got dementia. She was so reliant on this care. And in this situation, she actually had to get legal assistance. So Legal Aid of Arkansas was able to jump in and really assist. But by the time they were able to reinstate her hours,
she'd passed away, because the algorithm didn't account for the holistic things that nurses would see. It didn't account for her diabetes. It didn't account for the fact that she was bedridden. But these are things that nurses know. It's part of our assessment. It's part of our skills. So we see AI happening all over healthcare, and it intersects with so many other disciplines. So if you're in those situations, it's important to speak up
to your governance committee, and really try to find advocates for your patients when you're seeing that the technology is actually causing harm versus benefit. And that's ethics. It's the risk-harm-benefit analysis that we've always done and we continue to do.
So when you're thinking about what nurses need to know about this particular technology at this moment in time, again, you said ethics don't really change, but let's have the interpretations and the applications for these specific use cases and where nurses are finding themselves today.
in these ethical quandaries and questions. One of the biggest things when I think about AI, and we hear this all the time, is the justice piece: making sure that the technology is being used in a just way. One of the key pieces is the design and development; nurses need to be involved.
And we hear, oh, yeah, I was part of the process when we had this company and we implemented it into our documentation. But if they'd asked me before we even made the contract with the company, I would have picked a different company. So we even say, even before the design and development, nurses need to be at the conceptual part of the decision-making process,
because we are often the end user of this technology. We're often even teaching our patients how to use this technology. Chatbots are a perfect example, where patients or people may not understand, this is not a person. This is not a nurse. It's not a clinician that is answering you. And they become reliant
on that answer, and they make their decisions based on that answer. And a human person has never seen the person or the patient. Sometimes it's great. Sometimes it works out. They're able to get referred, or they're able to get treated. But then it's those edge cases where it doesn't work out. It's not to say that we shouldn't have it. We just need to have more data to make sure that we're able to create the safeguards, prevent harm as much as possible.
It's never going to be 100% without harm, because it's human work; that's just not how it is. So I think being realistic about that is our goal. That's our journey. Yeah. Most of the algorithms that are being trained right now are trained on insufficient data. There's a whole group of people who have not
been in our medical records. And so if we're training our probabilistic algorithms to predict or to make this recommendation, and you're somebody who doesn't really show up in the data,
how applicable is that recommendation to you? And that is making sure that people understand AI is not applicable in a universal way. The application of AI for one person may not produce the same outcome, or may not be beneficial, for a second person. So really understanding, just as you would with any other technology, making sure that you're having that thought, that assessment: is this appropriate for this situation?
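One practical way to act on Liz's point that a recommendation may not apply equally to everyone is to check a model's accuracy per group rather than only in aggregate. Here is a minimal, hypothetical Python sketch of that kind of audit; the records, numbers, and group labels are invented for illustration.

```python
from collections import defaultdict

# Hypothetical records: (subgroup, model_prediction, actual_outcome).
records = [
    ("well_represented", 1, 1), ("well_represented", 0, 0),
    ("well_represented", 1, 1), ("well_represented", 0, 0),
    ("well_represented", 1, 1), ("well_represented", 0, 0),
    ("under_represented", 1, 0), ("under_represented", 0, 1),
]

correct, total = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    total[group] += 1
    correct[group] += int(predicted == actual)

overall = sum(correct.values()) / len(records)
print(f"overall: {overall:.0%} accurate")  # looks acceptable in aggregate
for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} "
          f"over {total[group]} cases")
# Aggregate accuracy (75% here) hides that the under-represented
# group gets 0% accuracy, which is why per-group checks matter.
```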
Another big piece, of course, that we hear about all the time is bias. It's huge, right? Recognizing that the predictive nature of what we're getting, the outcomes of the algorithm, is based on what went into the algorithm.
And I think about, you know, again, I'm back to the legal sense. I always think about these legal applications. There's a group in Washington, D.C., who really work on cases where there are companies that are applying the data that's already existing in their databases.
which is already some type of prejudicial data, and then they're putting out the same harmful outcomes. And so they're able to look at banks or companies that, you know, made loans or applied loans in a discriminatory way, using that data to continue to approve or deny individuals in a discriminatory way based on bad data. So it's important that we account for bias. It is
possible. So when you're working with developers and designers of technology, that has to be at the forefront of the conversation: how are you going to account for the bias in the data? Because we know that it's there. As an organization, as an enterprise, how is the ANA thinking about the use of AI, the range of generative AI tools, in the work that they're doing? How are they leading by example?
That's a great question. And that is something that we are tackling at a rapid pace as well, because, again, the technology is changing so quickly. We are really looking at our own use of AI within the organization. We've created guidelines within our organization to help our employees, as well as the volunteers that work with ANA, understand the use of AI. One of the really interesting stories that I heard recently is
that employees of companies are putting things into generative AI platforms, not really recognizing that it's a public platform. So they're putting business notes in there, and confidential information. And so someone was able to type in, tell me the executive business notes of a top organization, and there's
the output of the data. But people are just not thinking that. You're thinking, oh my gosh, this is a great way to summarize our meeting; let's put our notes in the system. So we're really helping people to understand the significance of using generative AI, but then also
the nuances. We've talked about prompt engineering. What does it mean to put in a prompt? How can you make sure that the output is relevant to what you need? So it's not like we're saying no, but there's guidelines around the use, just as we're saying there's guidelines for nurses.
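As a purely illustrative sketch of what those guidelines can look like in practice, here is a hypothetical Python snippet combining the two points Liz raises: redacting confidential terms before anything goes to a public platform, and structuring the prompt so the output matches what you need. The term list and template are assumptions for illustration, not ANA guidance.

```python
# Hypothetical example only; the redaction list and prompt template
# are illustrative assumptions, not official guidance.

CONFIDENTIAL_TERMS = ["Acme Health", "Jane Doe"]  # invented names

def redact(text: str) -> str:
    # Strip confidential names before sending text to a public AI tool.
    for term in CONFIDENTIAL_TERMS:
        text = text.replace(term, "[REDACTED]")
    return text

def build_prompt(notes: str) -> str:
    # Prompt engineering: state the role, the task, and the constraints
    # explicitly so the output is relevant to what you need.
    return (
        "You are a meeting assistant.\n"
        "Task: summarize the notes below in five bullet points.\n"
        "Constraints: no names, no confidential details.\n\n"
        f"Notes:\n{redact(notes)}"
    )

print(build_prompt("Acme Health board notes: Jane Doe proposed a merger."))
```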
It's the same thing for businesses and organizations: to protect their confidential, proprietary information, but also to find ways to make their work more efficient and more valued. You know, another very deep question. As we have AI come in and be supportive of our human intelligence, our instincts, I hope that it will make us smarter. I hope that we're pulling on the collective wisdom.
At any point, do we have to be revaluing the human part of this? Is there a recalibration or a redetermination of the value of the human therapeutic process, the human care versus the AI care? Part of nurses' value has been the physical labor. And as we are able to automate that,
a huge part of that hopefully will reduce injuries. But there's a certain therapeutic presence, where you've had that relationship with somebody and they've seen you at your most vulnerable, you know, and this therapeutic stuff is deep. It's really good to think about that relationship. And as we were revising the 2025 code,
it is set in a relational way. So the first three sections of the provisions of the code are really looking at the nurse-patient relationship.
The next three are looking at the nurse-to-nurse, nurse-to-other-colleagues relationships. And then the end is, like, the nurse to society. So we've been having a lot of these relational conversations. What does it mean to be in a nurse-patient relationship? And that's really what I hear when we're talking about the therapeutic nature, the ethics of caring in a relationship. What does that mean? And understanding that it's reciprocal, right?
And that's something that we really heard when we talked about patients and recipients of our care. They're not just receiving our care, but they are giving of themselves to us. So they're putting their vulnerability of being sick or being ill or being in need with us in our hands. And when we think about AI, you know, you can't replicate that. It's not replicable. And so I think it might be time to really think about how do we articulate that
financially? If we're taking away the more laborious things, the more tasking, mechanistic things, if we're able to, like you said, Shawna, automate these with AI and the use of advanced technologies, how do we value and how do we account for what I think nurses are able to do best? And so maybe there is this time for a revaluation of what it means: first, what it means to be in a therapeutic relationship.
Because I think sometimes even patients are going to be fearful, like, okay, well, if we're going to have a robot, are you still going to be my nurse? Are you still going to be able to provide for me the things that I think that I need? And over in the technology world, the language is abundance. What it helps us do is address the care that has been left undone.
All of the things that we cannot get to, people get a follow-up reminder. Hey, let's make sure that you're up to date on your immunizations. And have you had a mammogram? And is your driver's license up to date? Have you registered to vote? Are your guns safely stored? Have you done all the fall prevention around your house? Are you signed up for Medicare yet? I mean, there's so many different things that we could do
that we never get to. And so these technologies create a sense of abundance, in that time becomes free. So there is this financial value, there is a human value, there is a human flourishing value. Well, I will tell you, Shawna, it's interesting: one of the things you just said is one of the new focuses that we have included in the 2025 code, and that is on human flourishing, and the recognition
that A, every human deserves to flourish, including nurses, as well as our patients and those that we care for. And figuring out how AI fits, how can AI support that? And I think it's exactly what you said, by doing the things that we often don't get to, or the things, you know, the mechanistic things that we can automate, so that nurses can help patients thrive. You know, I think about
if someone says, here's your prescription, take this three times a day with food: do you have enough food to eat three times a day? Who's asking that question? And so there's the financial incentive right there, so your patient is not being readmitted to your hospital. So it's really finding out what the economic value is of providing nurses that time and space to perfect and perform the art and science of nursing.
AI is here. It is moving fast. It's going to shape so much of how we practice, how we learn, how we care for people. I mean,
I'm excited. Are you excited about it? I'm really excited, and I think about, if you just take account of where we are in human evolution, that we are here with this massive technology that's really able to give us such incredible accuracy and predictability in a way that we have never, as a species, had before. It's very remarkable.
Nurse attorney and bioethicist Liz Stokes is the director of the American Nurses Association Center for Ethics and Human Rights. We're grateful Liz is at the helm as we wrestle with the myriad ethical, legal, and practical issues that AI and all of technology present.
Special thanks as well to researcher, computer science expert, and AI thought leader Peter Stone for his continued leadership, insight, and rational exuberance. Returning to the 100-year study on artificial intelligence, better known as AI100, which studies and anticipates how artificial intelligence will impact society over the next 100 years,
Amongst the qualified global experts and their scholarship and experience is the belief that specialized AI applications will become both increasingly common and more useful in the foreseeable future, improving our economy and quality of life across many dimensions.
But this technology will also create profound challenges affecting jobs, income, communities, laws, opportunities, lifestyle, and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared. As we race toward this AI-powered future,
being aware, being involved, being intentional and transparent about the design and deployment of AI will build trust, allay fear and suspicion, avert harms we might not predict or intend, and enable humans to flourish. If you haven't done so yet,
subscribe to See You Now wherever you listen to podcasts to make sure you catch every episode as we dive into the thrilling possibilities and daunting challenges of AI. Coming up...
on our next exploration into how AI is transforming healthcare delivery. We're struggling across the board with a workforce shortage. We have to get creative in how we're going to render care to our patients. How can we bring generative AI into this conversation? How do we be very intentional about implementing technology as a part of our team?
The number one thing that I am most cautious about as we make product and engineering decisions is not listening to our clinicians in the room. There's a piece to it that can really help alleviate a lot of the day-to-day burdens that nurses are facing right now while still offering the safest patient care that we can.
The goal is to dramatically increase healthcare access in such a way that we never could even consider. I truly think that the generative AI technology that we're utilizing today is going to help augment what clinicians are able to do. For me, the outcomes beget the technology. We're working here in AI. We're utilizing this incredible new framework of generative AI and large language models. But the most important thing here is how are we changing and impacting patient lives? How are we improving outcomes?
Not only making it a healthier place for patients, but making it a healthier place for our providers and our clinicians in general. For See You Now, I'm Shawna Butler. Thanks for listening. ♪
Nurses are transforming healthcare through innovation, compassion, and leadership. And Johnson & Johnson is proud to continue its 125-year commitment to champion nurses through recognition, skill building, leadership development, and more. The American Nurses Association is dedicated to building a culture of innovation.
Nurses improve the lives of patients and communities through innovative thinking, empathetic connection, scientific rigor, and sheer determination. ANA is proud to support and advocate for our nation's most valuable healthcare resource, our nurses. For more information on See You Now and to listen to any of the earlier episodes in our library, visit seeyounowpodcast.com.