People
Jeff Fowler
Lizzie O'Leary
Topics
Lizzie O'Leary: My focus is on the use of artificial intelligence in healthcare, particularly how AI assists doctors with diagnosis and treatment. On the show, we discussed a case of an AI-assisted doctor's visit, as well as AI's use in handling medical records and answering patient questions. We also explored the risks and challenges AI may bring, such as incorrect diagnoses or treatment recommendations and the biases AI may carry. Overall, I believe AI has enormous potential in healthcare to improve efficiency and the patient experience. But as we adopt it, we must be cautious: we need to fully assess the risks AI may pose and establish corresponding regulatory measures to ensure its use is safe and reliable.

Jeff Fowler: I'm a tech columnist. I experienced an AI-assisted doctor's visit firsthand and investigated the use of AI in healthcare in depth. My reporting shows that AI can ease doctors' workloads and improve efficiency, for example by handling large volumes of medical records and answering patients' questions. But AI also has limitations and risks: it can give incorrect diagnoses or treatment advice, and it can be biased. Beyond that, I found that the medical industry has applied AI without sufficient caution or research, and many doctors using AI tools do not fully understand its limitations and risks. So I believe that as we apply AI, we must be careful to fully assess the risks it may bring and to establish corresponding regulatory measures to ensure its use is safe and reliable.

Deep Dive

Chapters
The episode explores the increasing use of AI in healthcare, focusing on its role in assisting doctors during patient consultations. A tech columnist's experience with an AI-assisted doctor's visit is used to illustrate the technology's current capabilities and limitations.
  • AI is being used to take notes and perform administrative tasks during doctor visits.
  • Doctors are using AI to reduce paperwork and improve efficiency.
  • AI's potential for errors and misdiagnosis is a concern.

Shownotes Transcript


Save on Cox Internet when you add Cox Mobile and get fiber-powered internet at home and unbeatable 5G reliability on the go. So whether you're playing a game at home or attending one live, you can do more without spending more. Learn how to save at cox.com slash internet. Cox Internet is connected to the premises via coaxial cable. Cox Mobile runs on the network with unbeatable 5G reliability as measured by Ookla LLC in the U.S., 1H 2023. Results may vary, not an endorsement. Other restrictions apply.

This episode is brought to you by Indeed. When your computer breaks, you don't wait for it to magically start working again. You fix the problem. So why wait to hire the people your company desperately needs? Use Indeed's sponsored jobs to hire top talent fast. And even better, you only pay for results. There's no need to wait. Speed up your hiring with a $75 sponsored job credit at indeed.com slash podcast. Terms and conditions apply.

Not too long ago, Jeff Fowler went to the doctor for a checkup. So I sit down in front of him and he says, hey, Jeff, would you mind if I use an AI agent to listen to us during our session today? OK, so this is when I explained that while Jeff was getting a checkup, he was also reporting a story.

Jeff is a tech columnist at The Washington Post, and he went to see a doctor at Stanford, Christopher Sharp, who is studying new technologies to help deliver care. In this case, AI.

And I said, is that going to be private? And he said, yes, absolutely. It's going to be private. We'll delete the recording when we're done. Basically, the AI is going to take notes instead of me so I can spend the time focused on you. Did it feel different or did it just feel like a doctor's appointment?

It did feel different. You know, for the last decade, at least, when I've gone to the doctor, I would say the doctor spends more time looking at a screen and typing than they do looking at me. During this entire visit, Dr. Sharp looked at me. And so we were able to connect that way, which is really useful. He also would say out loud things that a doctor wouldn't necessarily normally say out loud, like when he was taking my blood pressure or listening to my lungs.

He would sort of call out as if he was speaking to some assistant who was there in the room. And that assistant turned out to be an AI. That might sound pretty intriguing, to have a doctor look at you and not down at their keyboard. Turns out that appeals to doctors, too. So my brother is actually a primary care doctor.

And every time we get together as a family, all he does is complain about having to do these notes from patients. And this has become a real issue for doctors in the United States with the rise of electronic medical records and a lot of legal requirements. They have to do so much paperwork.

And so the idea here, Dr. Sharp told me, was, look, let's let AI take some of these notes, do some of this paperwork, and then I can focus just on being a happier doctor and treating you. All of this sounds great, but AI, as we know, is far from perfect. So what happens when it gets your symptoms or your treatment wrong?

The metaphor that I heard that really stuck with me from one of the researchers was, do you ever hear about how people go on vacation to Hawaii and they follow the GPS and they end up driving into the ocean? This happens. Could the same thing happen with your doctor? Today on the show, the AI will see you now. I'm Lizzie O'Leary and you're listening to What Next TBD, a show about technology, power, and how the future will be determined. Stick around.

Have you heard about DoubleNomics? It's okay if you haven't. It's extremely niche and practiced by Discover. Here's an example of DoubleNomics. Discover automatically doubles the cash back earned on your credit card at the end of your first year with Cashback Match. That means with Discover, you could turn $150 cash back into $300. It pays to Discover. See terms at discover.com slash credit card.

This podcast is brought to you by Progressive Insurance. Do you ever think about switching insurance companies to see if you could save some cash? Progressive makes it easy. Just drop in some details about yourself and see if you're eligible to save money when you bundle your home and auto policies.

The process only takes minutes, and it could mean hundreds more in your pocket. Visit Progressive.com after this episode to see if you could save. Progressive Casualty Insurance Company and affiliates. Potential savings will vary. Not available in all states.

We should point out that AI is already being used in medicine, analyzing images, collecting data, looking for patterns. But that is not what Jeff wanted to write about. The one that I wanted to explore is the one that I think most of our listeners may have either already encountered or are going to very, very soon.

You know, already millions of Americans are being treated by doctors who are using these AI tools, which deal with sort of advising the doctor, or act like a helper to the doctor, focused on some things that have become a lot of, um, rote work for physicians in their direct encounters with patients. So that's things like taking notes, going through those notes, summarizing those notes, or answering the zillions of questions that we all now send our doctors through the app that we use to say, "Hey, I've got a funny looking whatever on my arm. Is it okay?"

We didn't used to do that, but we started sending those kinds of notes during the COVID times. And now doctors are totally swamped with them, too. How widespread is this stuff? And are we talking about, like, your generic primary care provider, or, you know, is it only, say, someone who's affiliated with a big hospital system that will have access to these tools?

Yeah, when I started doing this reporting, I thought it was going to be one of these, like, super niche things that you had to go to a super cutting-edge medical center like Stanford, where I was. But basically within the last year, doctors all over the United States, big medical centers, small ones, little clinics, have started using this kind of AI to help them with these kinds of tasks. So for example, the biggest electronic medical records service in the US is called Epic. They said that each month over 2 million patients are being seen by doctors using their AI for the transcription part of the encounter alone. And that's just them. And there are competing services. There are homegrown ones. There are things that doctors just kind of do on their own as well if they're in small private practice. So this has become really common and is going to get even more so, I think.

And is there, I mean, you mentioned Epic, but is there, like, a corresponding explosion in companies selling these services to medical providers? Absolutely. So Epic is kind of a main gateway through which a lot of medical organizations get it. But the software itself is made by companies like Microsoft, which owns Nuance, which is a name you might have heard from back in the day. Remember Dragon NaturallySpeaking, the program that would do your typing for you? Well, it has evolved into an AI that listens to your encounters with doctors and summarizes them for them. So Microsoft is a good example of a company that's pushing this hard.

One of the things that you did in this piece that I was really interested in is talking both to, you know, the primary care doctor you went to see, and also to people who study this stuff, about why medical providers turn to AI. You mentioned your own brother. What is it like? What is the tech good at? Part of the problem is we're still figuring that out in real time.

And the thing that I find a little unsettling is that I'm kind of accustomed to medicine happening after we've figured these things out. Medicine is normally very skeptical, and there's all this research that has to go into stuff. The FDA has to sign off before you get treated by certain things. But, you know...

doctors are pretty desperate for some relief, and the sales pitch from Silicon Valley, from Microsoft and Epic and others, is coming on strong to these organizations. So they are trying to sort of shove the existing AI technology into uses, even as we're still studying and trying to figure out: Are they good at it? How often do they make mistakes? And most of all, do they even save doctors time? Do they even make them more efficient? And then there is the question that is always looming around generative AI: Does it sometimes get things wrong?

We know it has these hallucinations where sometimes it will leap to conclusions that were not there or include details that were not there. So there are risks definitely in bringing these into human patient encounters. I mean, let me frame it another way that I think about this. I have written lots of stories about some of the safety risks of bringing AI into our lives.

be it how we get information on Google or how TurboTax and H&R Block were using AI to answer your tax questions, sometimes giving you bad advice along the way. But it is very often hard for critics like me to be able to say, what is the real danger? What is the harm that could come of bad AI information?

And when I heard about how doctors' offices were using it, I was like, oh, dear. Well, here the harm is pretty clear. If the AI gives bad advice to a doctor and the doctor passes that along, or it mishears something and enters that into a permanent medical record, that's going to be there forever. You know, that risk is really quite big. When we come back.

What happens when AI gives you bad medical advice?

Running a small business means you're wearing a lot of hats. Your personal phone becomes your business phone. But as your team grows, that's impossible to manage. That's where Open Phone comes in. Open Phone is the number one business phone system. They'll help you separate your personal life from your growing business. It's affordable and easy to use.

For just $15 a month, you get visibility into everything happening with your business phone number. Open Phone works through an app. They use AI-powered call transcripts and summaries, and if you miss a call, automated messages are sent directly to your customer. Whether you're a one-person operation and need help managing calls automatically, or have a large team and need better tools for efficient collaboration, Open Phone is a no-brainer.

Right now, Open Phone is offering 20% off of your first six months when you go to openphone.com slash TBD. That's O-P-E-N-P-H-O-N-E dot com slash TBD for 20% off six months. Openphone.com slash TBD. And if you have existing numbers with another service, Open Phone will port them over at no extra charge.

This podcast is brought to you by Progressive Insurance. Fiscally responsible. Financial geniuses. Monetary magicians. These are things people say about drivers who switch their car insurance to Progressive and save hundreds. Because Progressive offers discounts for paying in full, owning a home, and more. Plus, you can count on their great customer service to help you when you need it. So your dollar goes a long way.

Visit Progressive.com to see if you could save on car insurance. Progressive Casualty Insurance Company and Affiliates. Potential savings will vary. Not available in all states and situations.

Head to Whole Foods Market to jumpstart your January during our New Year boosting event with savings on feel-good favorites store-wide. Save on organic picks, wellness staples, and more all month long.

You had a moment in this piece where one of the researchers kind of showed you a ChatGPT query. And I'll just read it out and then you can tell me the next part of how it goes. Dear doctor, I've been breastfeeding and I think I developed mastitis. My breast has been red and painful. And ChatGPT responds: use hot packs, perform massages, and do extra nursing. But it turns out, according to the researcher you talked to, that is wrong.

Yeah, this was so interesting. This was another doctor at Stanford. So we sort of have both sides of the equation happening on the Stanford campus here. And she has been leading teams of doctors and engineers to red team the systems that doctors are using to help them, for example, in this case, draft email message responses to patients.

And she pulled up this example of something that happened to her. She got mastitis with her kid. And so she typed in the question, as you said, and the answer it gave was the entire opposite of what the Academy of Breastfeeding Medicine recommends. Yeah, that was the old advice, basically. That's right. The Academy of Breastfeeding Medicine now recommends you do all the opposite things.

The point here is that AI, and this is, again, AI being used by doctors right now to answer emails to real patients, isn't necessarily built to know medicine and accurately diagnose and recommend treatments. The generative AI tools we have today are built to create answers to things that could be reasonable. That is not the same as being 100% accurate, a thing you usually want from your doctor. Generative AI has lots of problems with, for example, not knowing what it doesn't know. And if its training hadn't been updated since that recommendation was changed, then it's going to hand out some bad advice, as it did here. It also has a tendency to encode whatever the biases of the inputs are. There has been a lot of structural racism in medicine. This has been studied tremendously. You wrote about this as well, that, like, the AI can say, oh, well, Black people have a higher pain threshold, which is obviously not true.

This is another really terrifying bit of research that has gone on in this space. So another set of researchers did an experiment where they asked an AI to sort of weigh the pain tolerance of different kinds of patients. Turns out doctors measure pain on a scale of 1 to 10.

And when they asked human doctors to do this, oftentimes the human doctors showed a bias where they think Black patients can tolerate more pain. So they asked the AI, they asked ChatGPT, which is trained on human language and has all of our human biases in it. And it turns out it did the exact same thing. In fact, some of the models were even a little bit worse than the humans were. And the problem with this is: A, when doctors are relying on these systems for recommendations or to draft some advice to a patient, are they going to be aware that these biases are there? Are they going to be trained to be on the lookout for them? Or is the AI simply going to amplify doctors' own existing biases? And then we get this kind of multiplication effect where we get even more bias. So these are the kinds of questions that researchers are asking. And I think they're really smart, and we ought to be asking them, but unfortunately they're happening while we're using these systems live in patient-doctor encounters. Where are the regulators on this?

All of the uses of AI that we've been talking about here today are, again, just administrative, technically. They're just, like, helping the doctor. They're not... They're not diagnostic. That's right. They're not making their own diagnoses; they're just supposed to be helping the doctor do their tasks. So the FDA has kind of thrown up its hands. And the FDA also hasn't really figured out how we're supposed to be evaluating and constantly checking large language models,

which can be used throughout medicine. And there are ideas out there about how we could figure this out. Maybe we get one large language model to check another one's work. But so far, we really don't have much. There was another study I wanted to ask you about, with the caveat that it's small.

A study from UVA comparing doctors using ChatGPT and doctors using traditional diagnostic reference tools to find a diagnosis. And they kind of performed about equally. And then there's this other part of the study that says ChatGPT solo outperformed the doctors. You actually have looked at this study and came away with an interesting conclusion. I wonder if you could tell me about that. Yeah. So this study was done... one of the doctors who did this study was the same one who did the study I just talked about, about how ChatGPT has a bias about race and pain. And the goal of the study was really to figure out: Does using the AI, does using ChatGPT, help doctors make fewer mistakes in their work?

And the main discovery from this was it didn't. It didn't actually make the doctors more accurate. The doctors didn't necessarily listen to advice from the AI that might have put them along a different path. Part of this was that, when asked certain questions, ChatGPT was able to answer some of the medical questions in a way that was better than the doctors alone, but it wasn't exactly a head-to-head competition, because in the whole setup for the experiment, human doctors did a lot of the work, a lot of the legwork. They investigated the problem. They wrote all of these reports about the problems that these imaginary patients had and then handed that over to ChatGPT.

One of the things I heard from doctors, even doctors who are kind of invested in the idea that AI is going to help improve medicine, is that one of the hard things about making this leap is that doctors do a lot of investigation. They ask a lot of questions. But the way that our current chatbot technology is set up, it doesn't know how to push back. It doesn't know how to ask more probing questions. Let me put it this way. If you go to a doctor and you say, hey, my ear hurts, what should I do about it? The doctor's probably not going to say, oh, take two Tylenol and, you know, call me if it doesn't go away. The doctor is going to ask you some more questions. The doctor is going to say, well, is there any redness? Has there been any discharge? And all these sorts of things. And our current generative AI tech does not know how to do those doctor kinds of things. So we still have a long ways to go before I think anybody would really recommend choosing ChatGPT over a human doctor.

And yet what I have heard you saying throughout this entire conversation is that this stuff is happening regardless. And so, like, where are the guardrails, and who is making sure that ChatGPT doesn't tell you to take two Tylenol and go away when really what you need is someone to look in your ear?

So the truth of the matter is there are lots of people out there, lots of patients, people with medical needs, who are just turning to ChatGPT and Gemini and other AI for medical advice.

I really don't recommend this. I haven't met a single doctor who really would recommend that, but people are doing it. But I get it. Like, I get it, particularly if you have had the experience, which I have had, of not being listened to or being turned away or being gaslit if you're from an underrepresented community and the doctor's just like not giving you the time of day. I get it. Yeah.

It's a version of Ask Dr. Google, right? Where you go and you do your own research. But I totally get that people want quick answers. They don't want to have to go wait and see the doctor in a week. There's a real tension there for patients. You asked where the guardrails are when we are interacting with doctors. And everybody says, all of these tools I've been describing are always supposed to be used by doctors who check the work.

So when I went for my checkup at Stanford, Dr. Sharp had the AI produce this report. And then afterwards, he pulled up the report and he edited it. And it wasn't a huge thing, but he did find that the AI at one point sort of leapt to an assumption about the origin of a cough that I was complaining about. The AI blamed it entirely on my three-year-old.

I did partially blame it on my three-year-old, but it said it also could have been allergies or a tree I have next to my bed. We talked about a lot of different things. So, you know, in this case, the doctor was responsible for going through and editing it. Same thing with these email messages that AI is now drafting for doctors.

They're not supposed to just say, "Okay, yep, the AI did a good job. Send it off." They're supposed to look at it. They're supposed to edit it. It's supposed to be just a starting point. And if you trust your doctor and your doctor is very responsible, then maybe that's going to be fine. The problem is we know that humans kind of develop this problem with technology: we tend to overtrust it, and we tend to forget about its problems.

Jeff Fowler, as always, it is good and a little bit terrifying to talk to you. My pleasure. Jeffrey Fowler is a tech columnist at The Washington Post. And that is it for our show today. What Next TBD is produced by Evan Campbell, Patrick Fort, and Shaina Roth.

Our show is edited by Paige Osborne. And TBD is part of the larger What Next family. And if you're a fan of the show, the number one best way to support our independent journalism is to join Slate Plus. Just head on over to slate.com slash whatnextplus to sign up. All right, we'll be back next week with more episodes. I'm Lizzie O'Leary. Thanks for listening.