
How insurance companies use AI to deny claims

2024/12/18

On Point | Podcast

People
Casey Ross
Christine Huberty
Ryan Clarkson
UnitedHealthcare
Ziad Obermeyer
Topics
Christine Huberty: Under Medicare Advantage plans, many patients' claims for care are denied by private insurers such as UnitedHealthcare, and there are serious problems with this. Algorithm-driven denials cause patients to miss necessary treatment, resulting in irreversible harm to their health and in financial loss. Drawing on several cases she handled, she details the harm done by algorithmic denials and stresses the importance of evidence such as medical records.

Casey Ross: Through investigative reporting, he revealed that UnitedHealthcare uses an algorithm called NHPredict to decide whether claims for patient care are approved. The algorithm lacks transparency, and its decision-making process does not consider patients' individual circumstances or their doctors' recommendations, leading to many patients being unfairly denied. The investigation also found that more than 90% of denials are overturned in UnitedHealthcare's internal appeals process, which suggests the algorithm is seriously flawed. In addition, frontline employees face pressure when using the algorithm and tend toward denial decisions, further compounding the problem.

Ryan Clarkson: He represents plaintiffs in a class action lawsuit against UnitedHealthcare and notes that Cigna denied more than 300,000 claims within two months, which underscores the problems with using AI to evaluate insurance claims. The lawsuit aims to reveal how the AI algorithms operate and to prompt a broader conversation about the use of AI in health insurance.

Ziad Obermeyer: AI has potential applications in health insurance, but UnitedHealthcare's approach is seriously flawed. Giving AI the final say over claims decisions is unethical and unscientific; human review should be retained. He believes AI could be used to identify patients who need additional care, benefiting both patients and insurers, but insurers should open up their data and improve how their algorithms are designed and incentivized.

UnitedHealthcare: The company states that care coverage decisions are based on CMS criteria and the terms of members' plans, and it denies that the lawsuit has merit.


Key Insights

Why did Christine Huberty notice an increase in denied claims for skilled nursing care under Medicare Advantage?

Huberty observed that patients, including those with severe conditions like strokes, were being denied care that would have been approved under traditional Medicare. This led to patients losing weeks of necessary therapy and potential loss of function.

How many people use Medicare Advantage in the U.S.?

Approximately 31 million people use Medicare Advantage, according to the Kaiser Family Foundation.

What is the name of the algorithm used by UnitedHealthcare to predict patient stays in nursing homes?

The algorithm is called NHPredict, developed by Senior Metrics and later acquired by NaviHealth.

How much money does NHPredict save UnitedHealthcare annually?

The algorithm is estimated to save UnitedHealthcare and other insurers billions of dollars a year by cutting costs in post-acute care, which is a significant expense for health insurance companies.

What percentage of denied claims are overturned upon appeal?

Over 90% of denied claims are overturned upon appeal, according to the lawsuit against UnitedHealthcare.

What is the current status of the class action lawsuit against UnitedHealthcare for its use of AI in claims adjudication?

The lawsuit is in the pretrial discovery process, with no trial date scheduled yet.

What was the 1% target rule for case managers at NaviHealth?

Case managers were instructed to keep patient stays in nursing homes within 1% of the number of days projected by the algorithm, limiting their discretion to deviate from the AI's recommendation.

Why does Dr. Ziad Obermeyer believe the use of AI in health insurance claims is flawed?

Obermeyer argues that the current use of AI eliminates human oversight, leading to unjust decisions. He also criticizes the incentive systems that reward adherence to algorithms rather than clinical judgment.

What potential improvements does Christine Huberty suggest for the use of AI in health insurance claims?

Huberty suggests that denials should only occur due to a change in the patient's condition, that treating medical professionals should have an override for denials, and that automatic denials should be stopped unless prompted by a change in condition.

What advice does Casey Ross give to individuals facing denied health insurance claims?

Ross advises individuals to know their rights, ask questions about the decision-making process, and appeal denials. He notes that overturn rates are high (80-90%) when appeals are pursued.

Chapters
The episode starts by introducing Christine Huberty, an attorney who noticed irregularities in Medicare Advantage claims denials. She observed that patients were being denied care, even when medically necessary, and suspected AI involvement. This led her to submit a comment to the Center for Medicare and Medicaid Services (CMS), which caught the attention of investigative reporters.
  • Medicare Advantage claims denials
  • AI involvement suspected
  • Christine Huberty's comment to CMS
  • Stat News investigation

Transcript


Bored at home? Head over to Chumba Casino and join some serious social casino fun. With hundreds of games at your fingertips, no purchase necessary, you can play anytime, anywhere. There's a special welcome bonus waiting for you when you sign up. Play for fun, play for free, and you could even redeem some great prizes. Visit ChumbaCasino.com and get ready to reel in the fun.

Sponsored by Chumba Casino, no purchase necessary. VGW Group, void where prohibited by law. 18 plus, terms and conditions apply.

Church's fans keep it real, and we gotta reward that. Church's Real Rewards makes it real easy to save real coin, real fast, like in real time. Get a free two-piece chicken or three-piece tenders with your first purchase when you sign up.

This is On Point. I'm Meghna Chakrabarty.

Christine Huberty is an attorney. She used to work in Wisconsin, where she provided free legal assistance to people on Medicare. In 2019, she noticed something strange. It was about Medicare Advantage, the Medicare program where people are covered through private insurers instead of directly by the federal government. Almost 31 million people use Medicare Advantage, according to the Kaiser Family Foundation.

Now, Huberty noticed that some of her clients on Medicare Advantage were being denied claims for care at skilled nursing facilities. There was one patient in particular that really worried her. He'd suffered a major stroke, and his doctors had said, you're going to need at least two months of rehab in this type of care setting. And if that person had had original or traditional Medicare, they would have gotten 100 days without any other type of review. They would have got what their doctor prescribed, what they were all advising. Whereas this client, this patient was put in a situation of after two weeks figuring out, do I go home when it's not safe to? Do I stay here and pay out of pocket?

And so while this is all happening, because you're getting the denial in real time, he's lost weeks of care. And so while we can sit here and talk about who's paying for what, they stopped the therapies that he needed, and he lost probably some function and the ability to get back to his prior level of function. The man was enrolled in Medicare Advantage, and his coverage was coming through UnitedHealthcare,

the nation's largest private health insurer. That's when Huberty started noticing even more denials from UnitedHealthcare. There's cases where they're on feeding tubes. There's cases where they have volleyball-sized pressure wounds, so bed sores, and they are just dumbfounded that they need this care. Their doctors are saying they need this care, but their insurance company is saying that they don't. And so we have, you know, hundreds of pages of medical records supporting why they should get this care and to get that and to learn then that this is all started by a computer. I think that's, you know, just dumbfounding.

As she started collecting more and more denied claims, Huberty learned that the Centers for Medicare and Medicaid Services was soliciting public comment on how the Medicare Advantage system was functioning. Again, this is back in 2019.

So Huberty commented. It was the night before the deadline and it was feverish and it wasn't, you know, now when I work at the Center for Medicare Advocacy, it's very polished. It's very, you know, very edited. This was very much just a stream of consciousness. Get it out there, you know. So I haven't looked back at it because I would probably be embarrassed. Nevertheless, that comment found an audience.

So what happened with that was then Casey Ross and Bob Herman from Stat News found that comment and found that I had identified this issue. Stat is an award-winning healthcare and medicine news organization. And Casey Ross is Stat's chief investigative reporter for data and technology. And he's with us now. Casey, what exactly was in Christine Huberty's stream of consciousness comment about

at CMS that caught your attention? Well, it's hard to remember the exact words, but what I remember was the anger that was there.

anger over the way that she felt patients were being unjustly denied care. There may have been some all caps. It was a tone and tenor that made me say to myself, I have to talk to this person. Well, we should say that that first comment that led to you, uh,

and your co-reporter reporting this very groundbreaking series for STAT about the use of AI in health insurance claims and specifically with UnitedHealth. So tell me, from Christine's first comment about all these claims being denied for people that she was working with and how she says it was started by a computer, what were the first steps that you took to dig deeper?

Well, we began making phone calls to whomever would talk to us about the use of this algorithm in their facilities or on patients that they knew. So Christine was a very early call. We began to get in documents. We began to talk to other skilled nursing providers just to get a sense of

Okay, how is this being used? How is it being referenced? When they ask questions of United or another insurer using the algorithm, what was the answer? And the intriguing answer to us was, well, that's proprietary. We can't tell you what data it was trained on. We can't tell you how it makes its decisions.

Ah, okay. So when a reporter is told, we can't tell you, that's like catnip, right? You're going to find out. Can you give me some more examples? I mean, the first story that you published, you start with another really kind of disturbing but detailed example of the ways in which this algorithm was deciding when care would be approved and when care would be denied. It's the story of Frances Walter, who's an 85-year-old woman from Wisconsin who had a shattered left shoulder.

Yeah, Frances Walter was a case that really stood out to us and also is very similar to cases we subsequently heard about around the country.

This is a perfect example of why an algorithm can't be applied to make decisions about patients without considering other information and without considering what their doctors have to say about it. Frances Walter had suffered a very serious fracture of her shoulder in a fall. She also had an allergy to pain medication.

And when you have that combination of things, it's going to take somebody longer to recover. It's harder. She couldn't put on her shoes. She couldn't take care of the basic activities of daily living for herself. And yet an algorithm is saying, you know, at day 16, and I think the actual recommendation was 16.6 days she needed to go home, her insurer cut off

care on day 17. Wow. And this was the recommendation made by this algorithm? Yes. And it was followed, you know, obviously to the letter. And the problem is that sends her family into a panic. Where do we put our mom? Where does she go? She's not, she can't take care of herself. She lives alone. What do we do? How do we pay for this?

And what they end up doing is having to apply for Medicaid and pay down their assets in order to pay for her care. And the horrible thing about that when it happens to so many people is that generational wealth gets lost. They can't pass anything on to their kids because all of their money has to be spent on care that's wrongly denied.

Was Frances's family able to get any answers about why her coverage was stopped at day 17 from her insurer?

Only through a very lengthy appeal and discovery process did they finally get a sense of the algorithm. And that's a key point in this is that the patients and the families don't know that the algorithm is being used in the process of making the decision. So once they appealed and appealed...

Throughout the process, you have to appeal sort of multiple times to get to a point where you can get discovery and find out, okay, what tools were being used, what information was used to make this decision. And Christine Huberty eventually found this report, this algorithmically generated report that predicted her length of stay. But it required legal action, essentially, because, as you said, it was in appeal and discovery. Right.

Months of it, too. Yes, it took months of work to get to that point. Okay. And again, just to reiterate, we're talking about a senior woman, an elderly woman who was in so much pain that she couldn't even dress herself, go to the bathroom or push a walker without help. And yet...

This algorithm decided that on day 17, she could no longer receive coverage to stay in a nursing home. So, Casey, let's start peeling back the layers here that you discovered. What is this algorithm and who developed it?

This algorithm is called NHPredict, and it was developed by a company called Senior Metrics back in...

the late 1990s and early 2000s develops this algorithmic tool using various data inputs. And eventually, it was actually the former administrator of the Centers for Medicare and Medicaid Services, Tom Scully, who was looking for investments as part of a private equity company

following his tenure, and he saw that there was an opportunity presented by the fact that so many elderly patients were spending longer periods in nursing homes. He saw this algorithm, he forms a company, and he buys it. And that company becomes NaviHealth. And NaviHealth begins applying this algorithm to the care of all

older patients in nursing homes across the country for a number of years until eventually, through a series of transactions, UnitedHealthcare, the largest insurer and the largest Medicare Advantage insurer, buys it and begins using it on its own patients and also contracting out the algorithm for use by other insurers. Okay. So this is where UnitedHealth comes in. And

Of course, one of the reasons why we're talking about this today, even though your reporting came out in March of last year, is because of the terrible murder of UnitedHealthcare CEO Brian Thompson earlier this month.

I just want to say clearly that talking about this aspect of UnitedHealthcare by no means means that we at On Point or you, Casey, are condoning murder of any kind. I just want to say that very, very clearly. But again...

But again, that incident, that act has brought a lot of attention back on UnitedHealthcare as the nation's largest private health insurer. And as you said, the largest players essentially in Medicare Advantage. So how much do we know of the advantages that this NHPredict algorithm brought to UnitedHealthcare? I mean, how good was it at making decisions?

Well, it wasn't good at making decisions accurately about the care of patients, but it does effectively save UnitedHealth and other insurers that use it a lot of money. Post-acute care, care in nursing homes after you've had a hospital stay for a serious illness or injury, is a very expensive center of cost for a lot of health insurance companies. And so health insurance companies are targeting that

specific domain where it's very expensive to try to cut their costs. And it's not just United, but other insurers are doing that as well. Okay. So Casey, hang on for just a second. When we come back, I want to understand a lot more of what your reporting found. And also there's a lawsuit still in the works over the use of AI in deciding health insurance claims. So we'll talk about that too. This is On Point. Support for the On Point podcast comes from Indeed.

Are you hiring? With Indeed, your search is over. Indeed doesn't just help you hire faster. 93% of employers agree that Indeed delivers the highest quality matches compared to other job sites. And listeners get a $75 sponsored job credit to get your jobs more visibility at Indeed.com slash on point. That's Indeed.com slash on point. Terms and conditions apply. Need to hire? You need Indeed.

You're back with On Point. I'm Meghna Chakrabarty. And today we're talking with Casey Ross. He's the chief investigative reporter for data and technology with STAT. He reported a series called Denied by AI, co-reported that with Bob Herman. And it was a 2024 Pulitzer Prize finalist in investigative reporting where Casey and Bob uncovered the use of artificial intelligence in either approving or denying health insurance claims and

and specifically within the world of the nation's largest private health insurer, UnitedHealth Group. Casey, just the other day, we told listeners that we'd be talking to you today, and we wondered if folks listening had any similar experiences. And we heard from Deborah Dolinsky. She's a former nurse.

She has had a series of surgeries on her hip over the last four years, and she told us she's had to advocate for herself to stay longer in the very kind of skilled nursing facilities you've been talking about because her insurance company has regularly denied her claims, even though returning home early could be catastrophic for her. If you live alone...

have stairs, it's an entirely different story. You have to have a slider chair in your tub so that you can get a shower. You have to have all these things. And

You know, you have to be able to stand and it's really, really hard. With all my surgeries, I lost a lot of blood and had a very low blood count. And I was exhausted from the effort. They didn't take any of these things into consideration. That's On Point listener Deborah Dolinsky.

So, Casey, tell me more about what you were able to find that this software, this algorithm, NHPredict, what did it use to make the decisions about when care should be approved or denied?

So it uses a bunch of different categories of data about you to make its calculation. And the different pieces of data include your living situation, what condition were you hospitalized for, what other illnesses might you have that compound the disease,

the difficulty of recovery from the injury or illness that caused your hospitalization, other demographic details,

And it takes those data, weights the pieces of data differently. And through the calculation, it spits out this prediction about your length of stay. So we never really got clear answers about all of the pieces of data that went into the model, or even where the data was from, because United and its subsidiary NaviHealth would not answer those questions.
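
To make that concrete, here is a minimal sketch of the general shape Casey describes: weighted patient attributes combined into a single length-of-stay number. NHPredict's actual features, weights, and training data were never disclosed, so every name and value below is a hypothetical stand-in, not the real model.

```python
# Illustrative sketch only: NHPredict's real inputs and weights are proprietary
# and undisclosed, per the reporting. All names and numbers are hypothetical.

HYPOTHETICAL_WEIGHTS = {
    "condition_severity": 4.0,   # what you were hospitalized for
    "comorbidity_count": 1.5,    # other illnesses that compound the disease
    "recovery_difficulty": 3.0,  # how hard the recovery is expected to be
    "lives_alone": 2.0,          # living situation
    "age_over_80": 1.0,          # demographic detail
}

BASELINE_DAYS = 5.0  # hypothetical base stay before weighted adjustments


def predict_length_of_stay(patient: dict) -> float:
    """Combine weighted features into a predicted nursing-home stay in days."""
    days = BASELINE_DAYS
    for feature, weight in HYPOTHETICAL_WEIGHTS.items():
        days += weight * patient.get(feature, 0)
    return round(days, 1)


# A hypothetical patient loosely resembling the cases described in the episode.
example = {
    "condition_severity": 1,
    "comorbidity_count": 1,
    "recovery_difficulty": 1,
    "lives_alone": 1,
    "age_over_80": 1,
}
print(predict_length_of_stay(example))  # -> 16.5
```

A fractional output like 16.5 echoes the 16.6-day prediction in Frances Walter's case; note that nothing in a formula like this can see the chart details her doctors saw.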

Interesting. Now, you spoke with someone named Amber Lynch, right? She was a former case manager with NaviHealth? Yes, that's right. Okay. And she told you, though, that, according to her, a lot of things such as comorbidities weren't factored into the algorithm's analysis. I think the concern that a lot of people who worked within the company had was that

they're running this algorithm and they're looking at the patient. They have to actually go to the facility and look the patient in the eye. And they see them. And they see the impact that different

conditions that they have are having on them. They can see them. They look at what is going on with a patient, and the prediction by the algorithm doesn't resonate with what their feeling is as a caregiver when they meet with the patient and they see what's actually going on. So I think that gap caused a lot of them to want to talk with us as we were doing our reporting, because it didn't feel right to them. Mm-hmm.

Well, it's interesting, though, because as you said, they have this algorithm, this technology had access to a database of at least, what, six million patients. There was all this information that it was drawing on to make these recommendations or decisions.

But if certain things weren't included in that decision making, of course, the question is why. But I understand that no one would... Would UnitedHealth actually talk to you about this? Would they tell you anything in detail?

They would. They got on the phone to answer some questions. We had lengthy conversations with them, but they were always on background. They declined to put a person on the record so that we could actually quote their responses and their explanations. They didn't.

A lot of the explanations that we got were sort of, well, this isn't the final determinant. We're having somebody else review the algorithmic prediction to see whether the person needs more care or should be denied care. But it's important to kind of understand how the pieces fit together here. So the person that's on the front line that's using the algorithm and also dealing directly with the patient

has to make a decision and that person has to make a decision about, okay, should this patient be denied? And if so, do I recommend that to a medical director who will then review the case? And what was happening was that those people on the front line would feel under pressure to make the decision that they should be denied because their performance was judged based on that. Mm-hmm.

Okay. So then they forward the case, and a lot of these cases they felt were just sort of getting rubber stamp denied once they put them up the line. Okay. So let's go through this in even more detail because this is a critical part, right? Because AI can actually be quite a useful tool, right? If deployed –

correctly and thoughtfully because, you know, when we talk about healthcare data, there's so much of it that could be used for good. But the question is, what is the ultimate human decision-making? Okay. So if I understand correctly, Casey, your reporting found that when, for example, coverage denials were appealed,

through UnitedHealth's process. There's an internal appeals process. If I understand it correctly, over 90% of those denials are reversed. This is according to the lawsuit that's currently pending against UnitedHealth, which implies, if those appeals are that successful, that the algorithm is incorrectly denying coverage.

Yeah, you could certainly make that conclusion. And unfortunately, so few people appeal and follow through the process. So it's something like 2% to 3% of people actually appeal the denial. So given that gap, you can surmise that the insurance company is getting away with a lot of denials that if contested would get overturned. Okay. But someone is telling those case managers who...

technically offer the final decision on coverage or denial, how close they should stay to the algorithm's determination, right? I'm seeing here that in 2022, NaviHealth told its case managers to keep patient stays in nursing homes to within 3% of the number of days projected by the algorithm. Did that bracket get wider or narrower over time, Casey? Yeah.

it got narrower. And this was a really stunning part of our reporting because you see that

these caregivers, instead of having the discretion to make a decision about all of the information they are seeing, are instead being told that you have to follow this algorithm and its predictions within 1%. So if they, as a care coordinator, have 10 patients and together those patients are allotted 100 days, if that group of patients stays more than 101 days,

then they've missed their performance target. And that's especially disturbing when you think about what everybody always says about AI when it's used. They always say, well, we're going to have a qualified expert human looking at this AI and making a decision about whether the AI's recommendation should be applied or not.

In this case, it's the exact opposite. They're being told to follow the algorithm no matter what, almost, within 1%. Within 1%. And this was just last year. The target narrowed: you cannot deviate more than 1% from the AI's recommendation. Now, technically, and the details matter here, it's not case managers that are the deciders on coverage or denials, right? That falls to a physician medical reviewer.
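
The arithmetic behind that target is easy to pin down. Here is a minimal sketch of the threshold check, using the 10-patient example from the conversation; the function name and structure are ours, not NaviHealth's.

```python
def within_target(actual_days: float, projected_days: float,
                  tolerance: float = 0.01) -> bool:
    """Reported performance target: a caseload's total actual days must not
    exceed the algorithm's total projection by more than the tolerance (1%)."""
    return actual_days <= projected_days * (1 + tolerance)


# The example from the conversation: 10 patients allotted 100 days in total.
print(within_target(100, 100))  # True  -- exactly on the projection
print(within_target(101, 100))  # True  -- 101 days is exactly 1% over
print(within_target(102, 100))  # False -- the case manager misses the target
```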

Right. That's the point I was trying to make earlier about. So the case manager basically can only approve additional care, and they would have to go to their manager and say, I think this person needs more care. And the manager would say, okay, give them a couple more days or not. But if it comes down to a denial, the denial can only be made by a physician medical reviewer who takes a look at the case and decides, okay.

okay, this person is denied or not. But they're being advised by those case managers. But they're being advised by the case manager and they're also taking into consideration the algorithmic output. So there is a concern there about sort of

you know, process automation and kind of automation bias. Once it gets in front of you, it's sort of been reviewed. And there's a concern that a lot of experts have about this, that you're just kind of, you begin to automatically rubber stamp the prediction of the algorithm. Because in many cases it seems, well, this is correct. But then there are those two and three cases that come up where, wait, it's not. And it's,

And then sort of this kind of automatic process can lead to unjust and unfair decisions. Right. But then that scales up when you're talking about the nation's largest private health insurer, right? It can be just a small fraction of cases, but we're talking about millions of people.

whose care is being funneled through this algorithm. Now, earlier I had mentioned that not long after your reporting came out, Casey, this major class action lawsuit was also filed against UnitedHealthcare. And by the way, we should say that UnitedHealthcare has provided a statement that

I think it's the one they also provided to you, Casey, in which UnitedHealthcare says, quote,

Coverage decisions are based on CMS coverage criteria and the terms of the members' plan, end quote. UnitedHealth also says, quote, the lawsuit has no merit and we will defend ourselves vigorously.

Okay. So what United Health is claiming in this statement, Casey, which I know you're familiar with, is completely different than what you're describing or how you're describing they use this algorithm. They're just saying, hey, it's information to inform providers, families, and caregivers about what sort of assistance and care the patient may need.

Yes, exactly. And that would be the case, but for this 1% target, right? I mean, this algorithm and algorithms like it can be really helpful because they do provide information that could be considered and used in the process of making decisions. But if you tie the decision makers' hands to the algorithm, then you're eliminating their discretion to deviate from the algorithm. Another important point I'd bring up in this, which I didn't get to earlier, was,

When the algorithm is making its prediction about you, what it's doing is it's making the calculations between the data categories I talked about, and it's comparing you to patients like you in this 6 million patient database that they have.

And that's how it's reaching this determination about your length of stay. But what if those patients in the database are not actually that close a comparison to you? What if they're different in certain ways that end up being incalculable? And that's what we sort of found, that it's just, it's really impossible to get at all those nuances to reliably and consistently make an accurate comparison. I'm Meghna Chakrabarty. This is On Point.
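
One standard way to build a "patients like you" comparison is nearest-neighbor matching: find the most similar records in the historical database and average their stays. Whether NHPredict actually works this way is not public; the sketch below, with a five-record stand-in for the 6 million patient database, only illustrates the point that a handful of coded attributes cannot capture every clinically relevant difference.

```python
from math import dist

# Toy stand-in for the ~6 million patient records described in the episode.
# Each record: (severity, comorbidities, lives_alone, age) -> observed stay.
historical = [
    ((3, 1, 1, 85), 20),
    ((3, 0, 0, 78), 12),
    ((2, 2, 1, 90), 25),
    ((3, 1, 0, 82), 15),
    ((1, 0, 0, 70), 8),
]


def predict_stay(patient: tuple, k: int = 3) -> float:
    """Average the stays of the k most similar historical patients."""
    nearest = sorted(historical, key=lambda rec: dist(rec[0], patient))[:k]
    return sum(stay for _, stay in nearest) / k


# An 85-year-old with a serious injury and a complicating condition who lives
# alone. Nothing in these four numbers encodes, say, a pain-medication
# allergy -- exactly the kind of nuance the comparison can miss.
print(predict_stay((3, 1, 1, 85)))  # -> 20.0
```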

Well, Casey, we reached out to Ryan Clarkson. His law firm is representing plaintiffs in this big class action lawsuit against UnitedHealthcare. He's also suing health insurer Cigna for using AI in a similar fashion. And Clarkson says they found that Cigna denied more than 300,000 claims in a two-month period, which he says underscores the problems with using AI to evaluate insurance claims. If you do the math,

on that two-month period across the 300,000 claims, what it came to is something like 1.2 seconds for every claim denial for the physician to review. And I don't think anyone on earth would think that 1.2 seconds is a sufficient amount of time to review the circumstances of someone's health insurance claim.
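
Taking the two figures quoted here at face value, the back-of-envelope total is striking: 300,000 denials at roughly 1.2 seconds of physician review each comes to about 100 hours of review time across the entire two-month batch.

```python
# Back-of-envelope arithmetic using only the two figures quoted above.
claims = 300_000          # denials over the two-month period, per Clarkson
seconds_per_review = 1.2  # average physician review time per denial

total_hours = claims * seconds_per_review / 3600
print(f"{total_hours:.0f} physician-review hours in total")  # -> 100

# Spread over roughly 40 weekdays in two months, that is about 7,500 denials
# and two and a half hours of total review time per workday.
print(f"{claims / 40:.0f} denials per workday")  # -> 7500
```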

Clarkson also said that one of the goals with this class action lawsuit is he wants to create a larger conversation about the use of AI in healthcare and specifically health insurance.

And in order to do that, he's hoping that a judge will allow the case to proceed to trial, which would then lead to a round of discovery. If we are able to proceed, then we will also have the opportunity to conduct discovery and really open the black box of the AI algorithms and code to understand exactly what it is that the

the AI is made of and how it's being utilized to, what we believe to be, unfairly deny our clients' and other similarly situated individuals' health insurance claims. Casey, do you happen to know the status of this class action suit right now?

The class action suit is sort of wending its way through the very lengthy motion process. And the Cigna case I'm less familiar with because it's not one I reported on, but the case involving NaviHealth and the use of that algorithm is in the pretrial discovery process right now. There is not a trial date yet scheduled, but it's moving its way through the process. Now that lawsuit is alleging that

UnitedHealth and NaviHealth committed breach of contract, breach of the duty of good faith and fair dealing, unjust enrichment, and insurance law violations in a lot of U.S. states. About that unjust enrichment part, were you able to find any...

hard numbers or even an estimate on how much money UnitedHealth is saving by using NHPredict? Because it's impossible to imagine that cost savings wouldn't be part of the purpose of this tool.

Yeah. I mean, the tricky part is you can only kind of do rough calculations based on the data and denial rates that you see within this particular category of care and how much a typical episode of care costs. But when you do those calculations, you come up with billions of dollars a year.

And I would point out that after our reporting, the Permanent Subcommittee on Investigations in the U.S. Senate did an inquiry about this and subpoenaed hundreds of thousands of documents from insurance companies and found, in fact, in those documents that the insurers were predicting that the use of these algorithms would save them billions of dollars a year. And their decisions, importantly, were calibrated

directly to what the financial impact would be. If it was going to cost them money, they didn't recommend using it. If it saved them money, they did. Casey Ross is with us today. He's chief investigative reporter for data and technology at Stat, and he's walking us through his investigative series on the use of AI in health insurance. There's a lot more. This is On Point.

Support for AI coverage in On Point comes from MathWorks, creator of MATLAB and Simulink software for technical computing and model-based design. MathWorks, accelerating the pace of discovery in engineering and science. Learn more at mathworks.com.

You're back with On Point. I'm Meghna Chakrabarty. And before we return to our conversation with Casey Ross, I just want to give you a heads up on something we are working on for the new year. It's coming quickly. But in the beginning of 2025, we are going to be welcoming back two of our favorite people, the On Point money ladies. You know them. They're Michelle Singletary of The Washington Post and Rana Foroohar from The Financial Times. They come together to talk all things finance, micro and macro.

And we'd like to hear from you. What's your economy-related question or concern? Maybe you have some thoughts or hopes about how the incoming Trump administration might have an impact on the economy and your bottom line. Or maybe it has to do with housing, always a big one, or loans, always a big one. What's your overall question for the inimitable money ladies? You can send us the question through our On Point Vox Pop app.

If you don't already have it, look for On Point Vox Pop wherever you get your apps. You can also still call us, 617-353-0683. That's 617-353-0683. For the money ladies, who will be back with us early in January. Casey, I wanted to ask a couple more questions about how the federal government has now become involved in...

in this issue of the use of AI in health insurance claims and coverage. I understand that federal inspectors did look at denials from 2019 and found, though perhaps this is broader than AI specifically, that private insurers were straying beyond Medicare's actual set of rules in terms of approval or denial, and they were using internally developed criteria to delay or deny care. I mean, it makes sense given what you've described on how NaviHealth and UnitedHealthcare are using AI. Yeah, the Office of Inspector General within Health and Human Services has been very active on this issue, and that's what they found. They found that these internal criteria and decision-making tools, these algorithms were basically

being used as an intermediary between the published Medicare rules about when people deserve and need care to make decisions about whether or not those criteria, the patients met those criteria. So you're inserting a filter there between the actual rules and the decision about the patient, and you're using these algorithmic tools to

make those decisions or parse them in a way. But again, I mean, this is where the federal government could have quite an enormous impact, because even though Medicare Advantage is coverage that's ultimately administered through a private insurer like UnitedHealth, the money is still coming from Medicare, right, Casey? Right.

Yes, the federal government sets the rules here around the use of these technologies. They're providing the funding. So all of the money that's going into these plans is coming from taxpayers and from these federal programs. This is the federal safety net at work, and Centers for Medicare and Medicaid Services is in charge of that.

So there are things that they could do, in other words. There's another thing that's just recently happening. I think this is what you were mentioning earlier. In October of this year, there was this report, right, that came out from the U.S. Senate Permanent Subcommittee on Investigations, and they found that

Medicare Advantage insurers are intentionally, this is from the report, quote,

So, Casey, in the wake of all of this, including the ongoing lawsuit, has there been any indication from UnitedHealth, NaviHealth, any of the other companies involved in this use of AI in terms of claims adjudication that they might change course?

We haven't seen any evidence of that, that they're planning to pull back on the use of the algorithms or change the way they do it or welcome any additional oversight. In fact, the algorithm that was at issue within UnitedHealth

owned by its NaviHealth subsidiary, NaviHealth no longer exists as a subsidiary under that name. It's now sort of just part of Optum. And so now you don't really even know that there is this particularly branded algorithm and that it's used by this particular subsidiary. It's just sort of part of the Optum group of businesses within UnitedHealth Group, which makes it sort of harder for people to understand when and how it's being used. Okay.

Well, Casey, hang on here for just a minute because we reached out to someone I know you're familiar with, Ziad Obermeyer.

He's an associate professor of health policy and management at the University of California, Berkeley School of Public Health. He's also an emergency medicine physician, and he spent years researching applications of AI in the field of health care. And he takes a look at issues of bias and whether or not the AI is efficient.

He even founded a company that makes clinical data available to people who want to build useful algorithms for the healthcare profession. And by the way, Dr. Obermeyer was featured in our award-winning week-long series a couple of years back called Smarter Health, How AI is Transforming Healthcare. And folks, if you haven't heard it, you really should. Check it out in our podcast feed or go to onpointradio.org and you can find it. Just look for it.

But anyway, we reconnected with Dr. Obermeyer about AI and health insurance. And he tells us that the public actually should not be too quick to condemn AI because having a better way to compare and evaluate coverage needs is in fact critical to insurance administration.

But to turn that into like a prescriptive thing and then to sort of, you know, put reimbursement decisions and claims denials and things like that behind that just seems insane to me. And really just like both ethically and scientifically just wrong.

In other words, making the AI the final say in claims decisions eliminates that critical step of human oversight, which is what Casey went over with us, that 1% variance that UnitedHealth is allowing.

That's why Obermeyer says UnitedHealthcare's use of AI is flawed: because case managers were given instructions and rewarded for adhering to the algorithm and not their own expertise. And Dr. Obermeyer says, you know, that's analogous to rewarding teachers for their students' test scores, for example. You might wish to incentivize a commitment to learning, but you're only incentivizing teachers to teach what's on the test.

Choosing that metric is so important and designing the incentive systems around it in an intelligent way is also super important. And I think that in this case, neither was the metric chosen well, nor was the incentive system around that metric designed well. However, Dr. Obermeyer says there is still room for an ethical use of AI in health insurance.

UnitedHealthcare could have used the algorithm to determine when patients were staying in care much longer than normal. And then that information could have actually been used to establish where additional care, not necessarily less care, is needed, which would benefit both patients and the company. Even if you're an insurance company, there's actually something very useful to do there, which is to sort of say, okay, these people have a lot of undiagnosed conditions.

If you're in Medicare Advantage, undiagnosed conditions is great. It's like, go diagnose those conditions, then you're going to get more money from Medicare. So it's not even that, like, this requires people to, like, sing Kumbaya and, you know, do the right thing, even if it's bad for business. I think there's a lot of places where, like, what's good for patients is what's good for business. Dr. Obermeyer says some insurance companies aren't thinking about how to truly maximize AI's potential to improve both insurance coverage and patient health.

For example... So I think, you know, like a lot of people have knee replacements and do great. But there's a significant minority of people whose knee replacements go catastrophically wrong.

And knowing that in advance would be super valuable for everyone. It would be super valuable for the patient. It would be super valuable for the insurance company because then they don't have these huge bills for both the knee replacement and the rehab. Like, why don't we have algorithms to do that? Like, why are we just predicting these dumb things like length of stay? But even in this clear example offered by Dr. Obermeyer,

The biggest wall to more sophisticated AI use is the insurance companies themselves. Major insurers and corporations that build these AI tools insist on keeping proprietary data private. Casey talked to us about that just a few minutes ago, meaning they do have access to these huge data sets that, if shared,

could potentially generate much better tools, but insurers and tech companies refuse to do that. I think we're really being held back in this area because only a few people at a few different types of companies can access the data. And that's not good for society. It's not good for the economy. It's not good for, you know, anyone except those small number of companies that can access the data. Yet on one hand, Dr. Obermeyer remains optimistic about the potential for AI's use in healthcare if insurers learn from their mistakes, and that if is where his optimism hits an obstacle. If we see enough of these catastrophic failures, there's going to be a very reasonable reaction from everyone to just shut the whole thing down and be like, these things are bad. They can't be saved.

And I think that would be a tragedy because I think, you know, as I mentioned, I'm really optimistic that there are a bunch of very useful things that we can do with algorithms. But the more of these bad examples that come to light and, you know, there are a lot more than have been publicized. You know, there are a lot more examples that are going to come to light shortly over the next few years and it's going to be ugly.

That's Ziad Obermeyer, emergency room physician and associate professor of health policy and management at the University of California, Berkeley School of Public Health. Casey Ross, your reaction to that?

Yeah, I mean, I think the points are very well made. And I think the problem is that a lot of the decisions about the algorithm, the metrics that are chosen, the incentives that surround it are made behind a corporate firewall that we don't have any visibility into and that there is not any oversight over. Insurance companies right now are free to make these decisions about

how they build the algorithms, what data they use, what metrics they choose to use in the process of evaluating patients, and then what incentives they give their employees around the use of the algorithm. And the great irony in all of this

when I think about it, is that when they market these tools and they talk about their use, the insurance companies and others who use them said, well, we're using this to make a better individualized decision about the patient.

But when the algorithm is applied in such a hard and fast way, it is missing crucial details about the patient that result in exactly the opposite of an individual carefully made decision. It results in a generalization that cuts off care at exactly the wrong moment. I'm Meghna Chakrabarty. This is On Point. Casey, we've been talking about...

claims denial, most specifically in the area of nursing home care or post-acute care facilities. But these technologies are used across the industry, across health care. I'm wondering, in the process of reporting these stories, how did it change sort of your own personal view or personal experience of health insurance in America? Because not only are you a reporter, you're an American who's also using this system.

Yeah, absolutely. I mean, when I look at it, I just feel like there is a complete failure in communication, in a way, around what insurers do and how they use these tools and how they communicate about them to regulators and the public. I mean, insurance decision making is about applying math and statistics to make calculations about risk,

and then to write policies that protect various stakeholders from that risk. It's actuarial science. And when you insert these algorithmic tools into that process, it becomes harder for people to understand, and it feels very opaque because there are these calculations being made. And when you're on the other side of that calculation and that decision, it feels cold.

And it feels unjust. And it can feel like exploitation. And I think a lot of the anger that you see in this very emotional, visceral reaction in the recent weeks is a result of that discrepancy and that disconnect and that complete failure of understanding and communication about what insurance companies do, how they do it, and what the impacts are on the lived experiences of patients every day. Mm-hmm.

On their actual lives, right? So, you know, I want to play one last thing from Christine Huberty, who's the attorney who really kicked off your reporting when she made all these comments about the denied coverage her clients were experiencing.

She's now an attorney at the Center for Medicare Advocacy, and we actually asked her if she thinks there are concrete steps that could be taken to improve the use of AI in evaluating health insurance claims. The only thing that should be kicking off a denial is a change in the patient's condition.

That would be number one. Number two would be that the treating medical professionals have to have some sort of override for any type of denial that is turned out if they're still using it. And then third, I think they should be able to stop the automatic denials as well. And so there again should be some sort of change in condition that prompts another denial or review instead of it just happening automatically.
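
Expressed as logic, Huberty's three safeguards amount to a gate in front of any algorithmic denial. A minimal sketch, assuming her rules as stated; the function and field names here are hypothetical.

```python
def may_issue_denial(condition_changed: bool, clinician_override: bool) -> bool:
    """Gate a candidate denial using Huberty's proposed safeguards:
    1. A denial may only be prompted by a change in the patient's condition.
    2. The treating medical professionals can override any denial.
    3. Automatic denials are blocked unless rule 1 is satisfied.
    """
    return condition_changed and not clinician_override


# An automatic denial with no change in the patient's condition is blocked.
print(may_issue_denial(condition_changed=False, clinician_override=False))  # False

# Even with a documented change, the treating clinician's override prevails.
print(may_issue_denial(condition_changed=True, clinician_override=True))    # False
```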

So these are potential systematic, I should say, improvements that could be made. But in the last minute that we have, Casey, I mean, people listening to this are individuals. Is there anything an individual can do, you know, if they're confronted with these denials?

I think the only thing an individual can do is know their rights and ask questions and know what questions to ask. I think it's important to ask about what information was used to make this decision, what tools were used to make this decision, and to understand that whatever decision an insurer makes cannot just be appealed by your doctor. It can be appealed by you.

You can gather your records. You can submit a letter and appeal directly to the insurance company. And if you do that in cases where you feel the denial is unjust, what we've seen is that the overturn rates are very high. 80% to 90% get overturned on appeal, but you have to appeal. And you have to take that extra step. And that can be extremely hard to do when you're a family member

in one of the worst moments of your life because maybe a loved one is dying or is suffering. And that's really hard. And to try to work with advocates and caregivers to make sure that process unfolds in a way that helps the right decision be made for you and your family.

Well, Casey Ross and Bob Herman co-reported this special STAT investigative series called Denied by AI. It was a Pulitzer Prize finalist in 2024 for investigative reporting. Casey, thank you so much for your reporting and thanks for being with us here today. Thank you, Meghna, so much for having me. It was a real pleasure. I'm Meghna Chakrabarty. This is On Point.