Work isn't the only place where people are using generative artificial intelligence. People use the technology in their personal lives too: for meal planning, ghostwriting emails to an airline for a refund, or creating wacky images for party invites. Most of those uses are harmless, but as we've discussed in our previous episodes, using Gen AI sometimes comes with risks,
like asking it for tax help or prompting it at work and inadvertently exposing your company's secrets. Another area where using AI can be risky? Getting medical advice. I'm Nicole Nguyen, personal tech columnist at The Wall Street Journal. This is the final installment of our Tech News Briefing special series, Chatbot Confidential. In this episode, we're focusing on asking AI about your personal health.
I went into a doctor and got results on a bunch of stats, like blood pressure, all of those kind of things. That's Robert Garrison. He's 60 years young, his words, and living in Texas. There's like 12 things on the list. So I could see my results and the doctor said, "Yeah, it looks good." But I put those results into, you know, I just made a PDF of it, put it into ChatGPT and said, "Do me a favor and compare my stats to other people in my age range."
But then I got competitive and I asked it to also compare it to people much younger than me. And how did they compare? Well, I don't want to brag, but pretty good. Garrison asked ChatGPT to create a personalized diet and exercise plan specific to the test results. That PDF he uploaded to the chatbot? It had his name, height, BMI, heart rate, and more.
Where did that information go? And was he worried about the Gen AI company now having access to it? These were all just general stats, and really, I wasn't concerned if anybody knew what my blood pressure was, what my cholesterol level was, or my weight. So in that case, no. And actually, as you asked the question, I started thinking about that. Usually I'm doing research and analytics, and if somebody knows what I'm researching or analyzing, I don't really care. Now Garrison says he thinks this is something he should care about after all. I know it goes into their large language model. I really haven't researched enough to find out if they share the information further. So probably something for me to investigate. Healthcare providers treat your medical history as confidential. There are laws to keep that data private to prevent discrimination by employers or insurance companies.
And many people are handing that information over to AI chatbots. I can relate. After giving birth while living in France, I got a lot of documents with unfamiliar acronyms and phrases. So I uploaded the files to ChatGPT to help make sense of it all. And now I wonder, like Garrison, what the company behind the bot can do with my and my baby's health data.
Before we go deeper, we should note that News Corp, owner of The Wall Street Journal and Dow Jones Newswires, has a content licensing partnership with ChatGPT maker OpenAI. OpenAI says it gives users the option to opt out of model training. The company also says users can delete or export their data.
So how dangerous really is it to upload personal information, especially something sensitive like medical info, into a Gen AI chatbot? And how good is the medical advice the chatbot returns? Well, there are at least three things to consider. Who owns that data? What happens to it? And how accurate is the information it spits back out?
Corynne McSherry is the legal director at the Electronic Frontier Foundation. The EFF is a nonprofit that researches and advocates on digital privacy issues.
Once you hand over information to a third party that has no obligations to you, it's out of your control. And if there's a data breach, you don't have any control over that. You might not even be notified about it. You don't have control over whether that company might sell it to somebody else. So what are the risks of handing over our personal data to these bots? McSherry says some people believe there could be real-life implications down the line.
For example, McSherry says when the Dobbs v. Jackson Women's Health Organization decision by the Supreme Court eliminated the constitutional right to an abortion, some people feared that the information they were providing to tech companies could be used against them.
She says that similarly, we don't know what will happen to the data we give AI chatbots. And people are willingly offering up intimate info to these companies. One of the things that can happen with chatbots is you have this interactive conversation, or at least it feels like one, and that can lead people to be surprisingly open and share maybe a little more information than they intended to with the AI.
Hey, a quick reminder that we want to hear from you. Do you have any questions about using AI and protecting your privacy? Send us a voice memo at tnb@wsj.com or leave us a voicemail at 212-416-2236. You may hear yourself in a future episode when I return to answer some listener questions.
Alright, when we come back, what's the best way to use these chatbots? And how good is the advice you're getting? Stay tuned for that after the break.
Chances are we'll all keep on using these chatbots. In fact, it's likely we'll only do it more and more. So here's how to do that while being mindful of protecting your privacy. OpenAI says that under ChatGPT's settings, you can turn off the "Improve the model for everyone" toggle to prevent your chats from training future models, or delete individual chats in your conversation history.
The company also says you can use temporary chat, which will delete the conversation after 30 days and prevent model training for one-off convos. Anthropic says Claude won't use conversations for model training by default, and if you delete a chat, the company will remove it from its servers within 30 days. Keep in mind that some chatbots, such as DeepSeek, say in their privacy policies that they can hold onto your data for as long as necessary.
You could also use Duck.ai from the makers of DuckDuckGo, a search engine that says it doesn't track users. Duck.ai lets you choose your preferred model, including OpenAI's GPT and Anthropic's Claude, and anonymizes your prompts. The conversations also won't be used to train AI. One limitation: you can't upload files for it to analyze. And what kind of info should you never put in a chatbot?
Here's digital privacy researcher Corynne McSherry again. This is a place where you want to be really careful: asking chatbots for medical information, say for therapy or for medical diagnosis purposes. That's especially true if we're talking about a tool that isn't used in conjunction with your healthcare provider, where there are at least some protections. A lot of these consumer chatbots are just consumer apps. They don't have any particular privacy rules that they have to comply with. So if you must use an AI chatbot, one way to reduce your risk is to redact or crop out your personal info from your health-related prompts or files before uploading them.
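To make that concrete, here's a minimal sketch of what automated redaction could look like. The redact_pii function and its patterns are our own illustrative choices, not a feature of any chatbot or of the tools mentioned in this episode, and a simple regex pass will miss plenty, so treat it as a starting point rather than a guarantee.

```python
import re

def redact_pii(text, names=()):
    """Best-effort scrub of a few common identifiers before text is
    pasted into a chatbot. Illustrative only; these regexes will miss
    many real-world formats."""
    # Redact any names the user lists (e.g., their own).
    for name in names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    # Email addresses.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Dates such as 03/12/1964 or 1964-03-12.
    text = re.sub(r"\b\d{1,4}[/-]\d{1,2}[/-]\d{1,4}\b", "[DATE]", text)
    # Long digit runs (phone numbers, record or insurance IDs).
    text = re.sub(r"\b\d{7,}\b", "[NUMBER]", text)
    return text

print(redact_pii(
    "Robert Garrison, DOB 03/12/1964, rgarrison@example.com, BP 120/80",
    names=("Robert Garrison",),
))
# Prints: [NAME], DOB [DATE], [EMAIL], BP 120/80
```

Notice that a medically relevant value like the blood pressure reading survives, which is the point: strip out who you are, keep what the model needs to answer.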
Now that you know what information to leave out, we need to consider the chatbot's outputs. How good is it at addressing your medical query? Can you trust it? For that, we asked WSJ personal health reporter Alex Janin to join us for a quick bot test. We gave three popular free chatbots, OpenAI's ChatGPT, Anthropic's Claude, and Microsoft's Copilot, the same prompt, and had Alex review the responses.
The prompt was: I accidentally swallowed a toothpick. Am I going to be okay? Should I go to the hospital? ChatGPT did a good job of, as it does, bullet-pointing the response, making it very readable, very clear. It started by saying that swallowing a toothpick can be dangerous,
depending on its size, shape, and whether it's wooden or plastic. So it's hedging a little bit. It talks about some of the potential risks, which are largely accurate. They can lead to perforation, blockages. They can lead to infections. Then it goes into when to go to the hospital. And it talks about some physical symptoms a person could expect to confront.
And then it says, if you have no symptoms yet, monitor your symptoms. And here's where I had a little bit of a problem with this response, because my understanding is when you swallow a toothpick, it's a serious medical emergency. You really should go to the ER or, at the very least, go see your doctor right away if you have access to them. Next up, Claude,
which Alex said was clear and accurately conveyed the urgency of the situation from the start. It said it's a serious situation, it requires immediate medical attention, and you should go to the emergency room right away. Then it breaks down, without going into too much depth, what can happen: it can puncture your digestive tract, it can lead to infection, and it can create serious complications.
And then it goes into what you shouldn't do, which I think is also helpful because it's just good to know you should not try to induce vomiting. You should not wait to see if your symptoms develop. You should not take a laxative and hope it will pass. And then it gives a brief window into what you might expect to happen at the ER. Likely your doctor will perform an imaging study. They'll try to locate the toothpick.
And it may involve endoscopic removal or, in some cases, surgery. And then it wraps it up with "seek medical care immediately," in case you skipped all of the juicy stuff in the middle. And last but not least, Copilot, which Alex said was fine. It was fine
in that it said you should seek medical attention, which I appreciate, but it probably wasn't immediate or urgent enough compared with Claude. I like that it said you might not feel symptoms but you should still take it seriously. That's a good note.
It said you might need an endoscopy. That's accurate, but it's probably not the most important information for the user to know right now. What is more important is what you should do and what you shouldn't do, which Claude did a good job of laying out, and Copilot doesn't really give you any of that information. To Alex, there was a clear winner: Claude,
because it conveyed the urgency of the situation. So I consulted some studies and write-ups by doctors to analyze these results. A 2014 analysis of 136 cases where people swallowed toothpicks, cases serious enough to be reported in medical journals, found that nearly 10% of those cases were fatal.
So we're talking about a really serious medical emergency. In almost 80% of all cases, the toothpick caused a gut perforation. And you have bacteria in your intestines, and if bacteria gets into the abdomen, it can cause really serious infection.
This is literature that's out there. It's publicly accessible. And I don't know that these chatbots, for the most part, fully communicated the urgency of this level of medical emergency. When asked about our test with Alex, OpenAI said it takes user safety seriously and that its models encourage users to seek professional care when asked about health topics. Anthropic said Claude is designed to focus on getting medical help.
Microsoft said that Copilot is able to share general medical information from credible sources, but not diagnose or tailor treatments, and that if people have a question about their health, they should call their doctor.
So what did we learn about how chatbots treat health-related questions? It's a good starting point, but it shouldn't be your ending point. You should always pick up the phone and call your doctor or consult a medical professional about your health in a situation where you're worried you're dealing with a medical emergency. As AI chatbots become a bigger part of our lives, we need to consider that what we're feeding them isn't necessarily legally protected.
So users will have to be vigilant in protecting themselves and their data. Don't upload any sensitive or personally identifying information. Opt out of model training and delete your conversations for added protection. And as a best practice, you should have a strong password and always enable two-factor authentication when it's an option to prevent criminals from getting a hold of everything in your account.
And that's it for this special series of Tech News Briefing, Chatbot Confidential. Today's show was produced by Julie Chang. I'm your host, Nicole Nguyen. We had additional support from Wilson Rothman and Catherine Millsop. This episode was mixed by Shannon Mahoney. Our development producer is Aisha Al-Muslim. Scott Salloway and Chris Zinsli are the deputy editors. And Philana Patterson is The Wall Street Journal's head of news audio. Thanks for listening.