People are spilling it all to generative AI-powered chatbots to get help with everything, from writing code to planning vacations. But some use cases are riskier than others. I'm Nicole Nguyen, personal tech columnist with The Wall Street Journal. And over the next few weeks, I'll be hosting a special series of Tech News Briefing, Chatbot Confidential.
We'll explain what not to tell AI chatbots, whether it's personal, like getting medical advice, or professional, such as composing a tricky email. On the show today, it's tax season.
The deadline to pay your taxes is in 15 days. Over 161 million people filed individual income taxes in 2022, according to the most recent data from the Internal Revenue Service. And recently, some people on social media have been recommending AI chatbots like ChatGPT or Claude for tax help.
Like these: "Not sure what you could write off on your taxes if you started a business? How about you combine that question with the power of AI?" "Tax season is coming up, but ChatGPT is going to make it so much easier to file your taxes." Before we go further, we should note, News Corp, owner of The Wall Street Journal and Dow Jones Newswires, has a content licensing partnership with ChatGPT maker OpenAI.
According to a recent Harris poll, 17% of filers said they would use AI for tax prep help, while 45% said they would consider it in the future. Even tax preparation software companies like TurboTax and H&R Block have launched their own AI chatbots. ChatGPT and other popular chatbots are free to try. But can you trust their answers? The main issue is that you'll think that you know something that you don't really know if you use the bot.
That's Laura Saunders. She writes the tax report column at The Wall Street Journal. It might tell you about a certain break, but it won't tell you everything about it or what's an alternative to that or this was true in 2021, but it's not true in 2025. So it lacks context. However, it does, you know, a pretty good job of telling you some things and orienting you in the landscape.
And the U.S. tax system is so complicated, it's no wonder AI gets it wrong, let alone people. So why is it so hard to file taxes in the U.S.? There are a lot of reasons for that. One is that taxes are complicated because life is complicated. For example, there's a very generous tax credit for children. It's $2,000 a year.
So what if you have an eight-year-old who lives with you? She's your niece or your grandchild, and she's with you for nine months of the year during school. Do you get the credit or would somebody else get the credit?
Next, there's human nature. Taxpayers will drive a truck through any tax loophole they can find. It's even kind of a game for them. So Congress and the IRS have to write the law to mean exactly what's intended, not more and not less. I mean, this takes a lot of words. Laura and I sat down to test out three free popular chatbots: OpenAI's ChatGPT, Anthropic's Claude, and Microsoft's Copilot.
We asked each of them the same prompt and had Laura review the responses.
Of the three, ChatGPT was the most detailed. However, its knowledge of the tax code only goes through 2023. So it tells you what the standard deduction was for 2024, but not for 2025. And if you're doing some planning, you really need to know those numbers for 2025.
Another thing is that it talks about benefits for children, like the dependent care credit or flexible spending accounts. But it doesn't say that usually you can only have one or the other. So you might think, wow, this is great. I'm going to get these two great benefits. But probably you only get one of them. This is how something like ChatGPT could trip you up. You certainly have to go beyond it to find out more. Next was Claude.
It had much less detail about numbers, and that's sort of good because it doesn't mislead you. But I noticed one thing that was really wrong. The prompt said, I'm an employee and work at home. What about the home office deduction? Well, if you're an employee who works at home, you are not eligible for a home office deduction at all. But Claude implies that you are.
And so it gave information that's not accurate. And finally, Copilot. It was even more vague than the first two. It gives almost no numbers. For instance, it doesn't tell you that the child tax credit is a credit of up to $2,000, and that it goes all the way up to $400,000 of income. So many, many people can get it. It's not like a credit that phases out at a much lower income. To be clear, Copilot's response had said the user may qualify for the child tax credit if their income falls within the eligibility range, which most people in the country would. When we asked Anthropic about Laura's test, the company said Claude can be helpful to tax preparers by outlining potential deductions and credits.
The spokesperson said the bot also recommends professional tax help for the most accurate guidance. Microsoft said it encourages users to enter multiple prompts as it helps Copilot refine its answers. For more intricate questions like filing taxes, Microsoft recommends the Think Deeper toggle, which uses advanced reasoning for more complex tasks. And OpenAI did not respond in time for publication about ChatGPT's tax prompt results.
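As an aside, the child tax credit phase-out Laura describes can be sketched in a few lines. This is an illustrative sketch only, not tax advice: it assumes the TCJA-era rules for tax years 2018 through 2025, where the credit of up to $2,000 per child is reduced by $50 for each $1,000 (or fraction thereof) of income above $400,000 for joint filers ($200,000 otherwise).

```python
import math

def child_tax_credit(magi: int, joint: bool, children: int) -> int:
    """Rough TCJA-era child tax credit estimate (illustrative only)."""
    threshold = 400_000 if joint else 200_000
    base = 2_000 * children
    # Credit shrinks by $50 per $1,000 (or part thereof) over the threshold.
    excess = max(0, magi - threshold)
    reduction = math.ceil(excess / 1_000) * 50
    return max(0, base - reduction)

# A joint filer well into six figures still gets the full credit:
print(child_tax_credit(350_000, joint=True, children=2))  # 4000
# Above the threshold, the credit tapers off gradually:
print(child_tax_credit(440_000, joint=True, children=2))  # 2000
```

This is the point Laura makes about Copilot's vagueness: the actual phase-out starts so high that most filers qualify, which "if your income falls within the eligibility range" doesn't convey.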
So which of the three performed the best? I thought ChatGPT was the best of the three. It had the most detail. Still, Laura says the best place is going to the IRS site itself.
The IRS has publications on all of these areas: retirement savings, selling your home, mortgage interest, things like that. Beyond that, she recommends hiring a tax professional. Also, H&R Block gives advice, and so does TurboTax. I would check out those if you don't have access to a person that you trust. But when you ask them for advice, find out what's involved, how much it will cost, and exactly what protections they're providing. Ultimately, if you're going to use these AI chatbots, just make sure to double-check the information spat out by these machines. Because as we've mentioned, they can get things wrong. Just be careful and remember: garbage in, garbage out. Isn't that one of the first rules of computing?
When we come back, we'll hear from a privacy expert on what these chatbots do with your data and how to protect it. That's after the break.
As more AI chatbots pop up and more people use them, we wanted to know about what happens to your data. And to understand that better, we sat down with someone who's been studying this for many years. I am Jennifer King. I am the Privacy and Data Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence, where I research, wait for it, data and privacy and AI.
To begin, when people include information in prompts they give to chatbots, who owns that data? Well, I would argue that you lose possession of it. I looked at the privacy policy of one popular AI chatbot, and it's very clear from their privacy policy that any data you provide them, typically by answering that prompt, much like a search box, goes to them. And they will potentially use it for training purposes.
Now, they may use it for all the normal things companies have used data for over the decades: improving their products and services, personalization. But even in this context, they explicitly state that they may use that data for retraining, which I think is one of the concerns that a lot of us might have: that our data is not just going to be stored, but is going to be repurposed, essentially. And King says just because you give data over to chatbots, that doesn't mean that data will pop up somewhere else.
And of course, if you enter something identifiable, they may actually be working to strip identifiable data from the training set. Or when it's being processed and being used by the chatbot, they may try to make sure that full names, for example, aren't spit back out in the context of a discussion. So the guardrails will really matter.
But there's certainly been research over the last few years that has found instances where, not necessarily chatbots per se, but LLMs in general have repurposed or spat out memorized data that has mostly just been found online.
So what steps can you take to keep your data private from companies running these chatbots? ChatGPT has a feature called Temporary Chat that you can turn on. OpenAI says that with this feature, ChatGPT won't be aware of previous conversations and that temporary chats won't be used to improve its models.
Meanwhile, Anthropic markets Claude as privacy-first. And Microsoft says it doesn't use customer data to train Copilot or its AI features, unless users provide consent for the company to do so. It also says it doesn't share customer data with a third party unless granted permission by the customer. But how much can you trust what they say?
You can trust them, I think, insofar as they're making a public statement that a regulator can also view, and the regulator has the power to ask the company questions about it in a context where its answers could be used against it in a regulatory action. I don't necessarily think they're trying to mislead you. And it may be they're not selling or sharing your data with any other third parties. But again, there's that question: are they using it to retrain?
So, if you're going to use these chatbots for tax prep, here's the deal. They can give you a starting point for where to look for deductions. But a good rule of thumb is to trust but verify. Confirm specific rules and numbers with an official source, because the tax code can change. And to protect your privacy, don't upload documents such as tax returns with sensitive information. Keep your prompts vague and omit personally identifiable information, such as your Social Security number.
Companies have put in some guardrails to try to prevent bad actors from digging up users' information. For instance, when you ask ChatGPT for personal data, the bot says, quote, I can't provide that information. But that doesn't mean they're foolproof. Next time, we'll tell you about using chatbots in the workplace and what risks come with using them when, say, asking AI to draft up an email to a coworker. Before we go, we want to hear from you.
Do you have questions about using AI or regarding privacy? Send us a voice memo at tnb@wsj.com or leave us a voicemail at 212-416-2236. That's 212-416-2236. I'll be back in a future episode to answer some of your questions.
And that's it for Tech News Briefing. Today's show was produced by Julie Chang. I'm your host, Nicole Nguyen. We had additional support from Wilson Rothman and Catherine Milsop. Jessica Fenton mixed this episode. Our development producer is Aisha Al-Muslim. Scott Salloway and Chris Inslee are the deputy editors. And Falana Patterson is The Wall Street Journal's head of news audio. Thanks for listening.