
Chatbot Confidential: When AI at Work Is Risky Business

2025/4/6

WSJ Tech News Briefing

People
Kathy Kay
Nicole Nguyen
Stephen Rosenbush
Topics
Ashraf Zayed and Ian Yang: We use AI chatbots daily in our work for professional tasks such as engineering, modeling, and simulation, and for some personal uses, like telling our kids stories.

Nicole Nguyen: This episode explores how to protect privacy and data security when using generative AI chatbots such as ChatGPT and Claude at work. More and more employees are using AI on the job, but doing so also brings risks.

Stephen Rosenbush: Large language models (LLMs) pose two main security risks: data leaks (outbound) and importing malicious software (inbound). Companies need to pay attention to these risks because the technology is advancing faster than policy can keep up. LLM risk resembles the early days of cloud computing, which eventually required a shared responsibility model; for now, companies bear most of the responsibility themselves.

Kathy Kay: Our company has locked down the use of public chatbots; employees must obtain approval and complete training before using them. We also use our own internal chatbot and track employees' interactions with external bots. If a data leak occurs, we have a playbook for handling it. What matters is creating a safe environment where employees can try these new technologies and learn from them.

Nicole Nguyen: The rise of generative AI tools presents new challenges for companies. Employees can easily leak confidential company information by accident, as has already happened at Samsung and Apple. Companies need strategies for managing employees' use of these tools, including usage workflows, employee training, and accountability mechanisms.

Chapters
Generative AI's integration into the workplace is rapidly changing how tasks are performed. The use of AI chatbots for various professional and personal tasks is increasing, raising concerns about data privacy and security. This episode explores the risks and benefits of using AI tools in the workplace.
  • Increased use of AI chatbots in the workplace (doubled in a year)
  • Common uses: research, writing emails, presentations
  • Growing awareness of AI's potential for data breaches and security risks

Transcript


It's safe to say work will never be the same again now that generative artificial intelligence is in the picture.

People have used AI chatbots for all sorts of tasks. I use it on a daily basis for engineering and for modeling and simulation, which is really my job. Sometimes for fun stuff like, oh, tell my son a story while we're driving in the car to keep him busy.

Those were Wall Street Journal readers Ashraf Zayed and Ian Yang.

And I'm personal tech columnist Nicole Nguyen. Today, we bring you the second installment of our special Tech News Briefing series, Chatbot Confidential, where we look at whether it's possible to protect your privacy and keep your personal data safe when using generative AI chatbots like ChatGPT and Claude. In this episode, we dive into an area where the temptation to tap gen AI tools is very strong: work.

We'll give you the lowdown on the risks your new helper brings and how not to give away company secrets while using it. Before we dive in, a real quick reminder that we want to hear from you. Do you have questions about using AI and privacy? Send us a voice memo at tnb@wsj.com or leave us a voicemail at 212-416-2236.

One more time, that's 212-416-2236. I'll be back in a future episode to answer your questions. All right, back to the show. After ChatGPT first came onto the scene, many companies were quick to ban the chatbot. Still, one in five U.S. workers said they used ChatGPT for work in 2024, according to the Pew Research Center. That's more than double compared to the year before.

It's easy to understand why. Chatbots can take on some of your work, saving you time. The most common use cases: research, writing first-draft emails, and creating presentations. Before we get into it, a disclosure: News Corp, owner of The Wall Street Journal and Dow Jones Newswires, has a content licensing partnership with ChatGPT maker OpenAI.

Another Pew survey found that about 16% of respondents say they do at least some of their work with AI. And a quarter say, while they're not using it much now, at least some of their work can be done with AI. So with AI use growing in the workplace, what are some risks employees, and their employers, should keep in mind when it comes to these large language models, or LLMs?

Stephen Rosenbush is the Chief of the Enterprise Technology Bureau at WSJ Pro.

Companies are very familiar with a certain kind of LLM risk. They're familiar with this idea that the LLMs might make poor decisions in a very convincing way, that they might hallucinate, that they might be biased in some way. But they're not too focused on this idea that the LLM could present an actual cybersecurity threat. And security pros have two names for that threat.

Outbound, as in a data leak, and inbound, as in generating compromised code or recommending malicious software. Stephen explains. The outbound is somewhat more familiar. This is a cybersecurity threat in which there's a risk of data exposure, either intentionally or unintentionally.

In March 2023, a bug in ChatGPT allowed some users to see what other people initially typed in their chats. OpenAI also said the users' first and last names, email addresses, and payment information were exposed. OpenAI said it is committed to user privacy and keeping its data safe.

But there's also an inbound threat, in which companies could be at risk of importing not just compromised data but actual compromised software through an LLM. Stephen says such threats are bound to multiply, especially as more gen AI tools flood the market. Because the tech is still so new and technology advances at a much faster clip than the government's ability to enact policy, most companies are on their own, at least for now.

It reminds me of the early days of cloud computing, when many companies were moving to the cloud and didn't fully appreciate the risks hidden in the system. There was so much technical work to be done. They didn't have real visibility, and they didn't really understand what the cloud providers were responsible for and what they themselves were responsible for, and make sure that everyone was living up to that bargain. So I think that over time, we'll see a similar shared responsibility model take shape when it comes to LLMs. Right now, let's say that the dial, the share that falls on the company itself, is pretty close to 100%.

And amplifying the risk: companies are made up of hundreds, sometimes thousands of people. And with that, points of potential failure abound. So who's responsible for making sure a company isn't at risk when employees engage with new online tools? When we come back, we'll hear from a chief information officer on how she's handling the use of gen AI in her workplace. That's after the break.

With leading networking and connectivity, advanced cybersecurity and expert partnership, Comcast Business helps turn today's enterprises into engines of modern business. Powering the engine of modern business. Powering possibilities. Restrictions apply. Since the advent of ChatGPT and other Gen AI tools, security chiefs at companies have had to figure out how to mitigate risks. And it's not just cyber breaches they have to worry about.

Generative AI brings with it a unique challenge: it's easy for employees to inadvertently spill company secrets, like confidential or proprietary information. And this has happened already. According to Bloomberg, Samsung banned the use of ChatGPT and other AI-powered chatbots after sensitive internal source code was accidentally leaked to ChatGPT by an engineer.

And the Wall Street Journal reported that Apple has restricted external AI tools for some employees as it develops its own similar technology. Documents viewed by the journal showed that the iPhone maker is concerned workers could release confidential data. So how are company leaders addressing this?

Kathy Kay is the CIO of the global financial company Principal Financial Group. We actually have locked down any of the public chatbots. If somebody wants to use them, we have a whole workflow that will say, what's your business rationale? And then there's an approval. They have to take a quick training. Their leader has to take a training. Kay says they've also signed agreements for enterprise technology to use at the company.

like their own chatbot. We call it Page, that people can use, that provides a lot of protections around making sure that they're the only ones who are leveraging the data that they have access to, things like that. For those that do go outside, we do track the interactions they're having with the external bots. So some bosses can see everything you've typed in on a company device. But do they look? That's a discussion for another time.

Uploading a client contract, composing an internal email, generating a chart with undisclosed financial data: getting an unauthorized bot to do any of that could land you and your company in hot water if the data is leaked or absorbed as part of the model's training dataset.

Kay says if company secrets do get out, there's a system in place to deal with the fallout. We have a whole playbook of who do we immediately include? How do we assess the impact of that? Were customers impacted? But the best failsafe for companies, she says, is to work with the new tech, train up their staff, and trust employees.

With any new technology, you have to find ways for safely allowing employees to try these things, right? Because if not, if you make it so hard for them to try these things, they're going to make mistakes going around all the blockage, right? And so my philosophy is, how do I make a safe environment for employees to try these things such that they're learning, and we're coming up with new ways of using it.

As Kay suggests, people will keep coming up with new ways to use these tools, like getting medical advice. Next week, we'll tell you about using chatbots in your personal life, specifically health, and how to do it without compromising your privacy. And that's it for this episode of Tech News Briefing's special series, Chatbot Confidential.

Today's show was produced by Julie Chang. I'm your host, Nicole Nguyen. We had additional support from Wilson Rothman and Catherine Millsop. Shannon Mahoney mixed this episode. Our development producer is Aisha Al-Muslim. Scott Salloway and Chris Zinsley are the deputy editors. And Falana Patterson is the Wall Street Journal's head of news audio. Thanks for listening.
