Welcome to the HBR IdeaCast from Harvard Business Review. I'm Curt Nickisch.
Artificial intelligence is changing business as we know it, but the extent of those changes depends on two things. First, how good the technology gets. And second, how much companies adopt it and consumers actually buy it.
And there's a gap there. According to a Gartner survey, for instance, four out of five corporate strategists say AI will be critical to their success in the near future. But only one out of five said they actually use AI in their day-to-day work.
That was a 2023 survey. It's probably different today. But the point remains, adoption is lagging. And a key reason for that is perception. Many people view AI and automation negatively and resist using them.
Today's guest has studied the psychological barriers to adoption and explains what managers can do to overcome them. Julian De Freitas is an assistant professor at Harvard Business School, and he wrote the HBR article "Why People Resist Embracing AI." Julian, hi. Hi, Curt. Thanks for having me on the show.
Julian, the adoption of technology is an age-old experience for people. We've resisted technology many times in the past and have adopted it. Is AI any different from other technologies when it comes to resistance to adoption? I think the answer is yes, and we're finding in many cases AI is different from a consumer perception standpoint.
What we're seeing is that in many use cases, people perceive AI as though it is more human-like as opposed to being this sort of non-living technology. And this has profound implications for a number of marketing problems, such as overcoming barriers to adoption, but also new ways of unlocking value from the technology that aren't possible with previous technologies.
And then, of course, there are also interesting challenges around the risks, because, you know, it's not actually the case that this is another human. It does fall short of humans in various ways. And so if we treat it as a full-fledged human being, that could also create challenges and risks. What are the main ways that people see AI as something to drag their feet on, something they want to resist?
We tried to narrow it down to five main barriers. At a high level, I think you can summarize them as AI is often seen as human-like but not human enough, or conversely, it is seen as a little bit too human, you know, too capable.
And then there's one last barrier that's just about how it's really difficult to understand. So the five barriers that my colleagues and I have identified through our research are that AI is opaque, emotionless, rigid, autonomous, and not human. So let's talk about these roadblocks one by one, starting with AI being too opaque. What does that mean?
So this is the idea that AI is a black box. You know, there are inputs that come in, let's say an email, and then outputs that come out. You know, it tells you if the email is spam or not, but you don't really understand how it got from the input to the output. Or there are these really sophisticated chatbots, and you just can't really predict what they're going to do in any new situation.
And admittedly, there are many products that we don't understand, but this is particularly acute for AI given the complexity of the models.
Some of the latest models are operating using billions or even trillions of interacting parameters, making it impossible even for the makers of the technology to understand exactly how it works. I remember seeing a video about autopilot on a plane, where the pilots said to each other, you know, what is it doing now?
Just that sense that, you know, it's doing something for a reason, but you can't quite figure out why it's doing what it's doing. So what do you suggest companies or product designers do in this situation? One obvious intervention is to try to explain how their systems work, especially answering this question of why the system is doing what it's doing. So, for example, an automated vehicle might say it is stopping because there is an obstacle ahead, as opposed to just saying that the vehicle is stopping now. Another solution: sometimes companies will ease stakeholders into the more difficult-to-explain forms of AI. One example, which a colleague, Sunil Gupta, wrote a case about, is Miroglio Fashion, an Italian women's apparel company.
And they were dealing with this challenge of forecasting the inventory that they would need to have on hand in their stores. Previously, this was something that the local store manager was responsible for. But they realized that they could get more accurate at doing this, and this would translate into higher revenues if they could use some kind of AI model.
They had two options. One was to use the sort of latest off-the-shelf model that really operated in a way that was hard to understand. So it could extract all sorts of features about the clothing that you and I can't even perfectly verbalize and use that to forecast what the store should order for the next week.
But there was also a simpler model, which would use easy-to-verbalize features, such as the colors or the shapes of the clothing, and then use that to predict what to order for next week. And even though the first type of model, the more sophisticated one, performed much better, they realized that if this was going to be implemented, they needed buy-in from the store managers. The store managers needed to actually use the predictions from the model. So, for that reason, they initially rolled out the simpler model to a subset of their stores. And the store managers did use these predictions. The stores that had this model performed better than the ones that didn't.
And after doing this for some time, they eventually felt ready to upgrade to the more capable model, which they did. In some ways, they ended up with a model that is still not very easy for you and me, or the store managers, to understand. But what they did is they trained their employees to get used to this idea of working alongside this kind of technology to make these kinds of predictions. So they kept the human factor in mind. That's really interesting.
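To make the tradeoff concrete, here is a minimal sketch of the kind of simpler, easy-to-verbalize forecasting model described above, one that can pair every prediction with a stated reason. The feature names, weights, and numbers are hypothetical illustrations, not Miroglio's actual system.

```python
# A minimal sketch of the "explain why" intervention: pair every
# prediction with a reason stated in easy-to-verbalize features.
# Feature names and weights are hypothetical, not Miroglio's model.

FEATURE_WEIGHTS = {          # learned offline by a simple, interpretable model
    "color_red": 12.0,       # extra units to stock per unit of the feature
    "color_black": 8.5,
    "shape_aline": 5.0,
    "last_week_sales": 0.9,
}
BASELINE_UNITS = 20.0

def forecast_with_reason(item_features: dict) -> tuple[float, str]:
    """Return (units to order, human-readable explanation)."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * value
        for name, value in item_features.items()
        if name in FEATURE_WEIGHTS
    }
    units = BASELINE_UNITS + sum(contributions.values())
    # Explain the forecast in terms of its largest drivers, the way
    # "stopping because there is an obstacle ahead" beats "stopping now."
    top = sorted(contributions, key=contributions.get, reverse=True)[:2]
    reason = f"ordering {units:.0f} units, mainly because of {', '.join(top)}"
    return units, reason

units, reason = forecast_with_reason(
    {"color_red": 1.0, "shape_aline": 1.0, "last_week_sales": 14}
)
print(reason)
```

The more sophisticated model in the case performed better but could not verbalize its reasons this way, which is why the interpretable version came first.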
So what about this critique of AI that it's emotionless? At the heart of this barrier is the belief that AI is incapable of feeling emotions. There are many domains that are seen as depending on this ability, domains where some sort of subjective opinion is very important. If you are selling some sort of product offering and introducing AI into it, and it's a domain that is seen as relying on emotions, you're going to have a hard time getting people comfortable using AI in that domain. This also makes me think of automated voices, right? On your smartphones or smart speakers, where, you know, a lot of companies use a woman's voice. It doesn't make it right, but they use a woman's voice because it's perceived as more trustworthy, more engaging. Is that what you're talking about here? Yeah, you're absolutely right that imbuing the technology with
gender, a voice, even other cues that we typically associate with having a body and a mind, like when Amazon's Alexa goes, hmm, as if it's really pausing and thinking, or if you imagine introducing breathing cues and all these sorts of things. What these cues do is subconsciously tell us that we're interacting with an entity that is like a human. And these kinds of anthropomorphizing interventions do indeed increase how much people feel the technology is capable of experiencing emotions.
Another strategy that I've seen is, instead of trying to convince people that this AI system can indeed experience feelings, to play to what are already seen as AI strengths. Take dating advice. Many experiments show that people prefer receiving dating advice from a human rather than from some kind of AI system. And that gets flipped when you think about something like financial advice.
But if you tell people that getting the best dating advice, or the best match in the domain of dating, really does depend on having this machinery beneath the hood that can take in as inputs, you know, your demographics and any information you might have provided the company, and then sort and rank and filter various possible matches to find the perfect match for you, now people can see how something that they would typically view as being highly subjective and dependent on emotions actually benefits from an ability that they already think AI is good at. So a company like OkCupid, for instance, often talks about how its AI algorithms are doing this to find the perfect match for you. That kind of intervention also helps get around this emotionlessness barrier. Do you have to know, as a product designer or company, whether your product is maybe better left emotionless, whether it might be a mistake to introduce emotion? Are there certain products where you really want it and certain products where you really don't? For sure, yeah. I think there are domains where what you're talking about is very sensitive in nature or emotionally embarrassing.
It's tempting to make your chatbot as lively and human-like as possible by default, but it might not make sense for your particular use case. There are examples where people are actually happy that what they're talking to is an AI system, as opposed to a full-fledged human being who is judging and analyzing them.
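As an illustration of the "sort, rank, and filter" machinery described a moment ago, here is a minimal sketch. The compatibility score and profile fields are hypothetical; OkCupid's actual matching algorithm is far more sophisticated and not public.

```python
# A minimal sketch of "sort, rank, and filter" matchmaking: the kind
# of large-scale screening people already credit AI with doing well.
# The scoring function and profile fields are hypothetical.

def compatibility(user: dict, candidate: dict) -> float:
    """Toy compatibility score: shared interests minus an age-gap penalty."""
    shared = len(set(user["interests"]) & set(candidate["interests"]))
    return shared - 0.1 * abs(user["age"] - candidate["age"])

def top_matches(user: dict, pool: list, k: int = 3) -> list:
    # Filter out anyone outside the user's stated preferences...
    eligible = [c for c in pool if c["age"] >= user["min_age"]]
    # ...then rank the remaining candidates by compatibility score.
    return sorted(eligible, key=lambda c: compatibility(user, c), reverse=True)[:k]

user = {"age": 30, "min_age": 25, "interests": ["hiking", "jazz", "cooking"]}
pool = [
    {"name": "A", "age": 29, "interests": ["hiking", "jazz"]},
    {"name": "B", "age": 24, "interests": ["jazz", "cooking", "hiking"]},
    {"name": "C", "age": 35, "interests": ["cooking"]},
]
print([c["name"] for c in top_matches(user, pool)])  # ['A', 'C']
```

Framing the product this way, as large-scale screening rather than emotional judgment, plays to what people already believe AI does well.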
Well, related to this is the idea that people are worried AI is too autonomous, you know, that it just has a mind of its own and is going to do what it does without taking me into account.
That's right, yeah. So there are some cases where AI systems seem to have too much control. You can think about a robot vacuum that can vacuum and mop and do all these things that you would normally do yourself.
Or you can think about some sort of home automation system for regulating temperature that's running these algorithms to change the temperature throughout the day without you needing to do anything.
These kinds of systems can begin to feel as though they're taking control away from you. Autonomous vehicles are another example: you're getting into the car, and now it's making all of these complicated decisions and adapting to various settings, and you worry that you're not going to be able to take control at the moment that you need to. That's an example of where, in some ways, it's the
reverse of what we were talking about earlier where AI systems can at least in some cases seem too capable for our own taste.
And just to underline what you said with those two examples: with automated thermostats, Nest, the thermostat company, lets you use a learning algorithm or just switch to manual mode. You give people the option of choosing which one they want to go with, and you give them that sense of control. And then for the Roomba vacuum, iRobot actually programmed it to move in predictable paths, rather than unpredictable ones that might have been better, just to give more of a sense that it was under control and not, you know, acting with a mind of its own. I think the broader idea with this second type of intervention is to
put humans in the loop. So even if the system is doing most of the work, giving people the sense that they are still in control makes a world of difference. One thing we know from the research is that you don't need to give people that much control in order for them to feel like they're in control.
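As a minimal sketch of that human-in-the-loop pattern, in the spirit of the Nest example above: the algorithm does most of the work, but a manual override always wins. The API and the learned schedule here are hypothetical.

```python
# A minimal sketch of human-in-the-loop control: the learning
# algorithm picks the target temperature, but the user's manual
# override always takes precedence. Hypothetical API and schedule.
from typing import Optional

class Thermostat:
    def __init__(self, learned_schedule: dict):
        self.learned_schedule = learned_schedule   # hour -> target temp (C)
        self.manual_setpoint: Optional[float] = None

    def set_manual(self, temp: float) -> None:
        """User takes over; the learning algorithm steps aside."""
        self.manual_setpoint = temp

    def resume_auto(self) -> None:
        """User hands control back to the learned schedule."""
        self.manual_setpoint = None

    def target(self, hour: int) -> float:
        # The override check is the whole point: the human's choice
        # always beats the algorithm's prediction.
        if self.manual_setpoint is not None:
            return self.manual_setpoint
        return self.learned_schedule.get(hour, 20.0)

t = Thermostat(learned_schedule={7: 21.5, 22: 18.0})
print(t.target(7))   # 21.5 -- the algorithm's learned choice
t.set_manual(23.0)
print(t.target(7))   # 23.0 -- the user's override wins
```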
And in some cases, this is a good thing, because overall the system might be more accurate if the AI system is doing more of the work. Is it the same with the perception that AI is too inflexible, even though it's theoretically built around your needs and prompts?
Yeah. So inflexibility is in some ways the very opposite of autonomy. So while it's true that there are cases like automated vehicles where the system behaves in a very sort of autonomous way that seems to take control away from you, there are other domains in which we worry that
this AI system is not going to be flexible enough, that it's not going to adapt to the particular, unique problem I'm trying to solve, because we believe that it can't learn from mistakes in the way that we see other human beings learning from mistakes. What's going to be helpful is including cues that suggest the system is in fact learning. A lot of the lab experiments show that simply labeling the system differently, calling it, for example, machine learning as opposed to an algorithm, changes people's belief that AI is inflexible. Some companies, like Netflix, address this by including little cues such as "For you" or "Recommended because you saw X," which show that it is continuously learning beneath the hood. There's also another strategy, which is, if you can, to not talk about the fact that you're using AI at all. One very useful example that I saw was from Borusan Cat, which is a subsidiary of Caterpillar, the large vehicle manufacturer.
Borusan Cat is in Turkey, and they were dealing with this issue that many of their B2B customers had, where the equipment would eventually break down, and then Borusan Cat would have to repair the big machinery. Often the parts of the machine had deteriorated to the point that they weren't salvageable, so it would take quite a while to get the machine back up and running again. And in the meantime, the customer would be left without the machine. So this was a really bad situation for all parties.
And so they realized that if only they could predict when the machine was going to malfunction ahead of time, they could avoid all of this altogether. So this is a perfect job for AI. They very smartly embedded sensors in all of the machines so that they could collect data on various features of the parts as they were used over time.
Now, the service was really high-performing; I think they were able to predict with something like 97% accuracy whether the machine was about to have a failure. But when they tried to sell the service as a standalone, what often happened is the customer said,
Are you telling me that from your office there at Borusan Cat, you can tell better than I can, when I'm the one who uses this machine every day and knows all of its quirks, that this machine is going to fail? I don't buy it. This is a sales gimmick. I'm skeptical that you're really providing a solution that's personalized. What the company eventually did was it just folded this ability into the maintenance contract. So it told the customer, look, we promise you that if you go with this particular maintenance contract, you will never have machine downtime. And instead of selling it as a standalone service, they just gave the customer this promise. And they found that that worked much better. Not only that, but because it was part of this bigger-ticket maintenance contract,
the salespeople were also more motivated to sell this full bundle. Also, because they were able to predict when machines would break down before they actually broke down, they were able to salvage many of the parts, refurbish them, and sell them, creating additional revenue streams.
In this particular case, they didn't need to talk about the fact that AI was involved. And it allowed them to completely circumvent this concern, the skepticism that the customer would have that, you know, your AI system is not going to be able to solve my particular needs.
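To give a concrete sense of the underlying prediction task, here is a minimal sketch of failure prediction from sensor readings. The features, data, and model are synthetic illustrations; Borusan Cat's actual system is far more sophisticated and not public.

```python
# A minimal sketch of predictive maintenance: train a classifier on
# sensor readings to flag machines likely to fail, so service can be
# scheduled and parts salvaged before they deteriorate. Synthetic
# data and hypothetical features; not Borusan Cat's actual model.
from sklearn.linear_model import LogisticRegression

# Each row: [vibration_rms, oil_temp_c, hours_since_service]
readings = [
    [0.20, 70, 100], [0.30, 75, 300], [0.90, 95, 900],
    [0.80, 92, 800], [0.25, 72, 200], [1.10, 98, 1000],
]
failed_within_30_days = [0, 0, 1, 1, 0, 1]  # labels from service history

model = LogisticRegression(max_iter=1000).fit(readings, failed_within_30_days)

# Score a machine currently in the field and alert the service team
# early, instead of waiting for the breakdown.
risk = model.predict_proba([[0.85, 93, 850]])[0][1]
if risk > 0.5:
    print(f"schedule preventive service (failure risk {risk:.0%})")
```

In the actual deployment, as described above, this capability was folded into the maintenance contract rather than pitched to the customer as AI at all.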
So maybe the biggest issue, the fifth roadblock, is that people prefer dealing with people. There are obviously important parts of our jobs that aren't just about increasing productivity. Work is a human experience and a collaborative experience. How do you tackle this concern that, you know, I'd just rather talk to a person?
What we do know is that people will use AI systems if they believe that they truly outperform humans trying to do the same job. But when the performance is equated between the AI system and the human, people continue to prefer to interact with the human.
Of course, we're not yet at the point where there are, you know, humanoid systems walking around that both physically and mentally resemble us perfectly. An interesting question, maybe a bit more of a science fiction-like one, is, you know, in a near future where these types of systems are available...
Will we continue to interact with humans instead? I'm not sure exactly what the intervention for this will look like, just because it is in the future. But one interesting idea is that perhaps the kinds of interventions that will get people to interact with these types of robot service providers would be the same types of interventions that social scientists have historically deployed in order to soothe intergroup relations of other kinds. So, for instance, when people don't want to interact with those who are not part of their ethnicity or whatever other group marker, one classic remedy is simply to increase contact between the groups. And the reason for this is that when you have these interactions with those that you view as other, you slowly change how you psychologically represent them, from something that's much more categorical and stereotypical to something that's much more nuanced and sensitive to their unique traits. And that can eventually soothe anxiety or discomfort around interacting with them. So it could be similar in the future: the more that we use them, the more we'll eventually view them in a different way. By the same token, you might imagine that if these kinds of systems are framed as helping you achieve your goals, complementing the goals that you are already striving to achieve,
then they'll be viewed as being on your side, and that will also increase people's willingness to use them. If people are listening to this and want to work on AI adoption for their company's products, what would you recommend they build in their careers?
A lot of this can be done, I believe, by managers of all kinds, as long as they're sensitive to the human factor. Anyone with marketing training, for instance, learns this bitter lesson: good products don't sell themselves. And I think in a similar way here, if one is aware of these types of barriers, then you can get good at identifying, for any use case, which particular barriers are at play, and then what you can do to address them, so that people view this technology in a way that doesn't conflict with their existing way of viewing the world, and that will ease their concerns and lead to adoption. And I'm also wondering about ethics here. It just reminds me a lot of the early web, right? Where a lot of the marketing training was about how to create habits and how to get people to spend more time on your site and click more things, right? And there's just a lot of psychology work that went into that.
And now there's a lot of pushback and criticism that some of these online products have just become addictive and not productive. What would you recommend to managers about the ethics of the psychology work that they're doing as they try to increase adoption of their products?
I think the same interventions that could increase adoption of AI in the short term can create risks for consumers, firms, and society in the long term. So I think managers need to have a very long-term view of not just, you know, is this going to increase customer acquisition, but also, okay, once the customer starts to use this product, what are the downstream concerns that I should be thinking about? And I think adopting that long-term view will allow them to intervene in a way that's more balanced, where they're thinking about the full lifetime of the customer rather than just that initial acquisition phase.
So, for example, take this barrier of viewing AI as very rigid. One solution is just to give people the most capable, flexible systems, but that also increases the chance that they will use the system in ways that you didn't even intend, creating potential risks. So one study we did, for instance, was looking into these so-called AI companion applications, which are
applications specialized for developing social relationships. So if you've seen the movie Her, it's pretty much the same idea where this is an AI friend or romantic partner in your pocket.
Now, the intended use of these apps is exactly for that. But what we found was that about 5% of users were also using the system to express pretty serious mental health problems that they had, including, in a subset of these messages, crisis messages such as self-harm ideation.
And we found, actually, when we audited the performance of these apps by sending such messages to them and classifying how they responded, that about 25% of the responses weren't just unhelpful; they were also deemed risky by a clinician. And so that's an example where giving people that flexibility on its own is not necessarily the best approach. You want to also think about: does the system truly need to be that flexible for me to give the customer the benefits of the technology here? And
If so, what are the additional guardrails I need to put into place to protect against these downstream risks that are going to harm not just
the consumer, but also, you know, me as the firm and my reputation for being able to provide this kind of offering safely. Well, the human mind is complex and these business problems are complex too. So it's been really helpful to talk through some of these challenges and avenues for solutions with you. Julian, thanks so much for coming on the show to share your research.
Thanks again so much for having me, Curt. It's been a real pleasure to share some of these ideas and think through some of the nuances with you.
That's Julian De Freitas, assistant professor at Harvard Business School and author of the HBR article "Why People Resist Embracing AI." And if you want more, we have over 1,000 episodes and more podcasts to help you manage your team, your organization, and your career. Find them at hbr.org/podcasts or search HBR in Apple Podcasts, Spotify, or wherever you listen.
Thanks to our team: senior producer Mary Dooe, associate producer Hannah Bates, audio product manager Ian Fox, and senior production specialist Rob Eckhardt. Thank you for listening to the HBR IdeaCast. We'll be back on Tuesday with our next episode. I'm Curt Nickisch.