In a rapidly changing market, speed to insight is everything. AI Search is the newest feature on eMarketer Pro Plus. It helps you streamline research, delivering context-driven answers in seconds. No more endless searching, just relevant insights to power your strategy. Stay ahead with AI Search. Why wouldn't you? Exclusively on Pro Plus. Learn more on our website, emarketer.com.
Hey, happy to be back.
Hey, fella. We also have our SVP of Media, Content, and Strategy, who lives up in Maine: it's Henry Powderly. Hey, Marcus. Hello, sir. Today's fact, we start there: how long would it take you to run around the world? Any guesses? Sounds like a trick question. I know, it shouldn't be, but someone actually did this. Englishman Kevin Carr did it. Just amazing. It took him 20 months. That's a long time, Kevin.
And that's at a decent pace as well; it would take most humans most of their life, basically, to run around the world. What a trip, though. The greatest distance run in 24 hours: close to 200 miles, which is, what, seven and a half marathons? That was by Lithuanian Aleksandr Sorokin in August of 2021. He did the run in Poland.
Dean Karnazes holds the unofficial record for the longest run without sleep: 350 miles, which he ran over three and a half days back in 2005. Without sleep? Can't be good for you. No, no, no. If someone close to me said, Marcus, I'm thinking about running for three days straight, I'd be like, don't.
Okay. I don't care about the records. Stop it. Oh my goodness. Remarkable. Yep. I'm exhausted already, and it's barely midday. Three days. Trying to stay awake is tough, and he kept running. Well played, Dean. Anyway, today's real topic: using AI at work, part two. How companies are using it, and some tips on how best to use it in the workplace.
All right, gents, for this one, let's start here. So Stuart, who runs the team, sent me this article from Grace Harmon,
who's part of our tech team; she's an analyst for the AI Tech Briefing. And that was a great piece that she put together. Fantastic piece, right? 41% of C-suite executives said adopting Gen AI is tearing their company apart and creating power struggles. This is according to what she wrote, citing Writer's 2025 survey. It found that
31% of employees admit to sabotaging their company's Gen AI strategy, with even higher shares of younger Gen Z and millennial workers doing so. One in 10 workers are tampering with performance metrics to make it seem like AI is underperforming. Why, you may be asking? Grace explains that the main reasons for obstructing Gen AI strategies included AI's risk of diminishing their visibility, value, and creativity (33%), fears about AI taking over their job (28%), and a bigger workload (24%). Gadjo, I'll start with you. Are we heading towards an employee AI backlash? I mean, I wouldn't say it's wide-scale, but possibly in certain situations, certain industries, I could see that happening, especially if...
Like we said in the previous episode, if AI implementation is done a bit carelessly, saying, oh, we need to adopt, we need to innovate, but at the same time employees are left to their own devices, pretty much. Like this story that Grace put up says, 49% of executives said employees are left on their own to figure out Gen AI. Yeah. I can see that being a point of friction, definitely.
Because all of a sudden, over and above what they're doing, now figuring out AI and how it makes the company better becomes something that they need to be taking care of as well, right? Yeah. Yeah. I mean, Henry, we talked about this a bit in the last episode about how...
it seems like a lot of tech executives, the people building the AI, aren't trying to sugarcoat this. Matteo Wong of The Atlantic noted that as early as 2016, OpenAI co-founder and CEO Sam Altman said that as technology continues to eliminate traditional jobs, new economic models might be necessary, such as universal basic income (UBI). Altman has warned repeatedly since 2016 that AI will disrupt the labor market, telling Mr. Wong's colleague Ross Andersen of The Atlantic in 2023, jobs are definitely going to go away, full stop. AI employee backlash: how likely? Well, I mean, I think it's likely, and clearly happening, based on this survey and research.
I mean, I think it's the responsibility of leadership in this example to change the perception. Because if the C-suite executives in this survey feel like it's tearing the company apart, they're obviously making it a priority while at the same time not giving any resources to their teams to figure this stuff out. So that's going to leave people feeling, A, that they're being forced out, or that eventually their role will be replaced by an AI.
That could be the case in some positions, but in many, that is not the case. I think a lot of this is driven by a need to get more efficient, more nimble, more creative. I think over the past few years, companies have been asked to do a lot more with less.
And one of the ways I look at Gen AI is that it perhaps gives companies a chance to reclaim some of that mental load for the thinking, for the strategizing, for the developing. And I think that if they could tell that story more clearly to their employees, there would be a lot less friction,
while at the same time empowering them with resources to be able to learn how to use all of these things. - Yeah, I agree with that. I mean, I think the narrative should be that it's a tool that can help augment but not replace your employees. And using AI for things like support or just doing the more menial time-sucking tasks,
that could make a big difference in an eight-hour workday, right? Yeah. If applied properly, again, right? Yeah. It's saying, we want to take this off your plate so we can have you do this other stuff, as opposed to, we want to take this off your plate, and people are looking around thinking, okay, well, then what am I supposed to do? There does seem to be a chasm between
how employees view AI adoption and how they think it's going at their company, versus how the C-suite executives see things. There was a Writer survey, which we were citing in the first episode on Monday. It found 70% of executives felt their company's approach to AI had been strategic and successful, and that the business was AI literate. That number falls from 70% to close to 40% for how employees view their business's AI adoption. So a 30 percentage point chasm between how the higher-level people think it's going and how the people on the ground see it. So Henry,
it does feel as though AI use at work could end up feeling a bit ad hoc. I was thinking this a few days before seeing these numbers, actually.
I was thinking, don't businesses need to outline a clear AI strategy? And then two days later, I found this Writer survey. In it, 90% of execs said their company has an AI strategy; that number falls to 57% when employees were asked. So a big part of this has to be just confronting this thing head-on. It's kind of like what we were talking about on another episode, about kids using AI for homework. You can't ignore it. You have to address it and say, okay, if you're going to use it, this is the right way to use it, and this is the wrong way to use it. Not just, let's hope they don't use it, let's ignore the elephant in the room. Yeah, I love that. And I also think that the other question is,
Are they communicating what they're going to do as a company when they realize what they're trying to gain by using AI? If it's time savings, what are we going to do with that 30%? I mean, I can think of three positions on my team I would love to hire for in order to grow and do different things. But, you know, we're all working under our own budgets and our own realities. And so I think that, you know, not to be idealistic because, of course, you know, one side of efficiency is unfortunately, you know,
right-sizing a business, but on the other hand, it can be, you know, investing in and building up a business. And so I think the communication has to be as clear as possible. Yeah. So let's talk about how businesses, and the employees of those businesses, can figure out where best to use AI. Erik Brynjolfsson, professor and senior fellow at the Stanford Institute for Human-Centered AI, was saying there's always this difficulty of translating even the most amazing, or maybe especially the most amazing, technologies into productivity and business value. He calls it the productivity J-curve, because it sometimes
gets even worse before it gets better. He was saying we saw it with electricity, the steam engine, and early computers, and we're seeing it now. The real challenge, the bottleneck, is figuring out how to identify business value. So Gadjo, I'll start with you. How do you identify business value when it comes to figuring out where to inject AI into the company? I would try to match AI solutions to certain outcomes. So you're trying to solve problems.
Whether it's cost reduction, which you could get through automation, maybe, cutting down on repetitive workflows, things like invoice processing and data encoding, or revenue growth through AI-driven personalization, dynamic pricing, or chatbots for your sales and marketing services.
Clearly, you're trying to solve for specific problems, right? And sometimes it's a situation where you could have any types of solutions to fit that. But then, you know, AI is just a convenient and measurable and available tool that can quickly show you that it's working, right? Yeah.
Henry, how about for you? Where do folks start when they're looking at this? Do they just write down on the whiteboard a list of all the problems that they're having and then figure out, okay, let's prioritize them, let's rank them, start with number one and then go from there? How do we find the tool to fix the problem? I mean, I think that's one way, but
When you're talking about business value, you're talking about money, right? And so I think that's how you need to look at it; you look at it from two sides. What can you do that's making you money that Gen AI is going to help you accelerate? And what is costing you money that Gen AI can help you reduce? With the product side, I think it's...
you know, what more can you make? By personalizing all of your messaging, do you get an X% lift in conversion rate? And what does that translate into in, you know, cost per acquisition? I think there's a lot of those equations that need to be worked out.
But I think that's where you start, because the bottom line is the bottom line: we're using these tools in order to run more efficient and more profitable businesses. One thing that's kind of a paradox, maybe giving people a bit of cognitive dissonance: on the one hand, people are being told, use AI, it's making things faster, more efficient, better, improving the quality, et cetera. But
then they're being told, slow down, these tools aren't perfectly accurate, and you can't trust everything you're getting back from them, because they're hallucinating, which is when they make up answers when they can't find the actual one. There was a new study from the Columbia Journalism Review's Tow Center for Digital Journalism, and it found serious accuracy issues with Gen AI models
used for news searches. This is from an article by Benj Edwards of Ars Technica. He was explaining that the researchers tested eight AI-driven search tools by providing direct excerpts from real news articles and asking the models to identify each article's original headline, publisher, publication date, and URL. They discovered that the AI models incorrectly cited sources in over 60% of these queries, raising significant concerns about their reliability in attributing news content. Henry, I mean, how...
What do you make of this new study about AI model inaccuracies, and how do people get around this industry-wide issue? I'm not surprised this is what they saw when looking at news specifically, because when you're talking about news online, it's a completely different ecosystem than informational queries, like the things that Wikipedia or content marketing sites are going to show. The news landscape is full of
small players and scrapers. I mean, I think the study even cited how often Yahoo News was the source, which was just aggregating the original source of the news. So I think the language models already have a challenge when it comes to surfacing the most authoritative news sources for these queries. And at the same time, a lot of the top publishers that perhaps have the most trust and authority are blocking these crawlers in their robots.txt protocol. Yeah.
even though the study did note that they found some instances where the crawlers were going around it. I just think it's a really complex environment, and I'm not surprised that the language models are struggling with it. And it's more problematic because the language model doesn't say, I don't know, when it's confused; it makes up an answer. And I think that was one of the problems they noted in the study as well. Why is that? Why can't these models just say, I don't know? I think they're just programmed to have answers and solutions, right? Yeah. And rather than saying, no, I don't know that, or I can't access that, they'll give you something that's less than useful. Yeah.
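The robots.txt blocking Henry describes can be sketched concretely. Below is a minimal example using Python's standard-library `urllib.robotparser`; GPTBot (OpenAI) and CCBot (Common Crawl) are real AI-crawler user agents, but the site, rules, and URL here are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block two well-known AI crawlers, allow everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks these rules before fetching a page.
print(parser.can_fetch("GPTBot", "https://example.com/news/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/news/article"))  # True
```

Note that these rules are advisory: a crawler that ignores robots.txt can simply fetch the page anyway, which is how some tools in the study were found "going around it."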
I mean, that's people, though, isn't it? I feel like in conversations with people, it's very rare that when you ask someone something, they say, I don't know. They'll confidently just kind of say an answer, and then you run with it, or they'll talk around an answer and try to figure it out in real time. And these things are designed by people, so maybe it's just a reflection of society, that people don't ever say to you, I don't know. Yeah. And you know what? Even voice assistants hit that wall. If you talk to, you know,
to Google and it doesn't know, it'll just say, I don't know, but I found this on the web. In other words, figure it out, because this is all I have, right? But I guess the language models don't have that built into them, so they need to reason out an answer. As with anything, data readiness is a huge factor there.
Are they using clean, structured data, or just basically rehashed, aggregated news, which in itself is problematic, right? Yeah. Let's end the conversation with some tips for using AI at work. I'll start with two from Alex Fitzpatrick of Axios, who recently outlined five in an article; I'll give you two of them. Number one: be specific.
The more precise you can be with your request, the better the outcome. And then number two: follow up. He says, if your AI's first output is off the mark, try a follow-up request with instructions for improvement. And again, be specific. So, two from him. Henry, I'll go to you first. What two tips would you offer on how best to use AI at work?
One of the things I've been really experimenting with, and it's been helping me a lot, is using audio as the interface. That means recording myself. So if somebody wants me to write them a proposal, rather than just opening a blank page and starting to kind of type away, what I'm starting to do now is just
record in, like, an Apple note, transcribe the whole thing, and just talk for 30 minutes, talk out all of my ideas, and then give that transcript to something like Claude, query it, and use that to come up with, you know, an ultimate proposal. I've been using that for longer-form things. I've used it for, you know, writing a newsletter. I find that it's a huge time saver, and it really just takes that blank page syndrome out of the equation and lets you kind of go.
So that's tip number one. I think that's a really interesting one. And that's something people say you should do when you're speaking to a person as well: get your ideas out there, talk them out, talk them through. And not everybody can do that by typing. I mean, it's a bit more imposing to be staring at that blank page. But I found that just recording yourself is great. You know, I work from home, so it makes it a little bit easier for me to do that than if I were sitting in an office surrounded by a lot of people. Gotcha. Don't even think about it.
Yeah. And then my second tip is, I've been using Claude styles a lot more. So if you use Claude, you can train it to write or respond in a particular way. You can give it past examples of your writing, or past examples of reports, or any kind of example that you want to emulate, and it does a really good job of helping you come up with a style standard. So once you train it, it's much easier to get work out of it that really feels like you. Like the example I mentioned with the audio recording: the first time I started feeding it into the AI and asked it, give me a memo based on my transcript, it sounded very robotic and very not me. But by training it on some of the pieces I'd written over time, and really getting it to hone in on my voice, it does a much better job now. So Claude styles has been a really helpful tip. Very good. Gadjo, what do you have for us?
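The styles feature Henry describes lives inside the Claude app, but the underlying idea, conditioning a model on past writing samples plus a raw transcript, can be approximated when calling any LLM API. This is a minimal sketch of assembling such a request; the helper name, prompt wording, and sample text are all illustrative, not Claude's actual styles mechanism.

```python
def build_style_request(style_samples: list[str], transcript: str) -> dict:
    """Assemble an LLM request that asks for a memo written from a spoken
    transcript, in the voice of the provided past-writing samples."""
    system = (
        "You are a writing assistant. Match the tone, rhythm, and word "
        "choice of these samples of the author's past writing:\n\n"
        + "\n\n---\n\n".join(style_samples)
    )
    user = (
        "Turn the following spoken transcript into a concise memo, "
        "keeping the author's voice:\n\n" + transcript
    )
    # The dict mirrors the common system-prompt + messages shape of chat APIs.
    return {"system": system, "messages": [{"role": "user", "content": user}]}

request = build_style_request(
    ["Last quarter we focused on retention...", "Our newsletter strategy rests on..."],
    "okay so the big idea for the proposal is three things...",
)
print(sorted(request))  # ['messages', 'system']
```

The point of the structure is the same as Henry's workflow: the style examples ride along with every request, so the output drifts toward the author's voice instead of the model's default register.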
Yeah, so mine are more general, but I think they can be applied to a lot of situations. So when piloting AI tools, you should start small: just use it on one team, one department. That way, it's easier to measure what works and what doesn't.
And then that can be replicated. And I know we've done this at eMarketer as well. We have pilot projects which, you know, give us good feedback, and most of the kinks are worked out before it's rolled out to larger groups. Also, find ways to measure success. I mean, AI can be so nebulous, right?
But you really want to know how it's helping. What are the benefits? So you could use time saved, perhaps, for certain tasks. A big one would be error reduction. If you can manage to tailor AI so that it helps in those areas, then that's something you can bring back to your manager or your board and say, look, this is working, let's do more of it, right? Yeah.
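The two measurements suggested here, time saved and error reduction, are simple to compute once a pilot produces before-and-after numbers. The figures below are made up purely for illustration:

```python
# Hypothetical pilot numbers: weekly hours on a task, and errors per 1,000 items.
hours_before, hours_after = 10.0, 6.5
errors_before, errors_after = 40, 28

# Percentage improvements to report back to a manager or board.
time_saved_pct = (hours_before - hours_after) / hours_before * 100
error_reduction_pct = (errors_before - errors_after) / errors_before * 100

print(f"time saved: {time_saved_pct:.0f}%")            # time saved: 35%
print(f"error reduction: {error_reduction_pct:.0f}%")  # error reduction: 30%
```

Even toy arithmetic like this gives a pilot a concrete success criterion, which is the point: decide up front what "working" means before rolling the tool out to larger groups.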
So those are mine. Very nice. The two I'll note quickly, again from Mr. Fitzpatrick's article, which I think are really, really good. One of them is: check its work.
You know, there is a disclaimer at the bottom of these models, especially the free ones, saying it's AI, it's a work in progress. But still, they make stuff up. So spot check, fact check, all that stuff. And then secondly, I thought this was interesting: he says, be polite. And he was like, no, really, researchers have found that using words like please and thank you improves AI chatbots' performance.
So yeah, another good tip there. That's all we have time for in today's episode. Thank you so, so much to my two guests for hanging out with me at the end of the week. Thank you to Gadjo. Thanks again. Yes, sir. Thank you to Henry. Thank you. Absolutely. Thanks to the whole editing crew: Victoria, John, and Danny. Not Lance, because I asked him to help me with my new camera and he ghosted me for a week.
Unbelievable. Thanks, though, to Stuart, who runs the team, and Sophie, who does our social media. Thanks to us. A true story. That's not true. Thanks to everyone for listening in to the Behind the Numbers show, an eMarketer video podcast. We'll see you again on Monday, hopefully. Happiest of weekends.