Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts.
Hi, everyone. Allison here. As our show remains on winter break, we're dropping bonus episodes for you in our feed. Today, Sam and Shervin speak with Tom Davenport, a professor at Babson College who weighs in on what he sees as the most interesting and compelling AI trends for organizations in the coming year. We hope you enjoy this episode. And as always, really appreciate your ratings and reviews, as well as comments on how we can continue to improve. I'm Tom Davenport from Babson College, and you're listening to Me, Myself, and AI.
Hi, everyone. We've got a bit of a different episode for you today that we're excited about. Today, Shervin and I are talking with Tom Davenport, Distinguished Professor of Information Technology Management at Babson College. The fun thing is that through his work, Tom has vast insights into what's happening with artificial intelligence. And Shervin also spends a lot of time talking with companies working on real projects. So we're kind of planning to have a more general episode about the state of AI and the directions we see. Tom, thanks for joining us.
My pleasure. Thanks for having me. Hi, Tom. All right, let's start easy. What are we all seeing that companies are excited about right now? Tom, what are people excited about? I don't know if you've heard of this thing called generative AI, but people seem to be quite interested in it.
In a way, I think they're too interested in it, because there are obviously more traditional forms of AI, what some people call legacy AI, that are still quite valuable. I know you talked about it for years on this podcast before generative AI came along. And for many companies, I think it's still as relevant or maybe even more so than generative AI. So oftentimes you have to try to persuade them that this is not the only form of AI that's out there.
It's funny, Shervin and I were just talking about the phrase traditional, or, as you now said, legacy AI. How have we gotten to the point where we can talk about this gee-whiz, newfangled thing as traditional or legacy? Well, you remember, Sam, at the World Bank event a couple of days ago, somebody reminded us that AI has been around since the '50s. So in some ways, it's not that wrong to refer to it as legacy. But Tom, that totally resonates with what you're saying. That is, generative AI has made such a big splash that some folks have forgotten about the bigger splash that was already there, which maybe they'd ignored, which is AI. In our work at BCG, I see three paradigms. I see those companies who have been
building and investing in AI for some time. And Sam and I have done a ton of research, as I know you have too, Tom. For these companies, this is a continuation of their investments, though not necessarily a linear continuation. They do have a fair amount of capability in terms of their data ecosystem, their technology ecosystem, and their ways of working, and so they are more willing and able to bring in generative AI and make it work alongside their existing AI.
Those are, I would say, the winners today. Then there's a group that has been taking small steps around AI. Now they're seeing an acceleration with generative AI, and they're thinking, oh, we need to really think hard about our AI strategy and how it all fits together. So this is a wake-up call.
It's no longer a theoretical thing. And then there is a group that's unfortunately thinking, generative AI is here, so we don't have to worry about AI anymore. Everything is now generative AI, and we don't need data scientists or data engineers; you just ask it and it will do it, which is unfortunate. I think that group is slowly learning. So that's what I'm seeing. I don't know, Tom, what are you seeing?
Well, it's interesting. I hadn't quite heard of that last category. I agree with you that the early adopters have an advantage in making generative AI real in their organizations. I just did a survey with MIT CDOIQ, the chief data officer group, and AWS that concluded a month or so ago. Everybody was very excited about generative AI, which was the focus of the survey: 80% said they thought it was going to transform their organizations, but only 6% had a production application that they had deployed. So I think we're really in the early experimental days for the vast majority of companies. And even when you talk about the ones that are really quite advanced,
I was talking to a bank in Asia that I'd written about. It's DBS, the biggest bank in Southeast Asia, based in Singapore. They were quite early adopters of AI, and I've written in Sloan Management Review about the CEO and what a great example he is of a leader in AI. I was talking to him the other day about generative AI, and he said, yeah, we've got like 35 use cases, but none of them are in production yet, largely for regulatory reasons. So some organizations have understandable constraints about putting things into production. Is that the danger that you're worried about? I mean, I think when you led this off, you kind of said, hey, have you heard about this thing called generative AI?
I mean, so one argument is, oh, that's great, it's getting people's attention toward artificial intelligence, and perhaps we can channel that. But I think you had a bit more of a tone of, hey, this is distracting from the legacy or important stuff. Well, companies will ask me to come in and talk about generative AI, and initially I would do that as they requested. But I started feeling guilty about it and saying, you know, by the way, in your business, you'd be better off exploring some of the more traditional stuff. I don't want to be one of these old guys going around saying, remember what it was like? But some caution, or mind expansion, is necessary. By the way, that's the third group in my view, right? I mean, there is a real danger that you sort of think, I'm going to sidestep this or forget about AI and all that because, yes, I'm behind.
We never invested when we should have because we thought it wasn't for us or it wasn't a priority. We maybe did some pilots but never got to production. But now, with this new thing, we don't need the old thing anymore. Yeah. And, you know, I think it's quite conceivable that a lot of things we did previously will eventually be replaced, or we'll have generative AI as the interface, right?
Like Sam, I spent a lot of time with analytics before AI came around, and I think these systems can do amazing work in analytics and machine learning. Just a two-line prompt will get you three pages of machine learning model creation and analysis and feature engineering and all that stuff. It's just mind-boggling. Okay, but don't tell people, because I'm giving an exam on that right now. But this is an important point: many people think of generative AI as something that's going to write poems or text or summarize things or make movies, which it does, but it can also help you sequence tasks and write code and debug itself. I mean, that's really, really powerful, as you were saying.
Yeah, and that all works. The advanced data analysis part of ChatGPT works by writing Python code to do all of that stuff, so it's really quite astounding. Although even in the text space... I was just talking with an old friend this morning. He has a company that does mostly analytics work, but now they're doing AI, and we were talking about the opportunities in the text space.
Whether you're talking about customer conversations or employee comments or legal documents for all the lawsuits that have been brought against you or sentiment analysis online, there's just a mind-boggling amount of stuff that generative AI can do to make sense of all that in a much better way than the previous approaches. One of my favorite examples is sarcasm. I used to love the fact that traditional sentiment analysis could not deal with sarcasm at all. And my favorite example was one that some people at Marriott told me. They said somebody wrote on TripAdvisor, "The pool was too cool." And, you know, how do you interpret that? AI was not capable of it at the time. But generative AI can say they probably think it's a pretty great pool; it's much more accurate. And it can figure out that that particular comment should go to the local hotel manager and not to, you know, corporate customer relations or whatever. So it's just quite astounding, all the things it can do. Well, I think you're hitting the nail on the head on a very important aspect that has been a hindrance in the adoption of
legacy AI, as you called it, right? And Sam, we've talked about this, and the research all sort of culminates in what I would call a 10-20-70 rule of thumb that at least I and many of my colleagues use: to get value at scale, 10% is about the model and the algorithm, 20% is about the technology backbone and all the connectivity and everything that's related, but 70% is about embedding it inside the fabric of the organization, the adoption, and the process change. Sam, you and I with our colleagues framed it as human-AI interaction and different modes of interaction, and all of that, right? That 70% has traditionally been really hard. It's why, in those examples you mentioned, Tom, companies haven't been able to get things to production.
What generative AI could do is make that 70% a lot easier. It can make AI more explainable, or give users the ability to interrogate it or override it or work with it in a much less clunky way than before. And I think that's a real advantage of generative AI. As you mentioned, it could be a very nice interface that's much more intuitive
and hence allow a lot more adoption and usability of traditional optimization and prediction tools. Yeah, my next book is about the citizen movement: non-professionals in system development and automation and analytics and AI. And it's pretty clear that generative AI is going to introduce a whole new realm of people to those capabilities, people who didn't want to bother to learn what data was available or how to use the visual display tools or Excel or Power BI or what have you, but who can now say a simple English sentence about what they want. And I think it's really going to open things up.
And by the way, maybe both of you know who Randy Bean is. He has a consulting firm that does an annual survey of data leaders, and he has a new survey out. One really depressing thing has been that the questions about being a data-driven organization and having a data-driven culture and so on have kind of bounced around in the low 20% area for years, and it's even gotten worse in some cases; it was in the 30s a few years ago. He's been doing this for 12 years now. And in the latest survey, which I think will be published by the time this podcast is out, those figures more than doubled, with data leaders saying, now we have a data-oriented culture; we have a data-driven organization. It had to be generative AI; nothing else changed that much in the past year. It's amazing how it's opening up not just the possibilities for people participating, but already the interest in these issues at every level of a company.
That seems like it's going to have a great trickle-down effect. But let me take the counter on that, just to be argumentative. You know, I think planes could fly a lot faster if we took away those pesky safety protocols they put in place. All those redundant pilots just slow things down; we could go a lot faster and be a lot more efficient without them. And let me extend that to software development: all that testing and quality assurance just takes way too much time and resources. We could go a lot faster without that.
How are we not setting ourselves up for that with the citizen developer world you're describing? People care about features first, security later. Well, it's interesting. I'm writing this book with Ian Barkin, who comes more out of an automation background, and people don't worry as much about the little workflows and so on that are created by citizens. But on both the application development side and the data science side, people are more worried about it. A lot of IT organizations were still thinking, oh, this creates shadow IT or rogue IT, and my people aren't going to want to look at it and see if it actually works. Some of that is still out there, but it's amazing how many chief information officers we found who say, "We're really encouraging this. We can't do all this digital transformation on our own." Yeah, they're overwhelmed. And the lines at IT for developing applications are getting longer, not shorter. So I think it's really going to have a big impact. I mean, BMW is training 80,000 white-collar workers in how to do citizen development. It's just mind-boggling.
But there's a real danger of paralysis and inaction here, because AI was already complicated and complex, and gen AI is making the implementation and the adoption even more complicated, because you have all these other things you have to think about, as you're describing. And so it sort of reminds me of
the very early 2000s, with the internet phenomenon, when so many established and famous organizations, public and private, basically said, why do we need a website? Why do we need e-commerce? This is a complex technology. Nobody can build a website. We don't even know what it is. Why do we need it? And those organizations paid heftily for being behind. I still remember the incredibly useless and clunky websites of very established companies trying to sell things, where you had to wait 30 seconds for a page to load and then you'd go somewhere else that was immediate. So I do also worry that
there is a real danger here for, again, my three groups. The leaders are going to be fine; they're going to continue to innovate. But then you have the middle group and the bottom group, and there is a real danger of those companies really, really falling behind by just waiting for things to settle down. What do you think about that? Well, I agree. I co-authored a piece with Vikram Mahidhar in SMR a number of years ago arguing that AI is not a good area in which to be a fast follower, because it takes a long time to accumulate the data that you need, and it takes a long time to hire the people that you really want. And the longer you wait, the fewer of those people there are going to be. I think it is going to set a number of companies back semi-permanently if they don't get moving fairly quickly in this space.
That bothers me, though, because I think what you're arguing is that this hegemony of technical giants is only going to get bigger, stronger, and more powerful. And, you know, if following is so difficult, how do we get ourselves out of the middle ages, where we have these feudal lords we've got to pledge allegiance to in order to get the models we want, if it's all concentrated? Well, the question is, what makes following so difficult? I think a lack of understanding of what the opportunities and the technologies are; a belief that, well, this is not for us and we can never figure it out; or a belief that things are going to get much simpler, so that I can go to one vendor or one solution that will do it for me. So I'm arguing that some of that difficulty is a mindset and, for lack of a better word, a reflection of either fear or ignorance versus priority. Part of that difficulty, in my view, is just a lack of understanding among the management and senior management of some companies of what is really required, and, to Tom's point, why it doesn't make sense to be a
late adopter or a second follower. You could have all the technology you wanted, but if you didn't have data... Tom was saying data is a big part of it. Everybody has data. I think for startups, it can be quite challenging to get the data they need to create a minimum viable AI product or whatever, but big companies obviously have lots of data. But Sam, I think we can distinguish between the vendors in this space, which are mostly big giants, and the users. OpenAI was sort of an interesting example, because they had 300-something people employed there, compared to, what, 80,000 at Google, or maybe it's even bigger, but they still beat them to market. Of course, now they're in bed with Microsoft in a big way. I was going to follow up with that exact point. You do have to have a lot of processing power, and that generally requires a big company. But among the users of this technology,
I think that's where I was really arguing you don't want to be a fast follower, because it just takes too much time to catch up. Let me switch back to you. You mentioned the 80,000 people at BMW learning to be citizen developers. I mean, why stop there? Do we need much more public awareness about AI and machine learning? I've got a 13- and a 15-year-old. Should we be having fireside chats at night, over the dinner table, about artificial intelligence? How much... I don't think it's that group. I think it's the generation that's making the decisions now.
I don't know, Tom, what you think. Well, it's interesting. I might have agreed with you until yesterday, when I saw some analysis that was, in one way, comforting. It was a study out of Stanford saying that there's not much cheating happening because of generative AI in schools, but the percentage of people who seem to be even aware of it is much lower than I thought. I forget the exact numbers. And sadly, minority kids seem to be substantially less aware than white kids. I do think it's going to be incumbent upon schools at every level to teach people how to be productive and effective with these technologies and not to ban them, as some did early on. I think that's receding, fortunately. But this is the most powerful technology we've seen in generations; most people seem to agree with that, even grizzled veterans like ourselves. And so schools are going to have a heavy load to bear in letting people know how this stuff works, and parents too, for that matter, Sam. So I guess, yes, you should be talking about AI with your kids. Actually, Pew Research had a study out recently about who had heard of ChatGPT and who was using it, and it was perfectly inversely correlated with age. Their survey cut off at 18, but it's pretty clear what the trajectory was. And
based on the sample of kids hanging around our house, they are all over this technology. I'm sure you've seen this too, Sam. It's going to be really hard to engage faculty in this. I mean, some of the faculty at Babson are very gung ho, but some actually went to the academic technology office and said, can we shut off AI on our campus? And it's a big organizational change to get everybody bought into the fact that this can really make for better learning, and it's not easy even to figure that out. I've done it in some of my classes on AI.
In some ways, you know, I make them show their work: they show their prompts, they show the edits they've made, and so on. And there are all sorts of problems. One, when they edit, they introduce grammatical and spelling mistakes into what the LLM has done. They forget they're supposed to show me their prompts. And then I tell them they need to look it up in Google to make sure it's actually true, and they forget to do that.
And several of them said, you know, it was easier when we just went to Wikipedia and copied some stuff down... Back in the good old days of copying Wikipedia. Yeah, exactly. I think there are interesting differences in takes on this, and part of it comes down to what you're teaching. I mean, Tom, you're teaching a class in AI; the idea of banning it clearly doesn't make any sense there. But if I were teaching people math, I think I would ban the calculator, at least until people knew adding and subtracting. And I would argue, instead of doing that, make the problem a tougher problem and allow the kids to use more imagination, more creativity, with the tool. I mean, the analogy would be, sure, you want kids to be able to do times tables and all that, right? But do you want folks to be able to multiply three-digit numbers by hand? Maybe. Some schools would glorify that and say it's great that you're doing that. I also feel like banning AI is like saying, let's just cut the communication lines of the internet because we don't want any improper content to show up on our TV, right? Or, we don't want our kids to be on YouTube, so let's just make sure we have no internet. Whereas
you know, it's the reality of our lives, so let's do good with it. That's why it's hard, right? So my caution and my fear is this, which was my earlier point: that this inaction, this fear, this paralysis would lead people and societies and companies to just treat this as a black box, probably an evil black box, and have nothing to do with it. Whereas I think you need to just go into the lion's den or the circle of fire and open it up. It's not going anywhere; it's part of our future. Let's figure out how to do good with it, and let's figure out how not to do bad with it. And vis-a-vis what we do in schools or in companies or in societies, let's have human plus AI doing better together than either could on its own. So that, in your example, you make the problems harder or make the assignments more creative and allow kids to use whatever resources they want. Because aren't those the right skills anyway, the ones we're going to need 10 and 20 years from now? Not the addition skills anymore, but the ability to integrate all these technologies into doing something that the technologies on their own cannot do.
Well, I think we have to figure out what is the best type of technology for working with humans. There was a presentation last week at the MIT Initiative on the Digital Economy that I attended, where a doctoral student was studying copywriting. There was an experiment with three different conditions. One was just a word processor, no generative AI. The second was a sort of advanced type-ahead system where you could accept or reject suggestions, so it was maybe 50-50 human and machine. And the third was generative AI creating the full output. In general, the copywriters liked the intermediate condition more, and it produced higher-quality work that got more click-throughs online. So I think we need to figure out what's the best way to use these tools, and it's not just "generate my essay for me based on one prompt." Yeah, I think we're probably pretty early in figuring that out. We've only had a year now; I think we can give ourselves a little bit of slack to try to figure out how to work with the tool. If the tool would just quit changing on us, we could figure it out, right? Well, that's the thing. It's a full-time job just keeping up with how the tools work and what new things have been developed and so on.
Tom, it's really been quite fascinating talking with you. We've covered a lot of different topics here, and I think maybe we're ending up with: learn more, do something. That's kind of a running theme. This is going to affect everybody, and we need to be doing something versus passively waiting and seeing, because, as you say, fast following might not be the right approach here. Thanks for taking the time to talk with us. Thank you, Tom. This was really wonderful. My pleasure. Really enjoyed it.
Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders. And if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and learn more about AI.
and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com forward slash AI for Leaders. We'll put that link in the show notes, and we hope to see you there.