
Bonus Episode: How Can Organizations Better Measure and Manage Artificial Intelligence?

2024/3/12

Me, Myself, and AI

People

Carol Corrado
Nicholas Zolis
Olivia Igbokwe-Curry
Scott Wallsten
Topics
Olivia Igbokwe-Curry: As a cloud service provider, we can easily measure customers' use of artificial intelligence because we can monitor the utilization of compute, data storage, and services. The scalability of the cloud makes this technology available to everyone.

Nicholas Zolis: Many businesses use AI inadvertently because they outsource non-core business processes to firms that use AI, which complicates measuring AI use. Before AI is regulated, we need a better understanding of its impact on businesses, especially on employment. Our surveys show that for most businesses that adopt AI, workforce composition does not change, and adoption may even lead to more hiring.

Carol Corrado: Many employees use AI tools on their own initiative, which differs from the traditional pattern of IT-system adoption. AI should not be regulated any differently than the same behavior by humans. The rise of explainable AI (XAI) emerged organically from businesses' need to understand how AI makes decisions. Model explainability matters both to executives and to everyday consumers.

Scott Wallsten: The Technology Policy Institute (TPI) uses large language models (LLMs) to improve efficiency, which suggests a new way to measure AI use. Transparency means different things depending on the application. Existing regulations focus mainly on worst-case outcomes and lack nuance. AI regulation should weigh benefits and costs more carefully; for example, governments could use AI to process public comments. News coverage of AI tends to exaggerate its harms and overlook its benefits.


Chapters
The panel discusses the barriers to AI adoption, highlighting the surprising organic use of AI by employees and the need for better understanding of what constitutes AI use.

Transcript


Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts.

How do we know what kind of impact AI is really having? And how do we put the right guardrails around the artificial intelligence tools we're using? That's the focus of the talk we used to create today's bonus episode. As you might remember, Sam and Shervin joined the World Bank and Georgetown University for a forum on how artificial intelligence is shaping organizations back in December of 2023.

In this episode, Sam moderates a panel of four experts to talk about just these issues. We hope you enjoy it, and we also hope you're very excited for the premiere of Season 9 of Me, Myself, and AI, which returns on March 19th. Until then, please enjoy this episode. Olivia, start off. Tell us a bit about who you are and your background. I'm Olivia Igbokwe-Curry. I lead Political and Congressional Affairs for Amazon Web Services, and I also lead our AI federal policy work.

Nicholas Zolis: Hi, my name is Nicholas Zolis. I'm a senior economist with the Center for Economic Studies in the business research area at the U.S. Census Bureau, and I've worked on some of Census' recent efforts to generate some national statistics on technology adoption and AI adoption in the U.S.

I'm Carol Corrado. I'm a senior policy scholar at the Center for Business and Public Policy at Georgetown. I previously worked at the Conference Board and also for the Federal Reserve.

And my work involves studying intangible capital, digital innovation, and the role of technology in driving productivity. I'm Scott Wallsten. I'm the president of the Technology Policy Institute and also a senior fellow at the Georgetown Center for Business and Public Policy. I study antitrust, regulation, broadband, AI, and the economics of all kinds of things. And also, apparently, I'm not trusted with my own microphone. Exactly.

We'll have to be open about sharing that. Rather than go down the panel one by one, I'll keep people on their toes and start off with Carol, who's just put her microphone down, and ask you: we've talked about all this awesome stuff that people can use artificial intelligence for, but a lot of people are not using it. What are the barriers? What can we do to encourage adoption? What's keeping them from it? Do you have opinions?

A few. First of all, I think there's been a groundswell of interest in using AI since the release of ChatGPT in November of 2022, a little over a year ago. And we found, and when I say we, I mean the Conference Board found in some surveys it did in July and August. They have a regular survey of workers and their attitudes toward work, and they did a special one that asked whether workers used AI. What they found was that among office workers, 56% of the people responded that they have used, primarily, generative AI in their work. Among marketing professionals, the response was even higher, close to 90%. And they use a mix of techniques. This is organic, actually, because some follow-on questions revealed that most people just did this on their own.

I mean, on work time, but they educated themselves and applied these open source products that they found available to them. So I found that absolutely stunning. Again, I'm not saying 90% of workers; I'm saying 90% of people who work in marketing departments. I guess that shouldn't surprise us. I'm surprised at just how much workers are embracing the technology. This is not the policy of the firms I'm talking about, and that's one thing that hasn't come up in our discussions today yet. It's very unusual: typically we think of IT systems as something people must be forced to use, something we have to incentivize, and this is a lot of organic use.

We did a similar survey before ChatGPT, and 66% of the people said they had not used, or had minimally used, AI. But then when we asked them, well, what about this tool, this tool, this tool? 43% of those people said, oh yeah, I use that; I didn't realize that counted. So I think we have a bit of a question about use and what it means to use. It's not a binary yes or no.

How do we measure use? Scott, do you have a thought about how we measure use? What does use mean? There are differences between how companies use it and how it changes their outputs, and each of those will require different things. I'm going to take the question and turn it a little bit.

I'll put myself in a little bit of a different position than I usually am. I usually want to talk about data and the research we're doing, and I'm also usually the annoying person who says, "Oh, anecdotes, not data." Instead, I'm going to talk about what we're doing at TPI with AI and LLMs. We built our own LLM for use in our organization. It's public. You can go to it right now, chatTPI.org.

But the idea for it was lots of the issues that we deal with in policy are kind of the same over and over again, even if there are sort of new incarnations of them. One that we're dealing with right now is net neutrality. I wrote my first thing on it in 2006. It's just, it's horrible.

And I thought, wouldn't it be great if I didn't have to ever write this stuff again? I could just ask something to write something new based on everything I've ever written. And we started there. And now we have an LLM that answers questions based only on documents from TPI.

And so that, I think, shows a couple of things. First, it's a new way AI is changing how you interact with any organization's website, and we're now doing this for some other organizations too. Second, you don't really need a lot of resources to do this. We did it with our senior programmer.
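As an illustration of the setup Scott describes, here is a minimal sketch of a retrieval-grounded assistant: it answers only from an organization's own documents and cites the source files (Scott returns to citations later in the panel). The document set, the TF-IDF retrieval, and the `call_llm` placeholder are illustrative assumptions, not TPI's actual implementation.

```python
# Sketch: answer questions using ONLY an organization's own documents.
# Retrieval here is plain TF-IDF for simplicity; a production system would
# likely use embeddings. call_llm is a placeholder, not a real API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "net-neutrality-2006.txt": "Full text of a 2006 net neutrality paper ...",
    "broadband-report.txt": "Full text of a broadband regulation report ...",
}

doc_ids = list(documents)
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents.values())

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the ids of the k documents most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = sorted(zip(doc_ids, scores), key=lambda p: p[1], reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

def call_llm(prompt: str) -> str:
    # Placeholder: swap in whatever chat-completion API your stack uses.
    raise NotImplementedError("wire up an LLM provider here")

def answer(question: str) -> str:
    sources = retrieve(question)
    context = "\n\n".join(f"[{s}]\n{documents[s]}" for s in sources)
    prompt = (
        "Answer using ONLY the context below, and cite the bracketed "
        f"source file for each claim.\n\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```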

And it's freed up time to do other things. And so we can measure how we use it. We also can measure how many people outside use it. And I haven't tracked how much time I've saved, but I do keep a list of all the things I do with AI, and it's a long list, which might say more about how poorly I used to manage my time, or maybe still do. But it's exciting.

I'm fantasizing about loading my syllabus into your tool and having it answer questions there. Sorry, Nicholas, tell us more about measurement.

So the first question is how we define use. We can think about use as being a core process of the business, or a method, or something that's really important to the business. But from what we've seen, and from talking about this with the businesses themselves, a lot of these businesses seem to use AI incidentally, where they're not necessarily aware that they're using AI. And that's because a lot of times businesses will outsource functions that are not part of the core process to other firms. So when they're looking to hire somebody, for instance, they might not go through an internal process; they might use a job site like Indeed that then uses AI to sift through resumes, using natural language processing to identify the perfect candidates, and then bring that person in.

So the business itself has no idea that AI is being used on this particular task. But how do we measure that? When we ask, "Do you use AI in your business?" as a yes/no question, the answer is always no. And so even within existing surveys at the Census, and we've run them for multiple years and asked many different types of questions over those years, we get a lot of variance in responses and adoption rates. So just to put a benchmark in people's minds: when we look at national statistics on the number of firms in the U.S. that use AI, we get something between 3% and 6%. And that percentage is going to vary considerably depending on what our definition of AI is, what specific applications we're talking about, and whether or not we're discussing specific processes or tasks that may or may not use AI.
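To make that definitional sensitivity concrete, here is a small sketch with made-up toy records (not Census data) showing how the measured adoption rate moves as the definition of "use" widens from a direct yes/no answer to any AI-touched task:

```python
# Illustrative only: toy firm records, not Census microdata.
firms = [
    # says_uses_ai: answer to a direct yes/no survey question
    # ai_tasks: tasks where AI is actually involved, possibly via vendors
    {"says_uses_ai": True,  "ai_tasks": ["demand forecasting"]},
    {"says_uses_ai": False, "ai_tasks": ["resume screening"]},  # incidental
    {"says_uses_ai": False, "ai_tasks": []},
    {"says_uses_ai": False, "ai_tasks": ["chat support", "ad targeting"]},
]

def rate(uses_ai) -> float:
    """Share of firms counted as AI adopters under a given definition."""
    return sum(1 for f in firms if uses_ai(f)) / len(firms)

narrow = rate(lambda f: f["says_uses_ai"])                   # direct question
broad = rate(lambda f: f["says_uses_ai"] or f["ai_tasks"])   # any AI task

print(f"narrow: {narrow:.0%}, broad: {broad:.0%}")  # narrow: 25%, broad: 75%
```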

So, a good "it depends" answer. I sure wish we had some infrastructure in place that could help us use these tools. Tell us about the role of organizations in helping people use these tools.

Yeah, so I think if you look at cloud service providers, CSPs, it's easier for us to measure what our customers use. You can always see the amount of compute. You don't want to monitor your customers, but in terms of pricing or invoicing, you can see the amount of compute or storage a customer is using, or how often they use a certain offering, whether it's AI or not, or legacy AI versus generative AI. So as a service provider, it's easy to have that measurement. I think most generative AI, if not all, is going to be in the cloud. And so, if you're a hyperscaler, or just a CSP, a cloud service provider, that's one way of offering these tools to run efficiently and of scaling them internationally, domestically, across large and small businesses, individuals, et cetera. Having the cloud and the scalability of the cloud is a really easy way to make sure that this technology is put into the hands of everyone.
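A small sketch of the usage-based measurement Olivia describes: rolling up metered billing records by customer and service to see how much of each customer's consumption is AI offerings. The record layout and service names are hypothetical, not any provider's actual billing schema.

```python
# Illustrative usage metering; record layout and service names are made up.
from collections import defaultdict

AI_SERVICES = {"genai-inference", "ml-training"}  # hypothetical labels

usage_records = [  # (customer, service, metered units, e.g. compute-hours)
    ("acme-corp", "genai-inference", 120.0),
    ("acme-corp", "object-storage", 900.0),
    ("globex", "ml-training", 45.5),
]

totals = defaultdict(lambda: defaultdict(float))
for customer, service, units in usage_records:
    totals[customer][service] += units

for customer, services in totals.items():
    ai = sum(u for s, u in services.items() if s in AI_SERVICES)
    print(f"{customer}: {ai} AI units out of {sum(services.values())} total")
```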

So we're going to put some technology into the hands of everyone. You know, nothing can go wrong there, right? That's never gone wrong. Do we need to be regulating this in any way? Carol, I know you have some thoughts there.

Yeah, I have some thoughts, but I'm coming from a certain perspective. Let me just state to begin with: I don't think we should regulate this AI technology any differently than we would regulate humans doing the same thing. That's nice. Number two, just as I commented on the organic adoption of some AI tools by employees, one of the things happening at a higher level in corporations, more at the C-suite level, is the exploitation of what is really a subfield of AI called explainable AI. And why did this come about? Well, it came about organically, not because of some regulation, but because you have people in C-suites, or managers, who want to harness this technology but are used to being able to ask questions of the analysts who bring solutions to them. Like, how did you come about choosing option A versus option B, let's just say.

So there are now tools that can be embedded in traditional AI, may I use that word, that are not very surprising. They can spit out what features of the model generated option A versus option B. They can do partial dependence plots. They can do counterfactuals. And this helps in the storytelling of understanding just why a given prediction is made. This shouldn't be surprising. I was a macro forecaster at the Fed for many years, and you never just said, oh, the CPI is going to be 3% next year, full stop, end of sentence. You always said why. And that's what business decision makers want to know. And lo and behold, this kind of accountability is being delivered through, to tell you the truth, just variants of classic statistical tools.
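For a concrete picture of the tooling Carol mentions, here is a minimal sketch of a partial dependence plot using scikit-learn; the synthetic dataset and gradient-boosted model are illustrative assumptions, not anything discussed on the panel.

```python
# Partial dependence: trace how the model's prediction moves as one feature
# varies, averaging over the observed values of the others. This is one of
# the classic statistical tools for explaining "option A versus option B".
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Synthetic tabular data standing in for a firm's own features.
X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence of the prediction on features 0 and 2.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 2])
plt.show()
```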

I was going to say that transparency about how AI produces its outputs is something you will see at senior levels, but we've seen it at all levels of our customers. And so you can have something called model cards, or service cards, which give you a very lay, non-technical explanation of how the AI output was measured or delivered. And I think that's something that more lay people, not necessarily business leaders, just everyday consumers, want to know. Because you can get high-level, behind-the-scenes explanations in the C-suite, but for the everyday consumer who wants an explanation, that's when you really have to work to make sure it's available, and something that's, I won't say categorized, but required for all AI models.

While Scott's getting ready, I think that's interesting, because you used the words partial dependence plots, which didn't strike me as a layperson explanation. Yeah. So we have to see. I mean, there's clearly some question of who it's for.
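As an illustration of the model and service cards Olivia describes, a card is essentially structured, plain-language metadata published alongside a model. Here is a minimal sketch of what one might contain; the fields follow common model-card practice, not AWS's actual service card format, and the example values are hypothetical.

```python
# Minimal, illustrative model card as structured data.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str           # what the model is for, in plain language
    out_of_scope: str           # uses the model should not be applied to
    training_data_summary: str  # where the data came from, at a high level
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    name="resume-screening-assistant",  # hypothetical example
    intended_use="Rank applications for recruiter review; a human decides.",
    out_of_scope="Fully automated hiring or rejection decisions.",
    training_data_summary="Historical postings and anonymized applications.",
    known_limitations=["May underperform on non-English resumes."],
)
print(f"{card.name}: {card.intended_use}")
```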

So go ahead, sorry. Oh, yeah. I mean, there's also the question of what transparency means and what it is people really want to know when you get the answer. Sometimes I don't care how it got to the answer, but I want to see its references. And that's one of the things that we put a lot of effort into in ours, that it has citations. I don't know what it did in its black box, but all that matters is that I can go and check what it did. And so I think there's some difference and it depends on the application.

And when you ask about regulation, the problem with the regulations we've seen so far, I think, is that they focus mostly on the worst possible outcomes, because we've been trained for decades to catastrophize this. I mean, it's the plot of so many awesome movies. Yeah. So it's hard for us to think rationally about it. And that's, you know, the executive order here, the AI Act in Europe; they're all focused on avoiding this worst possible thing. I'm exaggerating, but it needs to be more nuanced than that, including thinking about ways that AI can be useful, even within the government.

I wrote a while ago about the possibility of generative AI LLMs creating responses to governments' requests for comments. Right now they already get floods of automatically created responses, where somebody can go to a website and click a button and it sends it in. Eventually they're going to have thousands of 20-page LLM-created responses, and then the government's going to need its own LLM to try to go through them, right? And you don't necessarily want to ban it. I also actually did one of those, using an LLM to create a response. We didn't submit it, because it's not good enough yet. For the record, you did not submit it. Right. But I think there are lots of... I mean, these are two separate things. There are different kinds of transparency, and you want to think about what the important thing is that the person, the user, needs to know to trust the output. And the other aspect of it is regulation, and I don't think we're thinking about the costs and benefits in a rational way yet.

I think the cost side of it is interesting, because, as you note, news cycles run off of extremes. I hate to always pick on cars, but a lot of people are going to die in car wrecks caused by human drivers today, and it's not going to show up on page one. But I almost guarantee that if there's AI involved in any way, that's going to be top of the fold on whatever disaster there is. And so it feeds that cycle of either AI is going to solve all of our problems or AI is the worst thing in the world, without any middle ground.

Yeah, I mean, I agree. And part of the issue, I think, before any regulation could take place, is that there's still a very limited understanding of what AI does to the firm. One of the immediate concerns, obviously, that's been getting a lot of press attention is that AI is going to replace jobs and lead to vast amounts of automation, and so we need to think about how workers in the future are going to earn money, what they're going to do, and substitution. And so in one of our surveys, we asked firms that adopted AI: how did this impact the workers? Did it lead to hiring more workers or letting go of workers? What did it do to the composition of workers in terms of production versus non-production workers? And how did it impact the average skill level of the workers? Now, granted, all of these responses are self-reported, so you can take them with a grain of salt. But for the vast majority of firms, 70% responded that the adoption of AI led to no change in the composition of workers: no change at all in the number of workers or in the type of workers. If there was a change, it was much more likely to be positive, that it led to more hiring. And so that goes in line with some of the productivity enhancements that AI has led to. But the biggest change we saw was in the skill composition of those workers: firms that did adopt AI trained their workers, and so they required a slightly higher skill level than before they adopted AI. And those are the sorts of questions I think we need to better understand before any formal regulation can take place. Thank you, panel, for spending the time talking with us. And thanks to everyone. Thank you.

Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We'll put that link in the show notes, and we hope to see you there.