Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts.
Hi, everyone. Allison here. You might remember our first bonus episode back in January, which was an excerpt from an interview at the World Bank event at Georgetown University about how artificial intelligence is transforming organizations. This episode also features a talk from that event, this time focused on manufacturing.
Sam moderates this panel and puts Shervin in the hot seat as one of our panelists alongside Ness Shroff, director of the AI Edge Institute at Ohio State University, and Matthew Wilding, the co-lead of the Digital and Artificial Intelligence Program at U.S. Steel. To learn more about how artificial intelligence is shaping manufacturing, as well as a little bit about Shervin's background in chemical engineering, tune in to the rest of this episode. We hope you enjoy it.
A little bit about your background and what you're doing. I'm Ness Shroff, and I'm a professor of electrical and computer engineering as well as computer science and engineering at The Ohio State University. My research interests are in telecommunication networks and for the last decade or so in artificial intelligence. I lead one of these National Science Foundation AI institutes that's led by Ohio State.
And the goal of that institute is to develop AI technology for designing future generation wireless networks, as well as enabling distributed intelligence, so democratizing AI. That's one of our goals. Pass it over to you. Matt?
So my name is Matt Wilding. I work with United States Steel, so I'm the co-lead for our digital and artificial intelligence program. My background is actually in chemistry, so I was a classically trained chemist, worked at the Department of Energy for a while, and wanted to work for solutions that were a little bit closer to the end consumer. Did some time in management consulting and then landed at U.S. Steel, where I've led successive digital transformations kind of in different areas of our business.
starting in people analytics, then growing that into thinking more about talent acquisition, recruiting visa programs. How do you do those things in a manner that's digitally enabled and is actually making intelligent decisions? And then currently, again, leading the actual global initiative for United States Steel around digital and artificial intelligence. And we'll go to Shervin. Even though you just heard from Shervin, some people may have joined the live stream here. So Shervin.
Hi, Shervin Khodabandeh. I'm a senior partner with Boston Consulting Group and one of the leaders of our AI business. Many of you might have heard of BCG as a strategy company. You might not know that we do a fair amount of AI work, both in terms of strategy and design, but also in terms of building solutions and implementing them. All right. So manufacturing, nothing to do with artificial intelligence. It's just processes. It's machines running AI.
Prove me wrong. Matt.
Could not be farther from the truth. I'll start there. You told me to be adversarial, so I've prepared. One of the topics that we've talked about is introducing new processes with artificial intelligence versus reimagining processes that are already out there. And there aren't more process-heavy organizations than manufacturers. You know, everything we're doing is starting with the raw material. It's starting with energy. We're consuming that to convert it into some sort of work in progress that's then a finished good that goes to our end consumer.
And so along every one of those potential steps where we are either touching the product, touching the consumer, or touching our customers, you actually are making decisions that are very, very data-enabled, right? Because all of our equipment is recording information so that we can actually produce within good tolerances, with good quality, with good yield. And so we've got a plethora of data that actually gives us a really
great, rich platform to be one of the first movers in pulling some of this information forward. And the risk profile, we heard from folks who were in healthcare this morning, again, the risk profile is a little bit easier when we're making decisions on yield or production than if we are actually making decisions on diagnosis of a disease in a clinical setting.
But perhaps a little more pressure, and we can come back to that. But you mentioned data. Ness, I know that you've thought a lot about how you get good quality data from lots of different places at the same time, and that seems like the core of what Matt was just talking about.
Yeah, I mean, I think it's very, very important. So in the previous session, we talked about bias in data. There's also the notion of garbage in, garbage out. So you have to be very judicious in how you sample the data. Sometimes there is just an overwhelming amount of data. Sometimes there's not enough data.
And so especially if you are developing solutions that need to handle rare events, need to be able to sort of work in non-typical environments, then you need to ensure that you've sampled enough of that data and use it in your model versus just having sort of the regular data that happens.
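The rare-event sampling Ness describes can be sketched in a few lines. This is a toy illustration of the idea only (the data, function names, and numbers are all invented, not from the panelists): defective parts are rare, so we draw them more often during sampling and attach importance weights so the estimated defect rate stays unbiased.

```python
import random

def estimate_defect_rate(parts, boost=10.0, n_draws=10_000, seed=0):
    """Importance-sampling sketch: draw rare defect events more often,
    then reweight them so the defect-rate estimate stays unbiased."""
    rng = random.Random(seed)
    rare = [p for p in parts if p["defect"]]
    common = [p for p in parts if not p["defect"]]
    p_nat = len(rare) / len(parts)        # natural defect frequency
    p_samp = min(0.5, boost * p_nat)      # boosted sampling frequency
    total = 0.0
    for _ in range(n_draws):
        if rng.random() < p_samp:
            part, w = rng.choice(rare), p_nat / p_samp
        else:
            part, w = rng.choice(common), (1 - p_nat) / (1 - p_samp)
        total += w * part["defect"]       # weight corrects for the oversampling
    return total / n_draws

# Toy data: 2% of parts are defective.
parts = [{"defect": i % 50 == 0} for i in range(5000)]
print(round(estimate_defect_rate(parts), 3))  # close to the true rate of 0.02
```

The boosted draw sees far more defect examples than a naive sample would, which is exactly what a model trained on rare events needs, while the weights keep aggregate statistics honest.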
So there is a whole field of importance sampling, which deals specifically with how you sample the data so that you can get outcomes that are efficient, fair, equitable, etc. That seems like it works pretty well once you have all the data. I mean, all these things, the models we've talked about, require tons of data, but almost all these projects probably start as small pilots.
How do you go from small pilots which have small data to these big projects? And I'm going to look at you, Shervin, because I know that you have a ton of experience of pulling small projects to big projects. Well, I happen to also have been a chemical engineer.
We have three chemistry folks on the... You mentioned chemistry, chemical engineering. And so I go back to when I was in academia and we learned about engineering formulas and science and physics and heat exchange and fluid mechanics and all those things that work to basically model a plant or a production. And then when I worked for a while as an engineer, my job was to go and
you know, had a small sort of piece of that process that was mine around some chemical reaction that I had to optimize. So you have these set points that you set, the pressure or the temperature or the flow or the mix of reagents. And so I felt very confident doing that because I learned the formulas, I knew the engineering tables, I could go to the handbooks and do that. I think what
our colleagues here are talking about is that the game has changed. And now you have so much more data that those formulas often are an approximation of reality, what happens inside these reactors and reagents and all that, right? So that's where the AI has the power to really optimize set points or yield or, you know, the whole economics of something.
The challenge is that, like I was and like my boss was as an engineer, these operators have a tremendous amount of pride in what they do. They have a ton of experience because they've been doing it. And you can argue they're not wrong because they're not basing it on gut feel like, for example...
a, you know, a retailer might or a loan officer might. Not that they do, but I'm saying there was a time where you could say, okay, you're going by gut feel, you don't have data. These guys are scientists and engineers and they have years of training. So, you know, you asked pilot to production. I think it starts with realizing that a manufacturing or chemical process
is a big deal. People, the operators, have a tremendous amount of pride. They know what they're doing. You need to help them get better. They also have the incentive and the motivation to do better. And so I would say start with them in the loop: understand the process, understand what they're trying to optimize, understand what they're measuring, understand how they're making decisions, and show them
what AI could do to get more yield or less reagents or more throughput or whatever it is they're trying to optimize. And I think the benefit that a data scientist or an AI leader has in this space
is that you can actually build some causal models. I think in retail, it would be really hard to say, I did this because of this particular advertising. There are so many other halo effects, so many other variables out there. It...
In engineering, you're measuring. You could see the outcome. You change something, you could see the outcome. So the ability to build causal models so that there's no perception that these are just mere correlations is actually quite high. So short answer, I mean, it's a long answer already, but my point is understand what they're going through, start them in the process from day one,
You have the benefit of being able to show what AI could do. Let them be skeptical of it and push back, adapt, and then also give them the ability to override it so it's not a black box. I think when you do a pilot this way, slowly they'll see
before and after, and it will begin to change the hearts and minds of themselves and their colleagues. I like the phrase, let them be skeptics. Keep going. Yeah, no, no, I wanted to just add a couple of things. I completely agree that domain knowledge is integral to having very, very good solutions with AI. But I think oftentimes when we sort of, you know,
talk about AI, we talk about it like a monolith, like it's one thing. But in fact, it is not. So there are, for example, a lot of control systems in manufacturing processes or in communication networks, et cetera, where you are making decisions on the fly. So you're using whatever data that you have
and you're making decisions, that's called online learning. That's very, very different, for example, than using a huge neural network that you train for many, many days or months or whatever and come up with a solution. So I think what you have to do is you have to develop solutions that are appropriate
for the data that you have and appropriate for the dynamics that you want to study or analyze or control, right? So in the case of these online mechanisms, what you have is you have data that's coming in and you need to develop appropriate solutions that can, you know, in real time,
give you the right output, the right control mechanisms in order to sort of maximize whatever objective function you have in mind.
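As a rough illustration of the distinction Ness is drawing (mine, not his), online learning can be as simple as updating a model one observation at a time, so you can act on the current estimate as each reading arrives rather than waiting for a long offline training run. The sensor relationship below is invented for the sketch:

```python
def online_fit(stream, lr=0.1):
    """Online-learning sketch: refine a linear sensor model one
    reading at a time, so the current estimate is always usable."""
    w, b = 0.0, 0.0
    for x, y in stream:
        pred = w * x + b       # decide/predict from the current model
        err = pred - y         # feedback from the observed outcome
        w -= lr * err * x      # one small corrective step per sample
        b -= lr * err
    return w, b

# Toy stream of (input, reading) pairs drawn from y = 2x + 1.
stream = [((i % 100) / 100, 2 * ((i % 100) / 100) + 1) for i in range(5000)]
w, b = online_fit(stream)      # w approaches 2, b approaches 1
```

Contrast this with the batch setting: there is no stored training set and no separate training phase, just a model that is always a little better after each sample.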
To push back on that slightly, you used the word appropriate several times and the word right. I don't think anyone's going to argue for doing an inappropriate process or doing the wrong thing. How do you, and I'm going to push towards maybe Matt, because I know you've got, this is taking a twist here. We started off talking about manufacturing and processes, and suddenly we've slipped over into organizational learning somehow. Yeah.
And I think that's a beautiful transition that's happened here. Matt, I know you've got some recent work exactly about that at U.S. Steel, which isn't what we would naturally expect, I think. U.S. Steel is one of the longest standing companies in the United States. So we were the first publicly listed on the stock exchange. So you might not expect to hear us synonymous with artificial intelligence, with Google Cloud, with pushing this forward. But
You know, really the stance we've taken, and we alluded to it earlier, was AI has to be in service of something. So in an earlier panel, someone said, you know, what might you say to a minister of technology? And the answer is that AI is in service of something. It is a tool. It's not an object in and of itself. And so the reason I say that is...
You know, when you think about a business in transformation, manufacturing is a great place for that, right? Because we're always having to reinvent ourselves, come up with new products, new ways. We're depleting natural resources, finding ways around it, and new material science challenges.
And what is really critical in doing all of that, which kind of builds to U.S. Steel's intersection here, is finding ways to put innovative solutions in front of your customers using the same manufacturing base and tools that you've got at your disposal. So the average age of the U.S. manufacturing base, our assets, is over 10 years old. In primary metals and manufacturing, it's over 20 years old.
So we talk about trying to gather data, pull that information together. Even with the best laid plans to do that, you're controlling technology that's 20 years old before cell phones were really as ubiquitous as they are today. So what US Steel's been working to do is we found the right partner to work with. So we've been working with Google Cloud.
and piloting different AI solutions. So, you know, we've talked about different flavors of AI too: generative AI, custom AI, traditional AI. So we actually announced, in a press release a few months ago, our first generative AI application for the U.S. industrial sector, which we call MineMind.
And it essentially has digitized our production manuals, our repair manuals, the SOPs that we use for that. So now our technicians that previously would have been unfolding these multi-panel paper documents can ask in real time, "The blinker is out on my truck; how do I repair it?" And you really begin to see that instrument uptime, that equipment uptime, come up.
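Under the hood, tools like the one Matt describes typically pair a retriever over the digitized documents with a generative model that phrases the answer. Here is a deliberately tiny sketch of just the retrieval step, with manual snippets invented for illustration (nothing here is U.S. Steel's actual system or data):

```python
def find_procedure(question, manual):
    """Toy retrieval: score each manual section by word overlap with
    the technician's question and return the best match. Production
    systems use embeddings plus a generative model on top."""
    q_words = set(question.lower().split())
    return max(manual, key=lambda s: len(q_words & set(s["text"].lower().split())))

# Hypothetical manual snippets, invented for illustration.
manual = [
    {"title": "Hydraulic pump service", "text": "drain fluid and replace the pump seal"},
    {"title": "Turn signal repair", "text": "if the blinker is out check the relay and bulb on the truck"},
    {"title": "Brake inspection", "text": "inspect the pads and rotors for wear"},
]
best = find_procedure("The blinker is out on my truck, how do I repair it?", manual)
print(best["title"])  # → Turn signal repair
```

The point of the sketch is the workflow: the technician's free-text question is matched against the digitized manual, and the relevant procedure comes back in seconds instead of a hunt through folded paper.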
And then what I think folks may not appreciate from that, now I'll stop talking a second, but folks may not appreciate is when you can do more work as a manufacturer, when you can be more efficient, you consume less resources, less energy, and less labor intensity. So you're actually greener and more sustainable for being able to do that.
So in terms of how AI can serve your mission for manufacturers, it's really around being innovative, but also thinking about your sustainability initiatives and how you want to continue to operate the assets you have today to the best of your ability. But how does that work in terms of, I mean, for example, Ness, you mentioned learning and getting feedback, and you mentioned things that are 10 or 20 or 30 years old.
How do you build that learning that Shervin was talking about back into the process? I mean, it seems, I mean, how does that process look? How do you, I'm not sure who to ask that to, everybody. Like, how do you get that feedback? What point do you, Shervin? You seem like the expert.
I mean, you got to start small, right? You've got to pick a process. I think, you know, Ness, you made a good point. You also have to figure out what problem you're solving and what are the practicalities of this problem you're solving and how many other sub-processes and, you know, upstream and downstream things it affects so that you're not doing it in a silo. I mean, it all has to connect together. And I think compared to other industries...
that connectivity is super important, because you have a production line that's doing something to then be used somewhere else. And if this one is not working and you find out midway, then the other one has to stop, because it's using the improper material or the not-well-cooked item or whatever it is. And the cost of a production line going down is massive. So you've got to sort of,
I totally agree and resonate with that. This is not a problem that could be solved in a silo, but you also have to start small and have a path on how do you bring the operators together
along the way. So I alluded to that. They have the right incentive. AI has the right incentive if it's designed right, that human plus AI will make everybody happier there. So make sure they're in the loop. Do it for a small plant. If it's, let's say, mining or extraction or something, okay. I mean, most companies don't have just one. They have tens or hundreds of those. So do it for one.
show how it works, how it works upstream, downstream. Now you've got a playbook. Now you could scale it. Also, I would say,
Maybe those AI formulas don't exactly scale because maybe you have a five-stage process somewhere and a three-stage process somewhere else. So it's not necessarily you swap one for the other. But the playbook for that organizational learning, for the connectivity, for how fast you go from POC to pilot could then evolve. So that's, I guess, my brute force answer to that is you start small, you measure your success,
you move on. So I see a horizontal and a vertical dimension, because I'm a business school professor, so I have to have everything in a two-by-two matrix; I have to think in two-by-twos. What you just described, one approach, would be to go incredibly deep down one silo and get it all super AI'd up, and then pick a different process and go super AI there. Or a different approach would be to go a little bit on each process to improve them all,
then take another pass through them all, and keep them in sync. Which way works? So, I mean, I can speak for what we do. So in our institute, we actually follow your model somewhat, because the point is that, you know, we are building for the next 10 years, next 20 years, right? So we are developing the AI tools for wireless networks that are still going to be built. And so what happens is that we want to understand
how to develop new models that can improve efficiency in a significant way at a small level first. And so one of the things that we have found, and this relates back to your earlier question, is that simply applying off-the-shelf components to solve problems that we already have a pretty good idea of how to solve using traditional techniques often doesn't work.
So what you have to do is you actually have to build new foundational AI.
in order to solve these problems. So this is not just using AI that's, you know, that's available now, but actually building AI that can deal with these complex systems. Like, what's an example of that? I'm having trouble. Oh, yeah. So for example, you know, let's say that you want to figure out how to, say, do congestion control in a network. So if, you know, all of you use the internet, you know, we have
reasonably good tools of how to manage congestion control. If you just use a traditional AI algorithm and put it there, it works worse than what you would have using all the knowledge that we've accumulated over the last 30, 40 years in dealing with these systems. So what you have to do is you have to develop the AI that can appropriately model the dynamics of this complex system, appropriately understand how to characterize the constraints
that are part of the system. And then you can get the orders of magnitude improvement. So these improvements could be in terms of having bandwidths that are much higher than we have right now, or the delays are much, much lower, so that you can have very fast action happening, say, in an autonomous driving location. But in order to do that,
You really need a fundamental change in the technology. Just using the technology that we have right now won't work. And this sort of goes back to sort of, you know, the domain knowledge, the physics. You need to understand the physical system that you're trying to model and then be able to develop the AI in order to solve that problem.
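For a flavor of the congestion-control example, the accumulated domain knowledge Ness refers to includes rules like TCP's classic additive-increase/multiplicative-decrease (AIMD), sketched below with illustrative numbers; his point is that a learned policy has to at least match this kind of hard-won baseline before it adds value.

```python
def aimd(loss_events, cwnd=1.0, increase=1.0, decrease=0.5):
    """Classic AIMD congestion control: grow the sending window
    additively while packets get through, halve it on a loss."""
    trace = []
    for lost in loss_events:
        cwnd = max(1.0, cwnd * decrease) if lost else cwnd + increase
        trace.append(cwnd)
    return trace

# Ten loss-free rounds ramp the window up to 11, then one loss halves it.
trace = aimd([False] * 10 + [True])
print(trace[-2], trace[-1])  # → 11.0 5.5
```

Simple as it looks, this probe-and-back-off behavior encodes decades of experience about sharing a network fairly, which is exactly what an off-the-shelf model dropped into the system does not know.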
So you're describing that AI could be much worse at first. It could certainly be. Right, that you've got a history of a giant way of doing a process and you're going to introduce AI tools into it and it's not going to do as well. Okay, how do you overcome that? Feynman made the argument, I'm assuming all of you know Richard Feynman, a famous physicist, right?
He said, it doesn't matter how good your theory is. It doesn't matter how clever your idea is. In the end, if the damn thing doesn't work,
If the damn thing doesn't work in the short term versus the long term, I think there's the crux. But that's why you need to develop the appropriate technique that not only makes it work, but makes it work better than it does now. So you need to ensure that, at a baseline, the AI is doing better than what you would do without the AI.
So Matt, when you introduced this manual, this generative process, did it perform better than people were doing? You'd have to answer with it depends, which is the answer everybody's famous for giving. So if you've got an individual who's been working with us for 15 years, 20 years, they know that manual like the back of their hand. They may not even reference the manual when they're going out to the field to make a repair.
So the way we think about that, kind of tying into that, it's been shared in our experience as well, is that when we have these types of tools where we've captured the domain knowledge, whether we're thinking about the actual deterministic models that you can put together or we're capturing the domain knowledge in a generative AI system, what it helps to do is even out performance and bring people up to speed more quickly.
So when you've got a person who's just transferring onto your maintenance team for the first time, they've never cracked open that manual before. They will see a huge performance gain from being able to, in the same way we can with, you know, ChatGPT and Bard, ask a question and get a response. But if, you know, you're thinking about the 30-year veteran who knows it like the back of their hand, no, the gains will be relatively modest when you think about that, which
for us can become really challenging because we haven't really talked about talent in manufacturing, the challenges that are facing manufacturers today. And it's that trade schools are not as popular as they have been in the past. The traditional skills that are out there to keep manufacturers running, machinists, welders,
They're really hard to come by nowadays. And so really the name of the game for a lot of manufacturers is getting folks up to speed, trained quickly, and being able to facilitate and even out the performance in folks who are executing those tasks. Because the domain expertise is there. You've got really talented teams, and hopefully you do. You've been operating for a while. You've got really great teams. It's how do you make sure that information gets really easily into the hands of the newer folk
or the folks who are a little bit less sure about whatever process or model that they're trying to work through. Yeah, I think it's a super important point, Matt. And a good, I would say, also pivot that like the, at least the overtone from the conversation for me was,
When is AI going to beat the human? And it doesn't need to, right? It doesn't have, like, it doesn't need to be binary. And I think it's, and I think that's why you're saying it depends because, I mean, you could imagine all you need is AI plus human to do better than either one of them on their own. And like in your example, Matt, yes, if I'm a veteran of something and I know the manuals by heart, I probably don't need it.
But if the new guy is going to come and ask me, well, A, he's not going to ask me, so I get my efficiency. B, the manual itself might be wrong. Because in this specific use case of knowledge, like I've seen in my work, there have been multiple SOPs that were incongruent, because they were revisions or written at different times. And that might be confusing to people. So I just think that the frame of this is,
is AI helpful or hurtful? And then of course, depending on use case, you might have a very, very low risk
tolerance, where you tread carefully, or in other places you might have a lot more tolerance. And so I just feel like it is very well said that it depends. It depends on the use case. And it also depends on what you're measuring. And that measurement doesn't have to be that we have to replace every single engineer with AI; it's that that engineer plus AI could be more. I like "it depends" because "it depends" means it's hard for us. If it didn't depend,
we'd just have an answer. Sorry, go ahead, Matt. I was just going to add to that. You know, we think about how AI... I like that analogy of thinking that AI plus human has to be better than either independently. And why some of the newer technology coming online, so the, you know, generative AI, is helpful for us is in evening out differences in forms of communication.
So we saw that really early on as, you know, we've digitized these manuals. We feel so intelligent. We feel great about ourselves. The digital team's high-fiving. And then you have the team that's on the ground asking questions, and the model can't perform because the colloquial language that they use to refer to some particular piece of equipment, in this case it was called the dog bone. And so we're trying to run through the manual. You know, the data science team in the background is like, where's the dog bone? And we can't verify that the model's wrong. It's that...
For them, dog bone is a tie rod, and that's the language that that group had used for 50 plus years. And so that's been another huge benefit for us is being able to have folks understand one another, understand the differences in how we might be categorizing things, discussing things, and kind of working through problems.
And that's actually been, rather than a hindrance for us, it's actually been really helpful to have the different groups from across our sites understand what those differences are and what the commonalities are. I really appreciate the panel coming and talking today. Some really good examples. And I think they highlight issues not just in manufacturing, but really in all industries. And thank you for your time.
Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders. And if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and learn more about AI.
and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com forward slash AI for Leaders. We'll put that link in the show notes, and we hope to see you there.