
Extreme Innovation With AI: Stanley Black and Decker's Mark Maybury

2022/3/8

Me, Myself, and AI

People
Mark Maybury
Topics
Mark Maybury: As Stanley Black & Decker's first chief technology officer, he leads extreme innovation efforts across the company, spanning new ventures, accelerating new companies, and fostering innovation within the company's businesses. He has been passionate about artificial intelligence since the age of 13, and over his career he has applied AI to defense and national security, cybersecurity, and commercial domains, accumulating deep experience. Within the company, innovation is divided into six levels, from incremental improvement to radical innovation, and the company balances its portfolio of innovation projects across these levels based on market demand and risk assessment. To reduce risk, the company uses technology readiness levels and commercial readiness levels to evaluate projects, and it emphasizes listening to market signals, learning lessons, and testing and iterating quickly. The company has also established a responsible AI policy to ensure that its AI applications are fair, transparent, and sustainable.

Sam Ransbotham and Shervin Khodabandeh: The two hosts talk in depth with Mark Maybury about how Stanley Black & Decker uses AI for extreme innovation and how it manages risk in the innovation process. Their focus includes AI's application across the different innovation levels, risk assessment methods, the creation and implementation of responsible AI principles, and the company's sustainability efforts.

Sam Ransbotham: Explores how Stanley Black & Decker balances its product development portfolio and how it thinks about responsible AI guidelines.

Shervin Khodabandeh: Discusses with Mark Maybury how Stanley Black & Decker has increased its focus on sustainability and how it applies AI across its business, including factory automation and product development.


Chapters
Mark Maybury discusses his role as the first CTO at Stanley Black & Decker, leading extreme innovation across the company and fostering innovation within its businesses.

Transcript


Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts. AI applications involve many different levels of risk.

Learn how Stanley Black & Decker considers its AI risk portfolio across its business when we talk with the company's first chief technology officer, Mark Maybury. Welcome to Me, Myself & AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, Professor of Information Systems at Boston College. I'm also the guest editor for the AI and Business Strategy Big Idea Program at MIT Sloan Management Review.

And I'm Shervin Khodabandeh, senior partner with BCG, and I co-lead BCG's AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities across the organization and really transform the way organizations operate.

Today, we're talking with Mark Maybury, Stanley Black & Decker's first chief technology officer. Mark, thanks for joining us. Welcome. Thank you very much for having me, Sam. Why don't we start with your current role? You're the first chief technology officer at Stanley Black & Decker. What does that mean?

Well, back in 2017, I was really delighted to be invited by our chief executive officer, Jim Loree, to lead extreme innovation across the Stanley Black & Decker enterprise. So I get involved in everything from new ventures to accelerating new companies to fostering innovation within our businesses, and just in general, being the champion of extreme innovation across the company.

You didn't start off as CTO of Stanley Black & Decker. Tell us a bit about how you ended up there. Well, if you look at my history, it really goes back to how I got interested in AI.

My interest in AI started when I was literally 13 years old. I vividly remember this; it's one of those poignant memories. In 1977, I saw Star Wars, and I remember walking out of that movie inspired by the conversational robots, R2-D2 and C-3PO, and the artificial intelligence between the human and the machine. I didn't know it at the time, but I was fascinated by augmented intelligence and by ambient intelligence: they had these machines that were smart, these robots that were smart.

And then that transitioned into a love of actually understanding the human mind. In college, I studied with a number of neuropsychologists as a Fenwick Scholar at Holy Cross, working also with some Boston University faculty, and we built a system to diagnose brain disorders in 1986. That's a long time ago, but it introduced me to Bayesian reasoning and so on. And then,

when I started my career, I was trained really globally. I studied in Venezuela as a high school student. As an undergraduate, I spent eight months in Italy learning Italian. And then I went to England, to Cambridge, and I learned English. The real English.

The real English. C-3PO would be proud. C-3PO, exactly. That's right. Exactly. Indeed, my master's was in speech and language processing. Sorry, you can't make this up. I worked with Karen Spärck Jones, a professor there, who's one of the great godmothers of computational linguistics.

But then I transitioned back to becoming an Air Force officer. And right away, I got interested in security: national security, computer security, AI security. I didn't know it at the time, but we were doing knowledge-based software development and thinking about how we make sure the software is secure. Fast-forward 20 years, and I was asked to lead a federal laboratory, the National Cybersecurity FFRDC, the federally funded lab operated by MITRE,

supporting NIST. I had come up the ranks as an AI person, applying AI to a whole bunch of domains, including computer security: building insider threat detection modules, building automated penetration testing agents, and doing a lot of machine learning on malware, working with some really great scientists at MITRE, in federal agencies, and beyond,

and in commercial companies. And that really transformed my thinking. I'll never forget working together with some scientists on the first effort to secure medicine pumps, which are the most frequently used devices in a hospital. That's the kind of foundation of security thinking and risk management

that comes through. I got to work with the great Donna Dodson at NIST and other great leaders. Those really were the foundational theoretical and practical underpinnings that shaped my thinking in security.

Okay, but doesn't it drive you crazy, then, that so much of the world has this build-it-first, secure-it-later approach? I feel like that's pervasive in, well, software in general, but certainly around artificial intelligence applications. It's always the features first, and secure it later. Doesn't it drive you insane? How can we change that? There are methods

and good practices, best practices, for building resilience into systems. And it turns out that resilience can be achieved in a whole variety of ways. For example, we mentioned diversity. That's just one strategy. Another strategy is loose coupling. The reason pagodas in Japan

famously last for hundreds and hundreds of years is that they're built with central structures that are really strong, but also hanging structures that are loosely coupled and can absorb, for example, energy from the earth when you get earthquakes.

These design principles apply if you think about a loosely coupled cyber system or piece of software, and even, of course, decoupling things, right, so that you disaggregate capabilities: if a power system or a software system goes down locally, it doesn't affect everyone globally. Some of these principles need to be applied. They're systems security principles.

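To make the loose-coupling point concrete, here is a minimal sketch, in Python, of a hypothetical circuit breaker. It's an illustration of the general pattern, not anything Stanley Black & Decker describes here: a failing local dependency is isolated so its outage doesn't cascade globally.

```python
import time

class CircuitBreaker:
    """Isolate a failing dependency: after repeated consecutive failures
    the circuit 'opens' and callers get a fallback immediately instead
    of waiting on the broken component."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after  # seconds to wait before retrying
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, fallback=None):
        # While the circuit is open, fail fast with the fallback.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args)
            self.failures = 0  # a success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback

# A flaky local sensor service degrades to a cached reading instead of
# taking the rest of the pipeline down with it.
breaker = CircuitBreaker()
reading = breaker.call(lambda: 42.0, fallback="last_known_reading")
```

The shape mirrors the pagoda example: the strong central path keeps working while the loosely coupled part absorbs the failure.
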
But they can absolutely be applied in AI. I mean, it's amazing how resilient people can be when they're in an accident. They've got broken bones; they've got, you know, maybe damaged organs. And yet they're still alive. They're still functioning. How does that happen? Nature is a good inspiration for us.

We can't forget, in the end, that our company has a purpose: for those who make the world. And that means that we have to be empathetic in understanding the environments these technologies are going to go into, and make sure that they're intuitive, transparent, learnable, and adaptable to those various environments, so that we serve those makers of the world effectively.

And Mark, as you're describing innovation, I think your brand is very well recognized and a lot of our audience would know. But could you just maybe quickly cover what does Stanley Black & Decker do and how have some of these innovations maybe changed the company for the better?

Well, it's a great question. One of the delights of coming to this company was learning what it does. So I knew Stanley Black & Decker, like many of your listeners will know, is a company that makes DeWalt tools, hand tools or power tools or storage devices.

Those are the things that you're very familiar with. But it turns out that we also have a several billion dollar industrial business. So we robotically insert fasteners into cars. And it turns out that nine out of every 10 cars or light trucks on the road today are held together by Stanley fasteners. Similarly, I didn't know beforehand, but in 1930, we invented the electronic door, the sliding door.

So, next time you walk into a HomeGoods or a Home Depot or a Lowe's or even a hospital or a bank, if you look up and you look to the left, you'll notice there's a one in two chance there'll be a Stanley logo because we manufacture one of every two electronic doors in North America.

And there are other examples, but those are innovations, whether it be protecting two million babies with real-time location services in our health care business or producing eco-friendly rivets that lightweight electric vehicles. These are some examples of the kinds of innovations that we're continuously developing, because basically every second, 10 Stanley tools are sold around the world.

every second. And so whether it's Black & Decker, whether it's DeWalt, whether it's Craftsman, these are household brands that we have the privilege to influence the inventive future of. Yep. You're really everywhere. And every time I sit in my car now, I'm going to remember that like the strong force that keeps the nucleus together. You're keeping my car together. That's fantastic. Can you give us an example of extreme innovation versus non-extreme?

Sure. By extreme, we really mean innovation of everything, innovation everywhere, innovation by everyone. We actually, interestingly within the company, delineate between six different levels of innovation.

We're just in the past six months becoming much more disciplined across the entire corporation, with a common rubric for how we characterize things. So it's a great question. Levels one and two are incremental improvements, let's say, to a product or a service. Once we get to level three, we're talking about something where we're beginning to make some significant change to a product. When we get to level four, we're talking about maybe three or more major new features. It's something that you're really going to notice.

When we talk about a level five, this is a first of a kind, at least for us; it's something that we may have never experienced in our marketplace. Those we oftentimes call breakthrough innovations. And finally, there's level six: radical innovations. Those are things that are world firsts.

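Purely as an illustration of the rubric just described (the encoding and names below are assumptions; the company's exact definitions aren't given here), the six levels might be captured like this:

```python
from enum import IntEnum

class InnovationLevel(IntEnum):
    """Hypothetical encoding of the six-level innovation rubric."""
    INCREMENTAL_MINOR = 1   # small improvement to a product or service
    INCREMENTAL_MAJOR = 2   # larger, still incremental improvement
    SIGNIFICANT_CHANGE = 3  # beginning of significant product change
    MAJOR_NEW_FEATURES = 4  # roughly three or more major new features
    BREAKTHROUGH = 5        # first of a kind, at least for the company
    RADICAL = 6             # a world first

def is_extreme(level: InnovationLevel) -> bool:
    # Levels 5 and 6 are the breakthrough and radical end of the spectrum.
    return level >= InnovationLevel.BREAKTHROUGH
```

Treating the levels as ordered integers makes the portfolio-balancing questions discussed below easy to ask of any project list.
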
And to give you a concrete example, we just introduced to the marketplace the first use of pouch battery technology, the successor to the FlexVolt batteries, which essentially uses the pouch technology to deliver a 50% increase in power in batteries for tools,

two times the life cycle, and reductions in the weight and the size. So that's an example of an extreme innovation that's going to revolutionize power tools. That's called PowerStack.

Another example we brought forward in Black & Decker is Pria. Pria is the first conversational home health care companion. It's an example of using speech, language, and conversational technology to support medication dispensing for those who want to age in place in the home, for example, but also to use AI to detect anomalies and alert caregivers. So those are examples that can be really transformative.

Levels one through six imply that there's a portfolio, and that there's an intention about how to build, manage, and evolve that portfolio. Can you comment a bit on how you think about that, how much goes into level one versus level six, and what are some of the trade-offs that you consider? That's an excellent question. And basically, it is really market driven.

And it's even going to be further product- and segment-driven. If you're selling a software service, you're going to want to have it modified almost in real time; certainly within months, you're going to want to be evolving that service. And so that incremental modification might occur. We have an ability to just upload a new version of our cyber-physical end effector, if you will, whatever it happens to be.

But to answer your question, oftentimes, if companies don't pay attention to their levels one through six, from incremental all the way up to radical, they'll end up over time with a portfolio that drifts toward incrementalism, focused only on minor modifications. Those are easy to do. You get an immediate benefit in the marketplace, but you don't get a medium- or long-term shift. And so what we intentionally do is measure, in an empirical fashion, how much

growth, how much margin, and how much consumer satisfaction we're getting out of those level ones all the way up to level sixes. Because any organization is going to naturally be resource constrained in terms of money, in terms of time, in terms of talent.

What you need to do, ideally, is optimize. The marketplace might reward you for, let's say, having new products and services in class four, which have major improvements, but penalize you for radical innovations because it just can't absorb them.

There's cognitive dissonance. What do you mean, home health companion? I don't know what that is. I just want a better tongue depressor. And so in that case, you really need to appreciate what the marketplace is willing to adopt. And we have to think about, if you do have a radical innovation, how are you going to get it into the channel? And one final thing I'll say, because your question is an excellent one about portfolio, is that we actually go one step further: not only do we look at what the distribution of the classes is

and what the response to those investments is over time, but further, for any individual investment, we actually put it into a portfolio that characterizes the technical risk and the business risk. We use technical readiness levels, which come out of NASA and the Air Force, my previous life, and are now used in the business community. And then, previously, when I was working for the federal government, we created commercial readiness levels,

and I've imported those into Stanley Black & Decker. Now we actually have a portfolio for every single business in the company as a whole for the first time ever. What we're really delighted to finally bring to the company is an ability to look at

our investments as a portfolio, because only then can we see whether we're trying to achieve unobtainium, because it's technically unachievable, or, equally bad, whether there's no market for it. You may invent something that's really great, but if the customer doesn't care for it, it's not going to be commercially viable. And so those are important dimensions to look at in portfolio analysis.

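Here is a toy sketch of that two-dimensional portfolio view, with invented projects and made-up scores purely for illustration; the nine-point technical readiness scale follows NASA's convention, while the commercial readiness scale shown is an assumption rather than the company's actual rubric:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    trl: int  # technical readiness level, 1 (concept) to 9 (proven)
    crl: int  # commercial readiness level, 1 (no market) to 9 (at scale)

def portfolio_flag(p: Project) -> str:
    # Low TRL: risk of chasing "unobtainium"; low CRL: risk of no market.
    if p.trl <= 3 and p.crl <= 3:
        return "high technical AND market risk"
    if p.trl <= 3:
        return "technical risk: may be unachievable"
    if p.crl <= 3:
        return "market risk: customers may not care"
    return "balanced"

portfolio = [  # invented examples, not actual company projects
    Project("incremental tool refresh", trl=9, crl=8),
    Project("pouch battery platform", trl=7, crl=5),
    Project("radical concept X", trl=2, crl=2),
]
for p in portfolio:
    print(f"{p.name}: {portfolio_flag(p)}")
```
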
Yeah, I'm really happy that you covered risk, because that was going to be my follow-on question. Even that must be a spectrum of risk, and a decision about how much risk is the right level, and how long you wait to know whether the market really likes something or not. I'm not going to put words in your mouth, but I was going to infer from that that you're saying it's a

lever and a decision that you manage based on the economics of the market and the company, and on when you want to be more risky versus less risky.

Absolutely. There are many voices that get an opportunity to influence the market dynamics. You know, if you think of Porter's five forces model, classically, you've got your competitors, your suppliers, your customers, and yourself. And all of these competitive forces are active. And so one of the things we try to do is measure, is listen. Our leadership model within our operating model at the company is listen, learn, interact,

and lead. That listening and learning part is really, really critical. If you're not listening to the right signals, whether it's a customer signal, a technological disruption signal, an economic signal, or your manufacturing and supply signal, you miss things. You need all those signals.

And then importantly, you need lessons learned. You need good practices early in the idea generation side. Are you using design thinking? Are you using diverse teams? Are you gathering insights in an effective way? And then as you go through to generating opportunities, are you beginning to do competitive analysis like I just talked about as you begin to look into these specific business cases?

Are you trying things out with concept cars or proofs of concept, and then getting to, maybe we don't have the solution; maybe we ought to have some open innovation from outside the company? And then ultimately, in our commercial execution, do we have the right sales teams, the right channels, the right partnerships to go to scale? And so the challenge is that oftentimes, whether it be in manufacturing the products or elsewhere,

we can get into pilot purgatory. We can create something that looks really, really exciting and promising to the marketplace, but it's unmanufacturable, or it's unsustainable, or it's uninteresting, or it's uneconomical. And that's really not good. You really have to have a holistic

intent in mind throughout the process, and then, importantly, a discipline to test, to measure, and to fail fast, and then eventually be ready to scale quickly when something does actually hit, if you will, the sweet spot in the market.

So there are lots of different things at these levels. Can you tie them to artificial intelligence? Is artificial intelligence more risky in market risk? Is it more risky in technical risk? How is that affecting each of your different levels? What's the intersection of that matrix with artificial intelligence?

Great question. Our AI really applies across the entire company. We have robotic process automation, which is off the shelf, low risk, and provable, and we automate elements in IT, elements in finance, elements in HR. We actually have almost 160 digital employees today

that just do automated processes. And we call it not only AI but sometimes augmented intelligence rather than artificial intelligence: How do we augment the human to make them more effective? So, to your question of what's risky, that seems less risky. That's very low risk. RPAs are very, very low risk. However,

if I'm going to introduce Pria into the marketplace, or Insight, which is a capability in our Stanley Industrial business for IoT measurement and predictive analytics for shears and/or extensions to very large-scale excavation equipment and so on, in that case, there could be very high risk,

because there might be user adoption risk, there's sensor relevance risk, there's making sure your predictions are going to work well. It could be a safety risk as well as an economic risk. So you want to be really, really careful to make sure that those technologies in those higher risk areas will work really, really well because they might be life critical if you're giving advice to a patient or you're giving guidance to an operator of a very big piece of machinery.

And so we really have AI across our whole business, including, by the way, in our factories and automation. One of the ways we mitigate risk there is we partner. We work with others so that they actually have de-risked a lot of the technology. So you see mobile robots from third parties. You'll see collaborative robots from third parties that we're customizing and putting to scale in our factories and de-risking them. So on that matrix, they're much more distributed across the spectrum of risk.

One of the things that Shervin and I have talked about a few times is this idea that artificial intelligence may steer people toward incremental improvements: the ability of these algorithms to improve an existing process may somehow steer people toward level one versus level six. Are you seeing that? Are you able to apply artificial intelligence to these level five, level six types of projects?

We absolutely have AI across the spectrum. When it comes to AI, the stuff lower in technical and commercial risk tends to be commercially proven. You know, it tends to have multiple use cases; others have deployed the technology; it's been battle hardened. But the reality is, there are a whole series of risks. And we actually have just recently published

a set of responsible AI policies at the company and made them publicly available, so any other diversified industrial or tech company, or consulting firm, or small to medium enterprise can take a look at what we do. And I'll give you a very simple example, and it gets a bit to your point of, well, will they gravitate to the easier problems? Not necessarily. One of the areas of risk is making sure that your AI sensors or classifiers are in fact not biased and/or that they're resilient.

And one of the ways you make sure they're resilient and unbiased is you make sure that you have much more diversified data. That means if you have more users or more situations that are using your AI systems and there's active learning going on, perhaps reinforcement learning while that machine's operating, most likely human supervised because you want to make sure that you're not releasing anything that could adversely affect an operator.

or end user. Actually, the more data you get, the more effective the system becomes, the more risk you can reduce, and the higher performance you can get. So it's a bit counterintuitive: you can actually become a bit more innovative in some sense, or just smarter, in the AI case,

because you have more exposure. In the same way that people who go through high school, to university, to graduate school become much more effective at learning and communicating because the challenge increases along those levels, it's the same thing with a machine. You can give it easier examples, more incremental, simple challenges to that system. And as the challenges get more difficult, as I go from the consumer to the prosumer to the pro,

the intelligence of that system increases, because the pro knows a lot more. She's been out working in construction for 20 years, or building things in a factory for a long time, and knows what kinds of learnings that machine can leverage, and can expose that machine to more sophisticated learnings. For example, in predictive analytics, if I want to predict an outage:

if I've only seen one kind of outage, I will only be able to deal with that outage. If I've seen 30 different kinds of outages, I'm much better and much more resilient, because I know what I know but also, equally or perhaps more importantly, I know what I don't know. And if I've seen 30 different things and I see something for the first time, a brand new one, I can say: this doesn't fit with anything I've seen before. I'm ignorant. Hold up. Let's call a human. Tell them it's an anomaly. Let's get the machine to retrain.

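As a minimal sketch of that "I know what I don't know" behavior, assuming a simple distance-based novelty check: a toy nearest-centroid classifier abstains when an observation is far from every outage type it has seen, flagging it for a human instead of guessing. The features, labels, and threshold below are illustrative, not the company's actual method.

```python
import numpy as np

class OutageClassifier:
    """Toy nearest-centroid classifier that abstains on novelty: if a
    new observation is far from every known outage type, it flags an
    anomaly for a human instead of guessing."""

    def __init__(self, threshold=2.0):
        self.centroids = {}         # outage type -> mean feature vector
        self.threshold = threshold  # max distance before abstaining

    def fit(self, features, labels):
        labels = np.array(labels)
        for label in set(labels):
            self.centroids[label] = features[labels == label].mean(axis=0)

    def predict(self, x):
        # Find the closest known outage type.
        label, dist = min(
            ((lbl, np.linalg.norm(x - c)) for lbl, c in self.centroids.items()),
            key=lambda pair: pair[1],
        )
        if dist > self.threshold:
            return None  # novel: call a human, label it, retrain
        return label

X = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1]])
clf = OutageClassifier()
clf.fit(X, ["overheat", "overheat", "voltage_drop"])
print(clf.predict(np.array([0.05, 0.05])))   # "overheat"
print(clf.predict(np.array([30.0, -20.0])))  # None: never seen before
```
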
Where do these responsible principles come from? Is that something you developed internally, or is it something you've adopted from somewhere else?

So first, the motivation: Why do we care about responsible AI? It starts from my 31 years working in the public sector, understanding some of the risks of AI. Having been on the government side, funding a lot of startups and a lot of large and small companies, and building partnerships

in defense applications, in health care applications, in national applications for AI, we recognize the fact that there are lots of failures. The way I think about the failures that motivate responsible AI is in terms of the OODA loop, right: observe, orient, decide, and act.

In observation, you can have bad data, failed perception, a bias, like I was suggesting. Machines literally can be convinced that they're seeing a yield sign when they're looking at a stop sign. There actually have been studies that have demonstrated this. Exactly: adversarial AI.

You can also confuse an AI by biasing a selection, by mislabeling or misattributing things, so it gets oriented in the wrong way, like the classifications I talked about before. You could force it to see a different thing or to misclassify. Similarly,

AIs can decide poorly. They can have misinformation; there can be false cues or confusion. We've seen this in a flash crash, where AIs were trained to do trading and their model didn't recognize when things were going bad, and poor decisions were made. And then finally, there can be physical-world actions. We've had a couple of

automated vehicles fail because of failed human oversight of the AI, either overtrusting or undertrusting it, and then poor decisions happen. So that's the motivation. And then we studied work in Europe, in Singapore, and at the World Economic Forum.

In the US, there's a whole bunch of work in AI principles and algorithmic accountability and White House guidance on regulation of AI. We've been connected into all of these things, as well as connected to the Microsofts and the IBMs and the Googles of the world in terms of what they're doing in terms of responsible AI. And we as a diversified industrial said we have these very complicated domain applications everywhere.

in manufacturing, in aviation, in transportation, in tools, or in home healthcare products, or just home products. And so how do we make sure that when we're building AIs into those systems, we're doing it in a responsible fashion? And so that means making sure that we're transparent in what the AI knows or doesn't know, making sure that we protect the privacy of the information we're collecting about the environment, perhaps of the people.

Making sure that we're equitable and unbiased in our decisions. Making sure that the systems are more resilient and more symbiotic, so we get at that augmented intelligence piece we talked about before. All of these are motivations for why, because we're a company that really firmly believes in corporate social responsibility.

And in order to achieve that, we have to actually build it into the products that we're producing and the methods and the approaches we're taking, which means making sure that we're stress testing those, that we're designing them appropriately. So that's the motivation for responsible AI. So what are you excited about at Stanley Black & Decker? What's new? You mentioned projects that you've worked on in the past. Anything exciting that you can share that's on the horizon?

Yeah, I can't go into great detail, but what I can say right now for your listeners is that we have some extreme innovation going on in the ESG area, specifically when we're talking about net zero. We've made public statements that our factories will be carbon neutral by 2030. We have 120 factories and distribution centers around the world, so that's not easy. And no government has told us to do that; it's self-imposed.

And by the way, if you think, well, that's a future thing, we'll never do it: we're already ahead of target to get to 2030. But we're also pulling one goal in a little bit closer: by 2025, we're going to be plastic free in our packaging. So we're getting rid of those blister packs that we've all gotten so accustomed to. Why? Because we want to get rid of microplastics in our water and in our oceans, and we feel that it's our responsibility to take the initiative. No government's asked us to do this.

It's just that we think it's the right thing to do. We're very, very actively learning right now about how we get materials that are carbon free, how we operate our plants and design products that will be carbon free, and how we distribute things in a carbon-neutral way. This requires a complete rethinking, and it requires a lot of AI, actually, because you've got to think about smart design.

Which components can I make reusable? Which can be recyclable? Which have to be compostable? The thing here is really to think outside the box, not to do something just because you've always done it that way. We're going to be a 179-year-old company, so we've been around for a while.

And as an officer of the company, my responsibility, really, is to be a steward: to make sure that we progress along the same values as Frederick Stanley, who was a social entrepreneur and the first mayor of New Britain, and who turned his factories to building toys for children during the war, when there were no toys. I mean, just a very...

community-minded individual. And that legacy, that purpose, continues on in what we do. And so, yes, we want high-powered tools, and yes, we want lightweight cars, and we want all those innovations, but we want them in a sustainable way. Thank you. I think that many of the things you've described, for example, the different levels at which you think about innovation, will resonate with listeners. It's been a great conversation. Thanks for joining us. Mark, thanks. This has been really a great conversation. Thank you very much.

We hope you enjoyed today's episode. Next time, Shervin and I talk with Sanjay Nichani, Vice President of Artificial Intelligence and Computer Vision at Peloton Interactive. Please join us. Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights,

and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We'll put that link in the show notes, and we hope to see you there.