
Prototypes, Pilots, and Polymers: Cooper Standard’s Chris Couch

2021/5/11

Me, Myself, and AI

People
Chris Couch
Topics
Chris Couch: I am the Chief Technology Officer of Cooper Standard, a global tier-one automotive supplier. I also founded and lead an AI startup called LiveLine Technologies, which grew out of our R&D work at Cooper Standard. We make vehicle sealing, enclosures, and fluid-system components for brake fluid, coolant, and the like. We also invest in materials-science technologies that we believe can have an impact beyond the automotive industry. Many of our products are invisible to consumers, but they are critical to the driving experience and to vehicle safety and reliability. For example, we developed a new polymer, Fortrex, that seals doors and windows better, which is especially important for reducing wind noise in electric vehicles. We use AI to improve the polymer formulation development process, dramatically shortening R&D loops (by 70% or 80%). Through open innovation we partner with universities, consortia, and startups to source outside ideas and technology; our first AI project came through that pipeline. The key to successful open innovation is focus: quickly narrowing down where to invest, quickly killing projects that aren't working, and taking the ones that are to fruition. In AI projects you have to balance attention to ROI against the pursuit of breakthrough innovation: early stages can explore wilder ideas, but at the productization stage you need a clear-eyed view of ROI. General familiarity with AI is still relatively low, which makes implementing and scaling enterprise AI harder, but AI pilots are relatively cheap, which makes it easier to explore and validate feasibility. Scaling an AI pilot to global scale requires carefully chosen use cases that are realistic and connected to the ultimate ROI. LiveLine uses machine learning to automate the creation of automation for complex manufacturing environments, and drives adoption by giving plant personnel real-time data and control. The key to adoption is opening the system up to plant personnel so they can see the data streams in real time, understand the system's decisions, and retain control (for example, an emergency stop button). Opening up how the AI system works yields valuable feedback, including improvement suggestions and new data sources. Plant personnel's expertise can drive valuable improvements to the AI system even though they are not AI experts. In pilots, plant personnel's positive response to the system, and their reliance on it, showed its value in real operations. The keys to attracting and retaining good talent are sustained investment in innovation and a genuine, positive culture. Different kinds of technical talent are attracted in different ways, but a sustained commitment to innovation and a positive culture are what matter.

Sam Ransbotham: The experimental learning, focused scaling, and emphasis on culture in this interview echo the key steps for capturing value from AI identified in our prior research.

Shervin Khodabandeh: Chris Couch showed both enthusiasm for and patience with AI. He emphasized the importance of focus, and of a culture in which everyone from the top of the organization to frontline employees believes in applying the technology. Recruiting and developing talent has to account for the needs of different kinds of people, but a shared commitment to innovation is key.


Chapters
Chris Couch discusses how AI is indirectly benefiting consumers through advancements in automotive products like brake fluid and polymer seals, and how AI is used in the development of advanced polymer formulations.

Transcript


Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts.

Things like brake fluid and chemical manufacturing may not seem like gee-whiz artificial intelligence, but we all may be benefiting from AI already and just not know it. Today we're talking with Chris Couch, Senior Vice President and Chief Technology Officer of Cooper Standard, about how we're all benefiting indirectly from artificial intelligence every day. Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI.

I'm Sam Ransbotham, professor of information systems at Boston College. I'm also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

And I'm Shervin Khodabandeh, senior partner with BCG, and I co-lead BCG's AI practice in North America. And together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities across the organization and really transform the way organizations operate.

Today, we're talking with Chris Couch. So Chris is the SVP and the Chief Technology Officer for Cooper Standard. Chris, thanks for taking the time to talk with us. Welcome. You bet. Thank you very much. Why don't we start by learning a little bit about your role at Cooper Standard? What do you do now?

I am the CTO of Cooper Standard. We're a tier one global automotive supplier. I'm also the founder and CEO of an AI startup called LiveLine Technologies that has come out of some work that we did as R&D within Cooper Standard. We provide components in the spaces of vehicle sealing and enclosures, as well as fluid handling, whether it's brake fluid or coolant, all the fluid systems in the vehicle.

We also invest in material science technologies that we believe can have an impact beyond automotive. Many of our products may not be visible to the average consumer. In fact, some of our products, hopefully nobody has to worry about them when we're moving fuel around your vehicle, but they're critically important to the driving experience and having a safe and reliable vehicle. For example, we developed a brand new category of polymer

that we call Fortrex. Fortrex provides a much better seal around the doors and windows in your vehicle. Why is that important? It's important, especially as we move into an electrified vehicle world. As engine and transmission noises decrease because there's no more gasoline engine, other sources of noise become more prevalent. And the largest one of those is the noise coming in due to wind around your doors and windows.

And so by providing an enhanced sealing package for those, we believe we've got the right products to service an electrifying world. So how is artificial intelligence involved in that development of the polymer? We spend a lot of time and money coming up with advanced polymer formulations. A lot of it historically has been trial and error. That's what industrial chemists often do.

And we used AI to develop a system that advises our chemists on the next set of recipes to try as they iterate towards a final solution. And we found dramatic reductions in many cases with that approach. And dramatic means reducing those R&D loops by 70 or 80%.
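The interview doesn't say which technique the advisor system used; a common way to build this kind of "suggest the next recipe" tool is Bayesian optimization over formulation parameters. Below is a minimal, purely illustrative sketch, assuming recipes are encoded as numeric ingredient fractions and a single measured property is being maximized (all function and variable names are hypothetical):

```python
# Illustrative sketch (not the actual system): suggest the next polymer
# recipe to test using a Gaussian-process surrogate and expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def suggest_next_recipe(tested_recipes, measured_scores, candidate_recipes):
    """tested_recipes: (n, d) ingredient fractions already run in the lab.
    measured_scores: (n,) property to maximize, e.g. a sealing metric.
    candidate_recipes: (m, d) recipes the chemists are willing to try next."""
    surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    surrogate.fit(tested_recipes, measured_scores)

    mean, std = surrogate.predict(candidate_recipes, return_std=True)
    best_so_far = measured_scores.max()

    # Expected improvement balances exploiting good predictions against
    # exploring uncertain regions of the formulation space.
    gain = mean - best_so_far
    z = np.divide(gain, std, out=np.zeros_like(std), where=std > 0)
    expected_improvement = gain * norm.cdf(z) + std * norm.pdf(z)

    return candidate_recipes[np.argmax(expected_improvement)]
```

Each loop, the chemists run the suggested recipe, append the measurement, and refit; cutting the number of such loops is where a 70 to 80% reduction would come from.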

Got it. But before we talk more about Cooper Standard's success with AI, can you tell us a little bit about your own background and career path? Well, I think the best way to describe myself is a lifelong manufacturing addict, first of all. As a kid, I took apart everything in the house, probably got hit with wall current more than once. Explains a lot about me today, I suppose. You know, I was the kid that built their first car out of a kit.

I was a hardcore mechanical engineer with a focus on manufacturing controls in school. My side projects include things like building autonomous drones that fly at high altitude. So I'm just a manufacturing nerd. That has really served me well in my career. I spent the first third of my working life in a Japanese company. I worked for Toyota.

and went and joined them in Japan, spent a dozen years with them designing and building and ultimately being involved in plant operations.

I spent the next third of my career running a P&L for an automotive supplier on the business side, mostly based out of Asia. So I have a business bent as well, which may color a lot of what I say today. And then the last third or so of my career has been in CTO gigs. I'm in my second one here. I'm, again, at an automotive supplier, but we get our fingers into all kinds of interesting

tech domains, just given what's happening in the world today, whether it's material science or AI. So here I am and didn't really expect to be doing my second job here, if you would have asked me two years ago, but it's certainly been a lot of fun and we're excited about delivering some impact with these technologies. Chris, tell us about the open innovation at Cooper Standard. What is that all about? You know, as we looked around at our tech portfolio a couple of years ago when I joined the company,

I was, first of all, overwhelmed by the different domains that we really had to compete in. And I mentioned materials science earlier, but there's different aspects of manufacturing technology and product design, the whole topic of analytics and AI, right, that we're going to talk about today. And I was very convinced that there was no way we could do it all ourselves. Cooper Standard isn't a small company. We're just shy of $3 billion in revenue, but we're not the biggest.

And so Open Innovation was really an attempt to reach out and build a pipeline to draw ideas and technology and maybe even talent from the outside world. And so through that, we engage with universities, with consortia around the world. We engage heavily with startup companies and use that as a source of ideas.

In fact, our first proper AI project, if you will, really came through that open innovation pipeline. And we partnered up with a brand new startup that was called Uncountable out of the Bay Area. And they helped us develop a system that would serve effectively as an advisor for our chemists that make new formulations for materials that we use all the time.

And that wound up being a great accelerator for our R&D process, cutting iterations out of those design and test loops, if you will. And, you know, that was one of those big aha moments, right? That, look, there is a huge potential to, you know, accelerate ourselves in many domains. We can't do it all ourselves. And so how do we really build that external pipeline? We now call it CS Open Innovation, but that was the impetus.

Sounds like a very sort of unique way of bringing folks with different backgrounds and different talents and getting them all to work together. What did you find was the secret sauce of making that happen? I think whether it's AI, whether it's material science, whether it's other domains, my answer is the same. It really is all about the ability to focus.

And the reason that we, as many other companies, have put in place innovation pipelines and processes and stage gate processes that govern innovation is because of the focus. And how do we quickly narrow down where we're going to allocate our precious R&D dollars? And how do we...

govern those correctly. So we think like a startup, we're doing the minimal investment to sort of answer the next most important question and either wind up killing things quickly or taking them to fruition.

And a fair amount of fail fast and test and learn and sort of go big behind things that are working and shut down things that aren't, right? Did I hear that correctly? Exactly. And that's not unique. I think that there's nothing special about AI-based projects, right? We sort of think in the same way and very quickly try to motivate those with a clear-eyed view of ROI. And frankly, one of the

things that I think we've seen over the years when it comes to analytics, AI, especially coupled with manufacturing and Industry 4.0,

ROI has sometimes been hard to come by. And a lot of creative ideas, a lot of interesting things to do with data. But the question is, how does it translate to the bottom line? And if that story can't be told, even as a hypothesis that we're going to prove through the innovation project, then it's hard to justify working on it. It seems like the opposite, though, is that, let me just push back a little bit.

If you get too focused on ROI, where are you going to get something weird and big and unusual? Absolutely. So how are you balancing that tension between focusing on ROI and also trying not to miss out on or trying not to be too incremental? I think the stage gate mentality is useful here. I think in the early stages, we look at a lot of crazy stuff. We have crazy ideas that come in through open innovation. We have crazy ideas from our own teams, and that's fantastic. And we don't hesitate to look at them.

And maybe even, you know, spend a little pocket money to chase them down to some degree. The question then is, you know, what are we going to invest in to try to productize? And that's really the next gate, if you will. So absolutely, the exploration is important. We certainly do some of that. You know, I hesitate to say it almost, but it's having some space to play, right, with ideas and technologies. But then when it's time to go productize, right, you have to be clear-eyed on what you're going to get out of it.

That seems like something that might differ for an AI approach. I mean, you said, well, AI is no different just a second ago. But it seems like, you know, I guess I wonder if there is something different about these new technologies that may require a little more freedom up front to do something weird than perhaps some others.

I think that's fair. And in our experience, I think one of the differences with AI is that you probably have less familiarity with the tools and the applications among the general technical population. So if you're talking to design engineers or talking to manufacturing process engineers,

They may have read some things and maybe seen an interesting demo somewhere, but may not be so versed in the nuts and bolts of how it works, much less the nuts and bolts of what it would take to scale that at an enterprise level, right? Because getting models running in a Jupyter notebook off of a CSV file on your hard drive is a whole different story from production on a global scale. And so I think just that lack of exposure to the technologies makes it a bit different, right? If we're talking about

traditional robotics or maybe simpler types of IoT concepts. Plenty of engineers have a good clue and maybe have used some things in their career, but much less so when it comes to AI. That is the difference, I would agree. The good news is I am very convinced that one of the wonderful things about AI is that it is cheap to pilot. And I was just sort of making up a silly example about Jupyter Notebooks and

CSV files, but that's a great way to explore some concepts. And the cost of that is very close to zero other than acquiring the knowledge to do it. And, you know, even then, I think that we've proven over and over in our internal teams that even the knowledge acquisition is reasonably priced, if you will.
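To make the "Jupyter notebook and a CSV file" point concrete, a first feasibility check on exported line data can be about this small. The file and column names below are invented for illustration:

```python
# Illustrative near-zero-cost pilot: can a handful of line sensor readings
# predict whether a part ends up scrapped?
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

history = pd.read_csv("line_history.csv")  # export from the plant historian
features = history[["zone1_temp", "zone2_temp", "screw_speed", "line_speed"]]
scrapped = history["scrapped"]             # 1 if the part was scrapped

model = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(model, features, scrapped, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f}")
```

A result like this proves or disproves a hypothesis for essentially the cost of the analyst's time, which is the point being made about cheap piloting.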

Chris, I want to build on that point you made, that AI is relatively inexpensive to pilot. And I agree with that because we see, of course, a proliferation of proofs of concept and different teams trying different approaches, different ideas. It also seems to be the case that AI is quite hard to scale. And so I want to sort of get your perspective and

reactions to that: something's easy to pilot, quite hard to scale, and the real meaningful ROI will come after you scale it. So how do you make that transition? And how do you sort of take things that are really easy to pilot and get excitement around, but then harder down the line to actually embed into business processes and ways of working? How do you envision that transition working?

Right. Yeah, it's a great question and it's definitely not easy and maybe not for the faint of heart, right? Because sometimes it does take a leap of faith in the ability to scale ultimately. The best I can say from our experience with LiveLine, we did some very early prototyping. We thought we understood the data science aspect, right? But that was only the beginning and that was nearly two years ago.

And, you know, only in the past months have we begun to go to a global scale out. The only insight I have there is, you know, as you prototype, as you pilot, you've just got to try to be as judicious as you can about selecting use cases that are realistic and that everybody can get their heads around and connect the dots to the ROI at the end of the day. How are you getting people...

Once you've got these solutions in place, what about the adoption within the organization? How are you getting people to work on teams that used to have human partners and now have machine partners? With LiveLine, the basic concept of LiveLine is to automate the creation of automation for complex manufacturing environments. And we're using machine learning techniques to design the control policies that we deploy onto the lines to control machine parameters in real time.

And we think this is very useful for attacking a diverse range of processes that have been too complex or too costly to automate otherwise. And our early successes have been in continuous-flow manufacturing processes: chemical conversion, polymer extrusion. And we think there's a broad applicability to this to areas like oil and gas, wire and cable, etc.
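LiveLine's architecture isn't spelled out in the interview; as a rough illustration only, deploying a learned control policy onto a line amounts to a loop like the following, where the sensor and actuator interfaces and the trained policy object are placeholders:

```python
# Illustrative real-time control loop around a learned policy (not LiveLine's
# actual code). read_sensors, apply_setpoints, and policy are placeholders.
import time

SETPOINT_LIMITS = {"extruder_temp_c": (180.0, 220.0), "line_speed_mpm": (8.0, 14.0)}


def clamp(name, value):
    low, high = SETPOINT_LIMITS[name]
    return max(low, min(high, value))


def control_loop(policy, read_sensors, apply_setpoints, period_s=1.0):
    """Each cycle: read the line state, ask the trained policy for new machine
    setpoints, clamp them to engineering limits, and write them to the line."""
    while True:
        state = read_sensors()                    # dict of live measurements
        proposed = policy.predict(state)          # dict of proposed setpoints
        safe = {name: clamp(name, value) for name, value in proposed.items()}
        apply_setpoints(safe)
        time.sleep(period_s)
```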

One of my fears when we first got into plants to do live production trials is that the plant personnel might view this as sort of a threat, right? We're automating, and that has some negative connotations sometimes in terms of impacts on people's jobs and so forth.

But there are a couple of things that I think really gained us some traction, and the reception has been quite warm. In fact, the plants are pulling very hard now to roll this out. My attitude is to really democratize the information and what's happening with the tool.

And so, for example, we spent quite some effort to make sure that operators in the plant environment had screens where they can see data streams in real time that they couldn't before. And sometimes there were data streams that we had created for the sake of the machine learning. And we give them visibility into it. We give them visibility, if they want, into the decisions that the system is making and

We also give them the ability to turn it off, the big red button, right, if they're not comfortable with what HAL 9000 is doing on their production line. And also we give them the ability to bias it, right? So if they feel, based on their experience, that the system is making parts that are a little –

you know, let's just say too thin or too thick, they can bias it down a little bit. So I think that sort of exposure and opening up the black box, at least in the plant environment, is very critical to people buying in and believing in what's going on. One of our learnings at LiveLine with that was the enhanced feedback that we get.
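The operator controls Couch describes, a kill switch and a way to bias the output, can be layered on top of whatever the policy proposes. A sketch of that wrapper, with names that are illustrative rather than LiveLine's API:

```python
# Illustrative operator-facing overrides around an AI control policy.
from dataclasses import dataclass, field


@dataclass
class OperatorOverrides:
    enabled: bool = True                       # the "big red button"
    bias: dict = field(default_factory=dict)   # e.g. {"die_gap_mm": -0.05}

    def apply(self, proposed_setpoints, manual_setpoints):
        """Fall back to the crew's manual setpoints when the system is switched
        off; otherwise nudge each proposed setpoint by the operator's bias."""
        if not self.enabled:
            return manual_setpoints
        return {name: value + self.bias.get(name, 0.0)
                for name, value in proposed_setpoints.items()}
```

The additive bias here is just one possible design; the point from the interview is that the crew can see what the system proposes and adjust or stop it.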

We have received several very influential and useful ideas from people that were really doing nothing more than looking at data streams and watching the system making decisions.

And they asked good questions back to us and gave us good insights, suggested new sources of data that we could be tagging that might be useful once they really began to get a little intuition about what we were trying to do with the data science. So I think that sort of democratization, if you will, of the system and opening up and exposing the guts as it does its thing has been, at least in this case, one of the success factors.

That's a great example. It covers... Exactly. Sam, I feel like it covers a lot of what we've talked about. I knew you'd jump on that, Shervin. In our report, in terms of different modes of human-AI interaction: no black box, allowing the human to override or bias. But also, I was going to ask you, Chris, you hit the point before I got a chance to ask you, which is the feedback loop. I guess my follow-on question is,

How has that feedback loop been working in terms of maybe skeptics having become more sort of AI friendly or more trust having been formed between, you know, humans and AI and, you know, any anecdotes you can comment on that?

Absolutely. So I'll give you a great anecdote from one of our plants in the southern U.S. In fact, this was the plant where we did our final pilot for LiveLine before we made the decision as a company to go do a global rollout. And we first had the line running in what we call automatic mode. Gosh, I think it was about Q3 of last year. And one of the criteria for the pilot was that we would do some A versus B runs.

The concept is very simple. For these four hours, we're going to run with the system engaged in automatic mode. For these four hours, we're going to turn it off. And you all can run the plant like you always do. And then over a series of days and weeks, we'll add up the statistics about scrap rates and quality and unplanned line stops. And we will quantify exactly what the value was. And we came to the first review point a few weeks into that.

And as they sat with the team, you know, they sort of, you know, pulled up chairs and looked at their shoes and said, hey, we have an issue. We don't have the B data with the system off. And I said, why is that? And they said, because once the plant turned it on, they refused to turn it off again. And they don't want to run with the system disengaged anymore because the impact was so significant to them and helped them to operate the lines better. They don't want to run with it off anymore.

And that was very consistent with the type of reaction we saw in other pilots in Canada and our tech center in Michigan. That is great. Yeah, that sort of feedback is very reassuring. But again, I think that from the get-go, having a philosophy of really just opening up and showing people what's going on, letting them look at data, be participants in problem solving and tuning and enhancement really sets the stage for that emotional connection and commitment to the project.
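The A-versus-B quantification described a moment ago is, mechanically, a grouped comparison of run logs. A minimal sketch of how those statistics might be added up, with invented file and column names:

```python
# Illustrative comparison of "system on" vs. "system off" production blocks.
import pandas as pd

runs = pd.read_csv("ab_runs.csv")
# expected columns: mode ("auto" or "manual"), parts_made, parts_scrapped,
# unplanned_stops -- one row per four-hour run block

summary = runs.groupby("mode").agg(
    parts=("parts_made", "sum"),
    scrapped=("parts_scrapped", "sum"),
    stops=("unplanned_stops", "sum"),
)
summary["scrap_rate"] = summary["scrapped"] / summary["parts"]
print(summary[["scrap_rate", "stops"]])
```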

Those seem like some very different ways of getting feedback to a system. And then the other one you mentioned was the idea of suggesting new tags or new data to come back in. Right. So I can see, for example, adjusting the bias being a real-time sort of feedback and clearly pressing the red button would happen immediately, I hope. And that's the red button. That's right.

How about some of these, how do the processes work for some of these non-immediate feedback? Like what do you do with these suggestions for new data and tags? How do those, is there a process around those? This is, by the way, sorry to interrupt your response. This is a...

Sam's, and to some extent my, chemical engineering background coming out. Well, and you can think of it, at least for Cooper Standard: the majority of our lines are chemical processing lines. We're taking different types of compounds and we're extruding them and, in the case of thermosets, putting them through oven stages. You know, 200 meters of process; a lot of it is chemistry as it goes. Yeah, so you guys are...

You're in your sweet spot. But so what's the process for, like, new data tags? How do you formalize that process? And that's something that's less real time. Sure. I'll give you a real example. We were doing a pilot maybe a year ago,

And we had a process engineer, who's not a machine learning expert, watching the system run, looking at the data, looking at the analysis that the machine learning was generating and how predictable the outcomes from the line were. And

At that stage, we weren't getting the results that we wanted. And we were seeing variation in output in the real world that we weren't picking up and predicting in the silicon world. Right. And as he was watching the line, he said, look, I have a theory. I have a theory that there's something going on with one of the raw materials we're feeding the lines.

And my theory is that that material is more susceptible to the history of temperature and humidity that it's experienced as it was shipped to the plant. So why don't we throw some data logging devices on those pallets as we ship it around the country and be able to look at that time and temperature history and integrate that into the analytics and see if it would help us be more predictive.

And lo and behold, that was actually helpful. So that's a real-life example of a non-AI expert interacting with the system and using their human judgment to suggest ways to improve it, even though they can't write AI code, right? But once we had exposed enough to them of what's going on that they were able to get some human intuition about what's happening, then they were able to participate in the process. And that's a very powerful thing.
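The process engineer's idea translates directly into feature engineering: summarize each pallet's shipping-history log and join it onto the line data before retraining. A hedged sketch, with invented file and column names:

```python
# Illustrative sketch: fold pallet temperature/humidity history into the
# features used to predict line output.
import pandas as pd

line_runs = pd.read_csv("line_runs.csv")          # includes pallet_id per run
pallet_logs = pd.read_csv("pallet_loggers.csv")   # time series per pallet_id

# Summarize each pallet's environmental history into a few scalar features
# (assumes the loggers sample hourly).
pallet_features = pallet_logs.groupby("pallet_id").agg(
    max_temp_c=("temperature_c", "max"),
    mean_humidity=("humidity_pct", "mean"),
    hours_above_30c=("temperature_c", lambda t: int((t > 30).sum())),
).reset_index()

# The joined columns become additional model inputs alongside the line sensors.
enriched = line_runs.merge(pallet_features, on="pallet_id", how="left")
```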

Chris, I want to ask you about talent. You know, you've been talking about a lot of innovation, a lot of cool ideas, different groups, internal and external, coming together to really experiment with new things, try new things, make really game-changing impact.

What do you think it takes to get the right talent, motivate them, keep them excited and sort of get that virtuous cycle of excitement and energy and innovation going?

Yeah, that's a great question. I think the answer may be a little different depending on what sort of technical talent you're talking about. And the way that we would think about a manufacturing process engineer or controls engineer may be a bit different from how we think about folks with different skills in the world of AI. And sometimes the talent is in different places in the country.

So I'm not sure there's a one size fits all answer. I think, you know, in general, when we find people that we would like to bring into the company, I think if we can show them that the sustained commitment to innovation and doing cool stuff is real, that helps a lot.

So I think being able to prove to people that you're willing to stay the course in what you're investing in is part of the story. And then the second thing that I think is important is just the culture.

Having people believe that, you know, in addition to, you know, investments in resource availability, we're just serious about being innovative. We're just serious about doing things better. We're serious about winning through technology from the boardroom to the shop floor. And if that culture is real, people know it. And if it's not real and you're faking it, I think people know it.

You can't earn that in a quarter. You got to earn it over a couple or several years. I like to think that we've done a pretty good job with that, but that's really key in my mind.

Chris, many thanks for taking the time to talk with us. You brought out some really quite interesting points. Thanks for taking the time. Yeah, Chris, thank you so much. You're more than welcome. Hopefully you can tell I'm excited about AI. I'm excited about what it can do in manufacturing as well as other industries. I think it's going to be a fun future and I'm looking forward to helping build it.

Shervin, Chris covered quite a few key points. What struck you as particularly noteworthy? Yeah, I thought that was really, really insightful. I mean, obviously they've done a ton with AI and a lot of innovation and cool ideas, and they've put many of those into production. I felt a lot of the key sort of steps in getting value from AI that we've been talking about were echoed in what he talked about: the notion of experimentation and test and learn, and

the importance of allowing folks to try ideas and fail fast and then moving on; the notion of focusing on a few things to scale: focusing on a lot to sort of test and prototype, but a few to scale and invest behind. I thought that was really interesting.

I thought Chris also had a nice blend of both excitement and patience. I mean, they're clearly excited about some of the things they're doing, but at the same time, you know, some of the initiatives were taking two years or so to come to fruition. That has to be hard to balance, being excited about something and then also waiting two years for it to come out.

I thought that was a nice blend. Yeah, and also to that point, the importance of focus, right? I mean, once you've picked it and you've decided that this is the right thing to do and you're sort of seeing it progressing towards that,

realizing that it's not time to give up now. You just have to mobilize and double down on it. The one thing that really struck me a lot was the importance of culture and how he said from boardroom all the way to middle management, they have to believe that

we're behind it and we're investing and this is not just a fad, and that has to be sort of permeating across the entire organization to keep talent really excited and interested. - And it went down even into the people who are using the systems. I thought that was a beautiful example of people who may have been so busy

trying to just get their job done that they couldn't step back and think a little bit. And he gave a great example of how that freedom of having the machine do some of the work lets the human do things that humans are good at. He covered almost all the steps in our prior report about the different ways of people working with machines. We didn't prompt him for that.

The other thing I really liked was his view on talent. You know, I asked him, what does it take to recruit and, you know, cultivate and retain good talent? And he said, it's not a one size fits all. And that recognition that not all talent is...

of the same cloth and different people, different skill sets have different sensibilities and they're looking for different things. But the common theme of, you know, people that go there want a continuous, you know, focus and commitment to innovation.

and they want to see that. And maybe that's the common thing. And then, you know, data scientists and technologists and chemists and engineers might have different sort of career paths and career aspirations, but they all share in that sort of common striving for innovation. Yeah, I don't think he mentioned it, but Chris is a Techstars mentor, and I'm sure that some of that background also influences the way he thinks about different people and different ideas and how that talent can come together. Yep, that's right.

He didn't mention it, but that's true. Thanks for joining us today. Next time, we'll talk with Wei-Ming Ke about how the Home Depot continues to build its AI capabilities. Please join us.

Thanks for listening to Me, Myself, and AI. If you're enjoying the show, take a minute to write us a review. If you send us a screenshot, we'll send you a collection of MIT SMR's best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback at mit.edu.