Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts.
You might not often hear terms like empathy and design thinking when talking about AI projects, but on today's episode, find out how one pharma company's AI center of excellence takes a holistic approach to technology projects. I'm Tonia Sideri from Novo Nordisk, and you're listening to Me, Myself, and AI. Welcome to Me, Myself, and AI, a podcast on artificial intelligence and business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of analytics at Boston College.
I'm also the AI and Business Strategy Guest Editor at MIT Sloan Management Review.
And I'm Shervin Khodabandeh, senior partner with BCG, and I co-lead BCG's AI practice in North America. Together, MIT SMR and BCG have been researching and publishing on AI for six years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.
Today, Shervin and I are joined by Tonia Sideri, head of Novo Nordisk's AI Center of Excellence. Tonia, thanks for joining us. Welcome. Let's get started. First, maybe you can tell us what Novo Nordisk does? We are a global pharma company. We are headquartered here in Denmark, and we focus on producing drugs that support patients with chronic diseases such as diabetes, obesity, hemophilia, and growth disorders.
We are a 100-year-old company that is still growing a lot, yet still very committed to our original values and to our social responsibilities. More than 34 million diabetes patients use our products, and we produce more than 50% of the world's insulin supply. Currently, you lead the AI Center of Excellence. So what is an AI Center of Excellence? What is your role there? What does that mean?
An AI Center of Excellence can have different flavors in different companies, but what we do is this: We are a central team located in the company's global IT. We are a group of data scientists, machine learning engineers, and software developers working via a hub-and-spoke model across the company.
We want to minimize the distance between ourselves and the experts in the company, our data and domain experts, by working in cross-functional product teams across the company. We also want to increase the speed at which we go from a POC of a machine learning model to production. And that's why we
have analytics partners working across the company, and we also have an MLOps product team focused on creating microservices across the whole machine learning model lifecycle. We want to take all the petabytes of data we consume as a company, all the way from molecule identification to clinical trials to commercial execution, production, and shipping of products, and take them from databases, from flat files, from cloud storage,
and convert them into something that is ultimately useful for the company and ultimately supports patients' lives. That's what we are here for. We want to bring this data to life.
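In miniature, "bringing data to life" is just this conversion step: raw records in, decision-ready numbers out. A minimal, hedged sketch in Python; the schema, column names, and figures below are invented for illustration and are not Novo Nordisk's actual data:

```python
import csv
import io

# Hypothetical raw export; real pipelines would read from databases,
# flat files, or cloud storage rather than an inline string.
RAW_CSV = """region,product,units_shipped
EU,insulin,1200
EU,insulin,800
NA,insulin,1500
"""

def load_records(text):
    """Parse a raw CSV export into typed records."""
    return [
        {"region": row["region"], "product": row["product"],
         "units_shipped": int(row["units_shipped"])}
        for row in csv.DictReader(io.StringIO(text))
    ]

def units_by_region(records):
    """Aggregate shipped units per region: raw rows become usable numbers."""
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0) + r["units_shipped"]
    return totals

print(units_by_region(load_records(RAW_CSV)))  # {'EU': 2000, 'NA': 1500}
```

The real work is the same shape at petabyte scale: extract, convert, and serve something a decision-maker can act on.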
We are around one and a half years old as a team, and we already have projects across the company. We're working with R&D, for example, using a knowledge graph to identify molecules for insulin resistance. We have deployed different marketing mix models and sales uplift recommendation models across different commercial regions.
And last but not least, we have recently deployed a deep learning model that performs visual inspection on our inspection lines. That's very important because, while it's an optimization of an existing process, it gave us a lot of skills in how to run live machine learning models in a very regulated setup, a GMP (good manufacturing practice) setup. How does that work? Tell us more about that. That seems quite interesting.
We had already been using visual inspection for the last 20 years, with a rule-based approach that we had optimized over time.
And now we have used different deep learning models to improve it. With deep learning, we are increasing the accuracy and the efficiency of the visual inspection process, thereby increasing quality and reducing the amount of good product going to waste due to particles being wrongly identified as defects. So we save product, run the process more efficiently, and produce less waste of good product. But most importantly, what we got out of this project is the necessary capability of doing machine learning in very regulated spaces, for example, manufacturing or pharma. Tonia, you've been a big advocate of design thinking in building data products and AI products. Tell us more about what that means and why it's important.
Yes, and I think it started, first of all, because I used to be a data scientist myself. Quite often I found myself working on projects that I could see should have been killed earlier. So my interest here is how to speed up our time to failure. That's why, when we started the area one and a half years ago, we really committed to starting our projects with what we call a data-to-wisdom sprint: basically, a hackathon in which we work together with our business colleagues for a period of two weeks to see what we can find in the data based on specific hypotheses. At the end of these two weeks, we ask ourselves: Is there any signal in the noise? Are the data good enough? Do we have the necessary technology to scale it further?
And is there any business value in this? If the answer is yes, then we go to the next step, where we do a POC, then the implementation phase, and of course operations. But if the answer is no, then within two weeks, very quickly, we are able to kill it. During these two weeks we also use, with the help of our agile coaches, quite a few design thinking techniques. But for me, it's the outcome of the design thinking that matters: how to use design thinking as a way to work cross-functionally and as a way to fail fast.
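The end-of-sprint decision she describes is a simple all-or-nothing gate over four questions. A hedged sketch: the four criteria come from the conversation, but the function and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SprintOutcome:
    signal_in_noise: bool    # did we find any signal in the noise?
    data_good_enough: bool   # are the data good enough?
    tech_to_scale: bool      # do we have the technology to scale it further?
    business_value: bool     # is there any business value in this?

def gate(outcome: SprintOutcome) -> str:
    """Fail fast: proceed to a POC only if every criterion holds."""
    if all([outcome.signal_in_noise, outcome.data_good_enough,
            outcome.tech_to_scale, outcome.business_value]):
        return "proceed to POC"
    return "kill within two weeks"

# A sprint that found signal but no business value still gets killed.
print(gate(SprintOutcome(True, True, True, False)))  # kill within two weeks
```

The point of making the gate this blunt is exactly what she says next: a "no" on any question ends the project in two weeks instead of eighteen months.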
That's great. No wisdom, it gets killed. Sort of like natural selection, right? Joking aside, I think this is a great idea, because, Sam, how many times have we seen, either in our survey data from thousands of companies or in our conversations with executives, organizations doing hundreds of POCs and pilots with literally no value? There is truly what I call AI fatigue across the organization, because the whole organization has become this graduate-school lab of "let's try this, let's try that." So I love the idea of just killing the ones that aren't working so you can focus on a handful that are valuable.
Exactly. And for me, even from those that are not working, we get a lot of learnings, because usually the reason they're not working is related to data. At least we stress-test the data for two weeks based on what we want to achieve, and then we learn what we would need to fix in our data if we want to build this model in the future. Oh, that's fabulous, because that's actually tying back and learning from what you...
I mean, it's one thing to just cut a project off and say, all right, we're not going to keep dumping money into that if it's not going to work. But it's something else if you keep starting projects like that over and over again; there needs to be some learning about why those failed, or what you can do to improve them in the future. What kind of numbers are we talking about here? How much wisdom is there? Is there 2% wisdom, 20% wisdom, 97% wisdom?
I think it's very dangerous to try to quantify something like this, right? But one is the data wisdom and the other, of course, is the change management wisdom.
Because we work through this hackathon together with our business experts, even if something fails, they understand our way of working, we get a glimpse of their reality, and they get a glimpse of what can be possible. And I think this wisdom is even more difficult to quantify, because hopefully it will have more of a ripple effect across the company in the future. If you look at
the total opposite paradigm to what you're talking about, it's the old-school waterfall way of building these gigantic tech pieces, right? It was like tech development 20 years ago. I remember we did a project where we looked at 100 companies building massive tech products, and I think something like 80% of them were building features and functionality that either nobody needed or could not be used with the rest of the technology, and they would only find this out 18 months after development had started. What you're describing is a totally new way, but sadly there are still many organizations operating with that old paradigm, spending months on business requirements gathering and planning and all that. And I think what you're saying is,
let's get a good idea. Let's start testing. If it's got something there, then we double down and we make it big. But if it doesn't, then we've learned something. And if that project, that idea was important, then we could fix it. And I really, really like also your point around it's not just the technical part, it's also the change management and what it takes for it to work. It's really, really good.
Exactly. And by saying that in advance, we have no risk of failure, because that's how we work. We have two weeks, so it's not our reputation on the line if the project doesn't continue. And having gated steps even after the MVP phase, the ability to kill something there too, I think that helps, and so does the budget. The reason a lot of companies have these long projects is that they have long budgets allocated to them. In our case, we also assess whether there's any willingness to pay from our business side: Is what we do useful enough that the business is willing to invest in it?
Set the expectations up front. Sam, imagine it: You're a college professor, and your students come and say, "Professor, I'm warning you ahead of time, I will fail in two weeks." No, no, actually, it's the opposite, Shervin. I go in and say, "90% of you are going to fail." I don't think that would go over very well. Tonia, how do you transfer these learnings?
You mentioned that you do that. Is there a process for that? How do you codify it? How do you make these things explicit and not just lore? That's a good question. And as we grow, we still have to find the right level of codification that isn't bureaucratic. But what we do, first of all, during these two weeks, is hold two demos across the organization, especially with the business unit we are working with.
So at least that's the change management part from a broader perspective, not only for the people working in the product team. And regarding the data improvements or technology improvements, we bring those back to our data governance, to the data owners, or to our technology organization.
Okay, that makes sense. One of the things you talked about, and something that Shervin and I think we're seeing overall, is, let's say, an increase in maturity. I don't know, Shervin, maybe I'm reading too much into offhand comments that people are making, but I'm seeing much more process being put in place around what used to be very ad hoc. And maybe you're a couple of steps ahead of this, looking at some of your building-block approaches to making different services consumable. Can you explain how that works, how you're developing these building blocks, and how other people are using them?
Yes. So, of course, this idea of building blocks providing MLOps services, or data services in general, comes very much from the data mesh approach, and now it's the new hype. But especially for MLOps, what I can speak about is based on our learning of how long it took to get a machine learning model validated. Now we are creating microservices wrapping existing services, either open source or from our cloud vendors,
all the way from how we do model versioning, model monitoring, model validation, and ground truth storage, and then validating these services as qualified systems in a pharma setting.
And in that way, we reduce the time to market when we need to validate a GxP model, because then we don't expect every data scientist in the organization to build their own cloud solutions, to be a data engineer, a software developer, and a validation expert all at once just to bring a model into production. By using these pre-qualified, validated services, they can focus on data science and use them as components.
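The component idea can be sketched in a few lines: the platform team qualifies the versioning and monitoring services once, and the data scientist only supplies the model. A minimal, hedged illustration, where all class, method, and model names are hypothetical rather than Novo Nordisk's actual services:

```python
class ModelRegistry:
    """Stand-in for a pre-validated model-versioning service (qualified once)."""
    def __init__(self):
        self._versions = {}
    def register(self, name, model):
        self._versions.setdefault(name, []).append(model)
        return len(self._versions[name])  # version number

class Monitor:
    """Stand-in for a pre-validated monitoring service: logs every prediction."""
    def __init__(self):
        self.log = []
    def record(self, name, version, prediction):
        self.log.append((name, version, prediction))

def deploy_and_predict(registry, monitor, name, model, x):
    """The data scientist supplies only the model; qualified components do the rest."""
    version = registry.register(name, model)
    y = model(x)
    monitor.record(name, version, y)
    return y

registry, monitor = ModelRegistry(), Monitor()
# Toy "model": maps a raw signal to a defect score.
defect_score = deploy_and_predict(registry, monitor, "visual-inspection",
                                  lambda x: 0.9 if x > 0.5 else 0.1, 0.7)
print(defect_score)  # 0.9
```

The design choice mirrors what she describes: validation effort is paid once per component, not once per model, so each new model inherits the qualified plumbing.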
And we're just building the first service based on our learnings from this visual inspection model. This is such a great point. If you look at a typical data scientist in a company, there will be such wide variation in how much of their time is actually spent on data, on what you call extracting wisdom or patterns, or building and testing models, versus all the other stuff: the prep work, setting up the environment, feature engineering, things that somebody else has already done in another part of the organization. I want to ask you, Tonia, about talent. You're talking about a way of working that is driven by design thinking, fail fast, and highly interconnected with the business.
What is the profile, the right skill set, from a data science and engineering perspective, that will be successful in that environment? That's a good question. I think the technical skills, of course, should be a given. And I can see the market getting more and more mature over time, so it's easier to find those.
But what is more difficult is these other soft skills that make you a good value translator and a collaborator. And for me, the most important skill of a data scientist, something we don't usually expect from people in a technical field, is actually empathy: the ability to get inside the business person's mind and ask yourself, if I were a marketer, if I were a production operator who had to do this job every day and had the problems they have, how would I use the data in a way that would be useful to me? Being able to make this mental leap requires a lot of understanding of the other person's reality, and the ability to communicate as well. So, empathy and, of course, curiosity about the application of your machine learning models and about the other person. Those are very difficult skills to quantify or interview for. It's more a cultural thing
or a character trait. It's interesting, Shervin; we're seeing maybe the first indication that it's getting easier to find these technical skills. I think that's an interesting transition. Yeah, those have, as Tonia is saying, become the table stakes you need just to get started, but the real value is in the softer skills and empathy. And it ties well, Sam, to what we're seeing as well, which is that when we look at the
evolution of companies that are investing in AI, we see that technology and data will only get them so far; the big leap is all around organizational learning, interactivity with the business, and process change. To be fair to data scientists, there's still a big shortage of machine learning engineers, data engineers, and software developers. But for data science, because the field has become more mature technically, it's all the other skills that differentiate somebody. Tonia, what are you excited about next? What's coming with artificial intelligence? I mean, we're focusing on AI and machine learning. What are you excited about? What's coming down the pipe?
I'm actually excited about data. I know it's not so AI-related, but it ties to the new trend of being data-centric: In order to fix artificial intelligence and optimize it, let's optimize our data first.
We're also actually investing more in the data mesh concept now. For example, treating data as a product means that every time we want to build a new, let's say, marketing mix model, we don't have to go through the whole ETL again. Yeah, I know. I once did a study, 10 years ago, of a small group, maybe a couple of hundred people in one company, and something like 80% of their data scientists' time was spent on ETL. And yet they had a data engineering group.
And the irony of it was, since you're talking about marketing mix optimization, this was actually for the marketing department: You've got data scientists in two cubicles next to each other, working on something using exactly the same data pipeline but building it from scratch, both of them not even knowing that they're using the same foundational features. And yeah, that's a big deal. Tonia, I know you're excited about that, because you talk about it in terms of tech indulgence.
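The duplication Shervin describes is exactly what treating data as a product avoids: the pipeline is built and registered once, and every consumer reuses it. A hedged sketch; the registry, pipeline name, and numbers are all hypothetical:

```python
# A minimal "data as a product" registry: pipelines are published once
# and their output is cached, so the expensive ETL never runs twice.
_PIPELINES = {}
_CACHE = {}
etl_runs = {"count": 0}  # counts how many times the ETL actually executed

def register_pipeline(name, fn):
    """The producing team publishes its pipeline under a shared name."""
    _PIPELINES[name] = fn

def get_features(name):
    """Any consumer calls this; the ETL runs at most once per pipeline."""
    if name not in _CACHE:
        _CACHE[name] = _PIPELINES[name]()
    return _CACHE[name]

def marketing_etl():
    etl_runs["count"] += 1  # stand-in for an expensive ETL job
    return {"spend": [100, 200], "sales": [10, 30]}

register_pipeline("marketing-features", marketing_etl)
team_a = get_features("marketing-features")  # builds the features
team_b = get_features("marketing-features")  # reuses them, no second ETL
print(etl_runs["count"])  # 1
```

In Shervin's anecdote, both cubicles would call `get_features` against the same registered product instead of each rebuilding the pipeline from scratch.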
It seems very related. That IKEA effect, perhaps? Yes, the tech indulgence. For me, that's actually the worst sin we commit as technical people, because the IKEA effect is, I think, the tendency to assign higher value to something you built yourself.
And sometimes we tend to stay on a project because we built it ourselves, or because we think it's so cool to try the new machine learning algorithm. For me, this tech indulgence is the biggest danger you can have. That's why it's important to avoid this risk by working closely with the business and actually working in these product teams, from the hackathon all the way to an operational product team. I love that term, tech indulgence. Yeah.
Tonia, we have a segment where we ask a series of rapid-fire questions. Just answer with the first thing that comes to your mind. What's your proudest AI moment? I think this visual inspection project that we mentioned,
not only for the business impact, but especially for the capability it gave us in how to use machine learning in a GxP setting, how quickly we worked together as a team with our business experts and our manufacturing experts to make it possible, and how quickly we actually got it validated. I thought that might be your example, because of how animated you were when you were talking about it. We can see this on video, but I think it probably comes across in your voice too. What worries you about AI?
As probably everybody on the show says, it's how AI can be used as a way to replicate our own biases. On the other hand, I think technology also has the ability to decode these biases, because maybe it's easier to remove biases from technology than from people in the first place. So it's a double-edged sword, but it worries me that we can replicate our own biases.
Bias is a common concern for everyone. What is your favorite activity that involves no technology? Reading books, definitely. And I actually try not to use even my Kindle for that; I prefer a physical, 3D book. And I can really recommend one: I just finished Ishiguro's book Klara and the Sun, about an AI robot that lives with a family and starts developing feelings about this family. I can really recommend it. That sounds great. Actually, I need a new book.
I love that. My 12-year-old boy grew up in the age of Kindles and screens, and so the first time he got an old-school book from the library, he was like, "Dad, these books smell wonderful. What is this smell?" And I was like, yeah, it's an amazing smell that even a child of today's day and age can appreciate. What was the first career you wanted as a child? What did you want to be when you grew up?
It's very weird, but I wanted to be a garbage collector, to the surprise of my mother. Me too, me too. Really? It's a very rare chance to find a fellow one. Yes, fellow garbage collector enthusiasts. But I tend to think it's somehow related, right? You take something and convert it into something else, and we collect data and convert it into something else. Yeah, I'm sure there's some garbage analogy in there with data too. It's perfect. Yeah.
What's your greatest wish for AI in the future?
I would say for it to be really democratized, but I don't really believe it will get democratized anytime soon, because it requires so much conceptual understanding. But that's my real wish: that everybody not only has the tools but also knows how to use them. So by democratize, you mean everyone has access to those tools? Yes, and I think there are already so many platforms that can help with a low-code approach to AI. But it's not just having access to the tools; it's also being able to use them, having the right level of knowledge to use them independently. And I think that will take a lot of time, because it's not a tool thing; it's, again, a change management and educational thing. Tonia, great meeting you. I think that a lot of what Novo Nordisk has done with
systematizing and developing processes around machine learning and AI is something that a lot of organizations could learn from. We've really enjoyed talking with you. Thank you. Yeah, it's been really a pleasure. Thank you. Thank you. Please join us next time, when we talk with Jack Berkowitz, Chief Data Officer at ADP.
Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders. And if you join us, you can chat with show creators and hosts, ask your own questions, share your insights,
and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We'll put that link in the show notes, and we hope to see you there.