Today we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts. Digital twins? Generative AI for engineering?
On today's episode, find out how one petrochemical company upskills its workforce to benefit from new tech like generative AI. I'm Ellen Nielsen from Chevron, and you're listening to Me, Myself, and AI. Welcome to Me, Myself, and AI, a podcast on artificial intelligence and business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of analytics at Boston College.
I'm also the AI and business strategy guest editor at MIT Sloan Management Review.
And I'm Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.
Hi, everyone. Today, Sam and I are speaking with Ellen Nielsen, Chief Data Officer at Chevron. Ellen, thanks for taking the time to talk to us. Welcome to the show. Thank you for having me. I'm really excited to have a very cool conversation today.
Let's get started. I would imagine most of our listeners, in fact, all of them have heard about Chevron. But what they may not know is the extent to which AI is prevalent across all of Chevron's value chain. So maybe tell us a little about your role and how AI is being used at Chevron.
Maybe I'll start with my role. It started three years ago; I was the first chief data officer within Chevron. That doesn't mean we hadn't been dealing with data for a long time, but the need to put more focus on the data was starting to emerge. With that, I was tasked with evangelizing data-driven decisions, and that, of course, includes data and any kind of data science and analytics along the way. It has been very interesting to see it grow over time. We use AI in many places. In some areas we use robots, for example, in tank inspection today. You can imagine that was very cumbersome when humans were involved; now we do this with robots, and we basically take human beings out of these confined spaces. It's a combination of computer vision: taking images, comparing the images, and making predictions about the status of the tank and the equipment. Is it rusting? Does it need maintenance? Do we need to tackle it in a predictive way? So that's operating in a much more reliable and safe way in the future.
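To make the tank-inspection idea concrete, here is a deliberately simple sketch of one way such a computer-vision check might work: flag an image for maintenance review when too many pixels fall in a rust-like color band. This is not Chevron's actual system; the file name, thresholds, and color heuristic are all invented for illustration, and a production pipeline would use trained models rather than a color rule.

```python
import cv2
import numpy as np

def rust_fraction(image_path: str) -> float:
    """Fraction of pixels whose hue/saturation fall in a rough
    reddish-brown 'rust' band. Thresholds are purely illustrative."""
    bgr = cv2.imread(image_path)
    if bgr is None:
        raise FileNotFoundError(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 60, 40])     # OpenCV hue runs 0-179;
    upper = np.array([20, 255, 200])  # reddish-brown sits near 0-20
    mask = cv2.inRange(hsv, lower, upper)
    return float(np.count_nonzero(mask)) / mask.size

frac = rust_fraction("tank_section_042.jpg")  # hypothetical robot image
# Escalate to a maintenance work order above an assumed 5% threshold.
print("flag for inspection" if frac > 0.05 else "ok", f"({frac:.1%} rust-like)")
```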
The other example is sensors in compressors or any kind of equipment. In the past we were, of course, installing them, but the prices for those sensors and for data collection have dropped dramatically. And I recently saw a citizen-developed application that was created because these sensors have to be installed: when you install one, you basically scan a QR code, and with one click you can attach the geospatial location to the sensor. Then you can see all the sensors installed in your facility on a map, so you can actively see what's going on, where things are working, and which sensors have been inventoried. So we have a combination here of computer vision, citizen development, and, of course, using these sensors in a machine learning, AI-based way to come to predictions.
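A hypothetical sketch of that registration flow: the sensor ID decoded from the QR code is paired with the installer's GPS fix and written out as GeoJSON, which any web map can render. Every name, field, and coordinate below is an assumption made for illustration.

```python
import json
from dataclasses import dataclass

@dataclass
class SensorRecord:
    sensor_id: str  # decoded from the QR code on the device
    lat: float      # geolocation captured at install time
    lon: float
    asset: str      # the equipment it is mounted on

def to_geojson(records: list[SensorRecord]) -> str:
    """Emit a FeatureCollection the facility map view can plot."""
    features = [{
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [r.lon, r.lat]},
        "properties": {"sensor_id": r.sensor_id, "asset": r.asset},
    } for r in records]
    return json.dumps({"type": "FeatureCollection", "features": features}, indent=2)

registry = [SensorRecord("SN-00417", 29.7355, -95.2336, "compressor-7")]
print(to_geojson(registry))
```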
So one of the things I know that you do quite well is digital twins. Maybe you can comment a little bit on that example? Digital twins are one of many examples of where we use this. What triggers a digital twin? One trigger, you can imagine, is that we have people out in the field, and we want to make their lives easier and safer. That means the more data and information we can gather about our field assets and how to operate them, the better we serve the purpose of being safer and more reliable in operations. The second trigger is that you collect a lot of information from, let's say, Industrial Internet of Things (IIoT) devices and sensing, and that feeds into another pool of information from which you can drive predictive decisions about these assets. With the digital twin, we want to serve both: we want to be safer and more reliable, but also more predictive in what we do, which speaks to efficiency, doing the right thing at the right time. Can you give us a specific example of a place you're using a digital twin? How does that help with safety? How does it help with efficiency?
Say you digitally twin a facility, a refinery. In a refinery, you can imagine, there are lots of pipes, lots of equipment, compressors, generators, things working very mechanically, and people have to maintain all of it to get the products out. Looking at the value chain, materials come in and product comes out, and everything in between goes through this refinery. If you have everything digitally twinned, you can plan better and operate better: you know when things are coming in, and you can better predict how to get a better output. That's basically how we do it in the refineries and facilities where we operate, really looking at the flow of information and at data-driven decisions. We were always driving decisions with information. In the past, that information lived more in the heads of very experienced people, sometimes augmented, of course, with equipment information, but it was collected manually or pieced together. And I think with the digital twin,
we have the information right at our fingertips to drive this.
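The core mechanic Ellen describes, a digital object that mirrors a physical asset's state and can be queried before anyone touches the plant, can be sketched in a few lines. The vibration limit below echoes the ISO 10816 style of threshold but is used only as a stand-in rule, and the asset tag and readings are invented; a real twin would run trained models over far richer data.

```python
from dataclasses import dataclass, field

@dataclass
class CompressorTwin:
    tag: str
    history: list[float] = field(default_factory=list)

    def ingest(self, vibration_mm_s: float) -> None:
        """Update the twin from the live sensor feed."""
        self.history.append(vibration_mm_s)

    def needs_maintenance(self, limit: float = 7.1) -> bool:
        """Flag when the recent average crosses an assumed limit."""
        recent = self.history[-5:]
        return len(recent) == 5 and sum(recent) / 5 > limit

twin = CompressorTwin("C-301")        # hypothetical asset tag
for v in [6.2, 6.8, 7.9, 8.5, 9.1]:   # simulated readings, trending up
    twin.ingest(v)
print(twin.needs_maintenance())        # True: schedule work before failure
```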
That's a great example. I think long-time listeners will know that both Shervin and I are chemical engineers, but you may not know that I'm no longer a chemical engineer, partly because I got so attracted to the idea of simulating chemical processes. We figured out that we didn't have to build a little process to test something; we could build it on the computer and test it there. That was a long time ago, with some really ugly tools back in those days. I'm guessing, or hoping, that you're far more sophisticated than that. I think it's an always-evolving space, but I'm really excited about the opportunities. I can imagine, you know, when you had a catalyst or raw materials to test, you had to plan it with production: hey, you have to stop production, you have to test it in real time. That took away output. Now you can simulate things in a much more efficient way, with the specifications at hand, without doing it physically anymore. At my past company, I was also in a world where we had to test things physically in labs over and over again. I think those times are mostly over; it's becoming simulated in a much better way.
Ellen, can you give us another example, maybe around exploration or extraction, something that also used to be quite experiential and expensive and dangerous without the data and AI? I think we have a great example there that the company actually posted at the end of last year. When you think about oil and gas, you think about how to get more out of a reservoir. You want to get the best out of a reservoir, and to do it in a very efficient, responsible way. You can imagine that without the computing power and without the data at hand in digital form, collecting and using that data was quite cumbersome. I cannot imagine how people did it in the past; they may have been printing things out and laying them on top of each other and coming up with assumptions based on their experience. And of course, they gained a lot of experience. Now we do this with machine learning algorithms. We understand what the rock composition is. We actually even created a "Rockopedia" to record the different rock conditions and compositions so that we can tap into this data whenever we need it.
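What a "Rockopedia" lookup might look like in miniature: catalog known rock types as composition vectors and match a new sample to its nearest entry. The mineral fractions and catalog entries below are invented, and nothing here describes Chevron's actual catalog or matching method; it is only a sketch of the idea.

```python
import numpy as np

# Invented mineral fractions: [quartz, feldspar, clay]
ROCKOPEDIA = {
    "sandstone": np.array([0.70, 0.15, 0.15]),
    "shale":     np.array([0.25, 0.10, 0.65]),
    "limestone": np.array([0.05, 0.05, 0.10]),  # remainder mostly calcite
}

def closest_rock(sample: np.ndarray) -> str:
    """Return the catalog entry with the smallest Euclidean distance."""
    return min(ROCKOPEDIA, key=lambda name: float(np.linalg.norm(ROCKOPEDIA[name] - sample)))

print(closest_rock(np.array([0.65, 0.20, 0.15])))  # -> sandstone
```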
Yeah, and I think there's a bigger theme here: with the advent of these technologies, the sky's the limit. So the question is, how else can you apply it, and what else can you do with it? And I think this brings me to a question around mission and purpose, because there's obviously a ton of data, there are obviously a lot of tools, and the use cases are driven by the mission. What are some of the things you want to do with all that? Yeah, I would link it back to our strategy at Chevron: higher returns and lower carbon, safely. This is our guiding principle. Everything we do should, of course, benefit the success and the impact of the company, but we also have to do it in a lower-carbon way. We know the world will look different in a few decades. We look after methane, we look after greenhouse gas emissions, we look after our carbon footprint overall. This is something we always tackle, and data and AI play their role there. They also play a role in how we operate, and how we operate safely. Safety is a big component of Chevron's value system. When you think about the future, about AI and robots and digital twins and all of that, there is technology out there that can help our people do their work more safely, more reliably, and in new and better ways.
What's interesting to me about Chevron, or any company that's predominantly an engineering and science company, is when AI is put in production to augment some of the decisions and insights that workers and engineers and scientists are making. But, you know, as an engineer, as an operator of these plants, I may not quite agree with it. I don't know whether this resonates. How do you get scientists and engineers comfortable using these tools? I think it actually helps, because engineers have a very logical mindset and they know the science. We have a lot of science people in the company, so when you talk about data science and the things behind it, we have many people very interested in learning data science. And we have started to provide education. So, where do I start?
You start with learning: hey, I don't understand this. That's a typical engineering mindset: I don't understand it, I want to understand it, I'm looking for what it tells me and how it can influence my solution. We have had a digital scholarship program for a while now, which we actually run with MIT. We have cohorts going for a year, and they're not coming out of one department; they're really coming from across the whole company, going through a design engineering master's in one year, which is a tough thing to do. They come back understanding the new technology and how we can use things differently. They are the first to go back into their normal environment, influence others, let other people benefit from their knowledge, and venture out into things they maybe have not tried before. So that's one way to influence culture.
The second thing, in the data science space: we started to work with Rice University on a six- or seven-month program that also goes across the company; it's not only for IT people to learn what data science means. And they bring it back to their environment. They don't leave their role completely; they go in for six or seven months and then return, in the best way possible, to influence the company: hey, here is what's possible. The last piece is maybe the broadest, because we call it citizen development. We believe that many, many people in a company now get these tools in their hands with the evolution of AI; we just saw that gen AI is now in the hands of everybody who wants it. With citizen development overall, we want to bring innovation and this technology, which is becoming much easier to use, to many people. Of course, they need data for this, and that's why we provide the data in these systems so they can be more self-sufficient. So I would say it's a three-pronged approach to influencing culture and leadership, and we have really nice cases in AI citizen development. We also talk publicly about certain use cases we do. I think that's the culture piece.
It takes a while, you know, to get into every artery of the company. But I feel there's real excitement in the company right now to go down that road. What I like about what you're saying is that
you're actually doubling down on the predominantly engineering and scientific culture of the company and making this a cross-disciplinary collaboration between science and engineering and AI, versus any of these replacing each other. It's an and, not an or. Is there a specific example where someone has gone through one of these seven-month programs or the digital scholarship program and brought back something that's made a change, made a difference?
Yeah, definitely. We have many, because we are, I think, two or three years into this, and of course people bring it back and solve all sorts of issues. We even see this sometimes with internships: after two or three weeks, interns recognized they could solve a planning issue that we had been chewing on for some time. It was pretty complex, but with the new views and data and artificial intelligence, the outcomes were really stunning. And we actually have somebody really influencing the planning of our field development, creating a low-code environment that just breaks in and changes the way we work.
In terms of making the company more productive and more efficient, ensuring it's safe, ensuring that it does good for people and communities and the environment and species in all different forms: what has been challenging? What's hard? I would say there are definitely some challenging parts. This is an early-stage technology, especially gen AI. Things are moving very fast, so whatever you do today might be different in three months. The challenging part is that you cannot work the same way you maybe worked in the past; you have to pivot faster. It's not that you build a solution once and you're done. A company told me that they built a solution, and if they were building it again six months later, they would do it totally differently. So you have to watch when you, as I call it, put your eggs in a basket. You have to think about the right timing for each kind of use case and figure that out, because you don't want to lock yourself in while the technology is still at that stage of evolution.
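One common way to limit that lock-in, sketched here as a general pattern rather than as Chevron's practice, is to hide the model vendor behind a small interface so a use case can be re-pointed when the technology shifts. The classes below are illustrative stand-ins, not a real SDK.

```python
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorAModel:
    def complete(self, prompt: str) -> str:
        return f"[vendor A answer to: {prompt}]"  # would call vendor A's API

class VendorBModel:
    def complete(self, prompt: str) -> str:
        return f"[vendor B answer to: {prompt}]"  # would call vendor B's API

def summarize_report(model: TextModel, report: str) -> str:
    # Business logic depends only on the interface, so swapping vendors
    # three months from now is a configuration change, not a rewrite.
    return model.complete(f"Summarize for an operations review:\n{report}")

print(summarize_report(VendorAModel(), "Compressor C-301 vibration trending up."))
```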
This is something we watch. The second thing is that not everything in terms of security, or handling data in the right way, is solved yet in generative AI. The technology is just not ready; there are no off-the-shelf solutions yet. You can build a kind of sandbox, a kind of fenced environment, but you have to build the fence yourself. I think the hyperscalers like Microsoft and others are working on adapting those use cases into their normal landscape, where you have an authorization process, an access process, and a way of administering and governing all of this the right way. That is, I would say, still missing. I'm very hopeful this gap will be closed very fast, but today you have to pull in different technologies, such as a vector database, to talk a little tech language here. It's not all ready to be used at really wide scale very safely. And you have to imagine that in a corporation there are access rights in terms of what information can be shared, what should not be shared, and so on. That's something we see as a challenge. The third challenge I want to mention is the policymakers. We follow this very closely through responsible AI; we are a member of the Responsible AI Institute, and we're watching very carefully what's happening there: what kinds of policies are coming around the corner, and how we incorporate them responsibly into our operations and into our productization of AI models. That's, of course, an evolution; it's not something you can buy and just run. We'll see how companies fill these gaps.
Ellen, can you comment on generative AI, and if and how it's being used or planned to be used? Yeah, absolutely. We have been following generative AI for two years or so, maybe a little longer, so we were not totally surprised by the development. Maybe the one surprise for everybody was ChatGPT coming so fast. But we were watching this and had already done some use cases in a kind of innovation sandbox environment to see what it would be. When it came out, we said, OK, this is new technology and we want to understand it. We put it into people's hands to use and then looked at the telemetry: what do we use it for, and how does it resonate? In May or June, we decided to put a more dedicated team together on those activities. We now have hundreds of use cases in the pipeline, which we down-select to the most prominent ones and approach those. Technology-wise, we are, I would say, very much on top of what's going on, and we have super smart people working on it. I can tell you my own use case: I use it for writing things down. Say you're
maybe writing your performance agreement with your supervisor or with your team, or you check presentations or documentation you have to produce, to really optimize the writing. I know my team is using it because we think in terms of product development, product management, and portfolio management, and in the past it took them much longer to write down their thinking. I talked with one of my team members, and she said, you know, in the past it took me maybe one or two weeks; now it takes me one hour to get this done. So there is a lot of efficiency in using, let's say, ChatGPT in this space. Looking at other examples, you can imagine we have knowledge databases and knowledge systems around systems engineering and other information available within the company on a very broad scale. In the past, if you wanted to know how a generator works, you basically had to type in search criteria, and finally you found a document. Then you had to read the document. Oh, this document is not enough; you need another document. OK, you find the second document, and then you piece together your answer and go execute on it. We have created a chat system where you can converse with this kind of information and figure things out much faster. So those are maybe two examples: one from daily work, and one more related to how we work in a systems approach.
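The retrieval step behind such a chat-with-your-documents system can be illustrated with a toy example: rank knowledge-base chunks against the question and hand the best ones to a language model as context. TF-IDF stands in here for the vector database mentioned earlier, and the documents and equipment names are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Generator G-12 startup: verify lube oil pressure before engaging.",
    "Generator G-12 shutdown requires a 10-minute cooldown at idle.",
    "Tank T-4 inspection interval is 24 months under normal service.",
]

def top_chunks(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    vec = TfidfVectorizer().fit(docs + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(docs)).ravel()
    return [docs[i] for i in scores.argsort()[::-1][:k]]

context = "\n".join(top_chunks("How do I start generator G-12?"))
# This prompt would go to the LLM; printing it stands in for that call here.
print(f"Answer using only this context:\n{context}\nQ: How do I start generator G-12?")
```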
If I combine some of your ideas, I see some difficulties. Earlier, you were talking about citizen developers and the idea of putting a lot of these tools in people's hands, and later you were talking about problems of security and policy that are not yet part of the infrastructure. Historically, security always follows features: we care about features first, and then we care about security. So we have the combination of a widespread proliferation of tools among citizen developers, low infrastructural guardrails or policies, and then concern about an inability to fast-follow. Those seem like they could smash together and create a lot of tension. How do you navigate that?
Yeah, I would say we have to talk about AI in general and then generative AI separately. When I talked about policymakers, that was more from the generative AI perspective. When you think about citizen development, we have models and algorithms in the box that we have proven and secured; they have gone through a review process, and we have checked them in terms of responsible AI. They are ready for any citizen developer who wants to use them: they are secured and safe, and they sit in our safe environment. So you can already start there and make it safe. But the new technology coming with gen AI, these large language models and the data behind them that the models learn from, is maybe not ready yet to put into a citizen development setting. To make this very clear: when I talk about citizen development, everything is secured, the telemetry is there, the space is there; we have ensured that we do the right thing. This is made available for everyone in the company. The other things, which are maybe not secure yet, we are not putting into the system. We are waiting. We simply cannot afford to have unsecured things in our citizen development program.
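That gate can be pictured as a registry that citizen developers pull from, where only reviewed and secured entries are offered at all. The fields and entries below are, again, assumptions made for illustration rather than Chevron's actual tooling.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedModel:
    name: str
    reviewed_by: str       # responsible-AI review sign-off
    security_cleared: bool

REGISTRY = {
    "anomaly-detector-v3": ApprovedModel("anomaly-detector-v3", "RAI review board", True),
    # An unvetted public LLM is deliberately absent: not secured, not offered.
}

def checkout(model_name: str) -> ApprovedModel:
    """Hand a model to a citizen developer only if it passed review."""
    model = REGISTRY.get(model_name)
    if model is None or not model.security_cleared:
        raise PermissionError(f"{model_name} is not approved for citizen development")
    return model

print(checkout("anomaly-detector-v3"))
```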
Yeah, that brings out a nice differentiation: citizen data science can't just be anything goes. There's a curation process going on, and it sounds like you're pretty active in that curation process, deciding which tools go to citizen developers and which tools you're still investigating and protecting. That makes sense. Yes, exactly. Chevron is obviously a giant petrochemical company, operating worldwide; everyone knows it. And you're the chief data officer. How did you get there? Tell us a little about your history and how you got to this role.
Yeah, I'm happy to be in this role; it's a super exciting area that I've always been passionate about. To follow my career from the start: I'm from Germany. I did a systems engineering degree and then ventured into digital and data, and later into procurement and supply chain. I think the big common thread throughout my whole career is the data part, but of course in different ways, you know? When I ventured into supply chain, you deal with a lot of the company's money spent with third parties. How do you organize that? There's a lot of data, and a lot of strategic thinking, involved in how you do that. And I would say I'm a learner, a humble learner. I like to embrace new things and very diverse perspectives for the good of the company. Maybe it's just by coincidence that I got into this role. When I joined Chevron five years ago, I started in the procurement space, because I have a procurement leg and a data-and-digital leg, I would call it. We tackled data right away, because the data was not sufficient to drive those decisions, and maybe the first two years proved me right that it's possible. I'm also a big believer that data and AI will be
all around us. So this is an exciting space to be in, to learn in, and to see what's coming next. I'm just happy to be here. Actually, when I once told a former executive, not at Chevron, how lucky I'd been with all the opportunities in my career, he said, Ellen, you are not lucky. And he sent a book to my home. You basically condition your path: you're open to things even when you think they're not on your direct trajectory, but they really enhance your skills and how you connect the dots. I like connecting the dots, and that's why I'm enjoying this role. That's a great story. OK, now we have a series of rapid-fire questions. Just tell us the first thing that comes to mind; they're kind of speed-dating questions, maybe. Okay.
What do you see as the biggest opportunity for AI right now? Healthcare. What is the biggest misconception about AI? That it will replace human beings. What was the first career you wanted? What did you want to be when you grew up? I didn't want to sit at a desk. I failed. AI is being used a lot in our daily lives. When is there too much AI? I would say there's too much AI when it guides me in the wrong direction and influences me in a way that is not based on the real facts. I already have too much AI in my car, because I cannot open the garage: it recognizes where I am and which door it has to open, and if it doesn't work, I can't get in. I enjoy this. We have a pretty smart home here, with all kinds of voice recognition, electronics, garage-door openers, sprinkler starters, and whatnot. I would say it helps us be more efficient, but if the network is down, that's really hard, you know? That's right. That's right. So, last question: What is the one thing you wish AI could do right now that it can't? Cure cancer.
Very good. It seems like every week there's a headline that some new AI thing is going to solve cancer, and then you look back and none of them seem to pan out. I'm not saying we should quit trying, but it's always the example, and it seems like it never quite gets there. But it's a little bit of a stochastic process too, right? If you take enough trials at it... I mean, we are surely trying a lot more things because of AI and our ability to experiment. Can I answer it maybe slightly differently? I think the other thing AI maybe cannot do yet, which would be great, is really helping us with the climate transition, the climate questions we have on this planet. It helps here and there, but it would be fantastic if it could help more. Yep. At the same time, though, I don't think we can abdicate and just hope the machine solves some of the problems we have created. I think it's going to take both of us, humans and machines, working together on that. That's OK. That's part of the hope.
Is there anything about artificial intelligence that you're excited about? What's the next thing coming that you're most excited about right now? Hmm, good question. I think we want to improve our lives, you know. Where I live right now, we are very privileged; we already have AI access in many ways. We just talked about our smart homes and cars and so on, but that isn't true for everybody in the world. It would be great if those advances and those benefits were more broadly available.
You didn't ask me, Sam, but I totally agree. Think about education, for example, and the impact AI can have on underprivileged communities and nations: they don't need to have a school set up anymore. You could do so much, help so many people learn and develop and build skills that would normally rely on infrastructure, physical presence, and teachers, and all that.
You'd think I'd be threatened by that, but I'm not one bit. I think that's our biggest opportunity. We have so many people, and we just cannot get them all through education programs. And the education programs we have are not particularly optimized or fast. If we could solve that problem and get better resources out of our brains, that would be a huge win.
Hey, Sam, can I ask you a question? I know I'm turning this around now, but the shelf life of knowledge is decreasing, right? There were some recent articles suggesting that what you learn today is maybe good for five years and then it's kind of obsolete. How do you think this will evolve in, let's say, the education system?
That's huge, because I think about that. I teach a class in machine learning and AI, and I am acutely aware that unless students are graduating the semester I teach them, the specifics we're teaching are likely to be quite ephemeral. We've seen how rapidly this evolves. But I think that pushes us to step back and operate at a higher level. If we slip into teaching a tool, teaching how to click File, then New, then Open, then Save, those are very low-level skills. When we think about what kinds of things we should be teaching: my university is a liberal arts university, and I think that's a big deal, because we're teaching technical skills within a world of liberal arts. We had data science as the sexiest job of the 21st century; it's not clear to me that data science is involved in the next one. And it's not that data science isn't important; it's just rapidly becoming commoditized. So then things like philosophy and ethics become more important as the cost of data science drops. Linguistics. Linguistics, yeah. There you go. Or large language models, right? Yeah.
Wonderful. Ellen, thank you so much. This has been so insightful, and we thank you for making the time. Yeah, thank you. Thanks for tuning in. On our next episode, Shervin and I venture into the use of AI in outer space with Vandi Verma, chief engineer of Perseverance robotic operations and deputy manager at NASA's Jet Propulsion Laboratory. Please join us.
Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, learn more about AI, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We'll put that link in the show notes, and we hope to see you there.