Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts. Concrete production, livestock, the Socratic method.
Somehow we talk about all three. Find out how these connect with AI in today's episode. I'm David Hardoon from Aboitiz Data Innovation, and you're listening to Me, Myself, and AI. Welcome to Me, Myself, and AI, a podcast on artificial intelligence and business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of analytics at Boston College. I'm also the AI and business strategy guest editor at MIT Sloan Management Review.
And I'm Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business. Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.
Welcome. Today, Shervin and I are excited to be joined by David Hardoon, who holds several senior positions at the Aboitiz Group. David, thanks for joining us. Thank you very much, Sam. Shervin?
Can you first tell us a bit about the Aboitiz Group? Where do you work? The Aboitiz Group is a hundred-plus-year-old conglomerate. It originated in Spain, in Catalonia, relocated to the Philippines, and started in the hemp business, but it's now quite diversified, the main businesses being power generation and distribution across the Philippines, financial services, cement, construction, utilities, real estate, airports, food, and agriculture.
They're now going through a transformation and becoming, I love this term, by the way, a techglomerate. What is Aboitiz Data Innovation? About seven years ago, give or take, the bank started with the whole digitalization of the banking services. And what that resulted in, as you would imagine, was a tremendous amount of data. The more you engage your consumers digitally, the more you have digital services. Well, surprise, surprise, the more data you have. And the question became, well,
How are we really using it? Are we using it? What's the best way to put it to good use? And that question kind of went also beyond just the bank into the rest of the business, because you can imagine power has a lot of data. Agriculture, airports, et cetera, has a lot of data. We were born with a very kind of on-point mandate, operationalizing data, operationalizing AI. Really, how do we put it to good use? What are some of these uses?
I mean, there's the usual financial side, where we all learn from hyper-personalization and financial crime. And don't get me wrong, that stuff always gets me excited. I spent a few good years in the financial regulator here in Singapore. But let me give you an oddity. Cement, you know, an industry that you wouldn't really associate with data or AI. We sat down with the CEO at the time and we said, look, even in the world of cement, you have a lot of data.
How can this work? So let me give you a little tidbit of how the world of cement works. And this was something new to me. Cement is actually like baking. I don't know if you bake, but it's like baking. You have mixtures, you have these kinds of formulas, and you end up with cement, which will have different types of properties. And these properties are what's absolutely critical depending on what you're planning to build, whether it's a mall, a high-rise, a low-rise, residential, and so forth.
Having said that, as with baking, you kind of need to do a bit of trial and error. You need to try out these different mixtures to make sure you produce the right one.
That results in operational overhead. It results in wastage. And as with baking, you stick this stuff into kilns. Literally, it's a furnace, to bake it. Using data, using the information that's coming from all the devices, the IoT, using AI, we're able to actually tell the bakers, or in this case, the chemical engineers, what is going to be the output of this mixture before they even start.
While at the same time, maintaining that quality control, which is absolutely crucial. Now, this is, by the way, not just hypothetical. This has already been operational for the last year in all the plants, about six plants in the Philippines. And it results in operational efficiency, a reduction in waste, and what I like to call quantifiable ESG: a 35-kiloton reduction of CO2 emissions. So that's a nice example,
an unusual example I like to give in terms of how data is used. Well, I can tell you Sam and I are going to love that. We're both chemical engineers. Oh, well, there you go. When you said baking, I did my PhD in catalyst synthesis. So I spent a lot of my time baking various aluminosilicates to create catalysts. And you're completely right. You try all these things, some work, some don't work. And
Had there been the ability for me to know ahead of time, I probably would have gotten my PhD in a tenth of the time. But seriously, this is quite interesting. Now,
If you go from personalization and cyber and fraud, and you also have this example in baking cement, then we must believe that there is such a wide portfolio of things that you're considering. So tell us more about what makes it into that portfolio, because there is no end to what you could do. What are the kinds of things you get excited about? Sure.
You're absolutely right. Being fortunate and working in a conglomerate, you kind of wake up every day and discover something new. So there are kind of two dimensions to it. On the one hand, and I'm going to go back to this term operationalization and operationalizing data and AI.
It's stuff that has to make sense to the business. So revenue, operational efficiency, risk management. And then we have to look at the things around the corner. We have to experiment. But those may not be things that get immediately deployed. Like, effectively, in agriculture, we have animals, we have swine and poultry. And as part of that process, you want to make sure that the animals have the best possible care provided to them. On the experimental side, we said, OK, how can we use technology that's already available
but may not have been put in exactly in this particular context, not in Southeast Asia. So we're using voice recognition and image recognition for pigs to help identify stress and detect illnesses. So that could be automatic alerts to the caregivers. What's the ground truth on that? That would be interesting. That's a great question. Like what's the training data?
So this is the amazing stuff. It's a very expressive animal. So when you actually go there with the people who take care of them, they can literally point out by saying, this animal is distressed. And you're constantly recording. We're kind of, okay, is this really something that's relevant? Does it make sense? Like, can we have that conversation with the baker, you know, the chemical engineer? Can we have a conversation with the animal keeper, the veterinarian and so forth, or the pole engineer when we're dealing with electricity cables?
It's extremely important. And that's one of the things that I realized throughout my career of doing data: where things failed was where you suddenly had this divergence between exploring scientific research, and I came from the world of science, you know, ex-academic, without really seeing that connectivity. And if we go all the way back, even to when radar was invented, the reason things fall apart is in the very, very small gaps of, well, it's not quite there. Oh, it's not quite usable. So that's the first cut.
Then the second level is seeing, well, is this something that's truly going to make a difference to our internal users, because that's extremely important. Many of the businesses within the group are actually B2B; again, with power, essentially we provide power wholesale. So it's our internal users in terms of, let's say, predictive asset maintenance, critically important. That is really fantastic. I mean, what you've said
is inspiring on so many levels. One is let your imagination be the limit, right? Because the question of can something be done better, more effectively, can you see around the corner and there is data, then yes. That's one thing that's inherent in all these examples that you gave. You started with what most would consider quite advanced and interesting things. And we have guests who talk about those all the time. Personalization, fraud, cyber, all of those are very important. And
And then you went to cement and then you went to pigs. And then you talked about human and AI, which is quite critical too. I just find that very, very energizing. Well, it's the nexus between human and AI. There are two critical things that I believe have to go hand in hand, have to. While this may change in the future to some degree or extent, I mean, who knows what's going to happen around the corner? Things change so rapidly.
But the first one, and I'll be the first one to admit this, I truly came to this appreciation when I worked in the regulator, surprise, surprise, is this criticality of combining governance and innovation. And I used to get asked this question repeatedly of, oh, but don't you think governance inhibits innovation? It stifles us. And I came to the view that I'm vehemently against that perspective. I would argue that not only does it not stifle it,
it would result in more and even better innovation. It's essentially about just simply having common sense. I was privileged to be part of the process of coming up with the FEAT principles. These are Fairness, Ethics, Accountability, and Transparency, back at the Monetary Authority of Singapore. And I remember when it came out, and we deliberately kept it very simple, I showed it to our governor, our managing director, and he was just like, David, isn't this just common sense? And I just kind of smiled and was like, well, you know, even common sense is not always that common and has to be written down.
But it's critical. That's number one. And number two, what you were mentioning is that, yes, while AI and data can do this seemingly miraculous stuff, it's critical that this combination with us humans and how we use it is baked in at the very beginning. And even now, obviously, everyone's talking about ChatGPT.
But remember, all the data that it's trained on is from us to a certain extent. Yeah, you can't take humans out of the loop because after a while, they will lose what makes them human. Well, but we have examples of that. I mean, that's okay in some places. I mean...
neither of you know how to navigate by the stars, I'm guessing, unless, Shervin, you've got some tricks up your sleeve that I haven't learned yet. I mean, most people don't drive a manual transmission. That seems to be a skill that's, well, okay, maybe one or two of us do here. But the point is, I guess we don't have to retain all possible skills. We just have to be, I think, savvy about which ones we hang on to. That's exactly what you said. It's
Some, not all, but sometimes you find that you see this trend of, like, oh, look what it can do. Like, everything gets automated. And I remember, if I go back to my early days as a consultant, I used to be a consultant doing AI, and you would find a lot of times, you know, potential clients and people you speak to, even if they didn't say it explicitly, what they were trying to achieve was, like, oh, just do everything automatically with AI.
And you need to have almost this natural inclination of saying, okay, if it's contextual, if it makes sense, like you said, you know, maybe I want to pick up star navigation because I'm interested in it. I want to learn about astronomy or astrophysics and whatnot. Great. But you see, it now becomes a niche topic that some people pick up. The general public doesn't need to know how to do it, but we need to be able to identify that decision point rather than just go, like, no, everything now, AI galore kind of situation. Well, I mean, what you're saying is,
There's value in the ongoing dialogue. There's value in ongoing challenge. And every time there's a dialogue, I mean, even back in Socrates' time, right? The dialogue is where it elevates the conversation. And you're rightly pointing out that the moment you say AI is the be-all and end-all is the moment that you are underdelivering on AI. And then you're for sure underdelivering on the human potential. Well, you're losing potential insight. Let me give you two examples.
In the financial sector, we have UnionBank of the Philippines, amongst others. While AI governance regulation is not yet a requirement, let's say, in the Philippines, we've set up a working group, which is an interesting combination of people: your risk officer, legal, compliance, and then you have marketing, customer engagement, experience.
What happens is, while you still have the traditional process of model validation, et cetera, from a statistical, mathematical data point of view, models are presented in this working group for us to have a debate. Because a model may pass all the statistical tests, but if this model goes wrong, even that 10% or 5%, there is a significant reputational risk at play, or there's a potential impact to the consumers.
That debate is important because, A, if you just looked at it from that statistical, even a potentially automated process, you would miss it. Now, the resolution, interestingly enough, and I honestly tell you maybe eight out of ten times so far, isn't data, isn't AI. The resolution a lot of times is process, which is people.
And that makes us actually wiser in understanding, okay, how do we use it and how do we engage with it? And when do we allow, Sam, to your point, that automation? And when we go, no, I retain the veto to overrule to a certain extent. So that's one example. The other one is if I go back to my cement. And in fact, we did this very deliberately at the very beginning because we didn't want our colleagues and chemical engineers to think like, oh, great. So why do you need me? You're just going to automate the whole thing.
No, the whole point was we absolutely need them, because there may be new types of mixtures that we haven't considered. You will still need to have that experimentation. The whole goal is providing information. But what it has resulted in is efficiency. So if I swing again to another example: when ChatGPT came out, I got asked straight away by a few boards, what does this mean?
And my instinctive reaction, you know, rather than going into this whole lengthy explanation and deliberation, I just responded by saying, it means that every one of us can have the productivity of 10 people. So this is what this stuff means. And that's what that nexus, the dialogue, the integration, the augmentation means: we now have the ability to be far more productive, whatever productive means in that context.
For some people, it may be, I just want to work two hours, but as if I worked the whole day. Some may say, I want to work a whole day. Again, it may differ. But that's what it means, because now we're able to take all this data. I'm sure some of you remember, back in 2000, there was this meme online that getting information off the internet is like drinking from a fire hose. It's still true. We're inundated with information, with data, but it's distilling it down to something that's relevant to me, usable, that I can do something with and get that gain, essentially. Right.
I think one thing that's coming out of this conversation, I think Shervin used the word Socratic and David used the word dialogue. What's nice about this is it's dropped this hubris that I feel like I see in a lot of machine learning. Machine learning seems to be about humans teaching machines. So it's this sort of we know all, we make the machines emulate us, and if they do, they pass the Turing test. And yes, everything is golden.
No, but then you get pushback and you say, oh, no, the machine can teach us things we've never known before. Well, that just has switched the direction. It still has that same directional hubris. But the things that you're both talking about are much more Socratic and dialogue. You think about what can that group form together? And, Shervin, I've got some results from last year's research that said about 60% of the people are thinking about
AI as a coworker. And that strikes me as that sort of relationship because between the two, yes, you find some new compound that maybe someone wouldn't have tried. I don't know what the chemical engineering equivalent of the Fosbury flop is. Do you remember the Fosbury flop where he learned the different way of jumping over the high bar and then suddenly everyone else adopted that technique? That sort of idea seems like it could come out of this approach.
It's actually really interesting you bring that up. And I mean, I'd love to say, like, oh, yeah, we had this all intended in the very beginning. But I'll be very honest and say, I think it's more of a nice consequence that wasn't fully intended at that point in time. But I want to go back to the FEAT principles. One of the principles resulted in a lot of discourse. And I mean a lot.
We had a statement, amongst all of them, that said that we should hold AI to at least the same standard as human decisions. So AI-based decisions should be held to at least the same standard as human-based decisions. And the debate was phenomenal. It was, oh no, we should hold it to a higher one, et cetera, and so forth. But what that principle was saying is, if you're using now, so let me go back to, again, let's say financial and loan provisioning.
And if you're using an AI algorithm and you're finding that, oh, we're discriminating. Okay. Yeah, absolutely. That's something that needs to be addressed, reviewed, and corrected. But hold your horses there. Take a step back. Take the AI out of the equation. Have you been discriminating before the AI? And that's really the question. And I remember I had a long debate with many regulators. I mean, again, maybe debate's the wrong word. Discussions with many regulators. And I was actually a bit opposed to regulating AI. And I'll explain what I mean by that.
I'm not opposing regulation. But when they said regulating AI, I got a bit defensive. I said, what I'm worried about with that is that
people would go, okay, well, since AI now is showing me all this stuff that I don't want to know about, then I'm just not going to use AI. And we're going to go back to the same procedures as previously, which, guess what? It's the same problem. You just weren't paying attention to it, because that information, that knowledge, wasn't bubbled up to the surface. So what I kept on kind of arguing is that, yes, the regulation has to be in play. And yes, there may be certain scenarios whereby AI requires higher scrutiny, but the regulation is still on the outcome.
The regulation is still on the fact that, for example, it's a case of discrimination. You should not discriminate; whether you're using a human-based process or an AI-based process is kind of beside the point. But I just want to emphasize that point, Sam, because it really goes back to what you were saying: it's teaching us things that we may have been, let's say, sometimes consciously ignorant of, sometimes inadvertently unaware of. David, tell us about your background. How did you end up where you are?
If I roll back all the way to the beginning, and I kind of say this with a big smile myself, how did I end up where I am? Detention. That's how I ended up here. I must have been, what, 14, 15, 16 years old. And I got sent to the library
because of detention. And, you know, if you're in a library, you have nothing better to do. I picked up a book on Prolog. And don't ask me why, from all the books I could have picked up, I picked up one about Prolog. And this was really before knowing anything about the whole world of, well, I guess in that case, it was expert-based systems. And I started reading and I just couldn't put it down. And that kind of triggered this exploration of
How can we better capture knowledge? How can we better learn? And that obviously resulted in learning a bit more about neural networks, AI. In fact, I was one of the first two students who took the degree of computer science with artificial intelligence. It was literally brand new from that perspective. My PhD thesis was about semantic models, literally the representation and encapsulation of knowledge and information, and was on learning musical patterns and generating music from brain patterns.
And the whole idea about that is essentially providing expert-based systems, knowledge, if you think about it in that way, for people, let's say, who can't sit in front of a piano and play, but are fully capable cognitively. So that's kind of what brought me here. I know it's a very weird kind of journey, but yeah, I need to thank my literature teacher. Thank you for sending me to detention. Okay, so we've got a segment where we're going to ask you some quick questions. What are you proudest of in terms of artificial intelligence? What have you all done that you're proudest of?
Where do you begin? What I'm most proud of is the way we've been able to graduate, and I literally mean that, from the academic world to the industrial world. What worries you about AI? You've mentioned some worries today, but what worries you? What worries me is I don't think we're fully appreciating what we're creating. I think we need to confront head-on the realization of what we're creating and what possibilities we're seeding, for good and for bad. What's your favorite activity that does not involve technology?
SUP: stand-up paddling. Being on the water and just paddling away, it's extremely soothing. It's actually a phenomenal exercise, for those who haven't tried. I've tried, and I missed the stand-up part. I'm okay with the paddling, but the stand-up part seems to give me trouble. What's the first career you wanted while you were sitting in detention? What did you want to be when you grew up? I wanted to be an astrophysicist. What's your greatest wish for AI in the future? What are you hoping we can gain from this?
I don't know, self-actualization? I hope we learn more about ourselves. It's already giving us capabilities. I mean, for example, look, I'm dyslexic. I mean, thank heavens for automatic spell-checkers. Well, thank you for taking the time. There's a lot that you've mentioned. We can even go back to the example of food 100 years ago: we had terrible food cleanliness, and now we have a supply chain we can trust.
Perhaps we can build that same sort of supply chain with data. Thank you for taking the time to talk with us today. It's been a pleasure. Thank you, Sam. Yeah, thank you. And maybe, if I just may add on that note, I think that's really the critical thing. It's AI trust. It's about trust. Thank you very much. Thanks for listening. Next time, Shervin and I talk with Naba Banerjee, head of trust product and operations at Airbnb, about how the travel platform uses AI and machine learning to make travel experiences safer.
Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders. And if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, and learn more about AI.
and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com forward slash AI for Leaders. We'll put that link in the show notes, and we hope to see you there.