Welcome, welcome, welcome to Smart Talks with IBM. Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM. I'm Malcolm Gladwell. This season, we're diving back into the world of artificial intelligence, but with a focus on the powerful concept of open, its possibilities, implications, and misconceptions.
We'll look at openness from a variety of angles and explore how the concept is already reshaping industries, ways of doing business, and our very notion of what's possible. And for the first episode of this season, we're bringing you a special conversation. I recently sat down with Rob Thomas. Rob is the Senior Vice President of Software and Chief Commercial Officer of IBM. I spoke to him in front of a live audience as part of New York Tech Week.
We discussed how businesses can harness the immense productivity benefits of AI while implementing it in a responsible and ethical manner. We also broke down a fascinating concept that Rob believes about AI, known as the productivity paradox. Okay, let's get to the conversation. Thank you.
How are we doing? Good. Rob, this is our second time. We did one of these in the middle of the pandemic, but it's all such a blur, neither of us can figure out when it was. I know. It's hard. Those are like blurry years. You don't know what happened, right? Well, it's good to meet you again. I wanted to start by going back. You've been at IBM 20 years.
Is that right? 25 in July, believe it or not. So you were a kid when you joined. I was four. So I want to contrast present day Rob and 25 years ago Rob. When you arrive at IBM, what do you think your job is going to be, your career is going to be? What do you think the kind of problems you're going to be addressing are? Well, it's kind of surreal because I joined IBM in consulting and I'm coming out of school
And you quickly realize, wait, the job of a consultant is to tell other companies what to do. And I was like, I literally know nothing. And so you're immediately trying to figure out, so how am I going to be relevant given that I know absolutely nothing to advise other companies on what they should be doing? And I remember it well, like we were sitting in a room. When you're a consultant, you're waiting for somebody else to find work for you. A bunch of us sitting in a room and somebody walks in and says, we need somebody that knows Visio. Does anybody know Visio?
I'd never heard of Visio. I don't know if anybody in the room had. So everybody's like sitting around looking at their shoes. So finally I was like, I know it. I raised my hand. They're like, great, we've got a project for you next week. So I was like, all right, I have like three days to figure out what Visio is, and I hope I can actually figure out how to use it. Now, luckily it wasn't like a programming language. I mean, it's pretty much a drag-and-drop
capability. And so I literally left the office, went to a bookstore, bought the first three books on Visio I could find, spent the whole weekend reading the books and showed up and got to work on the project. And so it was a bit of a risky move, but I think that's kind of... You would caution others against doing this. Well, but if you don't take risk, you'll never achieve. And so to some extent, everybody's making everything up all the time.
It's like, can you learn faster than somebody else? That's the difference in almost every part of life. And so it was not planned, it was an accident, but it kind of forced me to figure out that you're going to have to figure things out. You know, we're here to talk about AI, and I'm curious about the evolution
of your understanding or IBM's understanding of AI. At what point in the last 25 years do you begin to think, oh, this is really gonna be at the core of what we think about and work on at this company? - The computer scientist John McCarthy, he's the person that's credited with coining the phrase artificial intelligence. It was like in the 50s. And he made an interesting comment. He said, once it works, it's no longer called AI. - Mm-hmm.
And that then became, it's called like the AI effect, which is, it seems very difficult, very mysterious, but once it becomes commonplace, it's just no longer what it is. And so if you put that frame on it, I think we've always been doing AI at some level. I mean, I even think back to when I joined IBM in '99. At that point, there was work on rules-based engines, analytics, all of this was happening.
It all depends on how you really define that term. You could argue that elements of statistics, probability, it's not exactly AI, but it certainly feeds into it. So I feel like we've been
working on this topic of how do we deliver better insights, better automation since IBM was formed. If you read about what Thomas Watson Jr. did, that was all about automating tasks. Yeah. Is that AI? Well, certainly not by today's definition, but it's in the same zip code. So from your perspective, it feels a lot more like an evolution than a revolution. Is that a fair statement? Yes. Yeah. Which I think most great things in technology
tend to happen that way. Many of the revolutions, if you will, tend to fizzle out. But even given that, I guess what I'm asking is, I'm curious about whether there was a moment in that evolution when you had to readjust your expectations about what AI was going to be capable of. I mean, was there a particular innovation or a particular problem that was solved that made you think, oh, this is different than what I thought?
I would say the moments that caught our attention, certainly Deep Blue beating Kasparov at chess. Nobody really thought that was possible before that. And then it was Watson winning Jeopardy. These were moments that said, maybe there's more here than we even thought was possible. And so I do think there's points in time where we realize maybe
way more could be done than we had even imagined. But I do think it's consistent progress every month and every year versus some seminal moment. Now, certainly large language models have recently caught everybody's attention because they have a direct consumer application. But I would almost think of that as what Netscape was for the web browser. Yeah. It brought the internet to everybody.
But that didn't become the internet per se. Yeah. I have a cousin who worked for IBM for 41 years. I saw him this weekend. He's in Toronto. By the way, I said, do you work for Rob Thomas? He went like this. He goes... He said, I'm five layers down. So whenever I see my cousin, I ask him, can you tell me again what you do? Because it's always changing. I guess this is a function of working at IBM. Yeah.
So eventually he just gives up and says, you know, we're just solving problems. That's all we're doing. Which I sort of loved as a kind of frame. And I was curious, what's the coolest problem you ever worked on? Not biggest, not most important, but the coolest. The one that's just like, that sort of makes you smile when you think back on it. Probably when I was in microelectronics because it was a world I had no exposure to. I hadn't studied computer science. And we were building a lot of
high performance semiconductor technology. So just chips that do a really great job of processing something or other. And we figured out that there was a market in consumer gaming that was starting to happen. And we got to the point where we became the chip inside the Nintendo Wii, the Microsoft Xbox, Sony PlayStation. So we basically had the entire gaming market running on IBM chips.
So every parent basically is pointing at you and saying, you're the culprit. Probably. Well, they would have found it from anybody. But it was the first time I could explain my job to my kids who were quite young at that time, like what I did. Like it was more tangible for them than saying we solve problems or build solutions. Like it became very tangible for them. And I think that's...
you know, a rewarding part of the job is when you can help your family actually understand what you do. Most people can't do that. It's probably easier for you. They can see the books. Yeah, yeah. But for some of us in the business-to-business world, it's not always as obvious. So that was like one example where the dots really connected. Yeah. There's a couple, let's talk about a little bit of this in the context of AI. Because I love the frame of
problem-solving as a way of understanding what the function of the technology is. So I know that you guys did something, did some work with, I never know how to pronounce it, is it Sevilla? With the football club Sevilla in Spain. Tell me about, tell me a little bit about that. What problem were they trying to solve, and why did they call you in? Every
sports franchise is trying to get an advantage, right? Let's just be clear about that. Everybody's asking, how can I use data, analytics, insights, anything that will make us one percent better on the field at some point in the future? And Sevilla reached out to us because they had seen some of the work we've done with the Toronto Raptors in the past, and others, and their thought was, maybe there's something we could do. They'd heard all about
generative AI, they'd heard about large language models. And the problem, back to your point on solving problems, was, we want to do a way better job of assessing talent, because really the lifeblood of a sports franchise is, can you continue to cultivate talent? Can you find talent that others don't find? Can you see something in somebody that they don't see in themselves, or maybe no other team sees in them? And we ended up building something with them called Scout Advisor,
which is built on watsonx, which basically just ingests tons and tons of data. And we like to think of it as finding the needle in the haystack of, here's three players that aren't being considered, they're not on the top teams today. And I think working with them together, we found some pretty good insights that have helped them out. And what was intriguing to me was, we're not just talking about
quantitative data, we're also talking about qualitative data. But that's the puzzle part of the thing that fascinates me. How does one incorporate qualitative analysis into that? So you're feeding in scouting reports and things like that? I gotta think about how much I can actually disclose. But if you think about, so quantitative is relatively easy. Yeah. Every team collects that.
What's their 40-yard dash? I don't think they use that term, certainly not in Spain. That's all quantitative. Qualitative is what's happening off the field. It could be diet, it could be habits, it could be behavior. You can imagine a range of things that would all feed into an athlete's performance. Yeah. And so relationships, there's many different aspects. And so it's trying to figure out
the right blend of quantitative and qualitative that gives you a unique insight. How transparent is that kind of system? I mean, is it telling you... It's saying, pick this guy, not this guy, but is it telling you why it prefers this guy to this guy? I think for anything in the realm of AI, you have to answer the why question. Otherwise, you fall into the trap of...
the proverbial black box and then wait, I made this decision, I never understood why, it didn't work out. So you always have to answer why, without a doubt. And how is why answered? Sources of data, the reasoning that went into it. So it's basically just tracing back the chain of how you got to the answer.
And in the case of what we do in watsonx, we have IBM models, we also use some other open source models. So it would be, which model was used? What was the data set that was fed into that model? How is it making decisions? How is it performing? Is it robust? Meaning, is it reliable in terms of, if you feed it the same data set twice, do you get the same answer? These are all the technical aspects of understanding the why.
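To make that concrete, here is a minimal sketch, in Python, of the kind of provenance and robustness check Rob describes: record which model and which data set produced a recommendation, and verify that the model gives the same answer when fed the same input twice. The model client, its score() call, and the field names are hypothetical stand-ins for illustration, not the actual watsonx API.

```python
# A minimal sketch of "answering the why": keep the provenance behind each
# recommendation, and check robustness by scoring the same input twice.
# The model object and its score() method are hypothetical stand-ins.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ScoutRecommendation:
    player_id: str
    score: float
    model_id: str          # which model was used
    dataset_version: str   # what data set was fed into it
    scored_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def recommend(model, player_features, model_id, dataset_version):
    """Score a player and keep the provenance needed to answer 'why'."""
    score = model.score(player_features)  # hypothetical model call
    return ScoutRecommendation(
        player_id=player_features["player_id"],
        score=score,
        model_id=model_id,
        dataset_version=dataset_version,
    )


def is_robust(model, player_features, tolerance=1e-6):
    """Feed the model the same input twice; a robust model agrees with itself."""
    first = model.score(player_features)
    second = model.score(player_features)
    return abs(first - second) <= tolerance
```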
How quickly do you expect all professional sports franchises to adopt some kind of... Are they already there? If I went out and polled the general managers of the 100 most valuable sports franchises in the world, how many of them would be using some kind of AI system to assist in their efforts? 120% would, meaning that everybody's doing it and some think they're doing way more than they probably actually are. So everybody's doing it. I think what's weird about sports is...
Everybody's so convinced that what they're doing is unique that they, generally speaking, don't want to work with a third party to do it because they're afraid that that would expose them. But in reality, I think most are doing 80 to 90% of the same things. So, but without a doubt, everybody's doing it. Yeah, yeah.
The other case study that I loved was the one about a shipping line, Tricon, on the Mississippi River. Tell me a little bit about that project. What problem were they trying to solve?
Think about the problem that I would say everybody noticed if you go back to 2020, which was things getting held up in ports. There was actually an article in the paper this morning kind of tracing the history of what happened in 2020, 2021, and why ships were basically sitting at sea for months at a time. And at that stage, we had a massive throughput issue. But
Moving even beyond the pandemic, you can see it now with ships getting through, like, the Panama Canal. There's like a narrow window where you can get through. And if you don't have your paperwork done, you don't have the right approvals, you're not going through, and it may cost you a day or two. And that's a lot of money in the shipping industry. And the Tricon example, it's really just about, when you're pulling into a port, if you have the right paperwork done,
you can get goods off the ship very quickly. They ship a lot of food, which by definition, since it's not packaged food, it's fresh food, there is an expiration period. And so if it takes them an extra two hours, certainly multiple hours or a day, they have a massive problem because then you're going to deal with spoilage and so it's going to set you back.
And what we've worked with them on is using an assistant that we've built in watsonx called Orchestrate, which basically is just AI doing digital labor. So we can replicate nearly any repetitive task and do that with software instead of humans. So as you may imagine,
the shipping industry still has a lot of paperwork that goes on. And so being able to take forms that normally would be multiple hours of filling it out, "Oh, this isn't right, send it back." We've basically built that as a digital skill inside of watsonx Orchestrate. And so now it's done in minutes. Did they realize that they could have that kind of efficiency by teaming up with you? Or is that something you came to them and said,
Guys, we can do this way better than you think. What's the... I'd say it's always both sides coming together at a moment that for some reason makes sense. Because you could say, why didn't this happen like five years ago? Like, it seems so obvious. Well, technology wasn't quite ready then, I would say. But they knew they had a need because I forget what the precise number is, but, you know, reduction of spoilage has massive impact on their bottom line. Mm-hmm.
So they knew they had a need, we thought we could solve it, and the two came together. Did you guys go to them, though? Or, like you said, did they come to you? I recall that this one was an inbound. Yeah, I mean, they had reached out to IBM and said, we'd like to solve this problem. I think it went into one of our digital centers, if I recall. So literally a phone call. Yeah. But so the reverse is
more interesting to me, because there seems to be a very, very large universe of people who have problems that could be solved this way and they don't realize it. Is there a shining example of this, of someone you just think could benefit so much and isn't benefiting right now? Maybe I'll answer it slightly differently. I'm surprised by how many people can benefit that you wouldn't even logically think of first. Let me give you an example. There's a
franchisor of hair salons. Sport Clips is the name. My sons used to go there for haircuts because they have TVs and you can watch sports. So they loved that. They got entertained while they would get their haircut. I think the last place that you would think is using AI today would be a franchisor of hair salons. Yeah. But just follow it through. The biggest part of how they run their business is can I get people to cut hair?
And this is a high-turnover industry, because there's a lot of different places you can work if you want to cut hair. People actually get injured cutting hair, because you're on your feet all day, that type of thing. And they're using the same technology, Orchestrate, as part of their recruiting process. How can they automate, out of a lot of people submitting resumes, who they speak to, how they qualify them for the position? And so
The reason I give that example is the opportunity for AI, which is unlike other technologies, is truly unlimited. It will touch every single business. It's not the realm of the Fortune 500 or the Fortune 1000. This is the Fortune any size. And I think that may be one thing that people underestimate about AI. Yeah. What about, I mean, I was thinking about education. Education is the perennial
whipping boy for, you guys are living in the 19th century, right? I'm just curious, if a superintendent of a public school system or the president of a university sat down and had lunch with you and said, let's do the university first, my costs are out of control, my enrollment is down, my students hate me, and my board is revolting. Help. How would you think about
helping someone in that situation. I spend some time with universities. I like to go back and visit alma maters where I went to school. And so I do that every year. The challenge of a university is there has to be a will. Yeah. And I'm not sure the incentives are quite right today because bringing in new technology, let's say we want to go after, we can help you figure out student recruiting or how you automate more of your education.
everybody suddenly feels threatened at a university. Hold on, that's my job. I'm the one that decides that. Or I'm the one that wants to dictate the course. So there has to be a will. So I think it's very possible, and I do think over the next decade you will see some universities that jump all over this and they will move ahead, and you'll see others that do not. Mm-hmm.
- When you say there has to be a will, is that a kind of thing that people at IBM think about? Like in this hypothetical conversation you might have with the university president, would you give advice on where the will comes from? - I don't do that as much in a university context. I do that every day in a business context. Because if you can find the right person in a business that wants to focus on growth,
or the bottom line, or how do you create more productivity? Yes, it's going to create a lot of organizational resistance, potentially, but you can find somebody that will figure out how to push that through. I think for universities, I think that's also possible. I'm not sure there's a return on investment for us to do that. Yeah, yeah. Let's define some terms. AI years, a term I told you I'd like to use. What does that mean?
We just started using this term literally in the last three months. And it was what we observed internally, which is most technology you build, you say, all right, what's going to happen in year one, year two, year three? And it's largely by a calendar. AI years are the idea that what used to be a year is now like a week. And that is how fast the technology is moving. And to give you an example, we had one client we're working with
They're using one of our Granite models, and the results they were getting were not very good. Accuracy was not there, the performance was not there. So I was like scratching my head, like, what is going on? What business were they? They were financial services, a bank. So I'm scratching my head, like, what is going on? Everybody else is getting this, and these results are horrible. And I asked the team, which version of the model are you using? This was in February. Like, we're using the one from October.
I was like, "All right, now we know precisely the problem." Because the model from October is effectively useless now since we're here in February. Are you serious? Actually useless? Absolutely. Completely useless. Yeah. That is how fast this is changing. And so the minute, same use case, same data, you give them the model from late January instead of October, the results are off the charts.
Yeah. Wait, so what exactly happened between October and January? The model got way better. But dig into that. Like, what do you mean by that? We have built large compute infrastructure where we're doing model training. And to be clear, model training is the realm of probably, my guess is, five to ten companies in the world. And so you build a model, you're constantly training it, you're doing fine-tuning, you're doing more training, you're adding data. Every day, every hour, it gets better.
And so how does it do that? You're feeding it more data. You're feeding it more live examples. We're using things like synthetic data at this point, which is we're basically creating data to do the training as well. All of this feeds into how useful the model is. And so using the October model, those were the results in October. Just a fact. That's how good it was then. But back to the concept of AI years, two weeks is a long time.
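As a toy illustration of the synthetic-data idea Rob mentions, the sketch below programmatically generates (instruction, solution) training pairs for a code model instead of waiting for more human-written examples. Everything here, the templates and the function names, is hypothetical; it is not the actual Granite training pipeline.

```python
# A toy illustration of synthetic data for a code model: generate
# (instruction, solution) pairs from templates. Hypothetical sketch only.

import random

FUNCTIONS = [
    ("add", "a + b", "return the sum of a and b"),
    ("multiply", "a * b", "return the product of a and b"),
    ("maximum", "a if a > b else b", "return the larger of a and b"),
]


def make_synthetic_pair(rng: random.Random) -> dict:
    """Build one instruction/solution training example from a template."""
    name, body, description = rng.choice(FUNCTIONS)
    instruction = f"Write a Python function `{name}(a, b)` that will {description}."
    solution = f"def {name}(a, b):\n    return {body}\n"
    return {"instruction": instruction, "solution": solution}


def build_dataset(n: int, seed: int = 0) -> list[dict]:
    """Generate n synthetic examples, reproducibly."""
    rng = random.Random(seed)
    return [make_synthetic_pair(rng) for _ in range(n)]


if __name__ == "__main__":
    for pair in build_dataset(3):
        print(pair["instruction"])
```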
Are we in a steep part of the model learning curve, or do you expect this to continue along at this pace? I think that is the big question, and I don't have an answer yet. By definition, at some point you would think it would have to slow down a bit, but it's not obvious that that is on the horizon. Is it still speeding up? Yes. How fast can it get? We've debated, can you actually have better results in the afternoon than you did in the morning? Really? Yeah.
It's nuts. Yeah, I know. But that's why we came up with this term, because I think you also have to think of concepts that get people's attention. So you're basically turning into a bakery. You're like, the bread from yesterday, you can have it for 25 cents. But you could do preferential pricing. You could say, we'll charge you X for yesterday's model, 2X for today's model.
I think that's dangerous as a merchandising strategy, but I get your point. Yeah. But that's crazy. And this, by the way, so this model, is the same true for all models? You're talking specifically about a model that was created to help some aspect of a financial services company.
So is that kind of model accelerating faster and learning faster than other models for other kinds of problems? So this domain was code. Yeah. And so by definition, if you're feeding in more data, so more code, you get those kind of results. It does depend on the model type. Yeah. There's a lot of code in the world. And so we can find that, we can create it, like I said.
There's other aspects where there's probably less inputs available, which means you probably won't get the same level of iteration. But for code, that's certainly the cycle times that we're seeing. Let's stick with this one example of this model you have. How do you know that your model is better than big company B down the street? A client asks you, why would I go with IBM as opposed to
There's some firm in the Valley that says they have a model on this. How do you frame your advantage? Well, we benchmark all of this. And I think the most important metric is price performance. Not price, not performance, but the combination of the two. And we're super competitive there.
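One simple way to read "price performance, not price, not performance, but the combination of the two" is as a single ratio, for example benchmark accuracy per dollar of inference cost. The sketch below shows that arithmetic; the model names, numbers, and the exact formula are hypothetical.

```python
# Combining price and performance into one ranking: benchmark accuracy per
# dollar of inference cost. Names, numbers, and formula are hypothetical.

models = [
    {"name": "model_a", "benchmark_accuracy": 0.82, "cost_per_million_tokens": 0.60},
    {"name": "model_b", "benchmark_accuracy": 0.88, "cost_per_million_tokens": 3.00},
]


def price_performance(model: dict) -> float:
    """Higher is better: accuracy per dollar per million tokens."""
    return model["benchmark_accuracy"] / model["cost_per_million_tokens"]


for m in sorted(models, key=price_performance, reverse=True):
    print(f"{m['name']}: {price_performance(m):.2f} accuracy points per dollar")
```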
For what we just released with what we've done in open source, we know that nobody's close to us right now on code. Now, to be clear, that will probably change. Yeah. Because it's like leapfrog. People will jump ahead, then we jump back ahead. But we're very confident that with everything we've done in the last few months, we've taken a huge leap forward here. Yeah. This goes back to the point I was making in the beginning about the difference between your...
20-something self in '99 and yourself today, but this time compression has to be a crazy adjustment. So the concept of what you're working on, and how you make decisions internally, has to undergo this kind of revolution, if you're switching from, I mean, back in the day, a model might be useful for years and years. I think about, you know, statistical models that sit inside things like
SPSS, which is a product that a lot of students use around the world. I mean, those have been the same models for 20 years. Yeah. And they're still very good at what they do. And so, yes, it's a completely different moment for how fast this is moving. And I think it just raises the bar for everybody, whether you're a technology provider like us or you're a bank or an insurance company or a shipping company to say, how do you really
change your culture to be way more aggressive than you normally would be. Does this mean, this is a weird question, but does this mean a different set of kind of personality or character traits are necessary for a decision maker in tech now than 25 years ago? There's a book I saw recently. It's called The Geek Way, which talked about how technology companies have started to
operate in different ways maybe than many traditional companies. And more about being data-driven, more about delegation. Are you willing to have the smartest person in the room make decisions as opposed to the highest paid person in the room? I think these are all different aspects that every company is going to face. Yeah. Next term, talk about open. When you use that word open, what do you mean?
I think there's really only one definition of open, which, for technology, is open source. Open source means the code is freely available. Anybody can see it, access it, contribute to it. Tell me about why that's an important principle. When you take a topic like AI, I think it would be really bad for the world if this was in the hands of one or two companies, or three or four, doesn't matter the number, some small number.
Think about, like, in history, sometime in the early 1900s, the Interstate Commerce Commission was created, and the whole idea was to protect farmers from railroads. Meaning, they wanted to allow free trade, but they knew that, well, there's only so many railroad tracks, so we need to protect farmers from the shipping costs that railroads could impose. So, good idea, but over time that got completely overtaken by the railroad lobby.
And then they used that to basically just increase prices. And it made the lives of farmers way more difficult. I think you could play the same analogy through with AI. If you allow a handful of companies to have the technology, you regulate around the principles of those one or two companies, then you've trapped the entire world. That would be very bad. Is there a danger of that happening? For sure. I mean, there's companies in Washington every week trying to...
achieve that outcome. And so the opposite of that is to say it's going to be open source, because nobody can dispute open source, because it's right there. Everybody can see it. And so I'm a strong believer that open source will win for AI. It has to win. It's not just important for business, it's important for humans. I'm curious about, on the list of things you worry about,
Actually, let me ask this question very generally. What is the list of things you worry about? What's your top five business-related worries right now? Your first question, we could be here for hours for me to answer. I didn't say business-related. We can leave, you know, your kids' haircuts out of the... Number one is always, it's the thing that's probably always been true, which is just people. Mm-hmm.
Do we have the right skills? Are we doing a good job of training our people? Are our people doing a good job of working with clients? Like, that's number one. Number two is innovation. Are we pushing the envelope enough? Are we staying ahead? Number three, which kind of feeds into the innovation one, is risk-taking. Are we taking enough risk? Without risk, there is no growth. And I think the trap that every larger company
inevitably falls into is conservatism. Yeah. Things are good enough. And so it's, are we pushing the envelope? Are we taking enough risk to really have an impact? I'd say those are probably the top three that I spend most of my time on. Let's talk about the last term to define: productivity paradox. Something I know you've thought a lot about. What does that mean? So I started thinking hard about this because all I saw and read every day was fear about AI. I studied economics.
And so I kind of went back to like basic economics and there's been like a macro investing formula, I guess I would say. It's been around forever. That says growth comes from productivity growth plus population growth plus debt growth. So if those three things are working, you'll get GDP growth. And so then you think about that and you say, well, debt growth, we're probably not going back to 0% interest rates. So to some extent, there's going to be a ceiling on that.
And then you look at population growth. There are shockingly few countries or places in the world that will see population growth over the next 30 to 50 years. In fact, most places are not even at replacement rates. And so I'm like, all right, so population growth is not going to be there. So that would mean if you just take it to the extreme, the only chance of continued GDP growth is productivity. And the best way to
solve productivity is AI. That's why I say it's a paradox. On one hand, everybody's scared half to death: it's going to take over the world, take all of our jobs, ruin us. But in reality, maybe it's the other way, which is, it's the only thing that can save us. Yeah. And if you believe that economic equation, which I think has proven quite true over hundreds of years, I do think it's probably the only thing that can save us.
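Written out, the rough macro identity Rob is describing looks like this; the numbers in the second line are hypothetical, included only to illustrate the arithmetic: if population contributes roughly nothing and debt-fueled demand only a little, productivity has to carry most of any GDP growth.

```latex
% Rob's rough macro decomposition, written out. The numbers in the second line
% are hypothetical, chosen only to illustrate the arithmetic, not forecasts.
\[
\begin{aligned}
g_{\text{GDP}} &\approx g_{\text{productivity}} + g_{\text{population}} + g_{\text{debt}}\\[2pt]
2\% &\approx \underbrace{1.5\%}_{\text{productivity}} + \underbrace{0\%}_{\text{population}} + \underbrace{0.5\%}_{\text{debt}}
\end{aligned}
\]
```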
I actually looked at the numbers yesterday, for a totally random reason, on population growth in Europe. I'm going to ask a special bonus question. We'll see how smart you are. Which country in Europe, continental Europe, has the highest population growth? It's small. Continental Europe. Probably one of the Nordics, I would guess. Close. Luxembourg. Okay. Something is going on in Luxembourg. I feel like all of us need to investigate. They're at 1.49, which back in the day, by the way, would be relatively low.
That's the best-performing country. I mean, back in the day, countries routinely had two-point-something percent growth in a given year. Last question. You're writing a book now, we were chatting about it backstage, and now I appreciate the paradox of this book,
which is, in a universe where the model is better in the afternoon than it is in the morning, how do you write a book that's like printed on paper and expect it to be useful? This is the challenge. And I am an incredible author of useless books, meaning most of what I've spent time on in the last decade is stuff that's completely useless like a year after it's written. And so what
we were talking about is, I would like to do something around AI that's timeless. Yeah. That would be useful 10 or 20 years from now. But then, to your point, how is that even remotely possible if the model is better in the afternoon than in the morning? So that's the challenge in front of us. But the book is around AI value creation. So it kind of links to this productivity paradox, and how do you actually get sustained value
out of AI, out of automation, out of data science. And so the biggest challenge in front of us is can we make this relevant past the day that it's published? How are you setting out to do that? I think you have to, to some extent, level it up to bigger concepts, which is kind of why I go to things like macroeconomics, population, geography, as opposed to going into the weeds of the technology itself.
If you write about this is how you get better performance out of a model, we can agree that will be completely useless two years from now, maybe even two months from now. Yeah. So it will be less in the technical detail and more of what is sustained value creation for AI, which if you think on what is hopefully a 10 or 20 year period,
It's probably, we're kind of substituting AI for technology now, I've realized, because I think this has always been true for technology. It's just now AI is the thing that everybody wants to talk about. But let's see if we can do it. Time will tell. Did you get any inkling that this AI years phenomenon, that the pace of change, was going to accelerate so much?
Because you had Moore's Law, right? You had a model in the technology world for this kind of exponential increase. So were you thinking about that kind of similar acceleration in the... I think anybody that said they expected what we're seeing today is probably exaggerating. I think it's way faster than anybody expected. Yeah. But...
Technology, back to your point on Moore's Law, has always accelerated through the years. So I wouldn't say it's a shock, but it is surprising. You've had a kind of extraordinary privileged position to watch and participate in this revolution, right? I mean, how many other people have been in that, have ridden this wave like you have? I do wonder, is this really that much different or does it feel different just because we're here?
Meaning, I do think on one level, yes. So in the time I've been at IBM, the internet happened, mobile happened, social networks happened, blockchain happened, AI. So a lot has happened. But then you go back and say, well, if I'd been here between 1970 and '95, there were a lot of things that were pretty fundamental then too. So I wonder, almost, do we always exaggerate the timeframe that we're in? I don't know. Yeah. But it's a good idea though.
I think ending with the phrase, "I don't know, but it's a good idea though," is probably a great way to wrap this up. Thank you so much. Thank you, Malcolm. Thank you.
In a field that is evolving as quickly as artificial intelligence, it was inspiring to see how adaptable Rob has been over his career. The takeaways from my conversation with Rob have been echoing in my head ever since. He emphasized how open source models allow AI technology to be developed by many players. Openness also allows for transparency. Rob told me about AI use cases
like IBM's collaboration with Sevilla's football club. That example really brought home for me how AI technology will touch every industry. Despite the potential benefits of AI, challenges exist in its widespread adoption. Rob discussed how resistance to change, concerns about job security, and organizational inertia can slow down implementation of AI solutions.
The paradox though, according to Rob, is that rather than being afraid of a world with AI, people should actually be more afraid of a world without it. AI, he believes, has the potential to make the world a better place in a way that no other technology can. Rob painted an optimistic version of the future, one in which AI technology will continue to improve at an exponential rate.
This will free up workers to dedicate their energy to more creative tasks. I, for one, am on board. Smart Talks with IBM is produced by Matt Romano, Joey Fishground, and Jacob Goldstein. We're edited by Lydia Jean Cott. Our engineers are Sarah Bruguere and Ben Tolliday. Theme song by Gramascope.
Special thanks to the 8 Bar and IBM teams, as well as the Pushkin Marketing Team. Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.
I'm Malcolm Gladwell. This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies, or opinions.