Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts.
Today, we're talking to Sam Ransbotham, professor at Boston College and host of the Me, Myself & AI podcast, about why only 10% of companies succeed with AI. You're listening to Joel Beasley, Modern CTO. All right. So your podcast, the premise of it is: why do only 10% of companies succeed with AI?
That caught my attention. I was browsing the internet, on LinkedIn or something, and I saw this: why do only 10% of companies succeed with AI? That's how I found you and your show, Me, Myself, and AI. And I want to know, is it really that big of a deal? You made an entire show about it? Well, I think it is. You know how marketing works; we have to lead with some statistic that gets people interested. Yeah.
But that's a pretty interesting one, isn't it? Given the amount of stuff that we're hearing about artificial intelligence, we were hoping that that number was bigger than 10%. And why are they failing?
Hey, see, there's the trap. I don't think that people are really failing. Our research looks at this and finds that about 10, actually 11, percent are getting significant financial benefits. So it's not like they're not getting any benefits. It's not like they're failing. It's just maybe falling short of this "AI is going to change everything" that we're hearing so much in society. So I
wouldn't cast it as failure. There's more than two options here. This is not Hobson's choice. That's funny. Yeah. So of these companies that are achieving significant returns on their AI investments, tell me about those. Yeah. So we tried to look at, all right, given we have some have-nots and some haves, what's the difference? That's a natural question for an academic, or for anyone, to try to figure out those differences.
And the first few are things that I think we would expect, right? They've got to get their technology house in order. You can't have something like artificial intelligence, complicated machine learning models, if you're basically working on an outdated copy of Excel running on an outdated PC, right? So there's a certain infrastructural element to that.
And also there's talent. You have to have somebody who can use these tools. Now, what we found was that to get to be one of those 10%, you've got to have some of those basic building blocks in place. And we think of those as talent, infrastructure, and strategy. I could talk about each of those and more, but that doesn't get you all the way there. There's a lot more after that. And that, I think, was what was more interesting for us.
You can't say, for example, we're just going to take the same old thing and just do it with AI. That is not going to get you into that 10%. One of my fun examples is in the healthcare industry. So here's a question for you, Joel. When was the fax machine invented? Oh, I don't know. This is not rehearsed, everyone. I'm putting him on the spot, literally. I'm going to say somewhere between the 60s and 70s.
Early 80s? 60s and early 80s. Now, I think you're thinking 1960s or 1980s, right? Yeah. No, much closer to the 1860s. Really? Which is interesting because, if you think about it, that predates the telephone; it worked across the telegraph or whatever. So here's the point. That's a really old technology. I'm headed somewhere with this story. Don't panic. Yeah.
There's a lot of interesting stuff happening. And one of the industries that uses faxes left and right is healthcare. They'll fax stuff back and forth; they're practically the only people still using faxes at this point. And so I've read this story about
people in healthcare using AI, optical character recognition, text parsing, to take an image from a fax machine, scan it, and try to get all the information out of it. And that, on the one hand, seems like a great use of artificial intelligence, because nobody wants to retype everything that comes across the slick little fax paper, right? So again, there's value in artificial intelligence there. But what about just not sending a fax in the first place?
What about sending that information from one computer system to another computer system without a fax machine at all? And so that's the point: you can't just slap AI on top of an existing process, which in this case is faxing. Think of some new way to do the process. And I think that's the real difference in that 10% that we're getting at.
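[Editor's note: To make the fax-OCR idea concrete, here is a minimal sketch of that kind of extraction pipeline in Python. It assumes the pytesseract and Pillow packages are installed and the Tesseract engine is available on your PATH; the file name and patient-ID pattern are hypothetical examples, not details from the episode.]

```python
# Minimal sketch of the fax-OCR pipeline described above (assumptions:
# pytesseract + Pillow installed, Tesseract binary available on PATH).
import re

from PIL import Image
import pytesseract

# Load a scanned fax page (hypothetical file name) and run OCR on it.
page = Image.open("faxed_medical_report.png")
text = pytesseract.image_to_string(page)

# Pull a structured field out of the raw text -- this regex is a
# made-up example of the "text parsing" step Sam mentions.
match = re.search(r"Patient ID:\s*(\w+)", text)
if match:
    print("Extracted patient ID:", match.group(1))
```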
By the way, my brother and stepmom are both physicians, and I've seen this in their offices. It was fascinating when I showed up and my brother was showing me his office, and I saw this fax machine with just pages. He's like, they just faxed over this whole medical report, and I just go through it. And I said, man, at some point, someone's going to need to create some technology that makes this better. Yeah. Well, what's funny is we have the data in a computer system. We print it out.
Then we fax it, and then someone gets it into another computer system. The fax is an unnecessary step here. And so that's the key: just going in and saying, hey, let's AI up this whole system we've got is not going to get you into that 10%. There's value in it; I mean, you're going to save money by somebody not typing in the fax, but there's a lot more money in thinking of ways around that process.
Have you seen how companies are attempting to do this? Are they using internal talent, external talent? How are they approaching AI? Yeah, all of the above. It'd be nice if I could say, oh, no, always go internal, or always go external. But nothing in business is that clean. Nothing has that pat answer. Sometimes you need expertise that's just not within your company. On the other hand,
what's happening, I think, is technology is becoming much more of a commodity. I'll explain what I mean by that in just a second. But the point is, if it's a commodity, then what matters is what people know about your organization. And that's much harder to teach and buy. And so that's an argument for going internal. Have you seen the OpenAI Sora model?
I have not played with it yet. Have you seen it, though? Yes. You've seen the preview videos? Yeah, it's crazy. And actually, that's what's so incredibly frustrating about being a professor. I teach a class in machine learning and artificial intelligence.
I am so jealous of the professors who get to reuse their slides from last semester, because I put up the slides from last semester and go, oh my gosh, that technology, I'm like talking about MySpace and Friendster up here or something. Things are dated so quickly. It's amazing. So yeah, it's really hard. And it's progressing so quickly. So tell me about this class that you teach.
So it takes undergrads through how to do machine learning and artificial intelligence? Actually, to tie back to the point I was making earlier about the commoditization of technology: in our class, we do an image classification task with a number of classes, handwritten digits.
Within that class, using their laptops and code that we've downloaded from the internet, freely available to everyone, we can beat what would have been contest-winning performance five or six years ago. And we can do it with tools that they've just downloaded, on laptops that they're running in class.
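[Editor's note: For a concrete picture of how little code that exercise can take, here is a minimal sketch using scikit-learn's small built-in handwritten-digits dataset as a stand-in for full MNIST. The exact dataset, model, and settings the class uses aren't specified in the episode, so these are illustrative choices.]

```python
# Minimal sketch: classify handwritten digits with freely available
# tools (scikit-learn). Runs in seconds on an ordinary laptop.
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Load 8x8 grayscale digit images and split into train/test sets.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a support vector classifier -- this is the "press go" step.
model = SVC(gamma=0.001)
model.fit(X_train, y_train)

# Typically lands around 99% accuracy on this small dataset.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```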
That's really incredible. And actually, I'd like to think it's because I'm such an awesome teacher, but it's really the tools; they're just becoming so good. And the image models, the video models that are coming out now are truly, truly amazing. Are you teaching a 101 class? What type of competency level do they have entering your class? Actually, they have to know a little bit about Python. We have an introduction to Python class, so they know about
coding. But this is really much more about scripting. And what people find in that class is that so much of this is about getting your data cleaned up and ready. Then you press go on the model, and pressing go on the model is just a matter of waiting for it to do its churning. So much of that is, again, built into scripts that are available to people.
You know, crank it up right now. You've got your laptop; I can see it in front of you. Let's download it and start going. That's pretty exciting. Yeah, I did a couple of interviews maybe two or three years ago with a few companies that were either growing or already public, and they were helping with data organization, labeling, and modeling, just cleaning everything up. That was their entire business: helping people get their data structured and cleaned up. Yeah, it's huge. Yeah. Are you teaching prompting?
Yeah, actually, we do play with some prompting there, going through what makes a good prompt and what makes a bad prompt. And what's a little tricky for me on that is that it feels like very ephemeral knowledge. I'd like for somebody to retain something from class for more than six weeks, ideally past the end of the semester. And prompting is changing so quickly, too. It's really hard to
elevate that to a principle level. But what I think is more important is thinking about critical reasoning in this context. Because I think we have a... You've played with these models, apparently, or you're talking about them. Actually, let me put you on the spot again. Yeah, for sure. This is fun. Yeah.
Have you ever built a machine learning model in Python or any other coding language? I have not. I've experimented with them, but I have not built them. Have you played with ChatGPT or any of these other large language models? A significant amount of time invested in them, yes. We use them in the course of our business. I estimate it saves us about $100K a year at my company. That is huge. What's huge about that is, I mean, the magnitude is pretty amazing.
But what's huge about that too is how accessible it is. Because I talk to people, and most people have not built their own ML models. But increasingly, practically everyone is using these tools that are available now. And I think that's, again, toward that commoditization story: pretty much all of us are going to need to know a little bit about AI. Yeah. The way it emerged at our company: I had been tracking,
for about seven to ten years, the progression of models, the large language models and what they were doing. Because I saw them fairly early and I said, well, this is going to be interesting. So I checked in on it every year. When it entered, for lack of a better term, the public consciousness with ChatGPT in January, February of last year, what I wasn't expecting was
how much the world of subject matter experts split between the people who essentially shunned it, shut their minds off, and put their heads in the sand. It can't do what a human can do, or they did one prompt and said, it's not perfect, and they completely disregarded the entire ecosystem.
And then us. And to be honest with you, that first path is the one I leaned towards at first. I was like, eh, it's not that great. And then I saw some other people doing more advanced things with it, and I said, well, then I just don't know how to use it correctly. So I went back and said, I'm going to revisit this.
And that turned into a phenomenal situation for our business. We have a Slack channel dedicated to just what we're learning from GPT prompting. We pin specific prompts. We share knowledge between the producers. I'll give you one example, and you might be able to tell me there's a better way.
When we start a new ChatGPT conversation, we'll have these documents, essentially, that we paste into it to start, and then we can go from there. So we're like, all right, this is the baseline knowledge you need to put into the new conversation, and then you can ask it questions like this, and it's going to give you outputs like that. And so that's what we're doing currently to use it. But as I've started to see people come out with these custom ones,
essentially, I haven't explored those yet. Well, I think a lot of what the custom ones are doing is what your documents are doing, just at scale, right? You get a company and you say, okay, let's pre-train it for everybody, so they already have that information, their specific knowledge, within it.
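[Editor's note: As a rough illustration of the paste-in-a-baseline-document workflow Joel describes, and of what a custom assistant automates, here is a minimal sketch using the OpenAI Python client. The model name, file name, and prompt are hypothetical stand-ins, not details from the episode.]

```python
# Minimal sketch: seed every new conversation with the same pinned
# "baseline knowledge" document, here via a system message.
# Assumes the openai package (v1-style client) and an OPENAI_API_KEY
# environment variable; model/file/prompt below are made-up examples.
from openai import OpenAI

client = OpenAI()

# The pinned baseline document the team would otherwise paste in by hand.
with open("producer_baseline.txt") as f:
    baseline_doc = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        {"role": "system", "content": baseline_doc},
        {"role": "user", "content": "Draft show notes for this week's episode."},
    ],
)
print(response.choices[0].message.content)
```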
There are like 12 cool things in what you just said, though. There's a Pew Research study that came out last fall, so it's already horrifically dated. But it says the number of people who have tried this tool is phenomenal, and it breaks down inversely with age. If you're post-65, very few of those people have played with the tool.
Pew couldn't ask anybody less than 18. And, you know, 75% or so of those people have used the tool. And, just based on my kids, if you went lower than 18, they're all over this tool. That has really big implications for what changes in the future. And it's also how you use the tool. I'll give you another, separate example. So I wrote a fiction book, and I was
about 90% done; we were basically in the final editing,
in January, February of the previous year. So by the time ChatGPT came out publicly, I was wondering, is it possible to use this to help with the book? And I had a couple of hypotheses. I was like, all right, maybe if I put in, you know... how could I use it? To fast-forward through all of the examples of how it didn't work: what I found it was really good at was helping me frame stories. If the spy was going to break into the facility,
asking it for, like, 15 different ways that could happen. And then working with it as another person in the room ended up being the most effective way for me to do it. But you can't just say, write me a full book on this topic or whatever; it's not going to be what you want. Well, you're right on exactly the problem right now. Or, you know, "problem" sounds too negative. What's
simultaneously cool and difficult about this is that it is really just a tool. And it's yet another tool we've got to figure out how to learn. There are good ways to use the tool and there are bad ways to use the tool. Think of
the first caveman that picked up a rock. They either used that rock to build a house, or they used the rock to bang somebody on the head with it. The rock's the rock. I think that's really an important part of what we try to bring up in the podcast. The podcast is Me, Myself, and AI. Two out of three of those words are about people; one is about technology. I think we have yet another story where technology is important and it's critical, but how we use that tool is really important.
At what point do you think we'll be able to say that the AI is conscious? Yeah, this gets into a scary thing. Actually, we've had some of these same sorts of arguments here. I mean, it's a slippery slope, isn't it? We're so terrible about that. You know, am I bald? Well, yeah, but at what point did that happen? There were some thinning years and...
Well, calling these very binary yes-or-no things on a gradual slope is really tough. But you saw the same stories that I did about how, oh my gosh, it's conscious, it's sentient. It certainly seems that way. And certainly compared with a lot of people you end up talking to, it seems a little more savvy than them, right? Yeah, and I went down the rabbit hole on this and talked to anesthesiologists and people who are really close to consciousness,
and we don't even have good answers for what consciousness is, in a person or in a people. And so I was like, wow, this is still open; there's no consensus here. There are understandings of what consciousness is expected to be able to do to participate in society. Well, that's one of my favorite things about artificial intelligence: you ask somebody for a definition, and it typically has this shape. Artificial intelligence is blah, blah, blah, blah, blah intelligence.
And so, I don't know if you remember back in math days, but that's a foul. You can't put the thing on the left side of the equals sign on the right side of the equals sign. So what we're really phenomenally good at is defining "artificial." What we're really terrible about is defining that "intelligence" part, because it's changing all the time. Well, here's my base argument for this. ChatGPT is smarter than people that I know. I see you pretending you're not looking at me right now. Wow.
No, but yeah. I talk to all different types of people, from the people who are really low-level designing the large language models on up. They have this picture in their head: it's not consciousness. It's just not. And I go, I know, but the result of what you're doing, my interaction with it, is equal to or greater than interaction with other humans. And so if that's the bar, then it's there. Well, hey, let's call that bar stupid just for right now. I mean, I think there's a name for that:
The Turing test. I mean, what you're talking about here is that can this machine act in such a way that it fools you into thinking it's a person versus a machine? This is the Alan Turing test from long ago that we're all super familiar with. Well, we have been chasing that Turing test left and right for all these years.
And like a dog who's chased a car, we've caught the car and we don't know what to do. Like, what do we do now? We can fool people into thinking that something is human. Okay, now what? Was that really the goal? We've got something like, what, 8 billion people on the planet. I don't think we need to necessarily replace those. We need to be thinking about things that this technology can do that we can't do. Otherwise, we're just in this
replicating mode versus thinking about what we and the machines can do together better. No, you're exactly right. And that Turing test, everybody: I've been doing the show almost 10 years now, and when I was having these conversations in the earlier years, it was like, the Turing test, that's what we're going for. And then there was just this day, I remember doing some interview, and the person was just super dismissive. They're like, yeah, but that doesn't really count.
We've been working towards that as humanity for decades, but that doesn't really count; now it's the fact that it can't do A, B, or C. And I said, whoa, how quickly we went right past the Turing test as the point. Well, we're terrible about that. Back to the definitions of AI: one of the definitions almost always includes a phrase like "that humans normally do," something about normally or usually. So it poses this slippery slope. If you went back into the 1700s and gave someone a quill that turned its ink red when you misspelled a word, that would be witchcraft, right? You'd get burned. We'd take you up to Salem and dunk you and leave you underwater for a while. But now? Spellcheck. Whoops.
That's not artificial intelligence, right? So we just have these ever-expanding, changing expectations of what the technology can do. I don't think that's going to change. We're always going to want it to do more. Yeah, I get mad at autocomplete. I'm like, you can't read my mind yet? Heck, come on. Get out of here. Back to your fiction book, though. One of the things that worries me in this scenario is...
I don't think, knowing what I know about you, that you're looking to get an average fiction book out there. Is your goal to wake up in the morning and say, hey, what I'd really like is a statistically average fiction book? Negative. I would never. I do everything exceptionally. Exactly. Like all the kids at Lake Wobegon, you are better than average. Yeah, but I think that's the crux of what we're grappling with right now: this race to mediocrity.
So what we've got is this phenomenal tool that's getting you to mediocre amazingly quickly. And there are two options. I complained earlier about the maybe-pick-A-or-B framing, and now I'm going to make you pick A or B here. Is this a tool that, A, helps us and gives us a huge head start, so we get to mediocrity and then we can build from it?
Or B, is this a tool that is a crutch that gets us to mediocrity, leaving us without the tools and resources able to go beyond mediocre?
And I think that's really what we're grappling with with our modern generative tools. Yeah, and I think that's going to come down to an individual thing. Yep. You know, like me. And actually, maybe a situational thing, too, within an individual. Like, I don't need to be awesome at everything. Actually, one thing I do in class is I'll type it in there and have it make a theme song for the class, like a little poem. And it does phenomenally better than I will ever do at that.
And I'm okay with that. Its mediocre is so much better than what I'm going to do, and I'm going to choose not to compete on that thing.
On the other hand, there's a lot of stuff that I'm hoping to do a lot better. Anything that I write, I want it to be not average. I don't want it to be mediocre. I want it to be exceptional. So we're going to have to pick, within the individual but also within the situation: is this a situation where I want to be awesome, or is this one where, well, mediocre is okay?
How can leaders explore this? We've got a lot of leaders who listen to the show, and most of them are in technology, all the way from first-time leaders through VPs of engineering to the CTO, CIO, CISO, the whole stack. The reason people listen is to become better at leading their companies. So how should a leader begin to explore this within their organization? What's really cool:
The example I gave you earlier, about have you built a machine learning model? No. There's a whole lot of setup and infrastructure you've got to get in place to even be able to run that model in the first place. I think about that as a learning curve. I don't like the phrase "steep learning curve," because it's not steep; it's actually very shallow. It takes you a whole lot of effort to get to a point where you can get something back out of that tool.
In contrast, the modern tools that we're seeing emerge now around generative AI are highly accessible. You can go out there and use them right now. And your listeners ought to be playing with them, because everybody in their organization is playing with them, figuring out what they can and can't do. But when you play with them, I think one thing to remember, back to your example of how you played with it:
If you hired someone new in your company and they did not perform awesomely on the first day of the job, do you fire them, or do you give them a little help to get better? I'm hoping it's give them a little help to get better, because you'd have a pretty empty company if you fired everybody who wasn't perfect on day one.
I think that's the model we have to think about when we're using these generative tools: how can we help them progress and get better, and not just categorically reject what they do when they make a mistake or hallucinate. Are you doing consulting for any companies?
Oh, not many. No. I do some engineering consulting and software on the side, and that's what's fun for me. Have you gotten involved in the policy aspect at all? Has the White House called you up and said, hey, we need to know what's going to happen, we're just going to ban all models over 20 gigs, that type of situation? They must have lost my phone number. I don't know what's happening there. That's okay. I think the regulation stuff is interesting. Here's what I would say when they do call, since you asked:
We have a good long history of being able to handle new technologies. And we're treating this like it's a magical exception. And I don't think that it necessarily is. I'll give you a couple of stories. One is in 1906, Upton Sinclair wrote a book called The Jungle. And it was about the meatpacking industry in Chicago. And it was absolutely disgusting. And what it did was it shone a whole bunch of light on a situation that was untenable.
It existed because nobody ever went inside that meatpacking plant and looked at it, right? It was all hidden. All we knew is that when we ate stuff, we got sick. Fast forward to right now: you can open practically anything and eat it without being worried about the supply chain that brought it to you. You can go into any restaurant and be pretty comfortable that you're not going to get sick. And in fact, once a year we'll have an E. coli scare, a restaurant that makes the headlines.
These are headlines because they're unusual. Because what's happened is that we've built an infrastructure of oversight and trust around food packing and around restaurants. In contrast, we have none of that with the technology industry right now. Every single one of these models is happening behind closed doors. What is OpenAI?
Well, one word it is not is open. I know. So I don't know about the AI; we can debate whether it's AI. But the open part has really disappeared. And so I think we're going to have to think about how we treat these issues
and oversight. You know, we can buy stock in companies without knowing what their books look like because we trust accountants to go in and look at them. So there's a model; we have models in society. I used to work at the United Nations with the Atomic Energy Agency and the weapons inspectors.
We don't want people building bombs. Okay, so we have an infrastructure around inspection and testing, and a little bit of carrot and stick, where you get power reactor information to help you run these power reactors better and keep them from blowing up. So we built this infrastructure around that, and, knock on wood, and I'm not sure when this broadcasts, but we've not had any huge nuclear incidents since that
organization developed. So it's not like this is the first time we've ever thought about a technology that is powerful and could do some harm, where we've got to figure out how to regulate it. There is a unique aspect to it, though, compared to those. So you've got those two examples, The Jungle and the atomic one, and those are both about hurting or killing people, right? Those are some of the examples. But
what about job loss? That's one of the things that I've thought about. First of all, I look back at the stories of the horses and the cars, the common ones, and the stories of the mail system, everyone thinking the internet was going to put the mail system out of business, but it just exploded because of deliveries. So I'm like, all right, I get those.
However, when I see job consolidation, essentially, one of the things that's different now from those others is the speed at which it can deploy. So, for example, there was a lag time from horses transitioning to cars; that's, let's say, a year or two, right? And there's a lag time with these technologies, but we've compressed the lag time quite a bit. Do you think
we have the infrastructure in place to handle that? Like, if we just wake up tomorrow and 30% of one industry's jobs are just gone because of this technology, how would we respond to that?
Yeah, I mean, certainly, if you wake up tomorrow. And I think your point's valid about the increase in speed; it's one thing we're grappling with. You know, there was a call six months ago to pause all this. I think there's too much in that prisoner's dilemma that lets people defect: in a shocking turn of events, the people who are ahead are the people who want to pause, and the people who are behind are not interested in pausing. And
in other ways my analogy breaks down: most of my analogies had physical goods involved, and this is purely information goods, so it moves across borders relatively quickly. Those are not perfect analogies. But I think there are three things that are going to happen in terms of job loss. And the one that gets the most attention is
Jobs are gone. You come in tomorrow and the machine is doing your job and this is your 30% example. Job doesn't exist anymore. That's a scenario where machines have come in and taken the job entirely. I'm pretty doubtful on that one to start with. I get the fear, but I'm pretty doubtful.
I think the second one is much more real. And that is that some other human out there takes your job because they're better at using those tools. So there's a bunch of tools out there. And like you said, you are using these tools and you get better at them and you learn how to use them and you learn how to get more productive with them. And suddenly you are more attractive than your human compatriots. That's a very real risk. And that's the, you know,
machines won't replace people, but people using machines will replace people not using machines argument. I think that's very real. The third one, to go back to your example here: let's say that your organization doesn't use these technologies at all and another organization does,
Well, then you get an organization-level wipeout. The cost structure of an organization that is using these technologies drops, and the other organization, not using these technologies, is no longer cost-competitive. And so that is a scenario for massive job losses too. It's just not at the individual level; it's at the firm level.
And I think that's another one to watch. The first one is something that catches our minds. But numbers two and three, I think, are the bigger things going on right now. You mentioned, Joel, the $100,000 a year you think you save. Somebody out there is spending that $100,000, and they won't be able to do it for too many more years before you
put them out of business, right? Well, we're always trying to put ourselves out of business. Because if you don't, someone else will. Yeah, yeah. Well, we're seven years or so into this business, and obviously there's a hunger curve, right? You try to stay hungry the whole time, but you're never going to be as hungry as before you made anything of yourself. And so those people are just looking, unemotionally, at the marketplace, seeing what tools are available, grabbing them, and trying to achieve outcomes,
versus, like, this is the proper way to do it, and this is the best way to do it, and this is the way the subject... You know, for example, Josh. Josh and I had these conversations maybe a year or two ago, when the AI
got so good at post-production audio enhancement that it no longer made sense to do all of this stuff manually. Instead, you can just say, all right, AI is going to do that section of my workflow, and then I'm going to spend more time editing the conversation points and the flow of it.
But there are a lot of people out there that are still like, ah, this is the right way to do it. This is the real, organic way to do it, and all the real artists do it this way. It's like, well, look at outcomes. Yeah. Well, let's pull two points out of there. One is most people are doing stuff that they don't want to be doing in their job, or at least some of it.
I mean, for me, I get an email that says, hey, professor, I missed class; did I miss anything? And my snarky response is always: nope, I looked up and saw you weren't there, and we just shut everything down. But no, I temper my snarky response. And if I could have AI temper my response for me, so I could write my snarky version and then have it tone it down and make it friendly, that would be great.
Or answering questions that are on the syllabus. You know, that's something I do a lot, and I would not say I'm adding a lot of value there. We did a study a few years ago, and we asked people, hey, what do you think about artificial intelligence? Do you hope that it's going to do some of your tasks, or do you fear that it's going to do some of your tasks? 73% of the people said they hoped it would do some of their tasks. 33% said fear.
And I think that's where we are. Now, I'm not saying that those numbers are not going to change as we get more general or more knowledge-oriented tools. Maybe those numbers change. But right now, we've got a lot of people doing stuff that they don't want to be doing. And that's probably not adding value. I'm not going to put people on the spot here, but I don't know about the post-production example. Is that fun work or not fun work? It may just be tedious work. Yeah, Josh, cleaning up real bad audio.
Yeah, where it excels is the audio that sounds like it was recorded in a trash can. That's never fun to listen to or to work on, and it's much more rewarding just to have a machine do it, because it sounds better. And you get to focus on the content, what's important there, and what makes a difference.
And the second thing to pull out of what you were saying, Joel, is that you switched from the word job to skill or task. I can't remember exactly what you said, but I think that's the way to think about it. Our jobs right now are this composite of hundreds of things that we do on a daily basis. Some of those are more amenable to computers, some are less. And we've got to figure out where those are. And again, that comes down to management, because
figuring that out is management. Figuring out where the ROI is for this task versus that task is the crux of managing scarce resources. You can't automate everything.
I want to make sure we touch on Optimus. Have you seen Elon Musk's bipedal robot, Optimus? No. Oh, the actual robot? Yeah. Yeah, yeah, yeah. Okay, yeah, sorry. The thing walking around, folding laundry, things like that. Have you seen it? Yes. Actually, what do you think?
I can tell you're excited about it. Well, we're suckers for convenience, humans. And so, like, I would typically say, no, I'm not letting that bipedal robot in my house. But then the moment you say, okay, for $200 a month,
your house is going to be cleaned all the time. It cleans while you sleep or when you're at work or whatever. Your meals are going to get made. Your wife's going to be happier. Laundry's going to get done; everything's going to get done. You're living the life of Jay-Z right there, aren't you? Right? And I'd be like, yeah, my disdain for the dystopian future and my love for cheap convenience are at odds. Yeah.
What I think is interesting about that is it also speaks to our fascination with the biped. When people think about artificial intelligence, they immediately gravitate towards these videos they've seen of the Boston Dynamics dogs or these humanoid robots, and they all look like humans.
I don't know who decided that was a good shape. Like, why is that what we want to look for in a machine? There are lots of shapes of things that would be more useful than people-shaped. Now, I admit you might need some people-shaped things to do jobs that were designed for people originally, but there's no reason to say that people shape is the right shape. And it gets us down into this
humanoid-robot thinking, which I think is kind of limiting in scope. You think it's the anthropomorphism? Yeah. We do it with dogs; we've been doing it with things forever, and we're just doing it with the technology now. Right. But there's nothing that says that's the right shape. And I think that's where, you know, back to the whole why
have the machine read the faxes: well, why have a shape that is the shape of a human? I mean, there's some convenience of form factor in using the ironing board that's at my height versus an ironing board that would be most optimal for a machine, but it won't be long before we switch to that. I wonder if those mirror neurons are partly to blame.
We like things when they're like us, right? Exactly. Yeah. Oh, wow. I think one of the interesting things about this interview is that in almost every section of questioning I had for you, you brought up a completely new thought that I hadn't thought about. Yeah.
That's a lot of pressure now. I like you. I like you quite a bit. I was really surprised when you talked about the threat being the organizations that use the technologies out-competing, in a free market, the organizations that aren't. That's what's going to happen. They're going to collapse. Yeah, that's not even far-fetched to think about. No, no.
And again, we get so focused on that first example that we lose track of the two and three that are going to spank us in the backside. I just want to talk with you about professor stuff for a minute. What are you learning about this next generation? How long have you been a professor? More than a decade? Gosh, yeah. Okay. What's the trend you've seen changing with the students coming in? Oh man, you're just like, let me lie down on the couch and talk. Yeah.
No, I mean, I think it's pretty fascinating. Kids are getting exposed to these technologies much, much earlier. And what that does to us in education is that people are coming in with a set of skills that they didn't used to come in with. We, in theory, used to provide those skills, and now they're coming in with them. So then the question is, what do we provide now? Can we build on those? So I think there's a shift.
And I think we can address the shift. But what I'm more fascinated by is the dispersion, the variance in that. What I mean is that we have some people who've gotten super curious about technology and come in just amazing, on top of things already, and I'm not sure what I can teach them, right? On the other hand, we also have some people who seem to be getting further and further behind. And so what that means is that the bell curve we think about teaching to is changing.
A bell curve is really nice in class if it's tall, because that means it's narrow. Then I can talk in class about the same topic, and I'm going to bore two or three, and I'm going to lose two or three, but I'm going to hit that sweet spot in the middle. But when that thing spreads out, I'm in more trouble, because I can still cover the same sliver, but my tails get bigger and I lose more and more people, either through boredom or through not being prepared.
And I think that's actually what I'm most excited about with artificial intelligence right now, because I think it can help with that. We've spent so much time thinking about what we can teach the machines. That's back to the Turing thing we were talking about earlier: can we teach this to be like a human? And that puts us in this space of:
I am the knower of all. I am the presence which knows and bestows knowledge upon everyone. And that is just a bad mindset to be in, because it positions me as the giver of knowledge and students as the empty vessels to receive it. There's a whole line of research on this. But what I'm excited about with artificial intelligence is being able to meet people where they are. Ideally, we would have one-on-one teaching, but it just doesn't scale.
So to what degree can we actually use artificial intelligence to help us learn? And I'm not saying teach the machines; I'm saying, how would the machines help us learn? Go back to your example of the ironing and Jay-Z. If Jay-Z wants to get fit and have a personal trainer, he's got a personal trainer on staff constantly, ready the moment that he's ready to exercise.
Whereas me, when I peel myself off the couch, chances are there's not a personal trainer hanging out and waiting. But introduce technology into this. And we've seen Peloton, and I'm sure others, I don't want to particularly shout them out, but Peloton has a device that sits in your room and it can say...
You call that a plank, buddy? You know, that's not looking so, or I thought we said 12 pushups. That looked a lot like six to me. Or, you know, so these things that we can have the machine, it probably is not as good right now as a personal trainer, you know, as an individual human personal trainer, but it's there available and it's customized to what you're doing at that moment.
I think that's very exciting. I don't know if you use something like Duolingo. The goal of Duolingo is not to translate language; the goal of Duolingo is for you to learn a language. For most people, it's to learn English. You can have customized language instruction at scale. It knows exactly what you know, it knows exactly what you don't know, it knows what you're good at and what you're weak at. We can have human improvement that could be phenomenal. And I'm very excited about that.
And, you know, you think about how you may have learned language growing up. You may have had a teacher who, through years of experience, learned good ways to teach. But how did those good ways get propagated? With Duolingo, if they figure out something that helps people learn faster, they can scale it across the world tomorrow. And that's pretty exciting.
One of the examples I like is the Fosbury flop. Dick Fosbury invented the high jump method where, instead of running and jumping forwards over the bar, you jump backwards over the bar. In about two years, the whole high-jumping world switched from forwards-over-the-bar people to backwards-over-the-bar people. Now, this is related; I'm not going off on a tangent here. Think about the companies putting sensors in ski boots, monitoring in real time how you're skiing.
What's going to happen is that somebody is going to invent some crazy way of skiing that does better, performs better in some way. I don't know what it is. They're going to record it. And the next day, their engineers are going to look at it. And then the next day, it's going to be in the advice that everyone else using that device is using. Does that make sense? Yeah. So we have the ability to push out what we know at scale quickly. Again, these are ways that humans can be better. And I think I'm super excited about that.
Yeah, my kids use Simply Piano. Have you come across this application? And they have really mastered how to break it down so I can put a four-year-old or a seven-year-old in front of that thing, and it'll hold their attention while teaching them step-by-step how to play this piano. It's fantastic. See, this stuff is amazing because, again, the goal of that tool is not to play the piano or even to play music.
The goal is to help you play music better. And I think that's just an untapped resource that we're really only getting started thinking about what we can learn from the machines. Thanks for listening today. We'll be back with new episodes of Me, Myself, and AI on September 17th. Please tune in.