Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts.
Hi, everyone. Allison again. Really excited to share today's bonus episode, in which Jennifer Strong joined Sam and Shervin to talk about a host of podcast- and artificial intelligence-related topics. They will be talking about their favorite episodes of Me, Myself, and AI, what goes into producing a podcast, and other really fun stories about what they've uncovered as they've researched and told stories about technology and business.
We hope you enjoy it, and again, we really encourage you to continue to rate and review our show so we can continually improve and make this a podcast you really enjoy. So let's jump in. I'm Jennifer Strong from the Shift podcast. I'm Shervin Khodabandeh. I'm Sam Ransbotham. And you're listening to Me, Myself, and AI. Hi, everyone. Today we have a special kind of guest for you. We're joined by Jennifer Strong, who is an audio journalist,
an award winner, and the creator of the Shift podcast for Public Radio Exchange. Jennifer, welcome to the show. Thanks so much. Jennifer, you've been covering AI for quite some time and have perspective on what is going on and what you're seeing. What is top of mind for you these days? What isn't top of mind for me right now? I feel like, yes, I have been covering AI for a while. I think 2017 is when I first entered this beat over at the Wall Street Journal. But
it's not my background. I feel like I need to say up front, you know, I'm in no way a technologist. I came at this as a journalist. I'm endlessly curious. Journalism has always felt like a great privilege, this opportunity to be trusted with people's stories, to see the world and witness particular moments in history as they unfold.
And I feel that way about AI, too. I would say right now I'm dedicating a significant portion of my time to gathering oral histories, because I think in the not-so-distant future, maybe even just a few years from now, we're going to look back at this time and all of the change and be happy to have that body of work. That aligns well with your "I was there when" type of approach from In Machines We Trust, trying to capture these
sound bites and moments in history, because one of the episodes I was paying attention to made the point that
this is probably a fleeting moment here. Yes. It's nice that we're capturing some of that before we blow past it. You're right, Sam. I started this work at MIT. I was a director at MIT Technology Review until this past summer, just collecting all of these stories. And we're not quite ready to announce this yet, but the oral histories being collected both in my last role and in my current one will hopefully find their way into museums sometime in the next couple of years. It's something of a time capsule.
Anyway, you know, I have my favorite episodes of the show. I've been a fan of you guys for a while now. You're very kind. I'm thinking Ya Xu last year. Well, no, it's true. It's fun. Also, a lot of the folks you have on are people I know too, and it's really a lot of fun for me. But thinking of some of my favorite episodes, Ya Xu from LinkedIn was one. More recently, you had Naba Banerjee from Airbnb. And I'm curious, what are some of your favorites or most memorable episodes that you've worked on?
Do you want to go first, Sam? Yeah. So I think it's kind of like saying, all right, which of your kids do you like the best? You know, you can't help but malign one if you mention some others, but...
You know, there are some that just keep showing up over and over again that we end up bringing up as examples. There's a recency effect. I thought our NASA episode, one, amazing person and super interesting to talk to. And honestly, flying helicopters on Mars is cool. But also just how interesting it was when we talked about all the planning that must go into using technology in that context. And then, at the same time, she was searching for the unexpected. They want to find things they are not expecting. And, you know, I think that's an interesting analogy. I was thinking more about this, that a lot of what we're trying to do with this technology is push a lot of the boring stuff into the background and highlight all the interesting stuff. And that's clearly happening on Mars, but it's happening in lots of places. So NASA was a big one.
Yeah, NASA was absolutely one of the coolest episodes, I think, in tech podcasting this fall. That was an awesome episode for anybody who missed it. I agree. I mean, out of the, I think, 60 episodes we've done, it's hard to say which was the best one. But what is interesting for me is when I take a retrospective,
I think about the sound bites from many of them that have become so true in my experience with AI and my work with my clients and my colleagues, from "The first day is the hardest day," which comes up a lot,
to curiosity and learning with Amit Shah of 1-800-Flowers on what kind of people you need to hire. And that episode was several years ago, right? And the question was, what kind of people are you hiring, what kind of technical backgrounds, et cetera. And he said, look, I want people who love to learn. And that has become so true now: if you ask companies what kind of backgrounds they're hiring for, it's people who are open and curious.
It's interesting, too, that most people listening probably assume that Shervin and I hang out all the time. But we actually met for the first time in person a couple of weeks ago at the World Bank, where we were doing an event. But it was kind of fun. I felt like we were meeting people that had a whole bunch of friends in common, because we were sort of saying, oh, you remember X, remember Y. So we had this giant shared history from this big list of guests that was fun to go through.
And also the research, right? Well, yeah, that's where it all started years ago. But it's fun because then we can speak in code words. Like when Shervin says something like "the first day is...," I know how to fill in the rest: the worst day for technology, because this is a theme that keeps coming up.
Arti Zeighami at H&M saying, well, you don't just do tons of AI in one place and ignore it everywhere else. You've got to tighten it like you tighten a tire, one little lug bolt at a time, round and round. And so some of these themes keep coming up.
And I think that comes into some of the design that Shervin and I think about when we're talking about the show. With Allison, our producer, we go through this a lot. We're looking for things that we think will be enduring, and that's intentional. We're not trying to chase a headline. While we're recording this, there's some crazy thing happening in AI, because that's what constantly happens.
But we're hoping for the sort of stories that will last longer. And that's very intentional. And I feel like that's working. Yeah, I feel like I just learned something. I can't believe that you two didn't know each other. So how did you come to be making this show together, and what motivates you to keep going?
Well, Shervin, he just adores working with me. I mean, that's clearly what it is. Actually, it is true. You know, we started back in 2019 doing our first series of reports on AI implementation and AI strategy. So that was a proper research piece of work between our two organizations. That's when we first, I think, officially met.
And ever since, it's been such an interesting partnership, because it's been very intellectually stimulating, I would say. We both have the same educational background, right? Chemical engineering. That has to come up a lot. Did you also do your PhD in chemical engineering? No, no. I have a real business degree, so don't besmirch me. So we do have that background. You know, Sam is a professor, and I've always admired, professionally,
professors and people who impart knowledge to us. And you ask about the show, and the show itself is a bit of a COVID baby, because we were doing interviews with people for our research program to pull out their stories. And it was frustrating, because we'd interview someone and get a bunch of great stories. One of the 10 stories they told would fit with the research theme, and so we threw away nine.
And we'd end up putting one or two sentences from them in our research. And it just made my little soul sad to throw all this away. And so then I think Allison had the thought: if we just recorded these things, we could use them. And so we started to think, all right, this is an interesting thing we could do; this is content that people could use in all kinds of different ways. People can walk their dog and listen to it, or commute and listen to it. But the show itself is a bit of a COVID baby from that perspective.
But I also find that when you let people just talk, a lot of other content comes out that just would not come out in a more structured
interview. Whereas the show, as you can tell even by how I'm talking right now, is completely unrehearsed. Other than reading the background of our guests, we don't send them questions in advance. I'm loving this very much. Also, it's pointing out how much I suddenly feel like we have in common. So journalists never send their questions in advance. It's actually an ethics violation. You don't want people to prep in that way. You want their honest reaction to a question.
But for me, audio, the reason I focus on audio journalism, is for this reason: because I don't want them to have to just answer this narrow slice of something. I want to hear their stories. I want to learn something I don't already know enough about to tease out that one thing that propels that one idea. That's right. You're not shooting for a headline that's going to be grabbing somebody's eyes in a written piece.
When we first started this, David Lishansky, our sound engineer, sent me an article about what resonates well in podcasts. And it really reinforces what Shervin just said: it's not a headline chase. It's when an emotion comes through. How did something feel? And I would never have thought of that before he turned me on to that idea. Yes. I remember a lot of emotion coming out when I told Dave that I forgot to record a session. Yeah. Unfortunately, yeah.
I have the recorder here, but nothing's moving. And I think I forgot to press the red button. I'm going to check right now to make sure that mine is moving. We've all screwed that up, so don't worry. Oh, we have all screwed that up. What's the book that either of you hasn't written yet?
Sam wants to write one. You have to pick this, because I'm nagging Shervin right now about this. So we're going to make him answer this on the spot. What's the name of the book, though? I think, well, I don't know. See, we're going to workshop this and it's going to be a problem. But I think Beautifully Boring is my working title, because I think we get so interested in chasing these monstrous things: oh, this looks like a robot, this looks like a machine, this replaces a human. Or, it beat the Turing test and I couldn't tell the difference.
I don't know. That to me just isn't interesting. I think there's so much more going on in the boring corners of AI, both good and bad. And I think we have to worry about both the good and the bad. The insidious nature of the recommendations we're seeing is a boring topic. It's not as fun as, say, a Boston Dynamics robot, which is cool, but not, I think, the thing that's going to change the world.
So one, I completely agree with you, Sam. The title of an article I wrote at the Journal was that for business leaders to embrace AI more, it needs to become boring. And the other thing is, well, in regards to those Boston Dynamics robots, I did spend time with one in
recent months that disobeyed me. And I now have to disagree and say, I mean, you can't eventually have a robot helping, for example, an elderly person if it's going to hurt them. And there are times when we're going to ask a bot to do something that's not safe, right? But I spent a few days after that encounter thinking deeply about it. But we digress. You were about to tell us about this bestselling book that you plan to write. Yeah.
Oh, you're going back to that. I tried to. Well, actually, we did get on a technology tangent, and something that Shervin mentioned earlier is, I think, a recurring theme. See, Sam's still trying to dodge your question. The point is, we're Me, Myself, and AI. And two of those words are human. And one of them is technology.
And I think that's something that has come out over and over again. And that really needs to be a theme of whatever we talk about. No, two parts human, one part technology. Exactly. Is that your recipe? That's interesting. That's sort of a BCG rule, right? When you think about it, we have a 10-20-70 rule, which is: 10% is the AI, 20% is the data and digital and technology backbone of an AI algorithm or system, and then 70% is
the people and the organization and the ways of working and all that. So all those are the complicated things that get messed up. And so that means the title of the book would be
Two-thirds human? That's a good title. Two-thirds human. Let the listeners chime in on that. Because that's another thing, you know, when we think about, and we're being a bit nostalgic now, all the people listening and the things they've brought up. We keep running into people who say, oh, I listen to this. And it's always fascinating to me which episode really resonates with them. And I'm really poor at predicting what's going to be the one that they mention, you know, given what I know about a person:
what are they going to pull out? And that's been a fun part of it as well. Oh, that's one of my favorite parts of making podcasts.
We had an episode back at The Future of Everything, which is a franchise I started, that podcast for the Wall Street Journal, that once a year would just spike on the charts, and we couldn't figure it out. It was about antimicrobial resistance and some of the efforts that were at hand. And we found out that a nursing program, I guess it was considered a continuing education program or something, was using it years later. And so every time it got used, it would spike on the charts.
Anyway, fun things like this. But it was not something I would have predicted. This was a rather nerdy episode that I taped at Duke, and, I mean, who knew? Jennifer, how do you prepare for your podcasts? So my podcasts are weird in that they're really different. My field recordings require quite a bit of prep, but it's usually a particular kind of prep. I mean, to use an example, I was in an experimental fighter plane last year, and we went out over the Pacific Ocean and got into dogfights with AI. Okay.
So preparing for that is a little different than going out to tape oral histories, where I just want to hear someone talk about how they're preparing for governance or, you know,
Maybe a different way to redirect that: you started by talking about your background being specifically not technology, but then this is a heavy technology field. How do you get over that sort of, oh, they're going to say something weird that I've never heard of? I have that fear going into every episode, because we bring in these smart people and I'm like, oh, my gosh, they're going to say something that I don't know, and it's going to be humiliating.
This is a luxury that I think I get because I don't have the degrees that you do. And also, I feel like, in order to know where we're going the next five or 10 years and to be active in our society, everybody needs some baseline understanding of AI. So that's what I'm in it to do: to document history as it's going down, hear the stories of people, and also to really understand what's happening. So I have no ego in asking somebody to explain
what I don't know, because I will assume that a lot of our listeners won't know it either, because they also don't have your background or degrees. Yeah, I actually think she's at an advantage. I think so. But I would also say, in some specific examples, I don't trust that something works because I know technically it can. I trust that it works because somebody shows me it works.
There were some specific moments where friends, colleagues, researchers were sure, like in 2020, that face rec was not going to work on somebody in a mask, except it worked quite well on people with masks. It took, what, 90 days for that to be working everywhere. And about the same time, specific groups, and I'm thinking of one in particular in Russia, got their video analytics working. So you could not only
do face rec on the mask, know how well the mask was worn, see who you were associating with, and read your license plate at the same time. You know, I didn't have it in my head that it couldn't work for these five reasons,
or that it wouldn't work yet for this reason. And then, going forward, that's, you know, happily how I found myself in that fighter plane, because I can't just take your word for it. I mean, my phone screen sometimes is hard to read when I'm standing on a sidewalk. So why am I supposed to believe that an AR fighter jet is going to be perfectly visible during sunset over the Pacific Ocean?
You know, just saying. I think we could take a lesson there. We need more fighter jet rides. We're doing something wrong here. Exactly. First of all, yes. But tell us about the fighter plane. So what were you reporting on?
So, looking at the future of, well, looking at how AI is being used to train pilots in the future: this is extremely dangerous. We have lost more fighter pilots in training accidents than really any other way for a significant period of time now. You go from basically being in a simulator to being in a real plane. And the first time you land on an aircraft carrier, the precision required is enormous; refueling exercises, too. And then there's
the other part of this: how often are you going to train against a Chinese jet, right? Our pilots and other pilots are training against themselves, and they will pretend that that other aircraft is not the aircraft it is, being flown by someone who wasn't trained the way they were. What could go wrong? And so there's a lot of tech, including in particular this group called Red 6 that I was out flying with, that is looking for what comes next.
So that's why I was up there, to see if it actually works, because it's a little bit hard, again, for somebody to say, oh, yeah, absolutely, the planes will be vibrantly colored, you'll see them in the sun. And I'm thinking, you know, my VR headsets also don't work if I leave my living room. So how are you flying at speed, you know, pulling Gs, without any type of trouble? I think that's a great sort of, yeah, because, I mean, we hear all these stories, and I think being that kind of cynic is a great skill. It's a curious skill.
Well, thank you. I'm a curious cynic. I'm not mean about it. I just genuinely want to know and see it. I do have to say it's the only time I've ever thrown up for a podcast. I tried to edit that one as politely as I could. Yeah. I was trying not to ask that question. I'm glad it came out. I thought that you would either say that, you know, you're proud that you did not throw up, or that you did throw up. Oh, no. No, no. So the guy who took me up was a former Top Gun pilot, and he was having fun.
And at the end there, you know, they love it. Some new kid. Yeah. I mean, friends, they handed me a sandwich baggie before we left. This sucker didn't have a Ziploc on it. I don't know what I was supposed to do with it upside down, but yeah. In our shows, we have this five-question thing where we ask rapid-fire questions of our guests, but I don't really know; it doesn't fit well here. And so I'm going to take a little segue here, and I'm going to rapid-fire different questions at different people.
So this is going to be new. And including you, Shervin, I'm looking at you. No, you're completely surprising me. Okay. So I thought about some mean questions for you, Shervin. Like, what's your favorite part of working with me? Or, you know, just to really put you on the spot. But let's look at this whole thing. What about the podcast has gone better than you thought it would, Shervin? The whole thing. I mean, it's a lot more fun.
Jennifer, why do people like podcasts? Oh, take me somewhere, teach me something.
NPR used to call them driveway moments: the moment you're listening to a story and you just have to sit there; you don't get out of your car until it finishes. Yeah, we're looking for connection. And I think never more so than in the last few years.
Mm-hmm. But so maybe a follow-up. We never do follow-ups on a rapid fire, but you could get that from others. You could get that from the printed word. You could get that from video. You could get that... Why? Why? Can you? Can you? What about the human voice? It's the first thing we experience, even before we're born. I think it's the sound itself that
is special. I say this while we're talking about AI, and I don't know, maybe one day we'll feel that way about synthetic voices, but who knows? To me, though, the voice is the ultimate human experience. Well, actually, I'd like to modify my answer a little bit. Part of this that I did not really appreciate while working with you prior to the podcast, Sam, is what a great, you know, radio voice you have.
It's so true. And this is in addition to a TV face. What he's trying to say is, a face for radio. What a great radio voice you have, yeah. Okay, so what about our guests has surprised you, Shervin? How do I answer that question? I don't know. See, it's all fun and games when you're the one asking the rapid-fire questions. That's right. You don't have any sympathy for our guests, but now you're all... Well, Sam, I have a question for you. What about our guests has surprised you? Yeah.
Is that fair? You know, I thought when we started this that things would be much more tech heavy, that there would be a lot of discussions about this algorithm, that algorithm, this thing. And
You know how these are normal people doing this? And I'm not normal. These are not superhuman people. These are people who are curious, trying, interested, working hard, not doing it perfectly the first time, willing to improve it the next time. And, you know, I think there's been a certain sort of aha for me that it doesn't have to be that way; these are not sort of superhumans
that can magically do things that the rest of us can't. I mean, 0%. Two-thirds human. Yeah, I agree. Well, here's another statistic: 0% of the people in the world are born knowing how to do artificial intelligence well. I mean, that's a... I'm not going to cite my source on that, but I think our guests have shown that they've gone through a process; they've learned. One of the things we ask them is their background, and it's always fascinating how they connect all the pieces of their backgrounds
to what they're doing now and how it informs that and connects that in weird ways. I have a question for you guys. What's something you've changed your mind about? I've changed my mind about how fun it is working with Sam. I think it's more fun. Which direction did that change? In the direction of more fun. Actually, I think I've changed to...
maybe a fear of the speed. Like, I thought these things were progressing and we were making progress or whatever. And now, as we've gone through more and more of these episodes and more and more of these examples, I see that maybe so much of what we're going to do wrong here is because of pace. So you think we're going too fast? You know what, actually, this is not me, you know, calling for a slowdown or a pause, because I don't think there's any hope of that ever happening. But
these things are moving so quickly; they're moving faster than organizations can digest them. They're moving faster than consumers can digest them. And that doesn't bode well. And that's not exactly, I think, what you're getting at, but that's something that sort of resonated with me listening to all of our guests. Yeah.
No, it's exactly what I was getting at. I teach a class in machine learning and AI. I, like every professor, like to be a little bit lazy and use the slides from last semester. And, you know, a year ago, I worked people through an example using a natural language processing model, and they all thought it was amazing. I used that same model this year, and people thought it was garbage.
So we went from "this model is amazing" one year ago to "this model is garbage" this year. And the model hasn't changed. It's that our societal expectations of how this stuff works have just so, so quickly evolved. Absolutely fascinating. And maybe you've answered my next question, then. Sorry to interrupt our lightning round, but I really do want to know: what's something you think we're going to look back on, in regard to how we're thinking about AI right now, and go,
Wow, we got that wrong. One of our guests, from Orangetheory, had a great answer to "What were you proudest of with artificial intelligence?" And he said, oh, when we solved it with linear regression. And so I think one of the things in our current zeitgeist is that
This technology fits everywhere, does everything, is the perfect tool for everything. And I think we're going to look back and say, that was not the place to use this technology. I think we're going to find a lot of places where this technology is overkill or is just not the right way to go. Yeah, I get that. Not as universal as we might have first thought.
It's been a fascinating discussion today. We've talked about all kinds of different things, and somehow I can think of even more that we should talk about. If you're interested in more, Jennifer has a podcast, Shift, that you can listen to. Jennifer, tell us a little bit about that. This is a brand-new project for public radio, distributed by PRX, and you can find links to the show on all the different players and YouTube at shiftshow.ai. And we'll include some of those links in our show notes as well. Thanks for joining us today. Thanks so much for having me. This has been so much fun.
Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share your insights, learn more about AI,
and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We'll put that link in the show notes, and we hope to see you there.