Hello, I'm Michael Mignano, a partner with Lightspeed, and this is Generative Now. Since ChatGPT debuted in late 2022, people have been asking a simple but pressing question: what happens to human jobs when AI can do them better, faster, and at a fraction of the cost? In this episode, we explore how AI is already transforming the modern workforce,
separating what's truly creative and strategic from what's just busy work. We'll look at what gets automated, what gets elevated, and what organizations risk when they don't adapt fast enough. To guide us, we'll revisit conversations with four guests who've been thinking deeply about this shift: Semil Shah, founding general partner at Haystack and a venture partner at Lightspeed. Mikey Shulman, co-founder of Suno. Anu Atluru, founder, physician, and angel investor. And Marissa Mayer, former CEO of Yahoo and now CEO of her own AI-first startup, Sunshine. Here are some of the insights they had to share. ♪
We're still in the co-pilot phase of AI. The tools assist, but humans are still in control. That won't last forever, though. We're inching toward something more autonomous, where agents don't just help with your work, but own it end to end. And here's where things get complicated. Just because you can deploy an agent to do someone's job doesn't mean you should. The short-term upside might come at the cost of long-term capability or talent development. I recently had the chance to mull over these topics with frequent podcast guest and Lightspeed venture partner Semil Shah. We talked about how organizations need to move away from thinking about what AI technology can do, and start thinking about what AI can do that is right for their business.
So right now, we're more in, like, the co-pilot phase of things. And I think it's easy to trust these tools, especially because you get to basically check their work, right? You still have your hand on the wheel. You know, when I think of co-pilot, I think of Tesla Full Self-Driving. I think of Granola. I think of Microsoft Copilot. You're using these things, and they're augmenting your capabilities in such a way that you are still in control.
But as we all know, these things still make mistakes. They hallucinate. They don't always do what you intend them to do, which is why that control is so important, and why people are not yet fully comfortable trusting these things. To get to an agentic world, beyond the technological improvements that will need, and which, by the way, I believe are coming, I think that trust factor still has a ways to go. Right? You're not going to farm an agent out to go handle all your personal business without your hand on the wheel. The reason I found this interesting was from a different vector. I was reading Matt Levine from Bloomberg. You know, he used to be this, like, crazy smart lawyer. He writes a lot about really arcane fraud, or creative fraud. - Yep. - And he writes a lot about just things happening in financial markets and products. And so he was saying, basically, that we've all had these friends who graduate college and go into investment banking,
And they give up two to three years of their life and they're doing all this grunt work of analysis. And he basically said the main three or four things that a junior investment banker does, like AI is either there already or very close to basically completing it, but that the leaders of the bank use those two to three years to like immerse
those people in the business and all the vagaries of the business and nooks and crannies so that, you know, in every class, if they have 100 investment bankers coming out of school, maybe one becomes an MD in 10 years, right? So if you're leading one of those investment banks or law firms or what have you, and you're trying to groom the next layers of leadership down the road, like, would you allow a full agent to come in
and do everything at the risk of not training your underlings? Or do you have to train them in a different way? I just thought it was a super interesting question that goes beyond what the technology can do, to what's right for the enterprise to implement. Right, like, what is the risk we introduce into this business if we're no longer training people, right? We're putting ourselves on a path to be fully locked into these things. Yeah, like, yesterday I met a guy who's building an ability to go in and do these, you know, $500,000 slide decks for, like, deep consulting work, right? That's his first product, and it's super hyper-focused. So imagine you and I were McKinsey leaders, the top GPs managing different offices, thinking about this problem. We're in our forties. We're going to be here into our sixties, and we need to groom
the 20-to-25-year-olds in that world. How would we approach the introduction of that technology? Because the short-term carrot is to reduce our OpEx and make more money. But then there may not be people who are trained by fire
later. Right. I don't know, I just found that fascinating. We're sort of, like, proactively signing up to eliminate ourselves by doing that. Once we crack the door open, we're already heading down the path, because what you're saying is we're basically making ourselves extinct, because we're not even training the next generation. The way you frame it is fascinating, because I had a different frame. I think it's more like, people who run businesses will still want... Like, Anduril, when it becomes the next thing it's going to be, is still going to use consultants, because consultants are valuable in some ways. Maybe I'm not thinking broadly enough. Maybe I'm thinking they'll have their own little army of consultant agents, right? I don't know. But I think of it as, like, the human to human. The person leading the company will want to talk to another principal who has expertise across that sector, to give them an outside perspective. And then, will those people still exist 10, 20 years from now? This feels like the right time, then, to start talking about artificial superintelligence. The theory of artificial superintelligence, or ASI, is that once it is created and it is agentic, what you're left with is a drop-in coworker
that can not only do everything, but can do it way better than you and I ever could, or any McKinsey consultant ever could. And by the way, it's going to be a fraction of the cost of hiring that consultant. Yeah, I think a lot of people believe that that's the future we're headed towards, and it might be here pretty soon. Right.
So let's go back to the McKinsey example. If you and I are the co-leaders of McKinsey right now, how do you not want this? The short-term juice you get: each mid-level consultant who's getting on a plane and building these decks and doing all the synthesis, they have their own teams, right? They have their own entourage that you're paying, and you're delivering this to the client. You can reduce your cost of goods sold tremendously. And the profits go back to the partnership. Yep. And the endowment. Your short-term incentive is to do it. Quite perverse. You're taking a leap of faith that it doesn't start at the bottom and ultimately eat its way all the way up to the top. Right. Because it would probably start from the bottom, right? If you factor in this trust dynamic, you're going to trust the lowest-level employees to be replaced or augmented by these things first. And slowly, as the capabilities increase, they're going to go a layer up, and a layer up, and a layer up. And where does it end? I don't know. I guess it depends how intelligent these things get. I almost talked myself into a different corner because of the Anduril thing. I've listened to all these Palmer Luckey interviews, and I can almost imagine him being on this podcast and being like,
we'll have our own agentic consultants. Like, you're an idiot, Semil. You're not even going to use McKinsey. Yeah. You're just going to have... Yeah. I mean, that's actually probably the reality. Right. Well, I actually think it's the same observation. Right. As McKinsey is getting more and more efficient, you know, it's happening also directly within the companies, ultimately to the point where McKinsey is just completely irrelevant because of the access to the technology. Unless McKinsey... Counterpoint: by this being deployed throughout McKinsey, there's such a strength of a data moat inside McKinsey. Actually, that's where I was going to get to. I think if I were leading McKinsey, I would say we should have our own agents that run on our own corpus of proprietary information. That's right. You keep the finders up top, make more profit per partner, and you basically sell a co-pilot or an agentic McKinsey agent service to an Anduril and say, look, you can benefit from our institutional knowledge. And if you want
more human involvement, it'll cost more. And if you want just the AI involvement, it'll cost this. I think that all actually makes quite a lot of sense. Now, if you were an entrepreneur, and this is fascinating, would you try to go build something to sell to the McKinseys, BCGs, and Bains of the world? Or would you try to build your own native consultancy? I mean, I think it would be very hard, because you don't have any institutional knowledge to build off of. Yeah, I think that's the risk. You'd probably try to find a way to sell some data to McKinsey. Anyway, it's fascinating. In a world where AI handles most of the grinding and minding, the org chart doesn't just flatten, it fragments. The institutions that survive will be the ones that know what not to automate. Will AI eat its way to the top of the org chart? And what types of consumer products will remain defensible from AI?
Another field where people are wondering how AI will come for their jobs is the arts. The impact of Gen AI on artists and creativity has been profound, just as the impact of prior tools and new technologies has been. But the question remains: what is the best way for artists to leverage AI to enhance their creative process?
I hosted a Q&A with Mikey Shulman, co-founder of AI music startup Suno, who addressed this question directly in a way that I found fascinating. I think this is a really good question. This is kind of a long arc, I would say, in making music. You know, technology has been part of making music for hundreds of years. In general, all of the technological advances that happen make it easier to make music, make more people able to make music, put more music out there, and are actually quite beneficial for music in general. You know, I can tell you
that I know one person who is a songwriter who had a lull in creativity, and after finding Suno went from making maybe 50 songs a year to making 500 songs a year. And most of these songs maybe don't see the light of day, but it's an unlock in terms of creativity. So I think, on the whole, it's quite obvious all of these technologies are good things. It's about how you use them. I'm a firm believer that, in general, AI and most technologies are neutral. There are good uses and bad uses. And so we focus on building the good ones. That's fascinating. The 50-to-500 jump kind of reminds me of how a lot of writers are getting value from LLMs as thought partners, right? For, you know,
fleshing out a report or a thesis or, you know, a creative piece of writing. It's somebody you can bounce an idea off of, right? And have a conversation with. It's in every domain. And it's a little cliche to say co-pilot, but you see this in everything from code, you know, engineers writing code. Now you can have a co-pilot that can help you do it. And these co-pilots can write code for you. They can check code for you. When you are writing an email, you can have something that writes the email, or rephrases the email, or checks the email somehow. If you are writing a script for a movie, you can have help there, either in checking something or rewriting something, or, hell, I come in and I'm blocked and I just need to see some weird ideas.
I think in general, just increasing the amount of weird content that is out there is actually just really good for human creativity. And these things are amazing at that. We talk a lot about co-pilots, but maybe AI is more like a creative partner, the tool that helps you move past the blank page. Not everyone will use it well, but those who do, they might just outcreate everyone else.
Then there's medicine, maybe the clearest example of AI's potential to separate routine from real work. Doctors will be able to spend less time tackling paperwork and more time talking with patients. Hospitals will get more value out of these very expensive specialists. Highly personalized medical interventions will finally become commonplace.
At the same time, the regulatory landscape is complex and adoption is far from assured. I recently spoke about this with Anu Atluru, founder, author, and angel investor in social utility startups.
In addition to her work with companies like Clubhouse and Slang, Anu is also a licensed physician. And though she no longer practices, she says she still has a soft spot for consumer health care, and she is bullish on the potential for AI to improve health outcomes and to change what it means to be a doctor. Even though you're not practicing, do you think much about the future of medicine, health care, and personal health as a result of AI? Yeah, I do, in an abstract sense. From the human level, I think AI is going to be great for understanding diseases that we have only a marginal understanding of today. I think it'll be very useful for rare diseases, where we haven't had a big enough population, or enough of an economic interest, to really be able to fund work on them.
So I think the long tail of health will get better as well. Scientific research, this upstream part, is going to get very good, very fast, and there are already a lot of things suggesting that. I think one of the Nobel Prizes this past year was related, kind of, to movement in this direction. Broadly, I'm very optimistic about what it's going to do for our ability to understand health, to diagnose it, and, somewhat, to treat it. Treating is harder than actually understanding the problem, which will get a lot better. On the side of being a doctor as a profession, there are a lot of things that are going to change. Some welcome, some not, depending on which side you're on. Like what? Overall, I would say that I think AI is going to make a lot of knowledge work, which historically I would consider doctors, lawyers, a lot of that, in many cases much more like a trade profession. Historically, you know, we think of trades as requiring less education and being more physical in nature than the non-trade professions, which tend to require far more education and tend to be more cerebral in nature. And I think
the mix of that will shift a little bit in a world where AI is doing a lot of the information gathering, with more information than we would have had before. A lot of the computation, a lot of the analysis: all the differential diagnoses, the probabilities, the likelihood of a therapy working, in particular for a specific person, in terms of personalization. So I think there will be a lot more data given to us by AI, or whatever we want to call that container, that will then be assessed and applied by physicians. And so, from that perspective, physicians will do less of that cognitive work themselves. And also, because we're going to learn so much more, it's going to be impossible for humans to learn all of that as rote memory and then compute all of it. So I just think it'll become a little bit more trade-like, in the sense that a lot of what you're doing is more the implementation aspect, as opposed to a holistic assessment done independently. It's almost more
like service-oriented. And, yeah, I mean, the relational aspects will become much more important. Especially, and this is a great thing, AI scribes and all that stuff, I think, are going to come in, which will relieve the burden of documentation, which has been terrifying for physicians for the past 10 years at least, but probably 15 or more. Terrifying in that the burden of documentation has gone up. They have to document more, and they don't want to have to do that. Right. More than anything, it takes a lot of time. And there's always the argument about, oh, my doctor's never talking to me. Well, it's because they're documenting most of the time, because they have to, for insurance purposes and for medicolegal purposes, right?
So AI will do really good things in terms of reducing the burden of those things, but it'll also provide tools that'll offload some of the cognitive stuff they're doing. It's like autonomous vehicles or something. With healthcare, and with physical health and safety in general, we are very skittish about adopting tools if there is even one death we can attribute to them. And so, whether it's autonomous vehicles or healthcare, I think the same thing will apply. So I do think this is somewhat of a long arc of implementation, but I do think, over the long term, yes, it'll become much more like a service profession. And that's not necessarily bad, but it'll change a lot of dynamics. So we started zoomed out. Then we talked about how interaction with the physician becomes more assisted, becomes more of a service. Is there a world in which the human doctor
goes away for 80% of interactions in the near future? Or, maybe much more simply, how soon until we have the AI doctor on our phones that we're interacting with for, you know, 80 to 90% of the times we would have previously scheduled an appointment with our physician? I might argue we already have the AI doctor on our phone. Yeah. You know, that's ChatGPT, or that's Dr. Google. Well, Dr. Google was pretty bad. So I would say, yeah, ChatGPT, or whatever the future model of it is, or, you know, pick your LLM that's going to become specialized for this thing. I would argue we already have that, in the sense that the decision points are really about when you choose to seek health care within the health care system. And the system today is a people-driven system. So the question becomes, when
will some authority say it's OK for you to use this chatbot or LLM instead of going in to see a physician? That can happen culturally, though, right? I mean, that doesn't have to happen through a rubber stamp. Culturally, I think it is already slowly happening for very low-acuity things. Acuity, as physicians generally talk about it, is how serious or severe something is, for triaging. Low-acuity things like a cold, or, do I have strep throat. I think right now, what it's actually doing is telling you,
on some probability basis, should you go seek care? And I think people have actually been working on that for a long time, it's just that AI is going to be so much better now. It really seems like the only thing stopping it is that nobody's built the great consumer product yet. Somebody can do it. And I think what they will have to do is kind of what telehealth did, which is have experts, or credible, trusted people, in on the quality control or something for this product. It can't just be a bunch of Silicon Valley people, you know. And some of this is optics and trust-based, and trust is very fluid. It's obviously not very scientific, and somebody probably will do it. I don't know why someone has not done it yet, or maybe somebody is working on it. I'm not skeptical about the value to the user. I am a little skittish on the ability for this to be executed as a consumer product. It will take the right team, one that knows both sides and cares about both sides. And usually it's just one. Anu pointed out something important. In the AI era, knowledge work becomes more like a trade. AI will handle the information gathering and the analysis. Doctors, lawyers, consultants, they'll do less of the deep cognitive lifting and more of the human-to-human work.
And that's not a loss, that's an upgrade. So yes, we may already have a doctor in our pocket. And with the right design and guardrails, that future isn't as far off as it seems.
To close out the episode, I spoke with Marissa Mayer, former Google product lead, Yahoo CEO, and now the founder and CEO of Sunshine. The company's first product, Shine, is reinventing contact management and photo sharing using AI. Marissa has been on the front lines of multiple platform shifts. So when she talks about how generative AI is reshaping user behavior and how companies need to respond, it's worth listening. Her message is part prediction, part warning.
AI won't just change what gets built, but how we build, who we hire, and which roles still matter. And while most leaders are still figuring out how to use AI to get ahead, Marissa says the real question is, how do you avoid being left behind? You've been involved with some iconic companies and products. AI, at least from my perspective, like seems to be sort of
resetting the playing field a little bit. It's not only like creating a new type of product opportunity to be built, it's also creating new opportunities inside of organizations. How do we staff teams? How do we build processes and actually get these products made? Maybe starting at the team level, what do you...
How do you think teams are going to evolve as a result of AI? Will we value engineering more? Will we value product management more, if product managers can just generate code through natural language? How are teams and team-building strategies going to change in the coming years? I think that what's happening on a product level with AI is that, often, the most powerful technologies change end user behavior. And they change end user expectations. And we saw that in the early days of search, right? All the search engines were kind of the same. They could all kind of get things done. They weren't incredibly efficient. And then Google came along and had just better answers, which caused people to search more on Google, caused them to search way more. There's way more internet search being done today than there was before Google existed.
And, you know, it basically caused people to ultimately change their behavior in terms of how often they searched, how they searched, and what they searched for, and also change their expectations in terms of what they got. And I think what you're seeing now across many forms of generative AI, whether it's chatting to get a better understanding of a problem, getting editing help, or getting visual help in visualizing something and creating an image or a graphic, right?
The way that you can do that, the speed with which you can do it and the ease with which you can do it is going to cause users' behaviors to change and their expectation to change. So I think that ultimately the groups that will be valued most in organizations as a result of this change will be the ones that can adapt, be most insightful about those user behavior changes and expectations and
adapt most quickly and respond. So I don't necessarily know that it will be a functional revaluing across companies, but I definitely think the companies that respond to these user behavior and expectation changes are the ones that are going to win. That makes me think of an area of business right now that, to me, on the surface, looks potentially highly disruptable. And that is consulting, right? Where you've got these huge teams of people being paid and billed hourly to do a type of manual work that could potentially be replicated by agents. I'm certainly very curious to see if large consulting firms adapt and embrace it the way you're talking about, or try to hold on to existing business models and structures. Yeah, I'm curious what your thoughts are on the types of businesses you think will be quickest to adopt this versus the ones that will be resistant.
Well, not surprisingly, I think that technology-rich industries are the ones that are going to adopt this most quickly and readily. Consulting firms generally do use technology effectively and study it, and it's packed into their HR and office and workflow activities as part of what they do. So I wouldn't be surprised if they actually do well in terms of adopting it. But I think that anywhere there's a tension between the new way of doing things, the new way your customer wants to do things, and your existing business model, those are the companies that have really hard choices to make. Because you can probably cling to that existing business model, a la your consulting example, right,
and make more money in the short term, but in the long term, you're really mortgaging your future. So I think that right now, most companies need to be thinking about, and most people working in those companies need to be thinking about, how can AI make me better and faster at my job? And there are all kinds of people doing studies now in terms of who does better: the consultant, the doctor, the lawyer,
or the AI, right, if you just go with the AI's suggestions. And there certainly are some professions and some cases where the AI is outperforming the people. And if the people attempt to edit what it's suggesting, things only get worse. But, you know, it's early days, and there are anecdotes right now that cut both ways. Yeah. I saw, I think it was a New York Times article just two days ago, that compared doctors on their own versus doctors with AI versus just AI. And I thought it was going to be doctors with AI who were the most effective. But actually, according to this study, and I have no idea how accurate it is, it was just the AI that was the most effective, or however they were measuring it, which kind of blew my mind a little bit. Yeah, I've seen that study, and I've seen some in a few other industries as well. So, you know, there are industries where the AI solo can outperform, and the value the people are adding could be negative, which is kind of the point that study makes. I don't necessarily think it was a big enough sample size in the study to say that conclusively. Mm-hmm. But I think it definitely is thought-provoking, and it does make you realize that where you should be spending your time, and where we can add value versus where the AI should be adding the value, is something that companies are going to have to be strategic about. - In short, organizations that respond quickly to shifts in consumer behavior will thrive.
Companies that cling to old business models might succeed in the short term, but they're mortgaging their futures. It's probably too early to know what AI is fully capable of and what burdens it may relieve us of, whether we want it or not. Still, I think we can draw a few general conclusions. Number one, focus on your unique institutional knowledge. Two, seek out ways AI can enhance your existing process. And three, pay attention to user expectations. ♪
That's a wrap for this episode of Generative Now. Thank you so much for joining me. If you liked this episode, please do us a favor and rate and review the show on Spotify and Apple Podcasts. And if you want to learn more, follow Lightspeed at @LightspeedVP on YouTube, X, LinkedIn, and everywhere else. Generative Now is produced by Lightspeed in partnership with Pod People. I'm Michael Mignano, and we will be back next week. See you then.