This is episode number 895 with Sean Johnson, co-founder and general partner at AIX Ventures. Today's episode is brought to you by the Dell AI Factory with NVIDIA. And by Adverity, the conversational analytics platform.
Welcome to the Super Data Science Podcast, the most listened to podcast in the data science industry. Each week, we bring you fun and inspiring people and ideas exploring the cutting edge of machine learning, AI, and related technologies that are transforming our world for the better. I'm your host, John Krohn. Thanks for joining me today. And now, let's make the complex simple.
Welcome back to the Super Data Science Podcast. We've got a terrific episode today with an iconic trailblazer AI investor, Sean Johnson. Sean is co-founder and general partner at AIX Ventures in San Francisco, where he's led deals into companies including Perplexity, Chroma, and WorkHelix.
He's a former VP of engineering, product, and design at Lilt and a former VP of product and design at NimbleRx. He holds a master's in electrical engineering from Stanford and an MBA from Berkeley.
Today's episode is well suited to any listener of this podcast. In it, Sean details how having investment partners like Richard Socher and Christopher Manning, who are practitioners actively building at the cutting edge of AI, gives AIX Ventures an edge. He talks about what it takes to become one of the few thousand people in the world pushing the AI frontier, the surprising strategy that makes enterprise AI adoption 10 times easier,
why some AI startups are better off building in red oceans full of competition rather than seeking blue ocean opportunities, and the reason big tech companies are buying AI talent without acquiring the actual startups. All right, you ready for this excellent episode? Let's go. Sean, welcome to the Super Data Science Podcast. It is a treat to have you on the show. Where in the world are you calling in from today?
Yeah, John, thanks for having me. Calling in from San Francisco. We're in our offices here in South Park. Where better for an AI investment firm to be than San Francisco? I wouldn't have expected you anywhere else. So you're a founding partner of AIX Ventures, which is an AI-focused early-stage investment fund led by active AI practitioners. So let's parse that for our listeners. What are the advantages of being an investment fund that is,
first, AI-focused; second, early stage; and third, led by active AI practitioners? Yeah, it's a great question to kick this off. So...
I think, first, being AI-focused: every investor out there, our job is to find alpha, right, for our own investors. How do we think about outperforming other funds? And we just have the strong belief that we're right in the middle of an inflection point. We saw transformers, ChatGPT—and
machine intelligence is growing every day. We think the next dozens of $100 billion, $50 billion companies are coming from AI. And so this is where we're heads down. We think this is transformative for industry, for humanity. Other investors could find alpha doing non-AI, but here we are. I think that's number one. And then two, I think—
Two was the operator piece. We can do that. I had early stage as number two. Early stage, yeah. So on the early stage side, you know, look, we do two things. We think about backing entrepreneurs at the earliest stage because
That's how AIX started organically, right? My co-founder Richard has been investing in the space since 2016, and Chris Manning long before that. And that's just early stage: meeting founders kind of where they are when they're starting to think about these ideas. And then you bring that into your first fund. Now,
Also, I should say that our own investors think about early stage versus growth stage. And, you know, growth stage is lower risk, lower returns, and also more correlated to the public market. Early stage is not, right? It's less correlated to the public market and also higher alpha, higher returns. And so
That's quite a nice product market fit for us too. Perfect. Yeah, it makes a lot of sense. And I was talking to you about this briefly before we got on air, but Richard Socher has been somebody that's been an iconic person
in AI in the world for me, I guess, as an AI practitioner. It's going back roughly a decade that I've been studying and teaching from his Deep Learning for NLP lectures at Stanford in Chris Manning's class, whom you also just mentioned there. He's still a Stanford professor. And so these active AI practitioners that you have, that you've co-founded AIX with, it's
I mean, it's an amazing set of people that I, you know, it's hard to imagine people that I look up to more. And so, yeah, what's the advantage of working with AI practitioners as opposed to, say, career investors? Yeah, we think you need both. The most important thing, I think, for any venture firm is to orient yourself around the founder.
And so we think founders need—they will for a long time now be building at the intersection of AI plus a sector they want to disrupt. And so how do you support that founder? Well, we think it's hard to support on the AI side as a full-time VC. AI is moving so fast, we think you're better off being out practicing at the cutting edge of AI, and
we mean the cutting, cutting edge, right? Like the Richard Socher, Chris Manning, Anthony Goldbloom cutting, cutting edge. And then when we think about...
sort of on the sector side, we think about full-time VCs, like market-focused kind of traditional VCs. And you have to bring those two together. And I think the way we brought it together is what's quite unique, which is in a very engaged way, right? So Richard and Chris and Anthony are in every single one of our investment committee meetings, right? I mean, Anthony is working in our office just a couple feet from me now. So it's just this very engaged model where
founders walk in and see that, great, they have both sides of this equation and it's differentiating for them. - Nice, well explained.
You were talking earlier about how investors... you have this thesis that we're at this inflection point in AI, and you're preaching to the choir a bit with me and probably our audience as well. I think we're on board with that belief. It kind of seems obvious given how dramatically and rapidly the cost of compute for the same level of capability has been falling.
At the time of recording, in the episode of this podcast that I published today, I had some stats from Stanford's AI Index report that I was talking about. If you take something like a 60 percent accuracy score on MMLU,
it took hundreds of billions of model parameters in something like Google's PaLM just a couple of years ago to be able to get to that level of capability. And today, we have models that are one two-hundredth of the size, just a few billion model parameters, that have the same capability.
And the cost of compute has gone down even more. So depending on exactly the kind of task, it can be several hundred times cheaper, up to 900 times cheaper, per token to be getting the same level of capability. So say you fix something at, like,
GPT-3.5-level capability and you track the cost of that over the past couple of years: it's up to a 900x reduction in cost. And so with those kinds of tailwinds, it kind of seems obvious that there's a lot of opportunity in AI. But there is also, I think there has been, overhype
recently as well. Yeah. What do you think about that in terms of, like, the Gartner hype cycle? Maybe we're now in kind of a trough of disillusionment—which I could get into more detail on later—but it seems to me like that actually means there's a lot of opportunity for real-world implementation coming up next. Yeah. I think with any great technological inflection point comes a hype period too, right? And so,
Right now, I think you're seeing that manifest itself in a couple of different ways. One is the efficacy of AI in consumers' lives or in the enterprise. And then another way is you're seeing an abundance of capital pour into the market looking to support these companies. And some companies have
sprung up that maybe there's no need for, but there's so much capital, entrepreneurs will give it a go, and you're going to see softness in returns because of that. I think it's both true. I think even though maybe broadly there's hype, or
we're in that trough of disillusionment—it's too general, right? I think you have to look at specific applications and ask, where is that in the cycle? Because we see applications today regularly, founders coming in and talking to us where
There's quite a blue ocean. There's a lot of opportunity. It's not all red. Not everyone's competing. Maybe they have an angle on it that's quite different. And so it sort of just depends. Right, right. That makes a lot of sense. When you're evaluating these early stage AI startups that you invest in,
What are some of the non-obvious signs of product market fit that you look for? Or I guess even more generally, what are you looking for in those investments? How do you evaluate that the market is ripe for that particular type
of AI in that particular application? You know what, early stage is really a game of people. You back the founder, or founders, when they're just having their idea—maybe they have some prototype or some product, but it's really a people bet. And they will go out there with their vision to change the world, and they will learn quite a lot.
And that will result in pivots, you know, micro and macro. And so we can't say—I don't think VCs are, like, genius market timers, right? I think they can have a sense of that, but then also recognize that the founder will do what they need to do.
And we really just look at investing in teams that can execute at, sort of, the speed of light and pivot however much they need to to find a resonant point between what their offering is and the market, and then get to that great growth trajectory.
That makes a lot of sense. But then it begs a similar kind of question for me, which is: how do you then identify that kind of founder or that kind of founding team? I've had investors on in the past who have said that with AI startups, they typically look for this kind of three-legged stool
of a CEO, who is somebody who's great at selling the idea; a CTO, who obviously is highly technical; but then, in an AI startup, you also have this AI expert—where the CTO is maybe more concerned with platform scalability, reliability, those kinds of concerns, you have this third co-founder who is the AI expert
at or near the cutting edge, like you described Richard Socher or Chris Manning might be with their research. Does that kind of ring true to you as well in the teams that you're investing in? Not, I would say, 100%. The way we think about it, we start by looking at the team and assessing two factors. One is AI nativeness, right? Do we consider this team to be quite deep in AI or...
or not? And then market savviness, or commercial savviness, right? Do they have expertise in this area? Do they have any right to be building into this market? And that's really where we focus. And then we ask ourselves, given a team and given where we think
kind of that market landscape is, where will they need to improve, right? It's never perfect. You don't find teams that are always optimally AI native and optimally commercially savvy. And so if you invest in a team that's more AI native and less commercially savvy, then the question is, how do you de-risk the commercial savviness with the team? Maybe that's advisors, et cetera. And then I think when technological inflection points like we've seen with ChatGPT happen, what
happens in the market is you have a number of consensus applications that are now possible. Everyone knows that we should do AI-powered tutoring.
And so everybody is like, well, let's build AI powered tutoring. But what you need to do there, we think, is invest in extreme AI native teams that can actually bring experiences to consumers that other teams just cannot.
And as the years go on and you start getting outside of this consensus-driven investing, you go back into the market-savvy investing, where you don't need the AI-native teams as much. They're still very good to have, but it becomes more important to have a market insight that
is non-consensus. And so, you know, the way we think about it: if you think about SaaS investing, let's call it five years ago, there wasn't a ton of differentiation on, like, the MVC stack. It's not like the model technology is unique, or the database is unique, or the controller is unique, or the view is unique. It's all, you know, Mongo or
MySQL, and in the middle it's, let's call it, Node.js or Ruby, and then it's React or HTML, CSS, JS.
And that's all commodity, right? It's just like, well, what's the idea? How are you going to configure it? And AI will get there within the current framework of the technology. Now, if we have a new architecture come out that does replace transformers, then game on again, right? Now a whole new set of consensus bets will be made against what that technology can do.
create. But right now, I think we're transitioning from you really need AI native teams in a consensus world to you're going to start needing more market savvy teams in a non-consensus world.
This episode of Super Data Science is brought to you by the Dell AI Factory with NVIDIA, two trusted technology leaders united to deliver a comprehensive and secure AI solution. Dell Technologies and NVIDIA can help you leverage AI to drive innovation and achieve your business goals.
The Dell AI Factory with NVIDIA is the industry's first and only end-to-end enterprise AI solution designed to speed AI adoption by delivering integrated Dell and NVIDIA capabilities to accelerate your AI-powered use cases, integrate your data and workflows, and enable you to design your own AI journey for repeatable, scalable outcomes. Learn more at www.dell.com slash superdatascience. That's dell.com slash superdatascience.
Right. That sounds like a great balance. And I guess I was being oversimplistic in my thinking of, like, yeah, this is the kind of founding team. It makes perfect sense that obviously every situation is different. Yeah. And the last thing I'd add to that question, John, is this comment on, like, do you need the AI expert on the team, right? So you have your CEO that's, like, market savvy, and you have your CTO that's, like, the builder. And then do you need this AI expert? And
I've seen this on lots of teams, even my last team at Lilt. I think the best way to orient a team is to have AI engineers—folks that can build
very adeptly with the technology in production and are also savvy enough to be reading the papers, understanding how the technology is changing, and able to integrate that into the stack. I don't think you need, like, a PhD that can just sit there and read papers. Ideally, your folks building in production are also savvy enough to read papers—that's kind of our take on it. Good news too, because AI PhDs are expensive.
Yeah, yeah, yeah. And also good news is it's getting easier and easier to read AI papers because you can use your favorite LLM to help you understand what the heck's going on and get an explanation for the pertinent parts. And, you know, there's LLM tooling, obviously a great opportunity for startup founders in terms of the products that they can be building and therefore also for the investors that are investing in them.
But even just for any of our listeners who want to be making the most of understanding the great information that's out there—the exponentially more AI papers that are being published all the time—it's easier and easier to understand them. And it's easier and easier to code them up with tools like, at the time of recording, Claude 4, which has just come out.
And they've really focused on its code generation abilities there. Yeah, it's a really exciting time to be in this space. Indeed. So when you're scaling up AIX—you recently closed your second fund, for $202 million. Congratulations.
And as you scale up, as you invest in more companies, you probably need more and more AI practitioners. And another name that just came up from the research document that we've prepared for you here is Pieter Abbeel, who has been a guest on this show as well, some years ago now, but still one of the most fascinating episodes that we've had. Pieter's an incredible person. And so that's another name associated with AIX.
What kinds of mechanisms do you have in place to ensure that founder quality, as well as these kinds of practitioners like Pieter, like Richard, that you have advising? You said that Richard today is still in every investment committee meeting. Are you concerned about that scaling, or does that seem straightforward to you?
Yeah, I mean, generally VC doesn't scale. Not AI specifically, all VC, right? It's a very heavy services business.
And the reality of it is every VC out there is creating a new portfolio of anywhere between let's call it 15 and 40 companies every three to five years. Recently, it's been on the earlier side of that. And so how does VC work? You're bringing on 30 companies every three years, 10 a year, approximately. How do you continue to service everybody as these portfolios grow and grow?
And the answer for a long time has been—maybe somewhat controversial, but it's the truth—VCs invest in around 10 companies a year, and those companies usually raise capital to build for 18 to 24 months. And at the end of that period—
actually, not even the end, 12 months before they run out of capital—they start raising again, right? And then they go and take more capital from other venture firms too. And those venture firms start picking up the responsibility to bring these companies to the next stage. And so it's kind of this—
founders go through this journey of multiple VCs through the lifecycle of their company
being the most prominent people on their board. And so it's always kind of changing over. And an example of this is Richard invested in Clem and team at Hugging Face when they were taking 224 at Stanford. And that was some time ago, 2017.
That's a $4.5 billion company now. Clem is surrounded by the best at Sequoia and Lux and others. Clem's not calling Richard every day. He has...
other problems. They keep in contact, and Richard still advises, but it's a much lighter lift than initially. Clem has also been on the show—he was in episode 564, I just looked up here—as another great episode. And I guess I should probably look up the Pieter Abbeel episode. That was 503. Yeah, going back almost four years now
to that. So listeners can check out amazing people. And it's interesting to me when you talk about Clem and Richard and their kind of interaction like that. I don't know if this is even going to be a controversial question, and I suspect this kind of thing varies all the time, but when there's a really successful investment that you make like that, especially from such an early stage, like Richard had with Clem and Hugging Face, do you think
you're more incentivized to kind of stay in touch and really be close with those really big successes? Or does it end up being the case that you end up having to spend most of your time with the startups where you're like, ah, it just hasn't quite—you know, there's just something about the product market fit, or there's something we just don't have right? Maybe it's a bit of a controversial question. I think the saying is, yeah,
Your fund is made by your winners and your reputation is made by helping your non-winners, right?
And so, you know, we have a responsibility to all of our founders. And indeed, I'm working with founders that—you know, I speak to Aravind regularly. Aravind's killing it at Perplexity, right? And he doesn't need a ton of our help now, but we keep that relationship. But I'm also spending a ton of time with folks that haven't been as successful as quickly as Aravind, but
still have tailwinds and they're seeing progress and they're just looking to find the right way to position their company in the market.
And so, yeah, I think both is the answer. We have responsibilities to both, both from a fiduciary point of view and also just like helping founders and a reputational point of view as well. Nice. That was a really great answer. You know, you managed to make what I thought might be controversial into something that sounded quite ordinary. Thank you. Our research dug up
an interesting question here specifically related to climate change. There's a bit of context here. Your newest fund will invest multi-million-dollar checks in applications across enterprise software, healthcare, and climate. I want to dig into that climate one a little bit. Due to capital intensity,
It sounds like you won't be investing in raw model builders, which makes sense. I mean, we're now at a point where it costs at least hundreds of millions of dollars to build state-of-the-art AI models. But in addition to that, those models—not just at training time, but especially at inference time, when they get used a lot—these huge models
can have a lot of climate impact. They require a lot of energy, and a lot of water to cool them.
Now, there are some solutions out there. I know that a lot of the hyperscalers—Google, Microsoft—try to use renewable energy or nuclear where they can to try to limit climate impact. I know that there is a lot of innovation around water usage, where
the water that evaporates actually gets trapped in a system that allows it to be collected so that you're not using much new water while running these kinds of server centers. But
yeah, given that you're specifically investing in climate AI startups, I'm curious what your thesis is, or what your perspective is, on how climate tech and AI are becoming increasingly intertwined—the kinds of applications or infrastructure at this intersection that can both create outsized returns for you and your investors while also
reversing environmental impact? Yeah. Yeah, it's a good question.
So when we were starting our fund two, we thought deeply about climate. There's quite a lot of innovation going on in the climate space. But what we were quite keen to understand is the intersection of climate and AI, right? Where we thought about novel AI solutions being applied to climate. And our team
basically concluded that there actually weren't a sufficient number to invest in that area thematically at the early stage. Also, they tend to be a little bit more capital intensive, because atoms are just more expensive than bits. And so we actually have not pursued that as a strategy in fund two. Fund two is almost squarely focused on
enterprise applications, both horizontal and vertical, as well as techbio—the intersection of AI and bio—but not quite climate just yet. I gotcha, I gotcha.
All right, so let's move on then to other aspects of the portfolio that you build, the decisions that you make. In a recent interview, you discussed two kinds of investments that you make in your portfolio. One of them is heat-seeking.
And the other is what you called truffle-hunting bets. So what are these? How do they complement each other as well? Yeah, this comes from the Andreessen guys. Chris over there, I think, talks a little bit about this. Heat seeking is...
a company, a deal, a group of founders that the venture market knows a ton about and is very excited about. There's a lot of heat surrounding that deal, and that means they tend to be a little bit more expensive too. And then truffle hunting is just that: you're trying to find the founders—the ideas, the companies—that
folks just don't know, right? Maybe it's that they're unaware of them, but also maybe it's that they're aware of them but don't believe that what they're saying is true. It's non-consensus, right? And you want to have a portfolio of both, right? So heat seeking won't always be right. There's this adage that if you go back and look at any vintage of fund
and you track that vintage to what was hot at that time, those are unlikely to be the alpha returners, right? Investing in the trends that were hot during that vintage is not what creates strong financial returns.
It's actually the non-consensus bets, where you can get in arguably cheaper, and then also, if you're right, it could be a very, very big win. And so you just have to create portfolios of these companies, both heat seeking and non-consensus. It's unique:
when you're in the middle of this sort of consensus world, where you just had ChatGPT in November 2022 become a thing, everything becomes consensus very quickly. And then you have to ask yourself, okay, are we going to diversify with non-consensus, or are we just leaning into this consensus world and taking a big bet there? And folks play it either way. We have companies that are both.
This episode is sponsored by Adverity, an integrated data platform for connecting, managing, and using your data at scale. Imagine being able to ask your data a question, just like you would a colleague, and getting an answer instantly. No more digging through dashboards, waiting on reports, or dealing with complex BI tools. Just the insights you need, right when you need them.
With Adverity's AI-powered data conversations, marketers will finally talk to their data in plain English, get instant answers, make smarter decisions, collaborate more easily, and cut reporting time in half. What questions will you ask? To learn more, check out the show notes or visit www.adverity.com. That's A-D-V-E-R-I-T-Y dot com.
Nice. It's great to get that insight into the kinds of investments that you make. The latest investment that you made—at least according to our research at the time of recording, and you can correct me on this—we have it as WorkHelix? Not the latest, but recent, yeah. Recent, yeah. And so WorkHelix is a platform to understand AI's impact in a task-based, step-by-step manner, which is a revolutionary idea.
I'm sure I'm going to butcher Eric's last name. Brynjolfsson. So Eric Brynjolfsson, he said that this approach could help unlock a trillion-dollar opportunity. And of course, that's the kind of thing—I guess that kind of confidence is what you look for a lot of the time in founders of AI startups.
But could you explain what WorkHelix is doing differently, what it means to have this task-based, step-by-step approach, and the kinds of enterprise cultural shifts that would be necessary to unlock the value from a platform like that?
Yeah, sure. So if you think about how enterprises are adopting AI today—how does this happen? I think there's one of two ways. They're learning about it through the media, through what they're hearing from companies that
are providing solutions—customer support, the Windsurfs and Cursors of the world for engineering—and then they go and give that a go. Another way is the board. The board of these companies is learning about AI, and then they're putting pressure on the C-suite to have an AI strategy and to implement AI.
But the real question is what's going on inside the organization, right? Every organization is made up of sub organizations and those sub organizations have jobs and those jobs have tasks. And then the question is like, what is the task profile or characteristic of any organization?
What are those jobs and those tasks? What are they? And if you can go in and get access to the raw data in an enterprise, you can surface them. And then you can map tasks to...
actual gen AI solutions—and not only tasks, but bundles of tasks that cross job descriptions. And so what enterprises are learning is that, oh, we have much more manual labor, for example in manual tasks in marketing, than we ever thought we had,
and this maps to this other gen AI solution very well. And so the WorkHelix team can expose all of that and then partner with the enterprise to actually implement and, most importantly, measure—show that there is efficacy from this solution, right, in a proper A/B test. And so that's—
When you're done with that, then you start over. You go and reassess—the organization's changed, you want to reassess. Now, in terms of getting... There's this really interesting thing happening in the market, which is investors investing in roll-ups, right? This idea that you should buy companies—maybe older, more service-oriented companies, a lot of OPEX-heavy companies—
and roll them up and power them with an AI-native system, reduce your OPEX and potentially even increase the top line through productivity enhancements, and then have a more valuable business. And you'd ask yourself, well, why would you do that? If the enterprise is adopting AI very readily, why would you focus on roll-ups? And the answer is:
there's friction in the enterprise, right? Selling AI-native products into the enterprise is massive behavior change. And I think that's why, if you look closely at some of these companies, you actually see professional services increase. We're seeing these companies come up with SaaS models or usage-based models, but they also have professional services line items.
And that's because of the change that is required in the enterprise. There's a lot of pushback. And you've seen on various social media recently, CEOs be quite vocal about, no, this is the way it's going to be. Some have put their foot down and just said, we are adopting. I was talking to a CEO yesterday and I asked him, are you using AI in any real way? And he texted me back and he said, no,
we had to mandate, like, every Friday is AI day, right? You have to be using it. No objections. A lot of what's going on is trying to figure out how to change the behavior. And that's why folks are investing in roll-ups. You don't
have to—when you're rolling everything up and then just doing a larger riff to the organization, it just arguably is easier than that behavior change. Right, yeah, the behavioral change is tricky for sure. That's—
I've recently launched my own consultancy for bringing cutting-edge things like multi-agent systems and generative AI—exactly as you said, the kind of opportunity that WorkHelix's co-founder Eric brought up, this trillion-dollar opportunity where there are so many tasks in the enterprise today that can be improved or automated fully with a large language model to a high level of accuracy, especially because
you can have a second LLM that is double-checking work. You can have cascading LLM systems, where tasks that are more complex, or that an initial, cheaper LLM is unsure of, get passed to a more expensive one. And so there are all kinds of tricks that you can use in an organization.
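(To make that cascade idea concrete, here's a minimal sketch in Python. It assumes two hypothetical chat-completion helpers, cheap_llm and strong_llm, plus a self-reported confidence heuristic—the function names and the 0.8 threshold are illustrative placeholders, not from any specific provider or library.)

```python
# Minimal LLM-cascade sketch: try a cheap model first, escalate when unsure.
# cheap_llm() and strong_llm() are placeholders for whatever chat-completion
# calls you actually use; the confidence threshold is an arbitrary illustration.

def cheap_llm(prompt: str) -> tuple[str, float]:
    """Return (answer, self_reported_confidence) from the inexpensive model."""
    raise NotImplementedError  # wire up your provider here

def strong_llm(prompt: str) -> str:
    """Return an answer from the larger, more expensive model."""
    raise NotImplementedError  # wire up your provider here

def answer_with_cascade(task: str, confidence_threshold: float = 0.8) -> str:
    answer, confidence = cheap_llm(task)
    if confidence >= confidence_threshold:
        return answer              # the cheap model was sure enough
    return strong_llm(task)        # escalate the hard or uncertain cases

def verified_answer(task: str) -> str:
    """Second-LLM 'double-checking' pattern: a reviewer model audits the draft."""
    draft = answer_with_cascade(task)
    return strong_llm(
        "Review this answer for errors and fix them if needed.\n"
        f"Task: {task}\nDraft answer: {draft}"
    )
```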
It doesn't seem, at this time, like technical hurdles are what's preventing enterprise AI adoption. There's plenty of opportunity, probably in any organization, to be streamlining operations with AI. It is people that are tricky. One of the impressive things about the industrial revolution is, you know, we went from a world where 95% of employees—
quote-unquote field workers were heads down working on these farms and then suddenly you have tractors, much more autonomous systems. But the beauty of that time is that
you didn't have, like, a mini tractor that everyone had to use, where you had to say, hey, worker, now you have to use this mini tractor, and have all that behavioral change. You just replaced a hundred of them overnight.
And that's not what AI is, right? Right now AI looks more gradual. We need to go through this phase where there's a copilot at first, and then there's a human in the loop, and then, in some areas of a job, some tasks, there can be full automation. But that requires a partnership between the human and the machine, and that's full of friction. For sure. That's interesting. I was on a panel at the Open Data Science Conference in Boston last week.
And got asked a question from the audience about how I thought the world would change in enterprises. If we're talking three years, five years from now—a relatively short timeframe—how different is life in an enterprise because of AI and all these powerful agent systems that I know could be adopted in organizations and that could be so transformative?
And it's interesting because on that kind of timeline, there are certain kinds of people who think that we'll have artificial general intelligence on that same kind of three-to-five-year timeline. My take on that is that in three to five years, we'll have superhuman capabilities on more and more narrowly defined tasks. But I don't think we'll have the data to be training the world models
yet, on a kind of three-to-five-year timescale, to have something that is really replacing a human on all kinds of tasks that we do. But regardless, there are going to be really powerful systems in three to five years.
But my answer to the question about what I thought the enterprise would be like in three to five years was: in a lot of ways, it's probably going to be similar. I mean, we're probably still going to be sitting on panels at data science conferences in three to five years, complaining about slow paces of change and all the opportunity that there is in enterprises. I don't know—
It's so interesting to see people putting up charts of, in three to five years, we're going to have AGI. It's going to be transforming absolutely everything. But because of this human friction, I think there's still going to be tons of opportunity in the three to five-year time frame. What do you think about all that? Yeah, I think there's clearly going to be a distribution. Some companies are going to be blazing a path and some companies are not going to have improved that much yet.
I'm always surprised when I'm talking to enterprise leaders. I was talking to one a couple of weeks ago, and they were telling me that they run an organization of 5,000, and it's a fairly... It's not a well-known brand. I wouldn't call it... I'd call it a very middle-of-the-road brand. Not a lot of people have heard of this brand. And so not the most AI-native, forward-tech-leaning...
But they said that their team of 5,000 in two to three years would be reduced to 3,000. And so I thought that was significant, so I asked, how could that be? And in engineering, there are a few thousand people—roughly 30 to 40%—doing patches and lower-level work that they feel AI can handle sufficiently.
So it's just a matter of transitioning the team. So yeah, it's going to be a spectrum. Some folks, like I said, are going to blaze the path, and I think we're all going to be like, whoa. And then there are going to be folks where, like you said, nothing's changed much at all. Yeah. I think that kind of incrementalism—that number that you said, going from 5,000 down to 3,000—that's a huge amount of the company.
In my head, I would have assumed it's smaller percentages for the most part. But we recently had—again, at the time of recording, in about the last week—6,000 employees let go from Microsoft. And reportedly, those are mostly software engineers. And it's kind of the same thing you were talking about there: patches, there's lots of kinds of work
that can now be fully automated that some kinds of software engineers were needed for up until now. So it is interesting. What do you think, for our listeners out there—our hands-on practitioners who don't want to be amongst those kinds of 6,000 that are being let go at any given time in the coming years—what kinds of things do you think
hands-on practitioners can be doing to future-proof themselves? Yeah, I think there are a couple of things that I think about. I think in the short term, being an advocate for AI is very important, right? The reality is that machine intelligence is advancing quite fast. And you have this partner now. It's no longer just this passive device
that will take your Excel calculations or Google Sheets calculations and perform them for you, or run your macro. It's something that can help you think and help you come up with new solutions to a particular problem. I think everyone needs to be focused on that. For folks that want to know how to stay far ahead, I think that's number one. And then I think in the longer term—we talk about this
quite a bit at AIX. I think it's going to be really important for knowledge workers, should they want to continue in their fields and in the enterprise—I think they're going to have to gravitate towards where there are human-to-human touch points, right? So the digital world is being quite transformed right now. Robotics is still
pretty, pretty far away—we think two or three major research breakthroughs away. Like, when are humanoid robots going to pass the Turing test, right? And so for a long, long time, I think you're going to need that human-to-human interaction. And so if you're in outbound sales, if you're working with clients, if you're directly connected to other
people—not so much internally, but externally, and at least with the higher levels of the organization—I think that's another way to think about building your career. Build the future of multi-agent software with Agency, A-G-N-T-C-Y. The Agency is an open source collective building the Internet of Agents. It's a collaboration layer where AI agents can discover, connect, and work across frameworks.
For developers, this means standardized agent discovery tools, seamless protocols for inter-agent communication, and modular components to compose and scale multi-agent workflows. Join CrewAI, LangChain, LlamaIndex, Browserbase, Cisco, and dozens more. The Agency is dropping code, specs, and services, no strings attached. Build with other engineers who care about high-quality multi-agent software.
Visit agntcy.org and add your support. That's A-G-N-T-C-Y dot O-R-G. Yeah, that's something that I come back to: your ability to influence is something that will keep you safe in this AI era.
Going back a bit—still continuing on with this kind of enterprise-adoption-of-AI conversation, but going back to WorkHelix—they had a press release saying that we're in the first inning. And this is a baseball analogy; there are nine innings in a baseball game, for our international listeners who wish that we had more cricket analogies, perhaps. Yeah.
So this WorkHelix press release says that we're in the first inning of a decades-long AI transformation. And I absolutely agree with that. That is kind of what we're talking about already, you and me, Sean, in recent minutes: how we're going to have this incremental change, patches to software being something that's replaceable, and in two years it'll be more advanced kinds of software tasks, machine learning tasks, that are increasingly automatable. There's something interesting here
to me with enterprises, where it can be really tricky to rigorously measure the impact of an AI solution that's brought in, that has streamlined something.
I'm experiencing this right now with a client whose business we're automating an aspect of with my consultancy, using generative AI tooling. And I've been having almost daily meetings with my co-founder to make sure that we can come up with a great metric to ensure that we're able to demonstrate the impact of this. And so, in terms of being able to sell an
enterprise AI solution to an enterprise, what are the key things that we need to get into place, especially if it's difficult to measure AI impact? Being able to demonstrate a return on investment seems to me like an ideal, but if it's tricky to even have rigorous metrics that
demonstrate operational efficiency improvements, it can be even harder to get an ROI figure that you believe in. And so something that comes to mind for me is actually storytelling as kind of a way to get buy-in, to get influence, and get those enterprise AI solutions adopted. Yeah.
Well, you've called it out that we are WorkHelix investors, so that's the disclaimer for the audience. But this is one of the best teams at doing just that. And I was talking to James, the CEO of WorkHelix, the other day, and he's having customers come to him and say, hey, can you help us come up with metrics that
are things we should look at and things we should expect to move. And so the WorkHelix team is not just partnering with enterprises to understand what's interesting to them, but also what should be interesting
as a particular metric. Where I think it gets complicated is when you're increasing productivity, but it's not going to move revenue in any substantial way,
and it just reduces costs—but maybe it's just less clear, because the people that are becoming more productive just become a cost center in a different place.
What I mean by that is, let's just take, like, a dentist office—a small dentist office. There are only two or three people at the front desk. If you make them all more productive, they just go do other things. There's so much entropy in that office, you know? And how do you measure that?
That just becomes really complex. And then suddenly you're just a cost center, because you've sold some AI in, and it's not good for retention. Because if you're going to come in and make everyone more productive, riffing is really hard. How are you going to get adoption from a team that knows they're going to be riffed? Instead, you want a team that's so understaffed
that they're making errors, that they know they could do better, that they have to hire—and the hiring process is brutal. Just pulling the JDs off the table is, I think, the fastest time to value. And for those of our listeners who don't know what a riff is, it isn't grabbing your guitar and jamming with your friends, which is fun. Riffs are not fun. They're a RIF, a reduction in force—
a reduction in workforce size. And so, yeah, that's kind of that term. Another term, actually, that you've used a few times in this episode that I feel like a lot of our listeners would probably understand just based on context is AI native.
But I'd love for you to define that more specifically. And actually, let's just do that first—I've got a kind of follow-up question related to AI native—but do you want to just define what that means to you?
Yeah, I think there are two contexts in which we hear and use AI nativeness. One is at the product level, and the other is at the team level. So at the product level, what I think the general community means by AI native is: was the product built from scratch with AI front and center? The converse is that there was some legacy product, and then you tried to bake AI in, right?
Being able to start from scratch, the idea is that every interaction, every user experience is going to have AI front and center. And when you're bolting it on, it feels a little bit more kludgy, if you will. And then on the... Go ahead. I was going to say that even from the perspective of this podcast, this podcast is nine years old. And we release two episodes every week, every single week of the year. So 104 episodes a year.
And so it's very difficult for me to, like, get ahead a month and be able to completely overhaul operations or something like that. And I would love to be starting a podcast right now so that I could be like, okay, our operations from scratch—we're going to have agents everywhere, gen AI everywhere. There's so much opportunity if you're doing it in that kind of AI-native way, to give people a concrete example.
Everything would flow, I imagine, so much easier. Whereas once you have operations, there are things that work, and there's the cost, the price, of overhauling one piece when you've got to keep the bus on its wheels and going—I've got to get two episodes out every week. It's hard to be like, okay, we're going to completely transform this piece,
because it could have knock-on effects that I don't anticipate and that hinder my ability to be getting episodes out on time. Precisely. Yeah. And then at the team level, I think AI nativeness is more subjective. But the very high-level question we ask is: do we think this team will be able to unlock applications and user experiences that most other teams cannot?
Right. And if you can do that, that's an AI-native team. And it does indeed depend on the domain where you're innovating. But an example of this is Richard's team at You.com—they are able to unlock experiences in You.com
that are representative of an AI-native team. And same with Aravind at Perplexity, right? Very focused on the consumer side of things, able to unlock experiences that indeed are representative of an AI-native team. Yeah, yeah. It's kind of interesting to me how, in my mind, in the vector database of my brain, You.com and Perplexity are actually quite close to each other
in that space. And yeah, it's interesting to think how—oh, we don't need to get into that. But in terms of opportunity that you see across verticals: you sit in a position where you must get pitches in all kinds of verticals, AI-native applications from AI-native teams.
What are the kinds—for our listeners, for people who are thinking about building something new, if you're able to disclose this—what are the kinds of verticals that you think are still ripe today for disruption by AI companies? Yeah, I think the reality is, if you think about this sort of continuous curve between this
world of consensus investing and non-consensus investing—consensus being
obvious applications that everyone's building towards, and non-consensus being applications that are a little less obvious, right, that require some degree of insight—there are opportunities in both worlds. There were opportunities in SaaS five years ago. You just had to have some insight into why the world could change in a formidable way and how you're going to change it.
And that's still going to be true, right? Sometimes I see investors that are like, oh, this space is too busy. Call it engineering—AI-native engineering applications. Cursor's there, Windsurf's there, whatever.
Cognition's there. Too many are there. That's not true, right? These companies have grown bigger and bigger and bigger, and that's when they're ready to be disrupted, right? Just like always. And so there is a founder out there working on a vision of the future that might have a wedge that could grow very quickly. In fact, you hear that Cursor and Windsurf, to a degree, are being disrupted by Lovable.
And so here's a new player that's disrupting a just-a-couple-year-old incumbent. So I wouldn't say to avoid even the red ocean—
everything was red ocean in SaaS five years ago, right? The VC world and the startup world and entrepreneurs continue to be innovative and think about great ideas and great startups that will turn into enduring companies. And so I wouldn't steer anyone away. Like,
if you have a quite capable team and you want to go into a red ocean with an interesting idea, do that. We're about to invest in a team today or tomorrow that's doing exactly that. And if you have a team that's looking at a vertical that
not a lot of folks talk about—I was talking to someone today that's innovating in CPG, in a specific area of CPG transformation when it comes to product development, and it's so specific, and their background was so specific to that, that
it's very compelling. Yeah. There's a term that you used in there that I don't think I've come across before: you said red ocean. What does that mean? Oh, blue ocean, red ocean. Red ocean means sharks, a lot of competition; blue ocean just means a lot of opportunity.
I see, I see. Yeah, it's a good term, because I kind of had that visualization pop up in my head. There's also—there's an interesting—yeah, go ahead. I was just going to say, I think there's a book out there called Blue Ocean Strategy or something like that. There's also something interesting about how you pronounce—
the word, or the acronym, S-A-A-S. I always pronounce that Sass, and I feel like pretty much everyone does, but obviously sass is already a word, like sassiness. And so it's interesting—and this might be because I'm in New York and not in Silicon Valley, so I'm out of touch—but you pronounce it Saz. Like, you mean software as a service every time you say Saz, right? Indeed.
Yeah, yeah. And so I like that. I'm probably going to adopt it now. So listeners, you heard me say it here first: I'm now going to start pronouncing it Saz to distinguish it from sassiness. Yeah. And I guess that's a pretty common pronunciation on your side. Yeah, I mean, I hear Sazter quite a lot. Not Saster, right? Sazter. Yeah, yeah.
Yeah, nice. So we've gotten through a ton of the topics that I planned for you. There's a trend that's been happening that I'd love to talk about on air that I don't think I've talked about on air before and kind of get your input on. There's been a lot of instances in recent years where big tech companies like Google, Microsoft,
have, it seems, to avoid regulatory issues, instead of acquiring startups, they end up—and in fact, I can't remember which acquisition it was, but just this week the U.S. Justice Department opened up a new antitrust suit against Google for one of these—where instead of the entire company being acquired,
they acquire a bunch of the talent, including often the executive team. It seems like they're trying to avoid these kinds of antitrust issues, where there was this problem up until a couple of years ago of these big tech players getting increasing scrutiny
for making more and more acquisitions, making it difficult for competitors—like you said, these kinds of red ocean players that are trying to disrupt bigger players—instead of acquiring them. And so now you see this thing happening more and more where, yeah, leadership joins Microsoft, joins Google, but the whole company isn't acquired.
And so how does that, from your perspective, is that changing strategies that startups have? Is that something that you like as a VC or is that something that actually has the potential to undermine investments that you make? Yeah. So there are very few, let's call it maybe single digit thousands right now of AI practitioners that are at the frontier.
Right, there aren't many of them, and that means that there's a talent disparity. There's massive scarcity, which creates a disparity between your different companies, the haves and have-nots. And so when that happens, what do you expect to happen?
Companies are going to increase salaries and compensation. I heard the other day about, like, a new Google building that came up that's very, very fancy, that's just focused on AI, and it's color-coded. It's next level. And I think you're just seeing that.
You're seeing compensation skyrocket. I was talking to a frontier engineer the other day who said that, you know, they just joined one of the hyperscalers and, you know, money is no longer an issue. You know, they don't really think about the price of things anymore. Right. It's so competitive. And so when you live in that, when you're operating in that kind of environment, you'll do anything you can to survive.
You know, to buy up talent, right, to differentiate your business. Because what you do now, it all compounds—what you're able to accomplish this week and the next week and the following week is compounding. And if you lose your edge, in three to five years you're going to see it in the market. And so what does that mean? I'll tell you—I think there are pros and cons, right, as a venture firm. I think the pro is
you can have a conversation with founders—founders that have raised capital, that have enough capital to get through 24 months and are operating at a healthy pace—and you can say, you need to recognize that you have a very large opportunity cost here,
right? You could be at a big company—which you know they don't want to be at, because they're starting a company, they want to start their thing—but you could be at a big company making a ton of cash right now. And instead, you've decided to go the startup route. You need to go, go, go, right? Make sure you're blazing here, because the opportunity cost is so large. Now, the counter to that is sometimes they might just be like, hey, I'm going to go and get bought and go work at one of those large companies. And so
in that case, you may lose a founder or a team to one of these companies. But on the other hand, it provides downside protection, right? If you're an investor and you're investing in AI-native teams, you can be sure that you're going to get your money back, or some large fraction on the dollar, through an acquisition. We had one team that went from fully operating to acquired by Google in, like, three days. It was so fast.
And we were able to get all of our money back, maybe even make a little money on that. That's a great perspective. It is pretty interesting to think about these kinds of... you talked about the single-digit thousands of AI experts at the frontier. I don't know if you'd know the answer to this question.
But I'd love to hear your two cents if you do have some thoughts here. For our listeners out there who are thinking, I'd love to be amongst those few thousand people. What are the kinds of things that you can do to get there? What do you need to learn? Or what organization do you need to be a part of to get there? Yeah, I think there's different paths for people depending on where they currently are in their career, right? So if you're...
If your background is as an engineer who's building, and you have an inclination for math and some research, you might think about how you could work your way into FAIR at Facebook, or into DeepMind. And then if you're just starting off your academic career, or studying CS at the undergraduate or master's level, you might consider a PhD in AI. That's a fairly strong path, especially from universities like,
call it, CMU or Berkeley or Stanford or NYU or MIT or Toronto. They're at the bleeding edge of AI. I heard the other day that Rice University in Texas actually has something like a hundred-GPU cluster for their students. So even some of the non-obvious schools are taking AI quite seriously. And so
that's a path for a student. Actually, I saw on social media just yesterday, at the time of recording, a funny post. I won't be able to find it in time for it to be of interest to listeners, but I'll try to find it for the show notes and put it in there. It was about Rice University: they have a new undergraduate AI course, and their branding for it was something like one of the only practical programs
preparing you for AI in the real world. And the person who shared it on LinkedIn was saying it's shameful to make a claim like that, that it's one of only a few degree programs in AI, because they're popping up all over the place. But yeah, with the kinds of institutions you mentioned there, doing a PhD in AI is obviously how you get to the cutting edge. You're not talking about an undergrad in AI; it's a PhD in AI from one of these top organizations.
But yeah, I also love that you offered the alternative route there. Obviously, you can learn anything at any age. And so if you're already highly technical, you like doing research, and, like you said, you're highly numerate and like doing math, then you can...
Yeah, see what's in the job descriptions for a DeepMind AI engineer and start pursuing that. If you find that your bedtime reading turns out to be linear algebra, you should try it out.
Exactly. You know, I've never been in a situation where I'm hiring people of that kind of expertise. I'd love to be in a position where I was hiring from the single-digit thousands of AI experts in the world. But I did once interview someone where I was told by the person who introduced us that they do read linear algebra in bed.
And I was really excited to meet them, but it was a really disappointing interview. They weren't that strong. So I don't know. Anyway, I've got one last question for you before I get into my final items; as I mentioned before we started recording, I always ask for a book recommendation, and I also always ask our guests how people should follow them. But my last deep question before we get there is this.
So far in this episode, we've been leveraging your knowledge, but I haven't talked much about your particular background, which is the way we do things on this show. Most podcasts kind of have people just talk about their career and how they got to where they are, but I want to get to what you're doing now and what's really exciting, which is obviously where we've focused this episode. Sometimes at the end, though, I want to go back a little, and I'm going to do that now with one last deep question.
You have a multidisciplinary background spanning engineering, business, and sustainability. How do you leverage that background to help the founders you back integrate customer empathy and long-term impact considerations into AI products? It seems like that's something that's important to you, because otherwise founders might just end up prioritizing technical performance above all else.
Yeah. I don't know, well, maybe this kills two birds with one stone. We'll see. But there's this book called Range.
I think it broadly argues that if you've seen problems in a lot of different spaces, you can bring a broader skill set to founders when you're looking at their problems, whether it be product-market fit or hiring or any of the numerous things they're looking at,
and just be more resourceful. And so, especially coming through my career from CS to EE to business, and even within CS and EE from being an individual contributor to being a manager to working more on the marketing side, all these different things, I think I've just seen a lot, right? And now the beauty of AIX is I'm seeing even more every day, because I get to talk to so many different founders, pick their brains, get to know them pretty well, support our portfolio, and see how they're doing. And so I like to think that every day I get better, I improve my own skill set, and I can be even more valuable to founders. So that's...
That's a little bit about how I think about it broadly. Nice. And yeah, you got the book recommendation in there as well. Was that the book you wanted to recommend, or do you want... Yeah, I was thinking about...
Thinking about a book that I read a long time ago and continue to read, one that I quite like: The Art of Happiness by the Dalai Lama. I think it's quite a good book to level set and continue to be appreciative every day of the time that we're living in; the people and the innovation we're surrounded by are staggering. What an amazing time. And then books like Range. There's also a new book called AI Valley that came out, talking about Silicon Valley in this moment. Those are all good reads.
Nice. Great recommendations there. I haven't read that particular book by the Dalai Lama, but I read his, I think it's called, An Open Heart some years ago. That was a good one. And I've got to say, you do seem like an unusually happy person, so it's probably just that one book. I have a four-year-old who, you know, is getting to the point where he knows better than I do, so I'm always very humble. Nice. Yeah.
All right, Sean, this has been an amazing episode. I've learned so much from you, and I'm sure a lot of our audience has as well. What's the best way for them to follow you after this episode and get more of your thoughts? Yeah, I'm on LinkedIn; you can find me pretty easily. I'm on Twitter as Sean B. Johnson. My email is just s at AIXventures.com. S is just the first letter of my name, S
as in Sean, at AIXventures.com. Nice. Thank you so much again, Sean, for taking the time. And yeah, maybe we'll catch up with you again in a few years and see if we can complain again about how slowly enterprise AI adoption is going at that time. Thanks so much, Sean.
What an honor to have Sean Johnson on today's episode. In it, the iconic AI investor covered how AIX Ventures' unique model combines AI practitioners like Richard Socher and Chris Manning with traditional VCs to evaluate both technical depth and market savvy in founding teams.
He said the most effective enterprise AI adoption strategy is to target open job requisitions instead of augmenting current workers: AI can fill positions companies are already trying to hire for. He opined that only single-digit thousands of AI experts operate at the true frontier, creating massive talent scarcity and opportunity costs that drive both high compensation and talent acquisitions.
He talked about how consensus applications in competitive spaces and non-consensus bets in overlooked markets can both succeed: execution speed and the ability to pivot matter more than finding a perfectly empty market.
He also talked about how major tech companies are increasingly doing talent acquisitions rather than full company acquisitions to avoid regulatory scrutiny while still securing scarce AI expertise. And finally, Sean talked about how knowledge workers should focus on human-to-human touchpoints and building influence to remain valuable as AI transforms digital work.
As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, and the URLs for Sean's social media profiles, as well as my own, at superdatascience.com slash 895. Thanks to everybody on the Super Data Science podcast team: our podcast manager, Sonja Brajovic; media editor, Mario Pombo; Nathan Daly and Natalie Ziajski on partnerships; our researcher, Serg Masís; our writer, Dr. Zara Karschay; and yes, our founder, Kirill Eremenko.
Thanks to all of them for producing another excellent episode for us today. For enabling that super team to create this free podcast for you, we are deeply grateful to our sponsors. You, listener, yes you, can support this show by checking out our sponsors' links, which are in the show notes. And if you yourself are interested in sponsoring an episode, you can find out how to do that at jonkrohn.com slash podcast.
Otherwise, you can help us out by sharing this episode with folks who would love to listen to it as well, reviewing the show on whatever platform you listen to it on. Subscribe. Feel free to edit videos into shorts. But most importantly, just keep tuning in. I'm so grateful to have you listening, and I hope I can continue to make episodes you love for years and years to come. Until next time, keep on rocking it out there, and I'm looking forward to enjoying another round of the Super Data Science Podcast with you very soon.