Elliot Durbin. I'm with Boldstart Ventures. I started it with my partner Ed Sim in 2010. We call it inception-stage investing, because seed is what we called it 10 years ago, and we back companies that eventually sell to the enterprise with a first check. And how do I take my coffee? Black. I'm a New Yorker through and through, so it usually just meant it was easiest and fastest at the counter.
We are back for another MLOps Community Podcast. I am your host, Demetrios. And today I'm talking with my friend Elliot all about his last 10 years, 15 now,
in the venture space: what he looks out for, what he's excited about. I've been enjoying talking to the VCs recently because they get to see some of the newest stuff. They get to talk to the folks that are pitching them on what the future is. And so I want a little bit of that. I want to know what they feel the future is going to become. But Elliot was also a bit philosophical.
He talked about founder mindset, what it takes to really build one of those valuable companies, and what he looks for when he's writing that first check. So let's get into the conversation with Elliot. But before we do, if you are listening on Zoom,
Spotify, or any of the podcast players, I am going to give you a treat for your recommender system and play you one of my new favorite bands, the Emil Brandqvist Trio. Let's just cruise on into this conversation with Elliot. I hope you all are having a great day.
Tell me what you just said, but again. I'm relearning the same lesson over 20 years of doing first check investing, which is if I have a notion in the back of my brain that I just should invest in someone despite them maybe not having the sharpest product focus or me not believing in something,
I regret it. And I would be retired by now. If I just listened to the little Jiminy Cricket going, this person's special, you should back them. Oh, man. So many times. And that's the practice, you know? Yeah. Do you put together an anti-portfolio? Are you that nuts about it? I do. I do. And I find that the reasons that I pass on something that ends up becoming successful
are usually something that the founder figures out. And it's a feedback loop. Everybody needs feedback loops. The ones in venture are long, so you've got to survive long enough to get them. But it always comes down to a special feeling or a special notion about that person's ability to productize something brand new and to have empathy with the users.
And they figure out something new that I could have never imagined. That's what gets me up in the morning: can I get better at that thing? And there's plenty of times where I have that notion and then maybe something doesn't happen, but no one ever hears about those. So you don't write blog posts on those.
Maybe one day, maybe one day it'll be a next chapter. You did say something when we first chatted that I really liked, and it's been staying with me. It's like, how can I find a founder that I can just continuously invest in? It doesn't matter if this is their first company or their fifth company; I just want to go along for the ride. It's not something I created. I learned it from a lot of the folks that I learned this business from.
What I loved most when I got into venture investing (I don't really identify as a venture capitalist; I just kind of spot product intuition, and I can help you with things involved in building a company that may not be of interest to you) is knowing when a founder is on a mission. Sometimes that mission takes the form of multiple companies, and I'll give you the quick and dirty on this if you want. The best founders, I've found, are irrational. They could get jobs doing logical things that make a lot of money, but yet they decide this thing in their brain has to exist. Superhuman is a great example, right? Rahul's first company was Rapportive. He hacked Gmail and put the sidebar in there because he wanted to make people brilliant. The mission is still to make people brilliant at what they do, and email was the best vehicle
to do that and make folks super productive, because we spend so much time there. Rapportive couldn't monetize inside of a free platform; we didn't know that then, you learn these things. So he comes back for his next company and he says, I want another shot, but this time I'm going to build the whole client. It's not going to be a Chrome extension; it's going to be Electron.
And it's going to be 100 milliseconds or less on every single action, because that's the threshold at which the brain registers something as instant. And if, amongst the billions of Gmail users, a couple hundred thousand of them want an upgrade at a premium price, I'm a venture-scale company.
And his whole pitch deck was literally an X through Gmail. He's like, all this is hideous and it's slow. Gmail is kind of like Walmart: it's built for billions of people, but Superhuman was built for a narrower set of folks. So I think the best founders are on missions.
But you tell me, where can I double-click on those things? There's so many things to talk about now. So much. So much good stuff. And thanks for having me. I'm a huge fan of your community. I'm really kind of tuned into it, and I've heard our founders mention it. So I'm really excited to be here. That's awesome. That is very cool to hear. The places where we can double-click on that, in my eyes, are like, what are the missions that you've seen besides...
Superhuman that you're also very stoked on right now? Right now, top of mind. Well, let's get into the meat of it. We can talk about agents and AI. Yeah. That's the hot topic this year. Well, it's exciting. I remember I had a buddy who was at OpenAI, and he invited me in at the beginning of 2022. I just remember walking in there, and I've been around a little while, and the energy in that office,
in the beginning, was: does anyone know where my desk is? Because they were growing so fast. And, hey, does anyone have any GPUs? I ran out. The energy was almost like Facebook hyperscaling: figuring things out on the fly, everybody kind of zipping around. And I just remember thinking to myself, well, this is great news. I can keep my job for a little while, because there's something brand new that creates magic.
And they built that first hyperscaling consumer app. If we look at the history of the internet, even before that, it's been the hyperscaling consumer apps that have built the infrastructure that allows everyone else to build. I mean, a guy who sold books enabled Uber and Netflix and things like that. They build on each other. So we're at this great stage where we get to build again, on an order-of-magnitude clip, and everything's faster.
Everything's faster, and it always surprises you how much faster, as a rate. I'll give you an example. If you'd asked me in 2015 or '16, right around the time RPA was going on. Robotic process automation, if you remember that. Yeah.
Which is, explain that real fast for...
It automates processes in various ways, and we can keep a human in the loop so that you can control it, but it's going to be very cost-efficient for you; you're going to save a whole bunch of money. And I feel like it accomplished a lot, but it had its limitations, because it was based on microservices and lots of APIs, and things broke. The biggest problems were extraction issues.
Hey, I've got this invoice with handwriting on it, with a bunch of VAT codes and things like that, and I need to process this piece of paper that comes in via fax. At one of the Fortune 500 companies, which will remain nameless, I walked in and said, how do your purchase orders come in? And they go, oh, we have this room of fax machines. And I just thought of those companies sitting there trying to figure out, with OCR,
how to put that into an SAP database. It was hard. It was brittle, and it broke. If everything wasn't just right, it wouldn't process correctly and wouldn't execute. LLMs make quick work of that. So that was the first thing I noticed. I said, huh, so this can easily digest handwriting, and probably even do a better job of it. Okay. I mean, there's a lot here. I think the first problem they ran into, especially with agents, was they're not predictable.
And that was a lot of the last 18 months: they hallucinate. You know, my super-duper engineering fans are like, it's not even close to there, I'm not letting this near production, things like that. But then I met this guy who shows up.
And it looks like he's plugged into a wall outlet. He's like, agents, lots going on, hi, I figured something out and I've open-sourced this thing. Okay, so what did you do? He says, well, I allowed agents to be separated into crews and put in roles with discrete tasks. And when you do that, you increase the accuracy and you're able to run automations much more accurately at scale,
in more of a production way. And that was Crew. He was at Clearbit before that, building a lot of their ML infrastructure, and he got access to LLMs even before OpenAI started scaling. And it was really hard back then. So when they got acquired by HubSpot (he hates when I tell this story), I think his wife was giving him a hard time. His wife's like, you should go blog about AI, because you know a lot about it, and you're kind of, I think, driving me a little crazy. I don't know, that was the inference; I wasn't there. But what's cool about it is you have these folks with this experience, like João at Crew, who open-sourced CrewAI as a way to share it with the community. And it grew so fast that he was almost forced into thinking about how this could be a company, because big companies would find it and call him. I haven't seen that since CI/CD back in the day, or maybe GitHub. I think Snyk experienced a lot of it: it was built for developers, and then all of a sudden, by its use (because it showed where the dependencies were, what libraries were being used, and how the vulnerabilities in those libraries could affect your application), it became a security tool. So in a lot of ways, CrewAI, I feel, was out there helping people organize these apps, and it was so easy. My partner Ellen is an engineer. She was on the phone with me, and she set it up while we were talking to him for the first time. And she goes, this is delightful. Interesting. And I didn't know what an agent was at that point; this was February last year. And what's happened since is, I think, a lot of the same echoes of the past with automation, because that's what enterprises
want to do. They want to automate things, and they want to save money and be more efficient. So now we're in the process of deploying into these enterprises, and a lot of the same enterprise stack is being built. You've got the application layer, the monitoring layer, the infrastructure layer, and then you go all the way down to compute. And the questions become billing and pricing: how do you
minimize variability in your pricing so that an enterprise can consume it, but also not get stuck with a bunch of the costs? Because these things aren't settled; there's a lot of innovation going on. I won't touch that one, but the cost will go down. Not yet, though, and it's still variable. So it's really putting companies together very fast again, because of the existing demand. To come full circle back to 2015,
I would have told the company, hey, get to a million in ARR over 18 months and you're killing it. Now it's get to a million in ARR in a couple of months, and go from one to… in 12 months. So everything is growing faster, and you really have to look for speed of execution in founders. And if you're going to start a company, just be ready to be really narrow. But when it runs fast,
lean into it and run fast; deliver what the customers want. If you're going to ride the wave, ride the wave. If you're a surfer, don't pop up too late, especially on a big wave, because you're going to get tumbled. But yeah,
I don't know. It's cool, because I see a lot of founders that have maybe not been excited. Maybe they founded a company in the past; maybe they've got other things going on. They see this and they're like, I'm sorry, I have to get on that wave. Yeah. Looks super fun. So it's an exciting time. But what am I super excited about? It's new companies that are building greenfield agentic applications, but also our existing companies, maybe a little more mature, Clay being a good example.
Clay.com. Clay has been around for seven years, and it's a really powerful tool for enriching your data. Eventually RevOps people found it, and it's become this best practice: if you're building an outbound model for sales, you pull in all your leads via all these different data sources, kind of like Clearbit offered. The minute they added these things called Claygents (they have great branding; it's all claymation and all this other stuff).
But these agents would go out and scour the web for even more information to add to your data set, which made their application better for the users. And more people started using it. So agents can be great as the foundations of software, and they can be great to enhance existing software. So there's an interesting
delineation between the greenfield and the existing. All I know is that the companies that don't leverage that aren't getting any of the benefit of the demand that's out there. And I think that's the best thought exercise: when you have an existing product and you're trying to push the boundaries of, okay, can we find a way to make this
compatible or better with AI in some way, shape or form. It might just be an LLM call. It might be a full on agent that we can add to a feature set. But the other side of that coin is,
You just have product managers trying to stuff AI into any nook and cranny. And so there are going to be a lot of those failures where you think, oh, we should add AI here, and then what you find is that no AI was needed. In fact, the less intelligent this product is, the better. Yeah. In some cases, that would be frustrating. To me, that's exciting, because...
I don't know, we're back at a stage where it's cool to fail. It's like, well, you tried it and it didn't really work. I only fault that whole scenario if you don't then recognize when something's not working and adjust. The best founders fail so fast, I tell our folks, you don't even see it. And so you can get into a loop where you're like, okay, maybe here? Nope. Next. No one will remember.
I remember when Rahul at Superhuman really taught me a lot about it, because he was very early to this curve, adding and embedding these features: it can summarize, it can draft and compose in an iterative way. And now we're starting to see that move forward into it can automate things for you. And what I haven't seen yet, but I'm excited to see on the personal level (Granola is one of my favorites,
because it sits on the desktop and it doesn't get in your way), is AI finding more ways to get out of the user's way. And you may even start using it without even knowing it. I don't know. It's as close to consumer investing as I can get. Yeah, the Granola one is fascinating. I remember talking to the founder when he...
was starting it, and he was like, we're going to do like Zoom recordings, but for in-person meetings. And they've since pivoted, and people love it. I've seen them everywhere. I wish I had invested, but I'm not in it. It is just such a great product. And the other thing I love about it is it's always updating. I always double-check when I log out: is there a new version? Because if there is, there could be something really cool in there.
And it's a good lesson for the founders out there: the best way to instill confidence in your users is to rapidly release things. It's like this mini dopamine hit that I get every time I update the software. Something new. Yeah, the shipping thing. But it's an example of a great product. Superhuman's a great product. And I think on the agent side, the challenge would be, what's the interface? How does it represent itself?
And how do you express what you want it to do? And the biggest thing that I see happening now is companies are proud of their evals. Did the agent do what I wanted it to do? And how do you start to monitor that at scale? You saw Google came out with that service where, whether you're on LangChain or Crew or whatever, you can have a service that basically checks your evals. So yeah.
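To make the eval idea concrete, here is a minimal sketch in plain Python of what such a check could look like. Every name here is illustrative, not any vendor's actual eval service or API: the "agent" is just a callable, and each case pairs an input with a predicate on the output.

```python
# Minimal agent-eval harness sketch. All names are illustrative, not any
# framework's real API: an "agent" is a callable, and each eval case pairs
# an input with a predicate that decides whether the output passes.

def run_evals(agent, cases):
    """Run each case through the agent and record pass/fail."""
    results = []
    for case in cases:
        output = agent(case["input"])
        passed = case["check"](output)
        results.append({"input": case["input"], "output": output, "passed": passed})
    return results

# A stand-in "agent" that extracts a dollar amount from invoice text.
def toy_invoice_agent(text):
    for token in text.split():
        if token.startswith("$"):
            return token
    return None

cases = [
    {"input": "Invoice total: $120.50 due May 1", "check": lambda o: o == "$120.50"},
    {"input": "No amount present", "check": lambda o: o is None},
]

results = run_evals(toy_invoice_agent, cases)
pass_rate = sum(r["passed"] for r in results) / len(results)
print(pass_rate)  # 1.0 if both cases pass
```

In a real system the agent would call an LLM and the checks would be richer (structure, grounding, latency), but the shape is the same: run every case, tally pass rates, and watch that number as you ship.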
In cloud, people were like, oh, we don't need to test it. I think Netflix started testing because they needed such high uptime, right? Yeah. We didn't see that stage for years when cloud came out; everybody was just throwing stuff against the wall. But to have the conversation already starting to tilt towards testing, that's pretty wild to me. Yeah. But I think it's because businesses are back to the demand. I have never seen demand like this. Mm-hmm.
You have two-person, one-person companies. I have a one-person company that just sold their product to Microsoft, because Microsoft found it and started using it. He looks at me and he's like, I think we should get some more people. I said, I think so. But then again, he's like, well, maybe not, because there's almost this culture of I can do more with less, so let's think carefully about it. I think we're going to start to see more and more efficiency on the business-building side as well.
It's pretty broad, so I can double-click anywhere in there. What do you think? A, first of all, it is an insanely competitive piece of the stack. And so that's one part. The other part is, are you not worried that it is built on top of other...
open-source tools? Like, I think it uses and leverages a lot of LangChain, right? It did. Now it's moved away from that; it's moved towards a more agnostic framework. Okay. Because I think that everybody's standing on shoulders, whether they know it or not. And if you're not, you probably spent a lot of money, and you're probably a pioneer, and you're early. Okay, cool. Right.
But I think that what Crew did initially, in my brain (and you'd have to ask him what was in his head), is he made it super easy. And it was the easiest I'd ever seen. And I think that for new stuff, if you can make it really easy and intuitive for folks that express themselves in code, you up the chances that someone creates something very big using your framework.
And that's exactly what happened. It's like big companies coming in, automating things. I mean, one of the big accounting firms was like, yeah, we're looking at automating our audit function. Okay. I mean, I'm sure there are things wrong with that, but you've got two choices. You can be kind of rational.
They're like, cool, let's figure out what breaks and fix it, and just keep shipping. And this happens with any good startup: you see the boulders in front of you, and you just try to smash them as quickly as possible by shipping stuff. And that's what he's doing. So I think the value of Crew is that it's super easy, but also it gives you a lot of control. And there are a lot of ways to do that. The way that Crew is doing that is,
I think, natively to the next generation of apps versus the prior. And other frameworks may be going a different way, whether it's a graph infrastructure or some other sort of design. It's kind of like going from analog to digital: Crew is going to let you control your agents and the legacy they create. I asked him, when I told him I was coming on the podcast today, if you've got one message out there about Crew, what would it be?
And he says, the decisions you make now with your tests will dictate how much maintenance and core infrastructure you have to build to maintain your applications for the next decade. So think carefully about how you design your agentic systems. Whether you build them yourself, use a third party, or use multiple frameworks and multiple models, you have to create a way that it can leverage the innovation of agents themselves. Mm-hmm.
To self-heal, to monitor each other, right? Crews that monitor crews is one of my favorite things. I told him we should probably get emojis or something, like the clipboard crew. Yeah. But is it built on other things? Absolutely. But like LLMs: we couldn't have them unless we had the internet. We wouldn't have the internet unless we had, like, Google, right? So everything, I feel, is a dependency on something else, but that's...
I don't worry about these things; I try to get ahead of them. And I think the tide's rising. So the real question is, who can take those opinions from the market and create the best product? Yeah. Now, thinking about the way that they manifest, I really liked how you put it: the chat,
and how we interact with these agents, is still yet to be determined. And maybe that's not the best way forward. And there are a lot of people trying to find new ways. They're actively trying to figure out, what can we do instead of the chat? And then the other thing is making sure they do what they do. And you kind of broke down three different areas that
I would love to talk a little bit more about, because there's how we interact with them, and then how we know that they're doing what they're doing. Right. And let me zoom out one level higher, because I think that's right. Yes, that's all happening now. But for context, anybody that's listening to this is going to be like, okay, that's a little too in the weeds. You've got this journey happening. And I heard this from a head of AI engineering at one of the top five banks. I won't mention which one, because I don't have the media approval. Yeah.
But I don't want to get sued. We're on this journey that starts with co-pilots on one side, at the beginning of the journey; you have agents in the middle, and then you have autopilot. So, co-pilots to autopilots, and agents are the bridge. If you think about things from that kind of framework, you're going to need to trust that system. Create a co-pilot so the humans get in there and are empowered in the loop to do the things they do
repetitively, faster, better, more accurately. We're seeing that in things like QA. You can replace your whole QA team now, for those companies that have QA, right? The best engineers are like, I write my own end-to-end tests. I'm like, well, that could be automated in Cursor too. The best engineers are already automating things. But the QA teams at the more mature companies can go away, and those agents can be more accurate with Playwright
than any human could ever be. That's the latest thing that I'm seeing. So you literally can walk into a company and be like, do you do QA? Yeah, we don't like it. None of our engineers like it. It always causes reproducibility issues and all these things. That's gone. It is now a no-brainer to do that with agents.
How do you interact with that? Well, you're going to have to figure out little UX components and ways to kind of notify the user that it's been done and that it's been done correctly and allow them just a nice mental check. Yeah. That's a design experiment in a lot of ways. So I think the first interface will be at the application level, interacting with the user on a co-pilot basis that says, hey, we made this better and you can trust us.
Cool. Then you get into agents: agents that start automating things underneath in an even more discrete way, say in a multi-agent system. I'll use the original Crew example: it researches a blog post, writes the blog post, lets you edit multiple versions of it, then posts it to LinkedIn or something. That's another kind of interface, because then you're designing more discrete agents that can do things, and you eval each one. Did they do the right thing?
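The research, write, edit, post pipeline described here can be sketched framework-agnostically. This is illustrative Python showing the crews-with-roles idea (narrow roles, one discrete task each, sequential hand-off), not CrewAI's actual API; in a real crew each task would call an LLM.

```python
# Framework-agnostic sketch of crews-with-roles: each agent has a narrow
# role and one discrete task, and the crew runs them in sequence, passing
# each output to the next. Names are illustrative, not CrewAI's API.

class Agent:
    def __init__(self, role, task_fn):
        self.role = role          # e.g. "researcher", "writer", "editor"
        self.task_fn = task_fn    # the one discrete task this agent performs

    def run(self, payload):
        return self.task_fn(payload)

class Crew:
    def __init__(self, agents):
        self.agents = agents

    def kickoff(self, topic):
        payload = topic
        for agent in self.agents:  # sequential hand-off between roles
            payload = agent.run(payload)
        return payload

# Stand-in task functions; a real system would make an LLM call in each.
researcher = Agent("researcher", lambda t: f"notes on {t}")
writer = Agent("writer", lambda notes: f"draft post from {notes}")
editor = Agent("editor", lambda draft: draft.replace("draft", "final"))

crew = Crew([researcher, writer, editor])
print(crew.kickoff("agent evals"))  # final post from notes on agent evals
```

The point of the decomposition is that each role's output is small and checkable, which is exactly what makes per-agent evals tractable.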
You know, I don't want to automate this fully, because that's kind of my brand out there, and I'm not sure it won't start spewing a whole bunch of stuff that I don't want to associate with my brand. But you can do that with Crew's innovation, which is discretely organizing multiple agents into a system. What comes next is the wild stuff, which is autopilot. Once you trust those systems, what is that going to allow us to do? What's that going to free up a whole bunch of time to do? If you no longer have to test your software,
that gives you time back to do even more. So I think that that is coming. I haven't seen anything that's that defined on the future end of the spectrum. So we're just kind of in that first phase, moving from co-pilots to agents to eventually, I think, autopilot. So going back to this UX idea and almost looking at it as
different knobs that we can turn and different ways that we can try and experiment with the human-to-co-pilot relationship that we have now, in the first innings of this journey. I've been thinking a lot about how one design pattern that's coming up seems to be firing off agents from Slack, or from a chat that is...
Yeah, it's very much ChatOps. I like that term. I haven't heard it before. I've got a whole book of those if you like them. It's funny, because there is almost this world I could see where you are directing your agents and orchestrating your agents in the place where you are already talking and having the most communication with your team.
And it's unstructured, and that's quick work for these things. Well, and maybe you're making decisions in a thread, and then you just tag in the agent and say, okay, we made the decision, now go do it. Yeah. We're seeing that in support with our company Kustomer, with a K. They're some of the best engineers that I know. When we backed them originally, it was because the graph database existed. They're like, hey, we've built like five customer support companies, and now there's a graph database, so we can make a timeline and get rid of tickets. Yeah.
Now what they're saying is, okay, now we have vector databases, so we can literally have virtual agents that can do everything a human can do. But we'll start with automating this kind of thing, and you lower the exception queue of stuff that the agent can't handle. But to your point, that's kind of 1.0 interfaces moving to 2.0 in this space. From a UX component, it's all about earning the user's trust, I feel, to automate things.
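The exception-queue idea can be sketched in a few lines of plain Python. Everything here is illustrative, not Kustomer's actual system: the agent resolves whatever a simple policy says it can handle, and everything else falls through to a human queue, which you then try to shrink over time.

```python
# Sketch of the "exception queue" pattern: the agent handles what it can,
# and anything it can't gets routed to a queue for a human. All names and
# the routing policy are illustrative stand-ins, not any real product.

def handle_with_exception_queue(requests, agent_can_handle, agent_resolve):
    resolved, exception_queue = [], []
    for req in requests:
        if agent_can_handle(req):
            resolved.append(agent_resolve(req))
        else:
            exception_queue.append(req)  # falls through to a human
    return resolved, exception_queue

requests = ["reset password", "refund order 42", "legal complaint"]
can_handle = lambda r: "legal" not in r          # toy routing policy
resolve = lambda r: f"auto-resolved: {r}"

resolved, humans = handle_with_exception_queue(requests, can_handle, resolve)
print(len(resolved), len(humans))  # 2 1
```

Trust grows the same way the speaker describes: as the policy proves itself, you widen what `agent_can_handle` accepts and the human queue shrinks.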
And once the user starts automating things, or requests or demands that, that's when you're ready to take the next step. But I don't know. I sit on the VC side of the table, so I'm not very good at making those decisions. I just get to point when it works and be like, oh yeah, that's how you do it. Taking credit and saying I saw this coming is one thing I don't do. I grew up around really senior founders who were like, no, no, no, VCs are...
you know, maybe seen but not heard. But that's changed a lot. You know, it has got to be a no-ego job, because it's not about us. It's about supporting the founders who figure something out that's big, that everybody wants, and helping them figure out how to deliver that with quality, at scale, really fast. Yeah, that's really the job: how do I scale a company to meet the demand that's out there? And with these agents, I mean, companies are buying them.
If you're selling an AI agent to an enterprise, across the board, you are seeing more demand than I've seen in the history of doing my job. Amazing. Going back to that, and this is one area where I imagine you're having lots of conversations with founders in the agent space: the pricing discussion that you were talking about earlier, and how much of a
thorn in the side that can be, because you don't want someone to have to think twice about using your agent because they're thinking about the price tag. But you also don't want them, for some random reason, to have fired off a few more agents this month, and now the bill went up by 40K and the engineering manager is coming in and saying, what the hell happened? It's like a fire alarm goes off and your customer is like, the budget changed. What? Yeah.
Or they say, no, no, I only want OpenAI, not Anthropic, and you built your thing on Anthropic, and you're like, right. So there's a lot of figuring that stuff out. On the extreme level, six, 12 months ago, I started to see the labor component of this: it's replacing human labor. And historically speaking, with the sales cycles for a software license or SaaS to a big enterprise, you have to prove your ROI, right? And that takes six to nine months.
In the best case, maybe three months. And I think they went from being called POCs to POVs. I don't know when that switched. What is the V for? I've heard that too. Proof of value. Oh, okay. I don't know, there are a lot of acronyms; I try to avoid them because I get confused. But they test it out to see if it's valuable and whether it pays for itself, especially in this climate. I think a lot of the fast-growing companies are replacing three things and then creating a ton of value. So they get the budget.
But with pricing, I see a lot of them saying, you know, for my first couple of customers, just sell it as a service. That works, but only for so long. Palantir is my favorite example of what we're seeing. My partner Ed and I go back and forth on this all the time, because a lot of people doubted Palantir's model in the beginning, with the forward-deployed engineers. They're like, yeah, I don't want to send a bunch of engineers to an oil rig in the middle of the ocean.
So figure out data processing at the edge. But it turns out their sales cycle is super efficient, because they get in there with the customer and just set it up, and they set it up with the customers and train them on it.
And you have Palantir AIP, which is a great product. It was very early, but the adoption of that thing is wild. So I'm actually seeing a lot of our startups adopt that model. You've got to go in there with a forward-deployed engineer. Oh, OpenAI just put job specs out
for forward-deployed engineers. Check it out. I quickly sent it to all my companies; I'm like, hey, this is a great one. Great artists steal, so just take the elements that you want and put the JDs out there. I think it was a really great read, so kudos. But a lot like early anything, you're going to have to deploy experts in to train your customers. But you don't want to call it training; call it a workshop. You can charge for a workshop.
And with a lot of the companies that I've seen, if there's a best practice or a skill set involved in how to leverage the software appropriately, the best way to train your customers is to send in an expert that they can utilize to set it up. And that's fundamentally what we're seeing in a lot of them, whether it's a security company or even Crew. I mean, everyone needs a little bit of expertise with their software so that they can practice the best way.
But they're not going in as a consultant and then installing. That's the trick. That is the core fundamental muscle you got to build as a product muscle. And the example I give a lot of our founders that have not done that before is your answer to things can't be bespoke. They can just make sure you add two zeros to the cost of that. But you got to productize things. Yeah.
You know, it's like, hey, how do we productize our onboarding process so that a forward-deployed engineer can go into a customer, but they are teaching them how to use the product and then onboarding other teams. Yeah. So you're either a sales-driven company, which I think a lot of consulting firms are, or you're a product-driven company, or you're a hybrid. And I think it's really great to be a hybrid, because then in the beginning you can go in and figure out what's most important, productize that, and then see if it
translates to the next customer. And if it does, that's a win. Celebrate it. Startups are hard enough. You got to celebrate these. But if you can get the same customer, you know, one customer up and then the next one, you can reuse some of those components, even with human assist. I think that's the new definition of repeatability. Yeah.
You were talking about evals and how testing didn't come till much later in the cloud lifecycle. And now evals are here very early, which is a form of testing in a way. And there's a lot of pieces in the cloud that are very mature. And I wonder if you feel like they're going to also translate to the agent world. Well, let's take what you just said for a minute and just kind of really hear it. The cloud is mature.
I remember the days when enterprises were like, I'm never, ever going to the cloud with my tier one workloads. That's crazy. Well, then the CIA did it. Everybody's like, oh, well, if they can do it, we can reduce our costs. And I mean, I just remember the last 10 years — I'd say 15 — watching...
Those global MSAs, the master service agreements, go out and literally take the Fortune 500, department by department, and put them in the cloud. I think that set up the infrastructure for what we're seeing now. So there's CI/CD, right? There's source control, you know, and I think developers are relatively empowered, but now there's this huge gain. So testing it relative to the existing software, right,
is in a way the whole company — Tessl, who's championing the AI-native development wave. But more importantly, I think, it's how enterprises will consume that and what's important to them. And if we're at a stage where you have these agents that will write code, that's a lot of code. That is a lot of code. More code is going to get written this year
than probably in the last 10 years. Because it's now, like — I remember talking to engineers back in 2010, and it'd be like, code generators? Never going to work. Never. No one will ever trust that code. Now it's like, oh yeah, I just scripted up my whole integration with my database. And you're like, what? So I think that testing is going to be more important because there's a lot more code being written. And I think that
It doesn't change the uptime that you need. It doesn't change the quality that you need to ship your software with. So it makes sense to me that testing is happening faster, but it's also the fact that sometimes these things hallucinate. And less than a year ago, we were worried about the accuracy of agents completing the task as we expect. So I think new ways of testing will be created. I'm spitballing here, but I'm thinking about...
Looking at how you would analyze a company, and let's say that I come to you and I say, I've got this amazing idea for a company that it does alerting for agents when they spend too much money. It's basically your alerting software. Are you going to tell me that's a feature that AWS is going to build in natively or
Is it going to be something where you're just like, eh, alerting is too small of a pie? And then maybe I say, okay, well... I'd ask you why. Because just to be clear, at Bold Start, we started in 2010 with a million-dollar fund. We're now investing out of our sixth fund. Ed and I have been backing people, specifically engineers, who have a problem space they want to solve and an opinion on how it needs to be solved.
It's the earlier point — almost irrationally so. Guy from Snyk: his last company was called Blaze.io, and Blaze was solving mobile website caching. It got acquired by Akamai, so he sat at Akamai. And he basically saw that everyone was shipping faster, and that was going to create a big security hole. So he comes in and he says, I think the vulnerabilities inside of these dependencies that developers are using, without even knowing it, are going to be a huge problem. And then Apache Struts and Equifax happened.
If they had been using Snyk, it wouldn't have been a problem, because it would have patched that vulnerability. So all of a sudden people were like, oh my God, how do we fix this? And it turns out security, at the time, really didn't know a lot about how developers worked. So he built this tool for developers. He made the big bet: I'm going to build a developer tool that will eventually enhance security and become a security product. Same thing applies to alerting. Why?
Why is alerting such a big thing, and why will it become even more important? Is it because that creates the triggers to automate other things? Interesting. But if you're looking to start a company, stop thinking about a company. Companies are built around really successful products, and successful products are created by people — people with an opinion that this is the way it's done best. And the more credible the opinion, the easier my job is.
So some days I'm like, oh yeah, yeah — you know, you're coming out of some great company that scaled really fast, and this is the way you did it. It makes sense that other people are going to do it that way. And they'll probably listen to you because you're fully credible, and you'll be able to recruit people and get your first customers. In other cases, it's like, there's demand out there already, and I built a thing that I just kind of put out there, like Crew, and all of a sudden it's just growing.
I mean, Crew's grown faster than anything I've ever seen or been involved with. So handling that demand, I feel, is the big challenge. So the why, in your case on alerting — we'll go back to it. Why is that important? Why will that be important? What will your users realize that can do? What do you want to prove is possible? Brilliant. In a static environment, no idea is good or bad. Yeah. But...
I think it's the: what does it prove? If you get that first revolution on the bicycle successfully done — my favorite example is, can we rent out the couch in our spare bedroom to someone we don't know, from our phone? I mean, in the beginning, literally everyone passed on that, because, what are you talking about? Why would I do that? It's kind of intuitive now. And so it proved that it was possible to create
you know, Airbnb and whatever it was. I'm not invested in that, but I saw it happen and I was like, wow, cool. There's something you said there that I really like, which is just: what does that enable? So you come with an idea — and what does that get you? Is there more to it? And then also, um,
the founders usually have strong opinions about what the best way to do something is. We just backed a founder who had to deal with Vault all day, every day — HashiCorp Vault, an amazingly successful product. And with agents, though, I'm not so sure it's the best way. And if agents can go in and read all the identities, and have their own identity, and secure all the applications that are
kind of communicating with each other, and keep your secrets, and make sure they're available at the right place, at the right time, to the right user — I mean, there's fire behind their eyes, because they're like, okay, we can replace that now. Oh, all right, let's go. And that's really what I look for: do you have this fire, this burning need to just put a dent in something, or make something better, or get rid of something that is just terrible
but had no better alternative. So on the tool builder side, great. We're investing a lot at the application level, because I think that's the stage we're at right now. So whatever you build, it has to be for users. It can't be infrastructure for infrastructure's sake. And I don't invest in foundational models. I'm not,
you know, sitting at the big poker table, as they say. I'm much more comfortable at the application level, or the kind of tooling for applications, at the moment. And security, and I think a lot of the workflows that can be automated, are exciting. But these things are integrating with databases faster than I've ever seen. I think we'll see new ERPs coming out.
I think we'll see new PLMs coming out — like, acronyms from, you know, Jurassic Park back in the day. You're like, wow, I could redo this now. And I'm just looking for the right founders who, whether it's through experience or maybe having the pain themselves, just have a very strong opinion about how it should be done, so that they can share that.
I just saw Paige Bailey, who works as a dev advocate at Google, I think, on the Gemini team, post about how there was this Hacker News posting where someone said, we had this whole OCR software and it has a human in the loop and we were using it.
But since testing with Gemini 2.0, we now just got rid of the OCR software because Gemini does it better and faster and it is a much easier experience. And so I wonder about when you're talking about applications, there's going to be those subset that
are applications, but then there's probably going to be a lot that just gets done. Oh, yeah. But that was the case back in the cloud, too. We used to have meetings — Amazon's wised up to it — they used to have meetings for us at re:Invent, for all the VCs: which companies are we going to kill? I mean, it's part of it. But they gave us a heads up, which was nice, you know? And so I tell founders: listen, if you're going to go into the jungle, get ready. There are lions and tigers and bears in there, and you probably won't make it.
And I think, as an investor, that is the most important skill set. Be patient. Don't get in the founder's way, because you can't fix things. But more importantly, you've got to be comfortable that it might not work. Because if you're not comfortable with that, the founder is going to have a tough time being comfortable with it. And the only way it works is if they're real comfortable facing that failure. I had companies that literally wouldn't have made it unless OpenAI came out.
And then they started leveraging LLMs and stuff started to grow really fast. So it goes both ways. And the last thing I'll say about that, yeah, Amazon, Google, they could kill your business. It's happened over and over. But you can move a lot faster than them and they're going to build a generalized product. And if you pick the right narrow set of users and right narrow kind of market to go after...
that they may not custom-fit their solution to — SageMaker being a good example, the Fault Injection Service another, on and on and on — you can create the best practice that they will eventually have to write back to. We saw that at Snyk. I mean, all the big companies were like, well, I'm building a vulnerability database. And I was like, well, you can write back to mine if you'd like. And they were like, wow, well, that's...
The best founders are, like, defiant, you know? And the best day was when, you know, they said, okay, we'll write back to it, we'll integrate you in Lighthouse or whatever it is. But if you really believe that you have the best way and you can endure that game — kind of chicken, almost — or you're like a Firebase: okay, well, maybe I'll sell to Google, and then I'll enable all the mobile developers with my best practice, because this is the way to do it. I'm going to make everyone better.
And that's, to me, the bullseye. I don't always get it right, but if I can find a founder like that, they're not going to give up. But more importantly, I think they're going to see the mission as continuing on, whatever happens to their company, because it's what they're about in a lot of ways. So yeah, the risks are there. It's scary. They're releasing all sorts of things. They'll probably do it as a service — Amazon will probably release it before they even have a product for it. But if you've got the best way, you've got the best way.
Yeah, and Amazon will steal some open source version of it. This is very high level — I'm trying not to be too abstract.