After gaining early access to GPT-4, Jake Heller and his team realized its transformative potential for the legal industry. Within 48 hours, they decided to shift all 120 employees to building a new product, CoCounsel, on top of GPT-4. The decision was driven by the technology's ability to do in a minute and a half tasks that previously took a lawyer a full day, offering a significant competitive advantage.
Casetext went from a roughly $100 million valuation to a $650 million acquisition by Thomson Reuters on the strength of CoCounsel, a product built on GPT-4. Acquisition conversations began within two months of launching CoCounsel, and the deal closed about six months after launch.
Casetext had been investing in AI and natural language processing for over a decade, building close relationships with research labs like OpenAI. This groundwork allowed them to quickly recognize the potential of GPT-4 and pivot their entire company to leverage the technology, giving them a significant head start in the market.
Convincing the team to pivot to GPT-4 was challenging because many employees had seen previous pivots fail. Jake Heller led by example, building the first prototype himself and involving customers early to demonstrate the technology's potential. Seeing customer reactions during Zoom calls helped change skeptical minds quickly.
Casetext implemented a test-driven development framework, creating thousands of tests for each prompt to ensure accuracy. They broke complex legal tasks down into step-by-step prompts, ensuring the AI could handle nuanced legal work without hallucinating. This rigorous approach made CoCounsel reliable enough for mission-critical legal tasks.
Vertical AI agents, like Casetext's CoCounsel, are tailored to specific industries, offering deep domain expertise. Casetext's focus on the legal industry allowed them to build a product that significantly improved legal workflows, leading to their $650 million acquisition by Thomson Reuters. This success highlights the potential of vertical AI agents in creating billion-dollar SaaS opportunities.
Casetext's experience with earlier models like GPT-3.5, which often hallucinated and lacked precision, taught them the importance of rigorous testing and prompt engineering. When they gained access to GPT-4, they applied these lessons, breaking down tasks into smaller, testable prompts to ensure accuracy and reliability in their legal AI assistant, CoCounsel.
Startups can learn the importance of early investment in AI, the value of test-driven development, and the need to pivot quickly when transformative technology emerges. Casetext's success also highlights the potential of vertical AI agents in creating significant market opportunities by solving specific industry pain points.
This is our first ever experience talking to this godlike feeling, you know, AI that was all of a sudden doing these tasks that would take me when I practiced like a whole day and it's being done in a minute and a half. The whole company, all 120 of us did not sleep for those months before GPT-4. We felt like we had this amazing opportunity to run far ahead of the market. That's why you're the first man on the moon. Yeah.
Welcome back to another episode of The Light Cone. I'm Gary. This is Jared and Diana. Harj is out, but he'll be back on the next one. And today we have a very special guest, Jake Heller of Casetext. I think of Jake as a little bit like one of the first people on the surface of the moon. He created Casetext more than, I think, 11, 12 years ago, actually.
And in the first 10 years, you went from zero to a $100 million valuation. And then in a matter of two months after the release of GPT-4, that valuation went to a liquid exit to Thomson Reuters for $650 million. So you have a lot of lessons about how to create real value from
really large language models. I think you were of, you know, our friends in YC, one of the first people to actually realize this is a sea change and revolution. And not only that,
We're going to bet the company on it. And you were super right. So welcome, Jake. Happy to be here. One of the cool things I think about Jake's story and reason why we want to bring him on today is that if you just look at the companies that good founders are starting now, it's a lot of vertical AI agents. I mean, I was trying to count the ones in S24. We have...
literally dozens of YC companies in the last batch building vertical-specific AI agents. And I think Jake is the founder who is currently running the most successful vertical AI agent. It's by far the largest acquisition, and it's actually deployed at scale in a lot of mission-critical situations. And the inspiration for this was we hosted this retreat a few months ago, and Jake gave an incredible talk about
how he built it. And we thought that it'd be super useful for people who watch the Light Cone, who are interested in this area, to hear directly from one of the most successful builders in this area how he did it. So how did you do it? Well,
First of all, like a lot of these things, there's a certain amount of luck. Over the course of our decade-long journey, we started investing very deeply in AI and natural language processing. And we became close with a number of different research labs, including some of the folks at OpenAI. And when it came time for them to start testing early versions, we didn't realize it was GPT-4 at the time, but what became GPT-4, we got a very early view of it.
And so, you know, months before the public release of GPT-4, you know, we as a company were all under NDA, all working on this thing. And I'll never forget the first time I saw it, it took maybe 48 hours for us to decide to take every single person at the company and shift what they were working on, from the projects we were working on at the time, to 100% of the company all working on building this new product we called CoCounsel,
based on the GPT-4 technology. How many people was that? We're about 120 people at the time. So you took like 120 people and completely changed what they were all working on. Yes, yes, yes. In 48 hours. Yes. And for the people watching, Casetext originally, I mean, had always been in the legal space. You're a lawyer and you built something for yourself. And, you know, sort of the first versions of it were actually sort of annotated versions of case law, actually. Yeah, that's exactly right. So in the very early origins of the company,
The mission of the company, what we're always focused on is how can we build something that brings the best of technology to the legal space? As a lawyer, I actually like the job a lot.
The parts of my job that I hated the most was when I had to interact with the technology that lawyers have to use regularly to get the job done. I remember thinking, and this is like 2012, when I was at a law firm, if I want to do something really trivial, I had like a new iPhone at the time, I can go on Google and find like movie times or where's the closest open Thai restaurant with vegetarian options. That was super easy.
But if I wanted to find the piece of evidence that was going to exonerate my client and make it so he doesn't have to go to jail for the rest of his life, or the key legal case that will help me win a billion-dollar lawsuit, well, that's going to be like five days in a row until 5:00 AM every day. I was like, there's got to be a better way. What is the process as a lawyer? You would have to read the stacks and stacks of documents? Pretty much, yeah. Right before I started practicing, before everything went virtual or online,
You would literally be in a basement with banker's boxes full of documents, reading them one by one by one to try to find all the emails in a company like Pfizer or Google to see if there was potential fraud. And then if you wanted to find case law, slightly before my time, you'd literally go to the library and open up books and just start reading. And new products were coming out that were some of the first web-based legal research tools.
But they were pretty clunky. It was just hard to find the relevant information. You couldn't do control F for any of this stuff, basically. Basically not, yeah. And what was interesting about your background is you also happen to be the rare breed of having also computer science training. So this must have driven you nuts. Yeah, exactly. I mean, in the law firm, I'll never forget, I was building like browser plug-ins.
to go on top of the tools I was using just to make my life more efficient and effective. And actually, one of the reasons I left the law firm to start a company and apply to YC was I got in trouble with the general counsel who thought like, hey, why are you spending all your time doing this tech stuff
And also made at the time very clear that my law firm owns all that technology. So I decided to do something different. So do you want to tell us a little bit about the first 10 years of Casetext, the sort of like long slog in the pre-LLM era? One of the lessons here, I think, that I took away from that time period was
that when you start a company, you may not get it exactly right. You may have the right kind of general direction, you know, there's a problem you're trying to solve, but it could take a very long time to figure out what the solution is. For us, for example, we saw that there was this kind of combined issue of bad technology in the legal sphere, but also that a lot of lawyers rely on content to do things like research and understand what the law is.
And so we thought, okay, well, we can do the technology better, but how are we going to get this content? And we spent like a couple of years trying to get, as Gary said, lawyers to annotate case law and to provide information. So it was like a UGC site, like a user-generated content site. Yeah, that was a big focus of ours, like the kind of one-two punch of better technology, but also better content. At the time, our heroes were like Stack Overflow and Wikipedia and GitHub and other kind of open source or UGC kind of websites.
And it was a total failure. Like we could not get lawyers to contribute their time and information. And I think these are just different populations. The typical Wikipedia editor has more time on their hands than they know what to do with. And so they're adding, not all, but many do, they're adding content for free
and altruistically. Lawyers bill by the hour. Their time is incredibly valuable. They're always running out of time. They had no time to contribute to some UGC site. So we had to pivot. And we started investing very deeply in what, at the time, was not called AI, just natural language processing and machine learning, and saw that
First of all, we didn't need to create all this UGC to replicate some of the best benefits of what our competitors had in these big content databases. Some of it you can basically do even then on a kind of automated basis.
And then also, we were starting to create these user experiences that were a lot better than what our competitors could offer, based on, at the time, what now seems like kind of quaint AI stuff, like the same recommendation algorithm that powers Pandora and Spotify's recommended music.
They basically look at how this song relates to that song: people who listen to this also listen to this and this and this, right? Similarly, we looked at, okay, cases that cite, you know, other cases, they all reference earlier opinions. You know, they kind of build out this network of citations. And we found ways that we could check a lawyer's work. They'd upload their work so far and we'd be like, well, everybody who talks about this case talks about this case too, and you missed that. So cool experiences like that. But
The truth is, until the very end, until CoCounsel, a lot of what we did was, relatively speaking, kind of incremental improvements on the legal workflow. And one of the things that's kind of weird about this is when there's just an incremental improvement, it's actually pretty easy to ignore. A lot of our clients, they would never say this literally, but you get this impression, you walk into their room, their office, and you try to pitch them a product. You say, this is going to change everything about the way you practice.
And they go, well, I make $5 million a year. I don't want anything to change. I do not want to introduce any technology that has the opportunity to make my life at all worse, or potentially more efficient, because they bill by the hour. It was really only much later, when ChatGPT came out, at the time we were privately and secretly working on GPT-4, that all of a sudden, every lawyer in America, probably in the world,
saw, oh my God, I don't know exactly how this is going to change my work, but it's going to change it very substantially. They could feel it. And the same guys and gals who were telling us, I make $5 million a year, why would I change anything about my life, were now like, I make $5 million a year, this is going to change something, I need to be ahead of this. The technology itself, and we'll get into that in a second, really changed what we could build for lawyers, but also the market perception of
what was necessary really changed as well. And for the first time in our 10 years, even before we launched CoCounsel publicly based on GPT-4, they were calling us, like, we know you work on AI,
we need to get on top of this. What can you show us? What can we work on? And I think it's because the change was not incremental anymore. It was like fundamental. And all of a sudden they had to pay attention. They could not ignore it. I guess the mental model I have for you is there's this concept of the idea maze. The founder goes in the beginning of the maze and they're just like feeling around, like actually in the arena, talking to customers, learning like where are the walls, which path to go? Should I go left or right? Like,
And then, as is actually common for startup founders in the idea maze, you will actually reach a dead end. And then usually you have to pivot. And then I think you have a very interesting story because you were sort of towards the end of maybe like one of the parts that weren't going to get you all the way to product market fit.
But then LLMs dropped, and then it's like the maze got shaken up. And then you were actually much closer to product market fit than absolutely anyone else. And so that's why this is a crazy time. Yeah, I think it's exactly right. That's why you're the first man on the moon. Yeah, I think there's really something to that. And the thing is, each time we progress through that maze,
It felt like maybe now we had product market fit. You know, we were making real revenue before we launched CoCounsel and we had real customers and they said really great things about us. I keep on thinking about this article written by Marc Andreessen in like the early 2000s. I think it's called The Only Thing That Matters.
And in it, he describes what it feels like to have product market fit. He lists things like your servers will go down. You can't hire support people and salespeople fast enough. You're going to eat for a year free at Buck's, the kind of famous Woodside diner where a lot of VCs will take you. And I read that early on in my career. And I was like, okay, well, that's hyperbolic. But when we launched CoCounsel, it was literally exactly that. Our
Servers were going down. We could not hire support people fast enough. We couldn't hire salespeople fast enough. I ate a lot at Buck's. Before, it was a really big day if we were in the ABA Journal or some other legal-specific publication. We were on CNN and MSNBC. All of a sudden, everything changed. And that's what real product market fit looks like. I think Marc was, even in 2005 or whenever the article came out,
exactly right about what it looked like in 2023. Can you talk about that crazy time? Because it was only two months from when you launched CoCounsel to getting bought for $650 million. So what happened in those two months? Well, to be clear, the transaction only closed six months after we launched, but it was two months in that the conversations started. And so we started building CoCounsel. And just for background purposes,
The idea we came up with, again, within like 48 hours, a weekend after seeing GPT-4, and it's something that doesn't sound crazy today but felt crazy at the time, was this AI legal assistant, by which we mean it's almost like a new member of the firm. You can just talk to it.
Not unlike how you might talk to something like ChatGPT today and give it tasks like, "I need you to read these million documents for me and tell me if there's any evidence of fraud happening in this company." And then within a couple of hours, it's like, "I've read all the documents. Here's what the summary is." Or summarize documents or do legal research and put together a whole memo after researching hundreds or thousands of cases, answering the lawyer's initial research question.
And so in that sense, it was this really powerful extension of the workforce of these law firms. That was the concept from the beginning. And we made a very early initial version of it. And we started because we couldn't, you know, under our agreement with OpenAI, we could not be public about this product, but they did let us extend the NDA to a handful of our customers. And so we started having our customers use it.
And so, you know, for months before GPT-4 was launched publicly, we had a number of law firms, like, they had no idea they were using GPT-4, but they were seeing something really special, right? This is actually even before ChatGPT. So this is our first ever experience of
talking to this godlike-feeling, you know, AI that was all of a sudden doing these tasks that would take me, when I practiced, like a whole day, and it's being done in a minute and a half. Right. And so, as you might imagine, it was nuts. I mean, first of all, the whole company, all 120 of us, did not sleep for those months before GPT-4 was publicly launched and we therefore could publicly launch the product.
We felt like we had this amazing opportunity to run far ahead of the market. Something really beautiful happens when everybody's working super, super hard, which is you iterate so quickly. And actually, I still see some companies out there that are stuck where we were in the first month
of seeing GPT-4, right? And I think it's because they're just not as intensely focused and engaged as we were able to be during those about six months or so before the public launch of GPT-4. To do this transition, you had to shake the company. You kind of went into deep founder mode because there was a lot of pushback from employees. It was like, oh, this thing was working. Why should we throw ourselves into the deep end of AI? Oh, yeah.
Tell us about that founder mode moment for you. And so first of all, like this is especially true if you're running a business for 10 years because they have seen you wander through that maze and bump into dead ends. And a lot of those folks have been there for most or all that time watching, you know, me as the founder saying, we're definitely going this direction. It's definitely going to work. And sometimes it doesn't.
And you only get so many of those with employees, right? So this was maybe my last one that I had with some of these folks. And they're like, here Jake goes again with this crazy new technology and some idea we're going to invest deeply in. And yeah, it took some work to convince people. And if you imagine what some of the different roles are, if you're in the go-to-market role, if you're selling or marketing a product,
And we were, you know, growing 70, 80% year over year. We were between $15 and $20 million in ARR. Things weren't terrible. Right, that's great. Yeah, we were doing great. Yeah. But so they were like, what, why are we doing this? Even the board, you know, some of the members got it immediately. And some of them had to be persuaded. Right.
And about the founder mode moment, one thing that really worked for me is I led the way by example. I built the first version of it myself. Even with a 120-person company with a whole bunch of engineers and lawyers and stuff. Before that, you opened up your IDE and actually built the thing yourself. Oh, yeah. And part of it was...
The NDA only extended at first to me and my co-founder. That was it. That was a blessing then, actually. Yeah, exactly. It turned out to be perfect. And even after the NDA got extended a little bit, we kept it pretty small at first for the first little bit of time. I made up my mind within 48 hours that the whole company was going to do this. But we actually only told the company, I think, a week and a half after we first got access. And during that week and a half, we built the very first prototype version of this.
And again, I'll never forget this. The timing was just so funny. Like we saw it on like a Friday. We had it all weekend long. We're working with it. And then Monday was an executive offsite where everybody came, all my executives came and they expected that we're going to be talking about how we're going to hit our sales target for the next quarter. And it's like, guys, we're talking about none of that. You know, we are talking about something totally different right now. Let me show you something on my laptop, you know?
So yeah, I built the first version myself, but going through that process, me and then a handful of other people, I think it was really helpful. And we also brought in customers early, and that helped convince a lot of people. As soon as a skeptical sales or marketing or whatever person, or even an engineer, was on the other end of a Zoom call where a customer was reacting to the product in real time and giving us their honest reactions, and like seeing the look on their face.
And again, you have to imagine, it's almost hard to imagine that the world was pre-ChatGPT, but some of these people were seeing that exact idea for the first time.
And they were just blown away. And that really changed minds quickly. I mean, we saw people go through existential crises live on Zoom calls. You could see their expression change. Exactly. In all kinds of ways. It's like, what am I going to do? A very common reaction amongst the senior attorneys we showed it to was like, well, they got a retargeting suit. I have to deal with this. And some of this was...
really driven by GPT-4 coming out. Like you had access to GPT-3, you had access even to GPT-2, I think. Is that right? We were in a close relationship with a lot of the labs, including OpenAI, and they kept on
showing us stuff kind of early on in its development and they're like, well, can you build something with this for legal? And every time we're like, no, this sucks. Like, you know, by the time we got to GPT-3 and 3.5, it was like, okay, well, this is plausible-sounding English and it sounds kind of like a lawyer. So kudos to that. But yeah,
it is just making stuff up wildly. It's very hard to connect it to a real use case, especially in legal where it's so important that you actually get the facts right. You can't hallucinate. You can't even make the wrong kinds of assumptions. And we had to do a lot of work with those earlier models to even get them close to usable. And they just weren't really... I mean, one totem or one example along the way is when GPT-3.5 came out, the study was run...
And it showed that GPT-3.5 scored in the 10th percentile on the bar exam. So it did better than some people, actually, but only 10% of them, yeah. Probably the ones who were just filling it out randomly, basically. When we got early access to GPT-4, we were like, well, let's run the study again. And we worked with OpenAI; we wanted to confirm the test was not in the training set, and it wasn't. It was a totally new test to it. And on the test we ran, it did better than 90% of the test takers. So that's a big difference.
And also we started running some tests like, okay, here's like four or five cases to read.
Using those cases, write a memo, respond to this question. And we did a lot of prompt work to get it to essentially just do it accurately, to cite the actual things in context that we gave it and not make things up. And we're like, okay, well, this is very different than we saw before. So it's a big moment for us. And honestly, I'm not sure what the mindset was of the researchers we were working with, but it almost felt like by the time we were having that meeting, it felt like one of those other meetings we'd had in the past
where we were getting ready to say, like, this is not going to work for legal, keep on trying. And I think they saw us go through maybe some form of the existential crisis on that call that our customers later did. And we're like, oh wait, this is super, super, super different. I guess, you know, today we have o1, we have, you know, chain-of-thought reasoning. I think a lot of people look at it as it's not merely the text itself but also the instructions that lead up to, you know, the workflow.
But way at the beginning, nobody knew any of this stuff. How did you start? You had your sort of tests that you had written for previous versions of the model. They outperformed. But then there's this moment where you say, OK, well, now it's something. But what do we do next? And how do we do it? So the process that we started with then, and it's actually not too dissimilar to what we're doing today, it started with a question of, OK, well, what problem are we trying to solve for the user? The user wants to do
legal research. And they want, like, a memo answering their question with citations to the original sources. So that's the end result. And then we're like, okay, well, how do we go from that end result, like working backwards almost, what would it take to get there? And what ends up happening a lot with the things that we built for CoCounsel, we call them skills, which felt very unique at the time. I think a lot of companies now call their AI capabilities skills. So when you're building these skills, it turns out it usually takes
a lot of work to go from, say, the customer inputs something, say, a set of documents or a question or what have you, to the end result that they're looking for. And the way that we thought about it was, how would the best attorney in the world approach this problem? And so in the case of research, for example, the best attorney would get the request, say, from a partner,
and then break that request down into actual search queries that run against these platforms. And sometimes they use special search syntax that looks actually pretty like SQL almost, right? So from the English language query, you have to break it down into these different kind of search queries, maybe a dozen different search queries. You're being really diligent. And then they'd execute the search queries against these databases of law. And they come back with, say, like 100 results each.
And then the most diligent, best attorney would sit down and just read every single one of these results that come back, all the case law, statutes, regulations. And you'd start to do things like make notes and summarize and kind of compile an outline of what your response might be. Like line by line or paragraph by paragraph, actually. Yeah, 100%.
And you start like just taking out those like insights you're getting from what you're reading. And then finally, based on all of that work and all the citations you've gathered, et cetera, then finally you put together your research memo. And so we're like, okay, well, each one of those steps along the way for the vast majority of them, those were impossible to accomplish with previous technology, but now they're prompts.
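To make the shape of that concrete, here is a minimal sketch of a "skill" built as a chain of prompts, roughly mirroring the research workflow described above. The `complete` and `search_law` functions are hypothetical stand-ins for an LLM call and a case-law search backend, and the prompts and step ordering are illustrative, not Casetext's actual pipeline.

```python
# Minimal sketch: one "skill" decomposed into a chain of prompts.
# complete() and search_law() are placeholders, not real APIs.

from typing import List


def complete(prompt: str) -> str:
    """Placeholder for a call to whatever LLM endpoint you use."""
    raise NotImplementedError


def search_law(query: str) -> List[str]:
    """Placeholder for a search against a legal database; returns case texts."""
    raise NotImplementedError


def research_memo(question: str) -> str:
    # Step 1: break the plain-English question into targeted search queries.
    queries = complete(
        "Rewrite this legal research question as search queries, one per line:\n"
        + question
    ).splitlines()

    # Step 2: run each query and collect candidate authorities.
    cases = [case for q in queries for case in search_law(q)]

    # Step 3: read every result and keep only what bears on the question.
    notes = []
    for case in cases:
        note = complete(
            f"Question: {question}\n\nCase:\n{case}\n\n"
            "Summarize only the parts of this case relevant to the question, "
            "with pin citations. If nothing is relevant, answer 'IRRELEVANT'."
        )
        if "IRRELEVANT" not in note:
            notes.append(note)

    # Step 4: compile the memo strictly from the gathered notes.
    return complete(
        f"Question: {question}\n\nNotes:\n" + "\n---\n".join(notes)
        + "\n\nWrite a research memo answering the question, citing only the notes above."
    )
```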
Think step by step. Yeah. Think step by step. Yeah, exactly. But we actually broke it down: getting to the final result may be a dozen or two dozen different individual prompts, each of which might, by the way, be thinking step by step themselves. And then for each of those prompts, you know, as part of this chain of actions you take to get to the final result, we had a very clear sense of what good looks like.
And we were able, you know, we had a series, like a battery, of tests before, but this got way more intense, where we'd write at first maybe a few dozen tests, and then a few hundred, and then a few thousand, for every single one of those prompts. So, you know, if the job to be done at the very beginning of this research process, for example,
is taking the English language query and breaking it down into search queries, we had a very clear sense of what good search queries look like and wrote, like, gold standard answers: given this input, this is what the output should look like, right? And so our prompt engineers, and I was one of them at the very beginning, we all just kind of banded together. We wrote the tests first, basically.
And then wrote these English language prompts to try to get it so that, out of 1,200 times, it got the right answer 1,199 times, or what have you. So sort of like test-driven development. Oh, yeah. Really applying the approach from software engineering to prompting. That's exactly right. And the funny thing is, I never really believed in test-driven development before prompting. I was like, oh, the code works. It doesn't. It's fine. You'll see it when you... But with prompting, actually, I think it becomes even more important because of the
nature of these LLMs: they might go in crazy directions unexpectedly. And so you might very easily add in a set of instructions to solve one problem you're seeing with these sets of tests, and then break something covered by another set of tests. And so that exact kind of theory of test-driven development applies 10x more, I'd say, in the world of prompting.
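For a rough idea of what "tests for a prompt" can look like, here is a pytest-style sketch. `run_prompt` is a hypothetical wrapper around the prompt and model under test, and the two gold cases are invented; the point is only that expected behavior gets written down first and the prompt is iterated until the suite passes.

```python
# Sketch of test-driven prompt development: gold cases first, then prompt edits
# until (nearly) all of them pass. run_prompt() is a placeholder.

import pytest


def run_prompt(research_question: str) -> list[str]:
    """Hypothetical: applies the 'break question into search queries' prompt."""
    raise NotImplementedError


GOLD_CASES = [
    # (input question, substrings the generated queries must cover)
    ("Is a liquidated damages clause enforceable in California?",
     ["liquidated damages", "California"]),
    ("Can an employer enforce a non-compete against a remote worker?",
     ["non-compete", "remote"]),
    # ...in practice this grows to hundreds or thousands of cases per prompt.
]


@pytest.mark.parametrize("question,required_terms", GOLD_CASES)
def test_queries_cover_required_terms(question, required_terms):
    queries = run_prompt(question)
    joined = " ".join(queries).lower()
    for term in required_terms:
        assert term.lower() in joined, f"missing search term: {term}"
```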
There are a lot of, sort of, naysayers saying that a lot of companies are just building GPT wrappers and there's not a lot of IP getting built. But actually, there's a lot of finesse to it, from how you explain all of this. Like, can you tell us about all of that and how much more there is to be built? Oh, yeah. I mean, I think the thing is when you're actually trying to solve a problem for a customer and actually doing the job, in our case, like, what a young associate might do, and do it really well.
There are many layers of things you have to add in to actually get the job done. And by the time you like add that all up,
You're not like a GPT wrapper. You're a full application that may include, in our case, proprietary data sets like the law itself and our annotations to the law that we added automatically. It may include connections into customer databases. In our case, in legal, they have these very specific legal specific document management systems. So connecting into those is very important. It may include something as subtle as how well you OCR and what OCR programs you use and how you set those up.
When you're doing that task of, you know, one of the tasks that CoCounsel does, for example, is reviewing large sets of documents.
Once you start working with a lot of documents, you see stuff like handwriting all over them, and they're tilted in the scan. And there's this crazy thing that they do in law where they print four pages on one page to save room. And most OCRs will just read it straight across, but actually the order goes one, two, three, four. So by the time you've dealt with all the edge cases, frankly, not even before you hit the large language model, everything else up to the large language model, there may be dozens of things you've built into your application to actually make it work and work well.
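As one illustration of that preprocessing layer, here is a minimal sketch of handling a four-up scan, assuming Pillow for image handling. `deskew` and `ocr` are hypothetical stand-ins for whatever cleanup and OCR tools are used, and the quadrant ordering is an assumption about how such sheets are laid out, not a description of CoCounsel's pipeline.

```python
# Sketch: OCR a sheet with four logical pages printed on it, in reading order.
# deskew() and ocr() are placeholders; the quadrant order is assumed.

from PIL import Image


def deskew(page: Image.Image) -> Image.Image:
    """Placeholder for a deskew/cleanup step."""
    raise NotImplementedError


def ocr(page: Image.Image) -> str:
    """Placeholder for an OCR engine call."""
    raise NotImplementedError


def extract_four_up(scan: Image.Image) -> str:
    """Return the text of a four-up sheet, one logical page at a time."""
    scan = deskew(scan)
    w, h = scan.size
    quadrants = [
        scan.crop((0, 0, w // 2, h // 2)),        # assumed page 1: top-left
        scan.crop((w // 2, 0, w, h // 2)),        # assumed page 2: top-right
        scan.crop((0, h // 2, w // 2, h)),        # assumed page 3: bottom-left
        scan.crop((w // 2, h // 2, w, h)),        # assumed page 4: bottom-right
    ]
    return "\n\n".join(ocr(q) for q in quadrants)
```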
And then you get to the prompting piece and writing out tests and very specific prompts and the strategy for how you break down a big problem into step-by-step-by-step kind of thinking.
and how you feed in the information, how you format that information the right way. All of that also becomes, like, you know, your IP. And it's very hard to build, and therefore very hard to replicate. Which is all the business logic, which is how even all the very successful SaaS companies with a very specific domain work: you need very, very custom, esoteric, niche integrations, like plugging into this esoteric
law database? Yeah, absolutely. Two things that I think about all the time. It's like basically all SaaS for a while was just like a SQL wrapper, right? Like if you think about like very successful companies like Salesforce, they've built that business logic around basically just databases and connections between like tables and a database.
And sometimes it's bridging that gap between something that a very technical person can do but most people can't, and making it accessible; or bridging that gap between something that almost works and something that really works. You can do a lot of cool demos in ChatGPT without writing a line of code, but that almost works, it works 70% of the time. Going to 100% of the time is a very different kind of task.
And people will pay $20 a month for the 70%, and maybe $500 or $1,000 a month for the thing that actually works, depending on the use case. So there's a lot of value gained in going that last mile, or 100 miles, whatever it is. Yeah. Can you talk about how you went from 70% to 100%? Because I think the other knock on this technology that we hear a lot is like, oh, these LLMs hallucinate too much. They're not accurate enough for real world use. But
As you said earlier, the use case that you're working on is a mission-critical use case. There's a lot at stake if the agent gives bad information to lawyers who are working on important court cases. How did you make it accurate enough for lawyers who are conservative by nature to trust it? This test-driven development framework, first of all, goes a long way because you can start seeing patterns in why it's making a mistake.
And then you add instructions against that pattern. And then sometimes it still doesn't, you know, do the right thing. And then you really ask yourself, okay, well, was I being super clear in my instructions? You know, am I including information it shouldn't see, or too much or too little information for it to really get the full context?
And usually these things are pretty intelligent. And so usually you can root-cause why you're failing certain tests and then build to a place where you're actually passing those tests and just getting it right. And one of the things we learned is that if it passes, frankly, even 100 tests, the odds that it will handle the next 100,000, on any random distribution of user inputs, 100% accurately are very high.
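A sketch of what that failure-analysis loop might look like in code, under the same caveats: `run_prompt` and the failure buckets are hypothetical, and in practice root-causing means a person reading the failing transcripts rather than a few string checks.

```python
# Sketch: run the gold cases, tally pass rate, and bucket failures by a
# suspected root cause so targeted instructions can be added to the prompt.

from collections import Counter


def run_prompt(prompt: str, case_input: str) -> str:
    """Placeholder for running the prompt under test on one input."""
    raise NotImplementedError


def evaluate(prompt: str, gold_cases: list[tuple[str, str]]) -> tuple[float, Counter]:
    """Return the pass rate and a tally of (hypothetical) failure patterns."""
    failures = Counter()
    passed = 0
    for case_input, expected in gold_cases:
        output = run_prompt(prompt, case_input)
        if expected.lower() in output.lower():
            passed += 1
        elif "i cannot" in output.lower():
            failures["refused the task"] += 1
        elif len(output) > 4 * len(expected):
            failures["rambled instead of answering"] += 1
        else:
            failures["wrong answer"] += 1
    return passed / len(gold_cases), failures


# If one bucket dominates, the fix is usually an added instruction, e.g.:
# prompt += "\nAnswer in no more than three sentences, citing only the given case."
```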
One of the things that strikes me that is tricky, like many founders we work with are very tempted to just raw dog it. It's like no evals, no test driven. We're just like vibes only prompt engineering. And maybe, I mean, you switched over to this very quickly then. Like, was it just obvious from the beginning? You're like, we just can't do it that other way. We should not raw dog any of these prompts. Yeah, I think the biggest thing, first of all, depends on the use case.
For a lot of things that we were working on, for better or for worse, there was a right answer. And if you get the wrong answer, lawyers are not going to be happy about it. I had been a lawyer myself, but had also been selling to lawyers for a decade. Every time we made the smallest mistake in anything that we did, we heard about it immediately.
And so I had that voice in my head maybe as I was going through this process. Was that a learning from the 10 years of slogging through the pre-LLM era? You're like, no, it has to be 100%. Oh, yeah. Oh, yeah. It's probably true of way more domains than we realize, actually. It could be. Because another thing that we're thinking about a lot is you can lose faith in these things really quickly, right? You have one bad experience, especially if your first experience is bad.
And you're like, you know, maybe I'll check on this AI stuff a year from now, especially if you're like a busy lawyer, not a technologist. So we knew we had to make that first encounter in the first week really, really work for the lawyer, or else they're not going to invest in it deeply. So let's talk a bit about OpenAI o1, because it is a very different model. I mean, up to this point with GPT-4 and all that previous generation,
the analogy in terms of the intelligence is sort of system one thinking, in the Daniel Kahneman sense, right? He has this whole economic theory and won the Nobel Prize around it. System one thinking is just very fast. It's kind of these decisions that humans make very intuitively, based on patterns. And LLMs are fantastic at that. But they're terrible at the executive function.
Because what I'm hearing, with all the stuff that you're describing, is that you're kind of just giving the LLM the executive function: how should you think about this? How do I manage you? It's really that slower thinking. And I think o1 is exciting. We haven't seen things built yet because it just got announced a few days ago, right? Yeah.
I think it's getting to that system two thinking. And I think this has been a big area of research, which I saw a lot of at NeurIPS a year ago, where a lot of the researchers were excited to unlock this because it's the missing piece on the way to AGI. Let's talk about it: what are your thoughts on o1 and how this changes things? So first of all, I think o1 is a very impressive model.
Like with other things, we gave it the kinds of tests that we knew were failing, and the degree, and it's not just math, the degree of thoroughness, precision, and intelligence applied to some of these questions was striking. And sometimes it's the stuff that you wouldn't expect you need a super smart model to do. Like in one of the tests that we run, we give it a lawyer's real legal brief.
But we edited very slightly some of that lawyer's quotations of the case, to make it a wrong quotation or a wrong kind of summarization of the case. So it's like a 40-page legal brief, and you alter things where just adding a word like "not" can change the meaning of something entirely. Right. And then we give the full text of the case as well to the AI. And we say, well, what did the lawyer
get wrong about this case, if anything? And literally every LLM before that would be like, nothing, it's perfectly right. It's just not a precise thinker about some of the very nuanced things we altered about the brief to make it slightly wrong. And then o1 gets it, like, immediately. Like you said, it actually thinks for a while, like it sits there for a minute, and you're like, is this thing on, you know? But then it starts answering, and
it's like, oh, well, you know, you changed an "and" to a "neither nor." So those are the kinds of tests that you'd kind of expect even, frankly, earlier AI, like earlier LLMs, to be able to pass, but they just could not. And all of a sudden, o1 is even doing these things that take precise, detailed thinking. Obviously, we don't have the internals on how o1 really works. We have, you know, this broad idea of chain of thought. Seemingly, we know that if
OpenAI had a giant corpus of internal monologue of people thinking through doing things step by step, o1 would be even a lot better. It sort of rhymes with
the thing you did to, you know, put your first step on the moon, right? Like, yeah, it rhymes with: break it down into, you know, chunks where you can get to a hundred percent accuracy, instead of just throwing it all in the context window and, you know, maybe magically it will work. Yeah. Do you think that that's what's happening then? I think there's a good shot that they've, you know, maybe changed what their contractors are doing: instead of just doing, you know, input in,
answer out, they're doing input in, how would I think about solving this problem, and then answer out. But then, you know, the interesting thing is it's kind of limited by the intelligence of the people writing those instructions. And one of the things that we're investigating, for what it's worth, with o1 is: can we prompt it to tell it what to think about during its thinking process and inject, again, like, we've hired some of the best lawyers in the country,
how would some of the best lawyers in the country think about solving this problem? And maybe, you know, we have no conclusive evidence one way or the other yet that this dramatically improves things. It's so early, and just not enough time has passed yet. There's a chance that one of the new prompting techniques with o1 is teaching it not just how to answer the question, or what examples of good answers look like, but how to think.
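Here is a minimal sketch of that idea: prompting a reasoning model with guidance on how an expert would think through the task, not just what the task is. The checklist, the message structure, and the commented client call are all assumptions for illustration; nothing here is claimed to be how Casetext actually prompts o1.

```python
# Sketch: embed expert "how to think" guidance alongside the task itself.
# The checklist and model identifier below are assumptions, not real config.

EXPERT_THINKING_GUIDE = """
Before answering, reason the way a senior litigator would:
1. Identify the precise legal question and the governing jurisdiction.
2. Check every quotation against the source text, word by word.
3. Note any assumption you are making and whether the record supports it.
"""


def build_reasoning_prompt(task: str, materials: str) -> list[dict]:
    """Combine the thinking guidance, the task, and the materials into one message."""
    return [
        {
            "role": "user",
            "content": f"{EXPERT_THINKING_GUIDE}\n\nTask: {task}\n\nMaterials:\n{materials}",
        }
    ]


# Hypothetical usage with an OpenAI-style client (names assumed):
# response = client.chat.completions.create(
#     model="o1",
#     messages=build_reasoning_prompt(
#         "Flag any misquotations in this brief.", brief_and_case_text
#     ),
# )
```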
And I think that's another really interesting opportunity here is injecting domain expertise or just your own intelligence. I'm just so thankful because I think
you're sort of sharing the breadcrumbs, and there are a great many other spaces where this technology is just beginning. I mean, you go to pretty much any company, people have no concept of what's just happened. They actually literally still repeat all of those sort of tired tropes of, oh, you better be fine-tuning, all these things that are just not connected to
what we're seeing day to day with startups and founders trying to create things for users. What I'm kind of glad for is that we get to actually share this news, like, this knowledge, because even the things we talked about, you know, hey, you should probably do evals. Like, there's a lot of alpha in getting to 100%, not just 70%. These are sort of the breadcrumbs that will actually go on to create
all of the billion dollar companies, maybe thousands of them, actually. We hope so. I mean, I think that you're starting to see a lot of other fields like law really level up when you don't have to spend, you know, millions of dollars in six months, literally in a basement reading document by document by document, right? When you actually can just get past that and get just the results. Now you're thinking strategically and intelligently. And the unlock for these companies, I mean, they currently pay, again, millions of dollars in salaries for these jobs to be done.
Each of them. Right. So for any company to come out with an AI that can do even 80% of that, the value is really there. And I just want to encourage people not to kind of give up based on those tropes, right? Like, oh, it hallucinates too much, it's too inaccurate, it's too whatever. If we're an example of anything, it's that there's a path.
And you can do it. And there's some good news in that, you know what? The jobs aren't going to go away. They'll just be more interesting. That's what I think. Yeah. Well, with that, we're out of time. But Jake, thank you so much for being with us. Thanks for having me. See you guys next time.