Industry expertise helps map out the most valuable use cases and understand how they synergize, which is crucial for creating compounding value. Technical AI knowledge alone is insufficient; integration with enterprise data, workflow design, and change management are equally critical for adoption and success.
Nikhil co-founded a company in 2013 using convolutional neural networks for low-cost malaria detection. He authored a textbook on deep learning and worked with mentors like Jeff Dean and the early OpenAI team, which shaped his conviction about AI's potential in healthcare.
Nikhil grew up with chronic heart conditions and saw his parents struggle with the U.S. healthcare system, which solidified his commitment to working in healthcare and improving the system through technology.
Remedy Health faced technical limitations, market readiness issues (e.g., consumer reluctance to adopt virtual care before the pandemic), and the complexity of building a medical practice from scratch, including hiring doctors and navigating insurance systems.
The shift was driven by the rapid advancements in AI, particularly the transformer architecture, which made previously intractable problems in healthcare seem solvable. Ambience focused on leveraging these advancements to create a better-integrated platform for healthcare institutions.
Only about 25-27% of a clinician's day is spent on direct patient care, with the rest consumed by administrative tasks like documentation, coding, and prior authorization.
Ambience focuses on solving specific pain points like medical documentation, which clinicians spend a quarter of their day on. They aim to automate these tasks to free up more time for direct patient care, ensuring the AI solution is robust and integrates seamlessly with existing workflows.
Off-the-shelf models often hit performance ceilings quickly due to the complexity and esoteric nature of healthcare knowledge, which is typically passed down through apprenticeship rather than public domain information. This makes fine-tuning and domain-specific models essential.
Founders should deeply understand their industry, map out valuable use cases, and ensure their team has the right mix of ML expertise and industry knowledge. Collaboration between product managers, engineers, and domain experts is crucial for building effective solutions.
Ambience recognizes that not all clinicians will adopt new technology immediately. They focus on creating nuclei of success within institutions, where early adopters champion the technology, gradually expanding its use while addressing any product limitations promptly.
Coincidentally, while building in healthcare, we somehow found ourselves literally in the middle of this most recent wave of AI progress. We ended up becoming one of the first teams to catch wind of the transformer architecture. We put it in production across a wide variety of use cases and learned firsthand all the challenges that come from trying to put transformers in production. And then from 2017 to 2020, several of our closest friends, some of whom were actually our literal housemates, proceeded to work on some of the most important projects inside of OpenAI and subsequently Anthropic, from PPO to the Scaling Laws paper to GPT-2 to GPT-3 to RLHF. And across the span of those three years, as you hear your closest friends talk about all the experiments that worked and all the experiments that didn't, we just started to see a lot of the puzzle pieces come together.
And a lot of the problems that we previously were trying to solve for our own clinicians, where it felt like we were hitting hard technology ceilings, started to feel like, hey, they might be tractable today or over the next couple of years. Hi again, and thanks for listening to the A16Z AI podcast.
I'm Derek Harris. And this week, we have a discussion between me and Nikhil Buduma, who's the co-founder and chief scientist of an AI-powered healthcare company called Ambience.
Ambience aims to improve the lives of both clinicians and patients by producing detailed reports across a number of medical specialties and tasks. Prior to Ambience, Nikhil co-founded another startup called Remedy Health and also authored the Fundamentals of Deep Learning book for O'Reilly in 2017, as well as a revised version in 2022. In this discussion, we talk through all of this, beginning with his childhood experience with chronic heart conditions that cemented his commitment to working in healthcare early in life.
But we also frame the discussion through the lens of advances in AI, to which Nikhil had a first-row seat. Google AI leader Jeff Dean was an early investor in both of Nikhil's companies, and he was roommates with some of the early OpenAI team who led the research on some important advancements in the field.
So stick around for an enlightening discussion about how changes in artificial intelligence have reshaped the way companies are built, how teams should think about adopting and applying AI models in their own products, and some best practices for building lasting companies in complex vertical industries.
As a reminder, please note that the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund. For more details, please see a16z.com/disclosures.
I actually started off my career falling into healthcare first. I was a pretty sick kid growing up. My parents immigrated from India. Didn't really understand how the healthcare system here worked. I had several heart defects when I was born. Ended up with a pretty complicated recovery process since then. And so I was constantly inside of the healthcare system as a kid. My parents ended up running into a lot of financial issues, sort of trying to just manage the cost of my care. So
For me, it was a foregone conclusion growing up that I was going to be in healthcare in some way, shape, or form. And so I was on the traditional track of becoming an MD-PhD when the first exciting wave of machine learning struck. I spent about eight years doing research at Soundstate and Stanford University. And in 2009, across campus,
there was a group run by Andrew Ng in the computer science department. They were basically showing how they could use GPU compute to accelerate these large machine learning models. I think they had trained a 100 million parameter deep belief net that would normally take them weeks to train and got it to actually work in a single day. I didn't really fully appreciate it then, but over the next...
couple of years, we started to see a series of breakthroughs that I think kind of gave rise to what we call deep learning today, sort of culminating in 2012, GPU compute being applied to convolutional neural networks and sort of the leapfrog that AlexNet had on all the ImageNet benchmarks.
And so, being in biology and in medicine in parallel, I personally just became incredibly interested in what these techniques meant for medicine. I ended up co-founding a company. It's working in a different part of the healthcare stack now. But back in 2013, we were trying to use ConvNets to power these devices that would do low-cost malaria detection in the field. And then I started to write a series of blog posts
with the goal of teaching more researchers in biology and medicine how to use these techniques for their own work. And eventually those blog posts got picked up by O'Reilly. It turned into one of the early textbooks on machine learning and deep learning methods. And so that's sort of how I fell into machine learning and AI. I ended up meeting my co-founder, Mike, at MIT around this time. And he has his own incredible story sort of navigating the healthcare system. He fractured his back.
He was told he would never be able to walk again because he was originally misdiagnosed with a sprain when he actually had a fracture. We were just obsessively thinking about the intersection of machine learning methods and healthcare, and spending a lot of time with some of our close mentors at the time. Jeff Dean has been an investor in both of our companies over the last decade, but back then he was building the early days of Google Brain and Google Health.
And then while I was in grad school, I got the call from Sam and he was at that point in time building the early days of OpenAI. They were still a hodgepodge of researchers building out of Greg's apartment in San Francisco.
I think it was obvious to us that the tech at that time was really early. But if you think back to all the major technology trends that were in the zeitgeist at that time, I think both Mike and I just felt incredibly strongly that if we could create highly capable, general purpose AI models, that would have the single greatest impact on how we work as a society. And we didn't know if it was going to take five years or 10 years or 20 years for all the pieces to come together. But when they did,
based off of our personal experiences with the healthcare system and what it has meant for a lot of our loved ones, we just felt like healthcare and medicine were going to be one of the most meaningful opportunities for this class of technologies.
Early on, how did those mentors and those relationships shape what you built, and how you even thought about going about building a company? I think probably the biggest takeaway from working with them was cementing conviction that this is a space worth working on. With a lot of early technology trends, it's not always obvious that you've started to focus on them at the right time.
You look around and all the smartest people you know are betting their entire careers on this technology trend. You can kind of see the threads play out in your mind, but seeing the smartest people around you decide to double down really helped shape conviction. One big lesson we got from both Jeff and Sam and the early OpenAI team was basically that we shouldn't be afraid to take a really, really big bet. A lot of things that may not seem tractable today may very quickly become tractable over the next three to five years. And so focus on problems that feel valuable and think about what the world could be, as opposed to being hyper-fixated on what's precisely possible today. How would you describe the state of AI in 2016? What did it look like, and how did that shape what you ultimately built?
Well, I think the space has definitely evolved at kind of a breakneck pace for the past decade. Just to give you a bit of a sense, I ended up starting to write the book around 2014. So it took about two to three years, from 2014 to 2017, to write the first version.
And even in that timeframe, I ended up having to rewrite major portions of the book several times because the field was just moving that quickly. Something you'd written six months ago, you'd realize was no longer actually relevant. One example: there was a huge section in the book previously dedicated to deep belief networks. And in the span of 12 months, the field basically decided, nope, we're not really building off that architecture anymore.
I think what's similar between today and 2016 is that we're still using deep learning architectures that benefit heavily from increasing scale in compute and increasing scale in data. It's still valuable to put lots of effort and thought into curating high-quality, use-case-specific datasets. And what's still really important is this whole field of machine learning operations, or MLOps: infrastructure that's been designed to
measure and monitor model performance to define what quality really means and then be very thoughtful about measuring it across all parts of the machine learning lifecycle. I think all those pieces are still the same.
But back in 2016, the barrier to entry was just incredibly high because you didn't have very many high-quality base models. So most people had to train models from scratch. And most of these model architectures suffered from a phenomenon we now call catastrophic forgetting, which means that once you fine-tune them for a specific use case, they forget how to perform on the things they'd previously learned. So you ended up having these models that were built for very narrowly defined use cases.
It wasn't that machine learning models were not successful in production back in 2016.
There are many, many examples of machine learning products that were incredibly successful. But the kind of team investment and capital that had to go into getting from concept to research prototype to production in the field probably took 10 to 100x the resources that companies would have to put in today to make their applications intelligent. Yeah, the applications you heard about, the teams you heard of deploying machine learning in production, were huge names with huge teams and huge budgets.
Like they'd been working on this stuff for a long time. Given all that, what did you end up building with Remedy Health? Like what was the product you were trying to sell or that the company is still trying to sell, I guess?
Mike and I ended up starting Remedy with two other co-founders, two of our close friends back in 2015. There was an opportunity to leverage technology to build a new kind of care delivery asset. We could use technology to find ways to deliver higher quality care at a lower cost with a better experience for everyone, for patients, their families, for physicians, for nurses, for everyone.
For first-time founders, one of the scariest parts about building for a vertical like healthcare is that from the outside, the industry just feels incredibly complex. And I think this is especially true for a space like healthcare because it actually involves becoming an expert in three different areas. There's the science of medicine: how the human body works, what happens when things go wrong, how a doctor actually diagnoses, treats, and manages a patient. So you've got to be an expert in the science.
And then there's the modern day system and practice of health care, which is everything from how is the health system organized? What are the functional building blocks? Why are those building blocks important? And then there's the economics and the business of health care. What's the relationship between government, insurance companies, private payers, health systems, and how each part of the health care system makes money? You have to actually deeply understand all those pieces to even begin to build something useful in health care.
For us, Remedy was a crash course in all of this, because to build a medical practice from scratch, we had to learn how to hire doctors. We had to implement a medical record. We had to learn to work with insurance companies. And what's exciting is that despite having to go through that crash course, it enabled us to build things that would not have been possible otherwise. So we ended up creating this whole range of technologies for our clinicians to help improve the care delivery model.
And structurally, because we were delivering care and actually hiring clinicians, we were our own customer for all the technology we built. So in the early days, we built these chatbots to talk to patients ahead of the visit and accelerate intake and documentation. Obviously, those were incredibly primitive compared to the kinds of chatbot experiences we're talking about today. We also created an AI-powered tool to guide nurses to conduct more intelligent phone screens. So we'd surface questions they should ask,
record down the answers, and figure out, based off of the information they'd collected, what's the right next best question to ask. The goal was to help them identify undiagnosed, untreated chronic disease in the patients we cared for. And then we actually even put transformers in production in 2017. We were ingesting claims data from CMS, which is essentially data shared between a health system and the insurance company to describe the treatments being prescribed and the visits that were happening. And we were using those streams of claims data to predict the risk of preventable hospitalization,
which is a huge driver of cost in the healthcare system. So that's the high-level story of what we ended up building. I think it was very difficult for a number of reasons. Part of it was specific to us: we were first-time founders learning how to build a great company for the first time. Part of it was timing from a technology perspective: for a lot of the things we wanted to build, there were just hard technical limits once we hit the R&D frontier, especially for some of the most valuable problems we wanted to solve for our clinicians. There was also a market readiness problem. Before the pandemic,
consumers were not particularly ready for virtual care and telemedicine. And so there was an uphill market education problem. But I think, you know, all in all, despite it being a challenging company to build, I think it forced us to build a really deep understanding of all the messy internal guts of how healthcare is delivered. I don't think ambience exists today without going through the experience of trying to build remedy.
I'm going to point to IBM Watson as a victim here, but you guys seemed to say, "Hey, listen, we need to actually build this thing from scratch," whereas it seemed like at that point, AI was just mature enough that some companies would sprinkle it on like magic dust and be like, "This will revolutionize everything." But it never panned out. You're right, part of it is definitely technology readiness. The story behind IBM Watson is probably way more complex, in that it's a combination of picking the wrong sets of problems to focus on.
At that time, their oncology products were incredibly focused on clinical decision support. But I think for most oncologists practicing in a research institution, the bottleneck isn't making the right decision. The bottleneck is actually getting all the right information together to make good decisions.
And if you spend too much time optimizing the actual algorithms around decision making, and not enough time thinking about the connective tissue with the rest of the infrastructure that makes sure you're ingesting the right information in the right parts of the workflow to even be able to make an impact on the ultimate outcome, what ends up happening is you build something that doesn't actually achieve any kind of adoption. And I think that was the big problem for IBM Watson: focusing on the wrong problem. And part of it just might be that it's very easy to get fixated on the high-level story of, hey, we're building technology to cure cancer. But it turns out there's a lot of
devils in the details. And I think in some ways being a customer for technology ourselves, we had to face the brutal reality of what was it our clinicians needed, even if sometimes it wasn't necessarily the most sexy or attractive problem to work on. So ultimately then what led you to kind of walk away from Remedy Health and launch Ambience? Again, with Mike, it sounds like as a co-founder. So what was kind of the catalyst and the shift there?
I think in the process of learning about how healthcare is delivered firsthand, there were a couple of interesting threads happening in parallel. Coincidentally, we somehow found ourselves,
while building in healthcare, literally in the middle of this most recent wave of AI progress. Jeff, as I mentioned, was an investor across both of our companies, so we ended up becoming one of the first teams to catch wind of the transformer architecture. We ended up putting it in production across a wide variety of use cases and learned firsthand all the challenges that come from trying to put transformers in production. And then from 2017 to 2020, several of our closest friends, some of whom were actually our literal housemates, proceeded to work on some of the most important projects inside of OpenAI and subsequently Anthropic, from PPO to the Scaling Laws paper to GPT-2 to GPT-3 to RLHF.
Across the span of those three years, as you hear your closest friends sort of talk about all the experiments that worked and all the experiments that didn't, we just started to see a lot of the puzzle pieces come together. A lot of the problems that we previously were trying to solve for our own clinicians that felt like we were hitting hard technology ceilings started to feel like, hey, they might be tractable today. They might be tractable over the next couple of years.
We needed to build a team that was focused on taking what would be these increasingly capable general purpose reasoning models and fine-tuning them specifically for healthcare and medicine to be thoughtful about all the right safety guardrails and then figure out all the rest of the puzzle to bring them successfully into the clinic.
What if we could take all the tech we were trying to build at Remedy, but build the second version of that platform and then focus our energy on making that the best in class platform and partnering with healthcare institutions that are already world class at delivering care to be able to do that more effectively?
We were just coming to increasing conviction that healthcare really needed the help of AI. This might be familiar for some of the audience that understands healthcare deeply, but for folks coming from outside of healthcare, as a country, we're in a tough spot. Healthcare accounts for 17.3% of our GDP. There's increasing levels of pressure on the system.
The most recent estimate is that there's 11,000 seniors aging into Medicare every single day. And so we're expecting a national shortage of something on the order of 125,000 physicians over the next decade. You want to take a guess, Derek, how much time a clinician actually spends? What percentage of a clinician's day is actually spent on direct patient care? My experience is going to be very minimal. I'm going to go with about
It's a little bit better than that. It's more like 25 to 27% of a clinician's day that's spent on direct patient care. But that doesn't feel like a recipe for success given that macro context, right? And what we saw for our clinicians, and with a lot of our peer organizations, is that the other 73% is basically administrative burden: writing documentation, digging through the medical record to find information, coding and billing, which is essentially how insurance companies decide how much a health system gets paid for the work they're doing, something called prior authorization, which is how insurance companies decide whether a patient should get access to a treatment or a diagnostic, and utilization management, which is how health systems make decisions about where a patient should be receiving care. Should they be in the medical ward? Should they be in a skilled nursing facility? Should they be in the observational unit? These are all things that physicians didn't go to medical school to do.
What if we could leverage AI to flip that number? So instead of spending 27% of their day on direct patient care, we could use technology to make that more like 73% or more of their day spent on direct patient care. We ended up focusing on the problem of medical documentation to start. That was a challenge where our clinicians were probably spending a quarter of their day summarizing their conversations and all the decisions they were making during the visit, and making sure it was entered in the medical record. It was probably the single most painful part of their job, one they complained about every single day. We never got to a place where we built something that worked for documentation when we were building Remedy.
And we considered partnering with a number of other technology startups and big tech companies that were trying to build technology that worked in this space. And nothing that was ever shipped during the time we were building Remedy actually moved the needle for our clinicians. Not only did we see our clinicians experience the problem, but at the start of the pandemic, there was actually a massive market for
essentially hiring human scribes to follow doctors around and take notes for them. So we spent a ton of money on labor. I think we were spending $4 billion a year as a healthcare system to employ 100,000 human scribes to go do this. And so we felt like, look, if we could build the system that listens to the conversation, infers the underlying decision-making that's happening in the visit, and automatically syncs all this information, all the structured and unstructured data, with the EHR on behalf of the physician, so they're not typing and clicking and multitasking during the visit and catching up on documentation afterwards, that was going to be the right initial wedge to become an indispensable part of the clinician's workflow. That being said, as you can imagine for a use case like this, quality is mission critical. And if you don't get to a certain level of quality, physicians just don't adopt. They abandon the product very quickly.
And so it took us a couple of years to get to a level of quality where we felt like, hey, this is something that a clinician is going to use every single day for every single visit. But once we got that wedge, we then leveraged that as an opportunity to start to expand sort of the platform offerings.
From a technology standpoint, it seems like 2017 to 2020 was right on the cusp of this stuff really starting to take off, if you were paying attention. We were fundraising for our seed round back in 2020, and most investors would not take a call with us. They thought what we were pitching was so far-out science fiction that we didn't even get most first meetings.
So you mentioned this book you wrote for O'Reilly and kind of the state of the world, let's say 2016, 2017. But then you co-authored a second edition of that book, which was published in 2022 at least. Aside from the technical advances that everyone in the industry saw, as a founder or as a company even, how do you think about keeping up with the state of the art? Like if you have a product that seems to be working...
And there are just these new techniques, new architectures, new ideas coming out on a weekly, monthly, sometimes seemingly daily basis. How do you keep up with that? And how do you know when to leave well enough alone, or when to hit the gas on something?
I can speak to our personal experience, which is that a lot of the kinds of tasks that are important to us are at the border of what machines can versus cannot do today. As a company, we actually obsess over the frontier.
And that's a combination of both what's being published and what people are actively sort of talking about on the internet, as well as sort of what are some of the best researchers in this field thinking about and how do we anticipate the field is going to move over the next 12 to 24 months? Because the kind of R&D investments that we make ourselves have to be complementary not only to the state of the art today, but how we expect that frontier to shift over the next 12 to 24 months.
That being said, you know, if the techniques that are available are serving you well for use cases today, you can operate at a level where you just say, look, the foundation model makers are going to continue to make the next generation better and better and better. My goal actually is just deeply understanding my users, my customers, their business, and continuing to build out the rest of the connective tissue around the product that's required to make sure that as those models get better and better and better, I've got the chassis to then deliver it.
But I think it really depends heavily on the kind of use case. I obviously have strong opinions as to what companies are going to end up becoming valuable when all is said and done. But I think that's probably the lens I would look at that question. Yes, let's dig into that. I mean, because it sounds like maybe the difference is, are you building a vertically focused application or company versus are you building a horizontal? Are you building what you might call an AI company versus a company that uses AI?
I think in some ways, between building an AI company and building a traditional enterprise SaaS company, there are a lot of parallels still, right? First, you've got to pick the right set of use cases to go after. Figure out what is that burning, hair-on-fire need that people are going to be able to allocate budget to today. Two questions we oftentimes ask internally that help us solve that are: one, where do people whose time is expensive end up wasting or spending a lot of their energy and attention? That tends to be a really good place to look. And two, where is there inconsistent quality in work product, where if you could actually fix that problem and get to consistently high quality, that would have high economic value? If you're solving something that fits one or both of those shapes, it can likely be a really high-value use case.
And then once you've discovered a high-value use case, the next question is: how do I make sure the models perform well on that use case? And you're going to have a range of companies, from "the off-the-shelf GPT-4 models just knock it out of the park" all the way through to "the models have no idea where to even begin on this problem." And that dramatically changes the shape of the team that you need to build.
And then I think you cannot overstate the importance of integrating with the right sources of data to make sure the models are able to reason over the problems that matter, and of nailing the design, the user experience, the workflow, and the change management. You also have to build out a strong delivery muscle, because chances are you're changing the way people work, and that means your customers may need more support than you expect to actually realize the full value of what you're building. And so my sense is that a lot of these machine learning and modern AI companies are going to be investing more heavily in the delivery muscle. It's probably familiar from a lot of enterprise SaaS motions, but I think it is especially important for machine learning and AI companies. So all those pieces actually do have to come together. And if you've got to invest heavily in the model stack, this might be a big, complex company that you're building.
How are you thinking about foundation models, and I guess maybe LLMs specifically? Do you use a proprietary model? Do you use an open-source model? Is it small models? Is it fine-tuning? Is it a mixture of experts? How are you going about saying, this is our problem and this is how we're going to adapt these models to deliver on it? I think there's a lot to learn from the first wave of copywriting companies that took advantage of the generative AI trend.
And I think there's probably two big takeaways for me kind of observing the last 24 to 36 months as an outsider for that sector. The first is just avoid the blast radius of foundation model makers with everything you've got. The reality is like if it's easy for them to build, they will build it. And the number of startups that we've seen get trampled by better versions of chat GPT over the last two years.
I think it's a bad idea. If a foundation model maker is likely to build it, don't build it. The second piece, which is a little bit more nuanced: if off-the-shelf models are really good at the task today, or you expect them to be good at it within the next one to two model generations, it's much more likely that incumbents win in that scenario, especially if they've got a reasonably good product and software-building muscle. If your incumbent is Salesforce or Microsoft,
chances are they're going to capture the vast majority of the value, because they can easily replicate what you've built and they've already got the built-in distribution. They don't have to be the first mover. They just have to catch wind of what's working for a customer segment and replicate it, which is easy if you're just plugging off-the-shelf APIs into an existing UI/UX framework. And then they've got the built-in distribution to out-market and outsell you.
I think that's partially what makes Ambience interesting. For one, off-the-shelf models just do not work particularly well in healthcare and medicine. You hit a performance ceiling very, very quickly. It's easy to create a demo, and so there's a lot of buzz in our space, because a lot of companies are basically taking off-the-shelf models and building impressive demos. But there's a big difference between something that works well over a Zoom call in a controlled setting and something that works robustly in practice, given the real-world messiness of medicine and healthcare.
A lot of this expertise is actually passed down through an apprenticeship model. So there's not a lot of public-domain information to ingest, and foundation model makers aren't naturally going to get really good at improving models along the axes of quality that matter to us.
I think the way we think about it is that we have always needed to, and will continue to need to, invest very heavily in taking the most capable models that exist today and turning them into the most clinically capable models, to be able to build the kinds of products and solve the kinds of use cases we want to go after.
Our customers are actually really complex, and I think that's for two reasons. The first is that when you work with a health system, it's home to a really vast variety of highly trained specialists, operating across a variety of different care settings. And it turns out every single one of these specialty areas is distinct. A good way to think about it:
What happens in a primary care physician's office is very, very different from the workflow in an oncologist's office. If you ever end up in the emergency room, you'll notice that workflow is completely different from what's happening in primary care. And that's also incredibly different from getting admitted to the hospital, staying in the medical ward for multiple nights, and being seen by many different clinicians.
All of a sudden, that's also super different. The medicine being practiced in every specialty area is different. The workflows in the EHR are different. And the service line economics, meaning how each of these business units actually makes money, are inherently different. What that's required us to do, as we move from one setting of care to another, is invest incredibly heavily in the model stack
to be able to build products that are high quality enough to be adoptable in the workflow. The models we originally trained for primary care needed a lot more work before they were deployable in an oncology setting, and a lot more work again before we could deploy them effectively in the emergency room. So we had to put a lot of effort into that layer.
The second piece is that health systems are notoriously low-margin businesses, and those margins have been compressed over time. We're at a point in the macroeconomic cycle where, for most of these health systems, if there isn't a clear impact on the P&L, if the CFO can't see the financial impact realized within weeks or months, then it's really hard for the health system to sign off on a multimillion-dollar technology investment. So there's an entire additional layer of knowledge around coding and billing that we basically had to invest in, so that these models don't just work well in the clinician workflow but also create enough enterprise value that we actually have a business case with the customers we work with. Any space where you have esoteric knowledge and where quality really, really matters, which is common in regulated industries, tends to be a really good place to invest heavily in the model layer.
As you think about it, should you build a foundation model company?
My guess is that's probably the wrong answer for the vast majority of organizations. Unless you have no choice, say you're building foundation models for biology to ingest protein and RNA sequences, it generally feels like a bad bet to invest too deeply in the foundation model layer. But I do think this idea of the vertical company, with deeply integrated app-layer investments, thinking about how to bridge the gap between these esoteric, highly regulated industries and what off-the-shelf models can do, that's where I think the most valuable companies are likely to be built. So without giving away any trade secrets, are you fine-tuning existing foundation models within the various areas Ambience is trying to tackle? Or, given the specificity of what you're doing, did you actually go out and build these models yourselves?
We've had to do the full range of things, and depending on the use case, some have worked better than others. All right, I'm curious too: how do you think about building out a team to tackle these problems? I think this is a great question. I'll answer it in three different ways. The first: if you're a founder today trying to build a company, and you believe that the most valuable companies are going to fall out of some level of vertical integration between the app layer and the model layer, this next generation of incredibly valuable companies is going to be built by founders who've spent years obsessively becoming experts in an industry.
I would recommend that someone actually know how to map out the most valuable use cases and have a clear story for how those use cases create synergistic, compounding value as you solve them increasingly in concert. I think the founding team is also going to have to have the right ML chops to build out the right live learning loops, and the MLOps loops to measure and close the gap on model quality for those use cases. And then, as we talked about, the model is actually just one part of solving the problem. You need to be thoughtful about the product, the design, and the delivery competencies, to make sure that what you build is integrated with the right sources of enterprise data and fits into the right workflows in the right way. And you're going to have to invest heavily in change management to make sure customers realize the full value of what they're buying from you.
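To make the "measure and close the gap on model quality" idea concrete, here's a minimal sketch of what such a measurement loop can look like. Everything here, the function names, the similarity metric, and the sample notes, is a hypothetical illustration rather than Ambience's actual stack: it scores model-drafted notes against clinician-edited "gold" versions, so low-scoring use cases flag where more model work is needed.

```python
# Hypothetical sketch of a model-quality measurement loop: score how much of
# the clinician's final note a model draft already contains, and aggregate
# per use case to decide where to invest. Not a real production metric.

from difflib import SequenceMatcher


def edit_agreement(draft: str, gold: str) -> float:
    """Word-level similarity between a model draft and the clinician-edited
    gold note; 1.0 means the clinician changed nothing."""
    return SequenceMatcher(None, draft.split(), gold.split()).ratio()


def quality_report(pairs: list[tuple[str, str]]) -> float:
    """Average agreement across (draft, gold) pairs for one use case.
    Low averages flag specialties or settings that need more model work."""
    scores = [edit_agreement(draft, gold) for draft, gold in pairs]
    return sum(scores) / len(scores)


# Hypothetical example pairs: one draft close to gold, one needing heavy edits.
pairs = [
    ("patient reports mild chest pain", "patient reports mild chest pain at rest"),
    ("follow up in two weeks", "start lisinopril and follow up in one week"),
]
avg = quality_report(pairs)
```

In a live learning loop, the clinician edits themselves become both the evaluation signal and potential training data, so a metric like this can be tracked per specialty over time.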
That's all actually way more important than people realize. I do think that there's something really interesting about building at this point in time. In some ways, the world is changing more rapidly today than potentially any other time in technology history. I think most people will probably say that the last time we saw anything quite like this was the proliferation of the internet.
And I think one consequence for founding teams is that you're navigating a crazy amount of uncertainty and ambiguity. So when you think about the teams that are going to succeed in this environment, it's going to be the ones disciplined enough to keep track of how the world is changing, with the internal aptitude to respond, to shift strategy as new information arises, and to challenge some of the assumptions that have defined how they've built.
I think part of what makes this challenging is that as companies scale in size, they get less and less agile. The question you're going to have to ask is: how do you do more with less? How can you have an outsized impact as a small team? Because keeping your team small may be critical to survival and to preserving that level of agility. All these AI productivity tools that are available today, you can not only build and sell them,
you should likely also be a massive consumer of them as a company yourself and use them for company building. In fact, if you walk through the Ambience office today, my guess is that every single person, regardless of function, has some sort of AI productivity tool open on their computer. That could be ChatGPT, it could be Claude, it could be Cursor, it could be something else. And my guess is that they're using these tools multiple times an hour,
sometimes multiple times every ten minutes, to do things that would otherwise take them 2 to 10x longer. Which is kind of exciting, because leaning on smaller teams might not just be a relative competitive advantage anymore. It might be critical to long-term survival and success for a lot of these companies, given the way the world's moving.
That's interesting, because if you look back even at the COVID era, everyone was hiring like crazy, hiring engineers like mad. And now, in confluence with that, generative AI and LLMs take off. And the idea is not only are you hiring less, it sounds like you might not need to hire as much. Like you said, that's a shift in how we think about scaling companies.
Another thing that might be useful for vertical AI and vertical SaaS builders: if you think about these vertical AI companies in more esoteric, regulated industries being among the most valuable places to build, I think it requires a very different kind of team to go solve these problems.
Most of the problems we solve require combining the expertise of multiple people to actually go solve the problem, which means that our product managers and our engineers have to collaborate incredibly closely with physicians and coding experts and nurses and vice versa. And so we've had to be very thoughtful about when you hire a new PM, when you hire a new engineer, how do you ramp them up in all the healthcare context?
And when you hire a domain expert coming in from healthcare as a clinician or a nurse or a professional coder, how do you actually teach them all the technology and product management concepts so there's a shared language by which people can actually collaborate and build products together? That's something that took us a couple of years to crack. And it's a combination of actually hiring the right people who are excited to work in an environment like this, as well as arming them with all the knowledge and the tools to be successful in an environment like this.
I think this is actually a non-trivial competency for companies to build in their culture and will likely be the reason why some companies make it and some companies don't. And then I think on top of that, then you need to create the right kinds of partnerships with customers
such that if you're trying to build products in these spaces, being able to go from concept to prototype to deployment, and then iterate very, very quickly, becomes the name of the game. A big part of what we've had to invest in at Ambience: when we first worked with our initial set of customers, they took a leap with us. They didn't really know if it was going to work or not. And we had to
work our butts off to basically show them that something that they thought may not be possible is actually possible. And it's only possible by working together. I think one of the magical pieces that we've unlocked over the last couple of years in doing this is now all of a sudden you look at these institutions like John Muir and UCSF out here in the Bay Area. There's Memorial Hermann down in Houston, Texas. St. Luke's up in Boise, Idaho. They're now deploying these technologies across the entire system.
You sort of experience this transformation as a health system. And these early customers,
they're believers now, and they want to build the future together with us. That's really special, because you've got the right team collaborating closely together, one that marries the best of what the best PMs, the best machine learning engineers, the best clinicians, the best nurses, and the best coders in the world can build together. And then you create an environment where these health systems are saying, hey, let's go discover new use cases together, let's go prototype, test, and iterate together. These design partners,
in concert with the team that you've built, I think that's the winning recipe for building a successful AI platform company. And I think there's something there for other companies in our space to take as a blueprint. Yes. When you have those teams and you can get in with customers, with people who come in and say, we've done this before, we know the space, we know what your problems are like, and who work with them collaboratively, that's a much better solution than handing them a technology and saying, you'll figure it out, it's AI, right? All your problems solved. Yeah, I think that's a fantastic point.
At some point, humans are stubborn. Whether it's doctors or lawyers or developers, pick an industry, people do not want to give up control or cede too much of their autonomy over their job. So I'm wondering, where is that line? Going forward, how much of the job can you actually bite off before you hit that pushback? It's a good question. I think part of the recipe for getting this right is recognizing that
perfection may not be the right target for now. So expect that there might be a handful of users at every institution you go to who just say, you know what, I'm retiring in a couple of years. It has nothing to do with anything that you're working on. Like, just let me be. I'm going to continue to work in the way that I work today. And I do not want to change the way I work.
That's possible, and we definitely see it at some of the institutions we work with. That being said, I think there's a lot one can do in being thoughtful about how you deliver new technology, and the sequencing with which you deliver it, that can make it much easier to cross the chasm, so to speak. One thing that's definitely true: if you're somebody in the middle of the pack who needs to see more evidence of something working before you're willing to adopt, being surrounded by others who've actually tried the technology, use it every single day, and sing its praises can mean the difference between being willing to give it the time of day versus just saying, you know what,
I don't know if I've got time for this. One of the things we think a lot about is how do we figure out who are the right users to start off deploying with before we start going after that middle pack? Because you want to create these nuclei of successes where people start talking about what their life is like after making the change.
to eventually create a wave of more and more believers within the institution. And then inevitably you get to a setting or a use case where you thought the product was going to work well, and it doesn't, for whatever reason: a knowledge limitation in the models, a behavior limitation, or an issue with workflow. And there, I think the difference has more to do with how obsessive the team is about identifying those use cases, figuring out quickly whether that's a place you're going to invest or not, and then building trust with those end users by saying, hey, this thing that previously didn't work for you because of A, B, and C, well, two weeks later, look, we've fixed it, and you're really important to us. Let's give this a second shot.
Being responsive in that way to users, when the reason for not adopting is a product quality limitation, can very well mean the difference between someone building trust and wanting to work with you versus saying, you know what, I've given up on your company; I don't think you're ever going to work for me. One of the best product marketers I've ever worked with,
her thing was: find your champions and help them market their successes. That's how you expand. That's how you do it. It's true within an organization, and it's true between organizations. I think the single greatest thing that's happened to Ambience in the last 12 months is that our customers are now going on stage at conferences and peer groups, where other health system leaders are listening, and essentially telling the story of Ambience,
all the incredible impact they've been able to create with Ambience technology for their clinicians. And that's resulted in a level of inbound that is kind of unprecedented in our industry.
That is more valuable than any marketing that Ambience could ever pay for. For sure. And that is all for now. If you made it this far and enjoyed what you heard, please do share this podcast far and wide and rate it on your platform of choice. And keep checking your feed for more great episodes in the weeks to come.