
316: Punit Soni, CEO of Suki, On Healthcare AI Revolution, Voice Assistants, and Empowering Clinicians

2024/12/30

AI and the Future of Work

People
Dan Turchin
Punit Soni
Topics
Dan Turchin: This episode discusses three major applications of AI in healthcare: supercharging medical robotics, enabling disease prediction and prevention, and bringing back the human touch through technology. AI can use preoperative imaging data to create a three-dimensional view of a patient's anatomy, help clinicians predict, diagnose, and treat disease more quickly, and, paired with doctors and surgeons, deliver better diagnoses and shorter recovery times. Punit Soni: The Suki voice assistant aims to use AI to make clinicians more efficient, reduce their administrative burden, and improve patient care. Suki's applications include clinical documentation, coding, Q&A, and patient summaries. Punit Soni believes the healthcare system of the future will be more flexible and decentralized, clinicians will have more tools, and patients will get more convenient care, ultimately improving quality of life and lifespan. He also emphasizes the importance of transparency and data privacy in AI-driven healthcare technology and of using AI responsibly. On data collection, Suki is committed to protecting patient privacy by removing personally identifiable information before training its models while retaining valuable insights. Punit Soni also discusses the challenge of identifying personally identifiable information in natural language and the further technical development that will be needed. Finally, Punit Soni offers advice on careers for the future, arguing that skills such as math, philosophy, history, and creativity will help people adapt to the changes AI brings.


Key Insights

What is Suki, and how does it aim to revolutionize healthcare?

Suki is a voice-based digital assistant designed to help clinicians by automating administrative tasks like clinical documentation, coding, and order entries. It aims to make healthcare technology assistive and invisible, allowing clinicians to focus more on patient care rather than administrative burdens. By leveraging AI and voice interactions, Suki seeks to reduce the time doctors spend on non-clinical tasks, which currently takes up 30-40% of their time.

Why did Punit Soni choose healthcare as the focus for Suki?

Punit Soni chose healthcare because it is a domain with sophisticated users, a significant administrative burden, and repetitive tasks. He saw an opportunity to apply AI and voice-based interactions to solve these challenges. Healthcare also lacks a dominant tech company due to its fragmented nature, making it ripe for innovation. Suki's goal is to democratize healthcare tech by creating a unified experience that addresses multiple pain points for clinicians.

How does Suki ensure trust and transparency in its AI-driven healthcare solutions?

Suki ensures trust and transparency by clearly defining its role as an assistant, not a replacement for clinicians. It provides tools like the 'transcript view,' which allows doctors to trace how clinical notes were generated from patient conversations. Additionally, Suki emphasizes responsible AI practices, including double sequential de-identification of personal data to protect patient privacy. All outputs are under clinician oversight, ensuring that doctors approve and validate the AI's work.

What are the primary use cases for Suki in clinical settings?

Suki's primary use cases include clinical documentation, coding, order entries, and providing contextual information like patient summaries and medication details. It also assists with data retrieval, such as pulling up vaccination records, and can generate insights like plotting A1C levels over time. Over time, Suki aims to become a comprehensive assistant that handles scheduling, patient summaries, and other administrative tasks, allowing clinicians to focus on patient care.
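
For a rough sense of what a request like "plot the A1C level over the last three months" reduces to behind the scenes, here is a minimal Python sketch: retrieve the relevant lab observations and chart them. The hard-coded observations and field names are hypothetical placeholders; a real assistant would pull them from the EHR (for example via a FHIR Observation query), and none of this reflects Suki's actual API.

```python
from datetime import date
import matplotlib.pyplot as plt

# Hypothetical lab observations as they might come back from a chart
# query; in a real system these would be fetched from the EHR, not
# hard-coded here.
a1c_observations = [
    {"date": date(2024, 10, 1), "value": 7.9},
    {"date": date(2024, 11, 4), "value": 7.4},
    {"date": date(2024, 12, 2), "value": 6.9},
]

dates = [obs["date"] for obs in a1c_observations]
values = [obs["value"] for obs in a1c_observations]

# Chart the trend the clinician asked about.
plt.plot(dates, values, marker="o")
plt.title("Hemoglobin A1C over the last three months")
plt.xlabel("Date")
plt.ylabel("A1C (%)")
plt.tight_layout()
plt.show()
```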

How does Suki handle the challenge of protecting patient data while training its AI models?

Suki uses double sequential de-identification to strip out personally identifiable information (PII) and protected health information (PHI) before any data is used to train its AI models. This ensures that no sensitive patient data is exposed. The company also employs a thoughtful architecture design to overlay contextual patient information securely without compromising privacy. This approach allows Suki to leverage data insights while maintaining strict confidentiality.
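
Suki's actual pipeline is not public, but the general idea of a "double sequential" de-identification pass, two independent detectors run back to back so that anything one misses can still be caught by the other, can be illustrated with a minimal Python sketch. The regex rules, the name dictionary, and the placeholder labels here are purely hypothetical.

```python
import re

# Pass 1: simple pattern-based detectors (illustrative rules only; a
# real system would cover far more identifier types).
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_patterns(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Pass 2: a dictionary-style name detector, standing in for a real
# clinical NER model, which would be far more sophisticated.
KNOWN_NAMES = {"punit", "dan"}

def redact_names(text: str) -> str:
    return " ".join(
        "[NAME]" if word.strip(".,").lower() in KNOWN_NAMES else word
        for word in text.split()
    )

def deidentify(text: str) -> str:
    """Run both detectors sequentially, so each can catch what the other misses."""
    return redact_names(redact_patterns(text))

print(deidentify("Punit called 415-555-0100 about his visit on 3/14/2024."))
# -> "[NAME] called [PHONE] about his visit on [DATE]."
```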

What is Punit Soni's vision for the future of healthcare with AI?

Punit Soni envisions a future where AI makes healthcare more decentralized, efficient, and accessible. He believes AI will act as a scalable assistant to clinicians, enabling them to care for more patients and reducing the global shortage of healthcare professionals. In this future, patients will own their health data, and AI agents will facilitate seamless interactions between patients and doctors. Soni predicts that AI will lead to longer, healthier lives and a healthcare system that is less frustrating and more empowering for all stakeholders.

What skills does Punit Soni believe are future-proof in the age of AI?

Punit Soni emphasizes the importance of learning math, philosophy, and history as foundational skills for the future. He also encourages creativity and adaptability, as new technologies like AI and robotics will create entirely new industries and opportunities. Soni believes that understanding the past and thinking critically will be crucial in navigating the rapid changes brought by AI, enabling individuals to contribute meaningfully in fields like advanced manufacturing, space exploration, and healthcare.

Transcript


Every time we have this epoch, I think AI is one of them. What it leads to is a whole new set of user interactions, a whole new set of jobs and things to do, a whole new set of winners and a new set of losers. It's just inevitable as a march of technology that happens. Good morning, good afternoon, or good evening, depending on where you're listening.

Welcome to AI and the Future of Work, episode 316. I'm your host, Dan Turchin, CEO of PeopleReign, the AI platform for IT and HR employee service. As you know, if you listen frequently, we launched a newsletter. It's a great place to go for things that don't always make the podcast but are always fascinating insights. We will link to that newsletter in the show notes so you can go and subscribe to it.

If you like what we do, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you leave a comment, I just may share it in an upcoming episode like this one from Madeline in Paris, France, who is an analyst at BNP Paribas and listens while walking the dog.

Madeline's favorite episode is that great one with Dipanwita Das, or Dee Das, CEO of Sorcero, about using AI to find answers in scientific journals. Of course, we'll link to that in the show notes. We learn from AI thought leaders weekly on this show. The added bonus: you get one AI fun fact each week. Today's fun fact: Fast Company published an article this week about how technology is changing healthcare.

A team of experts at a recent Fast Company Innovation Festival shared these three themes. One, AI is supercharging medical robotics. AI can leverage preoperative imaging information such as CT or MRI data to create a detailed map of the individual patient's anatomy, the exact positioning of organs, tumors, veins. This information becomes a three-dimensional view of the patient.

The second theme that Fast Company published, AI is fueling prediction and prevention of disease. AI has fueled early detection tools that consider warning signs and risk factors, helping clinicians predict, diagnose, and treat, quote, silent killer conditions like sepsis far more quickly.

Theme number three published by Fast Company: tech can actually bring back the human touch. This is particularly relevant to today's conversation. Soon technology will both integrate seamlessly into the clinician's process and remain nearly invisible to the patient. Personalization and precision achieved through AI and robotics,

paired with doctors and surgeons, will lead to better diagnoses and shorter recovery times. Of course, we'll link to that full article in show notes. Now shifting to today's conversation.

Punit Soni is the CEO of Suki, the voice-based digital assistant for clinicians that's reimagining how doctors and patients communicate. Suki has raised $165 million to date, including a recent $70 million Series D from an exceptional group of investors including Venrock, Hedosophia, March Capital, and others. He is also a prolific angel investor, the former chief product officer at Indian e-commerce giant Flipkart, and he held tech leadership roles at companies like Google and Motorola Mobility.

Punit received his MBA from Wharton and his master's in EE from the University of Wyoming, which is, of course, home of Pistol Pete and the Cowboys. Without further ado, Punit, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that illustrious background and how you got into the space.

My pleasure to be here, Dan. Thank you for having me on the podcast. Hardly illustrious. I think it's somewhat cliched, actually. But I grew up in India. Most of my education was here in the US. And I started my career as an electrical engineer, actually in QA, in electronic design automation companies. And a few years of that, went to Wharton, thought I would do venture capital, did a little bit of it, realized I was pretty bad at it.

Didn't want to actually do more of that. And then I decided I wanted to operate. So ended up in, you could either do startups or Google at that point of time. And Google was still relatively small. I joined Google in the search team as a product manager.

And one thing led to the other, ended up actually running the mobile apps product management team there. This was the early days of mobile apps. There were actually no mobile apps. We built first versions of everything on your phone from Gmail, Docs, Chat, YouTube, Calendar, etc. And did that. Mobile was a blast. It had

become a really, really massive business. Larry became CEO. He started a social initiative at Google to think about what Google's stance on social media would be. And I started working on Google+, where probably I saw the same amount of failure as I saw success on the mobile side. And then, you know, Google bought Motorola. And they asked me to go there and run software product management, which I did for three and a half years. That was the best job.

Honestly, the most interesting role I've had. Did that for a bit till we sold it to Lenovo. That's when I decided I'll leave Google. And you know, you do a bunch of VP of product, CPO roles here in Bay Area. But it felt like going back to India would have been an interesting experience, especially because it was really starting to just take off. And that's when I took on the role as the chief product officer of Flipkart, which is India's largest e-commerce company.

stayed there for a little bit till we actually sold it to Walmart. And then I came back and started Suki. That's a background in a summary. Of course, there's a lot more to talk about. But, you know, I think that the joke I make is that I'm an electrical engineer. I've done enterprise software. I've done

venture investing. I've done mobile apps. I've done search. I've done e-commerce. I've done hardware. I've done social. I've done gaming. I guess it's logical that I'm doing healthcare now. Is it in fact logical? That was going to be my question. Certainly the mission of, say, a Flipkart or a Google, while "do no evil" is certainly commendable, fixing healthcare is perhaps a bigger, bolder mission. What's kind of the through line? What inspired you to move into healthcare? I don't really have any real

romantic story of how I decided that I would do healthcare. I wanted to start a company and I had this thesis that something was going to happen with language and speech and AI. And now we call it large language models, but at that point, it was not very clear exactly how that would play out. But I had seen enough in the technology to know that

over time, there will be a new user interaction model that would be created, which would not be click and type oriented, which would actually be ambient speech, voice oriented. And you could just be casually talking to computers or talking to each other on computers or do the things they need to do. And then the question just became,

Well, what is an area where you have repeated structured interactions? The data is actually repeatable. The users are super sophisticated and they have a huge issue of administrative burden. If you look at all of those, I think healthcare rises to the top. It's a super sophisticated group of people. They have a very serious administrative burden problem. They do a lot of repeated stuff.

This is exactly where something like speech or an assistant that's based on speech interactions can actually do a lot. And so I kind of started thinking about that a little bit. And then I realized that one of the reasons there's no real massive, huge

you know, healthcare tech company, massive, huge as in like a Google or a Microsoft or Amazon or anybody else. It's because healthcare is like all these surface areas of problems that are like very fragmented. You have different specialties, you have different, you know, settings, you have different regulations, you have so many complex things that are happening. And what AI can do or this new user interaction model can do is it can democratize that problem set area.

And suddenly you can actually build one experience that can over time actually chip away and create more and more skills that can actually solve a lot of these issues. And so then I started looking and they're doing clinical documentation, they're coding, they're putting orders, they're asking questions. And every one of these things actually involves so much clicking and typing and

And imagine there was a voice-based assistant that could actually just be there and you could talk to and it would actually give you all the contextual information or it could just listen to you and the patient and actually nail what the outcome or the summary should look like and put it directly into the medical record system. That started to feel like where things would go. And so now you have this thesis, a really sophisticated user group, a huge burnout issue.

doctors, clinical documentation, coding, etc. And an incoming tech trend in AI that can fundamentally solve that. So that's when I felt like this was going to be a really, really huge opportunity to potentially change all of healthcare tech and how it behaves. And now, I guess, six, seven years later, it's starting to feel quite real. I'd say so. I'm going to go out on a limb and say in the past century,

The nature of how patients interact with their clinicians hasn't changed much. And so there's obviously an opportunity to innovate, to improve the experience. And yet, as a patient interacting with my doctor, it's a very private experience. It's very intimate.

How do you build trust with patients and doctors alike in all of a sudden having kind of a digital listener or, you know, this kind of digital third party participating in this very, you know, intimate experience? Yeah, I think it's interesting because

On one hand, it's probably more trusting of an environment when there's not another human sitting there listening to you, like a scribe or somebody else. And it's just technology that the doctor is using. It's one way. The second point is that you have to clearly define what the technology's role is. And it actually has emotional ramifications also on how you make people feel.

Some people say we are actually reinventing a doctor. Some people say we're building a scribe. But actually what we're really building is basically an assistant. And that assistant's job is to assist the doctor so that they can focus on clinical care and your care. And if you take that thesis, there's an anecdote that I tell often. You know when Garry Kasparov lost to Deep Blue,

And people are like, well, okay, there's not going to be any chess players anymore because, like, you know, you lost. You lost to a machine. And he's like, sure, I did lose to a machine. But if you give me a machine, I will beat every machine and every human in the world. And so I think trust will come from quality. It will come from being a true assistant. It will come from increasing the quality of care that the doctors can provide. It will come when you realize that your doctor now can look at you in the eye.

and actually understand you as a person and not have to worry about the 50 things they have to do. And then you have to combine that with responsible AI. Where does the data go? How do you actually use it? What are your security protocols? What are your privacy protocols? Are you going to scale this? You need to have a

significant measure of transparency in how you actually build all of that infrastructure so that people can actually trust you. And then the final point I'll make is at the end of the day, every single thing that this assistant does or creates is actually under the oversight of a doctor.

So therefore, it's super important that we build user interaction models where the doctors actually can approve things and say, yes, this makes sense. Yeah, this information is good. Push it to the medical record system. And so there's some combination of how you actually define the product as

an assistant, what the user interaction model for this product is, what the infrastructure is that's actually used to do this, and what the oversight is that you create for the clinician so it's not cognitively burdensome, but at least helpful. That can create the kind of trust that you're talking about. Not an easy problem, by the way, in the AI world, but definitely worth solving because it's going to be everywhere, regardless of whether we like it or not.

We're going to come back to that, put a pin in that one. So you're talking to a lot of product people and entrepreneurs, and I could envision, once you define that as your problem space, a hundred different use cases for how that digital medical assistant could assist the clinician. Right. How do you think about where to start? And maybe talk us through the top most common use cases for Suki in the exam room.

Yeah, I mean, I think the way to do it is to reverse-engineer the amount of time that folks are spending on things that are not clinical care. The vision of Suki is to make healthcare tech assistive and invisible so that a clinician can focus on what they love most, which can, by the way, be taking care of their patient, but could also be just going home, you know, or spending time with their family. And then if you look at the amount of time they spend, they roughly spend like 30 to 40% of all time outside of clinical care just on documentation.

And then they spend probably another 20-25% of their time on orders and all of the other things they have to do there. Then they spend another 10-15% of their time doing data retrieval. Hey, what information do I need to get? And there's six clicks and seven drop-downs before you figure out what vaccination somebody's taken.

And then you're spending a lot of time actually just getting contextual information that you may not even be looking for that, but you know that you probably want to find and so on and so forth. And so if you build an assistant, which starts by actually doing clinical documentation, then the inherent act of creating the document that represents the encounter, patient encounter,

creates the structured data that's used to generate models that can train models that can then solve other problems along the way. And so Suki does clinical documentation. Then it started actually doing coding, which is how doctors get paid.

Then it started actually providing basic Q&A. Hey, what medications is Puneet taking? What's his A1C level? What are his vital signs? Then it starts getting much more fluid where you can basically just start saying things like, okay, I'm, you know, plot the A1C level for Puneet over the last three months and give it to me. Or what's the FDA recommendation for this particular patient?

And then you start actually doing patient summarization, where, before you walk in, you say, well, what should I know about that? And it actually provides you a summary of that. Then you start staging orders into the system, and so on and so forth. And you get to a world

where there is this assistant that you have with you that actually can tell you what your day looks like, who should you be looking at, what's the summary of the person you're going to see next. You can ask it to pay attention so it can write a note, put together orders, do all the work that you're doing in the act, and then also provide you all the other contextual information to operate. And suddenly, you have it always with you and you are focused on empathy and clinical care and everything else is being taken care of.

If that's actually going to happen, I think it will happen in the cusp of AI, user experience, language models in healthcare. And I think it's somewhat inevitable that in the next 10 years, we're going to see pretty much this is the way doctors are going to operate, which is a very different world from the world they came from, just like

pre-Internet and post-Internet were very different worlds. We talk a lot on this podcast about what it means to exercise AI responsibly. And I think that topic is nowhere more important than

in the medical exam room. One of the principles I talk about a lot with regard to responsible AI is transparency. So I need to be aware when an AI is influencing a decision potentially about my health. And if I, for whatever reason, disagree with the information it's captured or the decision it's made on my behalf, I should have a say in influencing or at least understanding where that decision came from.

How do you think about the need to disclose what's being done by Suki to both the clinician and the patient? And how do you coach your teams about the responsibility that you have because of the power of these technologies?

It's a very important question. And I think it's a very important thing to do because in the early days of all of this, see, I think if it all works very well, I do not foresee a world where people are sitting there actually like checking to see how it did what it did.

But I actually think that it's important to have that exposed to people so when they want to, they can. And that gives them the confidence that they're using the product in the right way. And so, for example, we're launching this thing

which is called the transcript view, where basically if a doctor and patient talk to each other and it creates a clinical note out of it, you can take any part of the note and click into it and it'll tell you where that information came from. Well, this is the point at which people were talking to each other and this is what you said and this is what that person said. Or here's the information from your medical record that I pulled up that actually I used to create this thing.
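
One way to picture how such a transcript view could be wired up: each sentence of the generated note carries references back to the transcript turns (or chart fields) it was derived from, so clicking the sentence resolves those sources. The data structures below are a hypothetical Python sketch of that provenance link, not Suki's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TranscriptTurn:
    turn_id: int
    speaker: str        # "doctor" or "patient"
    text: str

@dataclass
class NoteSegment:
    text: str                                    # one sentence of the note
    source_turn_ids: list[int] = field(default_factory=list)

# A toy encounter: two transcript turns and one note sentence that
# cites both of them as its provenance.
turns = {
    1: TranscriptTurn(1, "patient", "The cough started about two weeks ago."),
    2: TranscriptTurn(2, "doctor", "Any fever or shortness of breath?"),
}

segment = NoteSegment(
    text="Patient reports a two-week history of cough; denies fever.",
    source_turn_ids=[1, 2],
)

# "Clicking" the note sentence resolves its sources from the transcript.
for turn_id in segment.source_turn_ids:
    turn = turns[turn_id]
    print(f"{turn.speaker}: {turn.text}")
```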

And so this idea that you can actually go and click into any aspect of the output of the AI and cross-check what information was referenced to actually create it is super important. Now, patients are a whole different thing, because patients may want that, but mostly they just want to know what they have to do. And so,

If you can expose these clinical artifacts to the patient in a human legible form, if you can give them their plan, their instructions, the encounter information, I think that over time, this will become even more and more important because the world I foresee about five, ten years from now,

It's pretty impossible right now with how everything is set up. But I do expect that there will be some amount of reasonable decentralization of healthcare data. Where everybody will own their own data and own their own record. And there will be agents that actually operate on behalf of them. And their job will be to actually go ahead and say, well, look, Dan, it looks like you're falling sick. Do you want me to actually go...

you know, find somebody to talk to. And then they will take all your information. They will go look at your financial information, your geographical information, find the agent of a doctor, do a handshake of data between them. The doctor will then basically see you. The agent will listen and will provide clinical documentation and again, pass it back to your agent. So you have all the information. This agent architecture that I talk about,

is actually going to be real. We may call it an assistant, just the way we call it today, but it will be an assistant, an instance of an AI assistant that represents every stakeholder.

And today, that's not the world we are in. Today, the patients are actually kind of honestly a little helpless. They don't really have all the access to the data. They are beholden to all these somewhat monopolistic systems who basically will provide some cut of the data if they want to. They don't understand why they pay what they pay.

So then we are far away from that world. But one key part of this transformation is going to be this transparency, not just transparency in what the AI does on your behalf, but also transparency in what the healthcare system has to do on your behalf. So that's how I think about this area, Dan. Good perspective. Thank you. So all AI really ultimately comes down to a data problem.

And all data problems ultimately come back to this conversation about kind of what I'll call a commons problem. In this domain, I want Suki's models to be as accurate as possible. And therefore, I would like

Suki's models to be trained on everybody else's medical data in the world to provide me with the most accurate diagnosis and prescriptions, etc. But I don't really want to share my data because it's very personal to me and I don't want to potentially have something private about my health be exposed. How do you think about, how do you collect data?

data from other people and use it to the benefit of all patients and doctors using Suki, but while, of course, maintaining confidentiality and the privacy that patients and clinicians expect. I think it's super important that you actually don't leak

any kind of identifying data into these models. You know, that should be like the number one thing, which is, and by the way, I am not very sure if everybody's following that, you know, in a concerted fashion, not my space to say who is and who is not, but I would think it's a little bit of a wild west out there.

in terms of how people are actually using this. Suki has definitely had the stance that any PII, PHI that's there has to be stripped out before it ever gets close to a foundational model. And we do double sequential de-identification to actually do that, to make absolutely sure that nothing ever goes through. I think it's super important because

As AI is developing, it's still early in its capabilities and we're not really exactly completely sure what kind of things it can do. Having personal identifying data anywhere close to these models is not a great idea until you actually build a very secure infrastructure that allows you to do something like that in a model that's actually also disconnected and secure in its own way.

And so how do you do it? You do it with a lot of caution. You first make sure that you're stripping out all this information. Now, the good news is that after you strip out all the information, there's still a lot of insights, specialty-based insights, pattern-matching information, lots of ways to actually really make healthcare tech more efficient with that. And then the context of what the person's situation is can be overlaid on top of it.

And in a very secure way so that when you actually provide information to the doctor, you might actually be using something that's very relevant to the patient they're going to see. But you can do that without actually having to take that information and send it to any of the underlying foundational models. And so that requires a very thoughtful architecture design.

where you're able to build user interaction models and user-facing products that actually do use contextual information about the person that they're going to treat, but you're able to disconnect that and clear it out from any data that's used to train these models. It's possible, we do it, and it will have to continue to be done, because I just think that's the more responsible way of actually developing in this space.
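
A minimal sketch of the architecture described here, assuming a simple placeholder-substitution scheme: identifying details are stripped before anything reaches the foundation model, and the patient-specific context is re-attached only inside the secure, clinician-facing layer. The `deidentify`, `call_foundation_model`, and `reattach_context` helpers are hypothetical stand-ins for illustration.

```python
# Hypothetical sketch of the pattern described above: the foundation
# model never sees identifying data; patient-specific context is only
# re-attached inside the secure, clinician-facing layer.

def deidentify(text: str) -> tuple[str, dict]:
    """Strip identifiers; return redacted text plus a local mapping of
    placeholders to original values (kept inside the secure boundary)."""
    mapping = {"[PATIENT]": "Punit S."}           # toy example only
    return text.replace("Punit S.", "[PATIENT]"), mapping

def call_foundation_model(prompt: str) -> str:
    """Stand-in for a call to an external LLM; it sees redacted text only."""
    return f"Summary based on: {prompt}"

def reattach_context(output: str, mapping: dict) -> str:
    """Overlay the real patient context back onto the model output locally."""
    for placeholder, value in mapping.items():
        output = output.replace(placeholder, value)
    return output

raw = "Punit S. reports improved glucose control since the last visit."
redacted, mapping = deidentify(raw)
draft = call_foundation_model(redacted)           # no PII/PHI leaves
note = reattach_context(draft, mapping)           # context restored locally
print(note)
```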

So, this is a super hard question to answer, but I'm going to ask you anyway, because you're a technologist and you and I go back to days when information was structured. It came in the form of fields and forms and it had labels and things like that, but human language is messy.

It's really, really hard to identify PII, personally identifiable information, in natural language. Because me talking about my genealogy, my grandma, my pets, my various things: where's the line between what's

personally identifiable and what's not. Even if you take a simple example like an address in different countries, the addresses are very different. Phone numbers are very different. So credit card numbers, the patterns are different. So gosh, just how do you think about building AI that's smart enough? I mean, it's a standalone technology for a separate billion-dollar company. How do you think about identifying what's PII?

First of all, I think that there is a huge company to be built in that space in general. I do think that

the current state-of-the-art algorithms that are distributed are pretty good. For example, we use a lot of Google's infrastructure, including some of these PII and PHI stripping, de-identifying algorithms. They're pretty good at what they do. Are these things ready for what the world will look like when all of the data is going to be like a 24-7 stream of conversation? It's not structured into these nodes that you're pulling in and doing stuff.

I'm not so sure yet. That is where there is a billion-dollar company to be built. But if you look at today, if you look at healthcare, to some extent, the deficiency of the kind of data that's actually stored there, the fact that it's limited and somewhat unstructured, is also a little bit of its advantage, because it's not so comprehensive, not so fluid, that you don't actually know where things will end up. And so...

I think we have the right technologies for now, but I don't believe we have the right technologies for what's coming very, very quickly and very soon. And so to your point, the problem is going to be very real. And again, I think there's a really huge billion-dollar company to be built. So Punit, let's try a thought experiment. Punit and Dan are back here in a decade and we're having a version of this conversation. What's the end state where AI is doing as much

as possible because it's just smarter, it has perfect recall, and is able to maybe automate more of that patient-doctor interaction. And maybe the doctor becomes kind of, you know, AI with a scalpel. I mean, potentially that can be done with robotics. But let's say, you know, there are certain things where we always want the human to intervene. But maybe, just as a thought experiment, that patient-doctor interaction could be

almost fully automated? What's your sense about where this field is going? I think, who was it? Was it Vinod who was saying it? Vinod Khosla or somebody? He said that a small village in India will probably see an AI cardiologist way before an American will. I think that today, you know, with any new technology, there's always a lot of anxiety. What's going to happen when electricity showed up? Or what's going to happen when

the steam engines came and industrialization happened, or what could happen when the internet showed up. There's anxiety. People who are actually doing all sorts of things are like, what am I going to do? Like my son was asking me the other day, he's 12 years old, he's like, what should I study? You know, there's anxiety that comes. But if we look at history, every time we have had an epoch: the printing press was an epoch-setting technology, electricity was an epoch-setting technology, the internet is an epoch-setting technology.

Every time we have this epoch, I think AI is one of them. What it leads to is a whole new set of user interactions, a whole new set of jobs and things to do, a whole new set of winners and a new set of losers. It's just inevitable as a march of technology that happens. Then if you go back to healthcare, you realize that

For the 7-8 billion people that we are, we cannot create the number of doctors and nurses and clinicians that we need to actually take effective care of them. In fact, the biggest issue we have in the world is that we just simply don't have enough. And all of this is so expensive and so difficult and so inaccessible and people are just dying way earlier than they should be because of lack of medical care. In that world,

The idea that like an AI can over time emerge to be a really serious assistant to a clinician and then scale them and expand them so that they can do a lot more and they can take a lot more people and then also potentially end up showing up in areas where we may not even have access to a clinician.

is game-changing. It's going to actually lead to a wonderful new world with people living a lot longer and healthier and clinicians being more enriched and more contented with the kind of work they do. And so I'm an AI optimist. I believe that if we were talking 10 years from now, I think the healthcare system would be much more fluid and decentralized.

I think that clinicians would have access to way more tools, way more in advance to actually do a better job at clinical care. I believe that people will actually find less frustration. They will actually be able to get help as soon as they need it. Their body might be able to tell them faster than they even feel sick. And then somebody will actually give them what they need to do to actually get better. I think our lifespans will increase. We will live longer. We will live healthier.

And in all of that, you know, healthcare will become some sort of interesting, invisible and assistive thing that's just around us, but it's not a cause of frustration, but a cause of happiness and longevity. So I believe that's going to happen. And I believe AI, as we're seeing it today, is just like phase one of that first step of making that happen. And so looking forward to that podcast when we are there. Get it on your calendar, right?

2034. I'm pretty sure about this. Make sure you got it blocked in your calendar for that. Yeah, it'll be amazing. It'll be amazing for us to be, we are all at the beginning, we're the pioneers. We're just starting. And in 10 years, just like pre-internet and post-internet, everything will look different. It's going to look very different. I love that answer. And I'm glad you referenced that comment from Vinod Khosla. I so celebrate that vision.

that every small village in a third world country has a world-class, like you said, cardiologist or ophthalmologist or radiologist. And it does, however, lead me to ask one final question. I got to get you off the hot seat. But so whether it's the conversation that you're having with your 12-year-old son or with the next generation of medical students, and they say, you know, yeah, yes, Puneet, I get it. That's a bold vision. But

What does that mean for me? And obviously the economics of healthcare are different. What skills are future-proof?

If that's the future of medicine, I always thought that I was supposed to be a doctor, an attorney. These were all careers that were future-proof. Maybe they're not. And what is future-proof anymore? Yeah, I think that there is a convergence of a variety of different technologies that is going to lead to a golden era of productivity in front of us. And when we think about

speech models. It's basically effectively a new user interaction model on how we actually are going to communicate with computers and are able to do things. But there is another whole trend around robotics that's actually also coming along and will actually create a whole new kind of expansion of things we can do. Which brings you to,

you know, areas like advanced manufacturing, where we will be able to build things at scale in ways we have not been able to build before. Even if you think about the economics of the US, manufacturing has just been hard. Well, manufacturing can come back, right? It can be here again. And it's inevitable if we actually plan for that world. And then if you think about medicine and healthcare and how it can actually scale up in doing things,

If I was actually telling people what it means for them and what skills they need to get, first of all, I guess I half joke and tell them the same thing. I'm an Indian dad, so the first thing I'll always say is learn math. Math is important. And no matter what the world looks like and how AI is iterated and built, learning math, it's a language. That's important. But there'll be other things to learn. We should learn philosophy. We should learn history.

We should learn from our mistakes, from the past, because we will have all these things available to us from a technology perspective with which we can build things. So when my 12-year-old asked me, I told him, he was anxious. I was telling him, what if we could figure out how to be space explorers? That's a whole new thing that we never did before. What if somebody figured out how to solve tasks remotely, across distances,

like healthcare? There'll be a whole new set of things to do then. There's basically advanced robotics. There are so many interesting new areas that are coming up, some that we don't even understand. Like the limits of our understanding of physics stop us from thinking about what's going to happen with time and space. And there's so many new things to do.

Learn math, keep your eye on AI, learn philosophy, learn history, be creative. The world will change, but you will have an interesting role to play in it with tools that we can't even dream of. That is such a better answer than what I give my kids when they ask me. So the next time they ask me, I'm going to have them listen to Uncle Puneet, okay? Digital Puneet, all right, is going to educate my girls about that.

about how to prepare for the future. This is a fantastic conversation. There's so many topics that we had planned and didn't get to almost any of them, but this was more interesting as a result. So thanks for hanging out. Thank you so much for your time. I appreciated this conversation. Yeah. Now, where can the audience learn more about the great work that you and Suki are doing?

Yeah, absolutely. You know, and by the way, like, you know, check it out, www.suki.ai. We are an AI assistant for clinicians. We do things like clinical documentation, coding, order entries, and over time, more and more things so that healthcare can be assistive and invisible and

clinicians can focus on what they love to do, which is take care of you, the patient. And so that world's coming. You know, whether it's Suki or it's something else, it almost is inevitable that that world will be here. And so read up on it, because that's where healthcare tech is going. Well, it's such an important conversation. There's so much more to talk about. I'm going to ask you to come back, and not in 2034, before then. How about that?

Sounds good. Yeah, give us an update on your progress. Well, hey, Punit, we're all rooting for you. Great work and best of luck. Thank you so much. Well, gosh, that's all the time we have for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin from PeopleReign. And of course, we're back next week with another fascinating guest.