
Reid riffs on AI wearables, quantum, and Deep Research

2025/3/5

Possible

People
Parth Patil
Reid Hoffman
Topics
Reid Hoffman: I think the combination of wearables and AI will ultimately succeed, but the commercial breakthrough will take time. Future wearables will be excellent and will be adopted first in professional settings, such as healthcare and public safety. On AI and copyright, we need a clear legal framework that balances protecting creators' interests with fostering AI innovation; this will be complex and will take time to resolve. AI coding assistants will significantly improve software engineers' productivity but will not replace them, because engineers still do the high-level thinking and decision-making. Quantum computing will solve problems that are hard for traditional computing, especially in drug discovery and materials science, though consumer applications will take time. Parth Patil: Deep research tools still have accuracy problems, but they are very efficient at synthesizing information; they are likely to play a larger role in code generation and "vibe coding," and to change how software gets built.


Chapters
Discussion on the future of AI wearables, focusing on potential breakthroughs and applications in professional settings. The conversation touches on the recent acquisition of Humane's AI pin by HP and explores the potential of glasses that can parse the world around the user.
  • Humane AI Pin shut down after raising $241 million, later acquired by HP for $116 million.
  • Wearables will likely see breakthroughs in professional settings first (nurses, doctors, etc.).
  • Glasses that can parse the world and provide helpful information are a likely future development.
  • Improved vision in wearables can democratize access to information and help equalize experiences for people with poor vision.

Transcript


I'm Reid Hoffman. And I'm Aria Finger. We want to know what happens if, in the future, everything breaks humanity's way. Typically, we ask our guests for their outlook on the best possible future. But now, every other week, I get to ask Reid for his take. This is Possible.

Humane's AI Pin, it recently shut down. And people love to dunk on companies. It raised $241 million from all the big investors: Microsoft, OpenAI CEO Sam Altman, et cetera. And maybe you disagree, but they were trying to do something. They were trying to do something big. It didn't work out. They shut down. And now it was acquired by HP for $116 million. And so

you know, a lot of people would say, well, see, they tried something in the hardware space. They tried something in the wearable space and it didn't work out. I know you don't like predicting the future per se, but like if you had to give a guess, like,

And maybe do the year time horizon or 10 year time horizon. What will the commercialization breakthrough be for wearables and AI? People love to sort of theorize, oh, it's dead or like, oh, it's happening tomorrow. Yeah, because classically it's, you know, how can you soapbox posture? Exactly. You know, and that's whether it's the press or people in social media or everything else is kind of the way of doing it. And, you know, mostly you would say wearables.

It's clear that wearables will be spectacular and will be there. And so, for example, you know, glasses that can help parse your world. Like, for example, if you find yourself kind of staring at the washing machine in your Airbnb going, how does this work?

Right. And it goes, oh, you look like you're staring at this thing. Are you trying to figure out how to make this work? Here's the, you know, for a washing load, you do this and, you know, you did it. And that obviously could be very helpful. You're walking down the street and you're trying to figure out things. That's just clearly the future. And then, you know, I tend to think for a lot of these wearables, they'll start actually in kind of a more of a,

you know, kind of a professional circumstance. Like I think we're going to want nurses and doctors and firefighters and policemen and, you know, community workers and everyone else to be wearing them. And I think it'll make the whole thing better. Now, the last part of that is the timing question, which is frequently the venture investing question. And it's a reason why as a venture investor, I tend to orient deeply to things that are within software. So like

you know, co-founding Inflection or co-founding Manas, you know, like one is drug discovery, one is a chatbot, but they're both intensely within the kind of, you know, how do we use bits intensely to make atoms better? And, and,

Going into atoms directly is always a much more fraught investment and much more fraught on timing. I mean, all of us children of the 80s and 90s can remember the like one parent's friend who had the car phone and you were like, you must be rich. Like little did we know that 20 years on it would be like, who doesn't have a smartphone? Like, what are we doing here?

Why doesn't the 12-year-old love a smartphone? Right, exactly. What's going on? I do think it's interesting, though, to think about what are the ways that we can use, you know, now that AI can see...

Whether it can see from your smartphone or see from your glasses, like there might be so many things that we can do that it can see from your smartphone first because that's sort of the easier way to make it happen. And there's all sorts of computer vision and it can see your computer screen and sort of all those other things that can get us there. Although that being said, one thing I think is interesting is

We talked a lot about like everyone, when they wear glasses, they think about the, oh my God, if it could just tell me that the approaching person is someone I met one year ago and, you know, their name is John. But the thing we don't think about is actually the fact that people with poor vision have a much harder time remembering people because they can't

see their facial features as well. So there is actually such a democratizing thing that we can do with glasses that isn't just, oh, yeah, it would be nice to remember people's names better, that it actually does sort of help equalize for things that people need. One of the sort of advances that I have to admit, like truly boggles my mind, I like try to wrap my head around it, is Microsoft's recent announcement of the Majorana 1 chip.

This is quantum computing. Like, this is just like a whole new realm of innovation that I think a lot of people wouldn't have guessed we were going to get to in 2025. And I feel like quantum is all the rage these days. People are talking about it all the time. Can you break it down for us? Like, why is quantum so important? And is this going to be tangible in the near future for us to see the benefits?

Here's why quantum is important. There's a whole bunch of problems that are really great to solve that, by the way, AI does help us a bunch with, but are still better with quantum computing, and probably with quantum computing and AI. And, you know, most of the dialogue tends to be people think, oh, well, actually you need quantum security. And what happens with Bitcoin with quantum? And so there's this notion of kind of logical qubits, i.e. logical quantum bits, which

And probably to get into the security realm, you need, you know, call it 2,000 to 5,000 logical qubits to really change that. Interesting. And there's ways to do quantum security. And when you kind of think, well, the current quantum computers are like 70 or 80, you know, qubits, you're like, well, that's a ways away. There's definitely some smart people who think it will be tangible in the near future. I still tend to think that we may be

kind of a few more years out than the loudest proponents. But at 100-plus, maybe call it 150 or 200, you begin to be able to solve problems in quantum that are hard for traditional computing. AI makes them much better, but not perfect: things like small molecules. So that could be drugs, that could be materials, semiconductors, you know, other kinds of things.

I mean, there's many reasons why AI is so exciting, but it's also accessible in the way that, yes, I'm not going to go out and discover new drugs with AI tomorrow, but I can go and use Pi or ChatGPT or Claude and like I, just as a consumer, can go, wow.

Whoa, the benefits of AI are truly mind-blowing. Are there going to be sort of ways that consumers can understand the benefits of quantum, or is it mostly going to be sort of in the scientific realm for accelerating those things?

Well, I don't know, kind of, is there like a consumer ChatGPT of quantum? I mean, that's kind of a, you know, maybe that's a Schrödinger's cat problem. And when we evaluate it, you know, the cat's either alive and kicking or, you know. Or not. Not big yet. But, you know, like, for example, the derivative benefits. Like, for example, you say, hey, we can suddenly...

accelerate, just like, you know, Siddhartha Mukherjee and I are with Manas using AI to accelerate drug discovery for curing cancer. And to make that much closer to reality and to make this happen, AI is going to be a great accelerant. Quantum can be another great accelerant. You put the two together, you might even get to something that's, you know, massively more accelerated. And that could be very good. And the consumers will get the benefit of what

what the drugs are, even if, you know, they're not carrying around, you know, qubit processors on their smartphone. They don't know that quantum was helping them be cured of cancer. No, that makes sense.

sort of thinking about AI in the global context. And, you know, this question, I feel like, is one that people really come back to to criticize AI. I don't think you agree, but I would love to hear more. So the UK government, they recently launched a consultation process on proposals to give creative industries and AI developers clarity over copyright laws. So

So I certainly agree that clarity is key for any business environment. We want to understand sort of what are the rules of the road. And that included an exception to copyright law for AI training for commercial purposes. So

Nobel Prize-winning author Kazuo Ishiguro mentioned that we're at a fork-in-the-road moment regarding creative works. At the dawn of the AI age, why is it just and fair to alter our time-honored copyright laws to advantage mammoth corporations at the expense of individual writers, musicians, filmmakers, and artists? So when you think about

you know, people ask you time and time again, is allowing LLMs to train on authors' work theft? Like, how should government strike this balance between protecting creative content and giving tech firms, you know, the freedom to innovate that they need? Look, the clarity thing I totally agree with because, you know, a lack of clarity, you know, creates uncertainty in all areas. Now, the problem, of course, is one can make both very compelling arguments about, you know, kind of

how this is a fair use under copyright. Because, for example, I can take, you know, Ishiguro's work. I can hand, you know, Klara and the Sun to someone. I can have that person, I can teach them. They can learn to write. They can learn ideas from it. They could

be inspired to do other things. They could generate other creative work after having read it, you know, et cetera, et cetera. And so, you know, the whole notion around kind of like, well, what does it mean when it's machine reading? And of course the critics try to say, well, but because it can reproduce it, right. And it can, it's like, well, but,

But if it doesn't reproduce it at all, or in the case of like the New York Times lawsuit, it was like, well, the only way you could reproduce the article is when you put in the first half of the article and say, now please complete this. And it goes, okay, well, I presume you're referring to this article, so I'll do that. And you could obviously train it and not do that. But like,

The presumption is you're not actually doing the harm because if you had the first half of the article, you probably had the whole article. Right.

isn't necessarily theft from any particular person. Just like when I'm reading "Klara and the Sun," I'm not stealing from Ishiguro, right? In terms of, you know, I bought my copy, I bought my copy on Kindle, I bought my physical copy, you know, et cetera, et cetera. - Yeah, think of a thought experiment where, yes, let's assume it absolutely is legal and we just think it's good for innovation. We're happy that it is legal and that it's good for innovation for LLMs to be able to use this data and these copyrighted works.

Are there any downsides? Are there any like, yeah, no, I think this is right. But yeah, in four or five years, we'll have to worry about X, Y, Z. Or you think the critics are overblown? Look, I think the underlying unspoken thing, the reason why it goes to something more sloganistic about theft or something is like, oh,

Suddenly the value of my creative work goes way down because now a whole bunch of things can be created by this new machine, which is partially enabled by the work that I've done before. And I think that the notion of, you know, will my creative work be valued? It's a muddy issue that we tend to navigate poorly. So a current one in the music industry tends to be the vast majority of

musicians benefit from live concerts, merchandise, et cetera, because streaming and the change of things has made the economics very different. And it is, well, that's a huge tragedy, because the previous economics of selling CDs was really good, and so that's unfortunate. It's like, well, but it's not clear that the selling-CDs thing is

like the thing that should be there through eternity, right? The change is a thing, but we want to have these laws respecting kind of creators, when that kind of stream of creative output is something we value as individuals and as a society, so that we have the right kind of incentive loop. So I do think that kind of figuring out how that works is important.

Now, of course, part of what I think about AI tools is right now, of course, everyone's like, oh, my God, end of the world. Part of the reason I wrote Super Agency with Greg is

But I think that, you know, what's going to really start happening is you're going to start going, oh, this will really enable me to do so much better, faster, creative work and to make it happen. You know, you've been part of this journey with me, too. You know, I've been trying to say, OK, can I get the various AI tools to help me write some of the science fiction that I've been thinking about? And it's not very good.

So it's like, well, it might get to be a competitive threat with very good science fiction writers, but not right now. And by the way, one of the things that is interesting is even as it gets there, like the good science fiction writers writing it suddenly might be able to write so much better, so much faster. Like, you know, one of the things that I always find frustrating is I find a series I really like and you go, OK, I got to the last book.

How many years until the next book? Yeah.

You know, it's like, I'm in it right now. I'm in that universe right now. And so you'd be like, well, actually, in fact, if I could be doing this, I could be producing a book, you know, for this series every month. Right. And as I'm going down the journey with it, I think it could be enormously beneficial to some creators doing that. But I understand the first reaction of, oh, God, like, for example, you take someone as amazing as Ishiguro and

is I have done this really hard thing of creating these masterpieces. You know, I'm one of the world's most, you know, celebrated authors doing this. And now you're changing the game, right? Like, I get that as a, ah. And I would be remiss because if Greg was here, your co-author on Super Agency, he would also point out that on the music front, you know,

CDs were technology. And before that, you couldn't even make a living as a musician. Before that, we had radio for a little bit, but before that, you were just strumming alone in your basement. And so technology also has actually enabled

a lot of this amazing creativity. You know, we don't even have to go back to the printing press to get the fact that, you know, CDs opened up this whole new world. And then ringtones for like a minute made a lot of money. And then we came on to how can even more musical artists, you know, make a living. And so I think to your point, this is the right way to go. And how can we navigate this?

So that people can honestly have agency to make their careers better, make more beautiful art and sort of do all the things they want to do just in a new technological context. And the transitions will be painful. You can't stop the future. But what you can do is you can try to navigate to what the better futures are.

All right. So to end our episode, we have a special guest with us on Possible today, Parth Patil. He is one of the creators of Reid AI and was the first data scientist at Clubhouse.

And as you all know, we talk a ton about AI with Reid and many of our guests. And Parth is one of those resident AI experts. And so, Parth, I'm so excited for you to come on Possible and help us break down some of the most recent AI happenings. Hi, Parth. Thanks, Aria. Glad to be here. So I want to share with you a recent experience I had and then ask you for your kind of diagnosis of it and then kind of going in the future.

So I was, you know, kind of recently, you know, kind of hanging out with Atul Gawande and I was like, have you tried deep research? He's like, no, I don't know what you're talking about. I'm like,

about. Like, okay, like what's a book you're working on? It was like, okay, I'm working on the following thing that has a chapter on anesthesiologists. And so we pulled up, you know, ChatGPT, you know, o1 Pro, Deep Research, and we pulled up Gemini, and we asked the questions to get answers. And let me run you through kind of what our discovery was that was really interesting. And then this will be the diagnosis of what is the current state of deep research and where are we going and how does this play. Which is: ChatGPT

generated, like, just something where he's like, oh my God, this is amazing. Like, this just saved me thousands of hours with my research assistant. And so he cut and pasted it and fired it off to his research assistant. And then we did Gemini. It was like, well, you know, this is much less inspiring, but, you know, okay, fine. We'll fire that off too. Now, what the research assistant came back with was, well,

On the ChatGPT answers, 90% of them were inaccurate. Like, the quote that the surgeon said, that anesthesiology helped me in my policy: that quote doesn't exist. That source isn't there in the right way. It's kind of misquoted, et cetera.

So like that was a problem. But the thing that was interesting was, it pointed me to interesting documents. It was almost like, where to look to find the kinds of things that we want. And doing the research cross-checking actually, in fact, did save me many, many hours, because I went to a bunch of different sources which actually had some of the stuff that could be interesting, and

And so you had this kind of thing where Gemini didn't have any factual inaccuracies, but was less exciting and interesting, and so could have been used a little bit more, just kind of flat. And the ChatGPT one was like, if you just quoted it, you would have been like, oops, I wrote something as fact that was wrong. But

but it was a doorway into the right things. What does that make you think about the current state of deep research tools, how people should be thinking about using them, et cetera? Yeah, so I've been working on similar tools to deep research for like over a year now. And my early experience was like, wow, we can do a lot of work. But then you realize if the work isn't high quality, it creates work because now you have to go and verify like all the things that it's coming to you with. And yeah,

I think it means there's a downside to, like, asking certain types of questions. Like, you should not expect facts in the response, which is kind of, you know, the LLMs can be confidently presenting information, but we should always, you know, take that with a grain of salt right now while it's hard to verify. On the other hand, the way I like to use deep research is more for, like, subjective intro exploration into a space,

usually for something that I wouldn't have the time or energy to do anyways, right? So if I'm like, oh, I'm like brainstorming a new, like a concept for a new app. And it'd be like, oh, where do the people who are interested in this fandom exist? What are they talking about? And then deep research can go find where on the internet, like I might find the answer to those questions. So I think you're right there. I think it'll get better. But the real magical feeling is that it can do in 10 minutes what would otherwise take me a couple of days to do.

That kind of accelerated information synthesis is actually really valuable as it gets more and more high quality, like the reasoning kicks in and the quality of the response starts getting higher. And, you know, one of the things, as you know, is I describe you to other people as the person who has not only taken the red pill, but is bathing in the red pill.

You know, what are some of the current kind of like, oh gosh, this is some of the stuff that, you know, the future is already here, just unevenly distributed. What's some of the kind of the use of AI that's caught your attention in the last month?

I think for me, it's an idea that I've been hacking on for almost two years now, and now a lot of other people are starting to experience this red pill moment of what I think Karpathy calls vibe coding,

where you kind of just lean into the exponentials and the general awareness of these models and just use them as programming assistants. And you're like, oh, why don't I just make this, like, build this feature, think of a game idea. And you kind of just let the model, you know, generate a lot of the code. And you shift more to, like, speaking. And a lot of people use Superwhisper. So they'll literally press a button and describe the app that they want. And then they let the model make the first version of it.

Claude 3.7 Sonnet came out. It's one of the best coding models out there right now. And more and more people are realizing this. And you can tell because you have non-technical people that are like, oh my God, look at this game that I made with AI. And then you have, like, experienced technical people that are like, oh, but it's not robust and scalable. And in my mind, I'm like,

The fact that we can even just create software by describing it is the magic. And yes, the models aren't perfect, but this is exactly the direction we should be going in. And the reason I say that is because my mom is a programmer.

And 15 years ago, she developed carpal tunnel. And I thought that was, like, crazy. Now it's like every button she presses is actually, like, deteriorating her hand and it hurts, right? But that's your career. And so 15 years ago, she got the company to pay for Dragon NaturallySpeaking, which is transcription software. It cost $500. Like, high quality transcription was expensive back then.

And then she would connect it to... she would give it blocks of code. So she would be like, write a for loop, you know, write an if statement, and those predetermined blocks of code would be inserted into her code while she spoke. And that was the first time I got the idea of, like, what if we just talked to the computer and it wrote the code? And of course we didn't have language models then, so it was really, like, rudimentary. But then, you know, now that we have Whisper technology, we've got language models that can write a lot of code very quickly and at a higher quality.

I came back and I showed her, and I connected Whisper to these codegen models, and it's like a hundred lines of code. That's 50 characters per line. That's 5,000 button presses. But now you can just talk to the program and it just exists. So I'm excited for more people to experience vibe coding, even if it's not the same as normal coding, because I think in certain ways it's just a hundred times better.
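(For readers who want to try the kind of pipeline Parth describes, here is a minimal sketch of wiring speech transcription to code generation. It is not the exact setup discussed on the show; it assumes the OpenAI Python SDK, with hosted Whisper transcription and a chat model standing in for the code generator, and the file and model names are illustrative.)

```python
# Minimal speech-to-code sketch: transcribe a spoken request, then ask a
# language model to turn it into code. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; file and model names are illustrative.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the spoken request (e.g. "write a for loop over a list of
#    filenames and print each one") with Whisper.
with open("spoken_request.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
spoken_request = transcript.text

# 2. Hand the transcript to a chat model acting as the code generator.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a coding assistant. Reply with runnable Python only."},
        {"role": "user", "content": spoken_request},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # review before running; the generated code is a draft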

All right, Reid. Yeah, I got a question for you. When we were talking about code generation early on when we met, I was like, wow, this thing can write perfect SQL. Doesn't that mean that conversational data analytics is basically here, where

instead of an analyst writing queries by hand, your analyst should just be talking to an analytical agent and it should write the queries? And then you made the comment that was like, yeah, but, you know, that's really scripting. It's not programming.
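(A rough sketch of the conversational-analytics loop being described here: a plain-English question is translated into SQL by a model, and a person or a reviewing agent checks the query before it runs. The schema, question, and model name are made up for illustration; it assumes the OpenAI Python SDK.)

```python
# Natural-language-to-SQL sketch. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; the schema, question, and model name
# are illustrative, not from the episode.
from openai import OpenAI

client = OpenAI()

schema = "users(id INTEGER, signup_date TEXT, sessions_last_30d INTEGER)"
question = "How active are the users who have been with us for over a year?"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Translate the user's question into a single SQLite query "
                    f"against this schema: {schema}. Return only the SQL, "
                    "with no explanation and no Markdown formatting."},
        {"role": "user", "content": question},
    ],
)

sql = response.choices[0].message.content.strip()
# Print rather than execute blindly: the "scripting, not programming" caveat
# is that a person (or a reviewing agent) still checks the query before it
# touches real data.
print(sql)
```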

And that's because I think at the time the models were kind of limited, but I think we've come a long way since then. And curious what your thoughts are on, uh, like coding copilots. I definitely think, you know, and I know we're aligned on this, that this year all the major shops are working intensely on increasing the coding capabilities for copilots, for kind of press button, get, you know, a kind of software engineer, you know, active agent. And,

you know, one of the things that people always think is, oh, what is that going to mean for software engineers? Now, in parallel to the data scientists, I actually think that there's still infinite demand for software engineers. They just may be deploying with a set of agents in terms of how they're operating. And so I think that the same thing is now true. And I think the coding capabilities are way up. But the thing that your question also gestures at, which I know,

you think about in depth too, is that actually, in fact, every professional is going to have not just a set of agents working on the thing they're doing, but also that some of these agents, or, you know, all of these agents, have coding capabilities,

which, like, eventually, as you get from scripts to other things, those coding capabilities get very deep, and the most, you know, prominent programming languages, you know, won't be C++ or Pascal or anything else. They will be, you know, kind of English, Chinese, you know, in terms of how we're all going to be using them to generate. Now, the reason why there's still room for a lot of kind of human activity, and data science is a parallel, is

because of the way you think about it and the kinds of things you do. Like, as opposed to the, hey, I'd like you to run a query to say, you know, how active are all of our users who've been here for over a year? And say, well, I could just ask the thing myself and then kind of generate the thing. But you might say, well, what are the different ways that we should try to understand this

churn? Then actually, in fact, you know, you, or a data scientist, also working with these tools would say, no, I can actually generate a whole bunch of stuff that's really interesting to you, that you as, you know, call it the general manager, might not have actually, in fact, known exactly which kind of questions and analyses to run through. And so I think we're making great progress, although I think that there's still a ways to go.

Just like writing other things, I suspect we're still some ways away from where you should get a large block of code from an AI co-pilot and just check it in without looking at it.

Possible is produced by Wonder Media Network. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young.

Possible is produced by Katie Sanders, Edie Allard, Sarah Schleid, Vanessa Handy, Aaliyah Yates, Paloma Moreno-Jimenez, and Malia Agudelo. Jenny Kaplan is our executive producer and editor. Special thanks to Surya Yalamanchili, Sayida Sepieva, Thanasi Dilos, Ian Ellis, Greg Beato, Parth Patil, and Ben Rellis.