Hello and welcome to a free preview of Sharp Tech. Hello and welcome back to another episode of Sharp Tech. I'm Andrew Sharp and on the other line, Ben Thompson. Ben, how you doing? Doing good, Andrew. Doing good. You know, another F1 season in the bag. Another championship for Max Verstappen. He did his best to try to carry Ferrari over the line, but alas, it
What are you going to do? Well, and he was able to recover from the scorched-earth comments from George Russell on Friday on the way into the race weekend. There were some direct hits there. As a Mercedes fan, I'm feeling reinvigorated and ready for next season. But sure. What direct hits? George straight-up lied. George didn't deny it.
You know it's bad enough when Crofty had to, on the broadcast, be like, yeah, we checked on George's claims and they weren't true. You know what claims were true? The claims that were true from George Russell were that Max Verstappen does not handle adversity well, and
every time there's been adversity, he freaks out and acts like a complete baby. And I heard George say that, and I thought to myself, you know, that's absolutely right. All these years, the last couple seasons, I've convinced myself I like Max Verstappen. I don't like Max Verstappen. And now that George has taken the fight to him, I'm ready to go for 2025. Yeah, he doesn't handle adversity well, which he responds to by just winning.
So maybe your guys should handle adversity a little bit worse and try to win for a change. That might be a good place to start. Listen, just be less of a baby sometimes, Max Verstappen. But hey, he's expecting a new child. This is too much Formula One. On that note, let's get into the mailbag. We've got a lot of fun beats to hit on this episode. And we are going to start with...
with a follow-up to your article last week. So this is a week ago before the news cycle was hijacked by the Intel board. You wrote about Gen AI and the future of user interfaces. It was sort of a meditation on what we've learned in 2024 and where things may be going from here.
And in response to that piece, Leon says, this is mostly for Ben, but I'm still pretty confused by the idea of a quote-unquote on-demand UI, specifically around what is made possible by generative AI.
As I understand it, Ben is saying that an AI-enabled wearable will be able to provide context-specific UIs that offer simple choices and obviate the need for touch navigation. An example would be walking past a movie theater and looking at the list of showtimes, and your glasses or whatever can see this, check your calendar, check your texts to see what movies you've discussed, check what movies you've watched,
and say, quote, do you want me to see if your wife wants to go see Wicked at 8 p.m. on Thursday? The UI in this case is immaterial. It's just a yes button, maybe a no button. The difficult bit is determining the proper context and the question to ask.
We already have dynamic UI all over the place. And to me, what you're describing is a dynamic UI where instead of populating the text in the UI based on user input or from a database, you're doing it using generative output. Am I missing something? This doesn't seem like that big of a value unlock, specifically on small wearables where the complexity of the screen is low.
So, Ben, the floor is yours. Explain to Leon what he may or may not be missing. Well, no offense to Leon, but I think his example is terrible. I mean, who wants that?
Which also is sort of a cheat and a dodge by me, because the problem with a lot of this stuff is it's hard to envision what is going to happen and what the specific use cases would be. If the use cases were obvious, they would already be built. It's just one of those classic examples where you have to get the building blocks in place and then people figure out cool stuff to do. Just to use a very classic example: the idea of Uber, of people in cars just picking you up on demand, required the building blocks of mobile, of the App Store, of access to location, and it required cloud computing. All these different pieces had to be in place, and then you could invent Uber, but you couldn't even envision it beforehand. The context we're in right now is like, say, 2003 or 2004. No one in 2003 was predicting Uber, even though at that point there were smartphones available that did have location services. We weren't even to the iPhone moment yet. We were a decade away from this service existing, and yet it was basically impossible for anyone to think about and come up with.
And so I totally get that I'm sort of dodging the question here. But the reason is that if you look back over time at the way computing has developed, what you get from the old platforms is the foundation to develop the frameworks that go into the new platform. So you start out on mainframes, which are fully integrated, and then you separate out the idea that you can have executable interactive programs. Once you have executable interactive programs, it's less that you have a PC and someone invents what to do with it. It's that people demand a PC because this functionality is so cool and would be even better on a PC. Then you fast forward and you're on the internet with the PC. The phones were there, right? Smartphones were there, but people were like, oh yeah, office workers, you know, road warriors will use them. The whole reason Steve Ballmer was skeptical of the iPhone is because, oh, people need it for email. That's what they're going to use phones for, is email. Well, it's not going to have a keyboard. Ha, ha, ha, ha, ha. Let me go buy the Clippers and run them into the ground, right? So you have this- Great Ballmer impersonation. Thank you. Yeah, maybe I could be a little more enthusiastic. But suddenly, because the internet's on your phone,
That is like, I want a device that will let me use this everywhere. And notice the key distinction here. Up until now, we've been saying, okay, Facebook's building these headsets. What are you going to use these headsets for? Right? Yeah. And it's like there's a physical device being built, but what's kind of the point here?
My point...
I identify with Leon's question, and I've had that same question myself. I mean, talking about even the meta Ray-Bans, it's like, how much use is there really if you can just look at a building and hear about the history of that building? That's kind of cool, but does it really move the needle? Does it move the needle enough so that you're going to be wearing glasses everywhere you go if you don't already wear glasses? And so I have those questions as the resident tech skeptic
on the show. But by the same token, you've made this point a couple times. It's like, it's really hard to imagine use cases and utilities that are unlocked by technology that doesn't exist yet. And so I have to quiet my skepticism and just assume that
Once this stuff becomes more widely available, people will build on top of it and come up with different applications that are hard to fathom as we sit here in 2024. Let me try to explain this again because I feel like you completely missed my point. And I'm appreciating you as a stand-in because I think Leon missed my point also. So I'm clearly not doing a good job of explaining my point.
So if you think about the wearables, you're like, we have a wearable. What do I use that for? So this is an assumption that the device comes first and then we try to figure out what to do with it. My point is the way if you look back historically, what has happened is actually the use case comes first and then the demand for the device comes after. It's an inversion.
So when you were just running punch card tabulations on a mainframe, no one wanted a PC. Like it just wasn't even the realm of possibility.
But when you had interactive programs, it was: if I can get time at the mainframe, I can do cool stuff, but I can't get time at the mainframe because everyone wants it. Man, I wish I had my own computer. You already had the use case, interactive applications, and that pulled the devices into existence. So it wasn't that the device came first. It was that the use case came first. You fast forward and you have PCs. Now you're using PCs and you can go on the internet. You can get information online. And we already had phones, but it wasn't that the phones created the internet. It's that, wow, I wish I could use the internet everywhere. Once you had that use case, it pulled mobile devices into being incredibly useful, even more useful than the PC, because now I can access the internet everywhere. And my contention with wearables is not that we are making up use cases for wearables.
Rather, my argument is we're going to see the development of a new kind of application, a new kind of layer over the next five years, say, that is much more dynamic, much more reactive, much more cognizant of your context, and just gives you what you want at the moment that you want it. Again, I think this is an overly prescriptive idea, that I'm walking past the theater and it asks if I want to see a movie with my wife.
I mean, that just sounds annoying. So, yes. It does. But I think what we're going to see is the development of a new way of computing, where maybe the voice interaction is just going to be much better. You're going to be able to tell your computer, hey, can you go do this? And it will go do it. It's going to be this much more interactive mode where the computer just does the right thing to a far greater extent than it can today. Today you drive the car down the road, and the road is maybe very well defined through a well-designed GUI, but you don't want to drive the car. It's like a self-driving car versus a regular one. You just want to say, I want to go to this place, and it will just get you there. And I think computers are going to become much more self-driving. You're going to have self-driving computers. Once you have self-driving computers, it's going to be
I wish I could self-drive my computer 24-7 in all contexts. And suddenly you have a desire for wearables to do the thing you're already doing. So your skepticism, I think, is totally appropriate because what does not happen, but what we think happens, is devices come first. You have to make up something to do on the device.
I'm arguing that actually, if you look back historically at tech, it's the opposite. We figure out these incredibly powerful new use cases on the existing paradigm, and that powerful new use case demands new kinds of hardware. And so that's my case here: look, generative AI is going to make self-driving computers a possibility. Man, I wish we would have used this in the article, self-driving computers, right? Once you have self-driving computers, you want to have the capability of directing that computer at all times. Now, if you could just say something and the computer does it for you, then pulling your phone out to unlock it and trigger it, it's like, why am I bothering? This is a waste of time. I could just say something and it will go do the right thing. So what's going to happen, though, is I think that development of a self-driving computer is going to happen on computers and on phones, the devices that exist. And then suddenly we're going to look up
in five, six years, and maybe this is when Orion is ready, and it's like, of course I want Orion. I don't have to imagine a use case. I want to do the things I'm doing on the computer right now. Just like when the iPhone came along, there was no App Store. There were no apps. And yet you wanted the iPhone. Why? Because it was an internet communicator. That's a very understandable and useful thing. And yeah, the idea that I can use the internet everywhere instead of just at my desk? I got it. I'm sold. It's super clear why the iPhone was such a breakthrough. And the fact that wearables exist now is like smartphones before the iPhone. It's like, yeah, they exist and we can sort of do stuff, but most people aren't going to want this, and blah, blah, blah. And then you fast forward and you can't even imagine not having one. And so that's the inversion that I'm trying to get at.
I'm glad we talked this out because the idea of self-driving computers and that being the way we all do computing going forward, that makes a lot of sense. Yeah, I know. I need to rewrite my article now. Self-driving computer, I think that's exactly it. Once we have self-driving computers –
Suddenly, wearables are super obvious. And that's the point I was trying to make. And once we're all self-driving, suddenly the idea of pulling out your keys and putting it in drive, people just aren't going to want to do that 25 years from now. And it's going to be this antiquated way of interacting with automobiles. I imagine a similar future for computers and computing one day as well. I got tripped up with the iPhone analogy and Uber because...
Uber could never have existed on BlackBerrys. And then iPhones made it possible to have an Uber. You're right. That's a bad analogy. I take the L. I was like... I led us astray.
All right, and that is the end of the free preview. If you'd like to hear more from Ben and I, there are links to subscribe in the show notes, or you can also go to sharptech.fm. Either option will get you access to a personalized feed that has all the shows we do every week, plus lots more great content from Stratechery and the Stratechery Plus bundle. Check it out, and if you've got feedback, please email us at email at sharptech.fm.