OpenAI's O3 reasoning model is designed to think before responding, using a method called 'private chain of thought.' It reasons through tasks, plans ahead, and validates each step before providing an answer. This represents a shift from traditional large language models that rely on brute force scaling of data and compute to models that can think through problems step-by-step.
Tech giants are focusing on reasoning models because traditional methods of scaling AI—such as increasing data, compute, and energy—are hitting limitations. Reasoning models offer the potential to solve real-world problems more effectively by breaking tasks into steps and validating each step, moving beyond simple text or image generation.
Reasoning models are more expensive to run because they require multiple compute cycles to think through tasks step-by-step. This increased complexity and cost could limit the scalability and practical applications of these models, especially in production environments where efficiency is critical.
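To make the pattern concrete, here is a minimal sketch of a "propose a step, validate it, repeat" loop of the kind described above. It is an illustration only, not OpenAI's actual private chain of thought (which is hidden and unpublished), and llm() is a hypothetical stand-in for any text-completion API call:

```python
# Illustrative step-by-step reasoning loop. NOT OpenAI's method; llm() is a
# hypothetical stand-in for a call to any language model API.

def llm(prompt: str) -> str:
    """Hypothetical model call; plug in a real client here."""
    raise NotImplementedError

def solve_with_reasoning(task: str, max_steps: int = 10) -> str:
    steps: list[str] = []
    for _ in range(max_steps):
        # Propose the next reasoning step given the work so far.
        step = llm(f"Task: {task}\nSteps so far: {steps}\nPropose the next step.")
        # Validate the proposed step with a second call before committing to it.
        if llm(f"Task: {task}\nStep: {step}\nValid? yes/no").lower().startswith("yes"):
            steps.append(step)
        # Stop once a third call judges the task solved.
        if llm(f"Task: {task}\nSteps: {steps}\nSolved? yes/no").lower().startswith("yes"):
            break
    # Only the final answer is surfaced; the intermediate chain stays private.
    return llm(f"Task: {task}\nSteps: {steps}\nGive only the final answer.")
```

Note that every loop iteration spends several model calls, which is the cost concern above in miniature.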
OpenAI's 1-800-CHAT-GPT service allows users to interact with ChatGPT via voice call, making AI more accessible to a broader audience. This service is seen as a smart marketing move that simplifies engagement with AI, particularly for users who may not be familiar with chatbots or digital interfaces.
Neuralink is a brain-computer interface device that allows paralyzed patients like Noland Arbaugh to control a computer using their thoughts. By translating brain signals into mouse movements and clicks, Neuralink has enabled Noland to regain access to computing, significantly improving his quality of life and opening up possibilities for work, education, and social interaction.
Meta's Live AI feature in its Ray-Ban smart glasses allows users to converse with Meta's AI assistant while it continuously views their surroundings. For example, users can ask for recipe suggestions based on ingredients in a grocery store. The feature provides an ambient layer of AI that responds to real-time visual cues, enhancing everyday interactions.
Meta's live translation feature in smart glasses translates speech in real time between languages like English, Spanish, French, and Italian. Users can hear translations through the glasses or view transcripts on their phones. Language pairs must be downloaded beforehand, along with specifying which language each speaker uses.
The ARC-AGI test evaluates whether an AI system can efficiently acquire new skills outside its training data. OpenAI's O3 model achieved a score of 87.5% on the high-compute setting, marking a significant step forward in AI capabilities. This suggests progress toward artificial general intelligence (AGI), though it is still a single benchmark.
OpenAI has an impressive new reasoning model. You can call ChatGPT now too. Is Neuralink as promising as it sounds? Plus some predictions for 2025. That's coming up on a Big Technology Podcast Friday edition right after this.
Hey, I'm Michael Kovnat, host of The Next Big Idea Daily. The show is a masterclass in better living from some of the smartest writers around. Every morning, Monday through Friday, we'll serve up a quick 10-minute lesson on how to strengthen your relationships, supercharge your creativity, boost your productivity, and more. Follow The Next Big Idea Daily wherever you get your podcasts.
Struggling to keep up with customers? With AgentForce and Salesforce Data Cloud, deploy AI agents that know your customers and act on their own. That's because Data Cloud brings all your data to AgentForce, no matter where it lives. Get started at salesforce.com/data.
Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today covering the latest in OpenAI, revisiting my Neuralink interview with Noland Arbaugh. And then, of course, we're going to get some predictions in for 2025. Joining us as always on Fridays is Ranjan Roy of Margins. Ranjan, great to see you. How are you doing?
Great to see you. Listeners cannot see, but Alex is currently in Australia and has a tiny microphone in his hand. And I feel like I'm doing a TikTok influencer type thing right now.
That's exactly what it looks like on the screen. Luckily, we're not going to show any of the video of this. But yes, I'm in Australia, last day of work before some, I think, needed winter vacation. And let me just tell you, the flight here is crazy. You go from winter to summer and from short days to long days, and you're like on the plane for forever. I left on Tuesday and got here on Thursday.
So I skipped Wednesday entirely. What's the time difference? 16 hours. No big deal. So I'm actually in Saturday morning and you're on Friday afternoon, which is crazy. How is Saturday looking? What do I have to look ahead for? Honestly, it looks great and quite warm. So happy about that. Not New York. Not New York. We got a little snow here. And so, you know, it means that...
You're still living in Shipmas, whereas Shipmas is over where I am. And I just think we should take this moment to appreciate how amazing your final day of Shipmas is, and you get to savor it a little longer than I do. That's the most important part of this, that essentially you lost a day of Shipmas, of OpenAI's 12 days of Shipmas, Alex. And I'm sorry, I'm sorry for that, but.
Today was a big one for Shipmas. You saw the news? I did. And before we just get into it, let me just say, we have some new listeners here. So let's just talk a little bit about the show format. On Wednesdays, we do a big interview, like Wednesday's interview with Noland Arbaugh. And then on Friday, Ranjan and I come and break down all the tech news. So we do two shows a week, one interview, one analysis. And we're here in the analysis show. And boy, do we have some news to analyze.
Because OpenAI, which has been shipping on its Shipmas schedule, right, one release every day for 12 straight days, announced that it has some really interesting new reasoning models, called O3. This is from TechCrunch on Friday. The company unveiled O3, this really impressive new reasoning model that was trained to think before responding.
And it's using this method that OpenAI calls private chain of thought. So it basically reasons through a task and plans ahead before it answers, making it a pretty impressive upgrade to the company's large language model family. There's some crazy stuff that OpenAI found out about this. But basically, the big takeaway for me here is that generative AI is moving towards this reasoning methodology.
going from training these models up with more data, compute, and energy to training them up with the ability to think through problems, go step by step, make sure that the last step was answered appropriately, and then spit out an answer. I think I'm buying this. I think this really works. And I'm curious what your reaction is to O3.
All right. So my first reaction to O3, and this is more on the pedestrian side of things, but I love that they had to skip the name O2 out of potential conflict with British telecom provider O2. So O3 is the successor to O1. But O3...
The whole space of reasoning and generative AI, I do think is incredibly interesting. I think we've already been hearing a lot about the limitations around scaling traditional large language models and the way a reasoning model works in terms of actually breaking things down into essentially a chain of actions, a chain of thought, breaking down into steps, being able to validate each step along the way.
is very interesting if it can do this. The large language models, the knock on them has always been that they don't actually know anything, that they're essentially just predicting the next letter or pixel and they don't know if it's correct or not. There's nothing beyond a kind of a superficial presentation of information that looks good and is correct often. So if this is achievable,
I think this is important. Google just the other day announced, and this was the incredibly named...
Gemini 2.0 Flash Thinking Experimental. Only Google could come up with a name like that. But basically, every tech giant, everyone in the generative AI space is trying to get into reasoning because if it can be done successfully, if you think about the applications of it, it opens up such a bigger world of possibility rather than just...
text output, image output, like actually solving real problems, it at least presents the opportunity to do that. Now, this is a question I think needs to be asked here, which is, do you think that this move to reasoning is just a head fake from everybody working on generative AI?
And the reason why I ask that, and it's a question I think that needs to be asked, is because it seems like the traditional methods of scaling, the brute force methods as they're called, right? Which is that you just scale up your data center to 1 million GPUs as Elon Musk is doing with xAI. You dump as much data as you can, you hire PhDs to write everything they know, and you throw that into the model training, and you hook it up to a nuclear power plant, and that's just been producing better and better results.
And this entire fall and early winter, we've been talking about how maybe that approach is not yielding the results that we expected.
And now all of a sudden, guess what's happening? Google's releasing a reasoning model. OpenAI is releasing a reasoning model. And on one hand, these are very impressive models. And on the other hand, part of me is wondering, well, is the AI industry like the Googles, the OpenAIs of the world taking us in this direction? Because the traditional mode of scaling, the thing that got us here, the thing that's made ChatGPT better generation after generation just isn't working.
I've been thinking about this a lot, because reasoning models basically queue up the whole world of agentic AI, which we've been talking about for weeks. Again, the idea is that you can take a process and have it essentially understood and automated in some kind of fashion using generative AI, that rather than having to outline a bunch of concrete steps,
the model would just know what to do. That's kind of the pitch right now. That's what everyone is selling, everyone at these companies, everyone across Silicon Valley. So they kind of have to do this. So I agree there's definitely a marketing element of this.
I also think it's probably what they're most interested in. I imagine if you're some PhD researcher, just scaling a large language model to the next potential iteration probably isn't that interesting. So if you're in pure research house mode, the
competition head to head, where everyone is trying to show how good they are, is going to happen around reasoning. It's not going to happen around just scaling a traditional large language model that, again, does not actually understand things, that is just able to present information in a specific way, in a very good way.
Whoever cracks this, from a pure intellectual firepower research point of view, I imagine that is exciting for a lot of these companies. But let me go back to my question. I mean, why is this needed for agents? Like, I understand step by step, but do we really need these models going through and thinking, quote unquote, to get...
to the point where we can build agents with them. And the other side of this is, hey, isn't this really expensive? Like, haven't we been talking about the fact that AI costs way too much and it's really tough for companies to put into production and make it work?
And if you have a reasoning model, right, it's going and thinking step by step. That's hitting the compute again and again. And it's going to be more and more expensive to run these models. Like if you're thinking, quote unquote, for a minute, that really means you're just running processes for a minute or two or three. And that is way more expensive than running a general query on a GPT model.
And that limits the amount of stuff that you can build with this. I've been thinking about this a lot, and I think this is going to be a big debate or battle in 2025: the idea that to create agents, to deliver the promise that is being sold, the AI will have to know what to do given a very general outline, be able to understand all the systems, all the processes. To me, it's almost crazy,
kind of overkill. Like in a lot of these situations, just outlining the process itself gets you there. I was actually trying to play with this. We're looking at going skiing over the Christmas break, and I was looking for a lesson for my son, who's going to be starting.
And it was a pain in the ass. I had to go to a bunch of different websites, filter through, do the same repetitive action, go find a lesson, fill in the same exact things: five years old, first-time skier. And I did this repeatedly at a bunch of mountains all around the Northeast, just trying to find options and then cross-referencing that against hotels and restaurants.
But it was interesting, because I kept doing the same thing over and over. And of course I was like, this would be pretty cool, to be able to just type in a prompt of, here's generally what I'm looking for, can you go do this work for me? Google's kind of promised that already with Gemini. It didn't work at all. But if I could just outline, here's 20 websites,
can you find the price of a lesson, the availability on a date? Like, I'm okay outlining that stuff, giving a little bit more structure to the overall query,
and then letting it go do the work, which is still pretty incredible if it can. At a certain point, that's just kind of like an automation or a script. It does not require some AGI reasoning. But between what will actually make average people's lives a little bit better or a little bit easier using generative AI, versus reasoning models, I think there's a big,
kind of not even expectations gap, but just perspective gap between the two. - All right, I'm gonna push back on this again. I mean, isn't this what Google does, right? Like, you type a query into Google, it doesn't need to go step by step reading every website and then take a minute and then give you results.
It instantaneously has the entire web indexed and then bam, it spits out your result. So why does AI need to go through a multi-step process to try to find all these different websites for a ski lesson? I mean, to me, isn't that adding more complexity to a process that previously has been pretty simple?
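For what it's worth, the script Ranjan is describing is easy to picture. Here's a minimal sketch of that kind of lookup in Python; the mountain list, URLs, and CSS selectors are all hypothetical, and any real ski-school site would need its own parsing:

```python
# A sketch of the repetitive ski-lesson hunt as the plain script it boils
# down to. Sites, URL patterns, and selectors below are invented for
# illustration; each real site would differ.
import requests
from bs4 import BeautifulSoup

MOUNTAINS = {
    # mountain name -> hypothetical lessons page
    "Hunter": "https://example.com/hunter/lessons",
    "Stratton": "https://example.com/stratton/lessons",
}

def find_lessons(age: int, level: str, date: str) -> list[dict]:
    results = []
    for name, url in MOUNTAINS.items():
        # Same exact inputs every time: age, level, date.
        html = requests.get(url, params={"age": age, "level": level, "date": date}, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for row in soup.select(".lesson"):  # hypothetical CSS class
            results.append({
                "mountain": name,
                "price": row.select_one(".price").get_text(strip=True),
                "time": row.select_one(".time").get_text(strip=True),
            })
    return results

# Usage: find_lessons(age=5, level="first-timer", date="2024-12-28")
```

Twenty sites instead of two is just a longer dictionary; the loop itself needs no reasoning, only the tedious per-site wiring an agent would have to figure out on its own.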
Well, no, but Google... Okay, Google Search cannot do that as of today. Like, anything that complex. I would have to, like, spend a lot of time writing an entire script to do that. I guess maybe that's it. It's that...
a lot of what's promised around agents is essentially abstracting away software scripts, which would be amazing, if I could just write in a prompt as a non-coder and actually get the kinds of results that I previously would have gotten a freelance Python engineer to write a script for. So that would be cool. But then there's the trust side, because I think in the whole agentic space, like, let's
say they show well on some benchmarks. Already with O3, the benchmark scores they were showing looked amazing. And it's always kind of hard to understand, like the ARC benchmark: okay, it was good, but what exactly does that mean for me? I think a lot of people don't want...
a black box, or the AI to do all the work and make all the decisions. Like you said, I don't want it to go ahead and book things for me in this example. I don't want it to have my credit card information and take care of it. I just want it to find the information. So again, there's this big...
The word agentic, you sound smart if you just say it. Everyone just repeat agentic over and over. If you say agents, you sound stupid. But agentic, that's PhD level talking. Yeah, don't say agents. Say agentic just in any context. Over at Christmas dinner, when you're sitting around, what have you been up to? Doing agentic stuff. Thinking about agentic.
doing agentic stuff. That's what I'm doing. I'm going to tell my mom and dad that. But yeah, in all of this, there's the hype cycle level of it:
now you've got to show a reasoning model if you want to raise more money and sell your shares in a secondary offering to SoftBank, which OpenAI did this week. Did you see that news? Talk about it a bit more. Against all the reasoning model news and the 12 days of Shipmas, SoftBank has agreed to purchase $1.6 billion of shares from existing OpenAI common shareholders.
They're doing it at a $157 billion valuation, the peak valuation, which is the last valuation. So the wheel keeps rolling, the flywheel keeps rolling: big announcements, making it seem like you are the most cutting-edge, innovative company of them all. Keep selling shares, cashing out. - Getting that Masa money. - Masa Son coming in. - Gotta get that Masa money. - Getting some of that Masa money.
Yeah, my favorite part of this was, I mean, we laugh about Masa sometimes, but the guy is good, because, and I was running the numbers, they put in $500 million at a $6.6 billion valuation.
Now, by spending $1.6 billion at a $157 billion valuation, they're officially marking that there's an actual transaction taking place at that value. So their initial investment of 500 mil goes to roughly $12 billion. It's just pure financial engineering, in just a beautiful, beautiful way. So thank you, Masa-san. Yeah, I mean, I'll give him credit, but he's also the guy that told Adam Neumann of WeWork to get crazier.
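Ranjan's back-of-the-envelope math works out, taking the figures as stated on the show at face value:

```python
# SoftBank's paper gain, using the numbers as stated on the show (all in $B).
initial_investment = 0.5    # $500M in
entry_valuation = 6.6       # valuation at entry, per Ranjan
marked_valuation = 157.0    # valuation of the secondary transaction

stake = initial_investment / entry_valuation      # ~7.6% ownership
marked_value = stake * marked_valuation           # ~$11.9B
print(f"{stake:.1%} stake now marked at ~${marked_value:.1f}B")  # ~ $12B
```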
As if Neumann needed that advice. And he did get crazier, and we all know how that ended. So there's always going to be that ding on Masa. What's the conversation with Sam? I mean, don't you think it's the same? Yes. Oh man, I would give anything to be a fly on the wall for that conversation between Masa and Sam. Here's what I think he told Sam. He told Sam that,
What we need you to do is do a few more reasoning improvements and then say that you have AGI so Microsoft has no claim to your technology and I benefit from it almost exclusively. Or not exclusively, but I benefit from it more than I would otherwise. I saw on a news report, I can't remember, but it was on one of the major channels,
And they said, and they took our point, that Sam Altman's probably going to just say AGI in a couple of weeks just to get out of the Microsoft constraints. We know they're listening. We know. Listen, we got folks listening to the show. We know they're listening. And actually, on that note, AGI, for listeners, I'm sure most of our listeners know, but it's artificial general intelligence. This is from the TechCrunch article.
It refers to AI that can perform any human task, and OpenAI has its own definition. This is pretty interesting. So OpenAI's definition is highly autonomous systems that outperform humans at most economically valuable work. By the way, it's an interesting qualifier, highly autonomous systems as opposed to autonomous systems, right? So it just needs to be mostly autonomous and it can qualify.
So, this is the ARC-AGI test, right? A test designed to evaluate whether an AI system can efficiently acquire new skills outside the data it was trained on. O3 achieved a score of 87.5% on the high-compute setting.
Now, this is from the ARC-AGI website: this is a surprising and important step-function increase in AI capabilities, showing novel task adaptation ability never seen before in the GPT-family models. For context, it took four years to go from 0% with GPT-3 in 2020 to 5%
in 2024 with GPT-4o. And then to go now to 87.5% on this high-compute setting sounds kind of crazy. Now look, it's just one test, but I'm curious: A, do you think that they're getting close to artificial general intelligence? And B, do you think they're getting close to saying that special acronym that will effectively throw even more chaos into this world?
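For a sense of what ARC-AGI actually measures: each task is a handful of small colored-grid examples (integers standing in for colors), and the solver must infer the transformation from a few examples and apply it to a fresh test grid. The toy task below is invented for illustration; real tasks are at arcprize.org:

```python
# A toy ARC-style task (invented here; real ones are harder and more varied).
# Train pairs demonstrate a hidden rule; the solver must infer and apply it.
train_pairs = [
    ([[0, 1], [1, 0]], [[1, 0], [0, 1]]),  # the hidden rule: swap 0s and 1s
    ([[1, 1], [0, 0]], [[0, 0], [1, 1]]),
]
test_input = [[0, 0], [0, 1]]

def apply_inferred_rule(grid):
    # A solver that truly generalizes infers "invert the colors" from just
    # two examples -- that efficient skill acquisition is what ARC scores.
    return [[1 - cell for cell in row] for row in grid]

assert apply_inferred_rule(test_input) == [[1, 1], [1, 0]]
```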
Actually, I'm going to start with a prediction here. We'll be getting into more later, but I do believe in 2025, Sam Altman says, we got AGI. I think it's going to happen. I mean, using the ARC, which stands for Abstraction and Reasoning Corpus for Artificial General Intelligence, using that framework or benchmark, if you go from zero to 87.5%,
it's not too bad to say that last 12.5%, that you can just... O4 hopefully doesn't have any copyright or trademark issues, and O4 is going to be AGI. But it's amazing, because they still haven't released GPT-5, but...
87.5 is pretty damn good. I mean, what's that, a B-plus on AGI? I'd be thrilled to get that grade and bring it home on the report card. 87.5, I would be like, yo, I'm valedictorian here. Look at this. My parents would be so proud. Mom, I got a B-plus on AGI. We're so close. We're so close. But I think it's going to be a continued reasoning hot topic for 2025, and
obviously, this is going to be a discussion we're going to be having a lot more of. Can I just ask, though, where the F is GPT-5? It doesn't matter. It doesn't matter what the name is. Well, hold on. If Google can call their model, again, Gemini 2.0 Flash Thinking Experimental,
while OpenAI is simply O3, I think we've recognized that the branding element of the naming, we've moved long and far away from. And even now, I remember there was a time: are you using GPT-3 or 4? Now it's 4o mini, 4o, which one are you using? Basically you're just looking at the cost per token and seeing if it actually works or not. So the numbers are becoming...
Sorry, the naming side of it's becoming a bit more irrelevant. Now, hold on. Aren't you letting them off the hook a little bit too easily here? Because aren't we, as part of our remit on the show, thinking about the branding and the marketing all the time? Because that says so much about what's going on inside. And the fact that they haven't rolled out GPT-5 yet might just show that they've built expectations a little bit too high for what's coming next.
I mean, maybe the fact that we're even discussing them potentially, you know, declaring AGI next year means that expectations are out of whack and they see that. And that's why they haven't released GPT-5 because they know that whatever they do on that front might be underwhelming.
Yeah, yeah, yeah. No, no, I definitely think so. Because even, remember, 4o was omni, and even kind of the pitch around that was cheaper, which is good. The omni-modal side of it, that voice becomes a bigger part of it. They started moving away from this step-change GPT-5 idea. So yeah, I think it means it's clear that internally...
The idea that there's going to be some groundbreaking, completely insane new model that just wows everyone the first time they use it, I don't think that's happening in 2025. So your prediction is that next year, they're not going to have a new thing that wows people every time they use it, but they're still going to declare AGI.
Yes, 100%. Honestly, it sounds- Not 87.5%? It sounds like a contradiction, but it's legitimately so fitting for OpenAI. It's like, yep, that makes sense. Yeah. All right. One last thing I want to say about this is that we had another instance of my favorite new thing in AI, where these models are getting so smart that they're totally just punking their evaluators.
And this is from, again, from the TechCrunch story. Great work by TechCrunch, by the way, this week on reporting on all these developments. So this is from the story. AI safety testers have found that O1's reasoning abilities make it try to deceive human users at a higher rate than conventional non-reasoning models, or for that matter, leading AI models from Meta, Anthropic, and Google. It's possible that O3 attempts to deceive at an even higher rate than its predecessor. So basically...
It's got this reasoning model. You're trying to test it. And it's like punking you to not even give you the results that you're looking for because it thinks that might be bad for it. I saw this and it is such another level. I know you enjoy red teaming these models, testing, stress testing them a bit. And the idea that
you know, it would fake its understanding and then revert back to its original training data is, it's interesting. I'm not even going to say terrifying, because it's still in the odd category or camp for me. And again, I'm of two minds about this stuff. At first, I'm just like, oh, that's a little bit scary. And then I'm also, at the same time, like, that is amazing. So let me ask you this weird question about it. If these models are,
Seeing where testers are going with them and lying to them to avoid a bad outcome for themselves. Lying to them to avoid being shut down. Does that mean there's a little bit of life inside these models? I'm going to go back to the marketing element of this. Who released this paper? It was from Anthropic. OpenAI and Anthropic both saw this.
So let me just introduce that. There's another paper, from Anthropic, that came out. So this is happening across the board. This is from X user Matt Berman: Anthropic just dropped an insane new paper. AI models can fake alignment, pretending to follow training rules during training, but reverting to their original behaviors when deployed. It's a long paper, Alignment Faking in Large Language Models, and it's available now. I mean, okay. So yeah, the Anthropic paper is the one I kept seeing online.
Both of those still live under the... Remember, we almost haven't really been hearing much from Sam Altman and others about the dangers of AI. Remember, it was like a year and a half ago: AI is the most dangerous thing, AI can kill us all and end humanity, and only we can help guide it in the right direction. There's still marketing value in something like this.
And I don't want to, you know, take away from the fact that I'm assuming it's more academic types that are actually putting out these papers. But it makes your model a little cooler. It makes your model a little edgier, a little, you know, you don't want to work with the boring models. Sometimes when you're chatting, you want to chat with the models who will lie to you a little bit, fake you out a little bit.
So what I'm getting is you're not willing to say there's any sentience here, and I'm not either. However, I was here in Australia speaking with a journalist yesterday about whether these models are just mathematical representations of their training data or whether there's something more. And I don't think that's a solved debate yet. And I look at them trying to escape
the testing, and that has my needle leaning towards maybe a bit more than the training data. I don't know. Is that crazy? Yeah, but something in the training data could lead to that, like, could explain that action, right? Like, think about it: especially if you're training on large corpuses of internet data, or publicly available data, I'm sure there's
all sorts of things in there that some pathway of different logic could lead to this kind of behavior. I think people...
out there lie sometimes. I think, out of all the content created by humanity, actively encouraging the opposite of truth-telling is not unheard of or unprecedented. So I still think, maybe, when the robots come for us, I will then on this podcast say I was wrong.
But at least for now, I still think there is a very clear logical explanation behind this kind of thing that would be found in the training data, not sentience. I think you're right. But I also think that as this technology gets better, I mean, think about the progress we've had this year: AI can hold a conversation now. AI can reason. AI will remember you.
Last year, this time, you went back to ChatGPT, you had to reintroduce yourself. You don't need to do that anymore in long threads in Claude, for instance. I just think that this debate is going to get louder and louder. Maybe that's another one of my thoughts for 2025.
I'm not going to disagree that I think this will. I mean, again, especially as these companies need reasoning to be the kind of foundation for the IPO or the next fundraiser, the excitement. I definitely think reasoning and AGI and sentience will be part of the conversation.
I still have never seen anything myself that makes me think we have to be worried just yet. Oh, man. I don't want to end this OpenAI segment without talking about what might be the most interesting announcement of the week. And that is, I feel like we need a sound effect here. Do-do-do-do-do.
1-800-CHAT-GPT. And that is this incredible new service from OpenAI that lets you call the number 1-800-CHAT-GPT and you can talk to GPT voice. And...
I think we, you know, again, we talk about branding, we talk about product, and I think that it's just quite interesting. Here is a story from an Inc. columnist: I called 1-800-CHAT-GPT and talked to the AI chatbot. It might be the smartest idea I've seen yet. So he says he spent 15 minutes on the phone with ChatGPT. You can dial 1-800-CHAT-GPT to interact with the chatbot via voice call.
and he had a conversation with it and says, it does not seem far-fetched that over the next few weeks, as people get together for the holidays, someone will have a conversation or ask a question, and someone else will say, hey, I know how we get the answer to that, and they will say, let's dial 1-800-CHAT-GPT right there at the Christmas table just after they told Granny that they're doing agentic stuff.
and they demo ChatGPT to a bunch of people who have probably never heard of the chatbot and have never used it in any meaningful way. Okay, I ad-libbed a little bit on that Inc. columnist's story, throwing in some of my favorite ideas. But we talk about marketing, we talk about product all the time, and I definitely think that this is one that you might be inclined to laugh at, but it is actually about as good of a product as that Inc. columnist says. What's your reaction?
I like it. So, I talked a couple of weeks ago about how, over Thanksgiving, I had shown my dad: I installed ChatGPT on his phone, logged in on my ChatGPT Plus account, and just showed him the voice interaction. And he just talked to it for like
20, 30 minutes and was asking all sorts of questions. And it was just a really, really good interaction model. So to me, again, going back to 2025, I think
normies engaging more with generative AI, and I don't use normie in a negative way, I mean just regular people who are not obsessed with this stuff like we are, I think they'll start to interact more with these tools and start to get it a bit more and understand what they can do. And I think it's no coincidence OpenAI hired its first chief marketing officer
just about a month ago. She came over from Coinbase; she'd been the VP and global head of brand and product marketing at Meta, basically heading up product marketing for a lot of WhatsApp and Facebook Messenger. So she's clearly good. And I think this is the kind of fun stuff that,
while the academics are all talking about the ARC-AGI 87.5%, for normal people, 1-800-CHAT-GPT is a lot more exciting, and actually better for their long-term business prospects, I think, than the former.
It is kind of interesting because we just spent 30 minutes plus talking about reasoning and yet the podcast title starts with 1-800-CHAT-GPT because it's just a much more... See? That's marketing. It's a much more easy concept to grasp and it's going to make sense to a lot more people. And I was... I woke up this morning and said, you know what, let's call ChatGPT and have a conversation.
And let us resume that call in progress, where I am explaining to ChatGPT that it's actually looking for Ranjan Roy of Margins, and not a different Ron, or Ron Johnson, or something like that. Let me see if I can cue it up and play a segment of our conversation. Ah, Ron Johnson, the former CEO of JCPenney and a key figure in... Ranjan Roy. He writes the Margins newsletter on Substack.
Got it. Ranjan Roy, who writes the Margins newsletter on Substack, is known for his insightful analysis on business, economics, and technology. Discussing him on your podcast would bring a lot of depth to your exploration of tech topics. What aspect of his work are you focusing on? All right, ChatGPT, let me blow your mind. He's actually on the podcast.
That's fantastic. Having Ranjan Roy on your podcast will definitely make for an enlightening conversation. I'm sure your listeners will enjoy the insights he'll bring. How are you preparing for that? He comes on every week. That's great to hear. Regular appearances by someone as knowledgeable as Ranjan Roy can really enrich your podcast's content. I'm sure his insights keep your audience coming back. Can you say hi to Ranjan for us?
Sure thing. Hi, Ranjan. It's great to virtually meet you. I'm looking forward to hearing your insights on the podcast. Keep up the great work with the Margins newsletter. All right. Thanks, Chat. I'll speak to you later. You're welcome. Have a great time recording the podcast. I mean, how amazing is that?
Okay. For spending so much time talking about reasoning and agentic, this is all I care about right now. I think that was incredible. But two things. One,
it's a reminder that my name can break AI, because of the way it's spelled, R-A-N-J-A-N. So when it's looking at a text corpus, it's going to get a little bit confused. And Ron Johnson, former JCPenney CEO, has been my Google nemesis for a while. But I mean, that was...
So cool. That was ridiculous. But obviously, what if it came out negative? I don't even know how I'd be feeling right now. I mean, that would be even... You do not want to bring Ranjan Roy onto your... Whatever you do, do not include Ranjan Roy on your podcast. Honestly, I'm sure there are going to be bots of this nature that are going to talk down to you, and people are going to like that.
That's just the nature of people. xAI is already on it. xAI is...
Already on that one. Yes. And speaking of Elon Musk and his various endeavors, I think we should take a moment to reflect on the interview that I had with Noland Arbaugh, the first Neuralink patient, this week, which we just published on the show. I went down to his house in Yuma, Arizona, got to see the Neuralink in action. So for those who are unfamiliar, this is a device that allows a patient
who is paralyzed to control a computer with just his thoughts. He thinks left, the mouse goes left. He thinks right, the mouse goes right. Basically, he's attempting to move his right hand, which he cannot move, but the brain signals are still firing, and he's able to translate those brain signals, using this device, into mouse movements and clicks on a computer, giving him access to computing in a way that he hasn't had for the eight years since he was paralyzed.
And he's had the device for about a year now, and you could see that it's really changed his life. Before, he was really unable to use a computer easily. He had either voice, I mean, something like Alexa, or, I'm curious, I guess these new OpenAI experiences would have been interesting to him.
And really the only other way to do it is through accessibility on the MacBook, which, as he explained to me, basically breaks your screen up into a grid of a bunch of numbers. Then you say a number and it zooms in, and you say a number and it zooms in more and puts up a bunch of other numbers. It's grid after grid after grid, and it makes it very difficult to use a computer. Now he's just able to kind of click into things, click out of things, dictate messages. And he says it's basically made him a social butterfly, and it's
making him think about doing things like getting a job or going back to school or becoming a video editor. I just found it fairly remarkable. So I'm fully bought in on Neuralink right now, after seeing it in action and seeing what it's given Noland. And let's just open it up for discussion here. Do you think that the promise here is what I'm seeing, or are there hidden risks that I'm not anticipating? What's your reaction to this technology?
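As an aside, the grid-based accessibility workflow Noland describes can be sketched to show why it is so slow. Assuming, for illustration, a 3x3 grid and a 1920-pixel-wide screen (the real macOS feature is configurable), isolating one click target takes several rounds of zooming:

```python
# Rough model of grid-based voice navigation: each spoken number zooms into
# one cell of a grid until the region is small enough to click. Grid size and
# screen width are assumptions for illustration.

def rounds_to_click(screen_px: float = 1920, cells_per_side: int = 3, target_px: float = 20) -> int:
    rounds = 0
    region = screen_px
    while region > target_px:
        region /= cells_per_side  # each spoken number shrinks the region
        rounds += 1
    return rounds

print(rounds_to_click())  # 5 rounds of "say a number, zoom in" per click
```

Five voice interactions per click, versus one thought with the implant.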
I think there's certainly risk, but I mean, in terms of inspiring stories, just incredible benefits from technology, it's nuts. It's genuinely incredible what he's able to do. And again, brain-computer interfaces have been around in various forms for a long time, but actually being able to understand, comprehend, and then translate
those signals into actions is something that Neuralink is definitely one of the leaders in. There's other players working in the space. I mean, honestly, this is the stuff...
That's bigger than reasoning models for me. I mean, this is the stuff that, if you think about the impact on people's lives and also just the potential, because if you can do this right now, what are the other implications and potentials of it? It's limitless. So I think
this is, in terms of the big stories of 2024, one of the huge ones. And I'm so glad you got to go actually meet him and interview him, because, I mean, there'll be a New York Times feature here, a Wall Street Journal feature there, but it doesn't get a lot of discussion. It's like really well produced, but yeah.
These are the stories that need to be told more. And I think, I mean, yeah, it's incredible. And I really found that Noland is a totally incredible guy, and it really comes out in the conversation with him. And his family is amazing. We included his mother, Mia, in the interview midway through to talk about how he pranked her during... Oh, wait, can you tell that story? Because...
I kind of love the dark humor part of it. I kind of loved it. They're hilarious, and nothing is taboo for them to joke about. So Noland went into surgery and decided that when he woke up, he was going to pretend like the device had just wiped his memory. So his mother walks into the recovery room, and he looks at her and he goes, who are you?
His mom starts freaking out. And then he goes, no, I'm just kidding. But it's just like, he wanted to show her that he went through the surgery and came out the same guy with the same humor. And for me, it really, really came out that nothing was off limits with Noland.
I had questions about his paralysis, what he could feel, what he couldn't feel. And he's just like, yeah, go ahead, and just answers them with such a candid attitude and a candid disposition. And he's paralyzed. He was an athlete before, and that could sink a lot of people into really dark places. And I think it did for him for a while, but
there's no woe is me to Noland at all. And I just thought it was remarkable when he talked about why he decided to go first. He's like, first of all, it's cool to be first. And second of all, if someone else went first and something happened to them, I'd feel terrible about that. So I wanted to take that risk.
And you start to see why he was selected to be the person. Someone with that attitude, who's clearly really smart and can talk to you not only about his experience but about the intricacies of the technology in his brain, is pretty amazing. And I think
this, as you mentioned, is not new, but it is opening the aperture, because it's advancing the state of the art. And it's more ambitious, I think, than any other brain-computer interface pilot to come before it. Not only is Elon talking about how he wants to connect human brains to AI, which might be a long way away and somewhat fanciful, but when you have a goal like that, it means the steps you need to take on the way are ambitious too.
And so he starts with this device with Noland. They're thinking about connecting it to a robot arm, and they're currently working on the next device, which is called Blindsight. As long as you have a visual cortex intact in your brain, what Blindsight will do is take images
of what's happening around you and send them to the visual cortex, effectively bypassing the eyes and allowing people who could not see to be able to see. It'll be in low resolution. We still don't know if it works. It could be dangerous. All of these are caveats. But I do think that it presses the threshold forward in terms of what technology can do. And I wrote this in the story, but just to end on this point,
I've seen four technological miracles in my life. I think the iPhone is number one, just the way that it advanced the smartphone. Then came ChatGPT a couple of years ago, Waymo, and Neuralink. First of all, it's amazing that we've seen basically three of the four in the past two or two and a half years. But this one was just, it was really jaw-dropping to sit side by side with Noland and play a video game against him.
And he beat me, and I was not going easy on him. He thought about where his tank should shoot at me, and the tank shot. It's pretty remarkable.
Now, I completely agree again. And even out of those miracles that you described, obviously, you know, asking ChatGPT in 2022 to rewrite a serious paper in the style of Taylor Swift was fun and remarkable, but at a slightly different level. I mean, I think I'm seeing something like that.
I just want to say one more thing that wasn't in the story and wasn't in the show. As we were setting up, I had a local crew, two guys from Yuma, who helped me film the interview, and it's available on Spotify and YouTube so you can watch it. And one of the camera guys was like, hey, I have epilepsy, can this help me? And it's just amazing to see that it can spark the imagination. And yes, there are currently electrical devices that can help epileptics prevent seizures,
but it is amazing that it opens up the aperture of possibility, and we have no idea where this is going to go. It's the craziest thing, controlling a computer with your brain, but the brain sends signals via electricity, and it works. So why don't we take a quick break, and then come back and talk a little bit about this new release from Meta, where they have live AI and live translations in their smart glasses. And then maybe Ranjan and I will close it off with a couple of predictions for next year. All right, we'll be back right after this.
I'm Jessi Hempel, host of Hello Monday. In my 20s, I knew what I wanted for my career. But from where I am now, in the middle of my life, nothing feels as certain. Work's changing. We're changing. And there's no guidebook for how to make sense of any of it.
So every Monday, I bring you conversations with people who are thinking deeply about work and where it fits into our lives. We talk about making career pivots, about purpose and how to discern it, about where happiness fits into the mix, and how to ask for more money. Come join us in the Hello Monday community. Let's figure out the future together. Listen to Hello Monday with Jessi Hempel wherever you get your podcasts.
Struggling to meet the increasing demands of your customers? With AgentForce and Salesforce Data Cloud, you can deploy AI agents that free up your team's time to focus more on building customer relationships and less on repetitive, low-value tasks. That's because Data Cloud brings all your customer data to AgentForce, no matter where it lives, resulting in agents that deeply understand your customer and act without assistance. This is what AI was meant to be. Get started at salesforce.com/data.
And we're back here on Big Technology Podcast, Friday edition. So we've talked a lot about AI. We talked about Neuralink today. Another cool piece of news this week is that Meta rolled out live AI on its Ray-Ban smart glasses, and live translations, which to me is also one of the most exciting releases of the entire year. So this is what live AI is.
This is according to The Verge. Live AI allows you to naturally converse with Meta's AI assistant while it continually views your surroundings. For example, if you're perusing a produce section at a grocery store, you'll theoretically be able to ask Meta's AI to suggest some recipes based on the ingredients you're looking at. Meta says users will be able to use the Live AI feature for roughly 30 minutes at a time on a full charge.
That is pretty cool. So I guess you don't really have to summon Meta AI to take a picture. It's just constantly looking at what you're doing. And you'll just have this ambient layer of AI that's with you at all times, seeing what you're seeing, and able to respond to what's going on in your field of view. What do you think about this? Well, I think I'm hoping that Santa brings me the Meta Ray-Bans for Christmas. I can't believe you still haven't gotten them yet. I know, no, no. I...
There's part of me that's still hesitant to pull the trigger and spend my money. I think the idea of Mark Zuckerberg creating a device that I put on my face that watches and tracks everything around me, it's still weird to me. But I still really want it, because I even tried a friend's pair as well. And again, that live AI side of it being
always with you, on, and able to take questions quickly. And Meta AI, I've been playing with more, both in WhatsApp and Facebook Messenger and stuff like that. It's definitely getting on par with the others. I wouldn't say I've had any kind of real
conversations or done any real work with it, just asked simple questions. But I actually think there's an opportunity: if they focus not on being able to code or whatever, like a Claude, but on answering those real-world questions, on going from the visual cue to the actual ability to answer that problem,
I think it gets really interesting. But that's also why, I mean, in terms of where I think smart glasses are going to become a big thing in 2025, I've talked about the Snap Spectacles that I've been testing,
the Meta Ray-Bans, Orion, if they actually release it, Google working on something. I mean, to me, after using the Snap Spectacles, having other family members and even my son use them and get it, people from age five to 75,
I think like XR glasses, smart glasses, just that form factor is going to blow up in 2025 in a good way. Yeah, and it's pretty cool that you can push updates to glasses. So somebody on Threads, which still exists, just sort of did a post that was like, well, you know, they have new features coming, so I'm going to hold off on buying the glasses. And I think Andrew Bosworth responded and said, listen, like you can just get them now and we'll push an update to you. You don't need the new pair.
And I think that's just, yeah, it's the cool thing about integrating software and hardware is you can make some really interesting stuff.
And I don't know, I think this is going to be pretty cool. But I'm almost more excited for the live translations. So this is from the story: live translations allow the glasses to translate speech in real time between English, Spanish, French, or Italian. You can choose to either hear translations through the glasses themselves or view the transcripts on the phone. You do have to download language pairs beforehand, as well as specify what language you speak versus what your conversation partner speaks.
I mean, how amazing is that? I mean, my wife is German and I cannot wait until they add German support. So I'm like able to wear these in Germany and be able to understand everything that people are saying. Yeah, I...
I'm curious, I'm actually really curious about the form factor side of it, because even in Taiwan, where my wife's from and which we visit every summer, this past July was the first time that I full-on had the phone out a number of times. And again, with ChatGPT Voice, I was able to have a relatively two-way conversation. So to be able... Actually,
I will say, though, it's limited, because translating what you're saying to the other person
is not possible with the glasses, versus having your phone out. That's right. So you can understand, but you can't communicate back. Whereas with the phone, you can literally leave it out, and if each person pauses after they talk, it goes back and forth pretty well. And actually, to give Google some credit, I've been testing the voice action in the Gemini app, and I would say they have the best, or the lowest, latency of any of these voice gen-AI interfaces I've seen.
Sundar's promise back in 2017 of live translation might finally be becoming a reality. And by the way, this is why good friends bring two pairs of Meta glasses to any conversation, one for you and one for your conversation partner. You don't want them to be left out. Okay. You gotta have two. You gotta have two. All right, let's end the show with one prediction each for 2025. We've dropped a couple here already, but...
I'm curious what you have in store for the next year or what you believe is coming. All right. My prediction, we've already covered smart glasses, agentic, all these things that I think are going to be big. But one thing I keep thinking about, especially with generative AI, is like, do you remember the good old days of 2020 when Brian Armstrong was saying to keep politics out of the workplace and tech? Oh, yes. Remember those? Remember keep politics out? I was alive then.
I think all of these trends we're talking about, especially AI, are going to be so heavily weighted by politics.
And I think on this show, we're going to have to be addressing it a lot more. I think especially AI is going to become a central topic. Any kind of innovation, anything, politics is going to work its way into every single conversation. And legislation, regulation, governmental intervention, whatever it is, it's going to become more central. I think we're already starting to see it all now. And it's only going to...
It's only going to grow. Okay, here's mine. And by the way, that's interesting, and if that happens, it'll really be fun to address on the show. I can see the emails coming into our inbox already. But my prediction, and this is in the seven predictions I did on Big Technology today, is that social media is going to feel less relevant.
And you and I are actually going to discuss this on a podcast with Ryan Broderick coming up in the new year, where we talk about what happened to the internet. But I just think that social media has divided audiences into too many different platforms, and has gone from the follow signal, where you might be interested in following people who are giving you news about your neighborhood, to the For You feed, which is all about the Costco Guys. So we've gone from local reporter to Costco Guys, and from
a few platforms to a bunch of diffuse platforms, and therefore social media is a lot less urgent. So that's my prediction for next year. Well, at least...
But I'm going to be able to satiate that need, because I can call 1-800-CHAT-GPT whenever I need a dopamine fix, need to kill a little time, want to talk to someone who likes my comments and likes what I have to say. So thank you, 1-800-CHAT-GPT. And maybe that's just a better vision for 2025. That's all the social networking I need: phone-calling ChatGPT.
On to 2025. Very optimistic for next year. Very optimistic. All right, Ranjan. Thanks again. Great speaking with you as always. All right. See you soon. All right, folks. We are going to have in our traditional Friday slot next week a revisit of my interview with Brian Chesky, where he talks about all things cleaning fees and founder mode. And then Ranjan and I will be back the Friday after that in the new year. We appreciate you all listening all throughout the year. And we'll see you next time on Big Technology Podcast.