People
Host
A podcast host and content creator focused on the electric vehicle and energy space.
Topics
Host: I think investment and technology spending in the AI field is showing a significant growth trend right now. The sharp rise in the valuation of Ilya Sutskever's Safe Superintelligence company signals growing investor confidence in the AGI space. At the same time, large tech companies like Amazon are substantially increasing their capital expenditures on AI, reflecting optimistic expectations for where the technology is headed. Even though the business models may be unclear in the short term, everyone firmly believes in the enormous value on the other side of AGI and is willing to invest for the long term. This trend suggests AI technology will develop rapidly over the next few years and have a profound impact on every industry. Personally, I think this investment boom is a key driver of AI progress and worth watching closely.

Host: I think Amazon's increased capital expenditure shows its long-term confidence in cloud computing and AI. Amazon CEO Andy Jassy argues that falling technology costs won't reduce total spending, which is consistent with the Jevons paradox. AWS's rapid growth also supports the investment decision. Personally, I think Amazon's strategy is sound; they are preparing for future growth. That said, investor concerns about the increased spending deserve attention too, and Amazon will need to strike a balance between growth and profitability.

Chapters
This chapter analyzes AI's presence in Super Bowl commercials, focusing on Meta, Salesforce, and ChatGPT's ads. It discusses the effectiveness and reception of each ad, highlighting varying approaches to showcasing AI's capabilities and brand identity.
  • Analysis of AI-focused Super Bowl commercials from Meta, Salesforce, and ChatGPT.
  • Discussion of ad effectiveness and audience reception.
  • Exploration of different approaches to marketing AI in a mainstream setting.

Shownotes Transcript


Today on the AI Daily Brief, five observations on Sam Altman's three observations on AGI. Before that in the headlines, AI ads hit the Super Bowl. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes. Welcome back to the AI Daily Brief headlines edition, all the daily AI news you need in around five minutes.

Today, our title topic is going to be the AI Super Bowl commercials, or rather the commercials for AI in the Super Bowl. This is something I have some amount of experience with and some opinions on, and it's just generally a fun discussion. But for the sake of completeness of the headlines, I have two stories that I wanted to hit first. For those of you who are watching on YouTube and just want to get straight to the Super Bowl content, I would suggest skipping ahead about two minutes.

First up, former OpenAI co-founder Ilya Sutskever is raising another round of investment for his Safe Superintelligence company, and this time the valuation appears to be at $20 billion, up from $5 billion last September. The seed round for Safe Superintelligence brought in $1 billion from five investors, including Sequoia, Andreessen Horowitz, and DST. At that stage, the fundraising was based on little more than Sutskever's reputation as a world-leading AI scientist, and at this point we still know very little about the company's work.

The startup is pre-revenue because they have made clear that they avowedly will not be distracted by business models or revenue or any such things. In fact, we've talked before about the idea that they are the purest-play expression of the notion that the prize on the other side of AGI is so valuable that it might make more sense not to be distracted by selling products in the short term.

Alongside Sutskever, the founding team includes former OpenAI researcher Daniel Levy and former Apple AI projects lead Daniel Gross. The company recently opened a small office in Tel Aviv to complement headquarters in Palo Alto. And basically, that's all the information we got. Now, some people online were a little puzzled. Joe Wilkinson writes, What do you have to show when you raise a round like this? Like, do they have something that's almost there? Superintelligence that isn't safe? Do they just, asterisk, have something like o3 that they're using to prove that they can get to superintelligence? Could be even more simple than that.

At this stage, with this level of competitiveness, it could either be A, some serious and meaningful progress, or B, just some investors who want to get more capital in and are willing to mark up the value of their own previous investments.

Speaking of big investments, Amazon is upping the ante on AI spending. While reporting fourth quarter earnings, Amazon said that the current pace of $26 billion in quarterly CapEx would be, quote, reasonably representative of what the company will spend this year. That would equate to about $105 billion this year, far exceeding the $86 billion expected by analysts.

It's also a big jump over spending commitments from rivals. Meta guided a 60% increase to $65 billion in spending this year. Google has announced plans to spend $75 billion. And Microsoft CEO Satya Nadella said he's good for his company's $80 billion. This puts Amazon's spending in the same stratosphere as the projected cost of Project Stargate. That $500 billion project was earmarked at $100 billion for the first year of its four-year plan.

Amazon CEO Andy Jassy joined other big tech CEOs in rejecting the idea that DeepSeek's cheaper cost would mean a decrease in demand. He said, I think one of the interesting things over the last couple of weeks is sometimes people make assumptions that if you're able to decrease the cost of any type of technology component, in this case, we're really only talking about inference, that somehow it's going to lead to less total spend in technology. And we've never seen that to be the case. In other words, it's Jevons' world and you're all just living in it.

The other business logic here is that Amazon's cloud division is growing at a rapid clip, and so scaling to support that is just a reasonable bet. AWS generated $107 billion in revenue last year and is expected to hit $150 billion by 2026. And yet, investors are still kind of jittery about the expanded spending on AI, alongside soft earnings projections. In addition to guiding a big CapEx increase, Amazon's revenue and earnings projections came in under analyst estimates, leading the stock to drop 4% in after-hours trading.

But with that, let's shift over to the ads for AI in the Super Bowl. AI showed up in a number of different places with varying degrees of focus. For example, it was a part, although just a part, of Meta's ads, which were focused on their Ray-Bans and were mostly just funny scenes between Chris Pratt and Chris Hemsworth with a cameo from Kris Jenner. It didn't beat you over the head with AI. It just leaned into a product that has been very successful and is clearly a platform that Meta wants to build on.

Much louder in their focus on AI was Salesforce, whose multiple spots all featured Matthew McConaughey and Woody Harrelson and were explicitly focused on Agentforce. Basically, the setup premise was that Agentforce helps businesses avoid the bad situations Matthew McConaughey kept finding himself in. In this way, they were trying to show the value of AI, particularly to the businesses who would be Agentforce customers. Reactions to the Salesforce spots were pretty middle of the road. Some people liked them, some people didn't love them. Mostly, they got a chuckle and moved on with their lives, which isn't necessarily where you want to be with your Super Bowl spots, given how much they cost and the opportunity they represent.

Now for context, and at this point most of you know this, I have in fact made a Super Bowl commercial, at this point probably one of the most infamous of all time, which is of course FTX's Super Bowl ad with Larry David from three years ago. It was actually just placed on Entertainment Weekly's list of the 23 best Super Bowl ads of all time, gratifyingly not only because of the infamy it lives in now and how accidentally perfect it is that Larry David actually rejects crypto at the end, but because, as they say, quote, it succeeded in tackling the biggest Super Bowl ad challenge of them all, introducing a relatively new idea, which is a much harder sell than shilling toothpaste or hamburgers. Point being, regardless of what you thought of that ad, I've spent a bunch of time thinking about how to introduce a big new technology category on that Super Bowl stage. And that brings us to the ChatGPT ad, one of the most talked about, at least when it comes to the AI set.

The ad shows the progress of human invention. Honestly, it hits some similar themes to what we had explored, and something you've seen in the past: you take a current innovation and connect it back to the history of human progress as a way to ground yourself in a larger story of human exploration and discovery. It's incredibly different. It's stylized. It's all built on the black dot as a way to connect the brand with the storytelling. It tries in subtle ways to evoke some nostalgia, and then it ends in a way that's not so subtle, with the line: all progress has a starting point, what do you want to create next?

Opinions were pretty strong on this one. Representing one side of the conversation is Jack Appleby, writer of Future Social, who wrote: ChatGPT, what? You just lit $7 million on fire. Worst Super Bowl ad ever. Imagine having one of the coolest tech innovations ever and not showing the Super Bowl audience what it actually does.

Of note, the ad time actually cost $16 million, because it was a full minute at $8 million per 30 seconds, and that doesn't include any of the production costs. It sounds like this was prototyped with Sora, but the final product wasn't actually made with Sora. On the other end of the spectrum, there were some people who thought they got where they were trying to go. Signal writes, the OpenAI Super Bowl ad gave me major 2013 Worldwide Developers Conference vibes, when Apple showcased a video which remains one of my all-time favorites.

It perfectly distills the company's ethos. A masterclass in defining purpose, craft, and simplicity. OpenAI's ad tries to evoke that same deep connection with life. It's clear they're pushing for that kind of cultural significance with AI. Brandon Jacoby said, I thought the same thing, but the Apple ad has so much story in it. The commercial tonight felt like they got the vibe and aesthetic but lacked substance. Signal said, agreed, I think there's a much deeper story to be told with AI in its relation to humanity and how OpenAI plays that role. Today's ad was not that.

And I think what's important here is that even in the critique, there's this appreciation of the vibe and the aesthetic and the aspiration. And I think a lot of the folks who responded to it positively felt connected to the ChatGPT brand that they know. It felt aspirational. It felt big.

In terms of my take, first of all, I think it's extremely hard to Monday morning quarterback a Super Bowl ad. There are so many different ways you can approach it, and so many different types of objectives the company could have had. One wrinkle for ChatGPT is that they have the most brand recognition of any AI company. So they might have felt, and I think this would be a reasonable thing to feel, that they had a little bit more room to be expansive.

They also might have wanted to go for more of an emotional connection rather than just an advertisement for their particular set of services. They want to ground this and connect it to the human experience, especially when AI can feel disconnecting from the human experience. So I understand all of that. To the extent that there was one thing I probably would have done differently, I likely would have tried to get the name ChatGPT up front. I assumed when I started watching it that it must be the ChatGPT ad, but I wasn't positive about that. And as cool as the aesthetic was, it was a little hard to tell what was going on exactly until you got to the moon landing, which is really iconic and very clearly recognizable.

but also that could just be me. I think on net, it was an ad that extended their brand aesthetic, that told the story they wanted to tell, that got good response from many people, and that's a pretty decent place to be with your first entrance into the big game.

Lastly, there is one more ad that took a very different approach to telling the story of AI, which I have to admit was my favorite, because I was 100% the target audience. It tells the story of a dad who's using Gemini Live to prep for a job interview, using his experience being a dad as his reference point for everything he'll have to do at the new company. Now, for my money, Google has absolutely been the single best company over the last decade at doing tearjerker-type ads that aren't overly cheesy or overly wrought, that just hit the right note. And this was no exception. If you were one of those people who wanted to see an AI ad that connected with the real human value of it, this might have been a little bit closer. It also didn't beat you over the head with AI. It didn't even really say Gemini, I don't think. It just showed him using it, and I think there might be something positive in that too.

Overall, a totally decent showing for AI at this Super Bowl. What did you guys think? Let me know in the comments. For now, though, that is going to wrap the headlines. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in.

Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company.

Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.

For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off.

If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.

That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.

If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. Hello, AI Daily Brief listeners. Taking a quick break to share some very interesting findings from KPMG's latest AI Quarterly Pulse Survey.

Did you know that 67% of business leaders expect AI to fundamentally transform their businesses within the next two years? And yet, it's not all smooth sailing. The biggest challenges that they face include things like data quality, risk management, and employee adoption. KPMG is at the forefront of helping organizations navigate these hurdles. They're not just talking about AI, they're leading the charge with practical solutions and real-world applications.

For instance, over half of the organizations surveyed are exploring AI agents to handle tasks like administrative duties and call center operations. So if you're looking to stay ahead in the AI game, keep an eye on KPMG. They're not just a part of the conversation, they're helping shape it. Learn more about how KPMG is driving AI innovation at kpmg.com slash US. Welcome back to the AI Daily Brief. Over the weekend, OpenAI CEO Sam Altman dropped a new blog post called Three Observations.

As is always the case when Altman writes a blog, the whole AI world started discussing it. What we're going to do today is go through and look at the key parts of the piece, I'll read a few excerpts, and then we're going to discuss five observations that I have about these three observations.

I think one big thing that stands out to me is that we are all still potentially radically underestimating the scale of the change that we are about to experience. This piece is all about AGI, artificial general intelligence. As a funny aside, he makes sure to note that he's not using AGI in any sense that would change OpenAI's relationship with Microsoft. He actually had to put that in a footnote. But the point is, it's all about the world after AGI and what it's going to mean.

The first part is a version of poetry, just talking about the steady march of human innovation, how it's always led to new prosperity. And then we get to the sub-theme, which actually isn't one of the three points, but which is woven throughout. Sam basically says, In some sense, AGI is just another tool in this ever-taller scaffolding of human progress we're building together. In another sense, it is the beginning of something for which it's hard not to say, this time it's different. The economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases,

have much more time to enjoy our families, and can fully realize our creative potential. And here's the key line. In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today.

We'll come back to that, but first let's get into what he states are his three observations, specifically about the economics of AI. The first is, the intelligence of an AI model roughly equals the log of the resources used to train and run it. He identifies those resources as training compute, data, and inference compute. And he says, it appears that you can spend arbitrary amounts of money and get continuous and predictable gains. The scaling laws that predict this are accurate over many orders of magnitude.
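
To put that first observation in symbols, here's a paraphrase; the notation is mine, not the post's, and the post doesn't specify how the three resource types combine:

```latex
% Paraphrase of observation one: capability grows with the log of resources.
% R stands for some combination of training compute, data, and inference
% compute; the post does not say how they combine.
\[
  \text{intelligence} \;\approx\; k \cdot \log R
\]
% Equivalently, each constant step up in intelligence requires a
% multiplicative increase in resources: R \approx e^{\text{intelligence}/k}.
```

Read that way, gains are predictable but each increment gets exponentially more expensive to buy, which is exactly why arbitrary amounts of money can keep purchasing progress.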

Number two, the cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. This is our own version of Moore's Law and the Jevons paradox all in one. The specific example he points to is the token cost of GPT-4 dropping 150 times between early 2023 and mid-2024.
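
As a quick sanity check on that example: the 150x figure and the early 2023 to mid-2024 window come from the post, and treating that window as roughly 18 months is my own assumption.

```python
# Back-of-envelope: annualize the GPT-4 token-cost decline cited in the post.
# Assumption: "early 2023 to mid-2024" is treated as roughly 18 months.
total_decline = 150          # cost fell ~150x over the window (from the post)
months = 18                  # assumed length of the window
annualized = total_decline ** (12 / months)
print(f"Implied decline: ~{annualized:.0f}x per 12 months")  # prints ~28x
```

On those assumptions, the GPT-4 example actually ran ahead of the 10x-per-12-months rule of thumb.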

Last observation, the socioeconomic value of linearly increasing intelligence is super exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future, i.e. this is not a bubble. Then he goes on to talk about some specifics. A key piece of this is agents, which he says will eventually feel like virtual co-workers and you get the feeling that eventually is later this year.

Pointing to software engineering agents, he said they will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do for tasks up to a couple days long. Importantly, he says it will not have the biggest new ideas and will require lots of human supervision and direction. Still, he writes, imagine it as a real but relatively junior virtual coworker. Now imagine 1,000 of them or 1 million of them. Now imagine such agents in every field of knowledge work.

And yet somewhat paradoxically, in the next section, he talks about how, at least in the short run, everything will go on the same as it has. People in 2025, he says, will mostly spend their time in the same way they did in 2024. But, he writes, the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We'll find new things to do, new ways to be useful to each other, and new ways to compete. But they may not look very much like the jobs of today.

So what matters in that future? Well, he says, agency, willfulness, and determination will likely be extremely valuable. Correctly deciding what to do and figuring out how to navigate an ever-changing world will have huge value. Resilience and adaptability will be helpful skills to cultivate. AGI will be the biggest lever ever on human willfulness and enable individual people to have more impact than ever before. He points out that the impact will be uneven, specifically saying that the scientific progress that comes from AGI may be the impact that surpasses everything else.

In terms of how this affects specific prices, he says that many will fall dramatically, specifically those where the constraint is the cost of intelligence or the cost of energy, but luxury goods and inherently limited resources like land may go up even more. And then he talks about policy and society, and how unclear it really is what to do next and how to address this future.

They offer only the barest of guidance. We believe, Altman writes, that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy. He points out that it's going to be important that the benefits of AGI are distributed broadly, but there may need to be new ideas for how to do it.

One specific warning, vague though it is, in particular he writes, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. He continues, we're open to strange-sounding ideas like giving some compute budget to enable everyone on earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

And then he ends on this doozy. Anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025. Let me read that again. Anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025.

Altman is saying that all of us, all of us listening to this podcast and everyone else living their lives around the world, all of the intellectual capacity they have access to from themselves, their friends, their family, and the AI at their fingertips combined is what anyone will have access to one decade from now. And that it seems is where Altman finds his optimism. He concludes, there's a great deal of talent right now without the resources to fully express itself. And if we change that, the resulting creative output of the world will lead to tremendous benefit for us all.

Alright, so like I said, now let's go do five observations from my reading of this. The first is that there's a clear weigh-in here on the scaling debates that we've been having for the last few months, with Altman continuing to come down on the line that scaling laws hold. Now, what's interesting is that he is now bundling inference into those scaling laws. Rather than treating test-time compute, which is the way they're scaling these reasoning models, as something fundamentally different, it's just a different version of the same equation of more resources equals better output.

The unspoken piece here is that from where they're sitting, there's no reason to think this doesn't just carry through all the way to whatever we decide AGI is, which is of course a controversial point, given that there are some, like Meta chief scientist Yann LeCun, who don't think that today's current architectures can ever get to AGI. So nothing particularly new here, but a doubling down of OpenAI and Sam Altman's previously stated positions. Second observation, again a very obvious one, but it really is worth underlining just how fast the cost of intelligence is coming down.

We're thinking about this at Superintelligent, where we're pricing a product that has some meaningful upfront cost because of the modality of interaction with AI. But we're trying to figure out, if we expect it to cost a tenth of what it costs now in a year, how we should price it. One novel point here is Sam officially reifying the Jevons paradox by arguing that lower prices do in fact lead to much more use. He doesn't back it up with any specific examples, but that's something I'd be interested to see from OpenAI's point of view.
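
One toy way to reason about that pricing question, purely as a hypothetical sketch rather than how anyone actually prices: assume the unit cost decays smoothly (exponentially) to a tenth over the year instead of dropping in one step, and look at the average cost you'd bear across that year.

```python
import math

# Toy model: unit cost falls 10x smoothly (exponentially) over 12 months.
# Average unit cost over the year = integral of 10^(-t) for t in [0, 1].
decline = 10                                        # 10x decline over one year
avg_cost = (1 - 1 / decline) / math.log(decline)    # closed form of the integral
print(f"Average unit cost over the year: ~{avg_cost:.2f}x today's cost")  # ~0.39x
```

Under those assumptions, your blended cost for the year comes out around 40% of today's, which is one crude way to anchor a price below current unit economics without losing money over the full period.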

A third observation, and a big sub-theme from this, is the once again very obvious point, but one which I'm still not sure we're totally grokking, which is that there's a significant skill shift that's going to be required. Sam obviously paints a picture of the skills that he thinks are going to be most important, agency, willfulness, determination. But there's another one implicit in this idea of having access to a thousand junior virtual co-workers or a million junior virtual co-workers across every field of knowledge work.

Presumably that means we all become managers. And that is obviously a very different skill set than doing whatever it is we'd now be managing the robots to do. One of the things I think we are just starting to put together is that the skill shift most required with AI is probably going to be less about specific prompting techniques and tool usage, and more about totally different managerial disciplines and entirely new ways of thinking.

A fourth point, which I think again lies just under the surface here, is the magnitude of this change that's coming. Altman has for some time been trying to downplay this, and there's even a little bit of that in here now. The idea of people in 2025 doing the same thing as people in 2024. However, it also very clearly feels like he's starting to get to the next point of his narrative.

It comes out in this line where he says, in some sense, AGI is just another tool in the scaffolding of human progress. But, and you get the impression that this is what he really means, this time it's actually different. He also has a line just before the agency willfulness and determination line that feels like maybe it's a thesis statement for the whole piece. The future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge.

I think you can maybe read this piece as trying to put a stamp on this scaling conversation and saying that, from OpenAI's point of view, yes, AGI is still coming, despite what you've heard about the problems with AI scaling. My fifth observation is that there are no real policy ideas here, save perhaps the very lightly floated idea of universal basic AI or a universal basic compute budget. As Professor Ethan Mollick points out, there is no clear vision of what the world looks like, and the labs are placing the burden on policymakers to decide what to do with what they make.

Now, I'm sure that what Altman and OpenAI would say is that they're trying to provoke a conversation that we can all have, not just expecting policymakers to figure it all out for themselves. But the notion that there may need to be a little bit more prescription on at least the type of conversations that we're having might be a place to explore in the next blog post.

Ultimately, all of this feels like another log on the fire of acceleration whose flame has just gotten bigger and bigger over the last couple of months. I personally can feel right now, or at least imagine myself to feel, some shifting sands that are going to have fairly dramatic impacts on the years to come. All of those things, of course, which we will be talking about every day here at the AI Daily Brief for the foreseeable future. But for now, that is where we'll wrap. Appreciate you listening or watching as always. And until next time, peace.