People

Aidan McLaughlin
Amjad Masad
Andrej Karpathy
Anthropic
Bernie Sanders
Greg Kamradt
McKay Wrigley
Michael Mignano
Sam Altman
Leads OpenAI toward AGI and superintelligence, redefining the path of AI development and driving the commercialization and application of AI technology.
Smithy Colon
Host
A podcast host and content creator focused on electric vehicles and energy.
Judge
Topics
Bernie Sanders: I believe the time saved by artificial intelligence should be given back to workers, for example through a four-day workweek. Technology should improve us, not just the people who own the technology and the company CEOs. If giving you AI raises your productivity, it should reduce your working hours rather than get you laid off. Let's use technology to benefit workers, so they have more time with family and friends and for learning. Host: I think what matters is that someone like Bernie Sanders is rationally discussing how to distribute the gains from this technology. The point of the discussion is to have a worthwhile conversation, not whether you agree with him. If the AI optimists are right, we will need to decide on a new social contract around that surplus wealth. A four-day workweek is a reasonable starting point for discussing that new social contract.

Transcript

Today on the AI Daily Brief, dueling vibe code announcements expand how and where you can build AI-powered apps. And before that in the headlines, does AI mean that we should have a four-day workweek? The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.

Hello, friends. Quick announcements before we dive in. Thanks to today's sponsors, Super Intelligent, Blitzy, Vanta, and KPMG. And to get an ad-free version of the show, go to patreon.com slash ai daily brief. For those of you who haven't subscribed yet, I would so appreciate it if you would subscribe on your podcast app of choice and on YouTube. And for those of you interested in sponsoring the show, we are thick in conversations for the fall and even early next spring right now. With that, though, let's dive into a topic about the potential for a renewed AI social contract.

Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. We kick off today with something a little bit different than our normal content. In a recent Joe Rogan interview, Vermont Senator Bernie Sanders argued that the time saved by using AI tools should be given back to the workers. Specifically, he's calling for an AI-powered four-day workweek.

Bernie said, technology is going to work to improve us, not just the people who own the technology and the CEOs of large corporations. You are a worker. Your productivity is increasing because we give you AI, right? Instead of throwing you out on the street, I'm going to reduce your work week to 32 hours. Now, before we get more in depth on this, let's talk about why we're even talking about it.

There is going to be a temptation with AI to try to frame things in very binary terms. Either it is the relentless march forward, where people in society are just completely subject to immutable forces that they can't control, or, on the other side of the spectrum, it's calls for bans, claims of apocalypse, and all of that sort of thing.

I think it's incredibly important that a leader of Bernie Sanders' stature, who is, you would imagine, not natively going to be super a priori pro-technologist, is having a rational conversation around how society might want to distribute the gains of this technology. In other words, the point isn't whether or not you agree with him. The point is that it's a good conversation to have and he's providing an opening to have it.

So what about this idea specifically of a four-day workweek? Sanders contended that, by the way, this is not a radical idea; there are companies around the world doing it with some success. For example, Microsoft Japan piloted a four-day workweek in 2019 and reported a 40% boost in productivity. In a larger trial, 61 UK companies tested the four-day workweek for six months and found an average 1.4% boost to revenue. Several European countries have adopted the four-day workweek in various forms over recent years, with Iceland now having around 90% of the population working a four-day workweek. Productivity remained the same or improved in most workplaces, and measures of worker well-being improved dramatically. Closing his pitch, Sanders said, "...let's use technology to benefit workers. That means give you more time with your family, with your friends, for education, whatever the hell you want to do."

Now, there are infinite counterarguments to this. Mandated restrictions on work kind of just tend to mean that the people who are willing to work more will get ahead. Then there are all the issues of whether people will just fill this time with second jobs. But that doesn't mean the discussion isn't worth having. If you listen to the most bullish AI optimists, they think that we are careening towards a world of hyperabundance. If that's correct, we're going to have to decide what the new social contract looks like around that hyperabundance.

If they're right, it's going to be a heck of a lot more transformative than just a four-day workweek. But that's not an unreasonable spot to start the conversation.

Next, an update on Mark Zuckerberg's aggressive recruitment tactics. It appears as though he has been successful in poaching three OpenAI researchers for his new superintelligence division. The Wall Street Journal reports that Meta's recruiting drive has yielded three new members for Zuck's AI Dream Team. The three researchers that appear to be heading over are all from OpenAI's Zurich office, which was set up late last year. To those lamenting the tactics, there is a bit of turnabout as fair play here, as OpenAI had themselves poached the trio from Google DeepMind.

The Journal also wrote that this has definitely forced OpenAI to play ball in a different way. They wrote: "Some AI researchers have turned down the Meta CEO and in several cases, OpenAI has counter-offered, giving its researchers more money and scope to stay."

Now, last week, you might remember that Sam Altman had boasted that none of their top people had taken up Zuck on his $100 million offers. And in a Tuesday New York Times podcast appearance, he shrugged it off, saying that he wasn't worried, adding, it's like, okay, Zuckerberg is doing some new insane thing. What's next? And yet, ultimately, if you're really seeing $100 million offers, there's no way at some point that some people don't take that.

Now, while some commented that this looks all a little bit mercenary, I'm pretty firmly in the camp of all is fair in love and war. And frankly, I'm just waiting for these labs to realize that if the top engineers are worth 100 million, the top go-to-market and strategy voice has got to be at least a tenth of that, right?

Speaking of Meta and following up on a story from yesterday, they have also won a judgment in a copyright lawsuit. Hot on the heels of the Anthropic ruling on Tuesday, a federal judge has ruled that Meta didn't breach copyright by using books as training data. This is one of the lawsuits brought by Sarah Silverman alongside 12 other authors. In a summary judgment, the judge wrote that he had, quote, no choice but to grant summary judgment to Meta, adding that the plaintiffs had made the wrong arguments and offered insufficient evidence. He added, quote,

The key question in virtually any case where a defendant has copied someone's original work without permission is whether allowing people to engage in that sort of conduct would substantially diminish the market for the original. In other words, does the copying diminish the ability of the author to sell the original work? The judge in this case said that the plaintiffs had, quote, "...presented no meaningful evidence on market dilution at all."

Importantly, and in much the same way as the Anthropic judgment, while this is a major win for Meta, the judge went to pains to limit the decision to the specific circumstance, writing, "...this ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful."

In cases involving uses like Meta's, it seems like the plaintiffs will often win, at least where those cases have better developed records on the market effects of the defendant's use. In other words, while copyright holders are now 0 for 2 this week, there is still a lot of fighting left before this issue is resolved. As I keep saying, it is completely inevitable that one or more of these cases end up in the Supreme Court, and even the federal judges don't seem to have a lot of sympathy for the arguments being made by AI companies, even if they're ruling in their favor.

In the Meta ruling, for example, the judge commented on the argument that copyright should be set aside to prevent the technology from being stopped in its tracks. He wrote, these products are expected to generate billions, even trillions of dollars for the companies that are developing them. If using copyrighted works to train the models is as necessary as the companies say, they'll figure out a way to compensate copyright holders for it.

Anyways, friends, that is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Superintelligent, specifically agent readiness audits. Everyone is trying to figure out what agent use cases are going to be most impactful for their business, and the agent readiness audit is the fastest and best way to do that.

We use voice agents to interview your leadership and team and process all of that information to provide an agent readiness score, a set of insights around that score, and a set of highly actionable recommendations on both organizational gaps and high-value agent use cases that you should pursue. Once you've figured out the right use cases, you can use our marketplace to find the right vendors and partners. And what it all adds up to is a faster, better agent strategy.

Check it out at besuper.ai or email agents at besuper.ai to learn more.

This episode is brought to you by Blitzy. If you're a technology leader, here's something that probably sounds familiar. Your organization's competitive edge is buried in legacy code that desperately needs modernization, but the resources required feel out of reach. That was the case for a global investment analysis firm. They needed to migrate 70,000 lines of complex MATLAB financial algorithms to Python. Algorithms that drive investment decisions for trillions in assets. Their estimate? Months of high-cost specialized engineering work.

Instead, they partnered with Blitzy. Blitzy's autonomous AI preserved mathematical precision and generated over 80% of the new codebase, completing the migration with just five days of engineering time. They cut the timeline by 95% and saved 880 engineering hours. If your organization is facing similar modernization challenges, visit Blitzy.com to schedule a consultation and discover how AI-powered development can transform your technical capabilities.

Today's episode is brought to you by Vanta. In today's business landscape, businesses can't just claim security, they have to prove it.

Achieving compliance with a framework like SOC 2, ISO 27001, HIPAA, GDPR, and more is how businesses can demonstrate strong security practices. The problem is that navigating security and compliance is time-consuming and complicated. It can take months of work and use up valuable time and resources. Vanta makes it easier and faster by automating compliance across 35-plus frameworks.

It gets you audit-ready in weeks instead of months and saves you up to 85% of associated costs. In fact, a recent IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months. The proof is in the numbers. More than 10,000 global companies trust Vanta. For a limited time, listeners get $1,000 off at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off.

Today's episode is brought to you by KPMG. In today's fiercely competitive market, unlocking AI's potential could help give you a competitive edge, foster growth, and drive new value. But here's the key. You don't need an AI strategy. You need to embed AI into your overall business strategy to truly power it up.

Welcome back to the AI Daily Brief.

Today, nominally, we are talking about a pair of vibe coding app announcements, Google's Gemini CLI, and Anthropic's new vibe coding tools that are embedded directly in Claude. But really what we're talking about is just the utter explosion in vibe coding as a phenomenon, as a pursuit, as something people are building around, and as a tool unlocking new opportunity.

It's kind of hard to believe this, given how ubiquitous this term is now, but it's been about five months or so since we got this term.

Back on February 2nd, former OpenAI co-founder Andrej Karpathy tweeted, There's a new kind of coding I call vibe coding, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs are getting too good. He then goes on to describe his process of just talking to the code, using English as the programming language, and how to navigate different types of challenges in this new environment.

Now, vibe coding went from this thing that Andrej was talking about to the term encompassing all of the text-to-code apps, new ways of working for existing software engineers, and a force bringing non-coders into the coding space.

From not existing back in January to being searched twice as often as prompt engineering just a couple of months later, vibe coding hit the cultural consciousness very, very quickly. Now, there were, even before the term, some apps that were unlocking new capabilities. Bolt had exploded onto the scene. Lovable was growing super fast. And of course, Cursor was getting more and more dominant.

Now, those trends have done nothing but increase. A couple of weekends ago, Lovable, for example, held a global hackathon that saw more than a quarter million different apps built with their tools. CEO Anton Osika pointed out that that was more than the first five years of the internet.

Every day if you go on X, you can find stories of kids vibe coding, basically everyone vibe coding. And so perhaps it isn't surprising that vibe coding is also rewiring and impacting an incredibly wide array of applications. Yesterday we talked about how Airtable was, in its own words, refounding itself as an AI-native app platform combining the magic of vibe coding business apps with real production readiness and scalability.

On the same day, Asana launched their competing AI studio, something they described as a no-code builder for designing AI-powered workflows to handle routine tasks tailored to your organization. Again, the point is that these are not new vibe coding platforms that are launching. These are existing legacy tools that are embedded in the enterprise that are rewiring themselves around vibe coding capabilities.

Ramp also recently shared this chart showing how much market share Cursor had taken from GitHub Copilot, just further reinforcing how this new set of Vibe Code native tools was really changing the way that people were interacting with code. Which brings us up to yesterday's announcements. At this point, the AI coding announcements are rolling out faster than I can even cover them, and this is a daily show.

Yesterday, we saw dueling announcements from two of the big players. Google debuted their agentic Gemini CLI, which is the closest analogue to, for example, Codex from OpenAI or Anthropic's Claude Code. The Verge writes, Google has launched a new open-source AI agent that brings Gemini's coding, content generation, and research capabilities directly into developers' terminals. And while going through the command line is a little more lightweight than full IDEs like Cursor, this approach remains a programming staple.

To stand out in what is quickly becoming a very crowded market, Google is leveraging its scale to compete aggressively on price. Specifically, Gemini CLI is being offered for free. Usage limits are set at 60 queries per minute and 1,000 per day. And if you're worried that that's not going to be enough for most people, Google said that they came to the number by measuring their own developers' usage patterns and then doubling that average to come up with the limits. The product taps the Gemini 2.5 Pro model, which is in the top tier on coding benchmarks, and it also has full MCP support. It's very clear that this is both a feature parity play, to make sure that they have an offering that operates in this particular way, as well as Google flexing their pricing muscle.

By way of example, Smithy Colon, who does developer relations at Google Cloud, tweeted, I used the new Gemini CLI and gave it only the YouTube link of a tech tutorial, and it set up the entire project for me perfectly. This is a massive win for productivity and builders everywhere.

Meanwhile, over in Anthropic Land, their announcement was adding a Replit slash Bolt slash Lovable style vibe coding experience directly into Claude's chat window. In a blog post titled Build and Share AI-Powered Apps with Claude, they write, Claude can now create artifacts that interact with Claude through an API, turning these artifacts into AI-powered apps where the economics actually work for sharing. Simply describe what you want to create and Claude will write the code for you.

Claude can help you debug and improve the experience, and then once it's ready, you can share it through a link with no deployment required. Other users can fork and customize artifacts, creating what could be the start of a social layer around vibe coding. Now, one interesting thing about the announcement is that it blurs the line between traditional web apps and agentic workflows.

The feature can be used to create games and apps, but it can also be used to vibe code a type of basic agent. Basically, using natural language prompts, a user can ask Claude to stitch together multiple actions to complete more complex tasks. These artifacts can then be used over and over again in what becomes a very light way to get into agent architecture. Now, this idea of the lines blurring between software and agents has been on the mind of Replit CEO Amjad Masad.
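
To make that "stitch together multiple actions" idea concrete, here is a minimal sketch of what an AI-powered artifact could look like: a small front-end function that chains a couple of model calls so the artifact behaves like a lightweight agent. The window.claude.complete call and its signature are assumptions made for illustration; the actual interface Anthropic exposes inside artifacts may differ.

```typescript
// Sketch of an AI-powered artifact: plan a request, then execute each step,
// all through an in-artifact completion call.
// ASSUMPTION: `window.claude.complete(prompt)` returning a Promise<string> is a
// stand-in for whatever API Anthropic actually exposes to artifacts.

declare global {
  interface Window {
    claude: { complete: (prompt: string) => Promise<string> };
  }
}

export async function runMiniAgent(userRequest: string): Promise<string> {
  // Step 1: ask the model to break the request into a few concrete steps.
  const plan = await window.claude.complete(
    `List the 2-3 steps needed to handle this request, one per line:\n${userRequest}`
  );

  // Step 2: execute each step in order, feeding earlier output back in as context.
  let context = "";
  for (const step of plan.split("\n").filter((s) => s.trim().length > 0)) {
    context = await window.claude.complete(
      `Request: ${userRequest}\nCurrent step: ${step}\nPrior output: ${context}\nProduce the output for this step.`
    );
  }
  return context; // result of the final chained call
}
```

The specifics matter less than the shape: a plan step, a loop of model calls, and the whole thing living in something you can share with a link and that others can fork.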

In a talk at VentureBeat Transform this week, Masad said that he believes software is going to become, as he put it, agents all the way down. He opened the talk by demonstrating how capable Replit has become, showing off a live polling app with databases, authentication, and quality checks that could be vibe-coded in 15 minutes. Masad said, This is sort of like an almost semi-autonomous agent. You can watch it, you can also go get coffee, and it'll send you a notification when it's ready to show you the future.

Now, the place that everyone's mind goes with this is, does this change the way that we think about software in the future? Specifically, if it's this easy to build and deploy apps, do enterprises and companies continue to pay for expensive software, or do they just start to build their own apps that work in ways that are completely customized to them?

At that same talk, Masad was asked if Replit could actually replace enterprise-grade tools, to which he responded that they're seeing customers get three orders of magnitude of savings on apps. The anecdote he shared was a Replit user who had claimed to use the platform to make a working version of ERP automation for just $400 instead of the $150,000 he was quoted by a vendor.

Said Masad, when you think about what the software does, a lot of Replit users wake up in the morning, they have a problem in their minds, and they create an app to solve that problem. The software agent will go and build software in order to solve that problem and will solve that problem for you.

So what does this all mean for developers? Well, at one stage, an audience member asked about the importance of people learning to code for themselves, rather than just blindly accepting the AI's suggestions. Masad responded that the platform can highlight any piece of code and give an explanation, effectively questioning whether that's still a real problem. Going further, he questioned what the vision actually is for AI coding, suggesting, "...I think we're going to get to a point where you don't have to interface with the code. We're going to be able to interact with software on a higher level of abstraction."

He did continue, we need something a little better than English somewhere in between code and English, and maybe someone will build that.

Now, one upshot of all of this is that users of all different types are just writing a ton more code. The technical people have been talking about how much extra code they're writing all year thanks to the AI coding tools. Example, OpenAI's Aidan McLaughlin writes, for what it's worth, 80% of my code is now written by Codex, and I'm writing a lot more code. Lightspeed partner Michael Mignano writes, the irony is that so far, AI has recruited far more developers than it has eliminated.

Everyone I know is coding right now. And what's more, the way that they're coding is getting more and more insane. McKay Wrigley of Takeoff AI posted his stack, writing, My workflow to spin up three coding agents at once. Type AI in terminal. Launches Claude Code, Gemini CLI, and Codex CLI. Fully synced windows. Prompt with voice. Hit send and watch. Now Claude Opus 4, Gemini 2.5 Pro, and OpenAI o3 work on a task, and you pick the winner.
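
For a sense of how a setup like that might be wired together, here is a rough sketch in Node that launches three coding CLIs on the same prompt in parallel and prints each result so a human can pick the winner. The command names and non-interactive flags (claude -p, gemini -p, a bare prompt for codex) are assumptions for illustration only; check each tool's documentation for the real invocation.

```typescript
// Rough sketch of the "three agents at once" workflow: run several coding CLIs
// on the same prompt in parallel and compare their output.
// ASSUMPTION: the command names and flags below are placeholders; verify them
// against the actual Claude Code, Gemini CLI, and Codex CLI docs.

import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

async function raceAgents(prompt: string): Promise<void> {
  const agents = [
    { name: "claude-code", cmd: "claude", args: ["-p", prompt] }, // assumed flag
    { name: "gemini-cli", cmd: "gemini", args: ["-p", prompt] },  // assumed flag
    { name: "codex-cli", cmd: "codex", args: [prompt] },          // assumed usage
  ];

  // Launch all three at once and collect whatever each one prints.
  const results = await Promise.allSettled(
    agents.map(async (a) => ({ name: a.name, out: (await run(a.cmd, a.args)).stdout }))
  );

  // A human reviews the outputs side by side and picks the winner.
  for (const r of results) {
    if (r.status === "fulfilled") {
      console.log(`--- ${r.value.name} ---\n${r.value.out}`);
    } else {
      console.error("agent failed:", r.reason);
    }
  }
}

raceAgents("Add input validation to the signup form and write tests.").catch(console.error);
```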

And indeed, if I had to guess at one trend for where we're going to see vibe coding next, in addition to it coming to other areas like design, for example, people integrating app building with more direct UI creation, I think we're going to start to see versions of my Dr. Strange idea, but for coding. Basically, multi-agent systems that work in parallel, in some cases, like the one that McKay shared, doing repeats of the same work so that users can pick the best version. Another example of this comes from Greg Kamradt of ARC Prize, who writes, My new vibe coding setup: one orchestrator agent which controls 85 sub-agents working in parallel. Each sub-agent spawns from my stream of consciousness and tests from the main orchestrator.
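
Greg's setup is a bigger version of the same pattern, so here is a hedged sketch of the orchestrator-and-sub-agents shape at its simplest: decompose a goal, fan the sub-tasks out in parallel, then merge the results. The callModel function is a placeholder standing in for whatever model client you actually use, not a real library call, and the decomposition prompt is purely illustrative.

```typescript
// Minimal orchestrator / sub-agent sketch: one planner decomposes a goal,
// N sub-agents work the pieces in parallel, and the orchestrator merges them.
// ASSUMPTION: `callModel` is a stub; swap in a real LLM client of your choice.

type SubTask = { id: number; instruction: string };

async function callModel(prompt: string): Promise<string> {
  // Placeholder so the sketch is self-contained and runnable.
  return `stub response for: ${prompt.slice(0, 60)}...`;
}

async function orchestrate(goal: string, fanOut = 5): Promise<string> {
  // 1. The orchestrator splits the goal into independent sub-tasks.
  const planText = await callModel(
    `Split this goal into ${fanOut} independent sub-tasks, one per line: ${goal}`
  );
  const subTasks: SubTask[] = planText
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((instruction, id) => ({ id, instruction }));

  // 2. Sub-agents work on every sub-task in parallel.
  const partials = await Promise.all(
    subTasks.map((t) => callModel(`Sub-agent task #${t.id}: ${t.instruction}`))
  );

  // 3. The orchestrator merges the partial results into one answer.
  return callModel(`Combine these partial results for "${goal}":\n${partials.join("\n---\n")}`);
}

orchestrate("Build a landing page with a waitlist form").then(console.log);
```

Swap identical sub-tasks in for the decomposition step and add a selection call at the end, and you get the pick-the-winner variant McKay describes.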

Jason Zhao writes,

And of course, I think the question for some is just how far can this go? In a recent A16Z blog post with Marc Andreessen and Eric Torenberg, the two shared a thought experiment of whether a non-technical founder armed with AI agents could outbuild a team of elite engineers. It's not clear exactly what the answer to that is now, but the fact that it's even a question shows just how much has changed in a matter of just months.

Anyways, friends, I think it's fairly safe to say that we will continue our coverage of vibe coding platforms for now. Go try vibe coding directly in Claude and see how it works for you. That's going to do it for today's AI Daily Brief. Thanks as always for listening or watching. And until next time, peace.