People
NLW
Well-known podcast host and analyst, focused on cryptocurrency and macroeconomic analysis.
Topics
NLW: I think about AGI a lot, especially when helping businesses deploy agents. Dave Pittman's article "Escape Velocity: Why We Don't Need AGI" articulates my view well. The concept of AGI distracts businesses, while existing AI technology is already powerful enough; companies should focus on making better use of what exists rather than waiting for AGI to arrive. Right now, AI capabilities are growing faster than businesses can integrate them. While AGI has enormous potential, for most businesses today's AI is already more than sufficient, and they should concentrate on using it well rather than obsessing over the concept of AGI.

Dave Pittman: One presumed benefit of AGI is that it will lead to a superhuman acceleration in intelligence, unlocking discoveries and advances across every field. The basic argument is that if an AGI is as smart as humans but thinks much faster, it will find solutions to problems at unprecedented speed. However, it is currently unclear when we will get AGI that is both cheap and powerful. We don't need to wait for AGI, because we can already achieve rapid improvement in AI capabilities by other means, for example by pursuing self-sustaining escape velocity (SEV). The core of SEV is a feedback loop that requires no human intervention, in which an AI model keeps improving itself, and its rate of improvement keeps accelerating. An SEV system can deliver exponential growth in AI capability, whereas current AI models are improving sublinearly. Achieving SEV requires three ingredients: a reinforcement learning policy, high-quality synthetic data, and an efficient feedback loop. SEV is a more effective approach because it focuses on AI self-improvement within a specific domain, while AGI tries to build a universal tool that can solve every problem. For businesses, SEV offers a more predictable path of AI improvement and lowers risk. The foundations for SEV already exist: we have made major progress in reinforcement learning, synthetic data generation, and AI optimization.

NLW: The concept of AGI focuses too much on some future point at which AI will be far better than it is now, but AI is already remarkable today. A huge amount of knowledge work can now be done by AI, and in many cases humans working with AI are more productive. What limits AI adoption today is not capability but systems, processes, integration, deployment, and ways of thinking.

Deep Dive

Chapters
This chapter introduces the concept of AGI and its presumed benefits, focusing on the accelerated problem-solving capabilities it offers. However, it highlights a crucial challenge: the uncertainty surrounding the availability of both affordable and universally accessible AGI.
  • AGI's presumed benefit: superhuman acceleration in intelligence leading to breakthroughs across various fields.
  • Challenge: Uncertainty about the availability of affordable and universally accessible AGI.

Shownotes Transcript


Today on the AI Daily Brief, why AGI is a useless term for businesses. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.

Hello, friends. Welcome back to another Long Reads episode. Today, we get to talk about a topic that I think about a lot. In fact, it's sort of constantly lurking in my conversations that we're having with businesses when we're helping them figure out agents at Superintelligent. Luckily, we got a piece written this week by AI engineer Dave Pittman that gives us a chance to talk about this theme. Dave's piece is called Escape Velocity, Why We Don't Need AGI. His question, what happens when the trajectory of improvement increases so fast we don't care about AGI?

And this one, the reading is not AI, this is actually me. Dave writes, "...one of the presumed benefits of AGI is that it will lead to a superhuman acceleration in intelligence, which will then unlock discoveries and advances across, well, everything. The basic argument is that if an AGI is just as smart as humans but can think much faster, it will be able to find solutions to problems at previously unbelievable speeds. Trying to figure out how to make fusion work, a bunch of PhD brains can only think of new ideas and reason through them so fast, often in months or years."

With an AGI, the speed limit is theoretically how much computing power we give it. Unlike a human, the AGI can work 24-7, and again, we assume, come up with new ideas and test them out a thousand times faster. Suddenly, a lot of challenges we are facing at humanity scale seem tractable because we have zero-cost intelligence. Trying out combinations of proteins for new cancer therapies? Just ask a few data centers.

Test out many, many ideas of how to reduce carbon emissions during concrete manufacturing? Done. Design ultra-efficient antennas for global internet? Kick off the task on Friday, come back on Monday is the promise. There's just one problem. It's not clear when we will get both AGI itself and AGI that is so cheap its inputs — novel data for a task, energy for computation, chips, etc. — are a rounding error. However...

It turns out we don't need AGIs or AGI that is universally cheap. We are on the verge of achieving a new type of AI improvement that I call self-sustaining escape velocity or SEV. Once you have achieved escape velocity, having an AGI becomes irrelevant. It will be easiest to understand SEV if we talk first about a few other ideas to help frame our thinking. The first is a classic lesson for startups. Always hire someone who is smarter than the last person you hired.

By following this rule, as your company grows, it actually becomes more capable. It's often assumed that this is very difficult because of the Peter Principle, or basically, how can you actually know if someone is smarter than you? However, this assumption is based on knowing how someone is smarter than you rather than merely establishing someone is smarter. The second scenario, establishing intelligence, is much easier. At its most basic level, give someone a challenge you failed to solve, and if they solve it, they're smarter. The key lesson here is that we should, for the purposes of SEV, focus on evaluations rather than understanding AI performance.
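
As a rough illustration of that lesson, judging "is it smarter?" can be as simple as checking whether a candidate solves the challenges the current baseline failed. Here is a minimal hypothetical sketch; the function names and threshold are mine for illustration, not from the piece:

```python
def is_smarter(candidate_solves, baseline_failures, pass_rate=0.5):
    """Judge a candidate purely by evaluation: does it solve enough of the
    problems the baseline model could not? No need to understand *how*."""
    if not baseline_failures:
        return False  # nothing left to test the candidate against
    solved = sum(1 for problem in baseline_failures if candidate_solves(problem))
    return solved / len(baseline_failures) >= pass_rate
```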

Our second mental framework is to think about improving foundation models and their scaling laws as suffering from Tsiolkovsky's rocket equation, also known as the tyranny of the rocket equation. The rocket equation says that trying to launch larger and larger rockets becomes less and less efficient. This is due to a larger rocket needing even more fuel, which causes the rocket to weigh more, which in turn means you need more fuel to launch your now heavier rocket. Once you reach escape velocity, however, the balance has tipped in favor of your rocket and it is no longer at risk of crashing back down.
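
For reference, the underlying physics here is the classical Tsiolkovsky rocket equation (standard textbook form, not quoted in Dave's piece):

```latex
\Delta v = v_e \ln\!\left(\frac{m_0}{m_f}\right)
\qquad\Longleftrightarrow\qquad
\frac{m_0}{m_f} = e^{\Delta v / v_e}
```

where m_0 is the fueled mass, m_f the dry mass, and v_e the exhaust velocity. Because the required mass ratio grows exponentially with the desired velocity change, every extra bit of performance demands disproportionately more propellant, which is the "tyranny" being analogized to ever-larger foundation models.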

Currently, foundation model providers are struggling with a similar problem of more capable models requiring even more data. As they've begun to rely on synthetic data generated by other AI models, it also becomes harder to build the larger model, because an even bigger synthetic data model is needed to generate more sophisticated data, which in turn requires... you can see where this is going. When people talk about the benefits of compounding intelligence and breakthroughs made by AGI, they are primarily referring to the concept that an AGI has reached an intellectual escape velocity, where all of the reasoning done by the model improves its answer or solution.

So foundation models are collapsing under their own weight and we don't know how to know if they're improving. What's an AI company to do? I think we should pursue a new strategy, self-sustaining escape velocity or SEV. The promise of SEV is this: just keep dumping in some basic resource, compute or memory, and arrange your AI in a feedback loop to generate results that build on top of themselves.

Once you have an AI in a setup where it can produce a better AI, your only constraint is how fast you can fuel the rocket engine. The core of SEV is a hands-off feedback loop. Each time a new AI model is created, it is evaluated using a more sophisticated benchmark that is the result of the previous AI model, the baseline, pruning down the problem space into problems it cannot solve.

The new model is a candidate to replace the old model. If the candidate proves that it is indeed smarter than the old model, it becomes the baseline model. This new and improved baseline model is then used to challenge our synthetic data generation model in a critic adversarial fashion to produce a higher quality model for synthetic data generation. Now our baseline model and our synthetic generation model have both been leveled up, so we can repeat the process without human intervention. If our process is truly self-sustaining, then the only external input it needs is more compute power and time and memory to improve itself.
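
Putting the pieces of that loop together, here is a minimal, hypothetical sketch of the SEV cycle as described; every callable (train_candidate, build_benchmark, improve_generator, and so on) is a placeholder supplied by the caller to make the flow concrete, not an API from the piece:

```python
def sev_loop(baseline, data_generator, budget,
             train_candidate, build_benchmark, failures_of,
             evaluate, improve_generator):
    """Hypothetical sketch of a self-sustaining escape velocity (SEV) loop.
    All callables are caller-supplied placeholders; none of these names
    come from Dave's piece."""
    while budget.remaining():
        # 1. Sample fresh synthetic training data from the current generator.
        training_data = data_generator.sample()

        # 2. Train a candidate model (assumes an RL policy and environment).
        candidate = train_candidate(baseline, training_data)

        # 3. Build a harder benchmark from the problems the baseline failed to solve.
        benchmark = build_benchmark(failures_of(baseline))

        # 4. Promote the candidate only if it beats the baseline on that benchmark.
        if evaluate(candidate, benchmark) > evaluate(baseline, benchmark):
            baseline = candidate
            # 5. Use the stronger baseline as an adversarial critic to level up
            #    the synthetic data generator for the next cycle.
            data_generator = improve_generator(data_generator, critic=baseline)
        # The only external inputs are compute, memory, and time.
    return baseline
```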

And if our rate of self-improvement is fast enough, then our model improvement process will reach a point of escape velocity where improvements are not just linear or additive, but exponentially compounding. Compare this to our current scaling laws: once foundation models cross over the tipping point and achieve only sublinear gains in performance for their resource inputs, they're going to teeter and effectively fall back to earth.
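
To make that contrast concrete, one illustrative way to write it (my own notation, not Dave's) is sublinear returns to resources versus per-cycle compounding:

```latex
\text{Scaling-law regime: } P(C) \propto C^{\alpha}, \quad 0 < \alpha < 1
\qquad\text{vs.}\qquad
\text{SEV regime: } P_{n+1} = (1 + r)\,P_n \;\Rightarrow\; P_n = P_0\,(1 + r)^{n}
```

where P is capability, C is the resources poured in, and r > 0 is the improvement gained on each self-sustaining cycle; the first flattens out, the second compounds.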

Today's episode is brought to you by Superintelligent and more specifically, Super's Agent Readiness Audits. If you've been listening for a while, you have probably heard me talk about this. But basically, the idea of the Agent Readiness Audit is that this is a system that we've created to help you benchmark and map opportunities in your organizations where agents could

specifically help you solve your problems, create new opportunities in a way that, again, is completely customized to you. When you do one of these audits, what you're going to do is a voice-based agent interview where we work with some number of your leadership and employees to map what's going on inside the organization and to figure out where you are in your agent journey.

That's going to produce an agent readiness score that comes with a deep set of explanations, strengths, weaknesses, key findings, and of course, a set of very specific recommendations that we then have the ability to help you go find the right partners to actually fulfill. So if you are looking for a way to jumpstart your agent strategy, send us an email at agent at besuper.ai, and let's get you plugged into the agentic era.

Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded.

Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001. Centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk.

Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.

For a limited time, this audience gets $1,000 off Vanta at vanta.com slash NLW. That's V-A-N-T-A dot com slash NLW for $1,000 off.

The startup or tech giant that cracks the code for exactly how to power that self-sustaining feedback loop will experience, literally, runaway success that is only limited by their resources. I think this self-sustaining loop needs three fundamental pieces. One, a solid reinforcement learning policy and environment to steer the AI in its evaluations. Two, generation of synthetic data that focuses on quality rather than quantity. Three, a highly optimized feedback loop to overcome drag in the system that would otherwise prevent achieving escape velocity. So why does SEV mean we don't have to care about AGI?

With AGI, we're building a universal hammer that can be great at everything. However, I have yet to come across many, if any, use cases where someone actually wants AGI. Instead, they usually need a more specialized AI that has performance good enough that it feels smarter than the smartest person in the room. Pursuing AGI is one way, via boiling the oceans, to get to this.

SEV, on the other hand, is a more targeted approach that focuses on setting up a system that can self-improve an AI in a limited domain. This domain must be conducive to transitive improvements, meaning we can assume improvements to our AI can stack on top of each other. An example of a domain with good transitive properties is summarizing legal contracts. A domain like contemporary performance art is not. In my experience, though, most problems that businesses care about solving are in transitive domains. Existing neural net models lend themselves to performing well in transitive domains,

And the recent success of test-time compute for reasoning models is another win in favor of transitive domains. For an AI CEO or CTO looking for predictability in all this chaos, SEV is a very attractive approach. It's notoriously difficult to establish a stable trajectory in your AI improvements, which in turn means it's nearly impossible to predict where you'll be in a few months, let alone a year, or the time horizon for your next funding round. With SEV, we've wrangled this chaos into a more predictable trajectory that is based more on resources than engineering sweat and AI researcher talent.

The good news is that many pieces of SEV already exist. We're seeing massive leaps in making RL stable and easy to use. Likewise, with synthetic data generation, we've reached large enough models to overcome earlier shortcomings. Although it hasn't been widely appreciated, DeepSeek's AI optimizations are the start of an avalanche of infrastructure improvements we will see over the next several years. So there you have it. SEV gives us a shortcut to get what we want out of AGI without having to build AGI itself. This post lays out the strategy for SEV, but there are still many open questions in the tactics to implement SEV.

I would not be surprised to see many variants emerge that leverage hacks in specific areas. As an early reader of a draft put it, reaching SEV may reduce to a challenge of who can find the most impactful problem in solution space where the AI's quality is relatively cheaply measurable. All right, so that is Dave's contribution to this discourse. Now he is, of course, coming at it from a builder's perspective and thinking about how to set models on a trajectory for continuous improvement.

Implicit in that is a critique of this over-fixation we have on this nebulous point in the future, which we call AGI and which itself is still fairly ill-defined. Again, coming at this from a technical perspective, Dave is saying we don't need AGI because we can get continuous improvement without having to worry about that term one way or another. But I'm coming at this from a different side. When you're thinking about it from a business perspective,

Why we don't need AGI is even simpler. The fixation on AGI is a fixation on a future point at which AI is spectacularly better than what we have now.

But AI right now is spectacular. A huge portion of knowledge work right now can be done as well by AI as it can by humans. Nearly all knowledge work at this point is going to be done better by a human who is at least using AI. It's very clear from some recent moments like the Manus agent that we're still underutilizing the capabilities of the models we already have right now. The rate-limiting factor when it comes to AI impacting business is not currently capabilities.

It's about systems, new processes, integration, deployment, new ways of structuring operations, and new ways of thinking. In fact, I would argue that right now, capabilities are growing at a faster rate than businesses' ability to integrate them.

Now, I don't want to diminish the possibility and potentiality of this grand utopian idea of AGI that really can solve a huge swath of the world's problems that we can't right now. I'm not at all trying to argue that that wouldn't be unbelievably transformational in a way that just more efficient and more complex marketing could never be.

What I'm saying, though, is that for the practical lived reality of most businesses and people who are deploying AI to make their work better in some way, what we have right now is already a staggering leap into the future. And the work to be done simply to catch up to the capabilities facing us right here is enormous. Getting stuck on terminology is a sure way to get left behind. And so I think for the moment, businesses and enterprises can fairly safely leave the AGI discussions to the researchers and the future society designers,

and just focus on the power that is sitting there at their fingertips. Anyways, a great piece by Dave Pittman. Thanks again for writing it. That is going to do it for today's AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.