
“Rolling Thresholds for AGI Scaling Regulation” by Larks


LessWrong (30+ Karma)


This is a plan for how ASI could be developed relatively safely.

Abstract: A plan that puts all frontier model companies on a unified schedule of model training, evaluation, and approval, with regulatory compliance promoted through market access. It aims to combine (most of) the economic benefits of unrestricted competition, but with more safety; (most of) the time-to-think benefits of AI pauses, but with better compliance incentives; and (most of) the central oversight of a Manhattan Project, but with more freedom and pluralism.

**Background**

The plan is based on the following worldview, though not all of these points are cruxes:

  • Rushing to ASI by default leads to deceptive and misaligned ASI, with catastrophic consequences for humanity.
  • A lot of alignment work will be empirical and requires access to (near) cutting edge models to work with.
  • A lot of progress is driven by increased compute, and it is possible to measure compute in [...] (see the compute-estimation sketch after this list).
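
To make the compute-measurement point concrete, here is a minimal sketch (not from the post) of how a run's training compute could be estimated and checked against a scheduled threshold. It assumes the standard ~6·N·D FLOPs rule of thumb for dense transformers; the base threshold, growth rate, and all names below are illustrative assumptions, not figures from the post.

```python
# Minimal sketch, not from the original post: estimating a training run's
# compute and comparing it against a scheduled ("rolling") threshold.
# Assumptions: the standard ~6 * N * D FLOPs rule of thumb for dense
# transformers; the base threshold and growth rate are illustrative.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer: ~6 * N * D."""
    return 6.0 * n_params * n_tokens


def rolling_threshold(base_flops: float, years_elapsed: float,
                      annual_growth: float = 2.0) -> float:
    """A compute threshold ratcheted up on a fixed schedule (illustrative)."""
    return base_flops * (annual_growth ** years_elapsed)


if __name__ == "__main__":
    # Hypothetical run: ~70B parameters trained on ~15T tokens.
    run = training_flops(n_params=7e10, n_tokens=1.5e13)
    # Hypothetical schedule: base of 1e26 FLOPs, doubling each year.
    cap = rolling_threshold(base_flops=1e26, years_elapsed=1.0)
    print(f"estimated run: {run:.2e} FLOPs; current threshold: {cap:.2e} FLOPs")
    print("requires approval" if run >= cap else "below threshold")
```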

Outline:

(00:39) Background

(02:23) The Plan

(04:57) Advantages

(07:18) Potential problems

(08:11) Do the frontier training runs have to be simultaneous?

(08:57) Quis custodiet ipsos custodes? (Who watches the watchmen?)

(10:19) How does this work internationally?

(12:25) To be determined

The original text contained one image, which was described by AI.


First published: January 12th, 2025

Source: https://www.lesswrong.com/posts/GvMakH65LS86RFn5x/rolling-thresholds-for-agi-scaling-regulation

---

Narrated by TYPE III AUDIO.

