Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

Imagine you’re the CEO of an AI company and you want to know if the latest model you’re developing is dangerous. Some people have argued that since AIs know a lot of biology now — scoring in the top 1% of Biology Olympiad test-takers — they could soon teach terrorists how to make a nasty flu that could kill millions of people. But others have pushed back that these tests only measure how well AIs can regurgitate information you could have Googled anyway, not the kind of specialized expertise you’d actually need to design a bioweapon. So, what do you do? Say you ask a group of expert scientists to design a much harder test — one that's ‘Google-proof’ and focuses [...]
The original text contained 16 footnotes which were omitted from this narration.
The original text contained 1 image which was described by AI.
First published: November 21st, 2024
Source: https://www.lesswrong.com/posts/fm8SEKkeshqZ7WAdp/dangerous-capability-tests-should-be-harder
---
Narrated by TYPE III AUDIO.