
Aug 30 • 2HR 16M
AI and Existential Risk - Overview and Discussion
Last Week in AI podcast co-hosts Andrey and Jeremie provide an overview and discuss AI X-risk
Weekly AI summaries and discussion about Last Week's AI News!
Subscribe over at https://www.lastweekinai.com/
A special non-news episode in which Andrey and Jeremie discuss AI X-Risk!
Please let us know if you'd like us to record more of this sort of thing by emailing contact@lastweekin.ai or commenting wherever you listen.
Outline:
(00:00) Intro
(03:55) Topic overview
(10:22) Definitions of terms
(35:25) AI X-Risk scenarios
(41:00) Pathways to Extinction
(52:48) Relevant assumptions
(58:45) Our positions on AI X-Risk
(01:08:10) General Debate
(01:31:25) Positive/Negative transfer
(01:37:40) X-Risk within 5 years
(01:46:50) Can we control an AGI
(01:55:22) AI Safety Aesthetics
(02:00:53) Recap
(02:02:20) Outer vs inner alignment
(02:06:45) AI safety and policy today
(02:15:35) Outro
♥️♥️♥️♥️♥️♥️
Thanks for the debate; it went like many others I've listened to - broadly:
- For existential risk: lots of arguments with reasoning and examples.
- Against: "but I can't really imagine that", "but that won't happen for a while...", and "we'll be able to contain it and/or solve the alignment issue [how not included]".
Sorry if this sounds harsh, but I have yet to hear a debate where tangible reasons are given on the against side. Does anyone have any good links to such arguments/discussions?