Fraser Cain recently uploaded a YouTube video about his concerns with AI and whether its progress should be slowed down:
This is an unusual interview for my channel. It's mostly about AI, its current development, and the threats it poses to us.
Guest: Dr. Roman Yampolskiy
https://scholar.google.com/citations?...
https://www.goodreads.com/group/show/1198440-universe-today-book-club
00:00 Intro
01:51 Arrival of LLMs
06:30 Alignment of AI
17:35 Existential dangers
18:58 The Pause AI movement
23:47 Safety
32:42 Wake-up calls and red lines
39:12 Possible response
41:43 AI as part of evolution
44:25 Simulation hypothesis and the Fermi Paradox
49:52 How to get involved
51:48 Current obsessions
57:19 Final thoughts