As of yesterday, we've released the final posts in the Late 2021 MIRI Conversations sequence, a collection of (relatively raw and unedited) AI strategy conversations:
- Ngo's view on alignment difficulty
- Ngo and Yudkowsky on scientific reasoning and pivotal acts
- Christiano and Yudkowsky on AI predictions and human intelligence
- Shah and Yudkowsky on alignment failures
Eliezer Yudkowsky, Nate Soares, Paul Christiano, Richard Ngo, and Rohin Shah (and possibly other participants in the conversations) will be answering questions in an AMA this Wednesday; questions are currently open on LessWrong.
Other MIRI updates
- Scott Alexander gives his take on Eliezer's dialogue on biology-inspired AGI timelines: Biological Anchors: A Trick That Might Or Might Not Work.
- Following OpenAI's recent progress on math olympiad problems, Paul Christiano operationalizes an IMO challenge bet with Eliezer, reflecting their differing views on how continuous and predictable AI progress will be, and on the length of AGI timelines.
News and links
- DeepMind's AlphaCode performs at roughly the level of a median human competitor in Codeforces programming competitions.
- Billionaire EA Sam Bankman-Fried announces the Future Fund, a philanthropic fund whose areas of interest include AI and "loss of control" scenarios.
- Stuart Armstrong leaves the Future of Humanity Institute to found Aligned AI, a benefit corporation focusing on the problem of "value extrapolation".