Tag: AI safety
-

If anyone builds it, everyone dies (Part 5: Solutions)
Yudkowsky and Soares have a simple solution to AI safety: Shut it down.
-

If anyone builds it, everyone dies (Part 4: We would lose)
Yudkowsky and Soares argue that we would lose a conflict with artificial superintelligence.
-

If anyone builds it, everyone dies (Part 3: Remaining arguments for misalignment)
This post addresses the second of Yudkowsky and Soares’ two main arguments for misalignment in Chapter 4.
-

Instrumental convergence and power-seeking (Part 4: Conclusion)
This post draws lessons from our discussion of instrumental convergence and power-seeking.
-

Harms (Part 5: Supporting frontier AI companies)
Given their stated beliefs, effective altruists often show an unusual degree of support for frontier AI companies.
-

Papers I learned from (Part 6: A timing problem for instrumental convergence)
Should we expect means-end rational agents to preserve their goals? Southan, Ward, and Semler are skeptical.


