- Superintelligence: Paths, Dangers, Strategies
- Human Compatible: Artificial Intelligence and the Problem of Control
- Intelligence Explosion
- The Alignment Problem
- Concrete Problems in AI Safety
- Measuring the Intelligence of Machines
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
- Learning to Value Human Feedback
- AI Safety Research
- Towards a Formal Theory of Fun
- Works by Eliezer Yudkowsky
- The Singularity Is Near
- Global Catastrophic Risks
- Explainable AI
- Formal Verification
- Machine Superintelligence
- The Society of Mind
- Theoretical Foundations of AGI
- Future Progress in Artificial Intelligence: A Survey of Expert Opinion
ANI (Artificial Narrow Intelligence) = AI specialized for a single, specific task or domain
AGI (Artificial General Intelligence) = AI with human-level ability to learn and perform tasks across any domain
ASI (Artificial Superintelligence) = AI whose intelligence surpasses that of humans across virtually all domains