Notes on
Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
3 min read
After hearing so much about Nick Bostrom’s Superintelligence in AI circles, I finally decided to read it myself. Honestly, the book didn’t quite match the hype. Bostrom’s philosophical background really shows—it’s like he’s trying to list every possible scenario involving superintelligence, no matter how far-fetched or unlikely. He builds complex logical structures that seem rigorous but often feel like they’re supporting conclusions he’s already decided on.
My main issue is that the book assumes superintelligence will almost inevitably lead to existential threats. Instead of carefully building an argument from existing evidence, it seems like he starts from the idea of catastrophe and works backward. This constant focus on doom gets exhausting, and it feels disconnected from the way AI is actually developing, which has been slower and more distributed than Bostrom’s “intelligence explosion” idea suggests (at least, so far!).
That said, Superintelligence deserves credit for raising important questions and pushing AI safety into mainstream conversations. Even if the analysis sometimes feels exaggerated, the systematic approach has definitely helped legitimize AI safety as a serious area of research.
Chapters 14 and 15 stand out—they focus on practical, strategic issues like preparing for an intelligence explosion, managing AI governance, and the critical importance of international coordination. Here, Bostrom emphasizes thoughtful preparation and the urgency of addressing these challenges ahead of time. His analytical style becomes genuinely useful, providing actionable insights rather than theoretical speculation.
Overall, Superintelligence feels more significant for its role in kickstarting crucial conversations than for its actual content. Much of the book is bogged down by overly speculative scenarios and a pessimistic tone.
Common sense and natural language understanding have also turned out to be difficult. It is now often thought that achieving a fully human-level performance on these tasks is an “AI-complete” problem, meaning that the difficulty of solving these problems is essentially equivalent to the difficulty of building generally human-level intelligent machines. In other words, if somebody were to succeed in creating an AI that could understand natural language as well as a human adult, they would in all likelihood also either already have succeeded in creating an AI that could do everything else that human intelligence can do, or they would be but a very short step from such a general capability.
Interesting claim in hindsight.
We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.
The human species as a whole could thus become rich beyond the dreams of Avarice. How would this income be distributed? To a first approximation, capital income would be proportional to the amount of capital owned. Given the astronomical amplification effect, even a tiny bit of pre-transition wealth would balloon into a vast post-transition fortune. However, in the contemporary world, many people have no wealth. This includes not only individuals who live in poverty but also some people who earn a good income or who have high human capital but have negative net worth. For example, in affluent Denmark and Sweden 30% of the population report negative wealth—often young, middle-class people with few tangible assets and credit card debt or student loans. Even if savings could earn extremely high interest, there would need to be some seed grain, some starting capital, in order for the compounding to begin.
This is probably right. Need to prep for AGI more.
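To make the compounding point concrete, here is a toy sketch (my own illustration with made-up growth rates, not anything from the book): zero seed capital compounds to zero forever, while even a small starting stake balloons under an extreme post-transition growth rate.

```python
# Toy illustration (hypothetical numbers, not Bostrom's): compound growth
# from different amounts of seed capital at an extreme per-period rate.
def compound(seed: float, rate: float, periods: int) -> float:
    """Return seed capital after compounding at `rate` per period."""
    return seed * (1 + rate) ** periods

for seed in [0.0, 100.0, 10_000.0]:
    # Assume a hypothetical 100% growth per period over 30 periods.
    print(f"seed={seed:>10,.0f} -> {compound(seed, 1.0, 30):,.0f}")
# seed=         0 -> 0
# seed=       100 -> 107,374,182,400
# seed=    10,000 -> 10,737,418,240,000
```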