Chapter 9: Conclusions

“So you want to save the world. As it turns out, the world cannot be saved by caped crusaders with great strength and the power of flight. No, the world must be saved by mathematicians, computer scientists, and philosophers.”

These are the words of Luke Muehlhauser, currently a research analyst for the EA-aligned Open Philanthropy Project, which has given tens of thousands of dollars to organizations engaged in longtermist projects and research.[26] Silly as they are, they capture, I believe, the grandiosity of longtermist thinking. Longtermists are the prophets and messiahs of the Precipice, out to “save the world” through more science and technology, ultimately leading to the Promised Land of technological maturity. The goal? Saturating our future light cone with intrinsic value by colonizing space, subjugating nature, maximizing economic productivity, simulating huge numbers of conscious beings, and so on.

There are reasons to worry that this worldview is an information hazard. If it were to become influential among politicians or the public, it could precipitate all sorts of harms done in the name of the “greater cosmic good.” This is a dangerous, millennialist ideology according to which the ends justify the means and the end is, in Bostrom’s canonical formulation, nothing more or less than Utopia itself.

More than anything, I want this mini-book to help rehabilitate “longtermism,” and hence Existential Risk Studies. As stated above, we very much do need more sober reflection on, and strategic thinking about, the future of humanity. We live in a fragile, myopic society confronting slow-motion catastrophes, like climate change and the sixth mass extinction, that threaten its continued existence. Please do care about the long term — but don’t be a longtermist.[27]


[26] In fact, it gave “two grants totaling $38,350 to the Centre for Effective Altruism (CEA) to support the promotion of Toby Ord’s book, The Precipice: Existential Risk and the Future of Humanity.”

[27] A similar critique by Ben Chugg, in which he argues that “longtermism is a dangerous moral ideal,” can be found here. Another critique by Vaden Masrani is here. Note also that this mini-book draws in parts from my forthcoming intellectual history book titled Human Extinction: A History of Thinking About the End of the World.

