Chapter 1: Longtermism

If you’re the type of person who follows public “intellectuals” like Sam Harris, browses popular media like The New Yorker and Vox, or hopes to do the most good in the world, you have very likely heard about longtermism.[1] It is one of the central ideas in Toby Ord’s popular new book The Precipice, published in 2020, and is closely linked to the concept of an existential risk. Not only has the term become more visible to the public over the past few years — and longtermists have big plans for this trend to continue — but projects associated with longtermism have, over just the last year, received literally millions of dollars in funding. Anecdotally, I have noticed a rapidly growing number of young people and established scholars flocking to the new field of Existential Risk Studies, which is largely motivated by longtermist ideas.

In this mini-book, written for students, journalists, and academics curious about this new ideology, I want to explain why longtermism — at least in its most influential guises — could be extremely dangerous. As outlined in the scholarly literature, it has all the ideological ingredients needed to justify a genocidal catastrophe. If this sounds hyperbolic, then keep reading. I strongly suspect that by the end of what follows you’ll come to agree, or at least acknowledge that this ideological package is a ticking time-bomb. Hence, this mini-book is not just a critique but a warning: longtermism is a radical ideology that could have disastrous consequences if the wrong people — powerful politicians or even lone actors — were to take its central claims seriously.

There are many different definitions of “longtermism,” all of which have in common a pivot toward taking seriously the long-term future of humanity.[2] This by itself sounds very appealing, and I believe it should sound this way. The world faces many problems that cannot be solved without thinking hard about the future — not just out to the next quarterly report, or the next election, or the lifetime of one’s grandchildren, but centuries hence. To overcome the “Great Challenges” facing our species, we need more foresight and forecasting, and more sober reflection on the potential causes and moral implications of human extinction.

Longtermism, though, goes far beyond a simple shift away from the myopic, short-term thinking that plagues our contemporary milieu. In what follows, I will focus on a group of ideas that have greatly shaped contemporary longtermist ideologies. We can label this Bostromism after its progenitor, the Oxford philosopher Nick Bostrom. It is this vision of what humanity’s future ought to be that I worry about. It is a vision that, as we will see, commands us to subjugate nature, maximize economic productivity, colonize space, build vast computer simulations, create astronomical numbers of artificial beings, and replace humanity with a superior race of radically “enhanced” posthumans. Its basic tenets imply that the worst atrocities in human history fade into moral nothingness when one takes the big-picture view of our cosmic “potential,” that preemptive war can be acceptable, that mass invasive surveillance may be necessary to avoid omnicide, and that we should give to the rich instead of the poor. On this view, however bad worldwide poverty and factory farming may be, solving these ongoing global catastrophes isn’t among our top five global priorities. In a catch-22, Bostromism adds that not developing technology would itself constitute an existential catastrophe, even though the primary reason we face an estimated 20-percent chance of extinction this century is “technological progress.”

In many cases, these claims are explicit in the writings of Bostrom and other longtermists. No inference is necessary: they are right there in black and white. I know because until recently I was an enthusiastic participant in the research community, even writing the first introductory textbook on existential risks. But the more I worked on the topic, and the longer I spent reflecting on its underlying assumptions, the clearer it became that the nucleus of Existential Risk Studies — the Bostromian version of longtermism — could justify a wide range of unthinkable crimes against humanity. My sense of the many dangers connected to this way of thinking about morality and the future was further reinforced by previous research that I’d conducted on apocalypticism and religious eschatology (where “eschatology” roughly means “the study of the world’s end”). The fact is that longtermism has strong millennialist tendencies. If history is our guide, this makes it vulnerable to flipping from a passive mode into an active, violent mode of bringing about the end of the world — that is, ushering in the techno-utopian world described above.

Indeed, the parallels between apocalyptic religion and longtermism are striking. For longtermists, we stand at the most pivotal moment in human history — “the Precipice,” as Ord calls it — that will determine whether the future is filled with near-infinite amounts of goodness or an empty vacuum of unforgivable moral ruination. This century is the “Grand Battle” that must be won at all costs — a directive given to us not by God but by the utilitarian imperative to maximize value as seen from “the point of view of the universe.” If we win this battle, then the probability of extinction will drop close to zero and the paradise described above will be within reach. If we lose it, then all will be lost.

Because I want to keep everything as short as possible, the following chapters will outline the bare minimum of what readers need to know about the two main gears in the Bostromian machine: transhumanism and “total” utilitarianism. I will then piece together how this position could pose profound dangers.


[1] I put “intellectuals” in scare quotes because Sam Harris has propagated a considerable number of unscientific claims about race, IQ, feminism, and other topics. For an amusing collection of some fairly horrendous statements by Harris, click here.

[2] The word “longtermism” has been around for a while, but the sense employed by Effective Altruists (EAs) is novel. Consequently, there is ongoing debate about how exactly it should be defined. EA philosophers have distinguished between, for example, “longtermism,” “strong longtermism,” “very strong longtermism,” “axiological strong longtermism,” and “deontic strong longtermism.” Because the meaning of the term remains unsettled, “longtermism” is a moving target. It might even be that there are some variants of longtermism that dodge the criticisms leveled below. See also this introductory article by Fin Moorhouse.

