Chapter 8: The Wrong Reflection?

Longtermism is one of the three main cause areas of the Effective Altruism (EA) movement.[19] Oddly enough, the other two major cause areas are alleviating global poverty and eliminating factory farming. There is thus a direct tension between longtermism, on the one hand, and these other two cause areas, on the other, since longtermism directs attention and resources away from present suffering and toward the far future. In some cases, the tension is resolved explicitly: Beckstead, for example, argues in his dissertation that saving rich lives is “substantially more important” than saving poor lives, for the sake of the greater good over the extremely long term.

Others appear more tentative in their endorsement of longtermism. We should, they claim, take seriously the notion of normative uncertainty: the possibility that our normative beliefs, including our moral beliefs, contain fundamental errors. Yet leading longtermists like Bostrom and Ord are clear that at least some normative beliefs are non-negotiable. They are not up for debate. One example is “technological progress.” This is so central to the longtermist vision that Bostrom identifies the cessation of further “progress” as an existential catastrophe that would instantiate the “permanent stagnation” failure mode listed above. Ord strongly agrees: “I don’t for a moment think we should cease technological progress,” he writes. “Indeed if some well-meaning regime locked in a permanent freeze on technology, that would probably itself be an existential catastrophe, preventing humanity from ever fulfilling its potential.”

There are two main reasons that longtermists hold this view. The first concerns another non-negotiable commitment: humanity must do everything in its power to reach its “potential.” For Bostrom, this means subjugating the natural world, maximizing economic productivity, simulating trillions (etc.) of conscious beings, and so on. For Ord, what constitutes our “potential” should be decided during a period that he calls the Long Reflection, which he imagines commencing after the more immediate task of establishing Existential Security. I find these ideas so implausible that I won’t discuss them here.[20] The point is that striving to fulfill our “potential” is not debatable, nor is the assertion that technology is the vehicle that will deliver us to this destination. As Ord puts it, “the best futures open to us — those that would truly fulfill our potential — will require technologies we haven’t yet reached.”

The second concerns the fact that we live in a hazardous universe — a veritable haunted house cluttered with death traps both above our heads and below our feet. This leads Ord to conclude that “without further technological progress we would eventually succumb to the background of natural risks such as asteroids.” Technology, then, is necessary to avoid the otherwise inevitable extinction of humanity due to natural threats like asteroid and comet impacts, supervolcanic eruptions, gamma-ray bursts, galactic center outbursts, and so on. Yet at the same time, everyone agrees that by far the greatest source of danger to our collective survival is technology itself. Bostrom makes the point like this:

The great bulk of existential risk in the foreseeable future is anthropogenic; that is, arising from human activity. In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology. As our powers expand, so will the scale of their potential consequences — intended and unintended, positive and negative.

Indeed, many scholars within Existential Risk Studies agree that the probability of human extinction or civilizational collapse this century is significant. For example, Bostrom writes that his “subjective opinion is that setting this probability [of an existential catastrophe] lower than 25% would be misguided, and the best estimate may be considerably higher.” Later, during a TED talk, he claimed that “assigning a less than 20 percent probability would be a mistake in light of the current evidence we have.”[21] In 2008, an informal survey of experts conducted by the Future of Humanity Institute put the median estimate of annihilation before 2100 at 19 percent. And in a 2017 interview, Ord says that because of “radical new technology,” humanity has a 1 in 6 chance of not surviving this century. Others, like Lord Martin Rees, believe that these new technologies give civilization a mere 50–50 chance of making it to the twenty-second century. A coin flip! Pause for a moment to allow these numbers to percolate between your wriggling neurons.

Now compare these estimates of catastrophe to the likelihood of extinction caused by natural threats. The most probable threat comes from supervolcanoes, which erupt on average once every 50,000 years. A supervolcano can spew sulfate aerosols into the stratosphere, which then spread around the world and block incoming solar radiation. This causes a decline in photosynthesis and the possible collapse of food chains, leading to species extinctions. Yet humanity has survived two supervolcanic eruptions during the past 200,000 to 300,000 years, our species’ lifetime so far: the Toba eruption roughly 75,000 years ago and the Oruanui eruption some 26,500 years ago. What about an asteroid or comet impact, the other most probable threat? According to Bostrom, “this particular risk turns out to be very small. An impacting object would have to be considerably larger than 1 km in diameter to pose an existential risk. Fortunately, such objects hit the Earth less than once in 500,000 years on average.”

So, think about the situation: without technology, we are vulnerable to (a) supervolcanoes, which explode every five hundred centuries on average and two of which we have already survived with only stones and fire, and (b) impactors, which strike Earth every five thousand centuries on average. In contrast, because of technology, the probability of total human annihilation, according to longtermists themselves, hovers between 16.6 and 30 percent this century.
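To see just how lopsided this comparison is, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration, not the longtermists’ calculation: it treats eruptions and impacts as Poisson processes with the average intervals quoted above and asks how likely it is that even one such event occurs in a given century — which generously overstates the natural extinction risk, since merely experiencing the event (as we have, twice) is here counted as doom.

```python
import math

def chance_per_century(avg_interval_years: float) -> float:
    """Probability of at least one event in a 100-year window,
    modeling occurrences as a Poisson process."""
    expected_events = 100.0 / avg_interval_years
    return 1.0 - math.exp(-expected_events)  # P(N >= 1) = 1 - e^(-rate)

print(f"Supervolcanic eruption: {chance_per_century(50_000):.2%}")   # ~0.20%
print(f">1 km impactor:         {chance_per_century(500_000):.2%}")  # ~0.02%
print(f"Ord's 1-in-6 estimate:  {1/6:.2%}")                          # ~16.67%
```

Even on this deliberately pessimistic reading, the natural risks that technology is supposed to save us from are two to three orders of magnitude smaller, per century, than the technological risks the longtermists themselves estimate.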

Arguing that we need more technology is just nuts. The more technological we have become, the closer to self-annihilation we’ve inched. In Bostrom’s words, “with the exception of a species-destroying comet or asteroid impact (an extremely rare occurrence), there were probably no significant existential risks in human history until the mid-twentieth century.” The implication here could not be more obvious — and there is no reason to believe the trend will reverse in the future.[22] And yet longtermists like Ord and Bostrom dig their heels in and dogmatically assert that more technology is the answer, insisting that “we should not blame civilization or technology for imposing big existential risks,” even though civilization and technology are responsible for the extremely dire predicament in which we find ourselves. Imagine boarding a plane and being told that it has a 20 percent chance of crashing. Would you get off? Sorry, let me rephrase that: would you get off by running or sprinting? This is humanity’s situation right now — except that we are already 35,000 feet in the air, thanks to the triumphant strides of “technological progress” over the past seven decades.[23]

The craziness of this plight is not lost on all technophilic transhumanists. For example, Kurzweil writes the following in The Singularity Is Near: “Imagine describing the dangers (atomic and hydrogen bombs for one thing) that exist today to people who lived a couple of hundred years ago. They would think it mad to take such risks.” Yet Kurzweil tries to undermine this point with a fallacious argument that others, including Bostrom and Ord, make as well. “How many people,” Kurzweil writes, “in 2005 would really want to go back to the short, brutish, disease-filled, poverty-stricken, disaster-prone lives that 99 percent of the human race struggled through a couple of centuries ago?” Similarly, Ord contends that

technological progress has been one of the main sources of our modern prosperity and longevity — one of the main reasons extreme poverty has become the exception rather than the rule, and life expectancy has doubled since the Industrial Revolution. Indeed, we can see that over the centuries all the risks technology imposes on humans have been outweighed by the benefits it has brought.

First of all, the point of reference should not be “a couple of centuries ago” or “since the Industrial Revolution.” As the anthropologist Mark Cohen writes in Health and the Rise of Civilization, “a good case can be made that urban European populations of that period may have been among the nutritionally most impoverished, the most disease-ridden, and the shortest-lived populations in human history.” In fact, the Neolithic Revolution resulted in a significant decline in human health, as evidenced by a drop in the average height of populations; it was not until the mid-twentieth century that populations in the affluent West regained their lost verticality. Jared Diamond may not be off the mark when he describes the invention of farming as “the worst mistake in human history.”

Second, to say that extreme poverty is now the exception depends on how one defines “extreme poverty.” Some identify it as living on less than $1.90 per day, which places some 734 million people — more than the total number of people on Earth prior to the year 1700 — in the category. But this cut-off is arbitrary. As Jason Hickel observes,

the UN’s FAO says that 815 million people do not have enough calories to sustain even “minimal” human activity. 1.5 billion are food insecure, and do not have enough calories to sustain “normal” human activity. And 2.1 billion suffer from malnutrition. How can there be fewer poor people than hungry and malnourished people? … Lifting people above this line doesn’t mean lifting them out of poverty, “extreme” or otherwise.

Third, whatever “progress” humanity may have made with respect to its own desire to maximize economic growth, consume more resources, make money, and so on, our impact on the environment has been nothing short of catastrophic. The data here are truly staggering, and much too numerous to survey in the present chapter. (See chapter 7 of my book The End for some mind-blowing statistics about how bad the environmental crisis is today.) Suffice it to say that when Ord writes that “the track record of technological progress and the environment is at best mixed,” he commits the rhetorical crime of prevaricating. The record is not in any way “mixed.” It is unambiguously horrendous.

Fourth, it’s astounding to see someone wax poetic about “progress” in the very same books and papers that identify science and technology as the main reason we face unprecedented threats to our survival. In one breath it’s “The world is so much better today than ever before!” while in another it’s “We now stand closer to the Precipice of total annihilation than we ever have!” The Doomsday Clock, for example, is currently set to 100 seconds before midnight, or doom. This clock was created in 1947, two years into the Atomic Age. But if the Doomsday Clock had existed prior to, say, the twentieth century, it would have been set to something like 100 seconds after the previous midnight. In other words, the minute hand would have been rewound almost 24 hours — that’s how low the overall risk of extinction was before the twentieth century.
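For what it’s worth, the arithmetic behind “almost 24 hours” checks out. A trivial sketch, using the same illustrative 100-second figure for the hypothetical pre-twentieth-century setting:

```python
SECONDS_PER_DAY = 24 * 60 * 60        # 86,400
# From 100 seconds after one midnight back to 100 seconds before the next:
rewind = SECONDS_PER_DAY - 100 - 100  # 86,200 seconds
print(f"{rewind / 3600:.2f} hours")   # 23.94 hours rewound
```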

How can anyone think that we’ve made progress overall when the chance of extinction is orders of magnitude higher than ever before in our species’ history?[24] How are the risks of technology “outweighed by the benefits it has brought” when we stand at the crumbling edge of the proverbial Precipice? If human survival were what mattered, sane people would be screaming in unison that we need less rather than more technology.[25] But survival matters to longtermists only as a means to the end of maximizing impersonal value, value, value. This is why proceeding through the ever-more labyrinthine obstacle course of existential hazards before us is worth the risk, for them, of total human annihilation. The more influential longtermism becomes, the harder it will be for the rest of us to change this.


[19] A peculiar community of people who have actually encouraged young people to work on Wall Street so that they can donate large sums of money to charity. As Will MacAskill, one of the most prominent EAs, puts it, “To save the world, don’t get a job at a charity; go work on Wall Street.” For the record, I think this is very bad advice. See also Amia Srinivasan’s excellent critique of the idea.

[20] For one, so long as technology continues to be developed, there is no reason whatsoever to expect the level of existential risk to stabilize or decline. To the contrary, it is likely to increase.

[21] Bostrom lists a few other probability estimates from other scholars but, oddly, gets them almost entirely wrong. For example, he says that John Leslie “estimated a probability that we will fail to survive the current century: 50 percent. Similarly, the Astronomer Royal [i.e., Lord Martin Rees], whom we heard speak yesterday, also has a 50 percent probability estimate.” Leslie’s estimate was actually a 30 percent chance of extinction within the next five centuries, and Rees’ 50 percent estimate concerned civilizational collapse, not extinction.

[22] Many longtermists believe that once we spread beyond Earth, the total existential risk will sharply decline. The reason is that, just as the probability of extinction is inversely related to the geographical spread of a species (i.e., the more spread out a species is, the less chance that, say, a single natural disaster will eliminate it), the greater our cosmographic spread, the lower the chance that a single catastrophe will terminate our evolutionary lineage. But there are very strong reasons for believing that space colonization could greatly exacerbate the risk. The most authoritative account of this view is given in Daniel Deudney’s book Dark Skies. A summary of at least some of the key points can be found in this short article of mine. To date, no space expansionist (i.e., advocate of space colonization) has provided a convincing refutation of these points, so we should assume for the time being that there really is no “Planet B.”

[23] Intriguingly, there is one instance in Bostrom’s oeuvre in which he explicitly acknowledges that “progress” is the wrong word to use. In this paper, he writes: “It may be tempting to refer to the expansion of technological capacities as ‘progress.’ But this term has evaluative connotations — of things getting better — and it is far from a conceptual truth that expansion of technological capabilities makes things go better. Even if empirically we find that such an association has held in the past (no doubt with many big exceptions), we should not uncritically assume that the association will always continue to hold. It is preferable, therefore, to use a more neutral term, such as ‘technological development,’ to denote the historical trend of accumulating technological capability.” Yet he uses the term “technological progress” in subsequent articles.

[24] Worse, Bostrom seems to endorse a nightmarish form of ubiquitous, highly invasive state surveillance of individuals as part of his “preventive policing” proposal. He appears to believe that a “high-tech panopticon” of some sort will be necessary to prevent omnicide given the growing power and accessibility of dual-use emerging technologies.

[25] Or at least for now and the foreseeable future. We are simply too irresponsible — even for nukes. (It’s pure dumb luck that the Cold War never turned hot. There were so many near-misses, for example, that I personally have no doubt that if history were rewound to 1945 and played again just once or twice, civilization would not have survived.)

