Chapter 4: Astronomical Value and Existential Risks

The question then is: How many future people could there be? In short, a lot. The first to crunch the numbers was Carl Sagan in a 1983 article published in Foreign Affairs. He calculated that if humanity remains on Earth and survives “over a typical time period for the biological evolution of a successful species,” which he specified as 10 million years, and if the human population remains stable at 4.6 billion (the number of people in 1983), then some 500 trillion people may yet come into existence. This is why he argued that “the stakes are one million times greater for extinction than for the more modest nuclear wars that kill ‘only’ hundreds of millions of people.”
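For readers who like to check the arithmetic, here is a minimal sketch of Sagan's back-of-envelope calculation. The stable population and the 10-million-year horizon are the figures quoted above; the roughly 100-year average lifespan and the 500 million nuclear-war deaths are my own illustrative assumptions, since the passage gives only "hundreds of millions" and no lifespan.

```python
# Back-of-envelope reconstruction of Sagan's 1983 estimate.
stable_population = 4.6e9   # people alive at any given time (world population in 1983)
time_horizon = 10e6         # years: Sagan's "typical" lifespan of a successful species
avg_lifespan = 100          # years per person: an assumed round figure, not Sagan's stated value

# Total person-years available, divided by the years each person uses up:
future_people = stable_population * time_horizon / avg_lifespan
print(f"future people: {future_people:.1e}")   # ~4.6e14, i.e., roughly 500 trillion

# Sagan's comparison: a "modest" nuclear war killing hundreds of millions
# (taken here as 500 million) versus extinction foreclosing all future lives.
nuclear_war_deaths = 5e8
print(f"stakes ratio: {future_people / nuclear_war_deaths:.0e}")  # ~10^6, a million times greater
```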

But why would we remain on Earth? If what matters is maximizing value-containers, why not spread into our future light cone, that is, the region of spacetime that we could, in principle, reach from here traveling at or below the speed of light? In a 2003 paper in the Journal of Transhumanism, which seems to have drawn from an earlier paper by Milan Ćirković, Bostrom concluded that about 10²³ biological humans could come to exist within the Virgo Supercluster alone.[11] The Virgo Supercluster is a giant cosmic structure that contains about 100 galaxy groups, one of which is our own Local Group, which in turn includes at least 80 distinct galaxies, one of which is our own Milky Way. Yet there are some 10 million superclusters in the observable universe, and while not all of these may be reachable given the expansion of the universe, the mathematical implication is clear: the future population of intergalactically spacefaring posthumans could be ginormous.

But why would we remain biological? If simulated beings can have conscious experiences of pleasure, then they can be value-containers no less than we are. So, imagine this: our descendants fly out into the cosmos and convert every exoplanet they encounter into computronium, which refers to a configuration of matter and energy that is optimized to perform computational tasks like — drum roll — simulating conscious minds. These descendants then design high-resolution simulation worlds into which they plop massive numbers of simulated beings who live, as Bostrom puts it, “rich and happy lives while interacting with one another in virtual environments.” (Note that Bostrom never tells us why these people, perhaps knowing full well that they’re living in simulated worlds, are happy. Maybe they’re utilitarians who understand that it’s their moral duty to be happy for the sake of adding more intrinsic value to the universe. Or maybe there is some sort of digital Prozac that they can get from their local digital pharmacy.) If this were to happen, Bostrom joyfully reports that some 10⁵⁸ conscious beings — that’s a 1 followed by 58 zeros! — with lifespans of 100 years could exist thanks to these simulations, although “the true number is probably larger.” The point, as he noted in 2003, is “not the exact numbers but the fact that they are huge.”
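To get a feel for how quickly these estimates balloon, here is a small sketch that simply compares the three figures quoted so far (Sagan's roughly 500 trillion, Bostrom's 10²³ and 10⁵⁸); the only thing the code adds is the order-of-magnitude gap between each step.

```python
import math

# The three estimates quoted above, as orders of magnitude:
sagan_earthbound = 5e14   # ~500 trillion people, Earth-bound, 10-million-year horizon
virgo_biological = 1e23   # biological humans in the Virgo Supercluster (Bostrom 2003)
simulated_minds = 1e58    # simulated beings with 100-year lifespans (Bostrom's figure)

# Orders of magnitude separating each step:
print(math.log10(virgo_biological / sagan_earthbound))  # ~8.3: a few hundred million times more
print(math.log10(simulated_minds / virgo_biological))   # ~35: a further 10^35-fold jump
```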

What does all of this mean? It means that the total amount of intrinsic value that could come to exist within our future light cone could be astronomically large. I call this the “astronomical value thesis.” It further implies that, since morality is built upon value according to utilitarianism, we have an overriding, profound moral obligation to ensure that as many as possible of these currently non-existent, possibly never-existent people are actually born.

The next question is practical: how exactly could we accomplish this? We have already mentioned that one important step is colonizing space. Without doing this, the total human or posthuman population will be severely limited by the carrying capacity and resources of our tiny planetary oasis. But is there more?

Bostrom answers this question in his 2013 paper titled “Existential Risk Prevention as Global Priority.” (Note that the paper’s title on Bostrom’s website is different.) To maximally maximize intrinsic value, we must reach and sustain what he calls “technological maturity,” which denotes “the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved.” Once we have increased economic productivity and subjugated the natural world to its physical limits (insofar as this is feasible), we will be able to maximally harness all of the universe’s vast resources — our so-called “cosmic endowment” of negentropy — which await our eager plundering. With all of this free energy in hand, with every star and galaxy and supercluster subdued within the kingdom of posthuman hegemony, the grand desiderata of transhumanism and utilitarianism can be fulfilled. That is, technological maturity would allow us to explore every corner of the posthuman realm (the core value of transhumanism) and run the maximum number of simulations full of trillions and trillions (and trillions and trillions) of conscious beings.[12]

This leads Bostrom to define “existential risk” in terms of technological maturity. In essence, an existential risk is any future event that would either permanently prevent us from reaching technological maturity or cause us to lose technological maturity after achieving it. The most obvious way that this could happen is for humanity to go extinct. But there is also a plethora of scenarios in which humanity survives yet an existential catastrophe still occurs. Bostrom thus proposes a four-part classification of existential risk “failure modes,” which goes as follows (to quote him):

Human extinction: Humanity goes extinct prematurely, i.e., before reaching technological maturity.

Permanent stagnation: Humanity survives but never reaches technological maturity.

Flawed realization: Humanity reaches technological maturity but in a way that is dismally and irremediably flawed.

Subsequent ruination: Humanity reaches technological maturity in a way that gives good future prospects, yet subsequent developments cause the permanent ruination of those prospects.

So, to sum up: transhumanism outlines a picture of what Utopia would look like for individuals. It is a place in which posthuman beings are bestowed with superintelligent minds, total control over their emotions, indefinitely long lifespans, experiences saturated with ecstasy, and other superhuman delights. Utilitarianism offers an account of “utopia” from the point of view of the universe. It is a configuration in which the cosmos is overflowing with intrinsic value, value, value, value — impersonally conceived. To realize these overlapping utopias, we must attain a stable state of technological maturity, and failing to do this would constitute an existential catastrophe — the worst possible outcome for not just humanity but the Sidgwickian universe itself.

Let’s now turn to some of the implications of this Bostromian view.

Previous chapter | Next chapter

Table of Contents

[11] Note that the Journal of Transhumanism is now called the Journal of Evolution and Technology.

[12] Bostrom gives mixed signals about whether technological maturity requires space colonization to have already happened. For example, he writes that “a technologically mature civilisation could (presumably) engage in large-scale space colonisation through the use of automated self-replicating ‘von Neumann probes.’” Yet it is unclear how we could attain “capabilities affording a level of economic productivity and control over nature that is close to the maximum that could feasibly be achieved” (italics added) without spreading through as much of our future light cone as possible.

