According to Bostrom’s paper “A History of Transhumanist Thought,” first published in 2005, the term “transhumanism” was coined by Julian Huxley in his 1927 book Religion without Revelation. But this is not true. The first time that Huxley used the term was in a 1951 lecture, although it had been employed in roughly its contemporary sense even earlier, in 1940, by a Canadian historian and philosopher. Nonetheless, Huxley clearly stated in his 1957 book New Bottles for New Wine, building upon ideas expounded in 1927, “the human species can, if it wishes, transcend itself — not just sporadically, an individual here in one way, an individual there in another way — but in its entirety, as humanity.” Huxley was a leading eugenicist who identified forced sterilization, demographics, and scientific knowledge of the genetic basis of intelligence as the means by which humanity could “transcend itself,” by which he meant “realizing … new possibilities of and for his human nature.” Obviously, this way of thinking opened the door to the Nazi atrocities of the Second World War, in which Hitler’s diabolical regime forcibly sterilized more than 400,000 people.
After WWII, most people wanted nothing to do with eugenics. But in the waning decades of the twentieth century, this began to change. The catalyst was the exponential growth of genetic engineering technologies along with grand proclamations by futurists about the promises of nanotechnology and artificial intelligence (AI). This foregrounded a new method of human perfectibility: integrating biology and technology, organism and artifact, to create what two authors, Manfred Clynes and Nathan Kline, called “cyborgs” in 1960. Consequently, a new transhumanist movement began to coalesce in the late 1980s, enabled by the Internet; its members initially referred to themselves as extropians. They imagined completely reengineering the human being to yield one or more new superintelligent, immortal, ultra-wise, hyper-moral species of posthuman beings — call them Homo cyborgensis, Homo supersapiens, or to borrow Yuval Noah Harari’s term, Homo deus, meaning “human god.” As Bostrom wrote in 2003,
our own current mode of being … spans but a minute subspace of what is possible or permitted by the physical constraints of the universe … It is not farfetched to suppose that there are parts of this larger space that represent extremely valuable ways of living, relating, feeling, and thinking.
This triggered a flurry of techno-utopian visions of a posthuman future in which our progeny live lives overflowing with pleasure and ecstasy. Bostrom offers a tantalizing glimpse of this magical future in his “Letter from Utopia,” first circulated in 2006 and later updated in 2020. It is composed by a fictional posthuman to his human ancestors (us), and hence is addressed “Dear Human” and signed “Your Possible Future Self.” The posthuman author opens with the rhetorical question: “How can I tell you about Utopia and not leave you mystified? With what words could I convey the wonder? My pen, I fear, is as unequal to the task as if I had tried to use it against a charging war elephant.” An effusive ballet of phantasmagoric imagery follows:
My mind is wide and deep. I have read all your libraries, in the blink of an eye. I have experienced human life in many forms and places. Jungle and desert and crackling arctic ice; slum and palace and office, and suburban creek, project, sweatshop, and farm and farm and farm, and a factory floor with a whistle, and the empty home with long afternoons. I have sailed on the seas of high culture, and swum, and snorkeled, and dived. Quite some marvelous edifices build up over a thousand years by the efforts of homunculi, just as the humble polyps in time amass a coral reef. And I’ve seen the shoals of biography fishes, each one a life story, scintillate under heaving ocean waters.
The inspired weaver of words continues:
You could say I am happy, that I feel good. That I feel surpassing bliss and delight. Yes, but these are words to describe human experience. They are like arrows shot at the moon. What I feel is as far beyond feelings as what I think is beyond thoughts. Oh, I wish I could show you what I have in mind! If I could but share one second with you!
How can humanity make this marvelous future a reality? How can we build this techno-utopian playground awash “in the pulsing ecstasy of love”? The posthuman tells us: “To reach Utopia, you must discover the means to three fundamental transformations.” The first is that we must become immortal through life-extension technologies, which could include biomedical interventions in our bodies or uploading our minds to a computer. The second is that we must become superintelligent, since “it is in the spacetime of awareness that Utopia will exist.” And the third is that we must elevate well-being, which a “hedonist” would equate with pleasure. “A few grains of this magic ingredient,” the posthuman writes, “are worth more than a king’s treasure.”
Bostrom wasn’t the only transhumanist with intoxicating hopes for a better world to come — a heavenly otherworld built by science and technology rather than supernatural forces. The other most prominent transhumanist so far this century is Ray Kurzweil, author of The Singularity Is Near (2005). Whereas the early transhumanists called themselves “extropians,” Kurzweil espoused a version called “singularitarianism,” which emphasized the historical discontinuity that creating advanced AI systems would bring about. Bostrom himself has frequently suggested that the creation of superintelligent machines will result in either a utopian paradise or total human annihilation — a drastic situation reminiscent of the cosmic battles described in religious texts where the stakes are all-or-nothing. But Kurzweil made the stronger claim that in 2045, exactly, “human life will be irreversibly transformed” as human and machine intelligence merge, resulting in non-biological forms of intelligence dominating the universe. At the same time, the rate of technological progress will accelerate beyond human comprehension. This is called the “technological Singularity” — or, derogatorily, the techno-rapture — and it will make it possible for us
to transcend our frail bodies with all their limitations. Illness, as we know it, will be eradicated. Through the use of nanotechnology, we will be able to manufacture almost any physical product upon demand, world hunger and poverty will be solved, and pollution will vanish. Human existence will undergo a quantum leap in evolution. We will be able to live as long as we choose. The coming into being of such a world is, in essence, the Singularity.
Many other transhumanists put forward their own wide-eyed prognostications of the promises of tomorrow. For example, David Pearce — who co-founded the World Transhumanist Association with Bostrom in 1998 — argues that we should reengineer not just the human organism but all sentient beings in the biosphere, with the aim of abolishing suffering throughout the living world. He calls this the Abolitionist Project.
Along similar lines, the AI theorist Ben Goertzel defends an ideology called Cosmism, which has roots in the work of Nikolai Fyodorovich Fyodorov, a nineteenth-century Russian philosopher who advocated for radically extending our lives, resurrecting the dead, and other marvels. On Goertzel’s view, Cosmism affirms the desirability of pursuing super-powerful advanced technologies to enable humans to fully merge with machines, upload our minds, colonize the visible universe, engage in “spacetime engineering,” devise new and better ethical systems, and “reduce material scarcity drastically, so that abundances of wealth, growth, and experience will be available to all minds who so desire.” The result is that, Goertzel writes, “all these changes will fundamentally improve the subjective and social experience of humans and our creations and successors, leading to states of individual and shared awareness possessing depth, breadth and wonder far beyond that accessible to ‘legacy humans.’”
This is the first component of Bostromism. It offers one reason that failing to develop powerful new emerging technologies would constitute a “disaster.” As Bostrom writes in his “Transhumanist Values” paper, the “core value” of transhumanism is “having the opportunity to explore the transhuman and posthuman realms.” Exploring such realms is the only way that humanity can attain the techno-utopian world so delicately depicted by the posthuman author of Bostrom’s “Letter from Utopia.” Yet to realize this core value, we must develop what the transhumanist Mark Walker calls “person-engineering” technologies associated with genetic engineering, nanotechnology, and AI. Hence, the only way forward is more technology, despite the unprecedented hazards that such technologies will introduce.
For a good critique of one aspect of transhumanism, see “The Irrationality of Transhumanists” by Susan Levin.