In 1999, a young PhD candidate in philosophy named Nick Bostrom published an article in Mind entitled “The Doomsday Argument is Alive and Kicking.” The article asked whether probabilistic attempts to predict when the last human being would be born were reasonable. (They were, it argued.) The title, however, signaled something far more significant: the end of post–Cold War optimism. Human extinction was back on the menu.
In the years following the Mind article, Bostrom’s star would rise. He was instrumental in founding the Future of Humanity Institute, an academic think tank at Oxford devoted to preventing human extinction. Seven years later, his work would help inspire the founding of a second such institute at Cambridge, the Centre for the Study of Existential Risk. By 2015, Bostrom had made Foreign Policy’s “Top 100 Global Thinkers” list for the second time.
Largely thanks to Bostrom and a battery of his associates, the study of “existential risks” — threats that could bring about human extinction or permanent civilizational collapse — has become an interdisciplinary academic cottage industry. With acolytes ranging from the prominent astrophysicist and Astronomer Royal Martin Rees to the neuroscientist Anders Sandberg, the “x-risk” crowd has now spent the past two decades meticulously inventorying threats to our species, particularly those posed by artificial intelligence and emergent technologies.
X-risk preoccupations extend far beyond the ivory tower. Jaan Tallinn, formerly of Skype, co-founded both the CSER and the Future of Life Institute in Cambridge, Massachusetts, and has donated generously to the FHI and the Berkeley Existential Risk Initiative. The mercurial Elon Musk sits on the advisory board of both the CSER and the FLI and has donated millions to each. The Open Philanthropy Project, a charitable organization funded by Facebook co-founder Dustin Moskovitz, has pledged to donate over $15 million to the FHI. It is no exaggeration to say that billionaires are quietly bankrolling existential risk research.
Over the past summer, this behind-the-scenes activity played out in flamboyant fashion with a pair of private space launches spearheaded by Richard Branson and Jeff Bezos. Like Musk, Bezos in particular has long professed that private space colonization is necessary to safeguard humanity. In a July article for The Guardian, the Oxford historian Thomas Moynihan lampooned Silicon Valley’s obsession, arguing that aspiring “space barons” like Musk and Bezos are abdicating responsibility for sublunar crises here on Earth. His book, X-Risk: How Humanity Discovered Its Own Extinction, more meticulously locates this obsession within a longer history. Like much of his other public writing, it traces how our species “discovered its own extinction.” Paradoxically, however, the book is wildly Panglossian — within a few pages it becomes apparent that he shares the techno-optimism of the very space tycoons he criticizes. Although he is at pains to argue that existential risk “cannot be rejected as a Silicon Valley fad,” he nonetheless traffics in tried-and-true Silicon Valley rhetoric: that we must enthusiastically embrace promising new technologies to realize “the full scope of our potential” as a species. (Moynihan is the Future of Humanity Institute’s informal in-house historian, surely no accident.)
A case in point: he tells us that the “discovery” of human extinction was the ultimate triumph of Enlightenment rationalism. Thanks to a series of intellectual breakthroughs — and growing awareness that the cosmos might be devoid of intelligent life — modern science and Enlightenment philosophy confronted the fact that human survival is not preordained. We alone can secure our future: “Remember, the human is a being whose vocation […] is to liberate itself from its own extinction,” he intones. Yet, while X-Risk represents an exhaustive catalog of reflections on our species’s potential demise — spanning centuries, academic disciplines, and national borders — Moynihan’s history also contains a glaring, dangerous, and obviously intentional omission: the word “eugenics” does not appear once in 424 pages.
This is no mere oversight. His book systematically glosses over the fact that human extinction was a hobbyhorse of the eugenics movement, and it has now reemerged as a fascination of the contemporary existential risk crowd, often rebranded as “bioenhancement,” “transhumanism,” and even “new eugenics.” Indeed, although he avoids the word “eugenics,” his desire to rehabilitate eugenic discourse is clear: “[T]he pathway to the future — and to maturity over extinction — is the path of bioenhancement,” Moynihan proclaims near the end of his book. He then writes: “To truly assume maturity, perhaps, is to realize that we must leave Earth and our evolutionary past behind.”
Throughout X-Risk, he uses precisely such euphemisms. When discussing the work of the turn-of-the-20th-century British geneticist and early x-risk aficionado J. B. S. Haldane, for example, Haldane’s eugenicism is innocuously described as the “daring task of redirecting our own evolution.” Likewise, when Moynihan suggests that Haldane’s championing of “ectogenesis” — the artificial development of embryos outside the womb — actually anticipates the radical feminism of Shulamith Firestone, he conveniently fails to mention that Haldane conceived of ectogenesis as a tool for eugenic advancement. It could be said that Musk’s intellectual antecedents are anxious British aristocrats who mulled civilizational decline well over a century ago — many of whom were willing to consider extreme strategies à la Musk. In short, neither Musk’s nor the Oxbridge think tanks’ interest is novel.
In the 19th-century Anglophone world, no single text did more to refashion thinking about human extinction than Darwin’s On the Origin of Species, published in 1859. It was often viewed as an “outrage” to humanity’s “naïve self-love” (as Freud famously described Darwin’s impact decades later). But an influential cross section of Darwin’s contemporaries and inheritors saw evolutionary science not as a scythe that cut humankind down to size but as heralding glorious possibilities for human supremacy. They accepted the assertion that human beings are precarious animals vulnerable to extinction, but they rejected the idea that the human species is not uniquely privileged among organic beings. After all, we are the only animal capable of being paranoid about our evolutionary future, and potentially able to safeguard it.
Darwin argued that extinction is mundane: a regular feature of the evolutionary process. By making extinction boring — the result of unexceptional changes accumulating over vast stretches of time, rather than of unprecedented and uncontrollable upheavals (as had previously been believed) — evolutionary theory made it possible to view extinction as a long-term risk that might be anticipated, and thus strategized against, in advance. Early Darwinian science fiction featured the looming threat of interspecies competition, but in fact many mainstream British intellectuals believed that the greatest danger to human survival might be the human species itself. Darwin’s cousin Francis Galton capitalized on this evolutionary fever, and fear, kicking off the eugenics movement with his 1883 book, Inquiries into Human Faculty and Its Development. In the decades to follow, Galton’s movement would establish a particularly strong foothold in the United States, where it was dominated by a noxious combination of economic anxiety and racialized nativism. It was British eugenics that was marked by grander ambitions. The English physician Caleb Saleeby spoke in 1909 for a growing number of British intellectuals when he declared that “eugenics is going to save the world.” This conviction would rapidly coalesce after World War I into the paranoid worldview that inflects existential risk discourse today: the belief that the human species is the only species that has evolved to bear the moral imperative of survival, whatever the cost.
In 1923, J. B. S. Haldane wrote a short book, Daedalus; or, Science and the Future. The first installment in the English publisher Kegan Paul’s infamous “To-day and To-morrow” series, it imagines a future remade by eugenic enhancement. “Had it not been for ectogenesis, there can be little doubt that civilization would have collapsed […] owing to the greater fertility of the less desirable members of the population in almost all countries,” writes Haldane. The book ends with an unsettling prediction: “The scientific worker of the future will more and more resemble the lonely figure of Daedalus as he becomes conscious of his ghastly mission, and proud of it.” The following year, the prominent Oxford philosopher F. C. S. Schiller would pen the next “To-day and To-morrow” contribution. In Tantalus, or The Future of Man, he warns that humanity’s “future has always been precarious […] because it has always been uncertain whether [our species] would use its knowledge well or ill, to improve or to ruin itself.” Biological knowledge — politically mediated through eugenics — promised a means by which this improvement might be achieved and existential ruin avoided. Anthony Ludovici, a fellow British philosopher and translator of Nietzsche, would similarly maintain that the stakes of “eugenic mating” were nothing less than “the survival of human life in a desirable form.” He offered these insights in a 1926 book (part of the same series) on “woman’s future and future woman” in which he opined that birth control should ideally be replaced by “some kind of controlled and legalized infanticide.”
The year 1926 would prove a banner year for eugenicists. Ronald Fisher, a major figure in the “modern synthesis” of Darwinian evolution and Mendelian genetics, would write an ambitious essay entitled “Eugenics: Can it Solve the Problem of Decay of Civilizations?,” which argued that guided evolution might hold the key to permanently safeguarding society from disaster. These concerns were also taken up by the English schoolteacher Leonard Huxley. The son of Thomas Huxley, Leonard would tarnish the family name with his book Progress and the Unfit, in which he compared Western Europe to declining Rome, with failing racial hygiene as the chief threat to the survival of England and perhaps human civilization itself. This worry was likewise shared by Leonard Darwin, son of Charles and president of the British Eugenics Society, who in 1925 warned that biological decay would drive civilizational collapse unless the reproduction of the unfit was curbed by eugenic means.
As the decade wound to a close, such pronouncements only grew more dire and disturbing. Charles John Bond, a doctor and euthanasia enthusiast, would deliver the 1928 Galton Lecture in which he proclaimed that the biologically feeble poor were akin to “parasitic cancer cells” that threatened humanity by out-reproducing the wealthy. Assuming that “civilized” temperament and fecundity were inversely related, he speculated that increasing the fertility of the elites “might prove to be the deciding factor in race survival in the future.” As for the lower classes? “We ought to welcome the extinction of the degenerate race,” Bond counseled.
Olaf Stapledon, the prodigiously talented author of a number of eugenics-soaked sci-fi masterpieces — much admired by Moynihan in X-Risk — would deliver a national broadcast on “The Remaking of Man” in 1931. The address waxed rapturous about our possible eugenic futures while also warning that accidents or terrestrial catastrophes “may easily disinfect the earth of the microbe, man.” In a modestly titled 1934 tract, If I Were Dictator, Stapledon’s friend and esteemed biologist Julian Huxley — son of Leonard, brother of the novelist Aldous — continued this line of inquiry. There, the biologist argued that, while “evolution […] may go backwards, or spread sideways […] or become fossilized,” human intelligence spares us this fate. “Other organisms are the passive subjects of evolutionary forces,” Huxley wrote, but “man can become the conscious trustee of Evolution.” Two years after this pronouncement, in his own Galton Lecture, Huxley would ominously declare that if eugenics was not widely adopted: “[W]e can be sure of this alarming fact. […] Humanity will gradually destroy itself.” Less than a decade later, in a bid to end a world war marked by the deranged application of eugenics, two bombs would fall out of a clear blue Japanese sky, ushering humanity exponentially closer to the very destruction Huxley prophesied.
In fleshing out this partial history of existential risk’s entanglement with eugenics, my aim is not to paint contemporary x-risk researchers as mustache-twirling villains. I had the pleasure of speaking on a panel with Moynihan a few years ago — he came across as thoughtful, modest, and unlikely to be engaged in a malicious conspiracy. Likewise, it seems a stretch to suggest that Nick Bostrom’s philosophy “contains all the ingredients necessary for a genocidal catastrophe,” as the philosopher Phil Torres recently claimed in a rather hyperbolic article for Current Affairs.
What I am saying, though, is that we cannot claim to take existential risk seriously — and meaningfully confront the grave threats to the future of human and nonhuman life on this planet — if we do not also confront the fact that our ideas about human extinction, including how human extinction might be prevented, have a dark history. Although eugenics was not a univocal movement — in Britain or internationally — the thinkers I highlight above all tended to share the assumption that Darwinian biology (and its synthesis with genetics) marked a turning point, a view shared by most x-risk scholars. This fact bears scrutiny, especially given the growing penetration of tech billionaires into this x-risk space.
In the broadest terms, people like Moynihan, Bostrom, and even Musk are united by the conviction that we — or, rather, they — have a profound moral imperative to prevent human extinction and provide a livable world for far-future human beings. These are laudable aims. Yet, we should also remember that many early eugenicists had noble aspirations: Haldane was a staunch anti-imperialist who died an Indian citizen, and Julian Huxley was a vocal opponent of racialized eugenics who spoke vociferously against the Nazi program. Many of them viewed their work as part of the struggle against capitalistic inequality, and yet they also believed in the liberatory potential of eugenics. Any history that attempts to enlist them as Moynihan does in the fight against human extinction must reckon with this complicated legacy.
Today, “man” is no longer the “political animal,” as Aristotle once pronounced. Rather, the human species has become the paranoid animal: the only form of intelligent life able to fear for its future, and now hypothetically (at least according to x-riskers like Musk) capable of using that fear to secure its long-term survival. It remains to be seen whether human beings at large are capable of leveraging this paranoia in the service of a just future — for those currently inhabiting a rapidly warming Earth as well as those to come — or whether “the good of the species” will continue to be the rallying cry of the elite who desire a world remade in their own image. In any case, any attempt to secure a future must aim to learn from past hubris.
Here, it is again worth recalling Julian Huxley, who delivered a speech at Madison Square Garden not long after human shadows were transposed onto Hiroshima concrete. Speaking to an audience of some 18,000 assembled there for a “crisis meeting” on nuclear weapons, the biologist radiated optimism. In a marked departure from his earlier premonitions of dysgenic collapse, Huxley now opined that the species might yet be saved from existential peril by repurposing the split atom for the common good. A few hundred atomic bombs might usefully be dropped on the “polar regions,” Huxley cheerfully advised the crowd. He reasoned that the resulting ice melt would transform the Earth, leading to a warmer, more pleasant climate.
Tyler Harper is a literary scholar working at the intersection of environmental studies, philosophy, and the history of science. His current book project, provisionally entitled “The Paranoid Animal: Human Extinction Before the Bomb,” examines how British literary figures, scientists, and social theorists engaged with the concept of human extinction prior to the nuclear age. This article was originally published on LARB.