For as long as any of us have been alive, we have seen an ever-changing series of popular predictions for how industrial technology is about to destroy civilization. Nuclear war was supposed to kill everyone, whether by literally exploding all of civilization or by irradiating the world’s surface or by nuclear winter. Toxic pollution and resource depletion were supposed to leave the world a barren wasteland. Overpopulation was supposed to lead to mass starvation, universal resource wars, and the collapse of society. Global warming was supposed to make the planet too hot for human life. And today, many people fear that artificial intelligence will disassemble humanity for parts.
So far, however, civilization has not been destroyed. It seems the demand for techno-apocalypses is much greater than the supply. What’s going on here? Why are so many people always convinced that technology is on the cusp of destroying civilization?
Of course, each individual prediction of doom has its own internal reasoning, which should be evaluated apart from the broader trend. However, the sheer number of widely expected techno-apocalypses, and the similarities in how the ideas spread through society, suggest a common pattern at work, separate from the question of how plausible any particular apocalypse scenario might be.
To understand this, the first thing we have to look at is when and where this has been happening. Is this a general phenomenon that humans exhibit in all times and all places, or something that happens in all technological civilizations, or is it something specific to modern Western civilization?
It would be strange to find popular predictions of techno-apocalypse before rapid technological change became such a visible force, and indeed we find none. Before the modern era, the closest match is popular “millenarian” movements in Christian societies. These rhyme a little bit with modern techno-apocalypse, and the psychological effects on the believers are remarkably similar, but they’re not really what we’re looking for. For one thing, millenarian beliefs are about society being radically transformed into a permanent utopia rather than being destroyed, and for another, the transforming force is Christ bringing about the Last Judgment rather than technology. We’ll have to keep looking.
Do we see techno-apocalypticism emerge as soon as rapid technological advance sets in? Actually, no! If we look around the Industrial Revolution, starting around the 1770s or so, there is nothing of the sort. Only a few cutting-edge intellectuals started to realize how important technological progress would be. Ben Franklin watched the prototype hot air balloons and immediately realized that air power would someday transform war, but rather than feeling anxious about the destruction it would cause, he hoped it might “[c]onvinc[e] sovereigns of the folly of wars.” This futurism was unusual even among intellectuals like Franklin, and none of it filtered down to popular discourse—understandably, because industrial technology was not yet transforming the regular person’s daily life.
It was not until the Second Industrial Revolution, starting roughly around the 1860s, that technologies like trains and mass production and electric lights rapidly intruded on urban people’s daily lives, and regular people began to perceive technological progress as a major force in the world. Yet, while there was popular discourse about technological progress, we still do not see techno-apocalyptic anxieties. The only case that looks a little bit similar was the “coal question,” the idea that available supplies of coal—at the time the only industrial power source in use—would eventually run out and industrial civilization would go with it. The idea was common knowledge in intellectual circles, but made approximately no impression on the public. Intellectuals occasionally brought up the “coal question” during popular debates about high coal prices, but never with the palpable anxiety of 20th century writers talking about “peak oil” and the like, and it never achieved any traction beyond the sort of philosophical futurist who in 2026 has opinions about the “simulation hypothesis” and the “Fermi paradox.”
The Apocalypse of 1914
What was the first techno-apocalypse that achieved popular traction? It first emerged right after the First World War and abruptly dominated public discourse. Nearly every thinker and pundit anywhere in Western civilization suddenly worried that war with ever more powerful weapons would destroy civilization.
It’s no mystery where this idea comes from! Industrial war, with trucks bringing men and supplies to the front en masse, and machine guns mowing them down by the hundred, caused death and destruction on a scale no one had seen in living memory. While the per capita death toll from the World War was on par with earlier great power super-wars like the Seven Years’ War or the Napoleonic Wars, the World War affected people more profoundly.
Many people at the time believed that humanity was progressing beyond such things, so they saw the World War as a demolition of their worldview rather than as the normal, if tragic, course of history. That the war appeared so senseless, even by the low standards of international war, contributed further to the emotional impact. At the time, most commentators attributed the outbreak of the war to nationalistic pride of the most narrow-minded and unstrategic sort, and today most historians offer passive-voiced stories about tangled alliances and balances of power which suggest the whole thing just kind of happened by mistake. Of the two, I think the “short-sighted pride” explanation is actually closer to the mark.
And, of course, the rapid introduction of so many new weapons made it easy for people to reason, correctly, that future wars would see the creation of more and more destructive weapons, and further to extrapolate that this would eventually destroy civilization. This discourse was absolutely everywhere, even in subjects with no immediate relation to the topic, in a pattern that will be familiar to us from, for instance, the climate apocalypse discourse of the 2010s. To give one example, from T. A. Rickard’s 1932 Man And Metals:
“Civilization obviously is menaced by the misuse of the very products that were essential to its advancement. By aid of metals man made the tools by which he emerged from savagery and with which he constructed the machines that have given a bigger scope and a wider meaning to human life. From the dawn the digging of ore has played a leading part in the drama of humanity and no one has more cause than the miner to deplore the misuse of the products of his skillful toil. The sword was made before the plowshare, the spear was fashioned before the chisel. The maleficent use of metals has preceded the beneficent use of them. The perversity of mankind has turned a blessing into a curse. Shall we mend our ways or go with the Gadarene swine down the steep slope of perdition?
The history of mining, like all other history, thunders a warning. Those that live by the sword shall die by the sword. The Assyrian trampled upon the Egyptian, the Persian on the Assyrian, the Greek on the Persian, the Roman on the Greek. As they did in ancient times, so we, more civilized, as we deem ourselves, have done in later times. History is a philosophy that teaches by example. We have more examples than our predecessors; shall we heed them no better, more particularly the latest of them, which brought us to the very brink of perdition; or shall we too join the great discard of those that were weighed and found wanting? The finger of history, like that of Daniel before Belshazzar, bids us beware lest we too go the way of Nineveh, and our civilization, like its many proud forerunners, be destroyed by the forces it created but could not curb, by a demon it might invoke but could not exorcise.”
Two things are worth noting in this passage. First, Rickard does not specify a particular weapon which will destroy civilization, only the general trend of destruction. Some men predicted that indiscriminate bombing of cities would be the weapon to destroy civilization, and in hindsight we must give them credit for foreseeing the shape of terror bombing even if they greatly overestimated its strategic effects. But this was the exception, and most discourse about modern weapons destroying civilization followed Rickard’s more general reasoning.
Second, the destruction of civilization is held out as an open question. Can civilization pull back from the brink? Can we figure out some way not to march together off the cliff? This was a live possibility. Unlike the “coal question,” these views did not stay isolated among intellectuals and futurists; they spawned popular movements and drew statesmen to their banner. Those movements helped create the League of Nations, and the disarmament movement achieved the unprecedented Washington Naval Treaty, which limited the construction of warships among the major powers.
The science fiction author Olaf Stapledon grappled with the spiritual angst of the apocalypse he felt bearing down on civilization in his 1930 novel Last and First Men. In this book, the world’s great powers fight a series of wars with ever more destructive weapons. Eventually, the whole population of Europe is annihilated with poison gas. To avert an even more destructive war, the remaining nations unite in a single World State. This endures until coal supplies run out, at which point civilization disintegrates and man reverts to savage tribes. A hundred thousand years later, a new civilization emerges.
During a rebellion in this new civilization, its advanced technology causes a massive worldwide explosion that turns the planet into a lifeless volcanic wasteland and kills the whole human species, except for a few dozen survivors who settle in now-tropical Siberia. Over millions of years, the planet slowly becomes habitable once again, and their descendants evolve into the posthuman “Second Men.” New species rise and fall and repeat the mistakes of the past. Finally, the Eighteenth Men—the Last Men—achieve an enlightened utopian society. This, too, begins to decay after a stellar disaster, when the Last Men realize that their civilization will eventually be destroyed by the Sun. In the introduction to the novel, Stapledon writes:
“We all desire the future to turn out more happily than I have figured it. In particular we desire our present civilization to advance steadily toward some kind of Utopia. The thought that it may decay and collapse, and that all its spiritual treasure may be lost irrevocably, is repugnant to us. Yet this must be faced as at least a possibility. […] May this not happen! May the League of Nations, or some more strictly cosmopolitan authority, win through before it is too late! Yet let us find room in our minds and in our hearts for the thought that the whole enterprise of our race may be after all but a minor and unsuccessful episode in a vaster drama, which also perhaps may be tragic.”
We can see statesmen grappling with the practical side of this problem in a 1932 disarmament debate in the British parliament. Frederick Seymour Cocks, like Rickard, fears the rising power of new weapons in aggregate:
“If it fails now, the only alternative will be to arm. The nations will arm, and will embark upon another armed race, by land, by sea, and by air. Following upon that, as closely as a man is pursued by his own shadow, will come war, under the waters, in the air and on the land, until civilisation cracks beneath the strain and the Vesuvius of revolution opens out beneath our feet.”
Clement Attlee, then a Labour MP and later Prime Minister, fears the bombing of cities in particular:
“I believe that what we have to do is to try to build up a constructive internationalism, and I believe that the most fruitful suggestion which has been made in this regard is that relating to the internationalising of civil aviation … I do not believe that you have a defence against air warfare at the present time. I do not believe that you can restrain air forces as long as you have nationalised civil aviation, and I believe that, unless air warfare is restrained, civilisation will be wiped out.”
Similar calls for world government over the supposedly apocalyptic technologies have been a common feature of every subsequent techno-apocalypse prediction, whether the feared technology is nuclear weapons or artificial intelligence. This debate is most famous for a speech by former Prime Minister of the United Kingdom Stanley Baldwin:
“[T]here is, as has been most truly said, no way of complete disarmament except the abolition of flying. Now that, again, is impossible. We have never known mankind go back on a new invention. It might be a good thing for this world, as I have heard some of the most distinguished men in the Air Service say, if man had never learned to fly. But he has learned to fly, and there is no more important question, not only before this House, but before every man, woman and child in Europe, than: “What are we going to do with this power now we have got it?” … This is a question for the younger men far more than it is for us. They are the men who fly in the air. Future generations will fly in the air more and more. Few of my colleagues around me here, probably, will see another great war.… If the conscience of the young men should ever come to feel with regard to this one instrument that it is evil and should go, the thing will be done; but if they do not feel like that—well, as I say, the future is in their hands. But when the next war comes, and European civilisation is wiped out, as it will be and by no force more than by that force, then do not let them lay the blame upon the old men. Let them remember that they, they principally or they alone, are responsible for the terrors that have fallen upon the earth.”
And then the next war came. The two decades of desperate searching for a way out ended in fire and rubble. London was bombed, and Dresden, and Tokyo. The logic of industrial production was turned to the mass extermination of civilians. The promised new weapons arrived, most notably the nuclear bomb.
It was not as bad as Baldwin and his colleagues expected. Civilization was not actually wiped out. It took about a decade for Tokyo, Hiroshima, and Nagasaki to recover to population levels higher than their prewar starting points. But the war was destructive enough that the doomsayers felt vindicated anyway. For most thinkers, it was no longer a question of whether man would destroy himself, but when.
This felt vindication is not well-grounded historically. World War II killed more people than any conflict in human history because industrial technology can support an enormous population. Apart from modern agricultural technology making everything happen at a larger scale, the destruction of World War II was well within historical precedent. In terms of the fraction of the European population killed, World War II was about as deadly as the Thirty Years’ War, fought with pikes and muskets in the 17th century. Both killed somewhere around seven percent of Europe, with deaths closer to forty percent in the hardest-hit regions. Both left the worst-off areas “post-apocalyptic” in the colloquial sense, but came nowhere close to destroying civilization in the way Rickard, Stapledon, and Baldwin feared.
In World War II, the advanced superweapons were strategically critical and the gas chambers and trainyards were morally horrifying, but almost all of the actual killing was done with bullets and artillery shells, in ways that Napoleon would have found perfectly familiar. Probably less than ten percent of the war’s deaths came from the vaunted new weapons like tanks and aircraft. The reality is that industrial armies, with guns and bombs and concentration camps, have been less thorough than Mongols with horses and bows, who killed more people in Baghdad with their own hands than the Americans managed to kill by dropping atomic bombs from thirty thousand feet.
But for people who had imagined their technological superiority also made them morally superior to past civilizations, and who had long forgotten the last time their great powers fought a total war targeting civilians en masse, World War II felt like a near-apocalypse which came within a hair’s breadth of destroying civilization entirely. The illusion of rational progress “advanc[ing] steadily toward some kind of Utopia” was wounded by the First World War, and killed by the Second.
After the First World War, more people expected that some unknown future technology might destroy civilization at some unknowable future date. After the Second World War, more people expected that a specific identified technology would destroy civilization within their own lifetime, although which particular technology is supposedly on the cusp of destroying us has changed over time.
The Parade of Prophecies
The first technology which the public believed was about to destroy civilization was, of course, the nuclear bomb. In the popular imagination, it was the sheer explosive force of destroying cities that would end civilization. Cooler heads realized that blowing up major cities would merely kill an enormous number of people, but was not actually enough to end the world. Then, in 1950, the great nuclear physicist Leo Szilard popularized the idea of nuclear “cobalt bombs,” deliberately designed to spread radioactive fallout that remains lethal for years rather than the usual days. This led to widespread fears that a nuclear war would permanently irradiate the Earth and destroy all life. Such a scenario was not well-grounded; weather spreads nuclear fallout through the atmosphere very unevenly, such that covering the entire globe is effectively impossible. No cobalt bombs were ever built, but popular fear of global irradiation persisted for a decade or two. Eventually it faded from the world’s attention, although the idea of a permanently radioactive nuclear wasteland lingers vaguely in the cultural imagination, mostly because of depictions in fiction.
In the 1980s, the radiation poisoning apocalypse was succeeded by popular fear of “nuclear winter,” the idea that large-scale nuclear war would loft soot into the stratosphere, higher than the rain which could bring it back to Earth, and so persist for years while blocking sunlight, cooling the Earth, disrupting agriculture, and causing massive famines that kill billions, thereby ending civilization or even causing human extinction. This was based on speculative computer models of cities burning in enormous firestorms and further speculative models of how soot behaves and persists in the stratosphere. These models were invented largely by activist-scientists whose publicly declared aim was to terrify world leaders into believing nuclear weapons were a weapon too terrible to use and, unsurprisingly, their models have not held up well to empirical verification.
In the 1991 Gulf War, massive oil well fires did not produce the predicted lofting effect, and in 2017, when smoke from Canadian wildfires was lofted into the stratosphere, it dissipated much more quickly than the models used to forecast nuclear winter predicted. By now, public anxiety about nuclear winter has mostly faded away, partly because of the problems with the climate models, but mostly because the end of the Cold War and of the activist campaigns for nuclear disarmament meant there was no longer much media raising the issue in people’s minds.
While popular fears persist, there is currently no case, widely accepted among experts, that nuclear bombs will destroy civilization. Perhaps a new model will arise to fill this niche in the next decade or two. The idea that nuclear bombs could kill a hundred million people, and that it would be just another devastating war that gets recorded in the history books while people rebuild, seems unintuitive and even morally offensive to us. Something like that deserves the narrative weight of an apocalypse.
Of course, nuclear weapons are not the only technology that was supposed to have destroyed the world by now. The normal operation of industry at scale was going to cause the apocalypse as well. As with nuclear weapons, the specifics have changed as individual mechanisms were disproved or simply fell out of fashion, because many people hold a deeper spiritual conviction that large-scale industry ought to end the world. The 1968 publication of The Population Bomb by zoologist Paul Ehrlich popularized the theory that overpopulation, enabled by industrial-era advances in agriculture, sanitation, and medicine, would push the population above the carrying capacity of the Earth and lead to mass starvation and collapse during the 1970s. Ehrlich would later be one of the chief organizers of the campaign to develop and spread the nuclear winter theory as well.
In 1972, the book The Limits To Growth made the case for two additional mechanisms of industrial doom: resource depletion, as all possible sources of industrial inputs like aluminum and chromium are exhausted; and pollution, as exponentially increasing emissions of industrial waste poison the world and make it uninhabitable. The latter was especially influential, as fictional depictions of barren landscapes blighted by toxic sludge made the idea popular among the general public. In fact, the normal operation of industry proved perfectly able to solve these problems. Food, metals, minerals, and fossil fuels are all far more abundant today than they were when these dire predictions were made, and this trend shows no sign of stopping.
The actual limits of matter and energy available to human civilizations are incomparably vaster than the doomsayers imagined, as far beyond modern civilization as modern civilization is beyond a primitive tribe worrying about using up all the flint for their arrowheads. Cleaning and preventing pollution proved even more tractable. Filtering and treating industrial waste required research and development to create air scrubbers, waste-to-energy incinerators, clay-lined landfills, and other such technologies. Deploying these across the industrial stack was reasonably expensive, but well within the means of an industrial society, and most wealthy countries mandated these improvements throughout the 1970s. The problems are now mostly solved. The cities are no longer choked by smog, the rivers are clean again, chlorofluorocarbons have been replaced with non-ozone-depleting substitutes, and acid rain is a thing of the past.
Much as the fears of nuclear apocalypse switched from radiation to nuclear winter when a new justification was needed, the ecological fears switched from pollution to global warming. In the late 1980s, climate projections that carbon emissions would raise global temperatures by several degrees Celsius and cause substantial disruptions to local climates and human life broke into mainstream awareness, where they were soon spun by activists, reporters, and fiction authors into widely believed but scientifically baseless fears that warming would make the world literally uninhabitable. A series of dire predictions about coastal cities being submerged by melting polar ice has failed to materialize on schedule.
As atmospheric carbon capture technology is developed and rolled out in the coming decades, it seems plausible that industrial technology may solve global warming and prevent even the moderate problems that would come from warming of several degrees Celsius, but this remains somewhat speculative. Even if not, the climate disruptions will fall far short of the “world literally on fire” messaging popular in the 2010s. In the last couple of years, the wind seems to have gone out of the sails of the global warming apocalypse narrative, and few people remain emotionally invested the way they were even in 2021. As with the other fears of technological doom, it is not that the core arguments were explicitly refuted in the public mind, but rather that people eventually got tired of waiting for an apocalypse that never arrived.
The latest popular doomsday scenario is artificial intelligence, which exploded into public consciousness after the creation of compelling generative AI chatbots, especially ChatGPT in 2022. In addition to the inchoate fears of economic obsolescence and of new magnates disrupting the political balance of power that accompany every major new technology, there is also fear that the AI itself will soon exterminate humanity. The most influential and organized activists advancing these views trace back to the work of Eliezer Yudkowsky, who posits that a sufficiently intelligent artificial mind would, by default, exterminate humanity as it pursues its own inhuman goals, unless the extremely difficult research problem of aligning its goals exactly to human goals succeeds completely on the first try, in which case the AI would instead create a permanent utopia.
What these “AI Doomers” provide for this narrative is not so much an argument that superhuman AI is dangerous. Futurists and philosophers have speculated about artificial minds supplanting humanity since before the electronic computer was invented. The idea that intelligent nonhuman minds are in natural competition with humanity is obvious and very common, and has motivated stories of robot uprisings since R.U.R. (Rossum’s Universal Robots) in 1920.
Rather, the modern “AI Doomers” provide arguments and intellectual authority for the claims that artificial general intelligence is happening soon, that computer minds pursuing their own interests without the need for human direction are a scientifically respectable possibility rather than a fiction trope like time travel or alien invaders, and that GPT and its derivatives are among the last steps on the path toward building Man’s successor. There have been past waves of enthusiasm when specialists in artificial intelligence believed they were close to building general intelligence, only to be disappointed as the field plunged into “AI winter,” but these did not achieve adoption beyond technical specialists and inveterate futurists. Yudkowsky’s predictions of superintelligence’s imminent creation were the first to achieve wider reach, which ironically inspired Sam Altman, Elon Musk, Dario Amodei, and other businessmen, researchers, and investors to create the AI labs that are now advancing the state of the art.
There have also been plenty of techno-apocalyptic scenarios which remained relatively niche and did not spark visceral public anxiety—bioengineered pandemics, electromagnetic pulse weapons, nanotechnological self-replicating “gray goo,” exotic physics disasters from experimental particle accelerators, the “clathrate gun hypothesis” that moderate global warming would set off a runaway feedback loop and overcook the planet, and more. Partly because they do not fit as neatly into the ideology of industrial civilization being destroyed by its own hubris, and partly because of which causes the most competent activists have chosen to push, these scenarios have not achieved the scale or the emotional resonance of the most successful apocalypse narratives.
The Hypothetical AI Apocalypse
In each case that achieves popularity, the pattern is mostly the same. First, intellectuals describe a phenomenon that, if taken to its maximum conceivable extreme, would destroy civilization or the entire human species. Usually this speculation is logically sound, within the limits of its clearly-defined assumptions. There exists some threshold of global warming that would cause human extinction, even if 5 °C would not suffice; dusting the entire Earth with radioactive cobalt-60 would kill all humans outside of shelters if it could somehow be arranged; running out of coal in the 1860s would have shut down industrial society before it learned how to make solar panels or fission reactors. This work attracts the attention of intellectuals and futuristic types, but not the general public.
Second, some of the intellectuals make a public case painting a vivid picture of how the apocalypse could happen within a decade or two in order to engage the popular imagination. This uses looser arguments with poorly justified or unfounded assumptions, although the intellectually rigorous people who first develop the looser arguments—such as in The Limits To Growth—specify that it’s just a model, or an illustration, or one way that things could plausibly go, and “of that day and hour no one knows.” They are careful to make their assumptions explicit and highlight which steps of the argument they place less weight on. Some of the intellectuals come to viscerally believe that the apocalypse will be soon, for reasons which are only partially based on their explicit arguments, and partly due to subterranean matters of ideology and psychological pressures. Even so, the most careful and scrupulous are aware of what their arguments do not prove, and avoid saying that their worst fears will happen, speaking only of possible scenarios and models worth considering—which incidentally means they are never publicly proven wrong when the fears do not materialize.
For now, such qualifications are the norm in predictions of AI doom from the movement’s intellectual core. For example, in 2020, when Ajeya Cotra published a lengthy analysis of when “transformative AI” (TAI) is supposed to arrive based on claimed analogies to biological intelligence, the report assumed AI development would follow what it admitted was a “relatively unrealistic path to TAI because it is simpler to analyze.” Or in 2025, when the authors of the influential popularizer “AI 2027” received criticism from people who believed the title meant they were predicting the AI apocalypse would come in 2027, they were quick to place an addendum at the top:
“to prevent misunderstandings: we don’t know exactly when AGI will be built. 2027 was our modal (most likely) year at the time of publication, our medians were somewhat longer. Specifically, our medians ranged from 2028 to 2032. When AI 2027 was first published we explained this in Footnote 1 as above, but to make our views more clear we have added a clarification to the foreword text.”
Yudkowsky himself has gone even further and denounced the entire practice of making up “AI timelines” for when the superhuman AI will arrive and presumably exterminate humanity, and he makes a point of not giving specific dates.
In every case, from interwar fears of runaway militarism to today’s fears of superintelligent AI takeover, the shuffle works by making an unjustified leap from the claim that the apocalypse is possible in principle to the claim that it’s happening soon. Practical difficulties are skated over or assumed away. Imagined scenarios and wild assumptions are fed into great quantitative models to produce very serious reports. The principle of “garbage in, garbage out” means that these have no value whatsoever for predicting the future. Strip away the leaps of logic and the illustrative fictions and the load-bearing assumptions made to “simplify analysis” and the calculations of irrelevant numbers, and the core remains that a coordinated cadre of well-studied experts just kind of feel a vibe that it’s happening soon.
But the trappings of mathematized analysis can be very persuasive anyway. If you grab a pile of numbers and put them in a graph, most readers will be dazzled by the appearance of mathematical rigor—that’s a lot of calculations!—and only a minority will ask, “Hold on, where exactly did this graph come from? What’s the argument that these numbers have any causal relationship at all with predicting when the world is going to end?” For example, prominent AI Doomers like Katja Grace put a great deal of effort into collating AI researchers’ predictions about when superhuman AI will arrive. But there is no reason to believe that such a big pile of predictions can tell us anything at all about the reality of such an event. There has been a great deal of research into when aggregating forecasts is a useful exercise and when it is not, and as Karger, Atanasov, and Tetlock argue, predictions of events like the arrival of superintelligent AI are not informative.
Such esoteric points may be important for the most engaged and sophisticated intellectuals, but they have little impact on how the public receives an apocalypse narrative. The fact that intellectually respectable technical experts lend their names to the cause is important, while the details of their opinions are lost in a long game of telephone. Activists and journalists and fiction authors spread the scenario, altering and simplifying it as they do. The intellectuals' caveats are left by the wayside, sometimes because the popularizers only understand the vivid story and not an argument structure filled with careful conditionals, and often because they want to tell a more sensational story, as in environmentalist disaster media like Waterworld (1995) or WALL-E (2008). Many people believe this story. They might experience fear and anxiety, or donate money to professional activists, or forgo having children, or stop saving for retirement, or respond in any number of ways. Popular fear of the AI apocalypse has only recently gone mainstream; it remains to be seen how widely it will spread.
People give an apocalypse about twenty or thirty years to arrive. After that, most lose interest, leaving behind only a small residue of ideological holdouts, and a wake of science fiction settings that feel increasingly fantastical rather than urgent. There is no big finale to the debate, and no explicit acknowledgement that the apocalypse is cancelled. People simply wander away from the subject. This seems to be less because the core arguments get refuted with reason and evidence, and more because people slowly lose faith after enough of the activists' predictions, of worldwide famines in the 1970s, or cities flooded by 2015, do not come to pass.
In the realm of pure logic, the failure of the activists’ vivid pictures may have little bearing on the abstract case for doom advanced by the more rigorous thinkers, but few people make the distinction—with some justice, as the highbrow intellectuals are generally happy to lend their intellectual authority to the popularizations. Perhaps the most notorious example is the Bulletin of the Atomic Scientists, founded by Albert Einstein and Manhattan Project scientists, which created the Doomsday Clock—initially set at seven minutes to “midnight” in 1947—to publicize the imminent nuclear apocalypse by laundering the scientists’ subjective feeling of doom into an annual media spectacle. The Doomsday Clock first cited the risk of AI destroying humanity in 2024.
It's possible that industrial technology will destroy civilization at some point, but the technology of 2026 does not seem especially close to doing so. However, the basic observation formulated after World War I, that technology's power to destroy grows over time, remains sound. Surely we are closer today than in 1918. This observation does not tell us whether we should fear a techno-apocalypse in five years, or fifty, or five hundred. It does suggest that individual apocalypse predictions are worth paying attention to and evaluating on their technical merits.
But as we consider them, we must remember that there is great popular and political demand for techno-apocalypse stories. Since World War I, and especially since World War II, the mainstream view in our civilization has been that Man's nature is to destroy himself with technology. There have been many attempts to fill in the details with specific apocalypse scenarios, and many tales of extinction and collapse have been accepted by the vanguard intellectuals and the public even though the argument for doom has gaping holes. There is a huge audience for playing up reasonably large problems as though they are the literal end of the world, and for talking about speculative future disasters as though the default course is for them to happen very soon. Before we believe the latest prophecy of doom, we must hold out for a full argument that covers every step, from possibility in principle to imminence in practice. We cannot settle for most of an argument with critical steps handwaved away, no matter how many fancy graphs or prestigious names are attached.