Early messaging about AI development presented an optimistic view of a future defined by artificial intelligence. Official releases from leading AI labs highlighted the technology’s positive uses and its potential to benefit all of humanity, while addressing the safety risks posed by model capabilities. Over the last two years, the narratives released by those same labs and their investors have gradually changed. They increasingly frame AI development as a zero-sum game with consequences for global power dynamics, emphasizing the need for support from the national security apparatus.
The shift away from a story of cooperative progress for the good of humanity to one of existential competition between nations and values has been driven, in part, by the rapid development of AI capabilities. It’s easy to sell a vision of good when the potential for harm is distant. But these recent warnings have not been about threats from better capabilities themselves; they have been about the harm that comes from an adversary—specifically China—achieving those capabilities first.
These very different framings have stark consequences for how AI development is thought about and approached. The industry’s shift in messaging towards securitization and great power competition is an attempt to remain unencumbered in development for as long as possible, to gain an advantage in the domestic market when competition increases, or to profit from the windfalls of geopolitical conflict. Each consideration incentivizes framing the debate in the domestic political context so as to invite desirable forms of regulation while heading off unwanted regulation and investigation. Declaring foreign AI development an existential threat not only to the West but to Western values—the securitization of the issue—is the vehicle for this.
The argument put forth by leading AI companies and investors can be summarized as follows: whoever leads in AI will control information, economic development, and science and technology research and development writ large. This is not presented as a simple advantage, but as a decisive and perhaps irreversible shift in the global order. Superintelligence, they argue, will be the most powerful technology—and potentially the most devastating weapon—humankind has ever developed, and its capabilities compound.
Under this narrative, a nation possessing advanced AI capabilities could, theoretically, gain an insurmountable military and economic advantage. States might use it to enforce total control internally and project power externally, or leverage it to threaten global stability. OpenAI’s CEO Sam Altman recently warned that “If [the Chinese] manage to take the lead on AI, they will force U.S. companies and those of other nations to share user data, leveraging the technology to develop new ways of spying on their own citizens or creating next-generation cyberweapons to use against other countries.”
This rhetoric and lobbying, aimed at galvanizing Western efforts in AI development, creates a self-fulfilling prophecy of competition. While the audience is the U.S. national security and regulatory apparatus, China, currently trailing the U.S. in frontier AI development, is an audience too. If Chinese leadership were to accept the premise that being left behind in AI will result in permanent subjugation to the interests and values of nations with more developed AI, as the labs promise, the logical response would be either to focus on catching up in AI development to prevent this outcome, or, if catching up were not an option, to attempt to arrest AI development altogether.
Different actors have different motivations. Companies selling services that benefit from the threat of geopolitical conflict, and the venture capitalists with stakes in those companies, are likely driven primarily by their bottom line rather than by any principle. However, the early writings of AI lab CEOs like Dario Amodei and Sam Altman reveal a deep conviction that AI is a winner-take-all game and that the significant threat lies in an adversary achieving dominance first. Regardless of their motivations, these individuals must confront a harsh truth: if China takes the threat of AI domination by the West seriously, it might well attempt to pre-emptively take Taiwan at any moment.
This competitive messaging surrounding AI development unfolds against a backdrop of increasingly isolationist trade and economic policy. Since 2016, there has been a marked shift towards withdrawal from key multilateral agreements and a greater willingness to engage in trade wars, particularly with China. The Trump administration’s imposition of tariffs and other trade barriers, aimed at protecting domestic industries, disrupted global trade flows and strained trade relationships.
This period of retrenchment, often at the expense of international collaboration, suggests a future where the U.S. continues to prioritize national sovereignty and its immediate economic and security interests over the potential long-term benefits of multilateral cooperation. For U.S. allies in Europe and East Asia, this is a signal of disregard for their future economic prospects. For China, these trends undermine trust in international relations with the U.S. and signal a unilateral approach to economic development.
The Biden administration rightly believes that ceding the technological edge to China would be disastrous and that allowing the unfettered diffusion of foundational technological advances would be akin to surrendering the crown jewels of the American economy. But while U.S. policymakers frame this as retaining advantage, in practice they have made curtailing Chinese growth an explicit U.S. policy objective. The implementation of semiconductor export controls, aimed at limiting the transfer of critical technologies that could enhance China’s capabilities, aligns with this overall strategy. These restrictions mark a significant escalation, moving beyond retaliatory tariffs to actively impeding China’s access to cutting-edge technology. The administration has maintained a hawkish stance, implementing technology export limitations and restricting employment at certain Chinese firms. This approach shows no signs of abating and may intensify under a potential Trump administration, given the policies his tech donors support.
In retaliation, China has taken several steps. First, it imposed export restrictions on critical raw materials used in semiconductor manufacturing, such as gallium and germanium, which are essential for producing advanced chips. This move aims to disrupt the global supply chain and emphasize China’s pivotal role in it, threatening significant disruptions for U.S. and allied semiconductor industries. China has also been ramping up efforts to boost its domestic semiconductor capabilities: accelerating the development and production of locally made chipmaking tools, forming new public-private partnerships, and updating its tax incentives to strengthen its research capabilities. In May, China announced a $47.5 billion investment fund to bolster its domestic semiconductor capacity. The fund represents China’s third round of state-led investment in its semiconductor industry in the last decade.
This dynamic creates a dangerous feedback loop. The more the West frames AI development as a winner-take-all competition, the more China feels compelled to accelerate its efforts by any means necessary. Conversely, the more aggressively China pursues AI capabilities, the more justified Western nations feel in their securitization and acceleration efforts.
Organizations pushing the China-competition line likely believe that a decisive U.S. lead in AI development will create a stable equilibrium, forcing China and other adversaries to come peaceably to the table to share in the benefits of Western AI. But in a scenario where China accepts the critical importance of AI and lags behind in developing state-of-the-art models, Western restrictions may be perceived as an existential threat. If China believes it is being systematically barred from critical technological avenues, potentially falling years behind in capabilities, its incentive to arrest AI development altogether grows. The fear is not just of falling behind, but of an insurmountable technological gap and another century of humiliation.
The result resembles a 21st-century Cold War, where escalating tension and mistrust make cooperation increasingly difficult and conflict more likely. Each side, driven by fear of the other’s potential dominance, takes actions that confirm their adversary’s worst suspicions.
This cycle of securitization and competition in AI development threatens to create the very conflict it ostensibly seeks to prevent. Framing AI as a technology too powerful to allow rivals to possess incentivizes a race that may compromise safety and ethical considerations. The pressure to be first may lead to cutting corners, ignoring potential risks, and prioritizing speed over security, jeopardizing the promised benefit of the technology to humanity.
It’s worth noting that the actual likelihood of China catching up with the U.S. in frontier AI development is influenced by factors far more significant than the securitization of technology. Given the sharing of the latest AI development methodologies by companies like Meta and the leakiness of information within the AI field, Chinese labs are likely to be close to parity in terms of knowing how to create the best models. The large Chinese talent pool, even if less dense in top researchers, further supports this capability. Consequently, the main advantage the U.S. holds—and the primary security concern—lies in the actual creation of model weights and associated deployment infrastructure, which requires substantial capital, hardware, and engineering resources. Securing model weights is therefore the most critical factor for maintaining the U.S. AI advantage.
However, even under sanctions, China can train its own models given enough time and effort. While it might be inefficient, it’s possible to train advanced models like GPT-5 using last-generation GPUs. Moreover, Meta’s continued release of weights for models close to or at the cutting edge provides China with a foundation to build upon. Given these factors, it’s questionable whether the U.S. lead in AI development is easily maintainable in the medium term, especially if the rate of progress slows.
In a scenario where China chooses to invest heavily in AI development out of fear of technological irrelevance, competition and conflict are likely to escalate. This investment could include state-sponsored espionage and theft of critical intellectual property, particularly model weights. Given the difficulty of fully securing these weights as a private company, even with U.S. government assistance—which would likely require elevating frontier AI development and datacenters to TS/SCI security levels or equivalent—it may be impossible to entirely prevent such theft. Alternatively, if China concludes it cannot develop comparable models, it might attempt to halt adversary AI development altogether by disrupting the production of advanced semiconductor chips.
If model weight theft by China becomes a significant risk—itself dependent upon the emergence of AGI and whether domestic race dynamics are neutralized—and it appears unlikely that model weights will be fully secure, this could incentivize a shift toward a more multilateral, internationally cooperative approach to managing race dynamics. Such an approach, taken proactively, could also work to prevent a scenario where a cornered China, fearing irrelevance, may resort to military intervention in Taiwan.
In our current world, both the U.S. lead and the timeline for achieving a decisive technological advantage are uncertain; we must address the short-term geopolitical tensions, spurred by AI race dynamics, that could escalate into war or conflict. While the aim should not be to allow China to achieve parity, it is crucial to reduce the immediate tensions that may incentivize China to trigger a broader escalation. International cooperation on AI governance can help here by ensuring shared benefits from the most capable AI systems, reducing fears of technological irrelevance and foreign cultural dominance among adversary states. The core mechanics of this framework could include providing structured access to frontier AI models hosted on secure hardware, and building domestic capacity—human capital, entrepreneurial capacity, and compute infrastructure—to capitalize on frontier AI technologies.
Promoting competition with China in AI development could serve both corporate and geopolitical interests in a specific scenario: if China were maximally accelerated in AI and there was a belief that Beijing was fully committed while Washington lagged behind. In such a case, AI companies might reasonably try to raise public concern to galvanize policymakers into action. However, current evidence suggests that China is about a year behind in state-of-the-art AI development, based on publicly available benchmarks.
If escalatory rhetoric increases the competition that is so dangerous to the West and Western values in the most likely scenarios, why engage in it? Starting from the genuine belief that AI is a technology too powerful to allow rivals to have the advantage in—a belief I hold—it serves a dual purpose. By positioning AI development as an existential competition with China, labs employ essentially the only effective contemporary strategy for mobilizing bipartisan support for U.S. government resources.
Escalatory China rhetoric and actions have been the only framing that has made climate action and other major investments possible. This positioning is evident in Altman’s recent Washington Post op-ed, where he calls for massive infrastructure investment, advocating for public-private partnerships to build the physical infrastructure that runs AI systems. By advocating for government partnerships in AI infrastructure, Altman is positioning OpenAI as a key player and U.S. government partner in shaping the future AI landscape. Competitive rhetoric also aligns with AI labs’ previous lobbying strategies of courting policymakers and specific regulations, which can help cement market positions for established players, as well as allow influence over the drafting process.
The article encapsulates not only the false assumption of AI as an us-versus-them conflict, but also the assumption of who “us” is. In the article, Altman repeatedly employs something like a royal “we” to refer to both Altman and the United States—“us.” This language is characteristic of securitization, where an issue is framed as an existential threat to an abstract entity, justifying extraordinary measures to address it. For this rhetorical move to succeed, however, it must be accepted by the relevant audience as legitimate. If key actors accept an issue as a security matter through this speech act, the issue becomes securitized, making extraordinary measures that were previously difficult to justify now legitimate and perhaps even necessary. Consequently, an AI-advanced China is constructed as an existential threat to the U.S. This trend is evident in the messaging from powerful venture capitalists, think tanks, and AI companies, each of which claims it is “us” against China.
Of course, there is an “us” without any given AI company, executive, or pundit. As occurred in the Manhattan Project and throughout the Cold War, the nationalization of American physics was not accepted by all comers, and many physicists were told to get in line or get out of the way. Altman’s proposition is an early attempt to preempt the very real possibility that there are more than two answers to who will control the future of AI. In contrast to the earlier 2023 framing of AI development as a humanist project to benefit all, Altman now poses his question as a competition between nations, and in doing so presents a public-private enterprise, in which he is both the public and the private party, as the only real answer to it.
This is ultimately a message with worse geopolitical externalities and simultaneously a worse strategy for developing new technology than the previous positive-sum message of rapid AI-driven economic development beneficial for all, of which, tellingly, Altman was once the foremost proponent. Framing AI development as a zero-sum competition, while effective for mobilizing resources and gaining favored status with the state, risks escalating tensions and compromising ethical considerations.
The relative peace between great powers since World War II has been sustained by strong disincentives that prevent hostilities from existentially threatening the global order. Central to this peace is the belief that an adversary is either incapable of obtaining strategic dominance or lacks sufficient incentive to pursue it. However, if an adversary were perceived as capable of overcoming these barriers, or as having strong incentives to try, the absence of previously assumed costs could create a scenario where a preemptive strike against that adversary might be seen as justified, and the likelihood of such an action would rise significantly. If the promises of AI—a decisive strategic advantage—are taken seriously, a first strike to foreclose that potential victory may come to be viewed as necessary.
A more balanced approach is needed—one that acknowledges that legitimate security concerns can be better addressed through international cooperation rather than competition, as they have been in the past. States have a vested interest in maintaining deterrence to avoid potential existential consequences, and if the danger to the world order becomes clear, collaboration becomes more plausible. Policymakers and the public should critically examine narratives that conflate corporate and national interests, especially when massive government investments are at stake. The challenges of AI development demand a thoughtful, collaborative effort to shape the technology for the benefit of all, just as the original narrative promised.