In July 2017, China released its AI strategy. The document describes the strategic implications of a “new stage” of artificial intelligence enabled by better chips and scientific discoveries. It plays the typically Chinese game of insisting on both radical innovation and total political continuity: a combination of “full play to the advantages of the socialist system,” and “following the rules of the market” as well as a somewhat surprising commitment to “open source.”
It identifies key tasks, such as correctly targeting “basic theory” as well as the foundations of “common technology,” and also outlines several potential applications, ranging from “smart cities” to “intelligent medical care” to “promoting credible communication.”
There are, however, some unrealistic proposals. “Intelligent government” is hard both technically and socially and is unlikely to become a reality, and some supporting technologies like 5G seem a little rushed. And there is scant attention paid to “laws, regulation” and “ethical norms,” which come off as under-developed compared to the rest of the document, indicating an approach much closer to “full speed ahead” than to “cautious exploration.”
But the strategic goals are where the document shines with ambition: “by 2025 we shall…make positive progress in the construction of an artificial intelligence society,” and “by 2030 we shall be the major artificial intelligence innovation center of the world.”
Even if we pessimistically estimate that 50% is hype, achieving half of what’s in the document is already a massive step towards an AI-enabled society.
What is America’s answer to China’s AI strategy?
In March, the White House launched a slick new page for national AI efforts. But anyone looking for a visionary and coherent strategy will be disappointed. The White House site is more an aggregate of individual agencies’ plans than a single strategy.
So, what have the departments, agencies, and committees that make up America’s permanent government been up to?
Democratic Senate Minority Leader Chuck Schumer pitched a new $100 billion funding scheme for AI research to a presumably receptive audience at the National Security Commission on Artificial Intelligence. Meanwhile, the National Institute of Standards and Technology is hoping for “voluntary consensus standards.” The Federal Data Strategy site is by far the most worrying one, insofar as most of its proposed improvements are likely to make one think, “How is this not done yet?”—such as validating employers and making policy proposals machine-readable.
Better late than never? Maybe, but it’s hardly the approach of a serious global power.
The White House’s STEM education plan has a number of routine ideas—like using “common metrics to measure progress” and making federal data more accessible. But as usual, the document focuses more on improving the bottom performers than on creating and scaling up top talent.
One of the more impressive AI strategy documents is from the Department of Transportation. It starts off strong: “U.S. DOT will lead efforts to address potential safety risks and advance the life-saving potential of automation, which will strengthen public confidence in these emerging technologies.” It identifies key stakeholders and testing stages, and moves towards proactive anticipation of technology by regulators. Moreover, the DOT document consistently keeps its eye on the ball: cutting down the estimated 37,000 lives lost in car accidents each year.
Also on the ambitious side, the Pentagon released its AI strategy in 2019. It’s been received as among the most wide-ranging plans put out by a U.S. government agency. In addition to noting the geopolitics driving the current battles for an AI strategy, it responded to recent attempts by tech companies like Google to cease cooperation with American security state organs, via calls for information sharing and closer relationships with academia and the private sector. The news came alongside a White House executive order. Notably, the plan targets building in-government AI proficiency as a key goal—which would ultimately reduce its reliance on private actors who are sometimes hostile.
There are other national strategy documents, such as the report from the National Security Commission on Artificial Intelligence. This document certainly has a strong sense of urgency. It puts competition in AI as a key aspect of the “reemergence of great power competition” and includes a nod to “existential threats.” And it also points out the aforementioned tech worker opposition to cooperation with U.S. government departments like the Pentagon. In the industry, I hear this kind of concern all the time, though usually more in connection with ICE.
There is also an important recognition that most parts of the government suffer from a dire lack of real AI knowledge. The result is a catch-22 of funding problems, since those in charge of giving money to possible improvements themselves don’t have the specialized knowledge to use said funds wisely. The document also does the outstanding service of proclaiming that AI is not merely a disaster-aversion problem, but also a tool kit that should be used to pursue positive goals: “We need a vision of the AI-empowered future.”
While this document is better than most in terms of its assessment of the situation and present challenges, as well as articulating the need for a vision, it lacks said vision and a sense of determination.
Disorganized planning, cluttered thinking, and scattered good ideas more or less sum up the state of affairs across the American state apparatus when it comes to AI. With the possible exception of the DOT, the national-level result lacks the feeling of success-orientation—a mindset you would expect of an ambitious startup or leading company, not to mention of a superpower that aims to produce such companies in the numbers and with the reach needed to maintain its global standing. Instead, it appears fixed on identifying and measuring improvements that don’t require vision or ambition.
A cautious approach can be good to avoid getting fooled by hype, but the level of inertia is obvious. The federal approach struggles to incorporate AI applications such as self-driving cars into existing and potentially outdated legal frameworks, and in other cases seems to move nearly all responsibility for the details onto the technical sector.
In particular, the documents fail to address crucial problems that the U.S. would face in implementing its AI strategy—for example, hostile relationships between a number of companies and the American state, a lack of politically reliable software developers in key positions, a declining ability in public and private discourse to discuss basic statistical facts, and generally weak mathematics education in early schooling. California-specific concerns, which directly impact the Bay Area tech hub, include whether companies can rely on the power grid and whether housing policy enables a concentration of technical talent without all excess income being skimmed off by landlords and high local taxes.
***
The difference here is not just in mindset, but also in specific policy proposals and social organization. U.S. discourse is full of fears about “automation unemployment,” which is a self-fulfilling prophecy, but also functions as a scapegoat for people worried about two things: losing control of their destiny to systems they don’t have a say in and labor issues arising from globalization and immigration. There is a small AI safety community that worries about existential future risk from advanced AI, but largely doesn’t engage with what’s actually happening in the field. A common concern there is about “AI arms races,” where countries and ambitious projects charge ahead with capability research and applications to beat the competition or just not fall behind, without attention to safety and controllability.
Key thought leaders in the area like Jaan Tallinn have proposed regulating large clusters of computing power. There is some merit to this idea. After all, the companies controlling computing power do not use it in manufacturing goods—the “standard” function of economic capital. Rather, they increasingly use it to target advertising, collect consumer data, and enable narrative control. In short, capital’s power is no longer derived merely from owning the means of production, but also from owning the means of behavioral modification.
But we risk conflating two very different issues. The first issue is the one that originally sparked the concern for “AI safety,” which is the potentially existential impact of superhuman general intelligence. A system better than humans at all industrially essential intelligence-laden tasks, including business, politics, and socializing, could entirely escape our control and ultimately replace and destroy us. This worry is so far entirely theoretical, having very little to do with current developments and applications in artificial intelligence. Current developments are all in the realm of “narrow AI,” which is effective in extremely specific tasks, but not the hardest and most general—and hardly superhuman.
The second issue is how this narrow AI is being applied right now: to data mining, advertising, self-driving cars, scaling of editorial control over social media discourse, and so on. This is the realm of what people usually mean by AI ethics. The major issue is the social and economic impact of the technologies we already have, how those should be deployed, and who should have control. In other words, it’s political. There are, of course, safety problems with things like self-driving cars, but the problem of “this car might run some people over” is so entirely different in kind from “this AI algorithm might gain control of technological civilization and destroy us and everything we value” that they shouldn’t be grouped together. This more mundane kind of safety issue is completely within the experience of previous industrial and technological changes. At worst, there are a few small-scale disasters, and we learn some hard lessons. The political and social aspect is much more central.
Unfortunately, the discourse has made a habit of grouping AI existential risk concerns with more mundane—but also far more immediately relevant—governance concerns. This has been aided by an industry full of grifters who have every incentive to portray every new and potentially socially powerful statistical algorithm as a revolutionary step towards full artificial general intelligence, especially for investors who treat anything that can be labelled “AI” in 2019 the same way they treated “Internet” in 1999. Those in AI ethics often appropriate the movement energy and fears built by existential safety thinkers for their own more mundane political and funding ends. The public, which doesn’t understand the subtleties, sees only a spectrum of Black Mirror-like technologies to alternately be in fear or in awe of.
Most of the existing discussion, including at least part of the computing power proposal, is about the second issue: mundane industrial safety, impact, and political governance of powerful new technologies. The example of computing power applied to behavioral manipulation is instructive.
The ability to control and manipulate information flows and human behavior at scale is a power which impacts the whole spectrum of political and social structures sitting atop mass society. This necessarily puts it in the territory of the state, which naturally seeks to monopolize or coordinate all large-scale power in society. From the perspective of the state and the elite, new forms of power always need to be integrated into the political order. Whether the companies in control of this capacity use it for their own gain or to cause political trouble is almost irrelevant.
Regulation is one method. China solves this problem by generally making it clear that the Party is in charge, and large economic actors ultimately serve the Party. The U.S. encourages companies to “go public”—forcing them to be accountable to and share information and profitable opportunities with existing financial interests, which are politically integrated. Many aspects of labor law and custom also have the effect of tying companies into the political zeitgeist, and thus to the will of the elite.
But the U.S. system of political discipline on companies is fairly messy and not necessarily up to the task of integrating these new powers into a pro-social order. So, while regulation of large amounts of computing power has merit as an idea, this further sets up an antagonistic relationship between companies and the state at a time when more alignment is needed.
In short, in addition to an AI policy, the U.S. has an anti-AI policy.
How to integrate the new powers afforded by AI into the political and social order—not to mention how to handle the existential threats on the horizon—is an important concern. But there are strategies which would allow both increased AI development and safe handling of these concerns.
When talking about policy, the things that spring to most people’s minds are tools like “regulation” and “investment,” which are common levers governments use to affect the market. Universities favor more investment, and to the extent that some university research subsidizes industry, industry favors it as well. While those remain important tools, there is legitimate skepticism that merely throwing money at the AI problem can improve things. Most of the documents identify several key components which enable AI to succeed—such as investment in underlying hardware, easing data collection, and attracting talent to both industry and government—but there are at least four overlooked areas that will likely remain bottlenecks for productive development.
1. Aligned Human Capital
While the idea of attracting abstract “talent” is a good one, it’s also important for actual tech workers to agree in principle with state policies regarding the use of AI. Right now, the general crop of technology workers is not necessarily antagonistic to government agencies—with some exceptions like ICE. However, a much bigger problem is that the level of collectively-felt national pride that characterized those who worked on the Apollo project does not exist for AI.
Many tech workers have a libertarian streak, but even those without it are allergic to the exercise of power in general, and especially to power with military applications. Many people entered the technology industry because they wanted to “fix” or “disrupt” society in a way that avoided the usual unsavory political battles. While apolitical specialization is normal, common attitudes in tech are often downright hostile to working with the government. The issue is compounded by the number of people in the space who are first-generation immigrants. For this group, the primary interaction with government is often the arduous H-1B bureaucracy or local political fights over divisive subjects like affirmative action.
All of this leaves large portions of the key technical class at best completely apolitical, treating the government as “another customer,” and more often suspicious of the motives behind government regulation or use of technology.
By far, the most ambitious move in this direction is occurring under the auspices of DARPA, the Pentagon’s research arm. In 2018, it announced up to $2 billion of investment spending in a number of AI programs. On the sunny side, it presented an opportunity for AI researchers to free themselves from the timelines and profit-driven environment of startups and the private tech sector. It allowed for projects looking at ethics and privacy issues. Building on DARPA’s historic role in AI—from the first and second waves to funding Google’s co-founders—it pointed toward a renewed era of innovation. But no one missed the direct signal sent toward Google and other tech companies. If the large tech companies—built with significant public investment—now plan on breaking ties with the American state which fostered them, then it will begin to invest in and scale up more reliable partners. How this initiative will ensure long-term loyalty from these partners remains to be seen.
In the case of the military, the U.S. government can be expected to manage the problem of mistrust through tighter technical controls, putting barriers between “research,” “development,” and “operational” AI. But in the future, to stay on top of developments in technology, the American state would need to become an attractive first choice, both to work for directly and cooperate with as a leading client and partner. This is mostly a function of the political culture of technologists, which won’t be changed by any clever legislation or bureaucratic maneuvering.
A fear of creating technology for the wrong political ends is not the only obstacle here. There is also the phenomenon of general meaninglessness. In creating an AI strategy, the American state should view answering the question of “why are we building this?” to be of the utmost importance. “Not losing to China” gets a C- grade, as far as meaningful goals are concerned. Playing not to lose is not the same as constructing the smart-city Viennas of the 21st century, or some other concrete and inspiring vision.
One thing that will make this easier for the government, at least as far as competition is concerned, is the way the narrative of meaningful work around tech companies has collapsed. Platitudes about making the “world more open and connected” or “don’t be evil” have run their course. Many people in large tech companies use AI for work in targeted advertising or behavioral optimization on social media, with all the attendant concerns about whether this actually creates value for society. There are certainly misalignments between corporate management and employees. Various employee protests, public firings, and even the occasional suicide are symptoms of a problem.
In light of all of this, the American state apparatus should not, in theory, find it too difficult to lure technical talent away from companies—or at least raise its standing with current company personnel. It just needs to be able to provide a coherent vision of a positive future, deliver the social and economic benefits to hire and incentivize top performers, and protect key personnel from arbitrary politics.
The vital task for a government trying to hire people capable of carrying out its technological strategy is to create meaningful positions for them. Competing on meaning against “advertising maximization” is a battle that both national projects and Elon Musk’s tech empire have won before.
2. A Sane National Culture Of Metrics
Political discussions about metrics generally circle around one of two diametrically opposed positions, both of which are wrong. On the one hand, there’s the technocratic approach centered around GDP, unemployment, and similar quantitative measures which neoliberal institutions have entrenched as the yardsticks of social good. On the other, there is an undercurrent in reaction to this: a rejection of technocracy—sometimes along with populist politics—in the face of disparities between good on-paper performance and the realities of class and regional conflict. However, the reactive nature of this tendency often prevents it from putting forward improved substitutes for the old paradigm.
This broken dichotomy must change if we are to succeed with AI. The latter attitude—while correct in its suspicion of over-optimization and technocratic reductionism—fails to recognize the degree to which the ability to standardize and enforce metrics is an important source of political coherence. The modern world runs on metrics, and the neoliberal project’s ability to coordinate multiple countries and institutions through a unified optimization mechanism is one of its most massive successes in cementing power. The metrics can be updated as needed: inflation, unemployment, diversity, GDP growth, population growth, and so on. What is important is the ability of the global capital machine to harmonize along these measures and coordinate in optimizing them. It is one of the most overlooked and underrated sources of power in the world, and has acted as such in modern times since the first French Republic imposed the metric system—a move with such enduring power that both Napoleon and the later July Monarchy took steps to ensure its continuity.
The lesson: whoever sets the global metrics for measuring AI’s success will win a key battle.
But achieving that goal brings up its own array of obstacles. First, this unfortunate dichotomy is nowhere more present than in the AI community itself. One of the tricky parts about future AI is designing the right metrics, utility functions, and error functions. It’s simple enough when the task at hand is image recognition or playing a video game. However, the moment the task gets complicated, such as “hate speech detection,” even the simple debate about precision versus recall turns political—should a social network be judged by how well it avoids censoring people, or by how much hate speech it lets through? That’s not even getting into the question of what hate speech is in the first place.
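To make the tradeoff concrete, here is a minimal sketch in Python with invented numbers: precision measures what fraction of flagged posts were actually hate speech, recall measures what fraction of the actual hate speech got flagged, and moving the moderation threshold trades one against the other.

```python
# Minimal illustration of the precision/recall tension for a hypothetical
# hate-speech classifier. All numbers are invented for illustration only.

def precision_recall(true_positives, false_positives, false_negatives):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Permissive threshold: flags more posts, catches more hate speech (high recall),
# but wrongly censors more legitimate speech (low precision).
print(precision_recall(true_positives=90, false_positives=60, false_negatives=10))
# -> (0.6, 0.9)

# Strict threshold: censors fewer innocent posts (high precision),
# but lets more hate speech through (low recall).
print(precision_recall(true_positives=50, false_positives=5, false_negatives=50))
# -> (~0.91, 0.5)
```

Which of those two numbers the platform is ultimately judged by is exactly the political question.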
I have written previously about the issues in the debate about fairness in the justice system, which is in some way a conflict of metrics as well. Predicting crime is different from creating good incentives to reduce crime, and similarly different from metrics that attempt to show equal treatment (and there can be several contradictory metrics in this category alone).
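A toy calculation (invented numbers, not drawn from any real risk tool) shows why such metrics can directly contradict each other: when two groups have different underlying rates of reoffending, a tool that is equally precise and catches the same share of reoffenders in both groups will still burden the innocent members of one group with more false accusations than the other.

```python
# Invented numbers illustrating how "equal treatment" metrics conflict
# when base rates differ between two groups.

def false_positive_rate(population, reoffenders, flagged_reoffenders, precision):
    """Derive the false positive rate from a group's size, its true reoffenders,
    the reoffenders correctly flagged, and the tool's precision in that group."""
    flagged_total = flagged_reoffenders / precision       # everyone flagged "high risk"
    false_positives = flagged_total - flagged_reoffenders
    non_reoffenders = population - reoffenders
    return false_positives / non_reoffenders

# Both groups get the same precision (0.8) and the same recall (80% of reoffenders flagged),
# but they have different underlying reoffense rates (50% vs. 20%).
fpr_a = false_positive_rate(population=1000, reoffenders=500, flagged_reoffenders=400, precision=0.8)
fpr_b = false_positive_rate(population=1000, reoffenders=200, flagged_reoffenders=160, precision=0.8)

print(fpr_a, fpr_b)  # 0.2 vs. 0.05: innocent members of group A are wrongly flagged 4x as often
```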
A national culture of metrics would mean both the ability to set measurable goals, but also to achieve them honestly and without cheating. It would recognize the importance of having a legible goal, but also that pushing too far can eventually lead to oblivion.
The question of “where are we going to point AI?” is hard. A clever but wrong answer is to somehow punt this to the AI itself. For a hypothetical strongly superhuman AI, that may or may not be a reasonable solution. But for current machine learning technology, which essentially optimizes black-box software for known metrics on known datasets, it’s conceptually incoherent. In practice, such an initiative would just push vital discussion about where society should go or what it should look like to either chance or obscure office politics.
It’s important that a metric be taken seriously on its own grounds before AI is involved in improving it. For example, while it is positive that the Department of Transportation is embracing self-driving cars as a tool to help decrease car accidents, we might first ask why accident rates are so high in the first place. Why do we not have a stronger desire to reduce the 30,000+ deaths on the road every year? Have we, as a society, really looked at the problem with enough attention and exhausted enough options in city planning?
3. Control Groups For AI
The current replication crisis in social science is both good and bad news. On the one hand, it shows that a lot of our established wisdom is wrong; on the other, there is still enough of a push for truth that people are, in fact, concerned about replication. AI is just another piece of software, and replication should actually be a lot easier than in the natural sciences. But obscure techniques, lack of precise comparisons, and a desire to hype up results to be “state of the art” have all led to the situation where replication is a difficult task. A national AI strategy which targets scaling up successes will need to push for trustworthy replication of research. For example, adversarial research could be incentivized to dispute particular claims of other papers (acting as software testers, but for science).
An emphasis on competitions has led to researchers trying to beat certain benchmarks, which is good as far as it goes. However, the focus has not been on producing reliable knowledge. Thus, the field looks increasingly complex and hard to interact with. “Try different things until they work” still seems like the dominant mode of crafting an AI solution. This leaves improvement in the mode of tinkering, not robust experiments with testable and generalizable hypotheses.
On a more practical side, it’s not uncommon for upper management to allocate money for AI, and see improvement, even though lower-level employees know that the improvement actually came from fixing bugs and normal software development practices either spurred by—or unrelated to—the AI work. Teasing apart the role of AI in a solution including both technology and people is tricky, but necessary.
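As a minimal sketch of what such a control could look like, the following uses synthetic data and scikit-learn purely for illustration, not any particular agency’s workflow: before crediting an improvement to “AI,” compare the model against the simplest possible baseline on the same held-out data, so that only the gap between the two is attributed to the model.

```python
# A "control group" evaluation sketch: trivial baseline vs. the actual model,
# both scored on the same holdout. Synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Control: the simplest possible "solution" -- always predict the most common class.
control = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Treatment: the model the AI budget actually paid for.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("control accuracy:", accuracy_score(y_test, control.predict(X_test)))
print("model accuracy:  ", accuracy_score(y_test, model.predict(X_test)))
# Only the gap between these two numbers can honestly be credited to the AI work;
# improvement over last quarter's dashboard may just reflect bug fixes elsewhere.
```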
Unfortunately, watching managers cover up the failures of AI projects which get outperformed by other software and non-software solutions has led to a certain level of cynicism among many engineers, despite the hype continuing full steam ahead in broader society. A country that wishes to succeed needs to manage both the hype—stoked and fed on by upper management—and the cynicism coming from experienced engineers. The collective knowledge being generated about particular algorithms and their abilities must be reliable.
4. Fundamental Math Research And Competence
Currently, formal mathematics training is viewed as helpful for AI, but not completely required. I suspect this will change soon. It has to change for AI to be more understandable, robust, and scientific. Dumping raw computational power to make up for algorithmic deficiencies and misunderstanding is too expensive and error-prone as a long-run solution.
Stakeholders deploying AI would probably look for broader guarantees of its correctness and debuggability, rather than simply a series of well-defined tests. AIs that reason and learn about other software would need to incorporate proof-like systems in addition to statistical techniques. While “explainable AI” that can translate its own workings into a human-understandable form is a promising research area, it will most likely require familiarity with the mathematical theory behind why something would work in the first place.
The ultimate success in the AI race, to the point of superhuman Artificial General Intelligence executed both first and safely, would likely come through very significant mathematical advances that create high levels of abstraction—in other words, demonstrating how the agents can reason about themselves reasoning about themselves. A grand strategy for the American state, therefore, will need raw mathematics talent to succeed in the future. Quality of talent is more important than quantity, especially talented people who have a foot in both math and philosophy.
I took college-level classes at a high school math camp, but I suspect even that is nowhere near the capacity of a bright student to learn math. What we should see in a strategy with teeth is a strong set of experiments aimed at teaching children advanced mathematics as early as possible. Early math education in the U.S. is currently moving in the opposite direction. The U.S. will have increasing trouble relying on imported talent, as the perception of life quality equalizes more between countries, and the ideological luster of the free world fades. This increased focus on mathematics shouldn’t come at a significant cost to philosophical education—after all, the goal is not merely the ability to do difficult problems, but also the ability to feel what math is right in a moral, philosophical, and societal sense. But a strong foundation is irreplaceable, and the best performers will be adept at both mathematical rigor and philosophical reasoning.
However, at the end of the day, any specific policy proposal comes back to an American mindset that is worthy of a superpower—it needs to want to win, to be able to face the truth, to want to see America succeed, to have a rising tide lifting all boats, even if that happens to lift the boats of people you don’t like. America as a nation cannot afford to play “not to lose” any longer.
At the core of that core is the existential question for America: what are we trying to do, with AI and otherwise? What does victory even mean in this space? What is the grand inspiring vision of an AI-enabled future that motivates us to transform the most powerful civilization the world has ever seen? The current answers no longer seem to inspire. Platitudes about preserving freedom, privacy, and democracy have been emptied of meaning by decades of cynical abuse as propaganda terms covering for the opposite. In AI, where the natural thing to do seems to be building an unprecedented centralized panopticon of surveillance and behavioral control, with only logistical convenience as a selling point for the public, such platitudes are a farce. Perhaps there is value in that path, or in alternate paths with more value, but neither of those value propositions has been articulated.
Given a clear and legitimate vision for an AI-enabled society and for American society in general, resources can be mustered and organized to pursue it, and open discussion had about the details. Without such a vision, it’s hard to blame those who are wary of working with the government, or on this technology in general. It’s hard to expect anything but more half-measured quagmires, and more empty rhetoric covering for dysfunctional private motives in AI strategy and elsewhere.