Deciphering the Components of National Approaches to AI
[ Source: The 2020 AI Strategy Landscape ]
This is the third piece in an ongoing series on national AI strategies, with the first two being Where are China’s Recent AI Ethics Guidelines Coming From? and AI Governance in the US - so much more than R&D.
In the piece on the United States, we explicitly discussed how regulation and development were different priorities and components of a national strategy. While this is an important distinction, there is more work to be done in laying out exactly what constitutes a national AI strategy.
In this article, I will focus a bit more on theory than on a particular strategy. Here, I will attempt to lay out a set of components that either belong to, or should belong to, any nation’s AI strategy. In doing so, I will draw from examples of national AI strategies. Although the framework presented here is intended to be government-agnostic, many examples will draw from the US, countries in Europe, and China, largely because these nations have thus far dominated the conversation and therefore provide most available examples.
Defining National AI Strategy
Before we dig into AI policy, let’s make a few distinctions. The first important one is the distinction between policy and strategy. This article will focus on the latter. A policy is a course or principle of action, encapsulating what a government chooses to do or not do about a particular issue. By contrast, a national strategy is a plan designed to achieve a country’s objectives. While there are formal definitions, in our case a policy is just one component of a strategy that might be deployed in service of accomplishing a particular objective.
In Issues in Science and Technology, Erica Fuchs describes a national technology strategy as “about incentivizing innovation that offers outsized returns across national objectives, without undermining the strengths of our existing innovation ecosystem.” CNAS defines a (US) national technology strategy as the guiding framework for the nation to plan, execute, and update technology policy. It says that such a strategy must be “a whole-of-nation approach—including human capital, infrastructure, investments, tax and regulatory policies, and institutional and bureaucratic processes—to preserve its current advantages and to create new ones.”
Objectives, Considerations, and Incentives
While the strategies Fuchs and CNAS elaborate on are US-centric, they demonstrate an important broader point about where a national AI strategy needs to start. The first part of Fuchs’ definition involves goals, the “returns across national objectives” a national strategy seeks to produce. From a governmental perspective, technology does not exist in a vacuum and is not to be developed for its own sake. Rather, an AI strategy must derive from the relevant ambitions and objectives of the nation.
Perhaps the clearest example of a national objective is retaining “leadership” in a particular sector. This can manifest in being ahead in research output, international market size, or other metrics. Other relevant objectives might be the promotion of democracy or liberalism, military prowess, environmental protection, or public health. The United States and China, the two most prominent players in the AI space globally, have economic and national security interests, among other goals. Both have also expressed interest in leading conversations about AI ethics.
Just as technology revolutionized many different industries and prompted governments to consider how they wanted to use it to benefit the citizens of their countries, so too will AI systems force nations to consider such questions. Technology and AI hold the potential to not just create immense economic value, but also change how people live and interact, facilitate new forms of education, and help solve important global challenges like climate change. But technology, and, by analogy, AI, are themselves amoral developments. The same technologies that allow us to connect to like-minded Lord of the Rings fans enable white supremacist groups to coalesce online. AI technologies can be developed with good and bad results, and any government that wishes to develop it domestically must consider how these positive and negative effects play into its national goals.
But the technology has to be developed in the first place. To achieve national objectives, a country’s strategy must incentivize innovation in pursuit of those goals. For national security, a nation might want to develop AI systems that enhance its military capabilities. According to Fuchs, it must do so without undermining the strengths of its existing innovation ecosystem. To Fuchs and CNAS, this looks like developing national agencies and decision making mechanisms that compel the right kinds of investments into technology. A national AI strategy can be thought of in the same way: the innovation that needs to be incentivized would be in the AI space, and the relevant infrastructure, regulations, and other mechanisms must be relevant to AI.
I think there are a few basic mechanisms by which a nation might promote AI innovation to achieve its national objectives, and those mechanisms are what form a national AI strategy. I will list the broad categories before going into detail about each one. They are as follows:
Research and Development: Investment into research that underpins scientific and commercial innovation.
Governance: Developing regulations for certain technologies, setting technical standards, and supplying public goods.
Human Capital: Developing pipelines and incentives to attract and retain talent as well as mechanisms to upskill and educate existing labor.
Transition Management: Managing societal or labor changes that may result from the introduction and integration of new technologies.
Partnerships: Partnerships with domestic private companies or governments with similar national objectives.
Research and Development
[ Source: AI Index Report ]
Research and development, as the name suggests, involves investment into the invention and deployment of new technologies. R&D is normally used to refer to “forward-looking” bets on the development of technology that is not expected to manifest immediately. This might involve basic scientific research, biomedical research, or research for military applications. R&D activities are encouraged through public-private partnerships, nonprofit research centers and universities, and private industry.
R&D can be approached in different ways by different governments, and aspects of the countries and their innovation ecosystems will also have an impact. The Chinese government’s approach to R&D and participation in the innovation ecosystem have drawn close attention: a study observes that 25% of all Chinese R&D expenditures come in the form of subsidies to firms that can be used for the development and testing of new products, major R&D projects, commercialization of existing technologies, and other uses. Beijing also uses public-private investment funds to deploy capital in support of emerging technologies like AI, and government-owned firms are a feature of the Chinese VC market.
Western nations like the US are more hands-off with their markets. While Western countries generally approach markets and innovation with a philosophy geared towards allowing companies and investors to “figure it out” and implementing policies that encourage innovation, the global AI race has encouraged nations to lend a greater hand to those developing new technologies. The UK government, earlier this year, set up a £375 million fund for the Treasury to co-invest with private enterprise to incentivize businesses to develop nascent technologies.
Direct R&D investment is just one way in which a government establishing a national AI strategy might encourage innovation. Other methods include tax incentives, intellectual property protection, and developing relationships between universities and the private sector to push research initiatives forward. In doing so, a government should always consider first its priorities, then decide how to leverage R&D policies so as to achieve its ends.
Besides encouraging R&D for the development of new technologies, governments can also encourage commercialization of AI technologies to achieve their goals. By implementing policies conducive to increased economic and business activity, governments can do their part to speed up the adoption of AI technologies. The Chinese government, for instance, has played a key role in large investments such as the development of “smart cities” with modernized digital infrastructure.
Governments can motivate this commercialization in a number of ways. Guaranteed purchasing contracts can alleviate business concerns for the development of risky technologies. In agreeing to purchase AI technologies, governments will need to be cautious about what they are agreeing to purchase so as not to support the development of technologies whose consequences may outweigh their benefits. If regulations exist--and very few do for AI technologies--they can be loosened so as to speed up technology development, just as Operation Warp Speed’s loosening of vaccine approval standards was meant to speed up the process of bringing a vaccine to market. Governments can also set up Special Economic Zones (SEZs) with particular laws designed to encourage innovation. A study of China’s SEZs found that they promoted innovation in existing technical fields and expanded new research fields. Commentators from South Africa have expressed interest in the country’s adoption of a similar SEZ strategy to promote the development of AI and ensure that AI propels industrialization.
Governance
[ Source: A Framework for AI Governance ]
Governance involves the management of emerging technologies. The OECD defines technology governance as a set of “institutional and normative mechanisms to steer technology development,” which include R&D agenda setting, public accountability mechanisms, public engagement, technical and design standards, regulation, and private sector governance. Examples include the recent EU Draft AI Act’s provision that “high risk” AI systems be included in a searchable public database to facilitate oversight.
Another key aspect that I will include under the umbrella of AI governance is the provision of resources to enable critical research. One example would be the National Research Cloud proposed by Fei-Fei Li and John Etchemendy of Stanford University, envisioned to be “a close partnership between academia, government, industry, and civil society to provide researchers equitable access to high-end computational resources, large-scale government datasets in a secure cloud environment, and necessary expertise to benefit from a NRC.”
Human Capital
[ Source: The AI Skills Shortage ]
While related to R&D, the development of domestic human capital is a separate challenge for an AI strategy. AI talent shortages have been extensively documented, and governments that seek to facilitate innovation must find ways to (a) attract and develop talent that is capable of helping the nation pursue its goals and (b) retain that talent within its borders. Educating brilliant researchers is one thing; ensuring they stay within the country is another. China, for instance, has been documented to have had trouble keeping its AI talent within its borders. This doesn’t mean the brain drain has left China bereft of resources--the country is still seen as a major player, well-known venture capitalists such as Kai-Fu Lee continue to invest in its tech ecosystem, and the nation comes (a distant) second only to the US in papers published at conferences like NeurIPS.
According to the Global AI Talent Report, the US and India top the rankings for supply of specialized AI technical talent. While the US has consistently been viewed as the country with the best opportunities for AI research, the report indicates that other countries have become better at retaining talent against the pull of the US than in the past. As more countries develop national AI strategies and invest in their own ecosystems, we may see this trend continue. It may indeed benefit the global AI ecosystem for different nations to develop their own AI ecosystems with different economic and regulatory environments, countering the homogenization that would result from talent draining into the United States or another single nation. Of course, that is far easier said than done, and countries that have already invested great sums into their AI ecosystems will likely maintain an advantage over those with nascent ecosystems.
Establishing an attractive ecosystem will require a friendly regulatory environment as well as the investment necessary to allow companies to flourish and work on exciting problems. While the UK announced its own national AI strategy, the plan was criticized for not having numbers attached to it. An ambitious AI strategy is a great start, but if a country is unwilling to spend money commensurate with that ambition then it may have a hard time fostering the talent it needs to realize its goals.
Transition Management
Transition management is an activity that could be considered technology governance, but is important enough in the context of AI that I think it deserves its own section. It has been extensively documented that AI systems will at the very least change what sorts of jobs people do, if they don’t displace workers entirely. Governments, in their pursuit of developing advanced AI systems and integrating these systems into their economies, will need to strategize about how to prepare workers for the changes they will encounter.
It is unclear exactly how the nature of work will change. The recent introduction of powerful AI systems like GPT-3 has caused many to be concerned about their career prospects, but many agree that AI systems will not be able to supplant humans in many decision-making contexts and that, even in cases where AI systems will bring business value, it will take a long time to incorporate them into business systems.
Governments likely cannot predict and manage these changes before they happen. However, given the preponderance of predictions about how changes will occur, they can begin to implement and experiment with policies like upskilling, and update these policies as they gather more data about how changes are actually unfolding. This will require considerable attention to the technology industry and how companies are using AI systems. I think the particular changes experienced by workers will likely vary by region and not just country; as a result, delegating aspects of transition management to local governments might be useful, because those governments will be able to pay closer attention to their local economies and respond appropriately.
Partnerships
Finally, in the pursuit of national goals, governments need not go it alone. Partnerships with like-minded nations can be immensely valuable for governments seeking to develop their AI ecosystems. For instance, a CNAS report recommends the creation of a technology alliance including large nations with broad capabilities in critical technology areas who are “committed to liberal democratic values, the rule of law, and respect for and promotion of human rights.” These alliances can be formed so as to jointly pursue innovation, set technical and/or ethical standards, and promote and legitimize norms for the development and use of technology.
I think that while alliances can be useful for countries concerned by international competition and desirous of additional support in pursuit of their goals, they might not function well in all cases. Different countries benefit from and prefer different regulatory environments, for instance. As we have previously discussed, many European countries prefer ex ante regulation for technology while the US tends to impose ex post regulation. Countries that want to work together to develop standards and regulatory frameworks should consider what overlaps and conflicts exist, and whether they want to cooperate on areas where compromise might be difficult.
In this piece, I have considered the main components that might compose a national AI strategy. Many examples of such strategies are bound to come from the US, China, and countries in Europe, as they have dominated the international conversation on AI governance thus far. However, I believe these components themselves are government-agnostic, and their precise implementations might vary widely with the structure and goals of the government that adopts them.
About the Author:
Daniel Bashir is a machine learning engineer at an AI startup in Palo Alto, CA. He graduated with his Bachelor’s in Computer Science and Mathematics from Harvey Mudd College in 2020. He is interested in computer vision, ML infrastructure, and information theory.