New from Skynet Today: AI Strategies of U.S., China, and Canada in Global Governance, Fairness, and Safety
|Jul 16, 2019|
With a number of developed countries prioritizing AI research and development, both governments and AI research institutes are asking whether these policies are well-considered. Plenty of coverage already describes and broadly evaluates the AI policies of different nations, such as the US and China. In this editorial, we add to this conversation by evaluating national AI strategies against a clear set of recommendations. In particular, we will draw on AI policy recommendations from groups like the Future of Humanity Institute (FHI), and use these guidelines to evaluate different national AI policies.
The FHI and similar groups, such as MIRI and the AI safety teams within DeepMind and OpenAI, are primarily focused on mitigating existential risk, particularly risk from advanced AI. Many of these groups have access to excellent technical researchers, and they therefore have a strong understanding of the potential problems we may encounter as AI continues to develop. We believe that their recommendations, which span topics well beyond R&D alone, can be used to build a coherent and useful common framework for evaluating different national strategies. Such a comprehensive evaluation will allow us to understand the broad consequences of these policies and to offer well-considered recommendations for potential changes. At the same time, to be useful the evaluation needs to be specific and concrete enough to yield actionable recommendations, so we home in on three areas of focus:
Given the potential for AI to have transformative effects on many sectors of society, it is vitally important to ensure that research and development of AI technology is safe and sufficiently regulated to avoid catastrophic consequences.
Further, national strategies should also account for the potential of AI systems to amplify existing assumptions and biases that have already proven problematic: we should do our best to ensure that the use of these systems respects fairness, ethics, and human rights. In particular, the FHI points out that “AI has the potential to have profound social justice implications if it enables divergent access, disparate systemic impacts, or the exacerbation of discrimination and inequalities” (FHI).
Lastly, national strategies also need to consider the potential for AI development to create competitive, race-like dynamics between countries. That is to say, while healthy competition can indeed aid the development of AI, countries should be careful that it does not tip into arms races or rushed development that pays no attention to the impacts of the AI being built.
In this article, we evaluate the national strategies of the United States, China, and Canada along these three lines. We’ll look at more countries in the future, but a great deal of media attention has focused on the US and China in particular (as in Kai-Fu Lee’s book AI Superpowers), and Canada boasts some of the most important contributions to AI research in recent history. First, we’ll describe the three criteria. Then, we’ll summarize each country’s policy and evaluate how well we believe it fulfills each criterion. In brief, we think the recent US AI bill presents some promising strategies for developing safe and effective AI, while China’s government-driven strategy seems optimized for massive growth in the AI sector but shows less concrete work toward risk mitigation. We believe that among the three, Canada’s policies on AI may offer a particularly good model for countries hoping to develop AI that is both safe and effective.