Where are China’s Recent AI Ethics Guidelines Coming From?
Examining Three Guidelines from the Ministry of Science and Technology’s Recent Document in Context
This is the first of a series of articles I’ll be writing that look at recent developments in AI policy. This article covers China’s recent ethics guidelines, while future ones will look at developments in the EU and US.
TL;DR: China’s Ministry of Science and Technology released a set of 8 guidelines for responsibly developing AI systems. I look at 3 of these guidelines in the context of China’s broader policy goals and ambitions.
[ Photo Credit: New America ]
Intro
In early October, the South China Morning Post reported that Beijing’s Ministry of Science and Technology (MOST) released a set of guidelines for AI systems that largely emphasize user awareness and consent. Their headings are as follows:
Harmony and friendliness
Fairness and justice
Inclusivity and sharing
Respect privacy
Secure/safe and controllable
Shared responsibility
Open collaboration
Agile governance
These guidelines follow on the heels of the EU’s draft Artificial Intelligence Act and Beijing’s own crackdown on tech giants. That the guidelines were released by MOST indicates central-government involvement, and coupled with the recent crackdown, they suggest a level of seriousness on Beijing’s part.
In fact, China seems to be pursuing two conflicting objectives: it aims to become the dominant AI power by 2030, but it also wants to establish itself as a leader in the international conversation by pursuing regulation and ethics guidelines that could slow the pace of R&D. Ethical guidelines might also seem hypocritical coming from a country whose repertoire includes mass detentions and the export of surveillance cameras. A report from MERICS observes that China’s AI ethics and governance landscape is further complicated by the collage of actors engaged in it: not just central and local governments but also private companies, academia, and the public.
In this essay, we will begin with a brief look at China’s AI landscape and strategy. We will then examine the recent guidelines from MOST, how they factor into China’s overall AI strategy, and whether they represent a meaningful commitment to responsible AI development.
Background
To understand the recent developments in Beijing’s crackdown on recommendation systems and the country’s AI principles, we have to look at the broader context of what China wants to do with AI and what it considers meaningful progress.
It would not be amiss to say that China’s goals in developing AI inherit from its goals for the country’s progress in general. Among other things, those goals involve achieving prosperity for its people and establishing China as a nation that commands respect. Prior to the founding of the People’s Republic and the rapid economic advances we have seen recently, China experienced a “Century of Humiliation” that included losses in the Opium Wars and what are still referred to as the Unequal Treaties. While China has, according to the World Bank, lifted hundreds of millions out of poverty, feeding its growing population has been and remains a national concern. As a question of incentives, the economic health of the country is an important factor in the Party’s legitimacy. In an article for The Atlantic, H.R. McMaster summarizes the components of the “China dream”: prosperity, collective effort, socialism, and national glory.
AI Motivations
[ Photo Credit: Xinhua, via South China Morning Post ]
A government that wants to establish international legitimacy, and to show that it can do better by its population than its Western counterparts, must think carefully about how it develops AI systems. From these two goals, we might deduce that the AI systems China wants to develop should do two things:
Showcase China’s technological and intellectual prowess
Benefit Chinese society in the sense of improving economic and social outcomes
Looking at these two criteria in a vacuum, it doesn’t seem inconsistent for Beijing to put serious weight on ethical guidelines. Abiding by such guidelines would likely enhance AI’s benefits to society and bolster China’s legitimacy. China already enjoys advantages in data collection and manpower that are conducive to developing capable AI systems--and given that the United States, the only actor China considers a rival in this space, is also placing a great deal of weight on the ethical implications of developing AI, China’s doing so might not imply a competitive disadvantage.
The Guidelines
There are eight guidelines, but in this piece I will focus on a few to consider how they fit into China’s AI strategy and whether they seem like meaningful commitments from Beijing.
The fourth guideline states that AI systems should respect privacy: “AI development should respect and protect personal privacy and fully protect the individual's right to know and right to choose. In personal information collection, storage, processing, use, and other aspects, boundaries should be set and standards should be established. Improve personal data authorization and revocation mechanisms to combat any theft, tampering, disclosure, or other illegal collection or use of personal information.”
Given that the data generated by its 900 million internet users is an important advantage for China in AI, placing regulations on data gathering and use might hurt China’s ambitions. Nevertheless, China might be getting serious about data privacy: in July 2020, the National People’s Congress (NPC) Standing Committee began reviewing draft data security legislation that aims to protect individual privacy while preserving legitimate uses of data. Just over a year later, the Standing Committee passed the Personal Information Protection Law.
As the Wall Street Journal reported, the sweeping law forms part of a tighter regulatory regime for Chinese tech companies but is unlikely to rein in government surveillance. Indeed, the NPC is unlikely to consider or pass a law that inhibits such paternalistic behavior by the government. The Chinese Communist Party’s (CCP) current doctrine, Xi Jinping Thought, seems to emphasize the Party’s presence in improving the lives of Chinese citizens.
It is unlikely that the government would register its own surveillance as an encroachment on the rights of its citizens--if the Party leads the people in governing the country and “guarantees the position of people as masters of the country,” then surveillance and individual autonomy might not seem contradictory. Private companies, on the other hand, are not a manifestation of the people’s interest, so strict regulations on their collection and use of data might seem reasonable from the government’s point of view. Furthermore, the law comes after news of data mishandling by companies like Didi--concerns about Chinese citizens’ data slipping into foreign hands make for another powerful motivator.
The seventh guideline emphasizes open collaboration: “Encourage exchanges and cooperation across disciplines, domains, regions, and borders; promote coordination and interaction between international organizations, government departments, research institutions, educational institutions, enterprises, social organizations, and the public for the development and governance of AI. Launch international dialogue and cooperation; with full respect for each country's principles and practices for AI governance, promote the formation of a broad consensus on an international AI governance framework, standards, and norms.”
This seems like a tricky bet for China. International cooperation over standards for developing AI broadly seems like a good thing. However, the guidelines call for full respect of each country’s principles and practice for AI governance and the formation of broad consensus on an AI governance framework.
I think this poses a few problems: the Chinese government looks at ethics and the development of AI from a different perspective than countries like the United States, and resolving such disputes calls for examining principles much deeper than AI governance practices. For example, China does not appear to have a problem with exporting surveillance technology to over 60 countries--is this consistent with “full respect” for a country whose principles include restricting AI-enabled surveillance? If this prevents consensus on AI governance norms now, will Beijing consider scaling back its surveillance, or will it expect other countries to accept continued surveillance?
I find the eighth and final guideline particularly interesting because it points to a desire to govern without slowing progress. The agile governance guideline states: “Respect the natural laws of AI development; while promoting the innovative and orderly development of AI, search for and resolve risks that might arise. Continuously upgrade intelligent technological methods, optimize management mechanisms, perfect governance systems, and promote governance principles throughout the entire life cycle of AI products and services. Continue to research and anticipate potential future risks from increasingly advanced AI, and ensure that AI always moves in a direction that is beneficial to society.”
As the name suggests, “agile” governance implies a way to govern technologies effectively without slowing down development. On its own, the guideline is high-level and open to interpretation. I am not sure what the “natural laws” of AI development are, but respecting them evidently does not mean letting algorithmic recommendation systems run wild. The United States has had a comparatively light regulatory hand, allowing technology to develop faster than legislation or policymakers’ understanding can keep up with. Beijing, by contrast, seems to want regulation that mitigates potential risks from AI systems and is somehow embedded into the lifecycle of AI development. Regulation often implies process and bureaucracy that are inconsistent with unfettered innovation, but “optimized” and “perfected” management mechanisms and governance systems imply a desire for lightweight mechanisms that ensure AI systems are developed responsibly and with attention to potential risks. We do not have much in the way of specifics yet, but it does seem generally desirable to develop governance and accountability mechanisms that hold AI development to ethical principles without substantially slowing its progress. This is a guideline that might require substantial innovation on the part of the government.
Conclusion
The three guidelines I have touched on above are only a few of the new guidelines from China’s Ministry of Science and Technology. Given the increasing attention governments are paying to AI and their attempts to draft legislation, it will be interesting to see if and how these guidelines manifest in government action. While ethical guidelines might seem an odd approach for China, it is worth considering that China has different economic and political objectives, as well as different perspectives on ethics, from Western countries like the United States. It will be worth paying attention to how Beijing balances its control over and guidelines for domestic uses of AI with its desire for international cooperation and leadership.
About the Author:
Daniel Bashir is a machine learning engineer at an AI startup in Palo Alto, CA. He graduated with his Bachelor’s in Computer Science and Mathematics from Harvey Mudd College in 2020. He is interested in computer vision, ML infrastructure, and information theory.