AI Governance in the US - so much more than R&D
Looking at existing AI governance measures in the United States and considering the path ahead
This is the second of a series of articles I’ll be writing that look at recent developments in AI policy. The first covered China’s recent AI ethics guidelines, while this one looks at AI strategy and governance measures in the US.
Intro
A look at the first appearances of Big Tech executives on Capitol Hill (such as Mark Zuckerberg’s still-famous rejoinder to Senator Orrin Hatch’s question about how Facebook remains free: “Senator, we run ads.”) does not inspire much confidence about the US government’s ability to contend with the present state of technology as it defines policy and strategy going forward. Examples of members of Congress displaying an extreme lack of awareness of our digital ecosystem abound. But the picture isn’t entirely bleak--lawmakers made valid points about the inscrutability of recommendation algorithms and platforms’ lack of knowledge about what could be happening on their sites.
Yet, despite increasing consensus over the need to “rein in” technology giants like Facebook and more discussion of AI in Congressional hearings, next steps and concrete plans from policymakers remain unclear. Among calls for the “responsible” development of AI, bans on technologies like facial recognition, and more democratized access to resources for developing AI systems, it is hard to put a finger on a national strategy for the United States, especially when it comes to regulation. However, there have been efforts to set up official agencies, a clearer desire for a national development strategy, and examples of legislation at the state level.
In this piece, we will look at the United States’ national AI plan--how the country intends to support AI development, whether it will consider regulation, and how it will collaborate and compete in the international AI ecosystem--to the extent that there is one, as well as local efforts at AI governance. We will also consider how this national AI plan’s current pieces might come together into something more concrete. In sum, there appears to be little national movement on regulation, while the proliferation of agencies and committees indicates a strong interest in maintaining the US’s advantage in AI. Little about the national government’s posture makes regulation seem likely, but groundswells of support for tighter laws around technologies like facial recognition may well prove formative.
Context
Recent concerns over big tech and the impacts of technologies like facial recognition have drawn significant attention to the prospects for artificial intelligence regulation in the United States. As far as technology policy in the US goes, such regulation of AI would mark a change from past practices: while Europe imposed data privacy regulations with the GDPR, the United States has done little in this vein. Many have called for the United States to take a more proactive approach to regulating AI technologies and not “miss the bus” as it did in contending with data privacy and other unsavory impacts of technology. Regulation advocates worry about adverse impacts from the adoption of AI technologies, such as unfair criminal sentencing or wrongful arrests caused by biased facial recognition systems.
But not everyone agrees that ex ante regulation--which seeks to identify problems in advance and shape incentives accordingly--is good or appropriate in the case of rapidly evolving technology. Ben Thompson, for instance, has consistently advocated against a top-down approach to imposing rules for technology development. That is not to say governments should give technology companies free rein to do as they wish. Thompson believes the FTC should not have let Facebook acquire Instagram, and that a rule against allowing social networks to merge would be a good one, so as not to allow any one network to accrue too much power. But lawmakers should be careful about the unintended consequences of imposing regulatory burdens when they choose to do so and understand that the assumptions they make about the world might quickly cease to be true.
National Level - Development
The United States’ AI strategy has been a consistent topic of interest among academics, policymakers, and industry. Little has been done in the way of regulation, but the past few administrations have shown great interest in developing AI and maintaining the US’s status as a leader in the field. The Trump White House launched an initiative that “committed to doubling AI research investment, established the first-ever national AI research institutes, issued a plan for AI technical standards, released the world’s first AI regulatory guidance, forged new international AI alliances, and established guidance for Federal use of AI” (Executive Order). The Obama administration also put forward proposals for AI research and funding and released a report on the future of AI.
The Biden Administration has had more immediate issues to deal with, but has likewise worked on its own initiatives for AI. The Administration launched ai.gov, a website “dedicated to connecting the American people with information on federal government activities advancing the design, development, and responsible use of trustworthy artificial intelligence (AI).” It also established the National Artificial Intelligence Research Resource Task Force and the National Artificial Intelligence Advisory Committee. The former aims to democratize access to research tools to facilitate innovation in AI, while the latter will advise President Biden and federal agencies on AI research and developments.
We will later discuss the important issue of possible regulations on a national level, but in the wake of new, paradigm-shifting technologies that may have geopolitical implications, it is natural that the government would want to seize an opportunity to develop a global advantage in the field before attending to regulation. Concerns persist about China developing an advantage in AI, alongside a desire to build AI systems imbued with “American values.” Google’s former chair Eric Schmidt has been particularly outspoken about his vision of a conflict between democracy and authoritarianism--China being a particular representative of the latter--in the development and adoption of AI. As chair of the National Security Commission on AI, Schmidt saw many of his group’s recommendations written into law--however, most of these “laws” resemble existing AI ethics guidelines from industry rather than legislation with any real bite.
State and Local Level - Regulation

State and local governments in the US have pursued their own initiatives for AI governance and have been more active than the federal government in proposing and passing regulation. Illinois has enacted the Biometric Information Privacy Act (which had some technical issues) and the Artificial Intelligence Video Interview Act. Both laws reflect increasing scrutiny of biometrics practices in the United States--the first is more directly related to data privacy, while the second places transparency requirements on companies that might use AI services like HireVue in their interview processes. A Washington state county cited privacy and bias threats when it banned facial recognition wholesale.
That is not to say all local efforts have been successful or even useful. In 2019, the NYC government put together a task force to study the use of automated decision systems, with mixed results. After two years, the task force was unable to surface information about how the decision systems worked, and the city did not release a full list of automated decision systems known to be used by its agencies. Despite the presence of multiple experts, the task force was plagued by issues: its members debated the definition of an automated decision system and received practically no useful information from the city. Albert Fox Cahn, who attended task force sessions, wonders “whether other cities will have the political will to do more than perform a transparency shadow play and actually pull back the curtain on their algorithms.”
Ex Post vs Ex Ante and National Regulation
If the federal government is to pursue a strategy that involves regulation, it will naturally need to consider how it wants to impose that regulation.
Examples such as Facebook’s acquisition of Instagram show that US agencies like the FTC have thus far avoided imposing strict limits on the actions of technology companies. The European Union, on the other hand, has been much more willing to place restrictions on companies, as with the GDPR. Both cases make the distinction between ex ante and ex post regulation salient in technology policy. As opposed to ex ante regulation, ex post regulation takes place after a market failure or distortion arises. An ex ante regulation might impose requirements on companies that seek to use consumer data, while the FTC’s desire to “undo” the Facebook-Instagram acquisition is an attempt at ex post regulation.
The question already feels important in the light of AI systems. In an article for the New York Times, Frank Pasquale and Gianclaudio Malgieri argue for a “coordinated nationwide response [to the rise of A.I.], guided by first principles that clearly identify the threats that substandard or unproven A.I. poses.” Citing laws prohibiting the use of facial recognition and requiring consent before the collection of biometric data, the authors argue that while authorities have started to respond, the United States should follow the EU’s lead, referring to the Draft AI Act.
In short, it seems Pasquale and Malgieri would like to learn from the EU and adopt ex ante regulations that would impose restrictions on or outright ban potentially harmful uses of AI. Facial recognition has been under particular scrutiny and seems likely to be a first candidate for national-level regulation if that were to come about--Facebook recently announced that it was shutting down its own facial recognition system in response to concerns, an indication that developers of the technology are responding.
Indeed, uses of AI like facial recognition seem more morally concerning than beneficial. But even in these cases, I’m not sure the answer is as easy as ex ante regulation. In a report on ex ante regulation of digital platforms, Tim Tardiff considers some of its drawbacks. While his paper weighs the merits and drawbacks in the context of antitrust, his concluding sentence applies well to the regulation of AI technologies: “When market conditions and technology are rapidly changing, the factual basis and theories justifying ex ante regulation can become out-of-date before the necessary actions to update the regulatory regime and/or defer to antitrust to deal with competition problems can be implemented.”
The paper “Demystifying the Draft EU Artificial Intelligence Act” raises just such a concern about the Act: its concluding remarks describe it as “stitched together from 1980s product safety regulation, fundamental rights protection, surveillance and consumer protection law.” Even a cursory pass at the criticisms reveals that this patchwork attempt at regulation makes a number of outdated assumptions about how technologies are used and who will be affected by particular regulations.
Conclusion
If the United States is to pursue its own regulation, the answer should not involve resuscitating old regulations and laws for any purpose other than inspiration. The internet was itself a paradigm shift that required, and continues to require, rethinking assumptions about technology’s impact on consumers and citizens and about what effects particular regulations will have. Regulations that imposed data privacy requirements in the EU, for instance, had the unintended effect of disproportionately burdening small and medium-sized businesses. The businesses best able to bear the costs of regulation tend to be large companies that already dominate the market, and some regulations may end up facilitating lock-in.
In the same way, ex ante regulations that require businesses to comply with ethics and data privacy requirements may sound like a good idea, but could have the effect of stymieing competition and allowing only organizations with vast resources at their disposal to compete in developing AI technologies. This is not to say the government shouldn’t regulate for this reason alone, but it is a tradeoff that needs to be weighed in any decision about whether and how to regulate AI development.
However the US government decides to regulate or not regulate, the consequences will be far-reaching. In this piece, we outlined some considerations that may come into play in making such a decision. In future pieces, we’ll look at topics such as how Big Tech players have weighed in on regulatory considerations for AI and how other nations are approaching AI regulation.
About the Author:
Daniel Bashir is a machine learning engineer at an AI startup in Palo Alto, CA. He graduated with his Bachelor’s in Computer Science and Mathematics from Harvey Mudd College in 2020. He is interested in computer vision, ML infrastructure, and information theory.