

Decoding the Discussion: In the Senate AI Hearing, Uncertainty Speaks Louder than Consensus
May's committee hearing showed a remarkable bipartisan consensus on the need to regulate AI. Yet the road ahead presents complex challenges that could disrupt this unified momentum.
We have come a long way from the days when senators described the internet as a “series of tubes”. At last month’s Senate Judiciary Committee hearing on AI, Senator Richard Blumenthal kicked off the opening statements with a deepfake of his voice and a statement written by ChatGPT, and senators spent three hours talking with top AI leaders about issues of transparency, privacy, and more.
The past half-year of AI frenzy seems to have produced a genuine desire among lawmakers to work with industry to understand this new technology and develop standards that promote its safe development. Some, in fact, were quick to point out that the interaction seemed almost too friendly. The three key witnesses — OpenAI CEO Sam Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and New York University Professor Gary Marcus — faced a much warmer reception than past tech executives like Mark Zuckerberg did in more antagonistic congressional hearings. While it’s possible that this unprecedented friendliness signals better cooperation between government and industry, many analysts warn that it raises the risk of regulatory capture.
Yet if there’s one thing known to be true in American politics, it’s that the path from a policy discussion to a newly adopted bill is a long one. With top tech companies pouring nearly $70 million into congressional lobbying last year, and the vast majority of bills never making it past committee, the latest hearings are only the beginning of the long road ahead.
To see where AI regulation may be headed, it’s helpful to look into the subtle aspects of this hearing and see what questions are still left unanswered. Finding points of agreement is valuable, but it’s in the areas of uncertainty where we have the greatest opportunity to actively shape the discourse — ensuring that it’s well-informed, productive, and focused on the issues that matter to all of us.
Here’s what still remains to be debated:
1. Which concerns will be lawmakers’ primary focus?
While developing standards for the responsible use of technology appears to be a common goal, the scope of proposed regulations will vary significantly depending on which problems they aim to target. The seven-minute questioning rounds converged on many common themes, offering hope for future government action on fronts like misinformation and system transparency.
While the spotlight shone on election security, other issues were left behind:
Many speakers seemed particularly concerned about AI’s potential to spread misinformation and influence elections. As the 2024 election moves closer, new bills introduced in the House and Senate seek to target potentially misleading content by requiring clear disclosure of AI-generated content in political ads. There appears to be a fairly strong bipartisan consensus on preventing AI from interfering with elections, with medical misinformation also highlighted as an area in need of stricter regulation.
At the same time, several well-documented harms of AI were minimally addressed. For example, issues of algorithmic bias and discrimination were only brought up by a few speakers, despite being among the most prevalent harms in the design and deployment of AI systems today.

Concerns about AI capabilities other than generative AI also warrant more attention:
Lawmakers also largely focused on AI’s generative capabilities, noting issues like its potential to manipulate public opinion or a lack of compensation for artists threatened by AI.
While these concerns are becoming increasingly relevant in AI policy, it’s important that lawmakers don’t ignore the more longstanding harms of other types of AI, like image recognition technologies used for police surveillance, or potentially biased algorithms used in hiring and tenant screening.
Only Senator Ossoff substantially highlighted concerns around predictive models:
“With massive data sets the integrity and accuracy with which such technology can predict future human behavior is potentially pretty significant at the individual level… So we may be confronted by situations where, for example, a law enforcement agency deploying such technology seeks some kind of judicial consent to execute a search or to take some other police action on the basis of a modeled prediction about some individual’s behavior. But that’s very different from the kind of evidentiary predicate that normally police would take to a judge in order to get a warrant.” — Sen. Jon Ossoff (D-GA)
Proposed transparency standards such as auditing and licensing schemes may help to address the equity issues in decision-making AI, but it’s unclear whether new regulations would target these narrower uses. In his testimony, Altman loosely recommends that the government “consider a combination of licensing and testing requirements for the development and release of AI models above a threshold of capabilities”, also referring to a threshold based on a model’s “amount of compute” in later statements. Depending on how these thresholds are constructed, more narrowly applied or “weak” AI systems could be left without adequate regulatory oversight, posing a greater danger as they tend to attract less public attention than flashier generative models.
Other issues in the AI industry aren’t directly caused by a system’s capabilities, but by what happens during training. OpenAI’s documented exploitation of foreign workers to label content for its AI systems was one issue that never got mentioned, along with the environmental footprint of training large AI models.
Senator Coons showed the most in-depth knowledge about the training of AI systems, inquiring about the use of ‘constitutional AI’ principles as opposed to reinforcement learning from human feedback (RLHF) techniques:
“I understand one way to prevent generative AI models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There’s another approach that’s called constitutional AI that gives the model a set of values or principles to guide its decision making. Would it be more effective to give models these kinds of rules instead of trying to require or compel training the model on all the different potentials for harmful content?” — Sen. Christopher Coons (D-CT)
Senators’ specific agendas varied:
Understandably, short questioning periods won’t cover all of the possible concerns surrounding AI technologies and their regulation. Yet they do give particular insight into what specific areas might be targeted soonest by new legislation. To provide a snapshot of each senator’s main areas of focus, I created a profile card for each speaker, including their top priorities, notable tech-focused legislation they’ve sponsored, and a word cloud generated from all of their statements during the hearing.
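(For readers curious about the word clouds: below is a minimal sketch of how one could be generated, assuming a plain-text file of a senator’s remarks and the open-source wordcloud Python library. The file names and extra stop words are illustrative, not the exact process used for the cards here.)

```python
# Sketch: generate a word-cloud image from one senator's hearing remarks.
# Assumes `pip install wordcloud` and a UTF-8 text file of their statements.
from wordcloud import WordCloud, STOPWORDS

def build_word_cloud(transcript_path: str, output_path: str) -> None:
    with open(transcript_path, encoding="utf-8") as f:
        text = f.read()

    # Filter out filler words common in spoken testimony, on top of the defaults.
    stopwords = STOPWORDS | {"know", "think", "going", "thank", "senator"}

    cloud = WordCloud(
        width=800,
        height=400,
        background_color="white",
        stopwords=stopwords,
    ).generate(text)
    cloud.to_file(output_path)

# Example with hypothetical file names:
build_word_cloud("ossoff_remarks.txt", "ossoff_wordcloud.png")
```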
The individual profiles show a fair degree of similarity between senators from each party, with terms like “agency” and “license” highlighted on both sides, along with references to accountability, election misinformation, and international actors like the EU and China. Other concerns were more niche, like Senator Blackburn’s focus on AI-generated music (a relevant concern given the prominence of Tennessee’s music scene) or Senator Padilla’s focus on language diversity (which is similarly fitting for his constituent state of California).
It’s also worth noting the senators’ past legislative records when it comes to taking action on technology. Some, like Senator Hawley, have made combating big tech a major part of their platform, frequently proposing new legislation, while others like Senator Ossoff have focused more on adjacent issues like cybersecurity. Lawmakers’ experience with tech policy will likely have a significant influence on their ability to rally support from other members of Congress and on which issues get prioritized.
2. What language is used to talk about AI?
Beyond the specific issues slated for legislative action, the rhetoric used to discuss AI offers another look into the future of regulation. For one, it shows which voices in the AI dialogue are exerting the most influence — for example, the Stochastic Parrots authors and the Future of Life Institute have markedly different attitudes and approaches towards AI policy, reflected in the wording of their public statements. Furthermore, vague or misleading rhetoric on AI may foreshadow loopholes in the legislation that eventually gets put forward.
‘AI safety’ was a popular term of choice, which may intensify divisions in the regulatory community:
One area of note flagged by other analysts is the hearing’s slant towards ‘AI safety’ as opposed to AI ethics. While these terms seem to target similar goals and are used interchangeably by some, they’ve increasingly grown to signify two markedly different approaches to AI regulation, with ‘safety’ advocates primarily focusing on long-term impacts like existential risk, while ‘AI ethics’ approaches tend to emphasize more immediate issues like transparency and misinformation.
Altman notably uses the word “safe” or “safety” 15 times in his testimony, but never directly uses terms like ethics, fairness, or bias. This is consistent with OpenAI’s general public-facing rhetoric: the company’s charter frequently refers to its goal of building a “safe and beneficial AGI”, and safety is generally the term of choice used by Altman and other high-level employees. By contrast, Montgomery and Marcus both use a more varied set of terms, including safety, ethics, bias, and transparency.
If it seems odd to focus on these precise word choices, that’s the point: Altman’s safety-exclusive approach seems to be a calculated decision, aimed at solidifying a narrowly defined target for OpenAI’s activities in both research and, now, public advocacy, or perhaps at signaling to a particular group of supporters and funders.
Whether the growing division between these groups ends up influencing future debates is yet to be seen: At Mozilla’s Responsible AI Challenge event a few days ago, Marcus expressed concern that it could prevent action from being taken on an issue that’s remarkably avoided politicization so far, imagining a worst-case scenario future in which “Conflicts over which risks to address precluded anything from happening: 'AI Safety' and 'AI Ethics' people couldn't agree on anything, either in terms of problems or solutions; Congress gave up in disgust.”

Speakers used more selective wording to address difficult topics:
Beyond signaling certain value alignments, specific wording choices also show how the witnesses navigate around certain questions or topics. For example, in discussing user privacy protections and the ability for users to opt out of having their data used for training, Altman states:
“I mean, I think a minimum is that users should be able to, to sort of opt out from having their data used by companies like ours or the social media companies. It should be easy to delete your data… but the thing that I think is important from my perspective running an AI company is that if you don’t want your data used for training these systems you have the right to do that.” — Sam Altman
After being questioned on how to practically implement a restriction on data gathering and training, Altman clarifies that he’s not talking about data scraped from the public web, and that the opt-out ability he’s referencing is for data submitted directly to ChatGPT:
“So I was speaking about… the data that someone generates, the questions they ask our system, things that they input their training on. That data that’s on the public web that’s accessible, even if we don’t train on that, the models can certainly link out to it. So that was not what I was referring to. I think that, you know, there’s ways to have your data, or there should be more ways to have your data taken down from the public web, but certainly models with web, web browsing capabilities will be able to search the web and link out to it.” — Sam Altman
This demonstrates how a concept like the right to opt out of AI data training could have different meanings to different people: would regulations provide a way for publicly available web data (such as copyrighted artistic works) to avoid being used for AI system training, or would opt-out protections apply only for user-submitted data?

Another example of differing interpretations of a concept comes through in Senator Padilla’s questioning period on language inclusivity. Padilla’s statements mainly focus on the unequal attention given to content moderation in different languages (an issue over which companies like Meta have been sued for contributing to human rights abuses in Ethiopia and Myanmar):
“My understanding is that most research into evaluating and mitigating fairness harms has been concentrated on the English language, while non-English languages have received comparably little attention or investment. And we’ve seen this problem before. I’ll tell you why I raised this. Social media companies, for example, have not adequately invested in content moderation, tools and resources in non-English languages… I’m deeply concerned about repeating social media’s failure in AI tools and applications.” — Sen. Alex Padilla (D-CA)
Both Montgomery and Altman’s responses, however, seem to focus more on making AI models available in different languages, rather than ensuring content moderation standards stay consistent.
“We don’t have a consumer platform, but we are very actively involved with ensuring that the technology we help to deploy in the large language models that we use in helping our clients to deploy technology is focused on and available in many languages.” — Christina Montgomery
“We think this is really important. One example is that we worked with the government of Iceland which is a language with fewer speakers than many of the languages that are well represented on the internet to ensure that their language was included in our model. And we’ve had many similar conversations…” — Sam Altman
If Congress or regulatory agencies want to push for more consideration of non-English-speaking users, it matters whether the baseline is simply making models available in other languages or fully moderating activity in them. Deploying a large language model in more languages could actually be harmful without the proper resources to manage it. In some ways, the corporate responses read more like canned statements about diversity and inclusion than a full answer to these concerns.
The effectiveness of future regulation will depend on whether regulators can be specific about the actions they want to see, closing loopholes and navigating the increasingly complex language used to talk about AI.
3. How much will corporate entities control the discussion?
When several top tech CEOs were invited to a discussion at the White House at the beginning of May to discuss the dangers of AI, several people were quick to point out who wasn’t invited to the table. The federal government’s newfound interest in AI regulation has largely taken the form of outreach to corporate leaders, with many arguing that representatives from academia, civil society, and more policy-focused fields are being left behind. The Senate hearing weeks later displayed many of the same patterns, with the two largest names coming from the top ranks of major tech companies, and AI ethics research largely being ignored.

If corporate tech executives are the first people Congress consults when seeking to regulate AI technologies, it’s likely that many regulations will be crafted to serve corporate interests. Perhaps most emblematic of this deference to industry opinion is Senator Kennedy’s questioning, where he repeatedly urges the witnesses: “This is your chance to tell us how to get this right. Please use it.”
Corporate leaders acknowledged their limited capacity to address policy concerns:
Facing deep questioning on policy issues, the speakers themselves appeared uneasy at times, admitting directly that they didn’t have the right expertise to give a full answer. There’s a selective pattern in when they speak authoritatively like policy experts and when they admit that they aren’t: it’s visible when Marcus hesitates to give an answer on global policy, or when Altman states he doesn’t know exactly what a privacy law would need.
While it’s reassuring to know that these people aren’t trying to pose as regulatory experts, it also demonstrates the importance of bringing the right voices to the table: tech executives may be well-suited to answer questions about how their products work or their company structure, but detailed policy questions should be targeted towards a broader group of experts, including those in non-technical roles.
Antitrust efforts are likely to grow:
Multiple speakers at the hearing also referenced the importance of avoiding regulatory capture and ensuring proper corporate accountability. Senator Booker particularly focused on the issue of corporate concentration, as a limited number of companies dominate the AI sphere. Others like Senator Hawley inquired about AI’s role in the attention economy, where Altman noted that OpenAI didn’t operate on an ad-based model, but that other companies would likely use AI for targeted advertising.
With antitrust lawsuits currently pending against major companies like Google and the FTC aiming to step up enforcement against deceptive practices involving AI, corporate concentration and influence will be a major area to watch as AI regulation develops further. Yet if tech executives continue to be lawmakers’ first point of contact when aiming to understand AI, it’s likely that they won’t gain the depth of understanding necessary to craft effective and durable regulations.
4. Which branch of government will take the lead in AI regulation? Specifically, is a new agency the best way to deal with AI?
Beyond the rhetoric and character of the AI discussion, the Senate hearing held several key revelations about the contours of regulation itself. As Senator Graham remarked during his questioning period, there are three main ways to protect consumers against any product: statutory regulation, the legal system, and regulatory licensing. While the conversation naturally focused mainly on actions that Congress could take, a notable amount of attention was given to the last of these avenues. Executive branch action was at the forefront of the conversation, with speakers frequently referencing both existing arms of the bureaucracy and offering proposals for a new agency.
The prospect of a new executive agency generated excitement among many:
A few statutory regulations did seem to pick up support, such as requiring clear labeling of AI-generated content, particularly in political advertisements (which Altman agreed would be prudent). Other proposals also focused on transparency, like Senators Coons and Klobuchar’s bill to give researchers more access to platform data.
Yet pure legislative action paled in comparison to the discussion around bureaucratic oversight. Senator Blackburn called for more action from copyright authorities to ensure artists are fairly compensated when AI generates music based on their work, and Senator Durbin questioned Marcus on how existing government agencies could respond to the challenges posed by AI, to which he answered:
“We have many agencies that can respond in some ways. For example, the FTC, the FCC, there are many agencies that can, but my view is that we probably need a cabinet level organization within the United States in order to address this. And my reasoning for that is that the number of risks is large. The amount of information to keep up on is so much. I think we need a lot of technical expertise. I think we need a lot of coordination of these efforts…” — Gary Marcus
Marcus was not alone in calling for an independent agency dedicated to AI. Altman also advocated for an agency that “licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards”.
Montgomery was more skeptical on the matter, arguing that it would be best to build upon existing systems to address the risks in the status quo. Focusing all efforts on creating a new organization could plausibly allow certain harms to evade bureaucratic oversight, and some have argued that it amounts to Congress passing the problem off to a new group rather than taking direct action. Her response, however, was met with resistance from lawmakers like Senator Graham, who stated plainly:
“I just don’t understand how you could say that you don’t need an agency to deal with the most transformative technology maybe ever.” — Sen. Lindsey Graham (R-SC)
Executive branch action could also take other forms:
Even with this disagreement, a clear model for AI regulation is emerging with a heavy focus on the executive. The four prongs of Montgomery’s “precision regulation approach” (different rules for different risks, clearly defining high-risk cases, promoting transparency, and conducting impact assessments) bear a strong similarity to the agency-based proposals.
Senator Blumenthal’s proposal is similar, looking into “independent testing labs to provide scorecards [similar to] nutrition labels… that indicate to people whether or not the content can be trusted, what the ingredients are, and what the garbage going in may be”.
While the details vary a bit from one proposal to the next, the principles are fundamentally the same: create a set of standards or licensing requirements, assess systems against them, and make changes accordingly. That’s a great start. But what happens when developers don’t follow them, or when those standards fail to catch real harms until it’s too late?
On the flip side, the role of the judiciary was largely neglected:
Without sufficient attention paid to legal frameworks, new AI regulations will lack a good foundation for practical enforcement. Yet the speakers primarily evaded any deep discussion of legal action related to AI.
Senator Graham used his questioning period to probe Altman about OpenAI’s susceptibility to lawsuits in the case of AI harming someone, which Altman largely downplays, saying it’s beyond his area of knowledge, and that OpenAI has only been sued for “petty frivolous things”, as “happens to any company”. The lawsuits Altman’s referencing currently include a class-action copyright suit targeting GitHub’s unattributed code samples, and another accusing the company of fraudulently obtaining nonprofit status. OpenAI has also begun facing defamation lawsuits from politicians who accuse it of spreading misinformation, with more likely to come in future months.
In the wake of these growing legal concerns, Altman has stated that Section 230, the law which prevents online platforms from being held liable for user-generated content, doesn’t apply to generative AI, a position with which the law’s own drafters have agreed. In response to Graham’s questioning, he states: “We’re claiming we need to work together to find a totally new approach”, but few details are given on what that approach could be.
Graham later moves on from the legal issue to focus on advocating for a new agency, but fellow Republican Senator Josh Hawley takes a notably different approach, zeroing in fully on legal accountability. Expressing concerns about regulatory capture in a new agency, he argues that a better approach would be to create a federal right of action allowing individuals harmed by AI to bring their cases to court. Marcus, however, responds with skepticism, arguing Hawley’s plan “would be too slow to affect a lot of the things that we care about”, and that existing laws have too many gaps and loopholes to be effective at targeting AI.
Marcus is correct in noting that current legal standards are insufficient. The conclusion that this should make executive action a higher priority than legal reform, however, seems backward. Without the clear development of legal standards, planned executive action lacks a proper forum for resolving disputes, and disputes are bound to arise. OpenAI may not claim protection under Section 230, but other providers are likely to use First Amendment defenses to avoid facing liability for AI-generated content. (My previous writing here explores several of the constitutional challenges brought by generative AI, and offers new legal models that could be used to address them.)
It remains to be seen whether legislative proposals and executive agencies will set out clear paths for bringing forth legal challenges. If Congress fails to recognize the role of the judiciary branch in regulating AI, they risk passing laws that could later be ruled void, or leaving those who are harmed by these technologies without a chance for redress and remedy. To fully address the risks posed by new AI systems, all branches of government need to be on board, rather than leaving the task solely to executive bureaucrats.
5. What will action on AI look like at the international level?
Going beyond domestic regulation, international efforts to address AI were another major topic of discussion. With the EU’s AI Act advancing through the European Parliament in the days before the Senate hearing, and G7 leaders preparing for future talks on AI, several speakers alluded to the role that international cooperation would play in developing uniform global standards and enforcement for the risks posed by AI.

Senators and executives want America to lead the way in AI:
Many speakers struck a distinctly patriotic tone, talking about the need for America to ‘lead the way’ in global AI discussions. As Altman stated in response to questioning from Senator Welch:
“I think America has got to continue to lead. This happened in America. I’m very proud that it happened in America… and I think it is important to have the global view on this because this technology will impact Americans and all of us wherever it’s developed. But I think we want America to lead. We want what we want.” — Sam Altman
It’s not entirely clear what an America-led approach would look like, as the US has historically fallen behind other governments like the EU in taking proactive action on AI. Striving for more international dialogue is an important step, one which policymakers have long argued is necessary to prevent AI from being misused for authoritarian purposes. However, US leadership in AI should not be a goal in itself.
This America-first approach often comes with anti-Chinese undertones:
Mainstream coverage of global tech policy frequently frames AI development as a race for power between the US and China. As efforts to ‘decouple’ from Chinese technology have increased, so has the assumption that any technological activity in China inherently poses a threat. Some have called for the US to avoid taking regulatory action that might ‘set progress behind’ and allow Chinese AI technology to take the lead (though such a scenario is largely exaggerated). While discussing democratic values in AI, Senator Coons states:
“The Chinese are insisting that AI being developed in China reinforce the core values of the Chinese Communist Party and the Chinese system. And I’m concerned about how we promote AI that reinforces and strengthens open markets, open societies and democracy.” — Sen. Christopher Coons (D-CT)
What’s missing from these conversations is a neutral examination of China’s AI policy from a factual standpoint. The country’s recent regulations on generative AI do include some areas of vague policy, but they also tackle concerns like information verifiability and intellectual property, which have yet to be thoughtfully addressed in US policy. It’s easy for US politicians and media outlets to take a one-dimensional view of Chinese tech development. However, doing so risks further dividing AI development across country lines, making it harder to converge on global standards. If lawmakers see international action purely as a way to ‘beat China’, they may miss opportunities for strategic collaboration by taking a narrow and adversarial approach.
International agencies will play a large role in the global response to AI:
In addition to considering how to respond to Chinese AI development, the largest question on the table is whether a global agency to monitor AI will be created, and how it could be structured. Senators on both sides of the aisle frequently expressed support for an international agency, a proposal primarily championed by Marcus.
Agencies like CERN and the IAEA have historically acted to facilitate dialogue and cooperation on delicate technologies like nuclear energy, and a similar model could be useful for AI. At the same time, nations can build on established guidelines such as UNESCO’s Recommendation on the Ethics of Artificial Intelligence. It also remains to be seen how international efforts will include regions like Africa which have largely been left out of the AI discussion so far.
Conclusion
The Senate AI hearing as a whole points to a growing government consensus on the need to regulate AI, with bipartisan agreement on a wide range of issues. As lawmakers move forward in navigating the complexities of AI regulation, however, the opinions of tech executives will only take them so far. It will be the work of many actors, from AI ethicists to citizen advocates, that determines which concerns get addressed, whose voices are elevated, and which laws begin to lay the foundation for the emerging field of AI policy.