

Russia May Have Used Autonomous Drones. Now What?
Examining the policy options for responding to Russia's possible use of autonomous drones in Ukraine.
Intro
The Russian invasion of Ukraine goes on, and much continues to be made of its implications. France’s re-elected President Emmanuel Macron vowed to “act first to avoid any escalation following the Russian aggression in Ukraine” during his inauguration speech. That aggression itself, as we’ve already commented, has possibly involved AI-powered weapons. In this piece, we comment on the political implications of Russia’s possible use of AI-powered weaponry in Ukraine.
The State of War
Since our last piece on the topic was published, commentary has continued on the use of AI in the Ukraine war. The conflict does offer a glimpse into the future of warfare: the Los Angeles Times calls wars of the kind Russia is waging “virtual wars,” “increasingly fought from a distance… by remote control, automated systems, artificial intelligence and social media.”
Advances in computer vision that allow systems to detect objects and make decisions based on their visual field are bound to enable drones and military vehicles that can hunt for particular targets based on images their onboard systems have been fed. There remains no evidence that Russia has used lethal autonomous weapons (LAWS) in its war, but The Conversation notes that Russian forces have been seen testing new “swarm” drones in addition to “unmanned autonomous weapons capable of tracking and shooting down enemy aircraft.” Fortune cites a comment that Russia would further use AI to help analyze battlefield data.
Implications of LAWS and Future of Warfare

What does it mean when AI-powered systems enter warfare? If an AI-powered system is enabled to make decisions about whether to take a human life, numerous questions arise. Some claim that these systems could reduce civilian casualties, and that is a point worth taking seriously. But we also know that today’s vision systems are not perfect, and AI systems in general can make mistakes that would literally be lethal in this use case. Fortune points out that even if a vision system correctly identifies a tank, that tank may be next to a school; avoiding casualties also requires knowledge of the landscape where the war is being conducted. Furthermore, the question of responsibility becomes complicated when autonomous systems are put into use by military operators but make decisions to kill independent of human guidance.
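To make that point concrete, here is a minimal, purely illustrative Python sketch. The detector output, coordinates, and list of “protected sites” are all hypothetical; this is not any real system’s logic. It shows why a confidence threshold on a vision model’s output is not a safeguard by itself: a correctly identified target can still sit next to a school, and ruling out that strike requires contextual knowledge the perception model does not have.

```python
# Illustrative sketch only: a toy "engagement decision" rule showing why a
# confidence threshold on object detection is not, by itself, a safeguard.
# The detector output, coordinates, and protected-site list are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # e.g. "tank", as reported by a vision model
    confidence: float     # model confidence in [0, 1]
    position: tuple       # (x, y) in some ground coordinate frame, metres

# Hypothetical protected sites that the vision model knows nothing about.
PROTECTED_SITES = [("school", (105.0, 98.0)), ("hospital", (400.0, 20.0))]

def near_protected_site(pos, radius_m=150.0):
    """Return True if a position lies within radius_m of any protected site."""
    return any(
        ((pos[0] - sx) ** 2 + (pos[1] - sy) ** 2) ** 0.5 < radius_m
        for _, (sx, sy) in PROTECTED_SITES
    )

def naive_decision(det: Detection, threshold=0.9) -> bool:
    # Confidence-only rule: "engage anything the model is sure is a tank."
    return det.label == "tank" and det.confidence >= threshold

def context_aware_decision(det: Detection, threshold=0.9) -> bool:
    # Same rule, but refuses to engage near a known protected site.
    return naive_decision(det, threshold) and not near_protected_site(det.position)

if __name__ == "__main__":
    # A correctly identified tank parked next to a school.
    det = Detection(label="tank", confidence=0.97, position=(100.0, 100.0))
    print("naive rule engages:", naive_decision(det))                  # True
    print("context-aware rule engages:", context_aware_decision(det))  # False
```

The structural point is that the contextual check lives entirely outside the perception model; whoever, or whatever, supplies and maintains that context carries part of the responsibility the paragraph above describes.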
So are these “killer robots” in our future? Fortune also notes that the global market for AI-enabled lethal weapons is growing quickly, “from nearly $12 billion this year to an expected $30 billion by the end of the decade,” with the US alone already spending $580 million annually on loitering munitions. Concerns have been raised, but they have not stopped the development of and market for these systems: supporters of an autonomous weapons ban have warned of “slaughterbots,” clusters of inexpensive drones that could be used to kill everyone in a certain area or commit genocide. UN talks to ban weapons that “select and apply force to targets without human intervention” fell apart, with objections from the US, Russia, and the UK.
What to do? The Policy Response

Russia may or may not have used autonomous weapons systems in Ukraine, but Russia, the US, and other countries are continuing to develop AI-powered weapons for use in warfare. Going forward, we should act as though the use of autonomous weapons is a live issue—if it hasn’t already happened, it likely will in the near future. The question, then, is what to do?
Scientific American and The Bulletin have both suggested approaches along the lines of regulation and pushing compliance with international humanitarian law (IHL). While Scientific American merely highlights the need for regulation, The Bulletin goes further, offering and evaluating a number of suggestions for moving forward:
A legally-binding comprehensive ban on autonomous weapons: The strongest possible measure, and unlikely given pushback by powerful nations.
Recommendations from the International Committee of the Red Cross: In particular, ‘a ban on unpredictable autonomous weapons, autonomous weapons that target human beings, and various regulations on other sorts of “non prohibited” autonomous weapons.’
Better implementations of existing IHL: best practices, practical measures, and general information for complying with humanitarian law for autonomous weapons. This would require monitoring failure when best practices are observed to understand whether autonomous weapons comply with existing humanitarian law, and adapting measures to the findings.
A political declaration about the necessity of human control: the “easiest” approach but also the least likely–such a declaration would call into question why autonomous weapons are needed in the first place, while autonomous weapons ban advocates would see such a move as too little.
Indeed, the chances for hard regulation are slim: the outcome of the UN Convention on Certain Conventional Weapons indicates that powerful nations are unlikely to agree to such measures, and international bodies do not have the ability to overpower their wishes, especially when nations like the US and Russia act in concert. Needless to say, international pressure to force these countries into agreeing to restrictions is hardly an option.
The “intermediate” approaches from The Bulletin–the Red Cross recommendations or developing best practices around IHL–seem like the only realistic options among the four suggested. I think these approaches hold some promise, but do need to be worked out in detail.
Let us first examine the Red Cross recommendations. What constitutes an “unpredictable” autonomous weapon? Which facets of its behavior must be predictable? The exact behavior of an AI system, how it goes about achieving its goals, is always going to be unpredictable. A more feasible standard is predictability that hinges on correctness: demanding that an autonomous weapon always achieve an intended result, such as hitting the correct target or class of targets. On the whole, it seems that the Red Cross recommendations would have nations “de-fang” their militaries’ autonomous weapons systems and take steps to ensure their correct behavior before allowing their use in war.
The suggestion to better implement existing humanitarian law offers an interesting iterative approach to the problem. By articulating how humanitarian law is to be followed in the use of autonomous weapons, and examining how well these approaches reduce the risks of such weapons, we can understand and evolve approaches to curbing adverse consequences.
A document from the Center for Naval Analyses (CNA) indicates that discussions over these sets of recommendations may be possible, but tricky. Russia appears to believe that IHL applies to LAWS without any changes and that their use falls within the parameters of that law. Russia also advocates for the concept of “meaningful human control” over future LAWS as a potential point of consensus with the international community; however, the CNA observes that the definition of “meaningful” may be difficult to develop without politicization.
Enforcement

These best practices and suggestions are nice, but what of enforcement? If the United States deploys a system that is deemed “unpredictable,” what are the consequences? It is easy to paint a bleak picture of international law enforcement: as we have already commented, international bodies are unlikely to effect meaningful change in the policies of nations that hold outsized power in their chambers.
Indeed, the problem is bound to be one of incentives. We can quibble about security discourse, the various international relations theories that pertain to warfare, and how nation-states’ perceptions of one another influence action, but it seems fair to claim that nations will continue developing, or stop developing, LAWS and other autonomous weapons according to whether they deem it in their best interests, however they arrive at that conclusion.
In “Revisiting the Geneva Conventions: 1949-2019,” Derek Jinks argues that IHL in general is, in fact, self-enforcing: irrespective of the opposing party’s actions, “mistreatment of the enemy often decreases morale in one’s own forces, discourages surrender and motivates more ferocious fighting by the enemy forces, encourages mistreatment of one’s own forces when captured, and complicates efforts to restore the peace post-conflict.”
Furthermore, non-belligerent third parties such as international institutions can impose steep costs on IHL-disregarding parties through mechanisms such as international criminal tribunals, while the political economy of international legitimacy also exerts influence on nations’ compliance with IHL. Jinks recognizes the serious limitations of enforcement by these third parties, noting that they “often lack the resources or sustained political will to influence law-disregarding parties in any meaningful way.”
The Geneva Conventions and their mechanism for enforcing IHL provide an interesting case study for enforcement mechanisms regarding autonomous weapons and their implications for human rights. Jinks also observes that the Conventions, which currently form the core of IHL and regulate the conduct of armed conflict, serve as an effective mechanism for its enforcement not by imposing strict guidelines or punishment for violations, but by representing a “fundamental humanization and individualization of inter-belligerent enforcement.” The Conventions afford the punishment of wrongdoers, while regulating that punishment through IHL:
> The enforcement mechanism is, then, the individualized assignment of blame for the violation of shared rules through a process that is itself consistent with shared standards.
Jinks observes that reliance on war crimes trials is also problematic: such trials may not be a sufficiently punitive response to violations, resource constraints may limit the number of trials possible, and weaker nations may lack the institutional capacity to conduct procedurally adequate trials. Indeed, despite their advantages, the provisions of the Geneva Conventions may not solve the problem of making the punishment fit the crime.
So, in a world of asymmetric power, international institutions dominated by a few powerful actors, and seemingly few “real” prospects for international sanctions, what can we do in the face of autonomous weapons development? Enforcement through war crimes trials would suffer from the limitations Jinks points out, and responsibility is far harder to assign when the actions of both humans and autonomous systems are intertwined.
I think a key incentive to hinge on is legitimacy. Legitimacy in international and domestic affairs is an incredibly important consideration for any government. Nations that do not seem legitimate to their citizens may be doomed to popular discontent and an inability to enact meaningful policy. Nations that lack international legitimacy are likely to suffer myriad consequences, economic and diplomatic among them.
This is not to mark legitimacy as the only lever needed to direct the behavior of nations, but it is one with important implications: the status of “international pariah” is one that even powerful nations likely find undesirable. Such a measure may suffer from collective action problems, since if a few nations fail to apply pressure, the offending country might not be sufficiently incentivized to change its behavior. But among the three mechanisms of social control traditionally posited by political theorists (coercion, self-interest, and legitimacy), legitimacy seems to be a lever that international organizations or warring parties could plausibly move.
The Geneva Conventions’ focus on avoiding retaliatory spirals and systematic dehumanization is useful in the context of warfare–by definition, measures that achieve this would avoid dangerous conflict spirals and appropriately calibrate to the psycho-social context of war. But autonomous weapons imply a different sort of warfare. Dehumanization is inherent in the operation of autonomous weapons, which can be controlled from a distance as though one were playing a video game. “Humanizing” measures that one might think of, like making an operator stare at photos of people he might be targeting, seem unlikely to actually be implemented by warring nations. Furthermore, requiring that the taking of human life in warfare must comply with IHL will be complicated by the use of LAWS, and debates over whether autonomous systems meet the desired standards are bound to be politicized.
Conclusion
I am skeptical that we can “fully” solve the issue of dehumanization in the use of autonomous weapons, and skeptical that nations will find a reason to cease developing and deploying them. What remains, then, is the best pragmatic solution we can muster: one that at least mitigates the damage from this evolution of warfare and establishes a way to avoid egregious violations of human rights and sovereignty going forward. I have already noted that enforcement of international law can be difficult, but standards and norms that most nations are willing to obey can be a useful check nonetheless.
Additionally, I do think that the “intermediate” approaches we mentioned in the previous section still have their use, and might be the best options available. We cannot expect 100% compliance with rules, but guidelines and norms that most nations will agree to and are willing to iterate on seem like a pragmatic solution with enough force to mitigate the worst harms of new types of warfare.
About the Author
Daniel Bashir (@spaniel_bashir) is a machine learning engineer at an AI startup in Palo Alto, CA. He graduated with his Bachelor’s in Computer Science and Mathematics from Harvey Mudd College in 2020. He is interested in computer vision, ML infrastructure, and information theory.
Copyright © 2022 Skynet Today, All rights reserved.