
Stop Killer Robots: The Menace of Artificial Intelligence

On 1 March 2021, the US National Security Commission on Artificial Intelligence voted unanimously in favour of rapidly expanding the US military’s AI capabilities. Former Google chief executive Eric Schmidt, who chaired the commission, urged the Biden administration to support the use of AI weapons in the armed forces.


The commission warned the US could “lose its military-technical superiority” as China looks to lead the world in AI technology by 2030. Indeed, the report stated, “if our forces are not equipped with AI-enabled systems guided by new concepts that exceed those of their adversaries, they will be outmatched and paralysed by the complexity of battle”. To this end, Schmidt and the rest of the commission recommended the US double its investment in AI research and development by 2026.


This is a very precarious and frightening development in international relations, signalling a potential arms race which may already be in its infancy. In an attempt to remain two generations ahead of China in microelectronics, the US has restricted the export of advanced computer chips to Beijing. Meanwhile, to counter Washington’s embargo, China’s Ministry of Industry and Information Technology has promised to “vigorously expand” the country’s chip-making capabilities by 2025.


The UK is also considering expanding its AI research and development programme. General Sir Nick Carter has suggested that as many as 30,000 of the British Army’s soldiers could be robots or remotely controlled machines as early as the 2030s. When questioned on the likelihood of a future world conflict, the General stressed that “it’s a risk, and we need to be conscious of those risks” as AI technology develops.


It seems states around the world are free to explore this new and potentially devastating avenue of armed conflict. Indeed, Professor Noel Sharkey of the Stop Killer Robots campaign argues there is no international muzzle preventing or controlling the development of autonomous weapons. As a result, powerful states are already considering their use in national security.


This presents significant concerns for the future of war. As states expand into this unknown territory, the threshold for armed conflict becomes significantly lower. If machines come to replace soldiers on the front line, governments are unlikely to face public backlash, as fewer lives at home are at risk. However, if autonomous weapons become a reality, not only is war a more likely scenario, but the pace of armed conflict also intensifies. More specifically, such systems compress the time frame of the decision to kill. Where human beings are slowed by emotional judgement and deliberation, an autonomous weapon is programmed to take life instantaneously. In turn, this could lead to a rapid escalation in hostilities in which many more lives are at risk.


Nevertheless, Michael Schmitt of the US Naval War College argues such calculated decisions reduce the opportunities for despotic individuals like Saddam Hussein to slaughter their own people. Ronald Arkin of the Georgia Institute of Technology has similarly stressed that machines “will not be subject to all-too-human fits of anger, sadism or cruelty”. Thus, from this perspective, AI reduces the possibility of massacres and other abhorrent human rights abuses resulting from irrational human emotions and passions.

However, this argument is rather flawed. Surely such advanced weapons would have an override feature, permitting senseless and hateful individuals to commit barbaric acts of cruelty? Furthermore, the choice to kill or not to kill is not a binary decision: soldiers face an unquantifiable array of scenarios in the heat of war. Rasha Abdul Rahim, an arms control adviser at Amnesty International, has stressed that the three pillars of the laws of war – distinction, proportionality and precaution – fundamentally require human judgement.


To this end, distinguishing between civilians and combatants, deciding whether to use lethal force, and safeguarding non-combatants wherever possible all require careful deliberation. A machine simply is not capable of making such complex judgements, and from a moral standpoint, a computer programme should not be given that responsibility.


Indeed, Bonnie Docherty and Matthew Griechen of Harvard Law School’s International Human Rights Clinic underline how Article 48 of Additional Protocol I to the Geneva Conventions requires warring parties to distinguish civilians from soldiers, and to differentiate civilian infrastructure (such as hospitals and schools) from military installations. Weapons which fail to do this are labelled “indiscriminate and unlawful”.


It is difficult to imagine how machines would determine who is a combatant and who is not. Today, armed conflict is deeply enmeshed in civil society, and guerrilla warfare blurs the boundaries of the battlefield to such an extent that distinguishing between civilian and combatant is a complex task to say the least. Determining who poses a threat, and reading a person’s intentions through tone of voice, facial expression and body language, can only be done by a human being who understands these nuances in a way software cannot. Machines, in contrast, neglect all ambiguity and doubt: if an individual looks like a target, then they must be one. Under such conditions, autonomous weapons would be deemed indiscriminate, and therefore unlawful, under international humanitarian law.


Similarly, Article 6 of the International Covenant on Civil and Political Rights states that all human beings have a right to life. Killing is lawful only if it is proportionate and necessary to protect the lives of others. AI simply lacks the empathic ability to make such complex and important decisions.


Indeed, drones, arguably the precursor to autonomous weapons, have already demonstrated the danger posed by such advanced weapons capabilities. The Intercept’s ‘Drone Papers’ found that, during one period of US operations in Afghanistan, nearly 90% of those killed in drone strikes were not the intended targets. Meanwhile, the US remains elusive over the proportionality of its use of force: the US Air Force has argued that proportionality is “an inherently subjective determination that will be resolved on a case-by-case basis”. Moreover, Grégoire Chamayou stresses that the definition of ‘combatant’ has become so diluted that it extends to any form of “collaboration with, or presumed sympathy for some militant organisation.”


The crux of the matter is that accountability for military force is becoming increasingly vague. The hazy distinction between civilian and combatant, the unclear parameters around the acceptable use of force, and the ever-increasing distance between the triggerman and the target mean that accountability for violence is fast becoming non-existent. Little action has been taken to hold the US drone campaign to account. Imagine if such machines acted independently: who would be to blame? Military commanders can only be prosecuted if they are aware of their subordinates’ actions, while arms manufacturers are immune from prosecution if they follow government guidelines. Therefore, if a machine acts independently, almost no one is responsible for whom it targets and why.


Indeed, Joy Buolamwini of the Massachusetts Institute of Technology has underlined how poor AI systems are at recognising faces, and how such systems are even poorer at recognising darker-skinned individuals. The systems she tested presented error rates of 19% for darker-skinned persons and 34.4% for darker-skinned women. This indicates that AI would not only have trouble distinguishing combatants from non-combatants, but that such distinctions would also carry significant racial and gendered biases.


AI poses a serious risk to both international and human security. If states continue to explore autonomous weapons, unscrupulous governments and repressive regimes will soon aspire to do the same. It does not take much thought to imagine the horrors that would ensue if the Assad regime acquired autonomous robots programmed to kill instantaneously. Such technology presents a serious threat to civil liberties and individual freedoms: weapons of this kind could suppress protests with brutal force. Indeed, China is already using advanced surveillance techniques to suppress its Uighur Muslim minority, and more advanced AI would add a brutal new dimension to this crisis.


Furthermore, the trickle-down effects of this technology mean violent non-state actors would ultimately acquire autonomous weapons. Thirty years ago, drones were nothing more than science fiction; yet in recent years ISIS has repeatedly launched drone strikes across Iraq and Syria. Similar groups acquiring ‘off-the-shelf’ autonomous technology in the near future seems all too likely.


In sum, it is essential the international community takes the development of autonomous weapons seriously. Unfortunately, it appears this is not yet the case: the UN Convention on Certain Conventional Weapons has dedicated very little time to discussing killer robots. The advancement of drones demonstrates how hard it is to curb a weapon’s use once it has been deployed, so control over autonomous weapons must be addressed preventatively. Just as biological and chemical weapons have been condemned by the international community, so too must artificially intelligent weapons. In July 2015, leading AI researchers and scientists signed an open letter calling for a prohibition on the use of autonomous weapons in warfare. The weaponisation of AI is fast becoming a real and present danger in light of the ever more strained relationship between the US and China. Action must be taken now, before it is too late.


By Daniel Mountain.




