Killer Robots, aka Lethal Autonomous Robotics, vs International Humanitarian Law. And the winner is … (Part 1)
Updated on 05 April 2024
- Progressive development of international law and weaponry
The making of international law is not a one-time event. It cannot even be reduced to the process of negotiations, starting on day A and ending on day Z. Like love, the making of international law is in the air, everywhere we look around in multilateral diplomacy.[1]
This explains why the founding jurists of the United Nations, in their infinite wisdom, retained in the Charter the tender terminology: progressive development and codification of international law.
Yet, this nice expression implies that there is always progress in the development of international law. This is true in the sense that there is more international law (more UN conventions, regional treaties, states parties, courts, and – nothing is perfect – even lawyers).
In fact, international law may face powerful obstacles and setbacks which not only block its progressive development, but also force it to take a step back occasionally. For example, some might say that the economic and financial crisis weakened or even annihilated several economic and social rights which had been the pride of well-off European societies for decades. Others will claim that anti-terrorist policies actually helped terrorists fulfill their dream, if that dream was to damage Western democratic societies based on human rights and the rule of law. Even worse, new guys in town may say that current surveillance techniques and policies are making George Orwell turn in his grave and Big Brother look like a boy scout playing with Lego.
This kind of challenge makes people and organisations evaluate whether international law as we know it is badly damaged. Technological and military progress counts among the factors that may indeed block or reverse the progress of international law. This brings me to one of the recent debates around the Human Rights Council in Geneva.
Before proceeding, let me remind you of the truism that, in diplomacy, words are important and have to be carefully chosen. In the ensuing paragraphs you will have a choice between two terms. Of course, this will last only until (if ever) the UN General Assembly unanimously settles on a single, boring expression, but one which will have the same meaning for everyone, in every sight, in every sound, as John Paul Young might sing, had he been asked to weigh in on our debate.
The debates I am alluding to concern the possible impact of new, unmanned weapons on public international law in general, and on international humanitarian law in particular.
Alex Ivanov – The Principle of Humanity (bronze)
- Killer Robots, the rude version
Human Rights Watch launched a study on Killer Robots[3] and UNITAR organised a debate on the same issue. The study offers a comprehensive, well-structured analysis of the complex relationship between a forthcoming generation of weapons and international law.
Killer Robots is a user-friendly term, apparently handpicked to allow even fresh graduates in video game sciences to understand to what extent the new stuff may affect their future (whether they go abroad to fight terrorism or stay home as civilians).
Human Rights Watch defines Killer Robots as fully autonomous weapons that could select and engage targets without human intervention. These weapons are not only unmanned; they are also generally detached from any human involvement, acting on their own when selecting which targets to attack. Killer Robots are different from any weapons used today, including drones. The latter are also unmanned, but they still require a human command to strike. At present Killer Robots do not exist as such, but technology is moving in the direction of their development. Faster than international law is moving in the direction of codification, I would add … if asked.
For his part, Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, presented a study on Lethal Autonomous Robotics and the Protection of Life.[4] He defines Lethal Autonomous Robotics as robotic weapon systems that, once activated, can select and engage targets without further intervention by a human operator.
Perspicacious readers that you are, you have no doubt already noticed that Human Rights Watch and Christof Heyns are talking about the same thing. The academic and euphemistic Lethal Autonomous Robotics are the same as the rude and vulgar Killer Robots.
Human Rights Watch believes that international law prohibits the development and use of such weapons, given that they violate crucial requirements of international humanitarian law and the Martens Clause.
Killer Robots cannot fulfill three crucial principles of international humanitarian law, namely distinction, proportionality, and military necessity, and therefore ought to be prohibited.
Experts may skip the following lines, but I ought to remind colleagues who took Diplo’s courses on multilateral diplomacy that the principle of distinction requires states to distinguish between the civilian population and combatants in warfare. Killer Robots will not be able to make this distinction.
The principle of proportionality prohibits attacks in which civilian harm outweighs the military benefit. Killer Robots will lack human judgment and will thus be unable to weigh how much harm is proportionate to any benefit.
The principle of military necessity implies that any military act must be necessary, in the absence of any other less violent remedies for the conflict. As efficient as they are supposed to be, Killer Robots will not be able to make reasonable judgments about what is militarily necessary and what is not.
The Martens Clause says that civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience. Killer Robots, with all due respect to their creators, contravene the principles of humanity and the dictates of public conscience.
Even worse, Killer Robots will increase the likelihood of conflict: states will not need to sacrifice their own people, so wars are likely to happen more frequently. Given the absence of human empathy in selecting and attacking targets, Killer Robots are likely to render conflicts more brutal.
Human Rights Watch believes that, undoubtedly, the development and use of Killer Robots contravene international humanitarian law, and recommends that states:
- Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
- Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.
- Commence reviews of technologies and components that could lead to fully autonomous weapons.
Disgusting killers, no? Well, it seems to be so. Do not worry, in Part 2, we will meet their more emancipated version, the Lethal Autonomous Robotics, and look at the differences. If any!
[1] If you forgive me the liberty of paraphrasing John Paul Young on such a serious issue!
[2] Like someone you all know, but whose name I will not mention without his permission.
[3] Human Rights Watch & International Human Rights Clinic at Harvard Law School (2012) Losing Humanity: The Case against Killer Robots.
[4] Human Rights Council (2013) Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns. Document A/HRC/23/27, 9 April 2013.
Thank you, Aldo, for your remarks. Of course, you are right. As usual! You go deep into the words and beyond them.
I have a few clarifications of context, though.
1. The only part which is actually mine is the introduction. The rest is my attempt to summarize the reports of Human Rights Watch and the Special Rapporteur. So I could better answer the questions you did not ask than those you did.
2. As neither KRs nor LARs exist as such at present, the attempt of both HRW and CH is to anticipate and to prevent the evil. In other words, they want a … pre-emptive move.
If we see the issue this way, then the traditional order of precedence (lex specialis derogat legi generali, lex posterior derogat priori) does not really apply.
If we need a principle of law (to make Jovan happy, as he was a jurist before becoming a champion in the Internet League), it would rather be lex moneat prius quam feriat: the law warns before it punishes.
3. We are talking about the characteristics of future objects. Therefore I would defend both HRW and CH by saying that the adjective “autonomous” is rightly used, as it describes the “autonomy” of robots in relation to immediate human instructions. Later, when we see them in the line of duty, we will be able to discuss whether they act “automatically” or take a while for … reflection.
4. It goes without saying that morality is extremely important. My article refers to international humanitarian law, not to geopolitics, strategy, or morality. And yet the issue of morality, humanity, and ethics is there, in the Martens Clause. I was not that subtle, and I did not try to induce the reader to believe that efficiency is the alibi of the killers.
Like you, if I had to choose, I would prefer to be killed by a human being rather than by a machine.
Petru, I’m preparing a spirited response to the whole concept of humanitarian law… bear with me for a while.
Like X? in Catch-22, I don’t want to get killed, period, and it matters little to me whether I die by a human hand, a distracted human hand, a robot, or a sanction. I just don’t want to be there.
2000 years ago, the losers were sold into slavery – and nobody saw this as immoral. Such was jus ad bellum then. We have evolved. But not through “humanitarian law”, or jus in bello. We have evolved because people have become autonomous, and because the technology on which our society rests requires autonomy, and autonomy is indivisible. Brute force is too blunt an instrument to yield compliance.
The context has changed for social reasons, not on account of “humanitarian law”. There is a Japanese short story about a fly clinging to a galloping horse. The fly is convinced that it is directing the ride.
Petru, thanks for the cogent analysis (though I’d put a few question marks on the introductory part about the emergence of international law). I have, however, a fundamental problem with your approach. If you draw a line, you INCLUDE as well as exclude. If “autonomy” becomes the shibboleth, then all that is not autonomous is automatically validated. You are, in a sense, “pushing the envelope”. A general legal principle puts the lex specialis before the lex generalis and the lex posterior before the lex anterior. In addition, you are subtly transforming an issue of morals into one of efficiency. Take the Serbia bombing: 250 lawyers vetted targets, and we became so engrossed in the seriousness of their work that we forgot what they were doing – bombing.

BTW: the first such “automatic” bombing was Hiroshima. Truman never gave the order to bomb. Spaatz got a conditional order – to bomb UNLESS countermanded. No one bothered to call back, so he did it on auto-pilot. Is it autonomous, or automatic?