Responsibility Protraction with Remotely Controlled Combat Vehicles
Abstract This paper deals with the phenomenon that there has been little to no prosecution of misconduct with remotely controlled combat vehicles (RCCVs), such as armed drones, although it is likely that RCCVs have been deployed unjustly. While responsibility gaps are considered one reason for this apparent lack of prosecution, it is argued that there are no such gaps because no realistic cases of unintended, uncontrolled, physical outcomes, unbeknownst to their operators, are conceivable. Instead, the lack of prosecution is explained by responsibility protraction: the involvement of many stakeholders makes it difficult to pin down one responsible individual, errors can slip in unbeknownst to all parties involved, and “surgical attacks” with drones may have been considered less significant than bombing by aircraft and therefore largely neglected. To address possible misconduct with RCCVs, this paper finally considers the following measures: (a) RCCVs should be deployed as little as possible, (b) more effort must be spent on ascribing responsibilities, (c) conscientious engagement should be promoted among RCCV operators, and (d) a novel military convention ought to mandate that pilots be stationed in countries where RCCVs are being deployed, whenever there is no official declaration of war against these respective countries.\(^1\)
Introduction
One recurring theme in robotics and AI is responsibility gaps (Matthias 2004; Coeckelbergh 2020). The notion follows from the thought that we can build machines for whose actions neither the machine nor its developer(s) can be held fully responsible. Consider a machine that has been trained on vast amounts of data and then let run on its own, causing consequences that could not be predicted. Since the machine's developer(s) would not be the physical source of its actions and could not fully predict its performance, they would be responsible for the machine only in a reduced sense. But neither could the machine be held responsible, if we agree that a machine lacks the necessary capacities to regulate and adapt its output based on a moral understanding of the situation and a capacity to foresee consequences (Köhler 2020, 3127). Furthermore, if we consider responsibility to be a form of accountability, a machine would also be incapable of answering or demonstrating accountability for its output, as a human being would, because a machine is not an appropriate recipient for praise or punishment (Sparrow 2007, 71; Danaher 2016). These points, taken together, indicate that autonomous machine actions create a gap in the assignment of responsibility.
In military ethics, responsibility gaps have been discussed with reference to autonomous military robots, also known as lethal autonomous weapons systems (LAWS) (Sparrow 2007; Lokhorst and van den Hoven 2012; Leveringhaus 2016; Müller 2016; Oimann 2023). In these discussions, it is assumed that their operators are “on” or “out of” the loop. If they are on the loop, the systems run autonomously but the operators can intervene if necessary. If the operators are out of the loop, the deployed systems are fully autonomous and follow their own programs and data. In both respects, the human-robot constellations discussed in this fashion anticipate scenarios that are still rare because, as of today, almost no fully automated military robots are being deployed. The Kargu-2 drone by the Turkish company STM appears to be a new exception.
In contrast, responsibility gaps with existing remotely controlled combat vehicles (RCCVs), such as the MQ-9 Reaper drone, have not been considered to a similar degree.\(^2\) For it is assumed that their pilots are “in” the loop, that is, that they control the system and everything it produces. Hence, most people think that the operators who deploy and steer RCCVs, such as aerial drones, can be held responsible for the consequences.
Since drones from the MQ series are still being used by various countries (e.g., the U.S., the U.K., or the Netherlands) and there is little literature on responsibility gaps with RCCVs, another consideration of responsibility seems justified. Even more interesting is the fact that, to my knowledge, no one has been held liable for drone strikes carried out without just cause, whereas ground troops are periodically convicted of war crimes. In the United States, for example, court cases have resulted in no convictions. What is also striking is that all of them were brought to court by civil parties only. Some cases of excessive force with RCCVs were eventually dismissed by referring to the “political question doctrine”, assuming that drone strikes must first be addressed politically (Kaufman and Diakun 2017, 120). A path to the merits has also been barred by state-secrets privileges.\(^3\) Conversely, a notable war crime that ended in prison time was the “thrill killings” by U.S. infantry soldier Pvt 1st Class Andrew Holmes in Kandahar, Afghanistan, for which he was sentenced in 2011 (Fallon 2011). Mentioning these examples, and legal liability more generally, may help us in what follows to better understand the apparent problem with RCCVs, since liability is also rooted in the attribution of responsibility.
Another reason to discuss responsibility allocation with RCCVs is to gain a better understanding of who would be the source of possible misconduct with RCCVs. In military ethics, it is generally accepted that no excessive force should be used to achieve military goals (Lazar 2016). If this standard is to have any force, however, one must be able to call out those who have created more harm and pain than can be justified, that is, those who are responsible. Considering responsibility with regard to RCCVs is therefore also important if we are to maintain military standards.
Having introduced some of the thoughts behind this paper, let me briefly sum up what follows: I discuss responsibility gaps by drawing on Alex Leveringhaus's Ethics and Autonomous Weapons (2016). Leveringhaus provides useful criteria that lead me to the conclusion that there are no responsibility gaps with RCCVs; for no realistic cases of unintended, uncontrolled, physical outcomes occur unbeknownst to their operators with contemporary RCCVs. However, I argue in the subsequent section that the legal disinterest in misconduct with RCCVs can still be explained by considering responsibility, for the complexity of RCCV deployment leads to responsibility protraction. Responsibility is protracted when so many stakeholders are involved that it becomes very difficult to pin down one responsible individual. Furthermore, accumulative actions might also create undesirable outcomes by allowing errors to slip in unbeknownst to all parties involved. Finally, “surgical attacks” may appear less significant in comparison to heavy bombing by aircraft and have therefore been mostly ignored.
In the last section, I discuss several measures that could help reduce possible misconduct: (a) RCCVs should be deployed as little as possible, (b) more effort must be spent on ascribing responsibilities, (c) conscientious engagement should be promoted among RCCV operators, with the aim of incorporating their views into decision-making processes, and (d) a new military convention should require operators to be stationed in those countries where RCCVs are being deployed, provided that their states have not officially declared war against the respective country where they carry out their missions.
Responsibility Gaps with Remotely Controlled Combat Vehicles
There are many theories of responsibility, but there is no uniquely accepted definition of what it means to be responsible. David Miller (2001), for example, maintains that there are roughly four types of responsibility. According to Miller, we can be causally (1) and morally (2) responsible, as well as responsible due to certain capacities that we might have (3), or because of our affiliation with a community (4). Briefly summarized: (1) If I damage your bike by kicking it, I am causally responsible for the damage. (2) If your bike gets stolen because I forgot to lock it after having borrowed it, I am morally responsible for your loss: I am not the actual source of the larceny, so I am not causally responsible, but I have neglected my duty to secure your possession while it was under my custody. For Miller, moral responsibility also implies the possibility of appraisal or blame. Causal responsibility, in turn, might not incur any blame or legal action because things can be done without intention. For example, I could damage someone's property while having a seizure. (3) Moreover, Miller argues that people can be responsible because of their capacities. Consider doctors, who may be responsible for helping the sick due to their knowledge and training. (4) Finally, our affiliation with a group might imply certain responsibilities owed to fellow members, such as an obligation to pay taxes because one shares a political community with them.
However, Miller's variations on responsibility are not sufficient for addressing responsibility gaps in human-machine interaction. His types focus mainly on concepts that we can apply to human action, but they are blind to whether humans act with or without machines, or within or outside complex machine systems. It therefore makes sense to enrich the concept of responsibility with another account that is better attuned to the human-machine action we find in RCCV deployments.
In this regard, it is worth considering Alex Leveringhaus's Ethics and Autonomous Weapons (2016). Leveringhaus suggests three conditions that must be fulfilled to allocate responsibility when machines are being used. A machine's operator is responsible for any physical output (1) if it is the result of the operator's intent, (2) if the machine's action happens to her knowledge, and (3) if it is under her control (Leveringhaus 2016, 78-79). For these conditions to hold, it is generally assumed that the machines are tools, extending or enhancing their operators' capacities. Where (1)-(3) do not apply and we observe physical output, it is reasonable to speak of a situation in which neither the operator behind the machine is responsible nor the machine itself, for it is but a tool. In other words, in such a situation we would be confronted with a responsibility gap.
At the outset, it is important to note that accidents might not be responsibility gaps in the sense I am trying to discuss here.\(^4\) If an operator loses control of her machine because of unforeseeable weather conditions, and if her machine crashes into a densely populated area, she might not be responsible for this outcome, or only in a reduced manner, even though some action with devastating results has occurred. This is so because it was arguably not in her power to alter the weather, nor to prevent the crash of her machine. We would excuse her if she had taken due care in preparing the deployment of the machine, by assuring its functionality, checking weather reports, following conventional security measures, and so on. And, in doing so, we would reach the conclusion that no one is to be ascribed responsibility, at least in the sense of culpability.
There are further scenarios in which machines could produce outcomes without violating Leveringhaus's conditions. For example, a machine could evade its operator's control and run against her intention when it is hacked or even repurposed by a third party (Leveringhaus 2016, 78). In such a scenario, we would not speak of any responsibility gap because the machine would follow someone else's intention and control. For example, the Australian company DroneShield Limited develops “Counter-Unmanned Aircraft Systems”, such as its DroneGun MK4, with the aim of disrupting signals to and from third-party vehicles, in the hope that the disrupted vehicle either lands on the spot or returns to its operator by default. In short, we may have situations where machines run against their operators' intentions and go out of control, but we can still find a way to avoid talking about responsibility gaps in these situations.
To press the issue, we need to consider whether RCCVs systematically produce unintended, uncontrolled, physical outputs, unbeknownst and unforeseeable to their operators. In this respect, Leveringhaus is not sure whether all three conditions need to be violated before we can speak of responsibility gaps, or whether the violation of two of them might already be sufficient. However, he thinks that the first criterion, intent, is necessary: only if RCCVs, such as armed drones, systematically ran against their operators' intentions could we speak of gaps when trying to ascribe responsibility. Imagine, for example, a machine that nudged its operator to always shoot at innocent people against her will.
Are there any situations in which responsibility gaps realistically occur with armed drones? Lack of intent (1) seems unlikely. If drone missiles kept hitting the wrong targets, this might be the result of a bug or of malicious programming; the drones would then be reprogrammed to fully serve the commands of their operators.\(^5\) Likewise, military drones would not simply fly in the direction opposite to the one commanded by the operator. If that happened, it could be a sign of interference by a third party. In any case, drones would ideally be stopped before further unintended action followed.
Let us examine the knowledge condition (2). We might want to ask: do contemporary RCCVs lead to indiscriminate killings unbeknownst to their operators? This seems unlikely as well. Although drones have been in use for more than a decade, they have not created a surge of massacres without the knowledge of those who steer them. As a result, we can conclude that drones, as far as we know, do not involve any unforeseeable physical actions.
That said, two arguments discussed with regard to unmanned warfare seem to press this issue. The first is the so-called Threshold Argument. According to this argument, military technologies, such as armed drones, might lower the threshold for states to enter war because their deployment is less costly in terms of soldiers' lives and money (Sparrow 2012; Galliott 2015). While I cannot go into a more detailed discussion of this argument, I would like to emphasize that it allows us to say that the deployment of armed drones has consequences that are not directly anticipated by their operators. In a similar fashion, it has been argued that asymmetric warfare can provoke unethical responses, such as guerrilla tactics or terror, because the technologically inferior side might have no other means to retaliate (Killmister 2008). Again, one can argue that RCCV deployment produces these outcomes in the long run, outcomes that would not be easily predicted.
One might thus conclude that RCCV deployment violates Leveringhaus's knowledge criterion, that is, that RCCVs could systematically produce long-term effects unbeknownst to their operators. However, it is far more difficult to prove for each individual RCCV deployment that these vehicles are the primary causal source of such effects. If a drone is flown to a terrorist outpost with the goal of eliminating it, the possible outcomes seem calculable. Additionally, pilots could introduce sufficient risk analyses, evaluating the chances of failure. They could also include rather unlikely events to estimate whether their drone deployment is justified. Eventually, provided the drone's telecommunication works properly, the operator would know whether the attack succeeds in damaging the terrorist's infrastructure or even in eliminating the terrorist; if not, she could always adapt its action. It appears that such a drone mission would not lead to any outcomes that could not be anticipated after careful consideration.
Turning to Leveringhaus's last criterion, control, we observe that contemporary military drones are not sophisticated enough to undermine their operators' control. If the connection is lost and they run out of energy, they land on the spot or crash. Some models are also equipped with the capacity to automatically return to their starting point whenever telecommunication with the operator is interrupted. As with many other machines, they require the operator to always be in the loop. Hence, everything contemporary RCCVs produce is controlled by their operators, unless auto-pilots and other automated control mechanisms are used extensively. In this paper, however, I want to focus on those RCCVs that are not yet automated. A discussion of automated military robots would entail many more issues concerning control, intention, and anticipation on the operator's side.
For the reasons given, it is difficult to claim that contemporary RCCVs systematically lead to responsibility gaps. If unintended, uncontrolled action occurs with RCCVs, unbeknownst to their operators, it is likely due to accidents or to insufficient foresight or control on the part of their operators or of those who deploy them, which makes the operators or deployers responsible. Putting such exceptions aside, action by RCCVs can indeed be linked to the decisions, control, and knowledge of their operators. If that is the case, namely, if there is no responsibility gap with contemporary RCCVs, one is further justified in concluding that their operators are responsible for whatever consequences the RCCVs might cause.
Responsibility Protractions with Remotely Controlled Combat Vehicles
Given the number of missions and lethal strikes that have taken place with RCCVs, predominantly with armed drones, chances are high that abuses have occurred. Why has there been no conviction? One explanation could be that the military suppresses any misapplication of drones. There might have been abuses and unjust drone strikes, but the misconduct has not been addressed within the military apparatus. A recent investigation by the New York Times, How the U.S. Hid an Airstrike That Killed Dozens of Civilians in Syria, points in this direction (Philipps and Schmitt 2021). While the article focuses on aircraft, it would make sense that similar attempts at hiding and neglecting misconduct within the military have happened with armed drones. But why, on the other hand, are infantry soldiers, such as Andrew Holmes, prosecuted and convicted for war crimes? Another explanation would be that states may knowingly and willfully engage in unjust and abusive conduct, overriding international law, to exercise their power without regard for legal consequences. The justifications for dismissing litigation involving armed drones in the U.S. would suggest such a position. Yet there are states that could be said to subscribe to this stance and that have not ratified the Rome Statute of the International Criminal Court, like the United States of America, but which still hold some of their soldiers responsible for excessive and unlawful force. Given these circumstances, I think it is possible to provide a third explanation for the lack of prosecution and conviction for unjust and abusive drone strikes.
This third explanation is that armed drones give rise to a phenomenon I would like to call responsibility protraction, which can muddy the waters of our ethical assessments and comprehensive legal proceedings. The idea of protraction is not novel. For example, it is extremely difficult to find the right addressees in large corporations when the business has been involved in criminal action (Di Nucci and Santoni de Sio 2016). The complex way in which responsibilities are shared within large institutions, and the way the minute acts of each employee add up to what the institution eventually “does”, make it difficult to pin down one person who is responsible for the outcome. This difficulty has also been called the Problem of Many Hands (Thompson 1980; van de Poel et al. 2012). Although one would assume that politicians or managers are the kind of persons in charge, examples of corporate scandals have shown that they can successfully exempt themselves from taking responsibility by claiming that things were done unbeknownst to them. On a similar note, we can assume that the deployment of armed drones happens within a complex institution involving many people: ground personnel fueling and repairing drones, pilots steering and targeting, officers and intelligence service agents discussing who will be killed next. If Jo Becker and Scott Shane (2012) are right, up to 100 individuals are involved in such debates, including generals, a minister of defense, or even the president, who might sign the final verdict to kill suspects with drones. In fact, there might be so many people involved that it becomes almost impossible to evenly distribute responsibilities among all stakeholders. Arguably, this is already the case with every military mission, and even more so as more sophisticated technologies are involved. But the crucial point is to be aware that RCCVs, such as armed drones, add a further layer to this situation.
Due to this complexity, we may not blame a drone pilot for unjust strikes in the same sense as a combat soldier. One can assume that the pilot is ultimately causally responsible for the strikes, which he sets off by clicking or pressing a button, as he is the last in the chain of command and hence physically involved in the action of the machine. Yet we can say with Miller (2001) that he is not morally responsible for the killing to the same degree as a soldier shooting her rifle would be, for the drone pilot has had less leeway in applying his own interpretations to the combat. We have seen that in contemporary drone warfare the decision about who is going to be killed involves many agents, and the final verdict to attack a target is taken out of the pilots' hands. To put it more bluntly, pilots might be just the final cog within the military complex, executing orders without much chance of assessing situations based on their own application of military standards.
Infantry soldiers, in contrast, seem to have more leeway as to what they will do and whom they will kill, if they deem it justified. Consider Sam Mendes's film 1917. Two British soldiers, Lance Corporal William “Will” Schofield (George MacKay) and Lance Corporal Thomas “Tom” Blake (Dean-Charles Chapman), are sent to the front to warn their troops of a German ambush. After passing the first trench, Will and Tom reach a small, abandoned farm behind the lines. While they search for German outposts, they observe an aerial battle that ends with a German plane crashing into the barn. Tom decides to help the severely wounded pilot, holds him in his arms, and gives him water from his own bottle. While Will continues his search, the next scene reveals how the German stabs Tom. The film shows us how differently soldiers can treat the enemy. While Tom shows mercy, the pilot decides to take another life along with his own. Finally, after Will discovers what has happened, he shoots the German.
What is also worth keeping in mind is that Tom and Will could not only spare the German's life but also had the chance to take him prisoner, which was likely Tom's intention when he decided to help him. Drone operators, on the other hand, do not have as many options as infantry soldiers. All they could do is disobey their orders entirely, by not flying and shooting with their drone. It is likely these individual circumstances that make it easier to identify infantry soldiers as the fully responsible source of outcomes, and which explain why they are more often put on trial for excessive force than RCCV pilots.
A further explanation for the muddying of responsibility allocation is that the complexity of RCCV deployment also increases the possibility of errors. There might be situations in which errors slip into the process of deciding which suspect should be killed with RCCVs, as well as into the execution of this decision, unbeknownst to all parties involved. Whether we should understand these cases as accidents for which no one can be held responsible, as with unforeseeable weather, or whether such cases hint at a systematic responsibility gap in RCCVs is open to debate. On the one hand, if such a situation were to happen, responsibility might be distributed in terms of moral responsibility, because it could be argued that the drone's operators failed to detect the error and therefore neglected their duty to oversee and care for the RCCV's deployment. On the other hand, I am inclined to say that if RCCVs repeatedly involve errors that remain unknown, Leveringhaus's second criterion for responsibility gaps is fulfilled. That is, RCCVs could potentially do something, because of undetected errors, that was unforeseeable to their operators.
Finally, the lack of prosecution of wrongful drone strikes could also be explained by looking at the bigger picture of wars. Armed drones allow “surgical” and small-scale strikes which do not appear as grave and obvious as, for example, erroneous and unjust bombings by aircraft. RCCVs can fly at lower altitudes than aircraft and thus much closer to their targets. This provides their operators with the opportunity to attack with precision and less collateral damage. Consider, in contrast, the case of the German Colonel Georg Klein, who mistakenly ordered the bombing of two fuel trucks in Afghanistan in 2009, an incident that led to approximately 100 deaths of innocent civilians. It resulted in an investigation by the public prosecution, although charges were eventually dropped. Compared with such damage, it is possible that drone strikes have simply not been adequately addressed. If considered firmly, however, (legal) responsibility for wrongful drone operations could and, of course, should be allocated.
To sum up, in this section I have argued that the deployment of RCCVs is prone to responsibility protraction. This is so because RCCVs are highly sophisticated technologies whose deployment requires many hands. For this reason, killings perpetrated with RCCVs appear to be qualitatively different from those perpetrated with conventional rifles by infantry soldiers. The complexity of remotely controlled machines seems to deprive military personnel of an individual application of their own ethical standards. Additionally, errors can happen without individual responsibility because they can slip in undetected. While the surgical effectiveness of RCCVs is arguably an advantage, it might be another reason why unjust drone strikes have rarely been prosecuted and have not led to any conviction.
Conclusion and Measures to Address Possible RCCV Misconduct
The previous analysis agrees with the established view that armed drones do not create responsibility gaps\(^6\); for there seem to be no realistic cases of unintended, uncontrolled, physical outcomes, unbeknownst to their operators, with contemporary RCCVs. However, there is evidence suggesting that RCCVs create other problems with responsibility, by protracting its allocation. If that is the case, RCCV deployments pose a problem for our ethical standards.
Just War principles and established military conventions, such as the Convention on Certain Conventional Weapons, are meant to prevent excessive force. But if unjust harm is being done with armed drones and no one seems to be fully responsible for it, it is difficult to hold someone accountable. Who should it be? The pilot? The officers or intelligence service agents who discuss suspects? The general? The head of state? This is an ethically undesirable situation because the responsible use of machines is not only achieved through good will on the operators' side, but also through sanctions for misconduct, which can only be applied if it is clear who has been responsible for the machine's output. If, however, no one can be held responsible for RCCVs, then there is no way to impose sanctions or take further legal action, and the deployment of RCCVs eventually invites excessive and unjust use.
How can this situation be ameliorated? While it seems that one should avoid, as much as possible, technologies such as RCCVs that diffuse responsibility allocations, this answer will only partly suffice because RCCVs are already being deployed by many states, and the chances that this will change are slim. Therefore, one amelioration would be to spell out as clearly as possible in what way and to what extent agents are responsible for RCCV strikes. This paper has already begun to point out the complexity of responsibility that arises with RCCVs, but further research, especially empirical research, is needed. We need to discuss how military institutions prosecute misconduct, how RCCV sorties are internally monitored, whether one pilot suffices for steering an RCCV, and whether pilots or superiors duly take responsibility for their deeds. This would require the militaries of many states to step up their internal monitoring.
At the same time, I believe that considering RCCVs on their own will reach its limits. If RCCVs are seen as part of a complex institution, a view that focuses mainly on individual responsibility is too narrow. In this regard, Luciano Floridi (2016) suggests a way of reversing our classical understanding of responsibility that deserves further attention, but on which I can only touch briefly. When considering actions, Floridi says, we should focus less on agents and their intentions and more on outcomes. The underlying idea is that if a network of human, artificial, and hybrid agents produces effects that are morally undesirable, then one way of addressing the issue is to introduce systematic changes, especially when it is unclear who or what has led to the network's outcome. For Floridi, this is best achieved through “soft legislation, rules and codes of conducts, nudging, incentives and disincentives” (Floridi 2016, 7). In other words, Floridi's approach tries to circumvent the scavenger hunt for individual responsibility by varying different systematic stimuli until the morally desired outcome for a network of agents is achieved.
Considering RCCVs, it would hence be reasonable to complement our strategy of assigning responsibility with additional strategies to disincentivize misconduct. We therefore need to ask ourselves: what systematic measures could help reduce possible unjust RCCV deployment?
One opportunity to reduce unjust RCCV strikes is to strengthen soldiers' and pilots' possibilities for “conscientious refusal to fight”, as Jeff McMahan suggests (McMahan 2009, 97). This could be achieved by giving pilots greater leeway to apply their own moral standards, for example by involving them in the process of deciding which target is to be hit. Because RCCV missions can often be prepared in advance, asking operators whether killing prospective targets is consistent with their moral views does not seem to conflict with the other practical requirements of a successful mission. In this way, operators, especially the pilots, would also have more grounds to object if they thought the killing of certain people was morally wrong. Ultimately, we would want to strive for a situation in which operators are as morally responsible for their actions with RCCVs as infantry fighters are in combat, to whom it appears easier to assign responsibility. If this is achieved, RCCV operators could take a greater share of responsibility than when they are only cogs in the military machinery.
Another problem with contemporary RCCV missions is that RCCVs are often deployed within the context of terrorist hunts that are not in any way part of a justified war. This points to an issue with RCCVs that we need to address, namely, the fact that they invite military action outside of our classical understanding of just and unjust wars, for example, by breaching the sovereignty of states where terrorists hide. Therefore, we may try to directly reinforce the principle of state sovereignty that has been weakened by RCCV deployments. We should require clear agreements between the deploying countries and those countries where RCCV missions take place.
To discourage quick and dirty missions, we could further demand that RCCV pilots be stationed in the country where they deploy their systems, provided that there is no official declaration of war against the country where they carry out their missions. For it is one thing to quickly fly through a foreign and often militarily inferior country, but quite another to station active military personnel abroad. Physically locating pilots in a foreign country where they undertake military strikes would also require that this country allow their presence, helping to reinforce state sovereignty. Additionally, this requirement would mean that disagreeing countries could more easily prevent deployments, for example by expelling RCCV pilots from their territory. In the traditional scenario of defending against a belligerent state, it appears neither feasible nor ethically imperative to deploy RCCV personnel abroad, as missions are likely to be accompanied by additional military action against the adversary and can be justified as a means of effectively halting its attacks. For instance, a nation defending itself may employ drone strikes to impede the adversary's military infrastructure.
Such an approach to promoting a specific outcome may prove effective if it were to become a mandatory requirement accepted by all nations in the spirit of war conventions. If the international community agrees that RCCVs lead to ethically undesirable situations, states might be willing to forgo certain RCCV missions in the hope of raising the overall legal and moral standards with which these systems can be deployed. At the same time, the requirement would still allow RCCVs to be deployed for defensive purposes. Having states agree on the use of certain weapons in this sense does not seem impossible if we think of the Convention on Certain Conventional Weapons, which prohibits or restricts the use of certain weapons, such as booby traps or incendiary weapons. Even in the absence of a formal agreement, states may wish to station their pilots in the areas where they deploy RCCVs to ensure a more responsible approach to these systems.
In sum, I am suggesting several measures to address possible RCCV misconduct: (a) the military should deploy RCCVs as little as possible to avoid responsibility protractions; (b) if RCCVs are being deployed, the military must ensure that responsibility can be allocated for each RCCV mission; (c) to raise the standards with which RCCVs are deployed, conscientious engagement among RCCV operators should be promoted, allowing them greater leeway to apply their own moral interpretations; and (d) states should station pilots in countries where their RCCV missions take place whenever there is no official declaration of war against these respective countries.
Footnotes
\(^1\) This paper is a revised and extended version of Chapter 3.3 of my master's thesis Drone Ethics: Duties and Responsibilities for Unmanned Aerial Combat Vehicles from 2020, which is accessible at the library of the University of Vienna, Austria.
\(^2\) One notable exception is Jai Galliott's ninth chapter on responsibility gaps in his Military Robots. Mapping the Moral Landscape (2015).
\(^3\) Al-Aulaqi v. Obama, in which these privileges played a role, was ultimately dismissed on different grounds (U.S. Congressional Research Service 2022, 27).
\(^4\) Thomas Simpson and Vincent Müller (2016) as well as Sebastian Köhler (2020, 3137) also defend this view.
\(^5\) The rise of amateur drone deployments (as was the case in Ukraine against the Russian military in 2022) increases the chance of bugs and flawed designs. But do amateur models lead to systematic responsibility gaps? To address this question, further information concerning actual deployments is required that is not yet available as of 2023. This paper focuses on RCCVs that have been produced by licensed companies, undergone professional testing, and which are maintained by professional mechanics.
\(^6\) That is, there are no responsibility gaps when we analyze armed drones based on Alex Leveringhaus's criteria (2016). Ibo van de Poel et al. (2012) also speak of the Problem of Many Hands in terms of responsibility “gaps”.
Bibliography
Becker, Jo, and Scott Shane. 2012. Secret ‘Kill List’ Proves a Test of Obama’s Principles and Will. The New York Times. https://www.nytimes.com/2012/05/29/world/obamas-leadership-in-war-on-al-qaeda.html (accessed May 6, 2023).
Coeckelbergh, Mark. 2020. AI Ethics. Cambridge, MA: MIT Press.
Danaher, John. 2016. Robots, law and the retribution gap. Ethics and Information Technology, 18(4): 299–309.
Fallon, Amy. 2011. US soldier jailed for seven years over murders of Afghan civilians. The Guardian. https://www.theguardian.com/world/2011/sep/24/us-soldier-jailed-afghan-civilians (accessed June 17, 2023).
Floridi, Luciano. 2016. Faultless responsibility: on the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society, vol. 374 (A): 1-13.
Galliott, Jai. 2015. Military Robots. Mapping the Moral Landscape. London: Routledge.
Kaufman, Brett Max, and Anna Diakun. 2017. United States Targeted Killing Litigation Report. In Litigating Drone Strikes: Challenging the Global Network of Remote Killing, eds. Andreas Schüller and Wolfgang Kaleck. Berlin: European Center for Constitutional and Human Rights (ECCHR): 118-133.
Killmister, Suzy. 2008. Remote Weaponry: The Ethical Implications. Journal of Applied Philosophy, vol. 25, no. 2: 121-133.
Köhler, Sebastian. 2020. Instrumental Robots. Science and Engineering Ethics, (2020) 26: 3121–3141.
Lazar, Seth. 2016. War. The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta. https://plato.stanford.edu/entries/war/ (accessed May 6, 2023).
Leveringhaus, Alex. 2016. Ethics and Autonomous Weapons. London: Palgrave Macmillan.
Lokhorst, Gert-Jan and Jeroen van den Hoven. 2012. Responsibility for Military Robots. In Robot Ethics. The Ethical and Social Implications of Robotics, eds. Patrick Lin, Keith Abney, and George Bekey. Cambridge, MA: MIT Press: 145-156.
Matthias, Andreas. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, vol. 6: 175-183.
McMahan, Jeff. 2009. Killing in War. Oxford: Oxford University Press.
Miller, David. 2001. Distributing Responsibilities. The Journal of Political Philosophy, vol. 9, no. 4: 453-471.
Müller, Vincent C. 2016. Autonomous Killer Robots are Probably Good News. In Drones and Responsibility. Legal, Philosophical and Socio-Technical Perspectives on Remotely Controlled Weapons, eds. Ezio Di Nucci and Filippo Santoni de Sio. London: Routledge: 67-81.
Nucci, Ezio Di and Filippo Santoni de Sio. 2016. Drones and Responsibility. Mapping the Field. In Drones and Responsibility. Legal, Philosophical and Socio-Technical Perspectives on Remotely Controlled Weapons, eds. Ezio Di Nucci and Filippo Santoni de Sio. London: Routledge: 1-14.
Oimann, Ann-Katrien. 2023. The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Philosophy & Technology (2023): 36:3.
The Organization for Security and Co-operation in Europe. 1994. Code of Conduct on Politico-Military Aspects of Security. https://www.osce.org/files/f/documents/5/7/41355.pdf (accessed May 6, 2023).
Philipps, Dave, and Eric Schmitt. 2021. How the U.S. Hid an Airstrike That Killed Dozens of Civilians in Syria. The New York Times. https://www.nytimes.com/2021/11/13/us/us-airstrikes-civilian-deaths.html (accessed May 6, 2023).
Simpson, Thomas W., and Vincent C. Müller. 2016. Just War and Robots’ Killings. The Philosophical Quarterly, vol. 66, no. 263.
Sparrow, Robert. 2007. Killer Robots. Journal of Applied Philosophy, vol. 24, no. 1: 62-77.
Sparrow, Robert. 2012. “Just Say No” to Drones. IEEE Technology and Society Magazine, Spring: 56-63.
Thompson, Dennis F. 1980. Moral Responsibility of Public Officials: The Problem of Many Hands. The American Political Science Review, vol. 74, no. 4 (Dec.): 905-916.
U.S. Congressional Research Service. 2022. The State Secrets Privilege: National Security Information in Civil Litigation (R47081, April 28, 2022) by Jennifer K. Elsea and Edward C. Liu. ProQuest® Congressional Research Digital Collection (accessed June 17, 2023).
van de Poel, Ibo, Jessica Nihlén Fahlquist, Neelke Doorn, Sjoerd Zwart, and Lambèr Royakkers. 2012. The Problem of Many Hands: Climate Change as an Example. Science and Engineering Ethics, vol. 18: 49-67.