A Moral Duty to Use UAVs

Disclaimer: Because I can only write with some measure of authority from the perspective of ethics and critical reasoning – my background is in philosophy generally and ethics in particular – I am setting aside issues related to national and international law. Whether or not law allows for drone killing is another issue worth investigating – and one that many legal scholars have tackled, but is not a question I address in the scope of this article.

Edit: See comments/responses below this post for an expansion of this post with regard to unintentional collateral damage (i.e. the killing of innocent bystanders), which is something I meant to write on when I began this post but forgot as I passed the 2k-word mark. I thank the commenter for prompting me to clarify my position on this relevant aspect of the topic.

—-

In this post, I’ll defend the notion that it’s morally obligatory to use unmanned aerial vehicles (UAVs) to kill people engaged in acts of terrorism. First, I’ll give some moral background to the notion that all people are deserving of respect – and that, this being the case, it’s always wrong to kill. Second, I’ll offer a toy example in which I tease out a moral principle about justified killing. Third, I’ll discuss some complications that arise as a product of the example. Then, I’ll draw some conclusions about what all of the aforementioned means for UAV attacks on those engaged in acts of terrorism.

One assumption common to modern (say, from the time of John Locke) ethics is that all people, in virtue of their humanity, are due a certain kind of respect. What “respect” amounts to is an issue philosophers have been addressing for the last 100+ years, and I don’t expect to solve it today. Usually, though, to treat someone with respect means to treat them as important by taking their ends (goals) to be valuable for oneself. So, if I respect you but disagree with your pursuit of a degree in philosophy, I should treat your goal as valuable to me – and encourage you as if it were my goal that you earn a philosophy degree. Usually, too, respect means not treating people badly – generally put. It means refraining from killing, torturing, enslaving, or otherwise abusing someone. I ought not to set aside your humanity in pursuit of my own ends. If I disagree with your lifestyle, I should not pass a law that prevents you from pursuing the life you wish to lead, unless your doing so would be radically disrespectful toward other people (if, for instance, your ‘lifestyle choice’ is to be a serial murderer). This is an admittedly vague and broad definition of respect, but it will have to do in this context.

Respect usually is all-or-nothing when it comes to extreme instances of potential disrespect. For instance, our moral duty to respect others clearly requires us not to kill them to satisfy our own personal aims. I cannot both respect you and murder you. For this reason, any form of murder necessarily is disrespectful, and therefore immoral. This is a commitment I endorse without regard to the consequences of any particular instance of killing.

Often, in the public forum, ‘murder’ is defined to suit the argument: Capital punishment is not murder, the argument in favor of the practice might claim; killing in self-defense is not murder, and so on. Murder is often defined as “unjustified killing.” So, if Y is justified in killing Z, Y has not committed murder and so is not morally blameworthy for committing the act of murder. (Y may be blameworthy for other reasons, but not for committing a murder).

Here is where things get particularly tricky. Often, a society seems to want to define a particular instance (or group of instances) so as to exclude it from the category of murder and include it in the category of justified killing. The purpose of this exclusion is to render a particular instance of killing morally acceptable.

I don’t buy this argument. It seems to me that justified killing is still killing, and, if we value a person’s humanity, ending that person’s life (against the wishes of that person) is murder, and is always morally abhorrent. This line of thought does not preclude a particular murder from being justified, however. As we will see, there are clear cases in which a particular killing can be both wrong and justified.

That said, something can be both morally wrong and still be the right thing to do. That is, what is right can be so in virtue of the fact that it is the least-wrong outcome in a given case. To take an example of a simpler case than the one I’ll be considering in this post, imagine the following scenario:

Tom and Jerry live in the same house. Tom is a cat and Jerry is a mouse. (I think you know this story). Let’s imagine that cats and mice are due the same sort of respect that humans are due. Tom tries, day in and day out, to kill Jerry. He does it because it is a kind of carnal desire for him. It’s his natural inclination – and he feels compelled to kill Jerry, for one reason or another. Let’s assume that Tom doesn’t need to kill Jerry for food – Tom is fed by his guardian, a human about whom we don’t need to know anything apart from the fact that he provides sustenance to Tom. So Tom doesn’t need to kill Jerry – he just wants to, albeit for what might be described as physiological reasons. Tom’s killing Jerry would be morally abhorrent. Tom’s refraining from killing Jerry would be morally good – but he won’t refrain. He’s made it clear that he’ll keep trying until he’s successful. And he doesn’t ask his owner for help in stopping him from attempting to kill Jerry (in the form of what philosophers sometimes call a “pre-commitment device”).

Imagine that an all-knowing, all-powerful philosopher has the power to end Tom’s life (against Tom’s wishes) and spare Jerry an undesired end to his own. It would be wrong for the philosopher to do this, but it would be more wrong to allow Tom to continue in his efforts to kill Jerry, because Tom could, however unlikely his physiological urges make this course of action, refrain from killing Jerry. Tom has not given up his cat-hood, in virtue of which he is due respect equal to that due to Jerry (and all people). So, if the philosopher kills Tom, he treats Tom disrespectfully, and so commits a moral wrong. However, if the philosopher wants to do what is most right under these circumstances, he must kill Tom.

My claim here is that killing Tom is better than allowing Tom to kill Jerry, given that we know Tom is working doggedly toward that end. Killing Tom is wrong, but it is less wrong than standing by when, by killing Tom, we can prevent him from killing Jerry.

I’ve chosen this toy example for its clear manifestation of the claim I’m supporting in this article: That using drone strikes to kill what are reasonably termed “enemy combatants” and their associates is wrong, but is less wrong than allowing those people to carry out their plans to destabilize governments, kill civilians, and generally cause worldwide pandemonium. My argument for this point of view is consequentialist in its method, but it also accepts the non-consequentialist foundational assumption that killing is always wrong. I accept that killing someone can be justified, but that does not make it morally acceptable.

The phrase “for the greater good” has led to many abuses and to the tyrannical treatment of millions of people over the course of just the last century (setting aside the countless millions more who suffered under the misuse of this aim over the whole course of history). However, that does not mean that a truly greater good does not exist. Ending a life carries a great moral cost, but saving many lives is worth a great deal more. This raises an obvious question: How many lives must be saved to justify ending one life?

Before I answer this question, consider a major problem with the Tom and Jerry example above: the fictional case exists in a vacuum (moral and otherwise). For instance, if it were possible for a person to be all-knowing, the example would have wider instructional value. However, this is not possible. We all are limited in what we can know, even if we have reports, briefings, etc. at our fingertips. So our decision, were any of us in charge of making such a call, is quite unlike that made by the philosopher in the example. Another effect of the idealized example is that it leaves out of consideration the social and psychological consequences of a hand-from-above striking down an aggressor. In truth, if the government had this power over its citizenry, the other citizens would not be able to live normal, self-determined lives. They would live in terror that they, perhaps, would be the next to be struck down in order to serve what might be called the greater good. The consequences would be extreme, and so the greater good almost never would be served by the government’s choosing to end this or that particular individual’s life. The problem is that our government does have this power, despite the fact that it lacks the ability to calculate reliably the cases in which its use is justified.

So, while the Tom and Jerry example sets forth a principle about justified killing in the name of the truly greater good, it is likely that the conditions satisfying the principle will almost never obtain in the real world. The risk of causing unacceptable consequences for our global society is immense.

Remember that, above, I set aside the question, “How many lives must be saved to justify ending one person’s life?”

It seems that this is a question we can set aside quite reasonably if we establish as a baseline that terrorism, on the whole, does not cost lives alone. If those who engage in acts of terror were to be protected from targeting by drones or other military actions, the consequences would be dire for nations (plural), for global economic security, for global peace (between nations), and for global stability more generally. These consequences may sound extreme, but, if anything, I am wildly understating the costs of unchecked global terrorism.

So, when the question (“How many lives…?”) involves terrorism, the answer is that the question does not find application. If a government recognized by the other governments on earth and by the society that enacts that government has the ability to stop terrorism by engaging in warfare (e.g. by killing those involved with terrorism), the consequences under consideration include not lives – but entire civilizations. Protecting modern civilization, no doubt, is a greater good worth serving.

Furthermore, if an agent’s specified ends include destabilizing governments and societies and undermining the peace of mind of the rest of humanity (those who do not share that agent’s aims), that agent’s aims ought not to be considered valuable. It is legitimate not to value the aims of an agent who aims to terrorize other members of humanity. Stopping that person from realizing his aims is part of what is required to treat the rest of humanity – those he aims to terrorize – with respect. Our obligation to treat with respect those members of humanity whose aims include radical disrespect of other members of humanity is diminished.

There’s some concern about the technical superiority of UAVs, perhaps setting drone strikes apart from traditional methods of warfare. In a 2010 article, Thomas J. Billitteri wrote the following:

Drone technology itself is astonishing in its capacity to reconnoiter and kill. In the case of the Predator and its even more powerful brother, the Reaper, controllers sit at computer consoles at U.S. bases thousands of miles from harm’s way and control the aircraft via satellite communication. With the ability to remain aloft for long hours undetected on the ground — Predators can fly at altitudes of about 50,000 feet — the planes can do everything from snap high-resolution reconnaissance photos of insurgents’ vehicles to shoot Hellfire missiles at them.

A secret archive of classified military documents controversially released in July [2010] by the group WikiLeaks revealed the lethal power of the Predator. As reported by The New York Times, in early winter 2008 a Predator spotted a group of insurgents suspected of planting roadside bombs near an American military outpost in Afghanistan. “Within minutes after identifying the militants, the Predator unleashed a Hellfire missile, all but evaporating one of the figures digging in the dark,” The Times said. “When ground troops reached the crater caused by the missile, costing $60,000, all that was left was a shovel and a crowbar.”

A 2011 Illinois Law Review article tells a similar story:

Drones represent a summit in long-distance killing. From the Neolithic spear, to the bow and arrow, to artillery, to the airplane, to the cruise missile, advances in weaponry over the millennia have made it easier and safer to kill from great distances. Drones, combined with suitable missiles, have taken this process to its logical extreme.

Though the point I’m about to make was not the focus of these articles, consider the following. Characterizing unmanned warfare in this fashion often is used to distinguish modern forms of warfare from older forms. The purpose of this distinction generally is to support an argument that we ought to apply a different class of moral expectations to the new kind of warfare. The point usually is: “Look how easy it is to take a human life – this is not like the usual methods of conducting a war.”

Sometimes, the only feasible means of stopping such a person is to use a drone to kill that person. These circumstances obtain when using other means of stopping that person would put many other lives at risk. The risk to the human actors who would otherwise carry out the kind of targeted killing that drones perform is substantial – and, if it can be avoided, it ought to be. Those actors’ aims and lives are worth saving, as it is their goal to prevent terrorists from committing acts of atrocity. Therefore, the use of drones instead of direct human action is not only acceptable; it is obligatory, where it is an option. The fact that it is easier, broadly speaking, than sending human combatants to accomplish the same task counts for the practice, not against it.

All that established: If an authorized agent of a government can establish to a sufficiently high level of certainty that a person has the intent and the means to commit acts of terrorism, and the agent has the option of killing that person, the agent ought to do so. As Radsan and Murphy argue in their article, the burden of establishing a sufficiently high level of certainty should be a high bar to clear. We have not yet reached a determination regarding this sufficiency criterion.

That said, if this agent of the government has the ability, by use of UAVs, to take the life of someone who reasonably can be expected to engage in acts of terror, and in so doing can spare other members of humanity calamity, the agent has a moral obligation to do so. It is wrong to kill – even to kill those engaged in acts of terrorism – but it is less wrong to kill in these cases than it would be to refrain from killing.

—-

Resources:

[Edit: A few days after I published this post, Justin Caouette (of Univ. of Calgary) directed me to an article in The Guardian from Thursday, Aug 2nd, 2012, describing Bradley Strawser’s view, which is strikingly similar to my own. You can also read Strawser’s self-penned article in The Guardian from August 6th, 2012.

Bradley Jay Strawser (2010): Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles, Journal of Military Ethics, 9:4, 342-368]

Thomas J. Billitteri, “Drone Warfare: Are Strikes by Unmanned Aircraft Ethical?” The CQ Researcher 20.28 (2010): 653-676.

Afsheen John Radsan and Richard Murphy, “Measure Twice, Shoot Once: Higher Care for CIA-Targeted Killing,” University of Illinois Law Review 2011.4 (2011): 1201-1241.

Joanne K. Lekea, “Missile Strike Carried Out with Yemeni Cooperation — The War Against Terrorist: A Different Kind of War?” Journal of Military Ethics Vol. 2, Issue 3 (2003): 230-239.

See also – a recent blogger taking on the ethical issues: http://dronewarsuk.wordpress.com/2012/08/19/reflecting-on-the-recent-rash-of-writing-on-morality-and-drones/

About Steve Capone

Interested in Domestic and Foreign Policy, Ethics, and Political Thought. One-time adjunct instructor and current full-time educator of small humans. Europhile, historophile, & bibliophile. M.S. Philosophy (Univ. of Utah 2013) M.A. Humanities (Univ. of Chicago 2007)

10 Responses to A Moral Duty to Use UAVs

  1. Abandon TV says:

    1. In the real world drone attacks hit the wrong targets all the time, or are simply not very precise, resulting in the murder of civilians. (lots of civilians)

    Imagine if another country was flying armed drones over your head 24/7 with the intent to use them to murder people on the ground. You are aware that your life and your family’s lives are permanently at risk of being ‘caught up in the crossfire’, and that you could be murdered accidentally by a drone strike at any moment. Would you not feel extremely terrorised by this ever-present risk of annihilation by another nation’s government (with effectively no accountability on their part and no legal recourse either)?

    2. RE your cartoon analogy…… It needs to be pointed out that in the cartoon the premise is that Tom gets sustenance and shelter from his human owners in return for protecting the food in the kitchen from any thieving mice. If Tom fails in his role as mouse catcher he will be viewed by his owners as a waste of cat food and threatened with being thrown out onto the street (this often happens in the old cartoons). Therefore it seems that reality is a little bit more complicated than surface appearances. And let’s not forget that Tom never does actually kill Jerry (or any mice), therefore he is not yet guilty of murder himself.

    It seems more than a little harsh to murder poor Tom considering the difficult situation he is in. I think this is a great analogy because the reality of this cartoon is far more complicated than appears on the cartoon-like surface. Real life is also like this.

    Of course (just as with the real world) Tom and Jerry has changed quite a bit in recent years…

    3. Seeing as how Tom has not actually committed the crime of murder in 50 or so years, if someone were to find out about your plans to murder Tom then (by your own argument) they would be justified in killing you to prevent this. Do you agree?

    And therein lies the problem (the hypocrisy) of ‘murdering for peace’….or “murdering to prevent murder” …..or “initiating violence on others in order to prevent them from initiating violence on others”….. one becomes the very thing one is pretending to be in opposition to.

    We might ask if drone attacks really do help to solve (ie eradicate) terrorism… or do they just ensure your team maintains an unfair advantage in the game of terrorism? (ensuring no other team can score any goals against you). In the game of football, if you consistently ‘annihilate’ those trying to score points against you, you end up at the top of the league. In the game of terrorism doesn’t the same apply?

    Terrorism is defined in my dictionary as “the use of violence and intimidation in the pursuit of political aims.”

    This is interesting when you consider that everything a government does from taxation at home to wars of empire overseas is an example of ‘the use of violence and intimidation in the pursuit of political aims’.

    As Stefan Molyneux so astutely points out….“…the terrorist is therefore in the eye of the beholder…”

    Both governments AND terrorists claim that their use of violence, intimidation and even mass murder in pursuit of their own political aims is justified, whereas any other individual or group’s use of such tactics is not justified. In terms of morality (rather than, say, consensus) how do we judge these claims, and can one of these identical claims be any more valid than the other – if so, how?

    4. A rapist is someone who has committed rape. A terrorist is someone who has committed acts of terrorism. A ‘suspected terrorist’ or a ‘suspected rapist’ are people you suspect have also *already committed* those crimes. A distinction needs to be made here between ‘suspected terrorist/ rapist’ and ‘potential future terrorist/ rapist’ (ie someone who has not yet committed the crime in question).

    Anyone who is in possession of a gun or a boxcutter knife or some rat poison (or whatever) and has a strong political or ideological world view can be considered a potential future terrorist. That’s going to be a significant proportion of any population.

    Anyone in possession of some rope or a knife or some duct tape who has strong sexual urges and fantasies of sexual power play can be considered a potential future rapist. Again, that’s going to be a significant proportion of any population.

    In today’s world these kinds of people are increasingly likely to be (mis)labelled as ‘suspected terrorists’ or ‘suspected rapists’ – as if someone’s suspecting that you might potentially commit some crime in the future were itself a crime (ie thought crime).

    Also it must be pointed out that any political ruler who has strong political or ideological world views (including strongly opposing the policies of other political rulers), and who has also amassed an arsenal of weaponry, fulfils the criteria of a potential future terrorist …….or, to use this disturbing modern parlance, a ‘suspected terrorist’.

    5. If you combine a public acceptance or even support of drone murder with public acceptance or even support of the murder of ‘suspected terrorists’ (who are really just potential future terrorists) what you have is a situation where the government can justify killing anyone it pleases. Already we see that drones are now being deployed in the US and in European countries. All combined this could easily lead to a government simply ‘picking off’ those citizens who they view as being in opposition to their policies. Opposition to a government policy = anti government = potential terrorist = suspected terrorist = a terror threat = a legally justified murder AKA ‘neutralising suspected terrorists’.

    After all, this is a pattern which has repeated throughout history: An enemy is identified overseas. The government becomes highly militarised and violent towards this perceived threat. Soon this violence is turned against the government’s own civilian population.

    “Those who don’t learn from history are doomed to repeat it”
    “Live by the sword, die by the sword”
    “war always comes home”
    etc.

    • Steve Capone says:

      I really appreciate your comments. Thanks for reading. You’ve made me think a little more carefully about all of this. Because I have kind of a lot to say in response, I’ll tackle each point here – one at a time.

      Re: Item 1

      You bring up a very important point that I overlooked in my initial post. I actually was thinking at the time of its writing, “I need to deal with the collateral damage / casualty issue,” and then I completely forgot to add my take on that issue. The problem with applied ethics, for better or worse, is that there are so many real-world implications to consider that doing applied ethics is never a matter of simply finding a principle and then applying it universally – because the world will always give us reason to adjust that principle in response to real-world consequences (a variant of Rawls’ “reflective equilibrium” concept). That said, I’ve made it a mission of mine in this blog to deal with real-life applications of philosophy – so, if I set this aside, I’d be remiss.

      So here’s what I was/am thinking about this issue.

      First. I acknowledge that collateral damage is practically unavoidable in many cases, if we choose to target those we identify as terrorists. Collateral damage, as I understand it in this context, includes civilians who, to date, have committed no crimes against humanity and are not engaged with the plotting of or preparations for any such criminal acts.

      Second. Collateral damage should, of course, be avoided wherever feasible. There are going to be problems in the details of sorting through what “feasible” means in this context. However, I think I can at least specify clear cases of feasibility and infeasibility. For the former: If the targeted person is someone we have reason to expect will be away from innocents (as defined above) with any kind of predictable regularity, we have a moral obligation not to use UAVs when that person is surrounded by innocent civilians. With regard to the latter: If we have human intelligence sources in the field and we only get one actionable opportunity to kill the main target, and that opportunity carries with it some level of certainty that innocent civilians will be harmed, then avoiding collateral damage is not feasible.

      Third. So – I’ll set aside cases where it’s feasible to avoid collateral damage (for instance, UAV strikes on training camps and hideouts) and focus only on cases in which it’s infeasible to avoid such damage. If the manner in which I’ve characterized the risks involved with letting terrorism go unchecked is even in the realm of reasonable expectation, then that collateral damage is the necessary, though morally high, cost of preventing the much higher moral costs of risking those consequences. Paying that high moral cost is worth it, because it prevents the much higher moral cost of refraining from killing the known terrorist and thereby allowing crimes against humanity.

      With regard to your case – “put yourself in the innocent civilians’ shoes,” more or less – I would be just as innocent as they are, and the cost of my life would be worth mitigating the risks that the people with whom I find myself (through no fault of my own, perhaps) pose to worldwide stability and humankind generally. My life is not worth that of hundreds of thousands (or many millions, given the right/wrong circumstances) of other lives of potential victims that would be at risk if I am not (unintentionally) killed as a consequence of the government’s targeting my associates.

      I have a feeling that your first point goes further than this, however – and is perhaps more important than the last consideration. You’re also challenging me to consider whether or not UAV warfare is another form of terrorism. I have to claim here that the two cases (UAV attacks carried out by a government vs. regional terror-driven organizations) simply are not comparable – that is, the analogy is bad. And I suspect that the bad analogy comes from your assuming too broad a definition of the word “terrorism”.

      During World War II, B-17 bombers ‘terrorized’ (to borrow your usage in this context) German civilians with regularity. Many of those civilians had nothing to do with the war effort, just as, perhaps, many of the civilians near terrorist training facilities or camps have no direct ties to those facilities or the activities taking place therein. The B-17 bombers were creating terror. But that fact, by itself, does not make the operators of those bombers or those issuing orders in the allied command ‘terrorists.’ That is, at least, not as we use the word today – and, more importantly, not as I’m using it to describe the class of people whom we have a moral obligation to kill. The B-17 bombers were an incidental aspect of the US government’s attempts to preserve a stable, global society. The obligation on our part today to prevent terrorists from operating anywhere in the world is even more severe than was our obligation during that war, due to the increased interconnectedness of the global community and the increased destructive powers of the weapons to which they can gain access. The major difference between our attacks and theirs is that the purpose of our attacks is to increase stability and peace, while theirs is to cause instability, disorder, and terror. There are other major differences, mostly having to do with the goals and methods of the actors on either side of this analogy.

      The only relevant similarity between US-UAVs and what I’m calling terrorist actions is that both cause people to feel terror, and this is the heart of my disagreement with the analogy you suggest. Possessing the trait “X causes terror” is not what makes a terrorist a terrorist. If it were so, then snakes, drunk drivers, and anyone about whom a person happens to feel terror is also a terrorist. This definition is unacceptable because it includes much too broad a class of people and non-people. That is, because it’s overly broad, it’s not a useful definition – it doesn’t pick out any particular thing in the world, but rather, picks out many (or most?) things in the world.

      The collateral costs in the bombing campaigns in 1943-45 were astronomical. But we have convinced ourselves that it was morally worth it, and I share this intuition. If I am either a radicalized militant working against the interests of civilization at large and perhaps my own nation as well, or am unlucky enough to be an innocent civilian living near those who fall into that class, I am like the innocents in Germany in 1944. I am a high cost to prevent the exacting of a much higher cost.

    • Steve Capone says:

      Re: Item 2 (and the top of Item 3)

      Because I tailored the example to draw out the features that were most relevant to the moral dilemma I was addressing, I’d ask you here to set aside any details about the way you think about my example that you gather outside of my characterization. If it helps, consider for these purposes that it’s not Tom and Jerry but rather Smith and Jones.

      This is a strange aspect of contemporary philosophy – we set out examples and tailor them much the way chemists set out experiments in a manner that enables them to isolate particular elements in a reaction that they aim to observe. In this case, I only included the details that I included to isolate the morally relevant factors. This may not feel right to you, because of the baggage the scenario I describe seems to bring along with it. If you like, we could invent a new scenario that is almost identical to the one I suggest, but with the characteristics that it shares with the cartoon emended.

      Along these lines – That the actual (fictional) Tom has not succeeded in murdering Jerry in 50 or so years is not relevant to the example I describe.

      That said – has Tom and Jerry (the real, fictional cartoon) changed recently? No more attempted homicides?!

    • Steve Capone says:

      Re: Item 3

      “[In] ‘murdering for peace’….or “murdering to prevent murder” …..or “initiating violence on others in order to prevent them from initiating violence on others”….. one becomes the very thing one is pretending to be in opposition to.”

      On the grounds I’m defending these instances of killing, I do not claim that the murder is morally good. I only claim that the murder is both justified and morally obligatory. The reason is that committing these acts of murder prevents much greater harms to society (much) more broadly.

      Because the justification I’m offering here does not rely on the notion that “we’re killing to prevent them from killing because killing simply is wrong” – but rather on weighing altogether the consequences of killing the terrorists vs. not killing them – my view doesn’t lead us to a place where we become the very thing we oppose.

      If my view were that we were doing some kind of moral justice (in terms of retribution), then I’d run into the difficulty that you describe. But because I’m doing a kind of moral calculus here, considering consequences of killing vs. not-killing, this particular worry doesn’t find application.

      “We might ask if drone attacks really do help to solve (ie eradicate) terrorism… or do they just ensure your team maintains an unfair advantage in the game of terrorism? (ensuring no other team can score any goals against you). In the game of football if you consistently ‘annihilate’ those trying to score points against you, you end up at the top of the league. In the game of terrorism doesn’t the same apply?”

      We are getting into considerations of data. If it turns out that I’m wrong – that what’s at risk if we do not kill those who intend to commit atrocities is not so terrible as I suggest, then my argument surely will fail. But I don’t think I’m wrong, and I also believe that there’s plenty of evidence to support my claims about the potential consequences of allowing these individuals and organizations to proceed unmolested. You can either buy this or not – if you don’t think there’s a risk to allowing what I’m calling terrorist groups and individuals to continue, then I’m not going to try to convince you. It’s a matter of empirical data and predictions of likely outcomes based on that data, and I have to be honest – I’m not going to take the time to present that data.

      Terrorism is defined in my dictionary as “the use of violence and intimidation in the pursuit of political aims.”
      This is interesting when you consider that everything a government does from taxation at home to wars of empire overseas is an example of ‘the use of violence and intimidation in the pursuit of political aims’.
      As Stefan Molyneux so astutely points out… “…the terrorist is therefore in the eye of the beholder…”
      Both governments AND terrorists claim that their use of violence, intimidation and even mass murder in pursuit of their own political aims is justified, whereas any other individual or group’s use of such tactics is not justified. In terms of morality (rather than, say, consensus) how do we judge these claims, and can one of these identical claims be any more valid than the other – if so, how?

      We’re getting into more definitional disagreements here. I believe, as I argued above, that the definition of “terrorist” and “terrorism” is narrower than merely “possessing the trait that causes a person to feel terror”. Likewise, here, I might find disagreement with the definition. We surely engaged in psychological warfare during WW2 (I use this example again because it’s the clearest one I can think of right now) that caused terror. We had political aims, among other aims. That political aims were among our aims did not make our actions those of a terrorist organization. We also had aims that focused on human dignity, world peace, and social order. Also, we did not target civilians to cause terror. Terror was not our goal, but merely an incidental element of what we found to be necessary to win that war. Similarly, when we’re talking about terrorist groups today, their main goals are not so morally admirable as human rights, individual and economic freedoms, social stability, and world peace. Their main tactic is terror. They target civilians purposefully. Again, comparing nation-states whose goals include morally laudatory aims, and which cause terror only incidentally, to groups whose major method is the use of terror to achieve morally despicable aims – the analogy is mistaken.

      I’m not sure how Molyneux argues for his claim that terrorism is in the eye of the beholder, but I reject the conclusion (and so I reject it as a premise of further argument), and so your last point in Item 3 is one that I reject as well. Once we define what a terrorist is, as I’ve done in a kind of fill-in-the-blank way in these responses and in this blog post, it’s simply false that whether X meets those conditions is ‘in the eye of the beholder’. Either the conditions are met or they are not.

      To answer your last question – I would appeal to the moral consequentialist argument I’ve been offering all along. That both sides see it one way does not mean that there’s a stalemate regarding the facts of the matter. If one side has a compelling case in favor of its claims that its aims are moral and its means are as moral as possible, then that side has the better case. I believe the US Government has the upper hand in this regard, as you might gather from my earlier responses and the original post. That the other side disagrees does not count as evidence in favor of its position.

    • Steve Capone says:

      Re: Item 4

      The distinction between “suspected/potential terrorists” and “terrorists” is worth considering. I think, again here, that the weight of the consequences of making a mistake about the target’s intentions will fall on the side of moral caution – that is, on being overly cautious. In this case, moral caution would have us kill the suspected or potential terrorist, even at risk of being mistaken. That said, I think I addressed this above – the burden is extremely high in determining who and what is an appropriate target for UAV strikes. Simply having some fertilizer in the bed of one’s truck doesn’t make one a suspected terrorist. If, on the other hand, one has purchased a ton of fertilizer, contacted a bomb-maker for instructions about making a bomb, rented a large truck, falsified documents to gain access to a high-risk target (a government building or a nuclear facility), talked to his peers about killing civilians to make a point, and so on – this person is properly thought of as a likely terrorist. It comes down to establishing likelihood, I suppose: the facts surrounding how likely a given person is to carry out a terrorist act are what determine whether or not that person is a viable target. Of course, the circumstances I’ve described here (and that you pointed to above) are those that are likely to occur on US soil, and so not the kind of place where we are conducting drone attacks (or are ever likely to, it seems).
      Instead, the circumstances are going to be different. In tribal Pakistan or in Yemen, we’re likelier to be dealing with a case in which a person has been attending terrorist training and educational facilities, meeting with bomb-makers directly, arranging for the purchase of weapons, etc. The burden of proof should be high, as I said above, but, so long as it is met, and a person’s intentions can be read with a high degree of probability, we should think of the use of UAVs in these cases as acts of defense of stability, peace, and the like. To meet the conditions that I think extend to us a moral obligation to use UAVs to kill terrorists, reliable and extensive intelligence is required.

      I’ve set aside the rape analogy because it is not apt, I think. While the rapist does terrible damage to the victim and his or her familiars (though indirectly), the terrorist does damage to society on a vast scale (per my above comments). There aren’t enough relevant, shared features such that comparing the two is legitimate in this context.

    • Steve Capone says:

      Re: Item 5

      I do not assert that public acceptance equals moral obligation (or permission, for that matter). Public acceptance, more likely than not, is often (though not always) irrelevant to moral obligation and permissibility. The public, we have seen over the course of history, often is a bad judge of what is right.

      If you accept the burden of proof (regarding likelihood) argument from the original post, combined with the principle I identified in the Tom and Jerry (or Smith and Jones, if you like) case, then there’s no way that the consequence about which you’re concerned here will occur. That a misapplication of an idea is possible is no argument against that idea, so long as the misapplication isn’t likely. I believe that, with the right institutional requirements, we can prevent misapplication. Civilian/public oversight, insofar as national security allows, may be one way of preventing misapplication (per the suggestion of the Illinois Law Review article I noted in the original post) – but there are many other safeguards that I’d recommend besides this one.

      Last – I’m not sure about the use of drones for strikes inside the US. I haven’t heard of that one yet. I don’t find it to be likely, though we will be using drone technology for other purposes – and those uses are outside the scope of this post, so I have to set this issue aside.

    • Steve Capone says:

      One final thought – not directly in response to your original comments:

      We have an ethical obligation (apart from the one I discuss in this blog post) to do whatever we can to change the circumstances that produce militant radicals. I’m largely a behaviorist, and I believe that our environment has a lot to do with who we are and what we do (and what we do *is* who we are).
      As I said above, killing is always wrong.
      So, we also have a moral duty to do whatever we can to alter the circumstances in which we’re ethically obligated to use UAVs to conduct strikes to kill terrorists.

      Accepting this conclusion may mean that we are morally obligated to use economic, humanitarian, and soft-diplomatic means (among other modes of effecting change) to change the cultures that produce people who mean to do the world harm through terroristic acts.

  2. jonpeto says:

    Abandon TV sets his sights on outright refuting you. My aims are less ambitious. I mostly have internal questions.

    You begin by discussing the distinction between justified killing and murder, which is a distinction between an act that is morally permissible and one that is impermissible. This distinction makes sense to me. But then you go on to say,

    “I don’t buy this argument. It seems to me that justified killing is still killing, and, if we value a person’s humanity, ending that person’s life (against the wishes of that person) is murder, and is always morally abhorrent. This line of thought does not preclude a particular murder from being justified, however. As we will see, there are some clear cases in which a particular case of killing can be both wrong and justified.”

    I’m not clear what your stance is on the distinction between justified killing and murder. Are you rejecting the distinction? Are you claiming that all forms of killing are murder? If so, then I would like to see your reason for denying the distinction.

    Like Abandon TV, I’m going to pick on the Tom and Jerry example. However, there are some points here that haven’t been discussed yet which I think are relevant. In the Tom and Jerry example, does Tom see the killing of Jerry as wrong? This is not clear because you say, “Let’s imagine that cats and mice are due the same sort of respect that humans are due.” From whose perspective? Only from the human perspective or also from Tom and Jerry’s? Even when a dog attacks a human, we don’t hold the dog responsible. Dogs don’t know the moral consequences of their actions. I think Tom needs to have some kind of awareness for his actions to have moral worth (at Tom’s level). And if this is the case, then why can’t he understand that Jerry deserves some respect? If we can work out whether Tom has the capacity to know what he is doing is wrong, then I think this might change some of our intuitions on what we should do with him.

    Lastly, I haven’t formed an opinion on whether drone strikes are morally permissible. Your post hasn’t convinced me yet because I already know this argument – we utilize drones for protection because it leads to the saving of lives, even though there is some collateral damage. I think the consequentialist argument is the argument that our military uses to justify its actions.

    • Steve Capone says:

      Thanks so much for reading and for your comments, Jon. I value your feedback.

      First, regarding the distinction between (different instances of) killing and murder

      I appreciate this criticism. My prose was definitely sloppy here. Let me try to sort out what I’m getting at, at this point in my post.

      First, I should clarify that I am denying the distinction, but not exactly in the way it seems from the way you’ve read my post (and perhaps the way I’ve written it).

      Often, we define the word ‘murder’ in a way that is contrived to suit a circumstance. The motivation for defining the word in this way usually is to exonerate an act committed by the person offering the definition, on the grounds that the act is an instance of morally acceptable killing.

      I don’t think it’s useful to argue whether a particular act of killing is or is not murder. I think it makes more sense to speak in terms of justified and unjustified killing, and recognize that both kinds of killing are morally abhorrent. When X deprives Y of his or her life, the act is morally abhorrent, whether or not the act of killing in that case is justified.

      So we ought to do away with arguments that try to pack a circumstance neatly into the category ‘murder’ – or remove that circumstance from that category. If we talk instead in terms of justified and unjustified killing, we can do away with a whole layer of complication.

      It’s a separate issue whether or not something can be morally abhorrent and also be the right thing to do. I’m claiming in this piece that the two are not mutually exclusive.

      Second, regarding the Tom & Jerry example

      In the Tom and Jerry example, does Tom see the killing of Jerry as wrong? This is not clear because you say, “Let’s imagine that cats and mice are due the same sort of respect that humans are due.” From whose perspective? Only from the human perspective or also from Tom and Jerry’s? Even when a dog attacks a human, we don’t hold the dog responsible. Dogs don’t know the moral consequences of their actions. I think Tom needs to have some kind of awareness for his actions to have moral worth (at Tom’s level). And if this is the case, then why can’t he understand that Jerry deserves some respect? If we can work out whether Tom has the capacity to know what he is doing is wrong, then I think this might change some of our intuitions on what we should do with him.

      My inclination here, which aligns with my intuition in the initial post, is to say that it’s irrelevant whether or not Tom sees killing Jerry as wrong. That the consequences are unacceptable – that Tom can refrain (or employ a pre-commitment device to help him to refrain) from killing Jerry, and that he does not choose to refrain – this justifies stopping Tom by whatever means necessary, including killing Tom to prevent him from killing Jerry.

      A problem that you’re picking up on here is that I’ve anthropomorphized Tom and Jerry in a way that wasn’t entirely clear. Despite asking the last commenter not to import anything from his/her imagination into the example whose bounds I set precisely, I only implied something that I wanted to be a part of the example and was not explicit about it. What I had in mind was to ascribe a kind of human agency to Tom and Jerry. That’s why I wrote in terms of choices, actions, and motivations. So this is my fault, in terms of sloppiness. I intend for both characters to have whatever we might include in a concept of human agency.

      Does this clarify my position on the example? Tom ought to understand that Jerry deserves respect, but he does not (for whatever reason), and he acts on that failure to regard Jerry as worthy of respect in the most extreme way possible: ending Jerry’s life and foreclosing all future possibilities that Jerry might pursue his own aims. This justifies the philosopher-on-high killing Tom.

      Last –

      I haven’t formed an opinion on whether drone strikes are morally permissible. Your post hasn’t convinced me yet because I already know this argument – we utilize drones for protection because it leads to the saving of lives, even though there is some collateral damage. I think the consequentialist argument is the argument that our military uses to justify its actions.

      I think, really, that it comes down to this: either one is or is not convinced by consequentialist arguments justifying morally abhorrent actions. Using UAVs to strike at terrorist targets is morally abhorrent, but it is also the right thing to do. This is the most likely candidate for the military’s defense of its actions, but, to my knowledge, the military hasn’t offered any arguments in defense of its behavior – at least, not officially. This is the route it ought to go, however, because other arguments simply won’t work to justify its behavior.

  3. Steve, thanks for the post – excellent discussion. More importantly, thanks for attempting to clarify your position (though I think Jon and Abandon are absolutely right in many respects). I just wrote a response that went over 900 words without being done, so, if you’d like, I’ll email it to you – just let me know. For now, and for the sake of brevity, I’d like to pose only one quick barrage of questions.

    In your opening para you say “I’ll defend the notion that it’s morally obligatory to use unmanned aerial vehicles (UAVs) to kill people engaged in acts of terrorism”. You proceed by doing just that, trying to defend that it’s not only permissible but obligatory to use UAVs against perpetrators of terrorism (the term terrorism is still vague to me, but that’s another discussion). It’s morally obligatory to kill these terrorists because of the consequences: we save more lives by killing fewer (sounding eerily similar to the trolley problem, huh?). But it seems like you’re trying to combine a deontological and a consequentialist approach, and I’m not sure I follow. Isn’t killing the terrorist to save lives using the terrorist (a person) as a means to an end? You say that it’s always wrong to kill (a Kantian-type claim), but then you say that it’s morally obligatory to kill (when you net more lives – clearly a utilitarian claim). How can we be both wrong and right at the same time? Moral obligation implies that the act is right, right? Also, aren’t these two ethical notions a source of tension?

    Now, I know you want to say that morally abhorrent does not equal wrong. But what makes that morally abhorrent act better than alternative actions? An obligatory action is one that is best in all possible worlds. So, killing the terrorist is best (because obligatory), but why is it best? Attempting to kill the terrorist might enrage the other radicals and further entrench the ideals that are governing their commitment to terror in the first place, right? We don’t know the long-term effects that killing this one terrorist may have, so how is it that the clearly (by your own standards) morally abhorrent act of killing the terrorist is right? It seems that we would need complete predictability (I’m thinking Minority Report, kind of?) to justify the UAVs. Or, would you be using some sort of probability theory? If the latter, then this opens you up to all kinds of weird policy implications, such as the killing of innocents (for their organs) to promote overall prosperity and health. Or, the probability that someone from a lower-income neighborhood is more likely to be dealing drugs – am I therefore justified in racially or socially profiling them? How do you gauge the right time to act with a particular probability? Whatever happened to allowing one to change their mind before they act? (For instance, in the film The Shawshank Redemption, Andy Dufresne (Tim Robbins) plans to kill his wife: he loads the gun, sits outside her house, and waits for her to go to sleep. He has every intention of killing her, but, at the last moment, he doesn’t.) Considering the way you’ve argued, is it fair to say that you think Andy Dufresne should have been shot and killed by police even though he didn’t commit the crime? If the obligation to kill the terrorist exists prior to the act being acted out, then it seems that your view commits you to prepunishment. Or, have I missed something?
