Wednesday, March 30, 2011

Bad Things Happen When Philosophers Do Algebra

I was about halfway through an ethics class last week and we were working through an ethical dilemma I had posed for the class. In the process of our discussion I was struck by an idea which I presented briefly to my students but would like to flesh out a little bit here. I am certainly interested in your response for this post because if you can convince me that the idea will hold some water I may rework it into a full paper. Keeping in line with some of my previous posts, this idea revolves around the worth of individuals.

The context for this is the dilemma (there are many forms of it) in which someone is faced with a choice between actively saving five lives at the cost of causing a single death (not their own) or refusing to act, thereby allowing five deaths but not causing the death of the one person. In essence, the idea is that if you do nothing you will passively allow an evil which you could only have prevented by causing evil to a smaller number of people.

We had some interesting back and forth discussing how the situation might change depending on whether or not we knew some or all of the people who would be affected. The general consensus of the class was that so long as you don't actually know any of the people involved, killing the one person to save the five would be the right thing to do. This changed, however, if you knew the one person. Not everyone agreed, but the majority was in favor of allowing the five deaths if the only alternative was killing someone you knew. I asked if this might be because we knew the value of the one person but did not know the value of the five. They agreed that it was, and went so far as to say that if we knew the one person to be a very bad person, we would be justified in causing that person's death in order to save the lives of five people whose value was undetermined. This was when the idea struck me. I wrote the following equation on the board:

a + b + c + d + e   =/</>   x

In this equation a, b, c, d, and e are the five people who will die without intervention and x is the person who will die if they act. I will suggest that if people are assigned a worth varying between, say, 1 and 100, and we don't know the actual value of each person, it would make sense to assume that
a + b + c + d + e   >   x
On the other hand, if we know the value of x, say because we are related to him, we can give it an actual value, say 95. With this value plugged in, it becomes relatively less likely that a + b + c + d + e will have exactly as much value as that, especially to us, given that we don't know them. In fact, using this sort of ethics math, the most unlikely outcome is:
a + b + c + d + e   =   x
Run the numbers and it is clear that the most likely outcome is:
a + b + c + d + e   >   x
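"Running the numbers" here can be sketched with a quick simulation. Note that the 1-to-100 scale, the uniform distribution of worth, and the function name `prob_five_exceed` are all assumptions made for illustration, not anything established above:

```python
# A rough sketch of the "ethics math" above, assuming (for illustration only)
# that each unknown person's worth is an independent uniform random integer
# between 1 and 100, and that the known person x has a fixed worth of 95.
import random

def prob_five_exceed(x, trials=100_000, seed=0):
    """Estimate the probability that a + b + c + d + e > x."""
    rng = random.Random(seed)
    wins = sum(
        sum(rng.randint(1, 100) for _ in range(5)) > x
        for _ in range(trials)
    )
    return wins / trials

print(prob_five_exceed(95))
```

Under these assumptions the five unknowns sum to about 252 on average, so they almost always outweigh even a highly valued single person, while exact equality (a + b + c + d + e = x) is rare indeed.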

Now, as logical as all of this may be, I think that while the structure is sound, the whole process suffers from one basic flawed premise: an individual's worth does not vary between 1 and 100 on any scale. In point of fact, each individual's worth is infinite, and when we plug that into the equation it becomes clear that
a + b + c + d + e   =   x
is the only possible outcome. After all, any number of infinite values, when added together, can only equal an infinite value:
∞ + ∞ + ∞ + ∞ + ∞   =   ∞

So this is the idea: it is impossible to "calculate" the outcome of our moral decisions regarding individuals precisely because each and every individual is of infinite value. We must therefore find some other way of deciding what is right and what is wrong. At the least, this seems to work as one more reason I am not a utilitarian. What do the rest of you think?


  1. id be interested to see how people would respond to this question if it were someone really powerful or influential or rich like bill gates pitted against 5 "regular people". or someone really good looking. like something that you mentioned in your previous post as to what people would view as "a very valuable trait" like immense talent or model-like good looks. would that affect the outcome? or even someone especially evil in the group of 5? i know i would think twice before killing albert einstein over 5 others

  2. That was definitely an important factor for my class; once they knew something about one of the people they immediately oriented their decisions around that fact - they killed "bad" people (and the people with them) and saved "good" or useful people. What struck me is that they were willing to do this without knowing anything about the rest of the people. I think this is symptomatic of having a 1-100 value approach rather than an infinite value approach.

  3. Yes, I think when ethicists do the math, they are considering practical value and not eternal value. As you alluded to in a previous post, there is a difference, since we all have value based on simply being human, even though some of us are clearly better humans than others. I agree these kinds of equations seem heartless.
    I think it would be ethical to let the people make their own decisions. One person might sacrifice himself. Or the group would find solace and comfort in going down together. Or they might prove their salt and go down fighting. It's nobody's business, even a trained professional's, to make the decision of life and death for others.