The context for this is the dilemma (there are many forms of it) in which someone is faced with a choice: actively save five lives while causing a single death (not their own), or refuse to act and thereby allow five deaths while not causing the death of the one person. In essence, the idea is that if you do nothing you will passively allow an evil which you could only prevent by causing evil to a smaller number of people.

We had some interesting back and forth discussing how the situation might change depending on whether or not we knew some or all of the people who would be affected. The general consensus of the class was that so long as you don't actually know any of the people involved, killing the one person to save the five would be the right thing to do. This changed, however, if you knew the one person. Not everyone agreed, but the majority was in favor of allowing the five deaths if the only alternative was killing someone you knew. I asked if this might be because we knew the value of the one person but did not know the value of the five. They agreed that it was, and went so far as to say that if we knew the one person to be a very bad person, we would be justified in causing that person's death in order to save the lives of five people whose value was undetermined. This was when the idea struck me. I wrote the following equation on the board:

**a** + **b** + **c** + **d** + **e** =, <, or > **x**

In this equation, **a**, **b**, **c**, **d**, and **e** are the five people who will die without intervention, and **x** is the person who will die if we act. I will suggest that if people are assigned worths varying between, say, 1 and 100, and we don't know the actual value of each person, it makes sense to assume that

**a** + **b** + **c** + **d** + **e** > **x**

On the other hand, if we know the value of **x**, say because we are related to him, we can give it an actual value, say 95. With this value plugged in, it becomes relatively less likely that **a** + **b** + **c** + **d** + **e** will have as much value as that, especially to us, given that we don't know them. In fact, using this sort of ethics math, the most unlikely outcome is

**a** + **b** + **c** + **d** + **e** = **x**

Run the numbers and it is clear that the most likely outcome is

**a** + **b** + **c** + **d** + **e** > **x**
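To make "run the numbers" concrete, here is a minimal Monte Carlo sketch in Python. The uniform draw from 1 to 100, and the value 95 for the known person, are the essay's illustrative assumptions, not a real measure of anyone's worth:

```python
# Monte Carlo sketch of the "ethics math" above: how often does the
# combined worth of five unknown people exceed the worth of the one?
import random

random.seed(0)  # fixed seed so the sketch is reproducible
trials = 100_000

# Case 1: all six values unknown, each drawn uniformly from 1 to 100.
sum_wins_unknown = sum(
    sum(random.randint(1, 100) for _ in range(5)) > random.randint(1, 100)
    for _ in range(trials)
)

# Case 2: x is known to be 95 (the person we are related to).
sum_wins_known = sum(
    sum(random.randint(1, 100) for _ in range(5)) > 95
    for _ in range(trials)
)

print(f"P(a+b+c+d+e > x), x unknown: {sum_wins_unknown / trials:.3f}")
print(f"P(a+b+c+d+e > x), x = 95:    {sum_wins_known / trials:.3f}")
```

Notably, on this toy model the sum of five random worths still almost always beats even a value of 95, which suggests the reluctance to sacrifice someone we know is less about the arithmetic and more about the "especially to us" part above.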

Now, as logical as all of this may be, I think that while the structure is sound, the whole process suffers from one basic flawed premise: an individual's worth does not vary between 1 and 100 on any scale. In point of fact, each individual's worth is infinite, and when we plug that into the equation it becomes clear that

**a** + **b** + **c** + **d** + **e** = **x**

is the only possible outcome. After all, any number of infinite values, when added together can only equal an infinite value:

**∞**+ ∞ + ∞ + ∞ + ∞ = ∞
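Python's floating-point arithmetic follows the same rule for `math.inf`, which makes the point easy to check (a toy illustration of the arithmetic, not a proof about moral worth):

```python
# Infinity absorbs addition: any finite number of infinite values
# summed together is still just infinity.
import math

people = [math.inf] * 5   # a, b, c, d, e, each of infinite worth
x = math.inf              # the one person, also of infinite worth

print(sum(people) == x)   # the sum of five infinities equals one infinity
```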

So this is the idea: it is impossible to "calculate" the outcome of our moral decisions regarding individuals, precisely because each and every individual is of infinite value. We must therefore find some other way of deciding what is right and what is wrong. At the least, this seems to work as one more reason I am not a utilitarian. What do the rest of you think?