utilitarianism


Postby PWrong » Sun Jul 02, 2006 6:03 am

I want to develop some kind of mathematical formalisation of utilitarianism. I've thought about going to another forum to try this, like a philosophy forum, but I think we're more used to undertaking large projects like this.

First, I'll see what everyone's opinions are on utilitarianism. Basically, the idea is that the only moral imperative is to maximise utility (i.e. maximise pleasure and minimise total suffering). What I want to find out is the mathematical way to do that.

I'd also like to consider an abstract universe in which entities have "utility" and "morality" in the same way that an electron has a "position" and a "spin".

Postby wendy » Sun Jul 02, 2006 9:33 am

One must understand that "utility" is a relation between disjoint things: that one can use X to understand/manage Y.

The measure of the utility of X against Y is then a token of the number of times X is invoked against Y, and the extent to which X is done.

For example, complex numbers are a certain kind of entity. It is not so much that we can particularly visualise sqrt(-1); the world could get along without them.

The utility of complex numbers is that they replace a large number of otherwise separately learnt trigonometric relations with simple multiplication. They are, for example, used very heavily in electromagnetism, where a sinusoidal wave is written in terms of a complex number, e.g. A cis(wt+a), where a and A are constants. Reactance and impedance act as if they were complex resistances.
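
For instance, the angle-addition relations collapse into a single multiplication of cis values; a minimal Python sketch (purely illustrative, not part of the physics) of that replacement:

Code:
    import cmath, math

    def cis(x):
        # cis(x) = cos(x) + i*sin(x) = exp(i*x)
        return cmath.exp(1j * x)

    a, b = 0.7, 1.2

    # The separately learnt relations: angle-addition formulas done with trig.
    cos_sum = math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b)
    sin_sum = math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b)

    # The same result from one complex multiplication: cis(a) * cis(b) = cis(a + b).
    product = cis(a) * cis(b)
    print(cos_sum, sin_sum)
    print(product.real, product.imag)  # the same two numbers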

Of course, the insights into things happen pretty much when all of the segments or elements are present. That is, insight is more a case of being belted over the head by the idea rather than a subtle flash of vision splendid (although these also occur).

It is not so much complex numbers themselves that lead to their use in electromagnetism, but more the visual implications of the Argand diagram. That is, the validity of complex numbers for describing the trigonometric equations comes more from someone having worked this out on a diagram.

It should be recalled that there are many ways there: one can study the physics in terms of quaternions, or completely avoid them. Either way is suitable, but to change between them is not.

It is also useful to recall that much of our thinking and ideas are connexions between things, rather than things themselves. For example, "love" might have a specific meaning, but it is not implemented in the real world as a set of actions: "I love you when I ...." does not, by the meaning of love, have its blanks filled in. Even though we know what love is, the path from the real world to love is one that different individuals make of their own accord.

Even when we write, all we give is words: the pylons of the bridge of ideas. That you see the pylons does not mean you see the decking: you are never able to see anyone else's train of thought: what you see are the results, and it is left to you to model what is happening on the bridge.

Utility of an idea is but one of these spans. There is nothing intrinsic about creating a connexion between ideas; the utility of it is whatever meaning one gives to these things.

Wendy

Postby PWrong » Sun Jul 02, 2006 4:11 pm

Maybe I should have defined utility. Utility is a measure of pleasure versus suffering. It looks like you interpreted it as "usefulness".

I don't think the word utility is as abstract as love. Anyway, both can be measured, theoretically. They're both caused by a collection of chemicals in the brain.

Postby bo198214 » Sun Jul 02, 2006 6:45 pm

I think utilitarianism is contradictory anyway.
It always depends for whom you want to maximize pleasure (I use 'pleasure' here to also mean all the other things to be maximized).
What is best for me is not best for another person, another family, another nation, or all of mankind.
What is best for my family is not best for other persons, families, nations, or all of mankind.
What is best for my nation ... and so on.


What if my personal pleasure can be so high that it weighs as much as the whole pleasure of an entire nation ... is the consequence then to maximize my pleasure (because the sum becomes maximized ...)?!

Postby jinydu » Sun Jul 02, 2006 11:21 pm

bo198214 wrote:It always depends for whom you want to maximize pleasure


My understanding is that utilitarianism, at least as advocated by Mill and Bentham, strives to maximize pleasure for all of mankind. This appears to be the most common position, and the one that was taught in my high school ethics class. However, this is not to say that some people do not adopt variants of this view that strive to maximize pleasure for some other group. For example, some animal rights activists believe in maximizing pleasure for all "animal-kind" (a larger group than mankind), while some politicians believe (or at least claim to believe) in maximizing pleasure for their nation (a smaller group than mankind).

bo198214 wrote:What if my personal pleasure can be so high that it weighs as much as the whole pleasure of an entire nation ... is the consequence then to maximize my pleasure (because the sum becomes maximized ...)?!


According to standard utilitarianism, yes. But that is a very big if; one that is extremely unlikely to occur in practice. One of the consequences of utilitarianism is that in general, the larger the number of people affected by a moral decision, the less important a single person becomes.

Note that I am not arguing for utilitarianism here; my opinion is that it does have some unsavory features. I'm just explaining what it states.

Postby PWrong » Mon Jul 03, 2006 2:55 am

bo198214 wrote:It always depends for whom you want to maximize pleasure (I use 'pleasure' here to also mean all the other things to be maximized).

You should maximise the total pleasure over the entire system, assuming you have some way to add it all up.

bo198214 wrote:What if my personal pleasure can be so high that it weighs as much as the whole pleasure of an entire nation ... is the consequence then to maximize my pleasure (because the sum becomes maximized ...)?!

Pleasure doesn't increase linearly; I think it's probably logarithmic. Getting $10 isn't ten times as good as getting $1, but giving $1 to each of ten people is.
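
As a rough sketch of that diminishing-returns idea, here assuming (purely for illustration) a logarithmic utility function u(x) = ln(1 + x):

Code:
    import math

    def utility(dollars):
        # Hypothetical diminishing-returns utility of money; the log form is an assumption.
        return math.log(1 + dollars)

    one_person_gets_ten = utility(10)      # ~2.40
    ten_people_get_one = 10 * utility(1)   # ~6.93
    print(one_person_gets_ten, ten_people_get_one)

With any concave function like this, spreading the money over ten people gives more total utility than giving it all to one person.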

jinydu wrote:For example, some animal rights activists believe in maximizing pleasure for all "animal-kind" (a larger group than mankind)

Well, I'm one of them, personally. My criterion for applying utilitarianism to an entity is its capacity for utility: if something can feel pain, it shouldn't have to. Generally, you maximise the total utility of the system. This is why I want to consider a simpler system that doesn't have the annoying complexities of our universe.

jinydu wrote:while some politicians believe (or at least claim to believe) in maximizing pleasure for their nation (a smaller group than mankind).

That doesn't necessarily contradict maximizing pleasure for all mankind. A father takes care of his own children before others, because he assumes that other children have their own guardians to take care of them, and that if he doesn't care for his own children, no one else will. That doesn't mean other children aren't equally important. Similarly, if you have 100 politicians and 100 countries, the best way to maximise utility is to assign each country one politician.

http://en.wikipedia.org/wiki/Utility explains utility with vectors. It takes more of an economic perspective though.

Postby jinydu » Mon Jul 03, 2006 4:52 am

PWrong wrote:That doesn't necessarily contradict maximizing pleasure for all mankind.


It doesn't necessarily contradict maximizing pleasure for all mankind. All other things being equal, a happier populace in country A leads to a happier mankind overall. However, this is not necessarily the case. For example, it may be in the best interests of country A to invade country B, plunder all of country B's resources and enslave country B's populace. While this may increase the pleasure for country A, it could end up decreasing the pleasure for mankind. Furthermore, if different moral decision-makers try to maximize happiness for different systems (for example, country A and country B), their decisions may be mutually contradictory. Thus, I think that in order for utilitarianism to give consistent answers, all moral decision-makers have to agree on both the definition of the system and the method for measuring pleasure.

As you pointed out, the Wikipedia article deals primarily with economic utility rather than moral utility. This is probably not a coincidence; economic utility is far easier to measure quantitatively. However, Jeremy Bentham, one of the founders of utilitarianism, believed that pleasure is quantifiable and attempted to develop a system (known as hedonistic calculus) to actually quantify it:

http://en.wikipedia.org/wiki/Felicific_calculus

Admittedly, I haven't fully represented the views of the founders of utilitarianism here. For instance, John Stuart Mill also wrote that "Over himself, over his own body and mind, the individual is sovereign", and that people should have the greatest possible liberty, so long as it does not interfere with the liberty of others. This is a deontological principle (something is inherently right or wrong), in contrast to utilitarianism, which is a teleological principle (something is right or wrong because it leads to good or bad results).

Postby PWrong » Mon Jul 03, 2006 6:35 am

jinydu wrote:For example, it may be in the best interests of country A to invade country B, plunder all of country B's resources and enslave country B's populace. While this may increase the pleasure for country A, it could end up decreasing the pleasure for mankind.

That's exactly why most politicians don't do that. The ones that do clearly aren't utilitarians :lol:. To use the father example, a parent doesn't steal toys from other children and give them to their own kid.

jinydu wrote:However, Jeremy Bentham, one of the founders of utilitarianism, believed that pleasure is quantifiable and attempted to develop a system (known as hedonistic calculus) to actually quantify it.

I've seen the felicific calculus, but I don't think it's sufficient. It's reasonably easy to measure both economic utility and moral utility. All you need is a utility function that turns a bunch of stuff into a real number. But I want to also measure the morality of a person. This is a bit more difficult.

At any given time, you can take actions that may affect the utility of several people for an extended period. How much an action affects them at any later time depends on what you do to them, how they are feeling at the time, and how much time has passed.

Suppose that if you had done nothing, their utility would be U<sub>0</sub>(t). We want to measure their actual utility U(t), and the morality of your action, which we can define as M(t) = U(t) - U<sub>0</sub>(t).

Now, suppose that one person has U<sub>0</sub>(t) = 0, and your action brings it up to U<sub>1</sub>(t). The morality for that action is M<sub>1</sub>(t) = U<sub>1</sub>(t) - U<sub>0</sub>(t). Now you perform the same action on a different person, whose U<sub>0</sub>(t) is not zero. Their U(t) is not equal to U<sub>0</sub>(t) + M<sub>1</sub>(t), because utility doesn't stack linearly. So morality isn't just a function of time; it has to be some kind of operator on the function U<sub>0</sub>. So this might get very complicated.
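
A minimal sketch of the simple additive case (the example utility curves below are invented; the non-linear "operator" version would need more machinery):

Code:
    def morality(U, U0, times):
        # M(t) = U(t) - U0(t): actual utility minus the utility had you done nothing.
        return [U(t) - U0(t) for t in times]

    # Hypothetical curves: the baseline of doing nothing vs. an action whose benefit fades.
    U0 = lambda t: 0.0
    U = lambda t: 5.0 * 0.5 ** t

    print(morality(U, U0, range(5)))  # [5.0, 2.5, 1.25, 0.625, 0.3125]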

Postby PWrong » Thu Aug 17, 2006 8:52 am

I want to see if we can apply utilitarianism to a very simple problem. Suppose you are stuck in a fight to the death with someone. One of you has to die. Either you fight and have a chance of winning, or you give up and allow yourself to be killed.

Now, the question is: under what circumstances are you morally required to give up and die? I'm tempted to believe that you should always stand and fight. Self-sacrifice is often a good thing (giving your life to save others), but you rarely hear about someone giving their life to save a person they're fighting. Still, I'm not sure that fighting would always maximise utility.

We could go more general. Replace the fight with any kind of conflict, and "death" with some loss in utility for you or other people.

These are our parameters:
p = The probability of you winning the fight.
u<sub>1</sub> = The amount of utility you can gain from the world by being alive.
u<sub>2</sub> = The utility your opponent can gain.
v<sub>1</sub> = The utility you can cause in the world.
v<sub>2</sub> = The utility your opponent can cause in the world.

In the fight to the death scenario, I think the solution is trivial. If u<sub>1</sub> + v<sub>1</sub> > u<sub>2</sub> + v<sub>2</sub>, then you should fight.
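
A minimal sketch of that rule, reading "fight" as a gamble with probability p and "give up" as a sure outcome (the expected-value framing is my own reading of the setup):

Code:
    def should_fight(p, u1, v1, u2, v2):
        # Expected total utility of fighting: win with probability p and the world
        # keeps you (u1 + v1); otherwise it keeps your opponent (u2 + v2).
        fight = p * (u1 + v1) + (1 - p) * (u2 + v2)
        give_up = u2 + v2  # giving up leaves your opponent alive for certain
        return fight > give_up  # reduces to u1 + v1 > u2 + v2 whenever p > 0

    print(should_fight(p=0.3, u1=10, v1=20, u2=15, v2=5))  # True, since 30 > 20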

In a more general scenario, some variables may be interdependent. Also, it can be argued that you don't know how good you are until you fight. In some cases, winning might prove that you have more to offer the world than your opponent. In that case, you should always fight.

Any thoughts, or ideas on how we could formalise this?

Postby Keiji » Thu Aug 17, 2006 12:35 pm

I had an idea when I was reading your post.

Suppose that in this scenario, you can either give up and die, leaving your opponent unscathed; or attempt to fight and lose, leaving your opponent badly hurt and you dead; or fight and win, leaving your opponent dead and you badly hurt. Your opponent will never give up.

So there are 3 possibilities, and we should have a utility value for each of them... ;)
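
Something like this, perhaps (the numbers are pure placeholders for whatever utility values we settle on):

Code:
    # Hypothetical utilities for the three possible outcomes.
    outcomes = {
        "give up: you die, opponent unscathed": -100,
        "fight and lose: you die, opponent badly hurt": -130,
        "fight and win: opponent dies, you badly hurt": -80,
    }

    best = max(outcomes, key=outcomes.get)
    print(best)  # picks whichever outcome has the highest utility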

Postby PWrong » Sun Aug 20, 2006 3:29 pm

To be even more general, we could have variables for "effort" and "ability". Rather than a probability of winning, we have a probability distribution across the utility scale. So for any given amount of utility U, the probability of attaining that utility is a function of effort, ability and U.

For instance, suppose the competition is over money. The best case scenario is that you win everything your opponent owns. The worst case is that you lose everything you own. Call the amount of money you win M. e = effort and a = ability. We can let 'e' range from 0 to 1. Then p(M,e,a) is a probability distribution over M, with parameters e and a. Note that:
Integral p(M,e,a) dM = 1.
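
A minimal sketch of one such distribution, here taking the density over winnings to be a normal curve whose mean shifts with effort and ability (the particular shape and scaling are assumptions, and SciPy is just a convenient way to check the normalisation):

Code:
    from scipy.stats import norm
    from scipy.integrate import quad

    def p(M, e, a, stake=100.0):
        # Hypothetical density over money won M; effort e in [0, 1] and ability a
        # shift the mean toward winning more. A bounded game would want a
        # truncated distribution, but the idea is the same.
        mean = stake * (2 * e - 1) * a / (a + 1)
        return norm.pdf(M, loc=mean, scale=stake / 4)

    # Check the normalisation: Integral p(M, e, a) dM = 1.
    total, _ = quad(lambda M: p(M, e=0.7, a=2.0), -1000, 1000)
    print(total)  # ~1.0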

Now, as you pointed out, if the choice is between losing and hurting your opponent, or losing by giving up, it's technically better to give up. I say technically, because other factors might come into play (it might be a good thing to teach him a lesson).

Postby Keiji » Sun Aug 20, 2006 3:38 pm

I didn't understand most of that post. :(

Postby PWrong » Mon Aug 21, 2006 12:31 pm

Sorry, I forgot you probably haven't learnt about probability functions yet. Basically you pick a real number at random, but some numbers are more likely than others. You have a probability density function p(x): for a very small a, p(x) times a is roughly the probability of landing between x and x+a. But it doesn't matter. You'll come across these soon, and you'll probably hate them like I do :lol:. We don't really need them here, it's just another generalisation.
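
For instance (an arbitrary made-up density, just to show what p(x) times a small width means):

Code:
    import random

    # A simple density on [0, 1]: p(x) = 2x, so larger numbers are more likely.
    def p(x):
        return 2 * x

    x, a = 0.5, 0.01
    estimate = p(x) * a  # probability of landing in [x, x + a), roughly 0.01

    # Check by sampling: the square root of a uniform number has exactly this density.
    samples = [random.random() ** 0.5 for _ in range(100_000)]
    frequency = sum(x <= s < x + a for s in samples) / len(samples)
    print(estimate, frequency)  # both close to 0.01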

Postby PWrong » Sat Aug 26, 2006 1:08 pm

It seems in most cases the probability of winning is irrelevant. It's possible the problem could be reduced to "should I commit suicide or not?". It's easy to show that the answer is no, except possibly when sacrificing yourself to save others. If you were, say, Hitler as a child, then it might be better to commit suicide than grow up to be an evil dictator, but it would be even better to grow up not to be an evil dictator.

Postby bo198214 » Sun Aug 27, 2006 9:01 am

Hugh wrote:The problem is that, on a day-to-day basis, most people are only looking out for themselves and their immediate family for short-term gains, and not worrying about the long-term greater good of the whole.


Oh, that's a very good point. Over what time range does utilitarianism operate?
Is it not only the sum of the pleasure of all people, but also the integral over all time until the universe collapses? ;)
For example, would it be feasible to kill all of humankind except a few families, so that nature can regenerate and their descendants can enjoy a paradisiacal life in blossoming nature?

Because prediction is unreliable or impossible, it then becomes really difficult.

Postby Hugh » Sun Aug 27, 2006 9:38 am

There is also another aspect to consider. What if a superior race of aliens arrives at our planet and says "Great, you're Utilitarians too! Then surely you must understand that our superior race of 10 billion people needs to take over your nice planet and use you as food."

Would you agree we should allow this for "the greater good"?

Postby PWrong » Sun Aug 27, 2006 10:16 am

That quote is from the thread about starting a religion :?.

bo198214 wrote:Is it not only the sum of the pleasure of all people, but also the integral over all time until the universe collapses?

Yes, you have to integrate over all time. You'd have to set it up somehow so that the integral converges, or at least so that the difference between two integrals converges (so that you can compare two actions).
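
A minimal sketch of comparing two actions that way (the exponential fade-outs are only assumptions made so the integrals converge):

Code:
    import math
    from scipy.integrate import quad

    # Hypothetical utility-over-time curves produced by two different actions.
    U_A = lambda t: 3.0 * math.exp(-0.5 * t)  # large short-term benefit that fades fast
    U_B = lambda t: 1.0 * math.exp(-0.1 * t)  # smaller but much longer-lasting benefit

    # Compare the actions by the difference of their utility integrals over all time.
    difference, _ = quad(lambda t: U_A(t) - U_B(t), 0, math.inf)
    print(difference)  # 3/0.5 - 1/0.1 = -4.0, so the second action wins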

bo198214 wrote:Because prediction is unreliable or impossible, it then becomes really difficult.

I think the effects of a decision will either reduce to nothing after a long time, or eventually become so unpredictable that the probability of a positive effect cancels out the probability of a negative one, so only the short-term effects really matter.

bo198214 wrote:For example, would it be feasible to kill all of humankind except a few families, so that nature can regenerate and their descendants can enjoy a paradisiacal life in blossoming nature?

It could be, but I doubt it. The few families left over would either die out or repopulate the earth, and would eventually have the same problems that we have now. Even if it is morally acceptable to kill most of humankind, it would be better still if everyone just stopped having children.

One interesting question is whether we should consider the total utility or average utility. If we only look at the total utility and we assume that most people have positive utility, then we are morally required to have children. But if most people have negative utility (and if this will always be the case), we are morally required to wipe out all life. If we look at average utility, then it may be justifiable to kill people who have negative utility overall, like people in poverty.
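
A tiny illustration of how the two criteria can pull apart over whether adding one more person is an improvement (the numbers are invented):

Code:
    population = [5.0, 4.0, 3.0]  # hypothetical utilities of the people who already exist
    newcomer = 1.0                # a new person: positive utility, but below the average

    total_before, total_after = sum(population), sum(population) + newcomer
    avg_before = sum(population) / len(population)
    avg_after = (sum(population) + newcomer) / (len(population) + 1)

    print(total_after > total_before)  # True: total utility says have the child
    print(avg_after > avg_before)      # False: average utility says don't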

There is only one acceptable solution I can see. We maximise average utility, but we assume that the vast majority of people have positive utility. Anyone with negative utility may be a candidate for euthanasia. However, with most people we have three options: do nothing, euthanise them, or increase their utility above zero. Obviously the third is the best option wherever possible.

