X-Message-Number: 10178
Date: Tue, 4 Aug 1998 16:25:46 -0400
From: Brook Norton <>
Subject: what you "ought" to want

This is a response to Bob Ettinger, Charles Platt, Tim Freeman, and Brian
Delaney who have all recently commented on the assertion that:

** The only rational approach for anyone is to try to maximize personal
happiness over future time, appropriately weighted.**

Let's reconsider the drug-high problem.  It is a thought-experiment that
goes right to the core of the problem.  If trying to maximize happiness is
the end-all, then wouldn't one choose "spending the rest of eternity in
drooling bliss (Tim Freeman)."  First consider a modern-day heroin addict.
I concede that after shooting up he might be happier than dealing with
reality... until the high wears off.  But I believe the long-term mental
and physical effects of the addiction will result in him being much less
happy in the long run.  Therefore, heroin addiction should not be tried
since it decreases overall happiness.  (I can't prove someone else wouldn't
be happier as an addict in the long run, but it seems like a good bet.  I'm
trying to clarify the philosophical concepts rather than proving anything
in this message.)

For the sake of the thought experiment, suppose a god gave you a drug
guaranteed to give a heroin high that would not wear off and that your
physical well-being would be cared for till eternity.  This would be a
better deal than modern addiction, but I still wouldn't take it because
future technical advances will probably make it possible to be happier than
a mere heroin high.  And there are other types of happiness I want to enjoy
like ice cream and love.  So I would reject the drug because I believe I'd
be happier without it in the long run.

So the god guarantees that the drug will give me all possible kinds of
happiness and in quantities greater than I could ever experience without
the drug.  On top of that, he'll monitor the progress of man (and his
descendants) and will update the drug whenever a new form of happiness is
discovered, thereby guaranteeing that I'll be happier than any contemporary
could be.  I would take the drug since it would make me happier in both the
short and long term.

But the god adds: the side-effect is that you'll have no intelligent
thoughts (other than those required to experience the happiness); you'll
drool a lot and you'll look like Dennis Rodman for eternity.  No problem.
So long as I'm happy, give me the drug.  I'd rather be dumb-happy than
smart-sad.

This is hard to swallow because we so often associate mindlessness and
drugs with unhappiness.  But in the thought-experiment, drugs are linked
with true happiness and so such a super-drug should be taken.

End of drug thought-experiment.

"Happiness" should not be defined too narrowly.  I'm using it to mean all
related enjoyable experiences (even those types of experiences not yet
invented or discovered).   Ettinger calls it "feel-good".

In the phrase ** The only rational approach for anyone is to try to
maximize personal happiness over future time, appropriately weighted.**
consider "appropriately weighted".  Tim Freeman points out heroin may make
you happier in the short term.  Brian Manning Delaney points out that if you
live to infinity, maximizing happiness may not be appropriate since two
different courses of action may both result in infinite happiness.  But
"appropriately weighted" gets us out of these problems.  The idea (not
proved here) is that near term happiness is worth more than long term
happiness.  How much more is very fuzzy to me.  Heroin addiction gives
brief periods of happiness mixed with longer periods of unhappiness. 
Appropriately weighing these highs and lows leads me to believe that heroin
will reduce my overall happiness.  Considering a choice where my life is
infinite, I put a value approaching zero on happiness at t = infinity, and
base my choice on more near-term happiness.  For example, suppose a god
tells me that he will give me the super-drug described above in 1 million
years.  But for the next million years I will experience horrible misery.
One million years of misery combined with the following infinite time of
bliss adds up to infinite bliss.  I would reject the deal even though it
resulted in a net infinite bliss because I give high weighting to the first
million years.  I don't want to only "maximize happiness", I want to
"maximize happiness, appropriately weighted."

Charles Platt says:
>>
You also ignore the well documented fact that some people enjoy pain. You
may claim that this is a form of pleasure; but in that case, for these
people, you would have to say that pain does not exist.
>>

I'm no psychologist and don't understand the details of why some people
enjoy pain while most of us do not.  But surely these people enjoy some things
and dislike others.  Doing the things they enjoy gives them happiness and
doing things they dislike gives them unhappiness.  Then all the same
arguments hold.

Charles Platt then says:
>>
In fact the problem is far more complex, and I fear that by projecting your
own set of life-rules on the rest of the population, you are making an
error which is seen often in the cryonics community: assuming that everyone
else is as smart and rational as you are.
>>

Another claim of the "maximize happiness" approach is that it is objective,
that is, independent of anyone's opinion.  I can no more project these
rules on others than I can project gravity on others (not the best example
because I do project some gravity on others, but you know what I mean). 
The rules are a natural consequence of the nature of the universe.  We are
all subject to the same rules.  Those who are "smart and rational" and
apply them well will make more "right" decisions than those who are less
smart and rational.  By making more "right" choices they will be happier.

Tim Freeman says:
>>
Suppose that instead of deciding to maximize personal happiness, I took as
my highest goal to maximize the wealth of the Catholic church, or the US
Neo-Nazi movement, or whatever other bizarre charity you want to imagine.
This is a pursuable goal.  There isn't any contradiction there that I can
see.  It is different from the "only rational approach" listed above.  By
what criterion is it irrational?  There are people who really do things
like this.
>>

In responding to Tim, I quote Ettinger who is commenting on being rational,
"But the key questions are always whether you are being honest, and whether
you are utilizing all available information."  The people contributing to
the "bizarre charities" above believe it will increase their happiness. 
But they are being irrational because they are ignoring available
information that indicates contributing to these charities does not lead to
happiness.  They are pursuing happiness, but irrationally, and so
"wrongly."

Brian Delaney says:
>>
Hi Brook. I think you are wrong, or saying something empty.

If you really believe that at "the most basic level, ... the brain is
hardwired to always choose to increase happiness," then what you mean by
happiness is simply what we choose. Thus, you're saying, at bottom: the
only rational approach for anyone is to choose what we choose. Not helpful.
>>

Agreed.  Perhaps the assertion should be changed from

** The only rational approach for anyone is to try to maximize personal
happiness over future time, appropriately weighted.**

to

** Happiness is the primary end in itself. One should make rational
decisions to maximize personal happiness over future time, appropriately
weighted.**

As you say, the logic implies that we always act to increase happiness. 
And I agree with this.  But it's important to add that the "best" way to act
is "rationally" because this leads to the greatest happiness.  A drug
addict shoots up because he believes it will maximize his happiness,
however, this is irrational.  He has discarded evidence that shooting up
will lead to greater unhappiness in the long run.  Therefore, he acts
irrationally and "wrongly" while at the same time trying to maximize his
happiness.

Brian Delaney, in msg 10174, raises many good points and also says the
discussion should include determinism and that it's all probably too much to
cover on Cryonet.  I agree all around and hope we have the opportunity to
continue in some future way.  I also look forward to Ettinger's book (and
Mike Perry's) for more stimulating discussion.

Brook Norton
