X-Message-Number: 10174
Date: Mon, 03 Aug 1998 22:57:48 -0700
From: Brian Manning Delaney <>
Subject: Re: What you OUGHT to do
References: <>

(This is long: ~11KB or so.)

Robert Ettinger wrote (in a Cryonet msg. not yet sent out to
the List):

>I posted the following today to Cryonet. My
>guess is that there is more potential benefit
>than trouble-making in posting to both groups.
>Thanks to Dr. Delaney for his interest and his
>apparent recognition that "what we ought to do"
>is from some standpoints the most important of
>all questions.

I like the sound of "Dr. Delaney" enough that I'm impelled
to wrap up my dissertation quickly! Thanks for the extra
motivation! But, for the moment, just Brian (or Delaney, or
my full name) is more accurate. :)
 

>Brian Manning Delaney (#10170) wrote:
>>Brook Norton <> wrote:
>>>I'll restate that the underlying assertion is (borrowing
>>>some from Ettinger) ** The only rational
>>>approach for anyone is to try to maximize
>>>personal happiness over future time,
>>>appropriately weighted.**

>>Hi Brook. I think you are wrong, or saying
>>something empty.

>>If you really believe that at "the most basic
>>level, ... the brain is hardwired to always
>>choose to increase happiness," then what you
>>mean by happiness is simply what we choose. Thus,
>>you're saying, at bottom: the only rational
>>approach for anyone is to choose what we choose.

[....]

>First, it is not exactly true that the brain is
>hard-wired to choose to increase happiness.
>The brain, in Lorentz' metaphor, is a disorderly
>parliament of instincts (and habits and
>preferences etc.). "Choices" can arise in
>various ways, not all of them the result of
>balanced appraisal or cool calculation or
>anything similar. (See my [cryonet] post
>yesterday.) Nevertheless, Brook Norton is
>basically correct, that our most basic value is
>to maximize personal happiness (satisfaction,
>feel-good, whatever you want to call it).

I don't think I understand the significance of the
difference between what is "not exactly true" and what is
"basically correct." It appears to hinge on what you say
below, which I'm also not sure I follow (though I suspect I
follow, and just disagree).

>It seems superficially reasonable to object, as
>Dr. Delaney does, that the statement is circular
>and meaningless--that saying we always choose to
>increase satisfaction is the same as saying that
>what we choose is what we mean by "satisfaction."
>One way to understand the error is simply to
>compare alleged or chosen criteria of value, and
>ask "Why?"

>For example, suppose someone says his highest
>value is to improve the lot of humankind,
>regardless of his own fate. We simply ask, "Why?"
>It will develop that this is just what he wants.
>That is his value because that is his value.
>THIS is circular. Someone like Brook, on the
>other hand, will say I want to feel good because
>that is the way I am made, at the most basic
>biological level. Feeling good is an end, not a
>means. "Helping humanity" is a means, not an end.

But as a response to the question of what to do, Brook's
answer is still, as I said, ultimately circular, or else
empty and therefore unhelpful. You've shown that the "means"
answer isn't necessarily circular, but only in a way that
shows it's empty: nothing about the criteria for
"satisfaction" applied to means helps answer the question of
what to do (even if your analysis might say something about
what our ultimate end unavoidably is). That was my point.

In order to work out some of these differences, we may
require a conversation that's far too long to have here, but
I'll make another brief attempt. (I suspect a discussion of
determinism is part of what's necessary -- but I'll skip
that here.)

The original statement was: "The only rational approach for
anyone is to try to maximize personal happiness over future
time, appropriately weighted."

My contention is that this is deeply wrong (because empty or
ultimately circular).

There are a number of things that have to be dealt with for
Brook's statement to work, among them:

1. That a rational approach is better than a non-rational or
irrational approach needs to be shown.
2. That there are no other better rational approaches --
that is, that maximizing personal happiness is the most
rational approach -- needs to be shown. This of course
requires #3:
3. Personal happiness needs to be defined.
4. "Maximizing" needs to be defined.

(Not one, but all of these have to be dealt with, along with
other smaller problems.)

Proving #1 is probably easy (feel free to ignore it, if you
like -- though see the condition in the next paragraph). Still,
"rational" needs to be defined in a non-circular way. (That
is, it won't do to say, "rational is what makes sense," or
"rational is what reasonable people would agree on," etc.)

You appear to want to demonstrate #2, and, at least to some
extent, #3, largely by means of an understanding of the
brain. This is an understandable approach. However, I think
it's futile. I've suggested one problem already: that any
hard-wired "engine of action" (even understood as Lorentz
does) can't be called "rational" in a way that permits us to
say that action X is more rational than action Y, unless
rational is being defined in a way _radically_ different
from everyday usage. The brain, after all, does irrational
things. If a radical redefinition is needed, such a
redefinition needs a warrant (and answering #1 will no
longer be so easy, but will become more important).

It's also difficult to see how happiness can be defined by
means of an understanding of the brain. Part of my objection
has to do with the difference between the mind and the
brain. This is a long story, but one way of summing it up is
this. The Churchlands, whose work many people reading this
know (Paul and Patricia Churchland make claims about the
mind based on scientific understandings of the brain), are
very, very smart people, and do great work. But they aren't
philosophers, in my book. They're cognitive scientists. The
question of the definition of happiness -- especially as
something claimed as relevant to the question of what to do
-- is not a scientific question, however much science comes
into play in helping us do the right thing, and however much
science can help us understand certain mechanisms by which
happiness levels change. (There's a much longer story here
-- where isn't there in this debate, actually?!)

About #4:

>Dr. Delaney also appears to mistake the nature
>of probability calculations about the future. In
>order to reach a rational decision, it is NOT
>necessary to calculate every consequence of
>every possible choice out to infinity. Decision
>theory, rather, is precisely the science of
>making choices in the face of uncertainty and
>limited information.

I believe I appreciate the nature of probability
calculations about the future; I just have a different
understanding of whether such calculations are relevant in
the way you think they are.

Consider the following two statements:

S1. The right thing to do is to make choices that maximize
happiness.
S2. The right thing to do is to make choices that we assess
as having the best chance of maximizing happiness.

I take it A) you think the only relevant claim at hand is
#S2, and B) you think it can be answered using probabilistic
reasoning. (Correct me if I'm wrong.)

To take (B) first: Even if the only relevant question is the
second one, it's not clear that probabilistic reasoning
helps. I can, to be sure, come up with a mathematically
robust likelihood of my having an overall winning record at
poker, played against one particular person whom I know
(and have played before), for a limited time. There are the
odds of certain hands being dealt, the odds of the relative
skill levels not changing, etc. All these things can be
factored in. I can even, I think, come up with a fairly good
estimate of the probability of my having an overall winning
record if we played forever. Sure, my partner might take
advanced smart drugs before I do, etc., etc., but I can
factor in the probabilities of such eventualities. It's not
clear to me that we could _prove_ that the probability of a
given outcome over the course of infinity is calculable, but
I'm happy to grant that it is.
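
(To make concrete the kind of calculation I have in mind,
here is a minimal sketch in Python. The per-hand edge, the
session length, and the number of trials are all made-up
illustrative assumptions, not claims about actual poker.)

    import random

    def winning_record_probability(p_win_hand=0.55,
                                   hands_per_session=200,
                                   num_trials=10000):
        # Estimate, by simulation, the chance of ending a
        # fixed-length session with more hands won than lost,
        # given a fixed per-hand edge. All numbers here are
        # illustrative assumptions.
        winning_sessions = 0
        for _ in range(num_trials):
            wins = sum(1 for _ in range(hands_per_session)
                       if random.random() < p_win_hand)
            if wins > hands_per_session - wins:
                winning_sessions += 1
        return winning_sessions / num_trials

    print(winning_record_probability())

The only point of the sketch is that, once the conditions
are fixed and held constant, the estimate itself is
unproblematic.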

The possibility of calculating my happiness, under the
condition either 1) that I live forever, or, more likely, 2)
that my current happiness is contingent upon my current
assessment of future events (my children's well-being, my
grandchildren's well-being, the continuation of human life
beyond the time the sun goes nova, etc.), is an entirely
different matter. The system is not closed temporally (and
probably not spatially). The poker game isn't closed
temporally either, but one might argue it's like the
question of calculating the probability that flips of a coin
will average (under fixed conditions) to the same number of
heads as tails over time: the amount of time is irrelevant,
we know. The calculation of future happiness, however, is
not like the flipping of a coin (or rather IS, in a
different sense...). There are a few arguments I could make
to support this. The simplest is just to claim that the
system is not closed spatially, as the flipping of a coin is
(though even that isn't spatially closed, one could argue,
but that's not relevant). Take this as a claim that we can't
prove there aren't an infinite number of alternative
universes, if you're inclined. There are other ways to show
the system isn't spatially closed, but this is getting too
long. In any event, my response to (B) is a very minor
point, compared to the following.
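
(Before turning to (A), a quick illustration of the coin
point, again with made-up run lengths: because the
conditions of the process are fixed, the proportion of heads
settles toward one half no matter how long the run -- which
is just the sense in which the amount of time is irrelevant
there.)

    import random

    def heads_proportion(flips):
        # Proportion of heads in a run of fair-coin flips.
        heads = sum(1 for _ in range(flips)
                    if random.random() < 0.5)
        return heads / flips

    # The longer the run, the closer the proportion sits to
    # 0.5; lengthening the horizon doesn't change what the
    # long-run average converges to.
    for n in (100, 10000, 1000000):
        print(n, heads_proportion(n))

Nothing comparable holds, I'm arguing, for the calculation
of future happiness, because that "process" isn't run under
fixed conditions in a closed system.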

And now (A).

I argue that #S2 turns out to be incoherent, or useless,
because of its relation to #S1. I'll have to make some
assumptions about what you're thinking (as I've already
done!), which I'll happily see corrected (though I think the
assumptions actually follow from what you've said).

The main problem is that #S1 and #S2 contradict each other,
and it appears that you've claimed both (or made claims that
imply both):

#S1 here:
> [...] that our most basic value is
>to maximize personal happiness (satisfaction,
>feel-good, whatever you want to call it).

and #S2 here:
>Decision theory, rather, is precisely the science of
>making choices in the face of uncertainty and
>limited information.

With #S2, you admit the possibility that someone could make
THE correct probability assessment (granting for the moment
something I argued above might not be possible), and yet it
could turn out that a different course of action would have
resulted in a happier (or more satisfied, or whatever) life
-- possibly MUCH happier. So then, by #S2, the person did
the right thing, but by #S1 they didn't do the right thing,
and perhaps even did the WORST thing.

To get out of this problem, you either have to eliminate #S1
or #S2 (it seems to me). If you eliminate #S1, #S2 becomes
incoherent, for obvious reasons: the right thing to do
becomes the making of choices that we assess as having the
best chance of doing something the complete success of which
we've eliminated as our goal. If you eliminate #S2, then my
earlier claims about the problems of calculating something
into infinity become fatal, for we no longer have the goal
of best assessment, but of best actuality, for which
decision theory will not help us.


This was too long. Sorry. I will probably not have time to
add much more, but any responses will be greatly
appreciated.

>Yes, there is much, much more.

Yes -- and hardly less so now, alas!

Best,
Brian.
--
Brian Manning Delaney
<>
