X-Message-Number: 11070
Date: Mon, 11 Jan 1999 06:28:59 -0800
From: "Joseph J. Strout" <>
Subject: Theory of Personal Identity

In Message 11064, Robert Ettinger () wrote:

>I--and doubtless others--would be interested to see that theory.

It's very straightforward: persons A and B are the same person (i.e., have
the same personal identity) to the extent that they have the same mental
structure (which includes explicit and implicit memories, personality
traits, emotional responses, and so on).

Notice that I reject the Boolean classification; A and B may be more or
less the same person, rather than totally the same or totally different.
I think most of the confusion in personal identity over the centuries has
stemmed directly from trying to apply Boolean logic to an inherently
fuzzy-logic reality.  (Not surprising: fuzzy logic was developed fairly
recently.)

Some applications:

- I am almost entirely the same person I was yesterday, as the mental
  structures of me-now and me-then are almost entirely the same.

- I am not the same person as you, as we share no mental structure.

- You are mostly the same person you were ten years ago; see above.

- Identical twins or clones are not the same person (but maybe just a
  *little* bit, due to genetic influences on mental structure).

- I am the same person when I wake up that I was when I went to sleep
  (even if space aliens came down, took me apart atom by atom, and put
  me back together).

- You're the same person after deep hypothermic surgery, cryonics, etc.,
  because these operations do not (much) affect mental structure.

- You're the same person after uploading as you were before, provided of
  course that the upload is faithful and accurate.

>Although I don't rule it out, I have several problems with the
>conclusion that a copy of you "is" you.
>
>One of the problems has been mentioned repeatedly here and elsewhere.
>If there are many known copies of "you"--and perhaps many more unknown
>or possible copies somewhere in spacetime--then apparently one "ought"
>to strive to maximize total or maximum satisfaction for the whole set.

I'm afraid I don't see this.  I don't even see that one ought to strive
to maximize one's own satisfaction when duplicates are *not* involved.
(My stance in moral philosophy is basically utilitarian, but with a
number of modifications from the classic formulation, including a
weighting function that, for example, generally rules out sadism.  But I
digress.)

>My problem (or part of it) is that there seems no way to separate this
>scenario from a broader one involving "copies" of much lower fidelity.
>Even if initial "copies" are identical by agreed criteria, they will
>rapidly diverge, as Mr. Strout notes, and after a while may be no more
>similar than identical twins or clones with different histories.  There
>is some plausibility in the quantitative approach (a copy sufficiently
>similar, whether by design or accident, is partly you), but
>plausibility isn't enough.

First, I wouldn't say that copies "rapidly" diverge; they will become
less the same person as each other at pretty much the same rate that,
say, I am becoming less the same person as I used to be.

What you call the quantitative approach (and I call the fuzzy-logic
approach) does seem to me to fix both the problem of low-fidelity copies
and the problem of divergence over time.  These are only "problems" if
one tries to apply Boolean logic -- but if one does that, similar
problems can be found which don't involve uploading at all (involving,
e.g., split-brain patients, brain damage, amnesia, etc.).

>I also see a problem with the whole concept of "instance" as opposed to
>original/copy.

Well, clearly if we duplicate anything, we can point to different
instances of that thing, yet they may be the same thing in all ways that
we care about.  For example, suppose you've read The First Immortal.
"Wow," say I, "I've read the same book, twice now."  Do I mean that I
snuck into your house, stole your book, read it twice, and put it back?
No; we both understand that I have read a different copy (instance) of
the same book.  That's all I'm getting at here.

>Mr. Strout correctly points out that, if we reject the idea that you
>"survive" in a copy, we might also (depending on future developments in
>physics) have to reject the idea that we "survive" from day to day in
>the ordinary course of events, or after cryostasis, etc.  I don't
>reject that possibility either, although I emphasize that we still lack
>important relevant information about the nature of nature, and in
>particular about time.

That information will be more or less relevant depending on your
definition of personal identity.  I think a theory of PI must be judged
on both its consistency and its usefulness.  If your theory of PI
depends on the nature of time, and ends up concluding that we die every
moment and are recreated as an entirely different person, then it may be
consistent, but I suggest that it is not useful.  A useful theory will
produce identity relations that agree with our common usage for the sake
of contracts, expectations, allocation of responsibility, etc.

(By comparison, most people's "intuitive" theory -- basically, that
identity is defined by the body -- is not very consistent, but it is
useful today.  It will be far less useful when uploading is developed,
which is why I feel it's important to start developing more consistent,
but still useful, theories.)

>As far as I can see--and subject to review of Strout's theory--there is
>presently NO satisfactory or consistent answer to the question of
>correct criteria of survival.  Why not accept that and push on to get
>the needed information and concepts?

Well, I think I understand that your (implicit?) theory of identity
depends on some details of physics that are not yet available.
I respect that, but the theory I find most consistent and useful does
not.  It does, however, raise some practical issues about how various
kinds of mental structure are stored, so that we can (for example)
better judge whether cryonics is preserving this information.  Progress
on these fronts (mainly neuroscience) is proceeding rapidly, and I look
forward to future developments.

Best regards,
-- Joe

,------------------------------------------------------------------.
| Joseph J. Strout               developer: MacOS, Unix, 3D, AI    |
| http://www.strout.net                                            |
`------------------------------------------------------------------'