X-Message-Number: 8094
Subject: CRYONICS Re: computers and errors
Date: Thu, 17 Apr 1997 10:47:57 -0400
From: "Perry E. Metzger" <>

> From:  (Thomas Donaldson)
> Date: Wed, 16 Apr 1997 23:56:33 -0700 (PDT)
> 
> One of the problems with making an ordinary computer alive and
> intelligent in the same way we are alive and intelligent, I think,
> is that of also imitating ERRORS. As human beings, we can sometimes
> be wrong, confused, etc etc. If we really wanted an emulation, we'd
> have to somehow arrange that the emulation is also wrong, confused,
> etc etc on the same occasions and the same way. Doing that raises
> much harder programming problems than just giving perfect answers
> every time.

This is a very weak argument. It operates on the dubious premise that
mistakes are the result of imperfect implementation of somehow
"perfect" human programming.

In fact, the reason you incorrectly decided to invest in IBM instead
of in Wal-Mart cannot be attributed to imperfection in your mind; it
is instead due to the fact that you CANNOT predict some things, and
you CANNOT produce a closed-form algorithmic solution to every
problem.
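This point about prediction can be made precise. As an aside not in the original post, here is a minimal Python sketch of the standard diagonal argument: any fixed prediction rule can be defeated by an agent that consults the prediction and does the opposite, so a "perfect" predictor of behavior cannot exist. The names (`make_contrarian`, `predict`) are my own invention for illustration.

```python
def make_contrarian(predict):
    """Build an agent that asks the given predictor what it will do,
    then deliberately does the opposite."""
    def contrarian():
        # Consult the predictor about ourselves, then contradict it.
        return not predict(contrarian)
    return contrarian

def predict(program):
    # Any fixed prediction rule will do for the argument;
    # this one simply guesses that every program returns True.
    return True

c = make_contrarian(predict)
# By construction, the prediction is wrong about this agent:
assert c() != predict(c)
```

No matter how `predict` is implemented (short of actually running the program, which may never halt), the contrarian contradicts it by construction. This is the same diagonalization that underlies the halting problem.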

The science fiction notion of the "perfect thinking machine", or even
of the perfectly logical, "Mr. Spock"-like being, is completely out of
line with reality.

When we build artificial mechanisms capable of intelligent thought,
they will be imperfect, period. There is no perfect way to pick the
"right" thing to do in any given situation; there aren't even
reasonable decision criteria for doing so, and any criteria you could
possibly arrive at would themselves have to be arational.

Perfect algorithms cannot be devised to do even a tiny fraction of the
tasks any intelligent entity must perform. *OF NECESSITY* any such
entity will make mistakes.

> I'd even suggest that one very good way to tell that you are
> conversing with a computer (in the classic Turing test) is to watch
> what happens when things don't come out right. The
> misunderstandings, mistakes, blunders, etc that people sometimes
> make say a lot more about how they work than when everything is
> done correctly.

Why would you assume we *could* build a "perfect thinking machine"
even if we wanted to? If there are three ways to parse something
someone said, and if you are using real-world audio reception devices
(ears or microphones) that introduce noise into sensing rather than
some impossible-to-build "perfect" devices, then as long as you
operate by science and not by magic you cannot help but sometimes
mis-parse or mis-hear what someone says. The idea that you could
somehow produce this "perfect thinking machine" is completely out of
line with reality.
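To illustrate the parsing point concretely (my example, not from the post): even a five-word sentence can be grammatically ambiguous, with no "correct" reading recoverable from the words alone. The toy grammar and lexicon below are assumptions for illustration; a small CKY-style chart counts the distinct parse trees, and finds two, depending on whether "with telescope" attaches to the verb or the noun.

```python
from collections import defaultdict

# Toy binary grammar rules: (parent, left-child, right-child).
rules = [
    ("S",  "NP", "VP"),
    ("VP", "V",  "NP"),
    ("VP", "VP", "PP"),   # attach PP to the verb phrase
    ("NP", "NP", "PP"),   # attach PP to the noun phrase
    ("PP", "P",  "NP"),
]
lexicon = {"I": "NP", "saw": "V", "man": "NP",
           "with": "P", "telescope": "NP"}

def parse_counts(words):
    """CKY chart where chart[i][j][A] counts the parse trees
    of words[i:j] rooted in nonterminal A."""
    n = len(words)
    chart = [[defaultdict(int) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1][lexicon[w]] += 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):         # split point
                for parent, left, right in rules:
                    chart[i][j][parent] += (
                        chart[i][k][left] * chart[k][j][right])
    return chart[0][n]

counts = parse_counts("I saw man with telescope".split())
print(counts["S"])  # 2 readings: who is holding the telescope?
```

Both readings are perfectly grammatical; any hearer, biological or mechanical, must guess between them, and will sometimes guess wrong.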

Perry
