X-Message-Number: 8637
Date: Mon, 29 Sep 1997 09:44:30 -0400
From: "John P. Pietrzak" <>
Subject: Re: Digital Shakespeare
References: <>

John K Clark wrote:
[On Hopcroft & Ullman's version of the TM]
> The beauty of the Turing Machine concept is its ability to reduce
> every computer without exception to its SIMPLEST possible conception,
> one that is unique.

Hmm.  The TM itself is the bastard child of a finite state automaton
and a mathematical representation of an infinitely long magnetic tape,
glued together with that odd "read-head" function.  Personally, I can't
swear that it's the simplest possible conception available, or even
that it is unique (given that even you and I obviously have two different
basic representations of the TM between us).  To me, the real beauty
of the TM is that it does bound what we think of as a digital processing
machine, and that it is easy to use when performing proofs.
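
(If it helps to see the "glued together" picture concretely, here is a
rough sketch in C -- entirely my own toy, nothing from Hopcroft &
Ullman's formalism -- of the three pieces: a finite control, a tape,
and the read-head loop that couples them.  All it does is write "10"
on a blank tape and halt:

    #include <stdio.h>

    #define TAPE_LEN 64             /* a stand-in for the "infinite" tape  */
    #define HALT     -1

    struct rule {                   /* one row of the finite state control */
        int  state;                 /* current state                       */
        char read;                  /* symbol under the head               */
        char write;                 /* symbol to write back                */
        int  move;                  /* -1 = left, +1 = right               */
        int  next;                  /* next state (HALT to stop)           */
    };

    /* a toy machine: on a blank tape, write "10" and halt */
    struct rule program[] = {
        { 0, ' ', '1', +1, 1    },
        { 1, ' ', '0', +1, HALT },
    };

    int main(void)
    {
        char tape[TAPE_LEN];
        int  head = 0, state = 0, i;

        for (i = 0; i < TAPE_LEN; i++)
            tape[i] = ' ';                      /* the blank symbol */

        while (state != HALT) {                 /* the read-head "glue" */
            for (i = 0; i < 2; i++) {
                if (program[i].state == state && program[i].read == tape[head]) {
                    tape[head] = program[i].write;
                    head      += program[i].move;
                    state      = program[i].next;
                    break;
                }
            }
        }
        printf("[%.10s]\n", tape);              /* prints "[10        ]" */
        return 0;
    }

Nothing deep, but it shows where the FSA ends and the tape begins.)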

[A cheap shot, couldn't help myself]
> nothing is simpler than a part that can change in only 2 ways

A part that doesn't change at all?  (Sorry, just had to say that. :) )

[Some strange aspects of the (Hopcroft & Ullman) Turing Machine]
> It always needs at least 3 symbols? A real computer like the one on my
> desk only uses 2 [...]

What I've not brought up here is the question of addressing.  The real
computer on your desk uses only two symbols; but the regions which hold
those symbols are all individually addressable.  In RAM, on the disk
(and even on most modern forms of tape) each block of data is
individually addressable, and blocks are normally of a standard size,
so that it is easy to distinguish which pieces of data lie where in
the block itself.
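
In a couple of lines of purely illustrative C (the names and data are
my own, nothing more), the difference is this: with an addressable
store you just name the cell you want, while on a serial tape the head
has to walk there one step at a time.

    #include <stdio.h>

    #define N 5

    int main(void)
    {
        char ram[8]  = "address";   /* an addressable store             */
        char tape[8] = "address";   /* the same data on a serial "tape" */
        int  head    = 0;

        /* RAM (or a disk block): just name the cell you want */
        printf("ram[%d]  = %c\n", N, ram[N]);

        /* serial tape: N single steps of the head, then a read */
        while (head < N)
            head++;
        printf("tape[%d] = %c\n", N, tape[head]);

        return 0;
    }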

Hopcroft & Ullman's tape is a truly serial access structure.  There
is no way to move the tape head directly to a specific cell on the
tape; thus, in order to show where one block of data ends and the
next one begins, you need special end-marker symbols.  (If you only
have two symbols on this sort of structure, you'll need to do something
interesting to them to allow them to both hold a 1 or 0 value and to
perform data boundary tasks.  I can remember seeing an example of a
H-U TM with a language containing only the symbol 1 and the blank
symbol, but I can't remember if it was able to do everything that one
with both 0 and 1 could...)
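
One way to do that "something interesting" -- my own illustration, not
a construction I remember from Hopcroft & Ullman -- is to spend two
tape cells per item, so the two symbols carry both the data bit and
the block boundary:

    #include <stdio.h>

    /* "00" means data bit 0, "01" means data bit 1, and "11" marks the
     * end of a block, so two symbols end up doing double duty */
    void encode(const char *bits, char *tape)
    {
        while (*bits) {
            *tape++ = '0';              /* first cell: "this is data"   */
            *tape++ = *bits++;          /* second cell: the actual bit  */
        }
        *tape++ = '1';                  /* the "11" end-of-block marker */
        *tape++ = '1';
        *tape   = '\0';
    }

    int main(void)
    {
        char tape[32];

        encode("1011", tape);
        printf("%s\n", tape);           /* prints 0100010111 */
        return 0;
    }

The price, of course, is that every block of data now takes twice the
tape it would on a machine with a richer alphabet.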

[Encoding issues]
> Why are you bringing up eight bit and sixteen bit processors with very
> specific and non universal machine instructions? We both know they can
> be reduced to something much more fundamental, a binary Turing
> machine.

Sir, I happened to be talking about a system that you brought up, the
AIC.  That thing deals with the encoded length of algorithms.  The
encoded length of an algorithm on a 16 bit machine is DIFFERENT than
the encoded length of an algorithm on an 8 bit machine, which in turn
is DIFFERENT than the encoded length of an algorithm on a Turing
Machine.  Wouldn't you think, then, that this means the results you
achieve from your much more fundamental binary Turing Machine w.r.t.
the AIC would NOT apply to other processors?
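
Just to put numbers on it (the instruction count below is invented;
only the comparison matters): the same abstract routine comes out to a
different number of bits depending on the machine you encode it for,
which is exactly the sense in which an AIC figure is tied to its
machine.

    #include <stdio.h>

    int main(void)
    {
        /* a hypothetical routine that takes 12 machine instructions */
        int instructions = 12;

        printf("encoded for an  8-bit machine: %3d bits\n", instructions * 8);
        printf("encoded for a  16-bit machine: %3d bits\n", instructions * 16);

        /* a binary Turing Machine gives yet a third figure, since its
         * "instructions" are rows of a state table, not fixed-width opcodes */
        return 0;
    }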

[More encoding issues]
>         >they distinguish between data and instructions in exactly
>         >the same way that a real-world general-purpose computer does
>
> And exactly how does a Turing machine react differently if a zero it
> reads on a tape is an "instruction" as opposed to being "data"? [...]
> The difference is subjective, your data could be my instructions.

Like I said, exactly the same as a real-world general-purpose computer.

[More fiddling with TMs]
>         >Besides, there is nothing at all stopping me from creating a
>         >FSA which [...]
> 
> In the context of a Universal Turing Machine [...]
> A Universal Turing Machine is perfectly capable of doing this, but
> not if the only information you give it to accomplish the task is one
> bit.

You are exactly right sir, which is why I'm not talking about a
Universal Turing Machine.  (I'm talking about fiddling with the FSA of
a TM, which is verboten when you're messing with a UTM: since you're
passing the UTM both the entire description of a whole other TM and the
problem that TM would work on anyway, it doesn't really matter.  The
only reason the UTM exists in the first place is to show that you can
emulate one TM on another; it isn't supposed to be some general panacea
for all proofs.)

[More on the origins of Digital Shakespeare]
> By the way, could you tell me why you keep going on and on about
> printf statements, I don't see what you're driving at.

Yes, by this point, we do need to describe the argument again.  Let
me back up and give the gory details:

1) You say determining the Nth digit of Pi is complex.
2) I say determining the Nth digit of Pi is time-consuming, not complex.
3) You say time-consumption is complexity, bring up AIC as a definition
   of complexity without time, and refute AIC, thus supporting your claim.
4) I also refute AIC, on grounds other than time-based ones.
5) You question the second refutation, eventually describing AIC on a TM.
6) I use the H-U TM definition to show that the simplest solution to AIC
   is trivial on a TM (encoding a C language style "printf" statement in
   the FSA; a toy version appears below).  This forms my refutation of
   your refutation of my refutation of AIC, thus supporting my
   disagreement with your reasoning that time = complexity.

Whew.
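
And for completeness, the "printf" in step 6 above is nothing deeper
than this (digits abbreviated, and obviously just a toy): the whole
answer is baked into the program text, so the machine computes nothing
at all.

    #include <stdio.h>

    int main(void)
    {
        /* the entire "algorithm" is the string itself */
        printf("3.14159265358979323846\n");
        return 0;
    }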

[On to issues of certainty and vagueness]
>         >I'm beginning to be unsure that I can believe anything you
>         >say.
> 
> You shouldn't believe anything I say just because I say it, only if
> you find my arguments convincing and apparently up to now you haven't.

To be honest, I've found a good deal of your arguments to be convincing.
There are only a few points upon which I disagree, and our discussion
has generally been reduced to those points.  (One of these being whether
a person can be certain of anything.)

[The name of a category]
>         >Ah, good.  Then that means you could have used the name
>         >"stupidity" for that category and it would have worked just
>         >as well.
> 
> Call it whatever you like, the word is not important, you could use
> xqqwjbvy as a label for it and it would work just as well.

But my question is, do you really believe that the category, beyond
inclusion or exclusion of members, is meaningless?  Is there no other
important relationship between the members other than the fact that
they are members?

>         >If your brain is ruled by vague and fuzzy rules of thumb,
>         >how can you give a "concrete" example?
> 
> Turn it around, if your brain is ruled by rigid rules of perfect
> logic, how can you give a "vague and fuzzy " example?

Personally, I would think that would be much easier: once you knew
what a vague and fuzzy answer was, you would be able to crank out
perfect examples of them every time.  :)

> If everybody's mind works by a collection of theorems codifying the
> one true law of logic why do different people act differently, and
> why does everybody make mistakes?

All right, it's true, I don't believe that people have perfect all-
inclusive theorem provers inside their brains.  I do, however, believe
that people have axioms in which they believe, and that they use logic
quite often.  Without a complete set of axioms or a full set of theorems
over those axioms, and moreover needing to work in real time, they will
often come up with results that are less than perfect; that doesn't mean
the ability to perform correct logic does not exist.

Oh, I guess you've basically said this:

> Actually I'm sure we have some rules in the form "if A then always
> exactly B" but the trouble is the rule may be wrong, for example,
> "If a man does not believe that Jesus is God then he will be tortured
> for eternity by the same loving God".

Well, what axioms you choose to believe in are up to you, but the logic
will still work (you'll note that Christians believe that it is because
God loves us that he punishes us... sort of "character building", I
guess).

> Does all this mean there is a possibility everything I say or think I
> know is wrong? Yes.

As far as the absolute truth of our axioms goes, I agree with you.  I'm
hoping to eventually find the absolute truth myself.  Until then, I
do have absolute certainty *with respect to the axioms I have chosen*
in my own reasoning.

> Absolute certainty is easy to obtain, far too easy. People are
> running around certain about the damnedest things, most of them
> mutually exclusive. There seems to be a bizarre inverse relationship
> between certainty and the ability to test an idea experimentally,
> people will admit that they aren't sure if Cheyenne is the capital of
> Wyoming, but will insist they are certain there is life after death.

But you're talking about a common sociological situation: Cheyenne
being the capital of Wyoming is a simple fact of a name, life after
death is an emotionally-charged socio-political issue with intense
personal ramifications.  You can't expect people to answer questions
truthfully when their own survival is at stake; they'll make sure that,
even in the deepest recesses of their minds, they believe a particular
fact is true, no matter how obviously it conflicts with the rest of
their axioms.

There's nothing bizarre about it.  I personally believe in the absolute
selfishness of the human race: if a particular person sees more benefit
to themselves in believing one axiom over another, they will do it, no
matter how it conflicts with their other axioms.

> If I truthfully tell you I am absolutely positively 100% certain that
> fact X is true you have learned nothing about fact X, you have only
> learned about me and the state of my mind.

I am absolutely positively certain that people in this world in which I
live use the term "intelligence", that they use it as if it has some
meaning, and indeed they all seem to use it in a similar way.  In so far
as I can understand your own belief about fact X, I may indeed have been
able to learn something about fact X itself, in that your mind and
others like it are the only places in which any possible definition of
this particular fact resides.  (So far as I know.)

>         >although you can perform experiments, you can't ever tell
>         >what the experiment is really about [...]
> 
> I'm not sure I understand you, it almost seems as if you're saying
> that if there is any chance at all a procedure will produce an error
> then the procedure is completely useless [...]

To have an experiment, you need a hypothesis.  To have a hypothesis,
you need axioms.  To have axioms, you need certainty: otherwise, you
cannot describe the axioms in sufficient detail.  You need not be
certain about absolutely everything, but you _must_ be certain about
which axioms you choose and what they mean.  Otherwise, you could never
disprove them.

> Godel proved in 1930 that ANY set of axioms will be incomplete or
> inconsistent or both.

And it's just amazing how he stopped science in its tracks when he
showed that.  :)

Even today, all of science runs on axioms, practically everyone knows
and understands Godel's proof, and yet science continues to stumble
forward.  Because, even if you know that you won't be able to cover
everything with your set of axioms, a good set of axioms will still
apply to a large percentage of what you're interested in, and you can
always leave the rest to a better set of axioms later.  Partial truth
is better than none at all.


John
