X-Message-Number: 8634
Date: Sun, 28 Sep 1997 21:21:29 -0700 (PDT)
From: John K Clark <>
Subject: Digital Shakespeare

On Sat, 27 Sep 1997 "John P. Pietrzak" <> Wrote:

>In Hopcroft & Ullman's book, a TM is a device with a tape, the tape
>is divided into cells, and each cell can contain a symbol from
>Lambda, the (finite, but can be greater than 2) set of allowable
>tape symbols

Well of course it's mathematically equivalent to a binary Turing Machine; it's also mathematically equivalent to a Pentium II with a 512 KB L2 cache, 64 MB of main memory, and an 8 GB hard disk. The beauty of the Turing Machine concept is its ability to reduce every computer, without exception, to its SIMPLEST possible conception, one that is unique. There is nothing unique about a machine that uses 3, 4, 5, or N different symbols, but there is something unique about a machine that uses only 2, because, as I've said, nothing is simpler than a part that can change in only 2 ways.

>In fact, in order to encode a set of ones and zeros on the tape using
>their formalization, you'll always need at least three symbols in
>your language, since you'll need a "blank" symbol for tape
>manipulation purposes.

It always needs at least 3 symbols? A real computer like the one on my desk uses only 2; it seems pretty pointless to dream up a "simplified" abstract model that is actually more complex than the real thing. Or are you telling me that this is something new, something that cannot be reduced, that it's fundamentally different from a classic Turing Machine with 2 symbols? I admit that I've not read Hopcroft & Ullman's book, but if true I think I would have heard about the greatest revolution in computer theory since Turing's paper was published in 1936. I haven't.

>When you start talking not about a set of actions but rather about
>the written representation describing a set of actions, we now have
>to start being very specific about just how those actions are
>encoded.
>(The "add" instruction on an eight-bit processor is encoded
>in exactly half the space of the "add" instruction on a sixteen-bit
>processor.

Why are you bringing up eight-bit and sixteen-bit processors with very specific and non-universal machine instructions? We both know they can be reduced to something much more fundamental, a binary Turing Machine. Yes, when programming in the real world it's important to distinguish between data and instructions, and yes, we don't usually write programs with only 2 symbols. So what? We were talking about Turing Machines.

>This means, the encoded length of an algorithm consisting of a
>single addition on both processors differs greatly (in fact, by 100%)
>even though they may be EXACTLY THE SAME ALGORITHM.)

I hate to sound redundant, but so what? I never said that the same algorithm must always be the same size regardless of the particular hardware it is implemented on, and I can't imagine why anyone would say such a silly thing.

>they distinguish between data and instructions in exactly the same
>way that a real-world general-purpose computer does: by defining
>some sequences of symbols as instructions when they are in a
>particular state, and some sequences of symbols as data when they
>are in another state.

And exactly how does a Turing Machine react differently if a zero it reads on the tape is an "instruction" as opposed to "data"? In either case the machine changes its state, it changes the zero to a one or it leaves it alone, and it then either moves right, moves left, or stops for good. The difference is subjective; your data could be my instructions.

>Besides, there is nothing at all stopping me from creating a FSA
>which, when it reads a "1", performs the actions of a printf
>statement, and when it reads a "0", gives that printf statement the
>string I'm interested in printing.
In the context of a Universal Turing Machine a printf statement must mean "take a string starting at a specified position on the tape and ending at another specified position on the tape and duplicate it at yet another specified position on the tape". A Universal Turing Machine is perfectly capable of doing this, but not if the only information you give it to accomplish the task is one bit. By the way, could you tell me why you keep going on and on about printf statements? I don't see what you're driving at.

>>Me:
>>I said figuring out the trillionth digit of PI was a complex task,
>>you said it was simple and as "proof" you presented a computer
>>program that could do it ... eventually. You said because the
>>program was short that proved the task was simple. I said your proof
>>was flawed, and it is, but if AIC really was complexity then your
>>proof would have been valid. But it's not, so it isn't.

>Ah, now I begin to understand. Determining the trillionth digit of
>Pi is a time-consuming task.

Just a tad.

>Personally, I don't consider the consumption of time as being
>complicated

I know you don't; that's why you sounded like a defender of the "complexity is AIC" (Algorithmic Information Content) viewpoint, and still do. If I've misjudged you I'm sorry, but then explain why the shortness of your program is relevant in determining the complexity of finding the trillionth digit of PI.

>If your brain was only using a rule of thumb to come up with that
>statement, you may have been wrong.

Could be, I've been wrong before ... of course, it's been many years.

>I'm beginning to be unsure that I can believe anything you say.

You shouldn't believe anything I say just because I say it, only if you find my arguments convincing, and apparently up to now you haven't.

>Ah, good. Then that means you could have used the name "stupidity"
>for that category and it would have worked just as well.
Call it whatever you like; the word is not important. You could use xqqwjbvy as a label for it and it would work just as well.

>if you *really* aren't using random chance to locate examples of
>intelligence, there *MUST* be something other than random chance
>guiding you,

There is: educated guesses, rules of thumb, and playing the odds. But like nearly everything we do in life, except for pure mathematics or formal logic, we are not guided by rigid definitions, axioms, or theorems.

>If your brain is ruled by vague and fuzzy rules of thumb, how can
>you give a "concrete" example?

Turn it around: if your brain is ruled by rigid rules of perfect logic, how can you give a "vague and fuzzy" example? If everybody's mind works by a collection of theorems codifying the one true law of logic, why do different people act differently, and why does everybody make mistakes? Actually, I'm sure we have some rules in the form "if A then always exactly B", but the trouble is the rule may be wrong; for example, "If a man does not believe that Jesus is God then he will be tortured for eternity by the same loving God". Does all this mean there is a possibility that everything I say or think I know is wrong? Yes.

>How can you be certain about how our brains work, or in fact,
>certain about anything at all?

Absolute certainty is easy to obtain, far too easy. People are running around certain about the damnedest things, most of them mutually exclusive. There seems to be a bizarre inverse relationship between certainty and the ability to test an idea experimentally; people will admit that they aren't sure if Cheyenne is the capital of Wyoming, but will insist they are certain there is life after death. Being certain is a dime a dozen; being correct is not. If I truthfully tell you I am absolutely positively 100% certain that fact X is true, you have learned nothing about fact X; you have only learned about me and the state of my mind.
All I can do is present my arguments, and if you think they're good you might become as certain as I am that fact X is true. But is it?

>Your belief in vagueness appears to have extreme clarity here...

Thanks, I like to think so, but I could be wrong.

>although you can perform experiments, you can't ever tell what the
>experiment is really about, because you really can't understand the
>concept "stochastic" with certainty.

I'm not sure I understand you; it almost seems as if you're saying that if there is any chance at all that a procedure will produce an error then the procedure is completely useless, but it would be unfair of me to put such foolish words in your mouth.

>This is my problem with your approach: if you really are what you
>say you are, then you can _never_ know that you really are what you
>say you are.

True, but it would be just as true if you were right and we operated by a set of rigid axioms as in Euclid. Gödel proved in 1931 that ANY consistent set of axioms rich enough to describe arithmetic must be incomplete.

John K Clark
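[Editor's illustration of the Pi dispute above, not part of the original exchange: a program can have a very short description, hence tiny Algorithmic Information Content, while the time it takes to reach the n-th digit grows steeply with n. A minimal Python sketch using Machin's formula with integer arithmetic; the function names pi_to and arctan_inv are mine, and the 10 guard digits are an assumption chosen to absorb truncation error for small n.]

```python
def arctan_inv(x, scale):
    # arctan(1/x) * scale by the alternating Gregory series,
    # using only integer arithmetic (no rounding surprises).
    total = 0
    power = scale // x            # scaled x^-(2k+1), starting at k = 0
    k = 0
    while power:
        term = power // (2 * k + 1)
        total += -term if k % 2 else term
        power //= x * x
        k += 1
    return total

def pi_to(n):
    # Pi truncated to n decimal digits, returned as an integer,
    # e.g. pi_to(5) -> 314159.
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
    scale = 10 ** (n + 10)        # 10 guard digits absorb truncation error
    pi = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return pi // 10 ** 10         # drop the guard digits
```

The program text never changes as n grows, but both the number of series terms and the size of the big integers do, so reaching the trillionth digit this way would take an astronomical amount of time; that gap between description length and running time is exactly what the thread is arguing about.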