X-Message-Number: 8541
Date: Fri, 05 Sep 1997 11:57:32 -0400
From: "John P. Pietrzak" <>
Subject: Re: CryoNet #8533 - #8540
References: <>

John K Clark wrote:

> [On computers, I wrote:]
> >but what it *does* is still very simple.
>
> Pi = 4 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + ...
>
> Please tell me what the trillionth digit of Pi is.  The short line
> above is all the information you need to figure it out, so you should
> be able to do it in a snap.  It's very simple.

Calculating Pi (using the above formula) is, in fact, quite simple.
The following algorithm (in pseudocode) is sufficient to do it.

PROCEDURE CalculatePi()
VARIABLES
  REAL Pi;
  INTEGER div1, div2;
BEGIN
  Pi := 0;
  div1 := 4;
  div2 := 1;
  WHILE (TRUE)
  BEGIN
    Pi := Pi + (div1 / div2);
    div2 := div2 + 2;
    Pi := Pi - (div1 / div2);
    div2 := div2 + 2;
  END WHILE;
END.

Unfortunately, because it contains an infinite loop, this algorithm
will not terminate.  In order to terminate at the point where the
trillionth digit no longer fluctuates, you need to have more
information about how the formula behaves than I have at my disposal.

Still, the real problem here is that you and I have different
definitions of "simple".  This 15-line algorithm will calculate the
_exact_ value of Pi (assuming your formula is valid), given an
infinite amount of time.  Certainly, I can't afford to give it that
much time to run, but neither can I say that the little algorithm is
complicated to run.  (If there are any steps in the algorithm that you
can't perform yourself, perhaps it *is* complicated with respect to
you.)

> >In the end, the computer is just a glorified state machine.
>
> In the end all Shakespeare did was put ASCII characters in a sequence.

Absolutely not!  There is a great deal of difference between the
sequences of characters that originated from Shakespeare and a random
sequence of characters.  (Besides, ASCII wasn't invented until
centuries after he died. :) )
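As an aside, here is a minimal runnable sketch (in Python, my choice, not from the original post) of the pseudocode above.  It exploits one extra fact about how the formula behaves: in an alternating series with shrinking terms, the error after stopping is smaller than the first omitted term, which supplies the termination test the infinite loop lacks.

```python
# Leibniz series for Pi: 4 - 4/3 + 4/5 - 4/7 + ...
# The series alternates with shrinking terms, so the error after
# stopping is smaller than the first omitted term; that fact gives
# us a finite stopping condition.

def calculate_pi(tolerance=1e-6):
    pi = 0.0
    sign = 1.0
    div2 = 1                          # odd denominators: 1, 3, 5, ...
    while 4.0 / div2 >= tolerance:    # next term is still significant
        pi += sign * 4.0 / div2
        sign = -sign
        div2 += 2
    return pi

print(round(calculate_pi(1e-6), 4))  # close to 3.1416
```

Of course, at one extra decimal digit per tenfold increase in work, reaching the trillionth digit this way is out of the question — which is rather the point.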
On the other hand, there is no significant difference between one
state machine and another; apart from storage limitations, they
generally can perform all of a particular class of algorithms (and
only those algorithms).

> >your machine may be faster, but it _can't do anything new_.
>
> I'll bet it would take less than 5 minutes to write a computer
> program that would search for the smallest even number greater than
> 4 that is not the sum of two primes (ignoring 1 and 2) and then stop.
> Since you say that a computer can't do anything new you should be
> able to answer the following question.  Would this humble little
> program ever stop?
>
> The fact is nobody knows the answer to that question and there is no
> guarantee that you could find an answer regardless of how much time I
> gave you to figure it out.  It could be true so you will never find a
> counterexample and yet Turing found (same man who invented the test)
> that some true statements cannot be shown to be true in a finite
> number of steps, that is, they have no proof.

This is true, but:

> Whatever the machine does, stops or continues, it will be new
> behavior to you.

You're confusing instances of behaviour with the ability to behave.
Certainly, a computer may be able to present to you something that,
with respect to your previous knowledge of what it has done, is novel.
But what you can't do is create a new (digital) computer which can do
something that a previous computer couldn't possibly do.  The problem
you outlined above is true no matter what machine you use to implement
it.  Similarly, results such as the Halting problem hold for any
digital machine made today and into the future until the end of time.
The concept of the Turing machine is powerful because it does place
absolute limits on computability.
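For what it's worth, the "humble little program" above really is a five-minute job.  Here is one possible sketch (in Python; the bounded `limit` parameter is my addition, so that it can be run at all): whether the unbounded version ever halts is exactly the still-open Goldbach conjecture.

```python
# Search for the smallest even number greater than 4 that is not the
# sum of two odd primes ("ignoring 1 and 2"), and stop if one is found.

def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_goldbach_sum(n):
    """True if n is the sum of two odd primes."""
    return any(is_prime(p) and is_prime(n - p)
               for p in range(3, n // 2 + 1, 2))

def search(limit=None):
    n = 6
    while limit is None or n <= limit:
        if not is_goldbach_sum(n):
            return n        # counterexample found: halt
        n += 2
    return None             # no counterexample up to the limit

# Checking up to a finite bound proves nothing either way, of course:
print(search(limit=10000))  # prints None
```

The program is trivial to write; deciding whether `search(limit=None)` halts is what nobody knows how to do.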
> >I've met people who are brilliant but act like morons
>
> Everybody acts like a moron from time to time, but if you've never
> seen them act any other way then why do you say they're brilliant?

Question 1: A is brilliant.  B knows A is brilliant.  B sees A acting
like a moron.  If someone acting like a moron is a moron, and someone
acting brilliant is brilliant, why should B believe that A is still
brilliant?

Question 2: A is brilliant.  However, at this time, A is acting like a
moron.  C (having never seen A before) sees A acting like a moron.
With respect to C, is A then in fact a moron?

In my opinion, the way that you act is not a good indicator of your
intelligence.  I firmly believe that some people who are morons can
act brilliant for significant periods of time.  Real truth lies
elsewhere than surface appearances.

> >The Turing Test says basically, "if it talks like a human,
> >it's intelligent."
>
> No, it would take me about 2 seconds to write a program that talks
> like a human, a comatose one.  The Turing Test says "if it talks
> intelligently then it's intelligent".  That's not very deep I grant
> you, it's really just a tautology, but bad mouth tautologies all you
> want, they do have one great virtue, they're true.  That's why it has
> always been so utterly mystifying to me that some think The Turing
> Test is controversial, it's like debating the question "if something
> is moving swiftly is it swift?".

Not quite on the mark here.  A tautology is a statement along the
lines of "X = A implies X = A".  Your statement here is more of the
form "X acts_like A implies X = A".  If you can define "talks
intelligently" in an objective form, you would have a valid (and
valuable!) implication to use in determining intelligence.
Unfortunately, there really isn't any objective definition for it...

> I don't have a good definition of intelligence and I don't need one.

:) :)  Boy, I could sure use a good one.
I'd be able to completely restructure the whole field of AI if I could
come up with the right definition and get everyone else to agree to
it.

> Like any self respecting neural network most of my knowledge, and
> all of the really important stuff, is not in the form of definitions
> but of examples.  I say intelligence is like the way Einstein or
> Hawking behaves and unlike what a rock or a tree or a bug or a
> politician does.  That's why The Turing Test works.

Eliza has passed the Turing Test a few times.  You remember Eliza,
don't you?  About a thousand-line program which acts (sort of) like a
psychologist.  Is Eliza intelligent?

More to the point, training a neural net off of appropriate examples
requires a set of appropriate examples to begin with.  In other words,
you need a teacher.  Note that you don't need an objective, unbiased
teacher; any teacher will do.  That teacher will teach the neural net
a particular set of examples (and/or counter-examples) of a particular
category.  If successful, the neural net will have learned that
category.  But it will not know _which_ category it has learned.  You
have no way of telling where that category fits into the global
ontology.

So.  What you yourself have is a way of distinguishing Einstein or
Hawking from a rock or a tree or a bug or a politician.  What you
don't have is any reason for giving this category the title
"intelligence".  Which means you still don't know what intelligence
is.

> >People never built significant analog machines, because
> >they were so darned hard to create and use.
>
> No.  Analog computers are not hard to make, they are impossible to
> make, and that's not a word I often use.

Again, we must have duelling definitions here.  Quite a variety of
what I term "analog" computers have been constructed already: machines
which perform mathematics by adding or subtracting actual amounts of
electrical current rather than by manipulating switches.
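To make the Eliza point concrete: the trick is mostly keyword pattern-matching that rewrites the user's statement into a question.  A toy sketch in Python (the rules below are invented for illustration; the real Eliza's "DOCTOR" script was far richer) shows how little machinery the "psychologist" act requires.

```python
import re

# Toy Eliza-style responder: keyword rules rewrite the user's
# statement into a question.  These three rules are made up for
# illustration; they are not Eliza's actual script.
RULES = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "How long have you felt {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(m.group(1))
    return "Please go on."          # fallback when nothing matches

print(respond("I am worried about computers"))
# -> Why do you say you are worried about computers?
```

Whether a program like this deserves the title "intelligent" when it fools someone is precisely the question at issue.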
> >I'd prefer a test able to show more of a biological
> >similarity between the two structures...
>
> Exactly what did you have in mind?

To me, perfection would be an atomic-level simulation.  Barring that,
a faithful representation of each and every neuron in my brain would
be sufficient for me to feel good about the upload.  Basically, by the
time humans achieve the ability to do an upload, I suspect they will
have (at least as a byproduct of this) a pretty good way of
determining exactly what is lying around inside the brain.

John