X-Message-Number: 5521
From: Peter Merel <>
Subject: SCI.CRYONICS Re: Drexler's Timeline
Date: Tue, 2 Jan 1996 13:45:33 +1100 (EST)

John Clark writes,

>[Drexler says,] Seeing no reason current trends could not be
>extrapolated, and using his considerable knowledge of the field, he
>expects to see the first assembler able to reproduce itself sometime
>in the first two decades of the next century. A full nanotech
>computer could be made almost immediately after that, because the
>design work will already be finished by then; some people are
>working on that already.

So far, so good.

>[big snip ...] A recipe for intelligence: build a simulated world in
>your computer and fill it with very simple creatures (programs).
>Make sure they must solve problems in order to get "food". The
>creatures that are better at solving problems leave more
>descendants. Now you do nothing; just step back and let it evolve.
>After evolving for a few hundred million SIMULATED years you have
>intelligence, high-order intelligence.

What suggests that your simulated creatures will ever evolve
intelligence (human intelligence or better)? Life existed on Earth
for billions of years before humans evolved - one flap of a
butterfly's wing two billion years ago and our intelligence might
never have evolved. Alternatively, the intelligence that does evolve
may have no interest in donating its design skills to human ends - it
might even, as Larry Niven imagined, shut itself down after divining
the meaning of life, the universe and everything ...

It seems to me that the best way to engineer super-human intelligence
has already been plotted by Moravec: begin with regular humans and
improve their hardware. That is a much more achievable engineering
goal than the infinite-number-of-monkeys approach, and one that gives
us grounds for guesstimating the amount of time before the
singularity.
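Clark's recipe is, at bottom, a genetic algorithm. Here is a toy
sketch of that selection loop - my own illustration, not anything
from Clark or Drexler; the "problem" (the trivial OneMax task of
maximizing 1-bits), the population size, and the mutation rate are
all arbitrary choices for demonstration:

```python
import random

# Toy version of the "recipe": creatures are bit-strings, and "food"
# (reproductive success) goes to those that solve a problem - here
# the trivial task of having as many 1-bits as possible.
GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.01

def fitness(creature):
    # Problem-solving ability: count of 1-bits in the genome.
    return sum(creature)

def mutate(creature):
    # Each bit flips independently with small probability.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in creature]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # The creatures that are better at solving problems
        # leave more descendants; the rest leave none.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]
        pop = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]
    return max(fitness(c) for c in pop)
```

Even this toy converges only because its fitness landscape is
trivial; nothing in the loop guarantees that anything as open-ended
as intelligence ever appears, which is the objection above.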
Perhaps Eugene Leitl or one of our other uploading fans would care to
use the back of an envelope?

>As breathtaking as these changes are, it's really just engineering;
>Drexler invokes no new laws of physics and assumes no scientific
>breakthroughs. If there were one, things would become even wilder.
>For example, if all the recent speculation about quantum computers
>ever pans out and a practical machine is possible, it would make
>even Drexler look like an old fuddy-duddy.

Indeed - give me a general-purpose quantum computer and a weekend to
program it and I'll have you playing chess with God Themself!

>Somebody mentioned that safety concerns might slow things down. I
>doubt that they will, but perhaps they should. We are about to enter
>a period of gargantuan change happening in an astonishingly short
>amount of time, and that is an inherently dangerous situation; it
>would be foolish to deny it. The biggest danger is probably
>something that we haven't imagined yet, probably something we are
>incapable of imagining. In my darker moments I wonder if that could
>be an explanation of the Fermi paradox: the fact that we don't see
>any ETs and the fact that the universe has not yet been engineered.

In my lighter moments I think that the ETs are all around us, and
that civilisations live and die inside my nose. After all, if
nanotech is really this easy to do, then it stands to reason that
someone somewhere else in the universe must already have done it.
They'd probably be as interested in communicating with us as we are
with lichen - that explains the Fermi paradox. As to engineering the
universe, what makes you think it doesn't suit Them as it is?

>In spite of the dangers, I admit I'm happy about the coming changes;
>we might survive them, and the alternative, after all, is certain
>death for all of us. If nothing else, things won't be dull.
>The truth, however, is that it doesn't matter a hill of beans
>whether you or I think it's a good idea or not; somebody, somewhere,
>will do it, and do it as soon as he thinks he can. The best we can
>do is prepare ourselves as well as we can.

Amen.

--
mailto: | Accept Everything. |
http://www.zip.com.au/~pete/ | Reject Nothing. |