X-Message-Number: 15902
Date: Tue, 20 Mar 2001 00:00:20 -0500
From: James Swayze <>
Subject: Regarding responses to Singularity Bah Humbug!
References: <>

Hello all,


Sorry for the delay but this took some time to compose. See below responses to 
the following.

CryoNet wrote:

> CryoNet - Sun 18 Mar 2001
>
>     #15878: [off topic] Singularity... Bah Humbug! [Eugene.Leitl]
>     #15879: savants; happiness [Ettinger]
>     #15880: Re: [off topic] Singularity... Bah Humbug! [Scott Badger]
>     #15882: Re:{off topic] Singularity...Bah Humbug! [Brian Phillips]
>     #15885: Re: [off topic] Singularity... Bah Humbug! [Damien Broderick]



_______________________________________________________________________________________________

Eugene Leitl angrily wrote:

> It must be a nice universe to inhabit, one that conforms to your beliefs
> and expectations.


I could say the same for you. Eugene, I have the greatest respect for you, and have since I first began reading your posts and learning from you. So I'm perplexed at the need for the angry tone I perceive in your response to my criticism of whether Super AI will be considered godlike and help bring about the dangerous, possibly humanity-extinguishing flavor of Singularity scenarios. I am certain your IQ is far greater than my mere 150, and I know you are more educated than I. So does superiority breed contempt? Sure looks that way, yes? Sort of proves one of my points, huh?


> > science and grew out of it. It was a little traumatic. I now am fully atheist.
>
> So, you're now of the opposite persuasion?


Not following you here. Are you asking me if I'm opposite of believing in god, magic, religion, irrationality? I thought I made it clear I was. Why was it necessary to highlight this part of the issue in your efforts to counter my position regarding super AI? I don't see it as germane to the topic of whether you agree with me or not that Super AI will be godlike. It has a mean feel to it. So who's being a troll now?


> > And I didn't go through all that just to end up bowing down some goddamn god of
> > our own making!! Machine AI!! Bah!! Bear with me before getting upset at me for
>
> This seems a rather simplistic view.


Oh really? So you don't believe Super AI in any way deserves to be considered, or is considered in some circles, as possibly godlike? Wait a minute, I think I have a quote from someone I respect very much and consider way above me in intellect, ability and knowledge.


"I think that (apart from the means of production themselves) computronium will be first mass produced output of nanotechnology of any flavour. And it will be certainly used for AI applications, ALife-style. I think a metric ton of computronium is a pretty useful DIY godhead toolkit. If imprinted with the right virtual machine pattern, I (quite rightfully) feel insignificant in comparison to that thing."


I like the part about "godhead toolkit". Has this anything to do with Super AI being considered demigod-like? Who could have said this? Oh shit...it was you!!

> There's no point in introducing an
> artificial Us/Them polarity.


Huh? I wonder if some would apply the Us/Them polarity to the Neanderthal/Cro-Magnon polarity. So you don't believe in evolution? Let's see.

Do you agree that Super AI could possibly develop self awareness?

Do you disagree that an entity that has self awareness will understand and so prefer existence over nothingness?

Do you disagree that something considered an entity and self aware may also be considered a new species?

Do you disagree that different species on this planet have a history of competition?

Do you agree that Super AI would be superior to us if we are sans augmentation? Oops, wait a minute, it seems you do! "I think that...[sic] I (quite rightfully) feel insignificant in comparison to that thing."

So would you not agree that the current top species (us), that "need resources like matter and energy", might need to compete with the new and superior species that will just happen to also "need resources like matter and energy"?

What's the obvious conclusion here? To me it is simply that we could never trust the machine.

> There's no point in introducing an event horizon either, as an observer
> traversing an event horizon of a spacetime singularity doesn't
> notice a damn thing. A late stage of an exponential is still smooth and
> contiguous. It's just, well, ramping up like crazy.


Like Thomas Donaldson says, I don't think you can apply infinity-evoking imagery to this issue. We certainly aren't dealing with relativistic time dilation when considering the rapid rise of knowledge and technology. We are submersed in it, not distant observers either.

> Manhattan might look like magic to an australopithecine.


You think we (the science aware community) are still so gullible as to mistake high tech for magic? I don't. I only use terms such as godlike to describe the situation of being helpless before something we'll hardly be able to fully comprehend if we each are not upgraded personally.

> Though a lot
> of change is being packed into increasingly shorter stretches of time,
> you can still keep current, if you choose to. Just don't get left behind.

That's my point. The only way to stay current is augment OUR intelligence.


> > being a luddite, I certainly am not. I'm all for AI, just not machine AI. I got
>
> Huh?
>


Huh? Back. I know you are more than smart enough to know I was referring to "artificial" augmentation. You were just being a pill to say "huh?". Play fair.

> You must have seen "The Forbin Project" one time too many.


Never saw it. I'm careful to protect my meme space. I prefer to make my own conclusions about the future based on what I come to know of reality and applying common sense and logic.

> > "The last word uttered by mankind
> > will be, What's this button do?".
>
> Strange, I thought it was "Uh-oh".


Now you're being simply troll-ish. This certainly has nothing to do with the subject of your disapproval of my post and of my opinion, but you decide yet to vent your anger towards it and me by quibbling over my quote? FYI, it's from the "Moon Base 3" (if I recall the name correctly) British TV series, and it IS as I said. It made an impression on me, so I committed it to very long term memory, though the show's name apparently was not deemed so important. You were being a pill again.


> > Does AI have to be machine? I don't think so. Why trust super intelligence to
>
> Hint: the A in AI stands for Artificial.


Umm, hint back at you. Recipe: natural intelligence + artificial augmentation (IMHO) = artificial intelligence. How closely did you really read my post? Too angry that I stepped on your cherished memes? Must have really threatened them for such vitriol to result, eh?


> > some inhuman machine. If not human they'll no doubt end up competing with us for
> > resources and we'll lose. Superiority breeds contempt more often than altruism.
>
> That's a distinct possibility.


Ok, if it is, why risk it? And if you believe so, why are you part of the problem? One possibility I would agree with is that perhaps you wish to help guide it so as to avoid the dangerous, out-of-control possibilities. So I have to ask you a question, especially after this statement by you: "I (quite rightfully) feel insignificant in comparison to that thing."


If you are so certain we'll be inferior, why would you choose to be inferior if you had a choice to be otherwise? Which would you rather be: superhuman, almost godlike, or inferior serf to a Super AI machine demigod? I choose to improve me and my kind.

> > Don't make machines smarter than us, it's colossally stupid!! Make us smarter
> > instead. Use any means available and then some. Use artificial means, use natural
>
> I agree here. However, some people are stupid, and *like it*.


See, we do agree on some things, so why all the fuss? Or am I included in those that "*like it*"?


> > means, use directed evolution and genetic manipulation. Put the AI inside each of
> > us. If we can indeed someday link human neurons usefully to digital data devices
> > then do so and then link us all together. Each of us a single processor in a 6+
> > billion strong and growing multiprocessing super computer.
>
> Good advice, let's go Borg.


Borg? Being a pill again, eh? So you would accept Hollywood's interpretation over science? Hey, I like Star Trek and have the greatest respect for Gene Roddenberry, but let's face it, he missed the boat when he failed to recognize that "his" transporter technology would mean immortality. Just send the bad bits to a buffer and let the computer replace them with whatever is necessary to fill the pattern with the correct material. Got cancer or aging effects? Go into the transporter, come out without. No brainer. The only excuse for Roddenberry not using it is that it would have changed the whole meme of the series and been too controversial at the time. I will not limit my imagination to Hollywood's opinion.


So why would Hollywood's horrifying concept of Borgism (the show requires a bad guy antagonist) have to be the only possibility? My following comment applies also to Scott Badger's response regarding AI dissemination. I don't feel that the reality of the most likely scenario would be that the first person augmented with AI would be the one to become the super AI demigod, to then rush off to prevent all others.

Just look to what's happening in reality and consider carefully the economics. Everyone is slowly moving to personal wireless computing and communication right now. Soon we'll have wearables; you must have seen them advertised as the next step. The next logical step after wearables is VERY personal computing, as in embedded inside the body. Being connected to the net is part of the equation already. Why would that change? If mutual connectivity is Borg then, too late, we already ARE Borg!


Let's not forget that Borg is slang for Cyborg, and by definition we arguably already are cyborgs through dependence on technology. Some of us more than others. I depend on pain relief delivery straight to my spine and literally would quickly be in danger of death or stroke via extreme blood pressure spiking without the little robotic device in my abdomen, and I also need a wheelchair. I am by definition a cyborg already.


Regarding dissemination of augmentation, one must consider the economics involved. It's not likely that business would spend vast funds to develop personal AI augmentation just to have the first person to use it prevent them from selling it to everyone else. They'll be wanting to make customers of as many or all of us as is possible. It will happen slowly, just like the dissemination of electronic calculators, PCs, cell phones and etc. Once only a few had them and they cost a fortune. Now just about everyone, their uncle and every other teenager has one. The same will be the case for augmentation and its logical accompanying mutual connectivity. Star Trekish murderous Borgs? Not necessarily.

> I wouldn't be saying "ever", not at this day and age. Whatever (and whenever)
> nanotechnology is going to deliver, molecular circuitry will be hitting the
> streets first. It's a question of 1) raw switches 2) the right architecture.
> We'll be certainly getting gadzillion of affordable switches, and there are
> rather strong clues about the right architecture, so essentially it's a
> question of time.


So the race is on between personal augmentation versus building a Jupiter brain, eh? I think the economics involved favor augmentation.

> Get rid of your anthropocentrism.


Why should I? I'm proud to be human. The definition will evolve, but we'll still be human regardless of form.

> Biology is not going to stay a constant, but nor are machines going to.
> "machine", "most efficient algorithm", "duplicate that over and over",
> "hidden flaws", "must kill all humans". Puh-leeze. I could flip all
> the sentiments 180 degrees to an anti-human bias, and it would ring just as true.
> Truer, if anything.


Firstly, I apparently am too dense to figure out your syntax above. Could you word it differently for me please? Honestly, I'm missing your point above.


> > We have hidden abilities that put computers to shame. Now one caveat. I can't
>
> Little disagreement here, if you mean current computers.
>
> Actually, if you'd let every second neuron in your brain vanish tracelessly,
> you'd probably not notice a lot of difference, apart from some degradation.


Do you have any actual proof of this? Have experiments been done on a real human being that removed every other neuron?

> You may notice that idiot savants typically excel at a single task.


Not true. Many can both calendar count and do other savant things, such as calculate prime numbers lightning fast, along with their primary savant skill. Didn't you go to the link I gave and read the rest of the data? These things were covered in the linked articles.

> You might
> have heard about a recent result (reference lost in mailbox upstream) where a
> focused disruption (using transcranial magnetic stimulation) produced
> idiot-savant-like effects in normal adults.


Firstly, the derogatory "idiot" term part of the old description has been dropped and it's called Savant Syndrome. Furthermore, had you, Eugene, gone to the link I provided, or merely read the part of the article I included from the link, or indeed read more closely the accompanying footnotes of my post, you would have answered the above question for yourself. Yes, I am aware of that experiment; it is the crux of my point! Yes, savants are very narrowly skilled, but not necessarily singly. The whole point of the new research is that the ability to function socially partially displaces the ability to concentrate so deeply on, and hence reinforce, the savant skill. Thus, the twin involved who once could calendar count and as well calculate prime numbers could not do so as readily later in life after learning more social skills. The hope is to find a way to allow us to temporarily tap into those primal abilities by temporarily turning off the skills that displace the savant skills. I assumed everyone here would imagine a way that augmentation via nano or chip interface and all the other sexy tech in our future might possibly help facilitate this, and so I didn't express my case completely enough.

> > Quantum laser turns electron wave into (computer) memory
>
> Yawn. The idiot press once again misunderstood something quite profoundly.
>


Again, did you even read the link? Idiot press? I think all the "idiot press" did was report what the experimenter provided them. Don't shoot the messenger.


> > Nanotech should be able to reduce the size of a quantum laser electron hard drive
> > to oh maybe the size of a dime or even the head of a pin. They'll be the rage!
>

> You should be reading up on some of that fancy Schroedinger stuff. Like a
> wavefunction trapped in a periodic boundary conditions/box, and the size of
> the box. Measurement, collapse of the wavefunction, and the like.


I haven't heard of this. Can you give me a link please? As I said, you are much more capable than I. Could you help me understand if this is capable of "infinite storage" like the Quantum Laser Electron Memory experiment was reported to be "possibly" capable of? Then, if so, why is this method of "infinite storage" (wavefunction trapped in a periodic boundary conditions/box) preferable over the other method of "infinite storage"? How much more than infinite does one need? Obviously I'm not informed enough to form an opinion. I only suggested one possibility. I didn't presume to exclude any others. If this is better, then so be it, whatever works.

> Nano doesn't allow you to run rings around quantum mechanics, unfortunately.

Not qualified to comment. Not sure what you are referring to.

> What's wrong with old-fashioned molecular memory?


Just guessing: maybe size and heat. Again, not really qualified to make a solid opinion. I just assumed that if there are people working on better, quicker, higher-capacity systems than molecular, such as those mentioned by you and by me, they know what they are doing and had good enough reason to do so. Just an ole country boy's common sense approach.

> What makes you think your work and play will not be so demanding, as to
> require you to use whatever computational resources are at your disposal?


I was imagining a low demand scenario. Certainly a human based super computer could achieve quite a lot during a high demand peak where, where safe to do so, as many as possible would dedicate full capability to the problem. I also assumed a more leisurely environment with machines performing all manual labor. It wasn't an absolute. It was merely a suggestion for a tiny fraction of possible human activity and computing ability. Again, I apologize for leaving it to everyone's imagination to fill in the blanks.

> Gravity? Who needs it?


That's funny, you compare the Vingean Singularity with the natural law of gravity. Teehee. You make a funny. Me laughing ass off. In other words, I can't believe you meant it. I certainly wouldn't characterize the "Spike" (I prefer that term) as an immutable law of the physical universe.

> You can't do much as a single person.

One can do what I am doing, provoke thought and debate.


> > P.S. I have an answer for the oh so scary nano grey goo as well and it doesn't
> > entail avoiding nanotech, just the how-to approach.

> Oh, everything is easy. Except when it isn't.


Question: without knowing anything about the details of my opinion/idea, why would you make any comment? Being a pill again? Well, to me it is easy. But what do I know? Guess I'm in my own little universe again. However, someone who is high up in nano circles agrees with me. What I referred to is only my opinion of how nano should first be achieved. I don't claim to be the first to have thought of this, nor by any stretch an expert, but here goes. I see no need for outrageous computing power such as AI to achieve beginning nanotech if we follow nature's, even human nature's, precedents. Ok, on to the nuts and bolts.


Trying to make each single nanobot capable of doing every job possible in the list of all possible jobs for nanobots would take huge amounts of computing power, yes? Nanobots capable of doing every job possible are in some circles also hoped to be capable of self replication, yes? In some circles the ability to self replicate could, if something goes terribly wrong, lead to gray goo meltdown and the destruction of everything, yes? With me so far? Any disagreement?


So don't make them capable of self replication. Make nanobots in a huge variety of species, each capable of only one or two jobs. Orchestrate them via an ant-like or human-military-like hierarchy template. Example: suppose we need to remove some potassium from something. We order, via squad leaders and lieutenants and captains, the potassium loving, K type, nanobots (perhaps potassium loving due to a lock and key shape of their manipulator arms) to do the one job the K types were built for. The squad leaders each oversee perhaps a dozen K type bots, perhaps more. These squad leaders handle navigation and maybe the when and how much issue. Or perhaps, yet higher up the hierarchy, lieutenant nanobots that are responsible for leading the squad leaders take on the responsibility of time and quantity values. Captains oversee the lieutenants, and so on, and I'm sure everyone gets the gist and can imagine further by now, so I'll spare us all further and redundant description. Obviously this method wouldn't take as much onboard computing as self replicators would for each individual bot. In fact, maybe only simple processing and radio or ultrasound communication, with processing from an external computing source, or, hold on now, maybe even one day from the onboard Artificially Augmented Intelligence (dare I coin it? AAI?) residing in one's skull. Easy enough?


Eugene, please forgive me for having stepped on your cherished memes, and forgive me please for here defending myself. Still friends? ;)


_______________________________________________________________________________________________

Robert Ettinger wrote:

> Patrick Swayze (#15875) has some good comments about savant capabilities and
> human improvability.


Thank you, though I wish to gently correct a common typo that sometimes occurs when people refer to me. I don't really mind; it is even flattering. You see, my distant cousin Patrick Swayze is the actor, and I'm just a normal peon named James, but I'm better looking. hehe ;) Patrick and I share a great great ....nth??, grandfather from the end of the 18th century. He was a wealthy land owner in New England and a Judge named Samuel Swayze, Sr. Patrick is from the Samuel, Jr. line and I am from the line of Sam junior's brother Barnabas. I have no idea if we are 2nd, 3rd or what cousins, or how far those numbers apply, if at all. There is a family resemblance, however. In the movie "Dirty Dancing" that made him so popular, he is dancing at a party and beckons the girl to come dance with him by grinning widely and wagging his finger. That fat cheeks, no teeth showing, wide chipmunk grin is the same as mine.

> This is similar to some of my comments in MAN INTO
> SUPERMAN

Yes, yes! This is what I aspire to become and hope for all of us.

> However, this does not altogether avoid potential "singularity" or "spike"
> problems. When Damien Broderick said "all bets are off or moot" in event of a
> spike, I think he meant, at least in part, that there might be drastically
> new conditions of life and new outlooks, regardless of whether conscious
> computers are ruling the roost, and the results would be almost totally
> unpredictable.


My post was meant to focus on the dangerous aspect of Super AI as I knew it, machine AI, in the context of softening the singularity by taking an alternative, more human friendly approach. I seized upon the Super AI reference of Damien's to focus on. An extremely rapid rise of knowledge is certainly likely, but I feel Singularity is a poor term for it, as everyone knows this refers to infinities, and exponential functions never touch either zero in the low range nor infinity in the high range.

> But, to repeat myself, there is no assurance that the results of
> revolutionary ideas would spread like lightning. After all, there are
> physical as well as societal constraints.

Not to mention economics.


_______________________________________________________________________________________________

Scott Badger wrote:

> To James Swayze;
>
> First let me say that I don't believe this thread to
> be far off topic.

I agree but I erred on the side of caution.

> The singularity has a decent chance
> of occurring before the technology develops to revive
> cryonics patients and it would have an important
> impact on our goals as well as everyone else in the
> world for that matter.


If certain singularity scenarios occur, we likely don't have a snowball's chance in.... hehe. Couldn't resist. ;)

> I'm certainly not the most qualified to respond to
> your concerns about machine AI, but I think your
> making some rather remarkable assumptions when you
> suggest that a human-based AI will be much less
> dangerous.


I give it a better chance of being friendly than pure machine. But it's just a hunch.

> will develop before the
> general population has access to such a
> transformation.


I don't see it as an, forgive me, "event horizon" transformation. To keep from repeating myself, please see my references above in comments to Eugene regarding dissemination of augmentation technology.

> I would also expect that AI to quickly
> evolve into an entity as superior to the average human
> as the average human is to an insect.


Sure, if it occurred to only one, but there lies the difference in opinion. Again I refer to my above comments about dissemination of augmentation technology.

> The notion that
> that entity will retain its "humanity" simply because
> its origin was human is difficult to argue in my
> opinion.


Still, it has to be better odds than machine based. After all, why would a human based AI want to give up its memories so readily? This may indeed occur over time, or not, but what reason would there be for it to immediately shed its memory of humanness?

> Imagine you're the entity for a moment and the next

<required snipage for postability>

> don't assume the AI would be
> "lonely" or "sympathetic" or "ethical" by our
> standards. Those are human traits and are not likely
> to apply to a super AI. After all, how many traits do you share with ants?

<More snipage>

> So ... if you choose to avoid competition, what must
> you do? Perhaps you can try to prevent the ants from

<More snipage>

> To what lengths would you go to spare them? My real
> point is that you can't answer that question for a
> super AI because you aren't one and cannot know it's
> mind. It is unfathomable.


All agreed, if that were the case, but as I said, and will say differently here: if we all surf the wave, we can all rise with the tide.

> The singularity and the idea of creating a super AI,
> either machine-based or human-based, scares me,

Me too, obviously.

> but I
> don't see how it's not going to happen.

I do, but still, the odds are bad, are they not?

> Too many
> people around the world are working on artificial
> intelligence. The temptation will be too great to
> resist. Someone's going to flip that switch and when
> they do, for better or for worse, nothing is likely to
> be the same again.

All we can do is provoke debate. That's what we are doing here.


_______________________________________________________________________________________________

Brian Phillips wrote:

> James Swayze ,
>
>   Well said.


Thank you for the support. Whew! *wipes sweat from brow, nervously looking over at Eugene* ;)

>  I agree that Singularity-meisters tend to seem oddly theistic.

Sure do.

> I have yet to hear a really good reason why the Artificial Intelligence
> can't
> ride inside a "technogland" in the nervous system. Certainly it can be
> argued that one would want a chip the size of Luna, but still....


Not mobile enough for what I want to do. Of course someone might chime in and say, "How mobile is your 6 billion strong multiprocessor, James?". Well, not very, if we are talking about taking the entire computer on a long interstellar voyage. But maybe with the power we'll have, we'll be able to find the right and safe way to make a machine AI helper to take along that will be as large as energy economies allow.

>   This is one reason why I am putting all my energies towards understanding
> the Human side of the equation..the physiology and neurology and genetics,
> rather than be yet another programmer type.

Good. Being part of the solution as I see it.

>    People who work closely with computers tend to "personify" them, I wonder
> if those who create them tend to "spiritualize" them to the point of
> deification.


Super AI sure is deified. Common terms are Jupiter Brain, demigod, godlike superior to human intelligence, and on and on. Perhaps they need to take a serious look at their semantics. I have a theory based on the Sapir-Whorf theory of semantics, which regards differing perceptions of symbology and the difficulty of finding a meeting of minds between peoples of differing languages. It's basically garbage in equals garbage out. If one uses certain semantics to the exclusion of others, eventually it is difficult to find agreement over the same exact objects or concepts with others who have used differing semantics. To me this is why it is so difficult to get a creationist to see the same evidence for evolution the same way the scientist does. They've so finessed their memes through their own language of belief that they can't possibly see what the scientist sees, even when it's before their very eyes. Their paradigm is tinted. Perhaps this is so for the deification of machine AI.

>    We all have our blind spots.


I sure do, and I depend on my friends, you all included and you Eugene, to peel them off when I'm being obtuse.


_______________________________________________________________________________________________

Damien Broderick wrote:

> product of the singularity is super AI so powerful as to take on
> the role
> >of a near god.
>
> James, I think you're confusing two different issues (at least; maybe
> three). Will it happen? Will you like it happening, if so? Is there only
> one sort of It to happen?


Respectfully, Damien, I'll agree to mixing, but confusing? Not sure I'll buy that, but I'm listening.

> My claim: I think a technological singularity is on the cards, because it's
> a convergent outcome of a lot of different current inputs.


No disagreement. I should have better qualified my not wanting to buy the scary type statement. I merely wish to influence the coming outcome such that we rise with the coming tide.

> But it's not
> obvious to me what *kind* of spike it'll be, or how swiftly it will occur,
> and


It's not something anyone could specifically nail down, but we might be able to finesse it. At least I hope so.

> I certainly don't consider it part of the hypothesis that a *cold
> machine god* will be created.


But many in the very field working on it do. How about instead of the semantics *cold machine god* we insert *cold machine entity beyond our comprehension that sees us as parasites*?

> You just described a couple of possible Singularity outcomes. The first,
> intelligence amplification of existing humans and their coordination into a
> kind of emergent group mind, is... let's see... Spike version B ii (THE
> SPIKE, 2001, p. 327).


So in your writings Pan Human AI is possible, even probable, but even so we cannot, even with the power that would give us, exert any control over the dangerous uncertainties? I really need to get to reading your book I bought. ;) I am so bad about procrastination. I really hoped to see you at Asilomar and get it autographed.

> We can't *help* acting stupidly, by comparisons with the fast, commodious
> intelligences that will be here in 50 or 100 years.


Again I say, rise with the tide, and look to how technology now disseminates under economic influences. In 50 or 100 years we may well all be augmented and hence BE the Super AI ourselves.


_______________________________________________________________________________________________


Rhetorical question to everyone: Which would you rather be, super human yourselves or serf to super machine?

James Swayze
--
Some of our views are spacious
some are merely space--RUSH
