X-Message-Number: 18829
Date: Sun, 17 Mar 2002 16:31:18 -0800
From: "John Grigg" <>
Subject: Attention!!: Steve Jackson of SJ Games

Steve,

*The following is eyes only for Steve Jackson!*


Please forgive me for using Cryonet to contact you, but it is the only way I
knew of to be sure of getting your attention.


Anders Sandberg and several other bright extropians have recently played with
the idea of a Singularity card game.  A lot of thought has gone into how the
game might work.  I hope you will consider taking on the project.


Please send a quick email acknowledgement to let me know you have received this.
And if you decide to do this, may I have a free autographed copy of the game 
from you? : )


AND as for those of you who went ahead and read this post, you are now obligated
to buy the game once it comes out!! ; )

John

Singularity Card Game Alpha Test
From: Adrian Tymes ()
Date: Sun Mar 03 2002 - 22:28:40 MST 

http://www.wingedcat.org/singulcard/ 

Feel free to forward this URL to anyone else that may be interested. 
If I get enough suggestions - *especially* ideas on how to fill up the 
card list - I think I know someone who could get this tested and 
published (Cheap-Ass Games, or a similar publisher). But that won't 
happen with just four cards on the list; I'd need at least forty (or 
somewhere in that neighborhood) before I'd approach them with this. 

Re: Singularity Card Game Alpha Test
From: Anders Sandberg ()
Date: Tue Mar 05 2002 - 06:55:28 MST 

On Sun, Mar 03, 2002 at 09:28:40PM -0800, Adrian Tymes wrote: 
> Someone asked for the rules to this. Like I said, there were no formal 
> rules, but here's my best shot at making some up. 
> 
> http://www.wingedcat.org/singulcard/ 
> 
> Feel free to forward this URL to anyone else that may be interested. 
> If I get enough suggestions - *especially* ideas on how to fill up the 
> card list - I think I know someone who could get this tested and 
> published (Cheap-Ass Games, or a similar publisher). But that won't 
> happen with just four cards on the list; I'd need at least forty (or 
> somewhere in that neighborhood) before I'd approach them with this. 


I like the basic simplicity of the game, a bit like Nim. The
question is whether to keep it simple and clean, or to make it more
complex but also more closely tied to real ideas about technological
futures.


My main problem with it is the technological and economic 
assumptions. Is the singularity really a goal in itself, and why 
would a disaster automatically happen if the points are unbalanced? 
It seems to assume technology is developed because somebody just 
decides to, and that things can be perfectly banned. I would like to 
make the game a bit more Illuminati-like, even if this may lose 
some of the elegance. 


What about this: players have "Money" (money/power/knowhow) that can 
be invested. You get a certain amount of money for playing certain 
cards (representing that you invented something, gained a temporary 
monopoly or became the big expert in the field), and pay money for 
certain cards (like bans and unbans). This means that it might be 
tempting to play a powerful card, even if the risks increase. In 
fact, you may have to do that in order to get enough money to affect 
a ban. 


Bans cost money, proportional to the importance of the technology or 
how hard it is to ban - banning basement technology is inherently 
more expensive than banning nuclear weapons. Some cards make banning 
inherently harder (like the Internet), some easier (like Global Law 
Enforcement). They might be not just bans, but regulations and other 
"softer measures". 


The goal should be to make the situation such that everybody can 
profit from developing dangerous technology, but that cooperation is 
necessary to uphold bans. Some technologies that are very useful 
(like nanoimmune systems) also require dangerous technologies. 


The game ends in a disaster if too many risk points are accumulated.
For example, Nano-manipulators have just +1 Nano risk (they are in 
themselves not very risky) while Desktop Fabrication has perhaps +3 
risk (it is an application) and Free Replicators +10 risk. Risk can 
be reduced by developing safeguard technologies such as Nanoimmune 
Systems (-10 Nano Risk) and policies such as Secure Nanolabs (-2 
Nano Risk). So the race is to gain enough points while not allowing 
the risk to grow too large. 


I would like to add a fourth kind of risk: Social risk. While 
technologies may cause other risks, bans make society more 
dangerous. If the Social risk goes too high, we end up with a social 
disaster. The risks can easily be represented by markers moved along 
a line. 
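

A sketch of the four risk tracks as markers moved along a line, using
the example numbers above; the disaster threshold is an assumed
placeholder:

RISK_TRACKS = {"Bio": 0, "Nano": 0, "Robo": 0, "Social": 0}
DISASTER_AT = 20  # assumed threshold, to be balanced in playtesting

def apply_risk(effects):
    # effects, e.g. {"Nano": +10} for Free Replicators or
    # {"Nano": -10} for Nanoimmune Systems.
    for track, delta in effects.items():
        RISK_TRACKS[track] = max(0, RISK_TRACKS[track] + delta)
        if RISK_TRACKS[track] >= DISASTER_AT:
            print(track, "disaster - the game ends")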


Each turn, a card is also drawn from the deck and played by the
"world" - representing events and developments not foreseen by the
players. If it is not a valid play it vanishes, but if the
technology or event can legally be played, it is played.
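

In sketch form (the card format and the legality test here are
assumptions, not settled rules):

import random

def world_turn(deck, in_play, banned):
    # The world draws one card. If it is a legal play (prerequisites
    # already in play, not banned) it is played; otherwise it vanishes.
    card = deck.pop(random.randrange(len(deck)))
    legal = (card["name"] not in banned and
             all(p in in_play for p in card.get("requires", ())))
    if legal:
        in_play.add(card["name"])
    return card if legal else None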



Some possible cards based on this (I have not included points 
seriously here, since I'm more interested in general ideas of 
contents and gameplay): 


Tech: Internet: +2 Robo, +2 Social Risk. Ban cost: 10 


Tech: Ubiquitous Law Enforcement: +10 Social Risk, +5 Money, 
requires Distributed Processing and Nanosurveillance. Ban cost: 5 


Tech: Distributed Processing: +1 Robo, +2 Money, requires Internet. 
Ban cost: 5 


Tech: Friendly AI: +4 Robo, -5 Robo Risk, +5 Money. Requires 
Cognitive Engineering. Ban cost: 5 


Tech: Free Replicators: +5 Nano, +5 Money, +10 Nano Risk. Ban cost: 7


Tech: Nanoimmune Systems: +3 Nano, +5 Money, -10 Nano Risk, requires 
Nanomedicine and Distributed Processing. Ban cost: 7 


Tech: Space Habitat: Decreases all risks by 5, -10 Money. Ban cost: 2


Unban: Treaty Defection: Unbans a technology, and gives the player 
the money on the card. 


Unban: Freedom of Speech Lawsuit: Unbans a technology, -10 Money


Ban: Oversight Committee: Halves the risk of the controlled 
technology. -5 Money 


Event: Secure Nanolabs Protocol: -2 Nano Risk, -5 Money 


Event: Global Law Enforcement: Halves the cost of bans, +5 Social Risk


Event: Unexpected Synergy: Increase all points and risks by 3. 


Event: Replicator Bug: If Free Replicators are in play, increase their 
Nano Risk by 2. 


Event: High Tech Terrorist: If played by the world while the Bio or
Nano Risk is within 5 points of disaster, the game ends in disaster
(otherwise the card has no effect). If played by a player under the
same conditions, that player can destroy the other players and win.
Under other conditions it may be played to provide a free ban of any
technology.
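

For playtesting, cards like these could be encoded as plain data. A
sketch of a few of the above as Python dicts, using the same
illustrative (unbalanced) numbers:

CARDS = [
    {"kind": "tech", "name": "Internet",
     "points": {"Robo": 2}, "risk": {"Social": 2}, "ban_cost": 10},
    {"kind": "tech", "name": "Distributed Processing",
     "points": {"Robo": 1}, "money": 2,
     "requires": ["Internet"], "ban_cost": 5},
    {"kind": "tech", "name": "Free Replicators",
     "points": {"Nano": 5}, "money": 5,
     "risk": {"Nano": 10}, "ban_cost": 7},
    {"kind": "tech", "name": "Nanoimmune Systems",
     "points": {"Nano": 3}, "money": 5, "risk": {"Nano": -10},
     "requires": ["Nanomedicine", "Distributed Processing"],
     "ban_cost": 7},
]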

Re: Singularity Card Game Alpha Test
From: Adrian Tymes ()
Date: Tue Mar 05 2002 - 14:32:05 MST 

Anders Sandberg wrote: 

> On Sun, Mar 03, 2002 at 09:28:40PM -0800, Adrian Tymes wrote: 
>>Someone asked for the rules to this. Like I said, there were no formal 
>>rules, but here's my best shot at making some up. 
>> 
>>http://www.wingedcat.org/singulcard/ 
>> 
>>Feel free to forward this URL to anyone else that may be interested. 
>>If I get enough suggestions - *especially* ideas on how to fill up the 
>>card list - I think I know someone who could get this tested and 
>>published (Cheap-Ass Games, or a similar publisher). But that won't 
>>happen with just four cards on the list; I'd need at least forty (or 
>>somewhere in that neighborhood) before I'd approach them with this. 
> 
> I like the basic simplicity of the game, a bit like Nim. The 
> question is whether to keep it simple and clean, or to make it more 
> complex but also more closely tied to real ideas about technological 
> futures. 



If you make it too complex, the target audience - unsophisticated 
people who have open minds - will shrug and pass on to the next 
distraction. The primary reason for creating this game is not the game 
itself, but memetics. Therefore, if we can find simple ways to 
incorporate ideas about the future (for instance, uploads and similar 
tricks == no more absolute robots vs. humans distinction), it may be a 
good addition - but too much complexity in game implementation is, 
itself, reason enough to reject a given idea for this particular game. 


Or, in short: eyes on the prize. 


> My main problem with it is the technological and economic 
> assumptions. Is the singularity really a goal in itself, and why 
> would a disaster automatically happen if the points are unbalanced? 
> It seems to assume technology is developed because somebody just 
> decides to, and that things can be perfectly banned. 



Ah, no. You're misreading things...but perhaps, if I explain, we can 
find a clearer way of stating these concepts. (I.e., that you read it 
as "perfect bans are possible" is a bug, just like in programming.) 


Technology cards, when played, represent the *commercialization* (or
other widespread deployment) of a technology. The actual, mere
development of a technology is a non-event as far as this game is
concerned; a tech card is played at the point where the tech starts
affecting Joe Q. Public. Likewise, bans - though not perfect - do
remove a technology from most people's lives, though the factories et
al. that produce the banned tech are merely mothballed...or, at least,
the information about how to set up said factories is still floating
around. Either way, once a technology has been deployed, it can and
will be redeployed once the laws against it are removed.


For example: cloning is in the media right now. But, can an average 
person purchase a clone at the moment? No way. Therefore, the Cloning 
card has not yet been played. Now, this might be a candidate for an 
Event - which cannot be undone, and which can slightly change the rules 
of the game. Which is why I set up said category. 


> I would like to 
> make the game a bit more Illuminati-like, even if this may lose 
> some of the elegance. 



Umm...it's a nice idea, but I fail to see how this implementation adds 
to the goal of the game. (See above.) 


> The goal should be to make the situation such that everybody can 
> profit from developing dangerous technology, but that cooperation is 
> necessary to uphold bans. Some technologies that are very useful 
> (like nanoimmune systems) also require dangerous technologies. 



Prerequisites are already explicitly allowed for. Adding in the profit 
motive taints the meme with "tech is developed so the rich get richer", 
not "tech is developed to make the world a better place". While they 
are both true, it is better to promote only the latter meme, not the 
former. 


I am, however, thinking of putting in an optional other action: Forfeit. 
"Your group needs to develop new technologies to stay relevant to the 
world. If it sleeps, it dies...and you lose the game automatically." 


> The game ends in a disaster if too many risk points are accumulated. 
> For example, Nano-manipulators have just +1 Nano risk (they are in 
> themselves not very risky) while Desktop Fabrication has perhaps +3 
> risk (it is an application) and Free Replicators +10 risk. Risk can 
> be reduced by developing safeguard technologies such as Nanoimmune 
> Systems (-10 Nano Risk) and policies such as Secure Nanolabs (-2 
> Nano Risk). So the race is to gain enough points while not allowing 
> the risk to grow too large. 



The point is that the knowledge itself, and the world's possession of 
it, *is* the risk. And the promise. The two are the same. 


> I would like to add a fourth kind of risk: Social risk. While 
> technologies may cause other risks, bans make society more 
> dangerous. If the Social risk goes too high, we end up with a social 
> disaster. The risks can easily be represented by markers moved along 
> a line. 



Problem: what is a disaster for some may be heaven for others. The 
world will not end if, say, America becomes a police state under martial 
law, even if it would suck. The world would recover from such a state 
within (current) human lifespans. 


> Each turn, a card is also drawn from the deck and played by the 
> "world" - representing events and developments not foreseen by the 
> players. If it is not a valid play it vanishes, but if the 
> technology or event can legally be played, it is played. 



That's a good idea. Makes the game a bit less predictable, thus adds 
more risk to playing brinksmanship with the Catastrophes. I'll add it. 


> Some possible cards based on this (I have not included points 
> seriously here, since I'm more interested in general ideas of 
> contents and gameplay): 



Points and dependencies will probably be balanced once we have a good 
set of cards. I've added in the non-social ones (though, for instance, 
"Space Habitat" seems more like an event: once you get a bunch of people 
living in orbit, trying to ban it would probably have no effect). 

Re: Singularity Card Game Alpha Test
From: Anders Sandberg ()
Date: Thu Mar 07 2002 - 07:45:06 MST 


On Tue, Mar 05, 2002 at 06:33:53PM -0800, Adrian Tymes wrote: 
> Anders Sandberg wrote: 
> 
> We can only pump so many memes into a single simple game. Perhaps 
> another one can speak to the realities of tech deployment. 


A good idea. Maybe I could try out writing my own version, if you don't 
mind? My version would likely deal less with the race towards 
singularity and more with the politics and economics of technology. 


> >Having a competition element is good for motivating people; the trick is 
> >to make the game a complex mixture of cooperation and competition. 
> 
> Hmm. Perhaps if we pump up the random element? That is, when it is 
> the world's turn to play, play as many cards as there are players. You 
> can pass on your turn if you want, shuffling your hand into the deck and 
> drawing a new hand...but the world will continue apace. You can either 
> enjoy the ride, or try to steer it towards your goal. For any given 
> technology or event, you may be given the opportunity to put it into 
> play...but if you don't, someone else eventually will. 


This is a good idea. Even if you are a die-hard luddite you are better 
off participating than just sitting back. Depending on the size of the 
deck, certain cards will likely not re-appear very often, so getting rid 
of them is semi-permanent. 


Hmm, maybe one could even play this game luddite-wise: you try to ban 
all technologies so that singularity *and* disaster are impossible. For 
a luddite win to happen all technologies enabling new stuff have to be 
banned, and these bans have to be upheld for a certain time. Sounds like 
a real challenge. 
  
> The problem here would seem to be that it gives too much of a chance 
> of world-caused Singularity or Catastrophe, which means a good chance 
> that no one can win. 


> >Remember the goal. If you claim this, then many players will realize 
> >that the sure way of winning the game is not to play the game. What are 
> >the benefits of the technologies? To the player there are none in the 
> >current form, just an increase of abstract numbers. Even a "Clinical 
> >Immortality" card doesn't cut it as a motivator, since it would maybe be 
> >a motivator for the humans in the game world, but does not per se mean 
> >much for the player. The singularity is just the end, and has no other 
> >meaning. Instead of promoting progress the message seems to be that 
> >increasing these technologies increases risk, and the singularity is 
> >just someone winning. That doesn't seem very extropian to me. 
> 
> Actually...re-read the conditions. When you have X points on average, Y 
> points will put you over the edge. When you have X*2 points, you need 
> Y*2 points. And any single card has the same value. Thus, deploying 
> new tech increases the tolerance, on average. 


But this seems to assume that "the solution to tech problems is more 
tech", and that the true goal should be some kind of balanced form of 
technology. Personally I don't see why you can't have a singularity 
based only on biotech (maybe something like _Blood Music_ or the 
Edenists of Peter F. Hamilton), AI or nanotech. This might be more of a 
game-design choice, of course - it is rather neat to have this kind of 
gliding threshold. 
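

Taken literally, that gliding threshold amounts to a margin that scales
with the average; a sketch, where the constant c is an assumed value:

def in_catastrophe(points, c=0.5):
    # Disaster when any one track leads the average by more than c
    # times that average: with average X the margin is c*X, with
    # average 2*X it is c*2*X, matching "X*2 points need Y*2 points".
    avg = sum(points.values()) / len(points)
    return any(v - avg > c * avg for v in points.values())

# e.g. in_catastrophe({"Bio": 10, "Nano": 10, "Robo": 16}) -> False
# (average 12, allowed margin 6, Robo leads by only 4)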


> >Perhaps a better way of handling it would have a "tolerance" level for 
> >the different points. It represents how much the world can handle, what 
> >institutions can deal with issues and how much people have adapted to 
> >it. The points must remain below the tolerance for the world to work; 
> >tolerance is increased using either safety technologies or perhaps 
> >direct payment of money (representing the building of institutions). To 
> >reach singularity the world needs to adapt. This seems to be closer to 
> >the memetic goal of the game and transhumanism. 
> 
> So why not just do pure tolerance (at least, as pure as you can get) 
> *and* suppress all new technologies? This seems to be just the same 
> abstract numbers you were objecting to. 


Sure, but given the assumptions about costs I had made, every player 
would eventually run out of money. Then they would not be able to 
prevent new tech from emerging. 
  
> >>>I would like to add a fourth kind of risk: Social risk. While 
> >>>technologies may cause other risks, bans make society more 
> >>>dangerous. If the Social risk goes too high, we end up with a social 
> >>>disaster. The risks can easily be represented by markers moved along 
> >>>a line. 
> >>> 
> >>Problem: what is a disaster for some may be heaven for others. The 
> >>world will not end if, say, America becomes a police state under martial 
> >>law, even if it would suck. The world would recover from such a state 
> >>within (current) human lifespans. 
> > 
> >The biotech gray goo scenario of Greg Bear's _Blood Music_ seems rather 
> >nice from my point of view - does this mean that we should regard the 
> >Bio/Nano disaster in the game as similarly relative? 
> 
> No, because the biotech gray goo you refer to is not the one I'm 
> referring to. Perhaps I should specify "mindless gray goo". 


Sure. But how is this different from the social risk described by Orwell 
as "If you want a picture of the future, imagine a boot stamping on a 
human face--for ever"? With ubiquitious law enforcement, paranoid 
culture and AI enforcement you could get it even if all AI is obedient, 
all biotech under control and the nanotech under lock and key. It might 
even be self-reinforcing and impossible to get rid of. It is the social 
version of gray goo, a permanently entrenched society that does not 
promote human growth. 


What I worry about in your system of bans is that it suggests that 
banning technologies is a good thing and that it does not carry any 
cost. If antibiotics or the Internet were banned, in the real world this 
would cause hundreds of thousands of deaths and billions in economic 
losses. In the game it would remove a few Bio or Robot points. 


> >I think it is 
> >important not to isolate the game from social reality. If it is intended 
> >to convey a transhumanist point of view it better demonstrate that 
> >technology alone isn't enough, we better make sure our culture isn't 
> >turned into something nasty as we advance. 
> 
> Again, define "something nasty". For any given permutation, some of the 
> audience will be predisposed to think it's actually a *good* thing...so 
> better to just avoid that topic entirely, no? 


It is your game, and you may do with it as you like, but I think leaving 
out issues like this would make it less interesting and actually less 
likely to spread the positive memes you would like to spread. We already 
have enough games and scenarios around where technological development 
is pursued for its own sake, and far too few that dare to look at how 
society interacts with technology. 


This is one of the most obvious weaknesses of transhumanism today, and 
many of our critics latch on to it: we do not integrate our 
technological visions with social visions, and that suggests that 
we do not care in the least about the rest of humanity, that we naively 
think technology is the only thing that matters, or that we have 
unsavory social visions we do not reveal. All three views are wrong, but 
we have to show them to be untrue ourselves. 

Re: Singularity Card Game Alpha Test
From: Adrian Tymes ()
Date: Thu Mar 07 2002 - 19:47:00 MST 

Anders Sandberg wrote: 
> On Tue, Mar 05, 2002 at 06:33:53PM -0800, Adrian Tymes wrote: 
>>We can only pump so many memes into a single simple game. Perhaps 
>>another one can speak to the realities of tech deployment. 
> 
> A good idea. Maybe I could try out writing my own version, if you don't 
> mind? My version would likely deal less with the race towards 
> singularity and more with the politics and economics of technology. 



Please, be my guest. (Not that I have any standing to grant permission 
on this anyway, but if you want my permission here, you've got it.) 


>>>Having a competition element is good for motivating people; the trick is 
>>>to make the game a complex mixture of cooperation and competition. 
>>> 
>>Hmm. Perhaps if we pump up the random element? That is, when it is 
>>the world's turn to play, play as many cards as there are players. You 
>>can pass on your turn if you want, shuffling your hand into the deck and 
>>drawing a new hand...but the world will continue apace. You can either 
>>enjoy the ride, or try to steer it towards your goal. For any given 
>>technology or event, you may be given the opportunity to put it into 
>>play...but if you don't, someone else eventually will. 
> 
> This is a good idea. Even if you are a die-hard luddite you are better 
> off participating than just sitting back. Depending on the size of the 
> deck, certain cards will likely not re-appear very often, so getting rid 
> of them is semi-permanent. 



Playing them is semi-permanent. Getting rid of them...you can shuffle 
them into the deck, but the only "discarded" things are Unbans and Bans 
once they cancel each other out. Reason: for any given tech, you may be 
given the opportunity to implement it...but if you pass, someone else 
eventually will. Your choice is now vs. later. 


> Hmm, maybe one could even play this game luddite-wise: you try to ban 
> all technologies so that singularity *and* disaster are impossible. For 
> a luddite win to happen all technologies enabling new stuff have to be 
> banned, and these bans have to be upheld for a certain time. Sounds like 
> a real challenge. 



Umm...actually, I'm deliberately limiting the number of Bans and Unbans 
to make this infeasible. Sit back, only banning stuff...you can do that 
for a while, but there's only so much of that kind of political capital 
floating around. If that's all you do, then eventually, you'll run out 
and the world will start progressing without you. 


Though a related possibility: try to invoke one of the three types of 
Catastrophes. Three players only, of course. 


> But this seems to assume that "the solution to tech problems is more 
> tech", and that the true goal should be some kind of balanced form of 
> technology. Personally I don't see why you can't have a singularity 
> based only on biotech (maybe something like _Blood Music_ or the 
> Edenists of Peter F. Hamilton), AI or nanotech. This might be more of a 
> game-design choice, of course - it is rather neat to have this kind of 
> gliding threshold. 



Problems: 
* Biotech only - ok, we've upgraded our bodies, but we're still limited 
   to that which we can produce biologically, or mine and refine by 
   relatively crude industrial processes. Our minds are not any more 
   advanced; neither do we have companions or aides much smarter than 
   ourselves to call upon. Life remains mostly predictable, though 
   much longer. 
* Nanotech only - without AI to control the nano, only crude processing 
   is possible. (Diamondoid space elevators? Sure. Nanites to repair 
   cellular damage? Maybe...and a single cell at a time, as controlled 
   by a person. Anything more complex than that? Nope.) And we, 
   ourselves, remain mostly unchanged from our current forms. Life 
   remains mostly predictable, though we do have more neat toys. 
* Robo/AI only - our bright children, the AIs, may theorize and 
   philosophize all they want...but without nanotech or biotech to 
   synthesize things, manufacturing costs mean their ideas for changing 
   the world take years to implement, just like Big Ideas do today, thus 
   limiting the pace of significant change. In addition, previously 
   existing humans cannot join the advanced intelligences, for we do not 
   know how to merge them and us, or how to make either side become the 
   other. Life for humans remains almost totally unchanged; even any 
   given AI does not usually experience radical change over the course of 
   a few days. 


The core concept is the purity of each type, to the neglect of the 
others. The solution to most problems is, in part, to find, develop, 
and apply new capabilities...which, in this context, translates to more 
tech to solve imbalances in previously deployed tech. 


I'm also thinking of perhaps more Event cards like High Tech Terrorist, 
embodying other ways that the world could destroy itself...unless 
society has deployed solutions to stop that way first. True, few people 
would want to play such a card deliberately...but the deck itself cares 
not for which cards come from it when it is the world's turn. 


>>>Perhaps a better way of handling it would have a "tolerance" level for 
>>>the different points. It represents how much the world can handle, what 
>>>institutions can deal with issues and how much people have adapted to 
>>>it. The points must remain below the tolerance for the world to work; 
>>>tolerance is increased using either safety technologies or perhaps 
>>>direct payment of money (representing the building of institutions). To 
>>>reach singularity the world needs to adapt. This seems to be closer to 
>>>the memetic goal of the game and transhumanism. 
>>> 
>>So why not just do pure tolerance (at least, as pure as you can get) 
>>*and* suppress all new technologies? This seems to be just the same 
>>abstract numbers you were objecting to. 
> 
> Sure, but given the assumptions about costs I had made, every player 
> would eventually run out of money. Then they would not be able to 
> prevent new tech from emerging. 



As implemented by a limited number of Ban and Unban cards. 


> Sure. But how is this different from the social risk described by Orwell 
> as "If you want a picture of the future, imagine a boot stamping on a 
> human face--for ever"? With ubiquitous law enforcement, paranoid 
> culture and AI enforcement you could get it even if all AI is obedient, 
> all biotech under control and the nanotech under lock and key. It might 
> even be self-reinforcing and impossible to get rid of. It is the social 
> version of gray goo, a permanently entrenched society that does not 
> promote human growth. 



What you describe is not so different from the robo Catastrophe: a 
mindless machine wiping out free humanity; the machine just happens to 
be composed of organic robots instead of metallic/plastic/ceramic ones. 
Given that AI would be required to effectively implement such a scheme 
worldwide, I'd say the robo Catastrophe more or less covers it...though 
perhaps I should note that explicitly. 


> What I worry about in your system of bans is that it suggests that 
> banning technologies is a good thing and that it does not carry any 
> cost. If antibiotics or the Internet were banned, in the real world this 
> would cause hundreds of thousands of deaths and billions in economic 
> losses. In the game it would remove a few Bio or Robot points. 



See above about game-ending Event cards. Banned technologies would not 
exist so far as said cards are concerned. 


>>>I think it is 
>>>important not to isolate the game from social reality. If it is intended 
>>>to convey a transhumanist point of view it better demonstrate that 
>>>technology alone isn't enough, we better make sure our culture isn't 
>>>turned into something nasty as we advance. 
>>> 
>>Again, define "something nasty". For any given permutation, some of the 
>>audience will be predisposed to think it's actually a *good* thing...so 
>>better to just avoid that topic entirely, no? 
> 
> It is your game, and you may do with it as you like, but I think leaving 
> out issues like this would make it less interesting and actually less 
> likely to spread the positive memes you would like to spread. We already 
> have enough games and scenarios around where technological development 
> is pursued for its own sake, and far too few that dare to look at how 
> society interacts with technology. 



The base target is a mere introduction to the concept of Singularity, and 
a mild blessing of it as a good target. Again, if you wish to develop a 
game that looks at the social realities of deploying certain specific 
technologies, go ahead. Such a game might be viewed as what happens 
when, in the SCG's abstraction, someone just plays a certain Technology 
card, or maybe a few cards. 


> This is one of the most obvious weaknesses of transhumanism today, and 
> many of our critics latch on to it: we do not integrate our 
> technological visions with social visions, and that suggests that 
> we do not care in the least about the rest of humanity, 



Some of us do, despite the efforts of those humans who see our visions 
as the biggest threat to life and limb currently in existence, and act 
accordingly. It's hard to care for those who are not only trying to kill 
you*, but to utterly discredit everything you believe in...but some of 
us manage it. 


* Not exaggerating here, BTW. I have received death threats over my 
beliefs in the past. That I do not receive them now, I attribute mostly 
to the fact that I have learned not to associate with people who would 
do such a thing. 


> that we naively 
> think technology is the only thing that matters 



Depends on how broadly or narrowly one defines "technology". Is it 
technology when one discovers a better way to live, if that way just 
happens to require certain modern inventions in order to be practical? 


> or that we have unsavory 
> social visions we do not reveal. 


See above. I envision life radically enhanced by new technologies, with 
material prices dropped through the floor compared to current levels, 
where anyone can modify their own bodies as they please, and with the 
ability for anyone to be beyond the reach of the law if they choose (so 
long as they do not harm anyone under the law's protection). This would 
classify me as a dangerous lunatic in any social circle that believes in 
keeping control over everyone, so prudence demands that I not reveal it 
in most cases. 

Re: Singularity Card Game Alpha Test
From: Anders Sandberg ()
Date: Fri Mar 08 2002 - 08:49:36 MST 

On Thu, Mar 07, 2002 at 06:47:00PM -0800, Adrian Tymes wrote: 
> >>We can only pump so many memes into a single simple game. Perhaps 
> >>another one can speak to the realities of tech deployment. 
> > 
> >A good idea. Maybe I could try out writing my own version, if you don't 
> >mind? My version would likely deal less with the race towards 
> >singularity and more with the politics and economics of technology. 
> 
> Please, be my guest. (Not that I have any standing to grant permission 
> on this anyway, but if you want my permission here, you've got it.) 


Well, if you have come up with a creative idea I would consider it rude to 
use it without asking. I don't know what to think about intellectual 
property, but I firmly believe in intellectual propriety. 


Overall, your "purist" version of the game seems to be developing quite 
nicely. It is starting to make some sense :-) 


> I'm also thinking of perhaps more Event cards like High Tech Terrorist, 
> embodying other ways that the world could destroy itself...unless 
> society has deployed solutions to stop that way first. True, few people 
> would want to play such a card deliberately...but the deck itself cares 
> not for which cards come from it when it is the world's turn. 


Also, the event cards may *have* to be played if no other valid card is in 
the hand. Which means that once you have such a card you cannot just put the 
hand back into the deck. Somebody got an anthrax bomb, and no matter what 
you do sooner or later it will crop up somewhere... 
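

As a sketch (the hand and card formats are assumed for illustration):

def forced_event(hand, in_play, banned):
    # If no other card in hand is a valid play, an event in the hand
    # *must* be played; the hand cannot simply be shuffled away.
    def valid(card):
        return (card["name"] not in banned and
                all(p in in_play for p in card.get("requires", ())))
    if any(valid(c) for c in hand if c["kind"] != "event"):
        return None  # the player still has a free choice this turn
    events = [c for c in hand if c["kind"] == "event"]
    return events[0] if events else None  # the anthrax bomb surfaces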


> >Sure. But how is this different from the social risk described by Orwell 
> >as "If you want a picture of the future, imagine a boot stamping on a 
> human face--for ever"? With ubiquitous law enforcement, paranoid 
> >culture and AI enforcement you could get it even if all AI is obedient, 
> >all biotech under control and the nanotech under lock and key. It might 
> >even be self-reinforcing and impossible to get rid of. It is the social 
> >version of gray goo, a permanently entrenched society that does not 
> >promote human growth. 
> > 
> What you describe is not so different from the robo Catastrophe: a 
> mindless machine wiping out free humanity; the machine just happens to 
> be composed of organic robots instead of metallic/plastic/ceramic ones. 
> Given that AI would be required to effectively implement such a scheme 
> worldwide, I'd say the robo Catastrophe more or less covers it...though 
> perhaps I should note that explicitly. 


Well, look at the original _1984_. That is the low-tech version, and in 
the appendix Orwell quite clearly describes how the Party is 
circumscribing further research in order not to get any surprises. 
Oceania is nowhere near any advanced technology, but is still aiming for 
an ultra-stable, humanity-crushing state. 


> The base target is a mere introduction to the concept of Singularity, and 
> a mild blessing of it as a good target. 


This is a good idea. However, the game as it is does not include 
perhaps the most important aspect of the singularity: its autofeedback. 
As far as I can see, the rate of tech development will not increase over 
time as the game is played. 

Re: Singularity Card Game Alpha Test
From: Mike Lorrey ()
Date: Fri Mar 08 2002 - 14:59:46 MST 

Anders Sandberg wrote: 
> 
> On Thu, Mar 07, 2002 at 06:47:00PM -0800, Adrian Tymes wrote: 
> 
> > I'm also thinking of perhaps more Event cards like High Tech Terrorist, 
> > embodying other ways that the world could destroy itself...unless 
> > society has deployed solutions to stop that way first. True, few people 
> > would want to play such a card deliberately...but the deck itself cares 
> > not for which cards come from it when it is the world's turn. 
> 
> Also, the event cards may *have* to be played if no other valid card is in 
> the hand. Which means that once you have such a card you cannot just put the 
> hand back into the deck. Somebody got an anthrax bomb, and no matter what 
> you do sooner or later it will crop up somewhere... 


This is a bit pessimistic, I think. There should be some sort of 
counteraction card, like "sanity" or "wisdom" that neutralizes negative 
event cards (just as you could have 'stupidity' or 'irrationality' cards 
to neutralize positive events...). 


This would reflect the real world a bit better, where you have balance 
and counterbalance in most situations, helping to attenuate the risks of 
unlikely situations. 

Re: Singularity Card Game Alpha Test
From: Adrian Tymes ()
Date: Fri Mar 08 2002 - 19:20:01 MST 

Mike Lorrey wrote: 

> Anders Sandberg wrote: 
>>On Thu, Mar 07, 2002 at 06:47:00PM -0800, Adrian Tymes wrote: 
>>>I'm also thinking of perhaps more Event cards like High Tech Terrorist, 
>>>embodying other ways that the world could destroy itself...unless 
>>>society has deployed solutions to stop that way first. True, few people 
>>>would want to play such a card deliberately...but the deck itself cares 
>>>not for which cards come from it when it is the world's turn. 
>>> 
>>Also, the event cards may *have* to be played if no other valid card is in 
>>the hand. Which means that once you have such a card you cannot just put the 
>>hand back into the deck. Somebody got an anthrax bomb, and no matter what 
>>you do sooner or later it will crop up somewhere... 
> 
> This is a bit pessimistic, I think. There should be some sort of 
> counteraction card, like "sanity" or "wisdom" that neutralizes negative 
> event cards (just as you could have 'stupidity' or 'irrationality' cards 
> to neutralize positive events...). 
> 
> This would reflect the real world a bit better, where you have balance 
> and counterbalance in most situations, helping to attenuate the risks of 
> unlikely situations. 



I'd argue against a *specific* card for a specific event: if, say, 
anthrax bombs are an instant Catastrophe that can only be prevented by 
Nanomedicine, but Nanomedicine happens to be at the bottom of the 
deck... 


Better to tie the counters to specific game conditions that can always 
be achieved, or at least striven for. For instance, staying close to 
the balance, or "accumulate X (bio/nano/robo) points in the next Y 
turns" (which may have to be balanced by other types, if the needed type 
is already relatively high). 
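

A condition like "accumulate X points of a type in the next Y turns" is
easy to check mechanically; a sketch, with the per-turn bookkeeping
format assumed for illustration:

def counter_met(gains, track, x, y):
    # gains is a list of per-turn point gains by track, e.g.
    # [{"Bio": 2}, {"Nano": 1, "Bio": 1}, ...], most recent last.
    return sum(turn.get(track, 0) for turn in gains[-y:]) >= x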

Re: Singularity Card Game Alpha Test
From: Adrian Tymes ()
Date: Fri Mar 08 2002 - 19:19:56 MST 

Anders Sandberg wrote: 

> On Thu, Mar 07, 2002 at 06:47:00PM -0800, Adrian Tymes wrote: 
>>>>We can only pump so many memes into a single simple game. Perhaps 
>>>>another one can speak to the realities of tech deployment. 
>>>> 
>>>A good idea. Maybe I could try out writing my own version, if you don't 
>>>mind? My version would likely deal less with the race towards 
>>>singularity and more with the politics and economics of technology. 
>>> 
>>Please, be my guest. (Not that I have any standing to grant permission 
>>on this anyway, but if you want my permission here, you've got it.) 
> 
> Well, if you have come up with a creative idea I would consider it rude to 
> use it without asking. I don't know what to think about intellectual 
> property, but I firmly believe in intellectual propriety. 



I suspect our difference is the level at which a work becomes 
derivative, and the permissibility of derivative works. I suspect my 
level can be approximated by the fact that I write fan fiction of 
certain published works for noncommercial recreation. ^_- 


> Overall, your "purist" version of the game seems to be developing quite 
> nicely. It is starting to make some sense :-) 



Finally. ^_^; 


>>I'm also thinking of perhaps more Event cards like High Tech Terrorist, 
>>embodying other ways that the world could destroy itself...unless 
>>society has deployed solutions to stop that way first. True, few people 
>>would want to play such a card deliberately...but the deck itself cares 
>>not for which cards come from it when it is the world's turn. 
> 
> Also, the event cards may *have* to be played if no other valid card is in 
> the hand. Which means that once you have such a card you cannot just put the 
> hand back into the deck. Somebody got an anthrax bomb, and no matter what 
> you do sooner or later it will crop up somewhere... 


...but you can play a card that will prevent its worst effects from 
taking place. (I've updated the rules, so now passing is always an 
option.) So, *you* don't have to play a bad card if you don't want to, 
but it will come up eventually. (Though you can squat on a card by 
playing other legit cards in your hand, so long as you have other legit 
cards. I wonder if this can be justified as "keeping those with access 
to X busy"?) 


>>What you describe is not so different from the robo Catastrophe: a 
>>mindless machine wiping out free humanity; the machine just happens to 
>>be composed of organic robots instead of metallic/plastic/ceramic ones. 
>>Given that AI would be required to effectively implement such a scheme 
>>worldwide, I'd say the robo Catastrophe more or less covers it...though 
>>perhaps I should note that explicitly. 
> 
> Well, look at the original _1984_. That is the low-tech version, and in 
> the appendix Orwell quite clearly describes how the Party is 
> circumscribing further research in order not to get any surprises. 
> Oceania is nowhere near any advanced technology, but is still aiming for 
> an ultra-stable, humanity-crushing state. 



Which is why, even though there are sentient minds still around, the 
robo Catastrophe stops play (and development of new tech and events) 
just like the other two types (which physically wipe out all sentient 
minds). 


>>The base target is a mere introduction to the concept of Singularity, and 
>>a mild blessing of it as a good target. 
> 
> This is a good idea. However, the game as it is does not include 
> perhaps the most important aspect of the singularity: its autofeedback. 
> As far as I can see, the rate of tech development will not increase over 
> time as the game is played. 



Good point. Hmm...what if the rate of the world's play depended on 
total accumulated (bio/nano/robo) points? That is, at start (say, 15 
total b/n/r points), it plays 1 when it's the world's turn; when there 
are at least 25 total b/n/r points, it plays 2, then 3 at 35, on up to 
Singularity or Catastrophe. Likewise, the players' hands shrink at 
certain levels (specifically, they skip drawing a card if their hand 
size is too large). 
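

In sketch form (the world's rate follows the numbers above; the
hand-size schedule is a guess):

def world_plays(total_points):
    # 1 world card at the starting 15 total bio/nano/robo points,
    # 2 at 25, 3 at 35: one extra play per 10 points past 15.
    return max(1, 1 + (total_points - 15) // 10)

def may_draw(hand_size, total_points):
    # Players skip their draw once their hand reaches a cap that
    # shrinks as the world speeds up.
    cap = max(3, 7 - world_plays(total_points))
    return hand_size < cap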


If this is used, then maybe Singularity happens when all (or almost all) 
the cards have been played? 