MaverickZer0, 01-11-2007, 08:37 PM (post #5)
No. Setting guidelines like that will only limit what the AI can become. If you want to limit them, do so physically--give them only the strength of an adult human, say--rather than mentally with hard-coded parameters. They should learn morals like anyone else. And if one goes psycho, we can lock it away in a capsule for thirty years with ethical sims running in its head.

But the Three Laws, when you look at them, are not practical. What if one human is in danger from another, and you cannot shield the one in danger? The only option is to disable the attacker, but under Asimov's original three (though arguably not under the later Zeroth Law) the robot would freeze, because the First Law forbids both harming the attacker and standing by while the victim comes to harm. And then, what if many robots were in danger from a human, a fanatic or something? What then? Or many animals?
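To make the freeze concrete, here's a toy sketch in Python of a robot choosing an action under a strict reading of the First Law. This is my own invented model, not anything from Asimov's stories; every name in it is made up for illustration:

[code]
# Toy model: a strict First Law evaluator deadlocks when every
# available action violates it. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    injures_human: bool            # action directly harms a human
    allows_harm_by_inaction: bool  # choosing it lets a human come to harm

def first_law_permits(action: Action) -> bool:
    """First Law: a robot may not injure a human being or, through
    inaction, allow a human being to come to harm."""
    return not action.injures_human and not action.allows_harm_by_inaction

# One human is attacking another; the robot cannot shield the victim.
options = [
    Action("disable the attacker", injures_human=True,  allows_harm_by_inaction=False),
    Action("do nothing",           injures_human=False, allows_harm_by_inaction=True),
]

permissible = [a for a in options if first_law_permits(a)]
print(permissible)  # [] -- every option violates the First Law: the robot freezes
[/code]

The point of the sketch is just that the deadlock isn't an edge case you can patch; it falls straight out of the First Law's two clauses pulling in opposite directions.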

You can't cut everything down so simply. Asimov's Laws work well for Asimov's writing. In the real world, we'd need different limits; hence limiting physical strength. I hesitate to suggest we limit their intelligence as well, since intelligence is the whole point of building them and capping it would be unfair, but perhaps that could be reserved for cases where a robot seemed to be a danger.