View Poll Results: Asimov's Laws a Requirement? Preferable?
Yes: 4 (50.00%)
No: 4 (50.00%)
Voters: 8. You may not vote on this poll
#41
Quote:
I think the real question is not whether this is feasible; there are over six billion of us walking around with just that kind of computer in our skulls. The more interesting question is what kind of intelligence you will create this way. In a lot of ways, we are who we are because of what came before us, so how can we tell what will result from feeding the budding AI on what we *think* is the best way to grow an intelligence? We might be breeding a true alien that remains completely incomprehensible to us. The problem shifts if it becomes intelligent enough to analyze and understand *us*, but then you have the bogeyman of an AI that's smarter than us: it will be able to talk to us, but we won't know what's going on inside of it. (An AI that we can fully analyze and understand is likely to not be very useful, though...unless you're breeding artificial insects.) That'll be a few interesting conversations, I think.
Gatac
__________________
Katy: Can I have the skill 'drive car off bridge and have parachute handy'? Justin: It's kind of a limited skill. Greg: Depends on how often you drive off bridges. - d02 Quotes
#42
Okay, although I concede this whole "self-evolving" thing might be the best bet for a self-aware computer, I assert that this is EXACTLY what you don't want to happen. Think of Deep Thought. He wasn't even fully hooked up and he already knew about rice pudding and income tax! What's to stop a self-teaching computer from reaching the point of "these dirty bags of mostly water are so self-contradictory that they're not worth obeying"? I have no problem with an assembly line robot being able to figure out the most efficient way to perform an assembly line task, but you don't just give a robot total Internet access and step back.
__________________
mudshark: Nate's just being...Nate. Zeke: It comes nateurally to him. mudshark: I don't expect Nate to make sense, really -- it's just a bad idea. Sa'ar Chasm on the 5M.net forum: Sit back, relax, and revel in the insanity. Adam Savage: I reject your reality and substitute my own! Hanlon's Razor: Never attribute to malice that which can be adequately explained by stupidity. Crow T. Robot: Oh, stop pretending there's a plot. Don't cheapen yourself further.
#43
Quote:
The main question it comes down to is this: is consciousness purely a function of the brain? On the face of it, yes, but think a little harder. If we take that to be the case, then which part of the brain is responsible, exactly? Is it somehow the case that 'consciousness' only happens when an animal with a big enough brain comes along? If so, why? Is consciousness instead not primarily a biological function, but more of a learned one? Or is our memory the primary factor? The point is, no-one really has a clue, or for that matter any idea how to find out. Conversely, perhaps it'll be attempts to create machine intelligences that'll give us some handle on how we ourselves think, but like I said, I doubt that it'll happen in any of our lifetimes.
As an aside, I find it interesting that no-one has thus far touched upon the ethics of creating machine intelligence. One of the things about Trek that has consistently bugged me in nearly every show is what an incredibly laissez-faire approach the otherwise fanatically ethical Federation takes toward the creation of artificial lifeforms (in the form of holograms, mostly). We had that whole thing with Data being judged to be 'human' legally, but what of Vic Fontaine and the EMH? To be fair, though, it's not something that much SF covers at all, but it seems like it should.
__________________
Mason: Luckily we at the Agency use a high-tech piece of software that will let us spot him instantly via high-res satellite images. Sergeant: You can? That's amazing! Mason: Yes. We call it 'Google Earth'. - Five Minute 24 S1 (it lives, honest!) "Everybody loves pie!" - Spongebob Squarepants
#44
Quote:
If computers/robots/machines/Tamagotchis ever reach the level of complexity that they can be self-aware, IMO they'd be as good as human, only with more batteries and less pooping. As such, anyone with a self-aware AI would pretty much be a parent, and some parents just suck. Others, though, are totally awesome. Good parents teach their kids about morals, responsibility, and all that other stuff that stops most humans from going BSI* and killing everyone. I guess I just don't see the robotic sentience issue as any different from organic sentience. *B = Bat, and I = Insane.
__________________
Church: I'm just worried, man, who knows if this stuff is contagious? For all we know Caboose could be next. Wake up tomorrow morning he's throwin' up, runnin' a huge fever, next thing you know he's bleeding out of his eyes 'cause his internal organs are liquifying. And I'm gonna be the one that has to hold his hand while he screams himself to death. That's not gonna be any fun. Caboose: I'm gonna go take a vitamin.
#45
A good point, following on from that, is that any machine intelligence would by necessity be patterned after our own; after all, what other model do we have?
__________________
Mason: Luckily we at the Agency use a high-tech piece of software that will let us spot him instantly via high-res satellite images. Sergeant: You can? That's amazing! Mason: Yes. We call it 'Google Earth'. - Five Minute 24 S1 (it lives, honest!) "Everybody loves pie!" - Spongebob Squarepants
#46
Um, self-aware robots? I thought this topic was about Asenion robots (viz. those that follow Asimov's laws). The biggest thing about Asimov robots is that they're objects. You can use them stupidly or evilly, or you can use them for productivity or comfort. An Asenion robot makes no decisions for itself - every single action is not only based on an order (or law), but can be mathematically predicted based on the situation and the nature of its active orders.
I've actually considered the value of another system of robot safety (a robot doesn't have morals, any more than a knife does) based on "standing orders." I'm not quite sure where I got the idea, but it's basically this - the robot's only intrinsic motivation is to follow orders. Now here's the neat part. Every robot is programmed to recognize all humans as having given a set of default orders, stuff like "don't harm me," "don't harm my property," et cetera. That's the basic idea of it. Anyone have their own ideas for robot security?
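The "standing orders" scheme above could be sketched in code. This is only a toy illustration of the idea that every human is treated as having already issued a set of default orders, and that the robot's sole motivation is order-following; the `Robot` class and the `violates` predicate are invented here, the latter standing in for whatever world-model the robot would actually use to predict an action's consequences:

```python
# Toy sketch of the "standing orders" robot-safety scheme.
# All names are invented for illustration.

# Every human is implicitly treated as having issued these, in priority order.
DEFAULT_STANDING_ORDERS = [
    ("do not harm me", 0),
    ("do not harm my property", 1),
]

class Robot:
    def __init__(self):
        # Standing orders apply to all humans and carry the highest priority.
        self.orders = [(text, prio, "standing")
                       for text, prio in DEFAULT_STANDING_ORDERS]

    def receive_order(self, text, priority=10):
        # Explicit orders rank below the implicit standing orders,
        # so a casual command can never override the defaults.
        self.orders.append((text, priority, "explicit"))

    def permitted(self, action, violates):
        # An action is allowed only if it violates no active order.
        # `violates(action, order_text)` is a stand-in predicate for the
        # robot's model of an action's consequences.
        ranked = sorted(self.orders, key=lambda o: o[1])
        return all(not violates(action, text) for text, _, _ in ranked)
```

For example, with a trivial `violates` predicate, `permitted("fetch the toolbox", violates)` passes while an action flagged as harming a human is refused, which captures the post's point: every decision is deterministic given the situation and the set of active orders.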
__________________
Currently in the works - Five Minute EXE Axess (indefinitely on hold for no reason), a bunch of random stories, Five Minute Starforce?
#47
Well, I think Asimov's rigidly-constructed robots are possible, but only after we've used self-evolving systems to get a better understanding of how workable AI organises itself. I'm sure military and government contractors will take an Asimov model (after all, they are completely predictable, or should be), but the real world needs a cheaper, faster and smarter solution, even if it comes with some risks.
I like the idea of standing orders, especially since the "human" in Asimov's Laws should really be corrected to "sapient being".
Gatac
__________________
Katy: Can I have the skill 'drive car off bridge and have parachute handy'? Justin: It's kind of a limited skill. Greg: Depends on how often you drive off bridges. - d02 Quotes
#48
One wonders what amounts to a valid Turing Test in the 24th century. I think that a key requirement would be the creation of a process that the computer didn't already know.
__________________
mudshark: Nate's just being...Nate. Zeke: It comes nateurally to him. mudshark: I don't expect Nate to make sense, really -- it's just a bad idea. Sa'ar Chasm on the 5M.net forum: Sit back, relax, and revel in the insanity. Adam Savage: I reject your reality and substitute my own! Hanlon's Razor: Never attribute to malice that which can be adequately explained by stupidity. Crow T. Robot: Oh, stop pretending there's a plot. Don't cheapen yourself further.
#49
Actually, I think the chief question is, where does "cheating the Turing test" end and "actually being sapient" begin?
Gatac
__________________
Katy: Can I have the skill 'drive car off bridge and have parachute handy'? Justin: It's kind of a limited skill. Greg: Depends on how often you drive off bridges. - d02 Quotes
#50
When it can't be turned off.
__________________
"Please, Aslan," said Lucy, "what do you call soon?" "I call all times soon," said Aslan; and instantly he vanished away and Lucy was alone with the Magician.
#51
I can turn off any sapient being; the matching tool is called "gun".
Gatac
__________________
Katy: Can I have the skill 'drive car off bridge and have parachute handy'? Justin: It's kind of a limited skill. Greg: Depends on how often you drive off bridges. - d02 Quotes
#52
There's another way, of course...
__________________
Methinks Ted Sturgeon was too kind. 'Yes, but I think some people should be offended.' -- John Cleese (on whether he thought some might be offended by Monty Python)
#53
Okay, dead does not equal inactive.
Define sentience.
__________________
mudshark: Nate's just being...Nate. Zeke: It comes nateurally to him. mudshark: I don't expect Nate to make sense, really -- it's just a bad idea. Sa'ar Chasm on the 5M.net forum: Sit back, relax, and revel in the insanity. Adam Savage: I reject your reality and substitute my own! Hanlon's Razor: Never attribute to malice that which can be adequately explained by stupidity. Crow T. Robot: Oh, stop pretending there's a plot. Don't cheapen yourself further.
#54
Quote:
How about this: "When it isn't affected."
__________________
"Please, Aslan," said Lucy, "what do you call soon?" "I call all times soon," said Aslan; and instantly he vanished away and Lucy was alone with the Magician.
#55
Ah. I concede that point, then.
Still, going transhuman, won't we be able to make a human brain capable of safe shutdown and restart? Admittedly, this will likely involve cybernetic implants or, at the very least, advanced medical treatment, and even then it'll probably be metastable. (Cryogenics and whatnot.)
Gatac
__________________
Katy: Can I have the skill 'drive car off bridge and have parachute handy'? Justin: It's kind of a limited skill. Greg: Depends on how often you drive off bridges. - d02 Quotes
#56
Talk about drifting topics...
Okay, cryogenics/carbonite/instant dehydration cubes and whatnot are topics for their own thread. This thread is about robotics and machine intelligence. Last time I checked, the poll was fifty-fifty. Any comments? Expected? Unexpected? Surprising? Not surprising?
__________________
mudshark: Nate's just being...Nate. Zeke: It comes nateurally to him. mudshark: I don't expect Nate to make sense, really -- it's just a bad idea. Sa'ar Chasm on the 5M.net forum: Sit back, relax, and revel in the insanity. Adam Savage: I reject your reality and substitute my own! Hanlon's Razor: Never attribute to malice that which can be adequately explained by stupidity. Crow T. Robot: Oh, stop pretending there's a plot. Don't cheapen yourself further.
#57
I suppose on the face of it it may be taken as surprising that a bunch of nerds like us wouldn't overwhelmingly say "yes", but then anyone who actually knows this site knows what a fractious bunch we really are.
__________________
Mason: Luckily we at the Agency use a high-tech piece of software that will let us spot him instantly via high-res satellite images. Sergeant: You can? That's amazing! Mason: Yes. We call it 'Google Earth'. - Five Minute 24 S1 (it lives, honest!) "Everybody loves pie!" - Spongebob Squarepants
#58
No, we aren't.
__________________
"Please, Aslan," said Lucy, "what do you call soon?" "I call all times soon," said Aslan; and instantly he vanished away and Lucy was alone with the Magician.
#59
2/5 of us are.
__________________
The first run-through of any experimental procedure is to identify any potential errors by making them.
#60
Fractious? The people on this site are about as fractious as two sleeping Trakenites.
In any case, I think this discussion has reached a state of decay, and we have come full circle to the questions I raised on page one, myself...
__________________
O to be wafted away From this black aceldama of sorrow; Where the dust of an earthy today Is the earth of a dusty tomorrow!