The Five-Minute Forums  

The Five-Minute Forums > FiveMinute.net > Science Fiction

View Poll Results: Asimov's Laws a Requirement? Preferable?
Yes 4 50.00%
No 4 50.00%
Voters: 8. You may not vote on this poll

#41 | 01-29-2007, 07:35 PM
Gatac (Man in the iron mask)
Member | Joined Feb 2004 | Magdeburg, Germany | 667 posts

Quote:
Originally Posted by PointyHairedJedi
Visions of Multivac, Colossus, Shalmaneser and Skynet are ultimately just fantasy, unrealisable because consciousness is not something that can be created, whole and complete, utterly constrained in everything it does by a set of arbitrarily imposed rules. Machines that think, if there ever are any, will be like us - blank slates that must be taught how to think from the ground up.
While I agree that creating consciousness is tricky, I believe it'll be relatively easy to "grow" once you have the hardware to get a decently-sized neural network going. Like I said, self-evolving systems have a lot of potential. Whether we can hardcode any "rules" is debatable, but there are already some suggestions, such as "benevolence", i.e. we must make sure that any intelligence we do end up creating is fundamentally friendly to mankind.

Quote:
Of course, I have my doubts as to whether we will ever manage such a feat, as first we must understand how consciousness works in humans. It's a problem that I don't think will be solved in any of our lifetimes - though we may attain a vastly more complete understanding of the functioning of our brains, that won't tell us much about self-consciousness and free will. We may get machines that can learn how to do a few things, but I doubt there will ever be anything that has the amazing capacity and range of the human mind.
The "ever" is a dangerous thing. Saying something can't be done because you can't do it is shortsighted at best. Also, I know that it is possible to create something intelligent - after all, there's, you know, *us*. Once we understand the chemistry of neurons well enough to emulate one, and manage to pack enough nodes into a system that they are roughly equal to the number of neurons we have, we should be able to start writing the first primitive self-evolving software and have it undergo rapid evolution towards more complex behaviour.
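For what it's worth, that "primitive self-evolving software" idea can be sketched in a few lines. This is a toy hill-climbing loop - every name and the fitness function here are invented for illustration, and real neuron emulation would be unimaginably more complex - that mutates a small set of weights, keeps the mutations that score better, and repeats.

```python
import random

def evolve(fitness, n_weights=4, generations=200, seed=42):
    """Toy 'self-evolving' loop: mutate weights, keep improvements."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(n_weights)]
    best_score = fitness(best)
    for _ in range(generations):
        child = [w + rng.gauss(0, 0.1) for w in best]  # random mutation
        score = fitness(child)
        if score > best_score:                         # selection: keep improvements
            best, best_score = child, score
    return best, best_score

# Example "behaviour" to evolve towards: match a target weight pattern.
target = [0.5, -0.25, 0.75, 0.0]
fit = lambda w: -sum((a - b) ** 2 for a, b in zip(w, target))
weights, score = evolve(fit)
```

Scale the four weights up to billions of nodes and swap the toy fitness function for interaction with the real world, and you have the (wildly handwaved) shape of the argument.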

I think the real question is not whether this is feasible - there are over six billion of us walking around with just that kind of computer in our skulls - but what kind of intelligence you will create this way. In a lot of ways, we are who we are because of what came before us - how can we tell what will result from feeding the budding AI on what we *think* is the best way to grow an intelligence? We might be breeding a true alien that remains completely incomprehensible to us - the problem shifts if it becomes intelligent enough to analyze and understand *us*, but then you have the bogeyman of an AI that's smarter than we are. It will be able to talk to us, but we won't know what's going on inside of it. (AI that we can fully analyze and understand is likely to not be very useful, though...unless you're breeding artificial insects.)

That'll be a few interesting conversations, I think.

Quote:
Clearly you have never seen Mars Attacks!.
Oh, I have - it's one of my favorite movies. However, we cannot assume that alien malefactors have any specific anti-nuclear technology and just give up, because nukes are still the most destructive weapons we have and therefore our best shot. If nukes don't work, we're probably screwed. I fully blame Hollywood for the "aliens are immune to nuclear weapons" trope - we can't count on some deus ex machina weakness in the attackers, so our best bet is brute force, and nukes are the best brute-force weapons we have, plus we have a lot of them. If the aliens do happen to have a weakness we can exploit (and that seems reasonable; all the portrayals of superior aliens in SF have dulled us to the fact that a human is an amazingly tough and resourceful animal, likely to be far superior in at least one respect over the aliens) - more power to us.

Gatac
__________________
Katy: Can I have the skill 'drive car off bridge and have parachute handy'?
Justin: It's kind of a limited skill.
Greg: Depends on how often you drive off bridges.
- d02 Quotes
#42 | 01-29-2007, 09:05 PM
Nate the Great (You just activated his Trek card)
Member | Joined Mar 2004 | Minneapolis, MN | 4,859 posts

Okay, although I concede this whole "self-evolving" thing might be the best bet for a self-aware computer, I assert that this is EXACTLY what you don't want to happen. Think of Deep Thought. He wasn't even fully hooked up and he already knew about rice pudding and income tax! What's to stop a self-teaching computer from reaching the point of "these dirty bags of mostly water are so self-contradictory that they're not worth obeying"? I have no problem with an assembly line robot being able to figure out the most efficient way to perform an assembly line task, but you don't just give a robot total Internet access and step back.
__________________
mudshark: Nate's just being...Nate.
Zeke: It comes nateurally to him.

mudshark: I don't expect Nate to make sense, really -- it's just a bad idea.

Sa'ar Chasm on the 5M.net forum: Sit back, relax, and revel in the insanity.

Adam Savage: I reject your reality and substitute my own!

Hanlon's Razor: Never attribute to malice that which can be adequately explained by stupidity.

Crow T. Robot: Oh, stop pretending there's a plot. Don't cheapen yourself further.
#43 | 01-29-2007, 11:28 PM
PointyHairedJedi (He'd enjoy a third pie)
Member | Joined Mar 2003 | The Scotlands | 4,354 posts

Quote:
Originally Posted by Gatac
The "ever" is a dangerous thing. Saying something can't be done because you can't do it is shortsighted at best. Also, I know that it is possible to create something intelligent - after all, there's, you know, *us*. Once we understand the chemistry of neurons well enough to emulate one and manage to pack in enough nodes into a system that they are roughly equal to the amount of neurons we have, we should be able to start writing the first primitive self-evolving software and have it undergo rapid evolution towards more complex behaviour.
I concede your point on "ever". However, you're making the assumption that neurons+chemistry=sentience (or their digital equivalents). Neuroscience has discovered many things to date about the way our brains work - from the broad sweep of which regions influence what, to the small end of the scale like neurotransmitter chemistry and the function of different cell types. But, and this is the huge but, none of it tells us anything more about consciousness than philosophy has been able to in the last two and a half thousand years.

The main question it comes down to is this: is consciousness purely a function of the brain? On the face of it, yes, but think a little harder. If we take it to be the case, then which part of the brain is responsible exactly? Is it somehow the case that 'consciousness' only happens when an animal with a big enough brain comes along? If so, why? Is consciousness instead not primarily a biological function, but more of a learned one? Or is our memory the primary factor?

The point is, no-one really has a clue, or for that matter any idea how to find out. Conversely, perhaps it'll be attempts to create machine intelligences that'll give us some handle on how we ourselves think, but like I said, I doubt that it'll happen in any of our lifetimes.


As an aside, I find it interesting that no-one has thus far touched upon the ethics of creating machine intelligence. One of the things about Trek that has consistently bugged me in nearly every show is what an incredibly laissez-faire approach the otherwise fanatically ethical Federation takes toward the creation of artificial lifeforms (in the form of holograms, mostly). We had that whole thing with Data being judged to be 'human' legally, but what of Vic Fontaine and the EMH? To be fair though, it's not something that much SF covers at all, but it seems like it should.
__________________
Mason: Luckily we at the Agency use a high-tech piece of software that will let us spot him instantly via high-res satellite images.
Sergeant: You can? That's amazing!
Mason: Yes. We call it 'Google Earth'.
- Five Minute 24 S1 (it lives, honest!)

"Everybody loves pie!"
- Spongebob Squarepants
#44 | 01-30-2007, 03:00 AM
Hejira (Regenerating like a Phoenix)
Member | Joined Mar 2003 | 160 posts

Quote:
Originally Posted by Infinite Improbability
Okay, although I concede this whole "self-evolving" thing might be the best bet for a self-aware computer, I assert that this is EXACTLY what you don't want to happen. Think of Deep Thought. He wasn't even fully hooked up and he already knew about rice pudding and income tax! What's to stop a self-teaching computer from reaching the point of "these dirty bags of mostly water are so self-contradictory that they're not worth obeying"? I have no problem with an assembly line robot being able to figure out the most efficient way to perform an assembly line task, but you don't just give a robot total Internet access and step back.
You don't give a human child total Internet access and step back, either.

If computers/robots/machines/Tamagotchis ever reach the level of complexity that they can be self-aware, IMO they'd be as good as human, only with more batteries and less pooping. As such, anyone with a self-aware AI would pretty much be a parent, and some parents just suck. Others, though, are totally awesome.

Good parents teach their kids about morals, responsibility, and all that other stuff that stops most humans from going BSI* and killing everyone. I guess I just don't see robotic sentience as any different from organic sentience.

*B = Bat, and I = Insane.
__________________
Church: I'm just worried, man, who knows if this stuff is contagious? For all we know Caboose could be next. Wake up tomorrow morning he's throwin' up, runnin' a huge fever, next thing you know he's bleeding out of his eyes 'cause his internal organs are liquifying. And I'm gonna be the one that has to hold his hand while he screams himself to death. That's not gonna be any fun.
Caboose: I'm gonna go take a vitamin.
#45 | 01-30-2007, 10:32 AM
PointyHairedJedi (He'd enjoy a third pie)
Member | Joined Mar 2003 | The Scotlands | 4,354 posts

A good point, following on from that, is that any machine intelligence would by necessity be patterned after our own; after all, what other model do we have?
#46 | 01-30-2007, 06:15 PM
GreenFire1 (Mirai heto Navi wo tore!)
Member | Joined Sep 2006 | 32 posts

Um, self-aware robots? I thought this topic was about Asenion robots (viz. those that follow Asimov's laws). The biggest thing about Asimov robots is that they're objects. You can use them stupidly or evilly, or you can use them for productivity or comfort. An Asenion robot makes no decisions for itself - every single action is not only based on an order (or law), but can be mathematically predicted based on the situation and the nature of its active orders.
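That "mathematically predicted" property is easy to illustrate with a toy model (the flags and names here are invented, and real "harm" is nothing like a boolean): rank every candidate action by Law violations, strictly First over Second over Third, and the robot's choice is fully deterministic.

```python
def choose_action(candidates):
    """Pick the action an Asenion robot takes: lexicographic ordering by
    (First Law, Second Law, Third Law) violations -- no free choice."""
    def law_key(action):
        return (action["harms_human"],         # 1st Law: never injure a human
                not action["obeys_order"],     # 2nd Law: obey orders given
                not action["preserves_self"])  # 3rd Law: protect own existence
    return min(candidates, key=law_key)

options = [
    {"name": "shove the human", "harms_human": True,  "obeys_order": True,  "preserves_self": True},
    {"name": "stand idle",      "harms_human": False, "obeys_order": False, "preserves_self": True},
    {"name": "fetch the tool",  "harms_human": False, "obeys_order": True,  "preserves_self": False},
]
best = choose_action(options)  # picks "fetch the tool"
```

Given the same situation and the same active orders, the same action always comes out - which is exactly the "object, not agent" point.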

I've actually considered the value of another system of robot safety (a robot doesn't have morals, any more than a knife does) based on "standing orders." I'm not quite sure where I got the idea, but it's basically this - the robot's only intrinsic motivation is to follow orders. Now here's the neat part. Every robot is programmed to recognize all humans as having given a set of default orders, stuff like "don't harm me," "don't harm my property," et cetera. That's the basic idea of it. Anyone have their own ideas for robot security?
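A minimal sketch of that standing-orders scheme (the structure and names are invented for illustration): recognising a human implicitly registers their default orders, and explicit orders stack on top.

```python
DEFAULT_ORDERS = ("do not harm me", "do not harm my property")

class Robot:
    """Toy robot whose only intrinsic motivation is its order list."""

    def __init__(self):
        self.orders = []                      # (human, order text), oldest first

    def recognise(self, human):
        # Seeing a human counts as receiving their standing orders.
        for text in DEFAULT_ORDERS:
            self.orders.append((human, text))

    def give_order(self, human, text):
        self.orders.append((human, text))

    def active_orders(self):
        return [text for _, text in self.orders]

r = Robot()
r.recognise("Alice")
r.give_order("Alice", "carry this crate")
```

The safety property is that the defaults exist before any explicit order is ever spoken - the robot never starts from a blank, order-free state around humans.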
__________________
Currently in the works - Five Minute EXE Axess (indefinitely on hold for no reason), a bunch of random stories, Five Minute Starforce?
#47 | 01-30-2007, 06:41 PM
Gatac (Man in the iron mask)
Member | Joined Feb 2004 | Magdeburg, Germany | 667 posts

Well, I think Asimov's rigidly-constructed robots are possible, but only after we've used self-evolving systems to get a better understanding of how workable AI organises itself. I'm sure military and government contractors will take an Asimov model - after all, they are completely predictable, or should be - but the real world needs a cheaper, faster and smarter solution, even if it comes with some risks.

I like the idea of standing orders. Especially since the "human" in Asimov's Laws should really be corrected to "sapient being".

Gatac
#48 | 01-30-2007, 07:56 PM
Nate the Great (You just activated his Trek card)
Member | Joined Mar 2004 | Minneapolis, MN | 4,859 posts

One wonders what amounts to a valid Turing Test in the 24th century. I think that a key requirement would be the creation of a process that the computer didn't already know.
#49 | 02-07-2007, 12:55 PM
Gatac (Man in the iron mask)
Member | Joined Feb 2004 | Magdeburg, Germany | 667 posts

Actually, I think the chief question is, where does "cheating the Turing test" end and "actually being sapient" begin?

Gatac
#50 | 02-07-2007, 07:22 PM
Derek (Dean of misderektion)
Senior Staff | Joined Mar 2003 | Sector 001 | 1,106 posts

When it can't be turned off.
__________________
"Please, Aslan," said Lucy, "what do you call soon?"
"I call all times soon," said Aslan; and instantly he vanished away and Lucy was alone with the Magician.
#51 | 02-07-2007, 09:18 PM
Gatac (Man in the iron mask)
Member | Joined Feb 2004 | Magdeburg, Germany | 667 posts

I can turn off any sapient being; the matching tool is called "gun".

Gatac
#52 | 02-07-2007, 11:22 PM
mudshark (Is he ever gonna hit Krazy Kat, or what?)
Member | Joined Mar 2003 | UMRK | 1,738 posts

There's another way, of course...
__________________
Methinks Ted Sturgeon was too kind.

'Yes, but I think some people should be offended.'
-- John Cleese (on whether he thought some might be offended by Monty Python)
#53 | 02-08-2007, 03:52 AM
Nate the Great (You just activated his Trek card)
Member | Joined Mar 2004 | Minneapolis, MN | 4,859 posts

Okay, dead does not equal inactive.

Define sentience.
#54 | 02-08-2007, 12:45 PM
Derek (Dean of misderektion)
Senior Staff | Joined Mar 2003 | Sector 001 | 1,106 posts

Quote:
Originally Posted by Gatac
I can turn off any sapient being; the matching tool is called "gun".
You know, even as I posted my statement, I figured someone was going to say that you can turn off humans. But there's a difference between destroying a sapient being and turning off a specific application on a fully functional, working computer.

How about this: "When it isn't affected."
#55 | 02-08-2007, 01:43 PM
Gatac (Man in the iron mask)
Member | Joined Feb 2004 | Magdeburg, Germany | 667 posts

Ah. I concede that point, then.

Still, going transhuman, won't we be able to make a human brain capable of safe shutdown and restart? Admittedly, this will likely involve cybernetic implants or, at the very least, advanced medical treatment, and even then it'll probably be metastable. (Cryogenics and whatnot.)

Gatac
#56 | 02-08-2007, 05:59 PM
Nate the Great (You just activated his Trek card)
Member | Joined Mar 2004 | Minneapolis, MN | 4,859 posts

Talk about drifting topics...

Okay, cryogenics/carbonite/instant dehydration cubes and whatnot are topics for their own thread. This thread is about robotics and machine intelligence.

Last time I checked, the poll was fifty-fifty. Any comment? Expected? Unexpected? Surprising? Not surprising?
#57 | 02-10-2007, 07:03 PM
PointyHairedJedi (He'd enjoy a third pie)
Member | Joined Mar 2003 | The Scotlands | 4,354 posts

I suppose on the face of it, it may be taken as surprising that a bunch of nerds such as us wouldn't overwhelmingly say "yes", but then anyone who actually knows this site knows what a fractious bunch we really are.
#58 | 02-10-2007, 09:30 PM
Derek (Dean of misderektion)
Senior Staff | Joined Mar 2003 | Sector 001 | 1,106 posts

No, we aren't.
#59 | 02-10-2007, 09:38 PM
Sa'ar Chasm (Our last, best hope for peace)
Staff | Joined Mar 2003 | Sitting (in Ottawa) | 3,425 posts

2/5 of us are.
__________________
The first run through of any experimental procedure is to identify any potential errors by making them.
#60 | 02-10-2007, 11:56 PM
Chancellor Valium (Reasonably priced male pills)
Member | Joined Sep 2004 | Rhen Var, sitting on a radiator... | 4,595 posts

Fractious? The people on this site are about as fractious as two sleeping Trakenites.

In any case, I think this discussion has reached a state of decay, and we have come full circle to the questions I myself raised on page one...
__________________
O to be wafted away
From this black aceldama of sorrow;
Where the dust of an earthy today
Is the earth of a dusty tomorrow!
All times are GMT.


Powered by vBulletin® Version 3.8.2
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.