  #43
01-29-2007, 11:28 PM
PointyHairedJedi
He'd enjoy a third pie
Member
 
Join Date: Mar 2003
Location: The Scotlands
Posts: 4,354

Quote:
Originally Posted by Gatac
The "ever" is a dangerous thing. Saying something can't be done because you can't do it is shortsighted at best. Also, I know that it is possible to create something intelligent - after all, there's, you know, *us*. Once we understand the chemistry of neurons well enough to emulate one and manage to pack in enough nodes into a system that they are roughly equal to the amount of neurons we have, we should be able to start writing the first primitive self-evolving software and have it undergo rapid evolution towards more complex behaviour.
I concede your point on "ever". However, you're making the assumption that neurons+chemistry=sentience (or their digital equivalents). Neuroscience has discovered many things to date about the way our brains work - from the broad sweep of which regions influence what, down to the small end of the scale like neurotransmitter chemistry and the function of different cell types. But, and this is the huge but, none of it tells us anything more about consciousness than philosophy has been able to in the last two and a half thousand years.

The main question it comes down to is this: is consciousness purely a function of the brain? On the face of it, yes, but think a little harder. If we take that to be the case, then which part of the brain is responsible, exactly? Is it somehow the case that 'consciousness' only happens when an animal with a big enough brain comes along? If so, why? Is consciousness instead not primarily a biological function, but more of a learned one? Or is our memory the primary factor?

The point is, no-one really has a clue, or for that matter any idea how to find out. Conversely, perhaps it'll be attempts to create machine intelligences that give us some handle on how we ourselves think, but like I said, I doubt that it'll happen in any of our lifetimes.


As an aside, I find it interesting that no-one has thus far touched upon the ethics of creating machine intelligence. One of the things about Trek that has consistently bugged me in nearly every show is what an incredibly laissez-faire approach the otherwise fanatically ethical Federation takes toward the creation of artificial lifeforms (mostly in the form of holograms). We had that whole thing with Data being legally judged to be 'human', but what of Vic Fontaine and the EMH? To be fair, though, it's not something that much SF covers at all, but it seems like it should.
__________________
Mason: Luckily we at the Agency use a high-tech piece of software that will let us spot him instantly via high-res satellite images.
Sergeant: You can? That's amazing!
Mason: Yes. We call it 'Google Earth'.
- Five Minute 24 S1 (it lives, honest!)

"Everybody loves pie!"
- Spongebob Squarepants