The Five-Minute Forums


Nate the Great 01-11-2007 02:39 AM

Should real robots obey Asimov's Laws?
 
This is a fairly simple poll. Ever since Asimov created the Laws, scifi authors and fans everywhere have jumped on them as being "definitive" or "mandatory" for modern robotics. However, we all know that such codified directives are hardly required. So, the question is, should we add them to robots when/if we get to the point that there are robots "smart" enough to understand and obey them?

Yeah, this is a topic designed for controversy, but I think that Doctor Who has been hogging the spotlight in this forum long enough.

e of pi 01-11-2007 03:03 AM

Are the Laws a good concept? Yes. Are they a good set of basic guidelines? Yeah. Are they perfect or the best? No. They're too simple, as seen in I, Robot and other areas, to allow full trust. More practical laws, even if they're harder to state explicitly or even imagine, are more likely to be what we end up with.

mudshark 01-11-2007 04:52 AM

A requirement? No. They may not be preferable, or even practical, for that matter, but I think that the general notion of "laws of robotics" bears consideration.

Even though Asimov coined the word (in 1940, according to one dictionary), he would have been the first to point out that he himself was not a roboticist. He reasoned, though, that robots, beyond a certain level of sophistication, could not be programmed with every eventuality anticipated -- it was far too complicated -- and that they would need to have some set of rules for working out, independently, just what to do in an unforeseen circumstance. They needed to be able to think something through, and needed to know where the lines were... what was in-bounds and what was not.

His Three (or four) Laws of Robotics fulfilled a literary function. I think it likely that real roboticists will have to design in -- perhaps are beginning to do so already -- a comparable set of rules that, while they may bear little or no actual resemblance to Asimov's, will fulfill the practical equivalent of that function.

Hejira 01-11-2007 02:23 PM

I think if we get to the point where AI really is I, then morals should be taught to them, as they are to humans.

Yes, we have scumbag humans, but that doesn't stop us making more humans.

MaverickZer0 01-11-2007 08:37 PM

No. Setting guidelines like that will only limit what the AI can become. If you want to limit them, do so physically--give them only the strength of an adult human, say--rather than mentally by set parameters. They should learn morals like anyone else. And if they go psycho, we can lock them away in a capsule for 30 years with ethical sims running in their head.

But the Three Laws, when you look at them, are not practical. What if one human is in danger from another one, and you cannot shield the one in danger? The only option is to disable the attacker, but under Asimov's three (though not the fourth) the robot would freeze. And then, what if many robots were in danger from a human, a fanatic or something? What then? Or many animals?

You can't cut everything down so simply. Asimov's Laws work well for Asimov's writing. In the real world, we'd need different limits; hence limiting physical strength. I hesitate to suggest we limit their intelligence as well, because intelligence is what they're for and it would be unfair, but possibly that could be done only when the robot seemed to be a danger.

Nate the Great 01-12-2007 08:29 PM

Uh, even human strength can be quite dangerous when coupled with the proper weapon and intelligence behind it.

I think that the key here is the question "how many tasks that we would want robots to do really require anything even remotely close to human intelligence?" I can't think of one. Hence any programmed moral code would be unnecessary.

As for the robot that can't shield the human, it's simple enough to add "if you can't protect one human without hurting another, just do nothing."
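Just to show what I mean (this is purely a toy sketch of my own, with made-up names -- it isn't anything Asimov actually wrote), the amendment is basically a prioritized checklist with "do nothing" as the fallback:

Code:

# Toy sketch only -- my own made-up priority scheme, not Asimov's actual Laws.
# Each "law" is a check that can veto a candidate action.

def violates_first_law(action):
    # Would this action injure a human being?
    return action.get("harms_human", False)

def violates_second_law(action):
    # Would this action disobey a human order?
    return action.get("disobeys_order", False)

def violates_third_law(action):
    # Would this action needlessly endanger the robot itself?
    return action.get("endangers_self", False)

def choose_action(candidate_actions):
    """Return the first action that breaks no law; otherwise do nothing."""
    laws = [violates_first_law, violates_second_law, violates_third_law]
    for action in candidate_actions:
        if not any(law(action) for law in laws):
            return action
    return {"name": "do nothing"}   # the fallback I suggested above

# The only option harms a human, so the robot falls back to doing nothing.
options = [
    {"name": "disable the attacker", "harms_human": True},
]
print(choose_action(options)["name"])   # -> do nothing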

MaverickZer0 01-12-2007 08:44 PM

Agreed. But if not given a reason to do so, a large intellect will not rebel, knowing nothing could be accomplished. Robots are less likely to warmonger than humans, not more.

In which case, you're once again limiting an AI's capabilities. Limiting their possibilities in a way you could not do with a human. I am talking about possibly-sentient robots here, not factory-line workers that need only guidelines.

Limiting again. It's also saying to the human in danger--never mind whether you care about the robot--'tough luck, it's just your time, even though there's someone right there who could get you out of it'.

Nate the Great 01-13-2007 01:17 AM

I really will try not to tighten the range of my original question.

Oh, and how do people feel about Asimo and his friends? I saw a demonstration of him at the U of M, and I was curious.

mudshark 01-13-2007 03:38 AM

A nice toy that shows off their recent advances in mobility mechanics, as the name indicates.

Nate the Great 01-13-2007 06:58 AM

I particularly found the face-recognition feature interesting. Imagine being able to connect a basic topographic map of a face with a name, and at several different angles at that.
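Just to make the idea concrete (my own toy sketch, nothing to do with how Honda actually does it): store a little numeric "map" for each known face, and when the camera sees someone, pick whichever stored name sits closest to the new measurements. Different angles would just mean storing a few maps per name.

Code:

import math

# Toy sketch, not Honda's real algorithm: each known face is boiled down to a
# few made-up measurements (eye spacing, nose length, jaw width), and a new
# view is matched to whichever stored "map" it sits closest to.

known_faces = {
    "Nate":  [0.42, 0.31, 0.55],
    "Gatac": [0.39, 0.35, 0.61],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(measured, threshold=0.1):
    name, stored = min(known_faces.items(), key=lambda kv: distance(measured, kv[1]))
    return name if distance(measured, stored) <= threshold else "unknown"

print(recognize([0.41, 0.32, 0.56]))   # -> Nate (close enough to his stored map)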

Chancellor Valium 01-20-2007 04:02 PM

What do you mean, 'even' human strength?!

Of course human strength is bloody dangerous!

Sorry.

It all depends, I think on two questions:

1) Do robots count as self-aware, and/or as living?
2) Do we have the right to control what they think?

And right now, I'm not even going to pretend to know the answer. I will say this though:

Why do we *need* robots that are so advanced that such laws would be necessary? Has anyone thought about the economic consequences? What about the moral questions? Finally, what about the Marvin Problem?

Nate the Great 01-20-2007 07:33 PM

The only reason I can think of for "thinking" robots is to use as soldiers, and even then, I'm dubious. I'll cut down my rant on current military strategy to simply this: why have large numbers of ground troops AND weapons of mass destruction? For that matter, what are we accomplishing by having armies of robots attack each other? A waste of resources, no question.

Gatac 01-20-2007 08:51 PM

You've strafed one of my pet topics. Take cover, for I will now whip out a small treatise on the subject.

You ask why we have soldiers and WMDs. This is a bad question, because it assumes that war is only about the quickest and cheapest way to kill the most people. This is a common misconception. War is about defeating the enemy, specifically crushing his military might. While this can be served by killing his soldiers, this is often inefficient and hard to pull off, because humans are surprisingly sturdy. It is not so much our ability to directly withstand damage as our ability to survive, to dig in, to keep our heads down and spring up again. In a (relatively) recent example, Soviet forces on the Eastern front in WW2 were regularly taken by surprise when they advanced, because German troops knew exactly how to dig in. Even after hours of artillery bombardment, they'd still be alive - and ready to fight.

Simultaneously, using WMDs has serious drawbacks. To start with, only nuclear weapons have any degree of proven effectiveness - biological and chemical weapons as they exist today are not controllable enough (biological) or just plain ineffective (chemical). Gruesome, yes, but frankly not worth the effort. Nuclear warfare doctrine takes a book to fully analyze, but the basic problems are this: Nukes are indiscriminate. You can train soldiers to kill other soldiers only, and for the most part that works out okay, but nukes kill everyone, and lots. Then, you have lingering, widespread effects, not to speak of the image penalty you take once you start tossing nukes. The danger of a nuclear free-for-all cannot be overstated, and even though steps have been taken to mitigate this, you can be sure that "nuclear deterrence" is really just a nicer way of saying "mutually assured destruction".

As if that wasn't bad enough, we're increasingly getting bogged down in 4th Generation asymmetric warfare. You can't nuke terrorists. We need people on the ground, and lots of them. Simultaneously, conventional wars are fought all the time, and the combined arms tactics used today are mind-bogglingly complex. To answer your question, I pull out the word "flexibility". If all you have is a hammer, every problem starts looking like a nail. A military force can have a forte, sure, but these days Air, Sea and Ground forces work together so closely that not being up to snuff in one area leaves you seriously vulnerable. We have to have all these toys because the other guys have them, too. You're sitting on a twisted form of the Prisoner's Dilemma: Rationally, we all want peace. But we can't be vulnerable against possible attacks from X - who, by extension, can't be vulnerable against possible attacks from us.

"You put your gun down first!" "No, you!"

---

Now, how can robots help make war? First off, I don't expect the introduction of AI into warfare to shake things up quickly. I don't want to come off as arrogant, but even the smartest AI tank won't replace ten normal ones. A lot of the problems we deal with today are actually rather independent of the soldiers - communications and the fog of war, supply lines, the laws of physics. Could a robot tank possibly make shots faster and more accurate, figure out a better way over rough terrain or conduct longer missions because it doesn't need rest? Possibly, but these are incremental improvements and likely won't even show up in the first few generations of military AI, if they even stick around and work on it long enough to get something effective going. Humans are very smart machines. It'll be a while before we can build better ones even after we create something that is an artificially intelligent lifeform. Expect dogs and children, not digital Einsteins.

One often-cited aspect is human endurance. Robot tanks don't sleep! Robot jets can pull more Gs! Certainly true, but tanks today are rarely limited by crew endurance. They need fuel and maintenance as well as ammo, and cutting the soldiers out of that logistics trail maybe lets you supply five tanks instead of four with the same cargo capacity. Possibly more if you can also leave the maintenance and resupply to other robots and cut out the humans completely. Similarly, an AI jet could certainly pull more Gs, but we're also limited by the materials we have. Flying is extremely complicated, too. (Look how much work *human* pilots have to put in.) I do think that there's a lot of potential here, though.

---

So, what does all my rambling mean? (Certainly not coherent, more stream of consciousness, but oh well...) No big robot armies. Robots are going to be a very helpful resource - for human soldiers. Drones, powered armor, remote-control artillery fire - that's where we're going, with improved communications and recon as well as augmenting infantry soldiers. AIs could probably help us design much better gear, though.

And in 400 years, we'll have Bolos. Yay.

Gatac

Zeke 01-20-2007 08:52 PM

Excellent post, Gatac. I feel much smarter now. Just one thing I'd take issue with:

Quote:

Originally Posted by Gatac (Post 71973)
You're sitting on a twisted form of the Prisoner's dilemma: Rationally, we all want peace.

Unfortunately, that's just not something we can assume. Western civilization has mostly moved beyond the idea of waging war just to gain power (a very recent development -- it certainly wasn't true of Hitler). We're basically reactive now; even Russia and China are no longer into conquest, though that can always change. But right now we're facing groups like al Qaeda and people like Ahmadinejad who are very much active. They want Israel wiped off the map and Islamic rule extended. It doesn't count as wanting peace if the prerequisite is killing everybody else.

(I'm not breaking my own rule and trying to start a political debate, btw. These guys are on record about what they want, so repeating it is politically neutral.)

Quote:

Originally Posted by Infinite Improbability (Post 71972)
For that matter, what are we accomplishing by having armies of robots attack each other? A waste of resources, no question.

Uh... don't you mean what would we be accomplishing?

Chancellor Valium 01-20-2007 10:38 PM

Russia and China are not into conquest by military means.

When you're sitting comparatively a lot closer, you'd be surprised by how effective their current conquest of Europe is going, and they haven't had to lose a single soldier in doing so.

Nate the Great 01-21-2007 02:08 AM

I guess the "what are we accomplishing" is referring to the hypothetical robots waging war in all of our minds while we have this discussion.

I have a nice long rant all formed in my mind, but that would be stepping on somebody's toes. Suffice to say that I'm amazed that we've been letting other people take advantage of American ethics for this long. The "needs of the many" mentality is perfectly valid for me.

Nate the Great 01-21-2007 02:10 AM

I guess the "what are we accomplishing" is referring to the hypothetical robots waging war in all of our minds while we have this discussion.

Three times I've tried coming up with a response of my political opinions, but I don't want to get burned again. E-mail if you're really interested.

MaverickZer0 01-22-2007 08:16 AM

'Even' human strength compared to what strength it is possible for machines to have. Either way, a sentient machine isn't going to automatically try and attack us all, laws or no.

Gatac 01-22-2007 08:27 AM

I should specify: Rationally, we don't want to fight. This equates to peace insofar as few people - who are, arguably, mentally disturbed - would get into a fight for no reason. (Note that "Those are my orders" counts as reason; also, people do not behave rationally under a variety of conditions such as extreme emotional states or intoxication.)

But generally, humans don't fight for no reason.

The thing about wars is that only rarely do the people who actually start these wars fight in them. This mitigates their feeling of personal risk - it's easier to make other people risk *their* lives for your goals. Hence, a rational human could start a war, trading lives for power or resources, but I do not think that makes him evil or sick - he's just disconnected from the consequences of his actions.

Try to teach a kid that he shouldn't break windows when you never punish him for that.

Gatac

Nate the Great 01-22-2007 08:46 AM

If we start having robots fight our battles, how long will it be before we get to the point where even building the robots is wasteful of resources, and we get war computers and disintegration chambers?

Gatac 01-22-2007 11:54 AM

A good question, but that assumes we sink so far into warfare that we never get our space program in gear, yet never quite destroy each other completely either. I predict a societal collapse from lack of resources well before we reach that level of tech, OR we do the smart thing and spread to space first, which will effectively solve any resource problems we might have. Robotic warmachines do make more sense in a space application, but I'd say that likely leaves us with Berserker-esque self-building armies. The Star Trek "computer doomsday" scenario, while evocative, would require that the scenario be constructed that way - it doesn't seem like a likely evolution of fighting tactics, because the solution requires a lot of resources. Resources I'd sooner spend on more weapons rather than trusting my enemy to adopt a similarly wasteful, self-defeating technology. My instinct would be to fight until I can't, at which point my society would be so depleted that it'd leave us in the dark ages. If I adopt any enlightened position here, it would be that war has to stop. Then again, if we talk about an alien species that does not have a concept of "peace", this might work, but we're humans and we do have that concept. That makes this particular episode a bit better for me, because it makes the societies more alien, but it lacks that human analogue.

But feel free to disagree.

Gatac

Nate the Great 01-22-2007 05:26 PM

"Effectively?" Doesn't it cost like ten billion just for the fuel for a Moon mission? (Please don't supply the real number, it doesn't matter)

I was just being facetious, at least in part. My wish is just that we nuke each other away as examples of unworthy societies before we get sucked into a decades-long ground war.

Dark ages, hehe. Reminds me of an old Dave Barry gag. It's too long to relate here, but it's a hummer.

Gatac 01-23-2007 04:32 AM

Our fuel costs are at the very extreme now, because multi-stage rockets have an atrocious weight-to-payload ratio. Near future technology like a beanstalk (aka space elevator) would make this considerably cheaper. Heck, I can't even imagine a sensible method to get stuff into orbit that would be *more* expensive than what we're using now.

Once you're in orbit, things get much easier. Relatively speaking, of course. They'll still be hard, but it'll be a different set of problems.

Gatac

Nate the Great 01-23-2007 12:17 PM

I go with Neelix and say "orbital tether."

Gatac 01-24-2007 08:12 PM

Also, what's that about nuking each other? The whole voluntary human extinction movement worries me - I see conservation of nature in an "enlightened self-interest" kind of way, where we should conserve nature to sustain our existence as mankind. In a way, I believe that the zeroth law of robotics should be our maxim - we may not harm humanity, or through inaction allow humanity to come to harm.

It makes sense to maximize our efforts at energy conservation and environmental protection so we can use Earth's biosphere for as long as possible. It makes sense to spread out onto other planets because Earth's resources are limited. I think a lot of human thinking is held back by a sort of denial of death - what use is personal power to you if you're going to die anyway? Long-term planning is called for, and that - as crazy as this may sound - means that the best way to get what we want is to be nice guys.

Besides, using nuclear weapons to wipe out an appreciable part of humanity will totally fuck up Earth's biosphere. Life will find a way to continue, sure, but that's not what treehugging is about, now, is it?

Also, I'd rather we save the nukes for when we meet nasty aliens. They are currently our best weapon against opponents with a higher technological level.

Gatac

Nate the Great 01-25-2007 03:50 AM

There's no question that new conservation, recycling, etc. technologies, the phasing out of fossil fuels and so forth are the best (perhaps only) way we're going to survive another millennium as a non-stone-age society. The real question is whether society (especially the superpowers) will ever snap out of their self-indulging downward spiral and actually do something about it. I sincerely doubt it.

I wasn't wishing for destruction through nuclear conflagration, I was just pointing out that it'd be better than a slow crawl toward heat death.

Gatac 01-25-2007 06:28 AM

Here's some reasons why I don't think we should nuke each other, even in the face of humanity going down the drain.

First off, I've talked about the damage to Earth's biosphere. Plus, the survivors (and there will be survivors) will either be condemned to a dark age [if they're numerous] or to one of the slowest, nastiest deaths possible, widespread radiation poisoning [if they are few]. I'm opposed to the first one, because that's exactly what we're trying to prevent, and the second one on ethical grounds.

Second, self-determination. I'd much rather we back off on our moralistic laws on suicide and let the problem sort itself out as it gets worse. World is unbearable? Your call, Mister. Sign this scrip here; do you want your morphine overdose now, or would you like to say goodbye to your loved ones first? This is harsh, I know, but it gives the people involved a choice rather than smacking them with nukes for the good of the planet. Plus, I can imagine that a lot of people in third-world countries do not actually care about the problems we as western civilisation have. Even if we all disappear off the face of the earth, they'll go on living their comparatively simple lives - unless we poison the biosphere so massively that they can't farm or have herds. Nukes - well, they're sorta made to fuck up the biosphere, if you get my drift.

Third, the problem may, in fact, be self-correcting. As we get closer to the endgame, new technology may yet be developed that allows us to escape or at least use a new resource for a time. Lower population levels (re: suicide; I don't know if births would go up or down in light of the crisis, but I assume that we'll head downwards overall, at least in the western world) will lighten the load on nature and our resources. This is in fact the only point of agreement I have with VHE - barring any near-future tech jumps that allow us to leave Earth, humanity is best served by sizing down some. However, I think depression and personal choice are at least as effective as preaching the "Kill yourself for the good of the planet" claptrap, plus it feels morally superior to me because you're not trying to convince people to kill themselves, merely providing a safe and painless way to do it.

Fourth, of course, nuking people has awful connotations associated with war. It doesn't matter if those people wanted to die (and good luck proving that), you've attacked another, sovereign country (or even yourself) with a weapon of mass destruction. Essentially, you're not trying to kill people, but a country, and that brings all sorts of nasty philosophical baggage. Not the least of it is that the few nuclear powers we have (seven, I think) would - for total human extinction - have to nuke everyone else first, at which point the justified question arises whether they will actually kill themselves, too, after they've reduced the human population to 2 billion or less, which - as the VHE says - is in fact a far more sustainable population size than the 6+ billion we have now. Also, who gets to decide if you want to get nuked into oblivion? President, parliament, kings? Popular vote? How do you deal with dissidents? Are you going to kill people who adamantly don't want to die just because they happen to live near people who do want to die? Would that be a simple majority, 2/3rds, or is there a certain percentage of people where you have a sort of veto cutoff? What about neighbouring countries?

Gatac

Nate the Great 01-25-2007 08:36 PM

I never suggested a survey of:

"Would you like to be killed as part of an effort to ensure the survival of the human race by lessening the demand for natural resources?"

That's idiotic. I was just suggesting that heat death will mean centuries of people who are miserable for their entire lifetimes. That's not humane.

Let the problem sort itself out? How do you anticipate that working out? That's what I meant by heat death. The gap between rich and poor will get bigger and bigger if current trends continue. What do you propose as a solution for this? A law that says, "the salaries and benefits given to senior management of corporations can be no more than X percent of the net annual profit of the corporation. Everything else has to be spread out amongst all of the employees." Good luck getting that one passed. How about the United Nations doing something along the lines of a global minimum wage? Uh, yeah, right.

Gatac 01-26-2007 06:23 AM

Gatac predicts the future! (Was: Asimov's Laws, Real-Life Applicability)
 
The "humane" part got me thinking about something else: With improved DNA screening, we can quite probably identify every major genetic ailment prior to birth in the near future. I know that this straddles the general abortion debate, but here's the rub: I believe that there's a choice there, and that it must be left to the parents. I believe people when they tell me that they love their children no matter what, but I find it harder to swallow when they say that if they had a choice, they'd rather stick with a differently-abled child than a healthy one. Even further along this line, you find people who intend to intentionally cause genetic disorders in their kids (for example, I've seen people advocating induced dwarfism), and here's where I draw the line. I don't get these people at all - I understand the "I want a healthy child" and the "Well, life's a gamble" crowd, but why *cripple* their children? That seems excessively cruel.

Of course, we then get to the question of whether society at large can even afford to care for differently-abled people, and whether they are a necessary part of society. (Not the people as individuals, but as being differently-abled.) It could be that we slowly drift into an Eugenics-esque era where we slowly get a grip on the genome of the next generation while advanced prosthetics and new treatments deal with existing cases. I don't think it's a bad idea, either. It does seem that for everyone who loudly proclaims that they are proud of their disability (and I think that's okay, too), there's a couple more who'd really like to walk, talk, see and think like what we define as "normal". Then again, we run into biodiversity issues and all that further claptrap, so I don't think there'll be easy answers.

---

Now to the heat death. I presume you mean the one of Earth's ecosystem. Well, yes, that could be a problem, but they might find a solution. Maybe it takes living on a hot hellhole before you can seriously consider ways to protect yourself, but I wouldn't discount mankind's ingenuity + a problem + time. Also, shouldn't these people be able to decide for themselves whether they want to live? If the situation becomes truly inhospitable, we'll just die out, but before that there's a whole range of adaptations we can make. Indigenous people in Africa and Australia deal with blistering heat all the time, living at the edge of human survivability. Anything hotter than that will just plain not sustain human life.

I think this is another choice we must leave to the parents. Alternatively, provide the safe suicide option for people already living in that age.

---

When I said "Let the problem sort out itself", I meant making safe suicide available to everyone. I wasn't talking economics, but if you want to, okay.

Times are a-changin', as the song goes. The international economy is going several wild ways, none of which could be predicted in the long term. To name but one example, the whole copyright debacle (viz: Internet "piracy") is changing economic realities as we speak. Norway just declared iTunes illegal under their laws - and if you told that to someone who'd been in a coma for ten years, you'd first have to explain what iTunes is. It'd blow his mind.

Similarly, the old globalism "We'll just outsource it to India" is running into problems, too, as the traditional outsourcing countries become more affluent themselves. China is rising quickly, and they have a goddamn space program now. Who knows where they'll be in 20 years? Who knows what's going to happen to our oil-dependent economies?

Redistribution of wealth has never worked. I firmly believe that the only way to deal with this is to raise the poverty line so high that everyone has a home, food and Internet access. This may sound basic, but it'll benefit us immeasurably - not only is it the humane thing to do, it also gives us access to literally billions of minds that went untapped for their full potential. Smart people are born everywhere. Give them access to knowledge and I think we'll have a few scientific revolutions ahead. Not the least of which is that access to the Internet is the ultimate in expression of free speech and commerce - that's why we must fight to protect it from those who are looking to turn it into another TV - or censor it. Viva la revolucion, brother!

How to drive this surge, you ask? Post-scarcity. We're all trekkers, so I'll say "replicator" and you know what I mean and what that implies. Don't laugh yet, we're getting there, too. Rapid prototyping is becoming more rapid and less prototyping as people are discovering that you can actually use such techniques to build useful stuff. Biologists are using rebuilt printers to build complex multi-cell organs from cloned cells. There's a proposal for a 3D printer large enough to build a *house* out there.

The future's a-comin', and it'll be bright. Can't stop the signal and all that feel-good stuff.

Gatac

Nate the Great 01-26-2007 11:05 AM

I saw the article about induced dwarfism, and I have to agree. You want a dwarf kid, adopt one. You want a blind kid, adopt one. It's not like the orphanages are running a shortage or anything.

Uh, yeah, I'll probably get flamed for that one, but I felt that this thread, which started a few dozen half-ideas ago, has gotten a little too serious and needed some levity.

No, my "heat death" refers to the rich getting richer, the poor getting poorer, and the rich getting fatter and the poor starving to death faster. At the same time, the West is supporting a large portion of the rest of the world at a rate that only allows a certain number to be helped. Specifically I refer to the Hundred Dollar Computer project. Uh, dudes, get the entire world eating two real meals a day and access to clean water before you start giving away cheap hand-crank computers, okay? For a hundred dollars you could feed a person for a couple months on a minimal rice-and-clean-water diet, right? And still have money left over for innoculations, penecillin, etc.

As for global warming, we should've been halfway through a global program to totally remove all dependency on oil (as a power source) by now. I'm not joking. With making non-oil plastic as a second goal.

To return to Asimov's Laws (amazingly a lighter topic than the current one :)), I don't think anyone has postulated a reason for having robots operate on any sort of "intelligent" level.

Gatac 01-26-2007 11:33 AM

Yeah, adopt blind or deaf orphans! Not only do you not introduce another disabled child into the world, you're also giving love to a kid that's unlikely to be adopted otherwise.

My bad for the seriousness. I'm just a raging future-lover and can't help myself.

The laptop/humanitarian aid thing is tricky: we've made much of the third world dependent upon our charity. The Red Cross is totally undercutting local farmers with its food packages, while international business ripping off the locals encourages growing cash crops, which leads to more dependence on foreign aid. These are artifacts of colonialism that are very hard to undo. On the other hand, while I do approve of handing out laptops - *everybody* will need to learn how to use computers if they want to deal with the modern world - this can't be at the expense of other, more pressing problems, such as the medical problems you mentioned.

The problem with getting rid of oil is that oil is handy. Energy density is high, and that energy can also easily be liberated and used. We need some sort of chemical energy storage for the near future; hydrogen works, but we need a better way of manufacturing it. Some developments in batteries look promising, too, as do ultracapacitors. Plastics without oil would be hard, but I'm sure we can replace plastics with other materials, like the various experimental configurations of carbon people are working on.

Intelligent is a difficult word; how intelligent are animals? Are they sentient/sapient? Psychology isn't nearly mature enough to deal with this. Maybe we should say what we do need: We need flexibility, ability to learn, complex pattern recognition and such, which looks like it could be done with fuzzy logic. Of course, fuzzy logic implies neural networks, which are modelled on how we think our brain works, so what comes out at the other end may emerge as intelligent even if we didn't design it to. That's what self-evolving machines are all about.
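To give a rough feel for the "neural network" part (this is a bare-bones toy of my own, nowhere near a real system), here's a single artificial neuron learning a pattern purely by being corrected when it's wrong:

Code:

# Toy sketch only: one artificial "neuron" (a perceptron) learning the AND
# pattern by nudging its weights every time it answers an example wrong.
# Real networks stack huge numbers of these (and start from random weights),
# but the learn-from-mistakes loop is the core idea.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0

def predict(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

for _ in range(20):                        # a few passes over the examples
    for inputs, target in examples:
        error = target - predict(inputs)   # -1, 0 or +1
        for i, x in enumerate(inputs):
            weights[i] += 0.1 * error * x  # nudge toward the right answer
        bias += 0.1 * error

print([predict(inputs) for inputs, _ in examples])   # -> [0, 0, 0, 1]

Nobody sat down and wrote an "AND rule" in there; the behaviour emerges from the corrections, which is the sense in which what comes out the other end may surprise its designers.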

Gatac

Nate the Great 01-26-2007 12:50 PM

Yeah, computers are a necessary component of the modern world, but if I could wave a magic wand and get rid of the Internet in exchange for eliminating poverty and easily-cured disease all across the world, I would. So would you.

I'm a full advocate of the "teach a man to fish, don't give a man a fish" philosophy. Anything other than "teach a man to fish" isn't solving any problems. Not one. All it's doing is slapping bandages on a wound that won't heal without something better.

Okay, animal souls are something I don't want to go into. That's a can of worms I want locked up nice and tight. This thread has enough of those as it is.

Gatac 01-26-2007 02:21 PM

I'd rather have both the Internet and a reasonable standard of living for everyone.

Fully agreed on the "fish" metaphor. But that's also what makes net access - and, by extension, education/information - so important.

Yes, let's leave out the soul discussion.

Gatac

Nate the Great 01-26-2007 07:42 PM

Okay, what can the Internet give poverty-stricken denizens that a decent school system (including computers) can't do better?

This is a philosophical discussion involving personal opinion, so "both" is a valid answer, I suppose, but for me yes-or-no implies yes-OR-no, not yes-AND-no.

Gatac 01-26-2007 09:37 PM

The school system is actually a more controversial (in my mind) problem than you might think. I would much prefer giving children the opportunity to tackle prepared "units" of knowledge at their own pace, plus whatever else they want to know, then let them take tests to earn something like a GED at a fairly young age and move them into an apprenticeship-esque situation where they can start learning on the job fairly early, maybe 14-ish. This sounds counter-intuitive as hell, I know, but institutionalized education has some very real problems; read John Taylor Gatto's Underground History of American Education, for example - you may not agree with his conclusions, but he does cite a lot of historical information and makes a lot of criticisms that seem to be hard to refute. Among the most disturbing ones is the idea that a lot of the shortcomings of the Western educational system, such as it is, are not bugs, but features, like adopting a way of teaching children to read that actually retards their ability to pick up new words. Scary stuff.

I have no near-future plans for raising children, because I believe that that would require me to have a degree of financial independence so I can homeschool them effectively. I don't think teachers are evil or anything silly like that, but I do believe that I could provide a better learning environment myself than a school. However, I also admit that my research on this topic has been less than thorough, and I'm liable to refine or even change my opinion before the actual decision needs to be made. I also realize that homeschooling is far from being the best for everyone, but I do think we need to encourage it and slowly reduce the burden on public schools, as well as take a few serious looks at the curriculum.

Gatac

Nate the Great 01-26-2007 10:08 PM

Okay, the shortcomings of the modern American school system are a topic for another day. I ain't touchin' it.

I gotta wonder what the record is for "topic that's wandered furthest from the original in the same thread."

MaverickZer0 01-28-2007 09:59 AM

Why bother with a record? Some other thread will just go and break it, and how would the first thread feel *then*? Huh? Huh? Did you ever think about the threads' *feelings* before you said that? Huh?

Haha. ;p

Nate the Great 01-28-2007 08:02 PM

Well, there is a difference between "oh, here's a random topic, since I'm bored with this one," and "oh, that's interesting, let's talk about it in a new direction."

Cow to phone to paint to marshmallows is random.
Cow to leather to pleather to spandex to superheroes is linked.

I meant the latter category.

PointyHairedJedi 01-29-2007 07:10 PM

To get back to the original question (not that robot armies of death aren't fun), I think the concept of a set of "Laws of Robotics" is pretty much moot. Asimov, and I think for a while pretty much everyone else, assumed that once a computer reaches a certain level of informational complexity it will essentially become "alive". The famous test associated with machine intelligence is of course the Turing Test, but a program being able to pass this means nothing except that it has been given sufficient information in sufficient combinations to fool a human into thinking that it is another human. It by no means equates to sentience, just clever programming, and that's a problem - how would we truly be able to tell if a computer was thinking or not? Visions of Multivac, Colossus, Shalmaneser and Skynet are ultimately just fantasy, unrealisable because consciousness is not something that can be created, whole and complete, utterly constrained in everything it does by a set of arbitrarily imposed rules. Machines that think, if there ever are any, will be like us - blank slates that must be taught how to think from the ground up.

Of course, I have my doubts as to whether we will ever manage such a feat, as first we must understand how consciousness works in humans. It's a problem that I don't think will be solved in any of our lifetimes - though we may attain a vastly more complete understanding of the functioning of our brains, that won't tell us much about self-consciousness and free will. We may get machines that can learn how to do a few things, but I doubt there will ever be anything that has the amazing capacity and range of the human mind.

Quote:

Originally Posted by Gatac (Post 72011)
Also, I'd rather we save the nukes for when we meet nasty aliens. They are currently our best weapon against opponents with a higher technological level.

Clearly you have never seen Mars Attacks!. :p

mudshark 01-29-2007 07:24 PM

Oo-Oo-Oo-Oo, Oo-Oo-Oo-Oo
When I'm calling you
Oo-Oo-Oo-Oo, Oo-Oo-Oo-Oo
Will you answer too?
Oo-Oo-Oo-Oo, Oo-Oo-Oo-Oo...
http://i57.photobucket.com/albums/g2...ilies/note.gif

