Ethics and Artificial Intelligence

Originally posted by qckbeam
I could see humans creating a new race too, possibly a mix of organic and mechanical parts. A mix of genetics and computer technology. Very interesting.
I suppose that is a possibility, but the big question is: why should we do it? What are the gains, for example? Should we do it just because we can?
 
Originally posted by BlazeKun
If they are smarter than us... what makes you all think that they would be inherently evil? If it is smarter than us, then it would realize that pain and suffering are not good, and that happiness and love rule over all.
Ah, but how does a sentient program perceive happiness or love? It's not necessarily the same perception that humans have.
 
Originally posted by Rasmoh
Ah, but how does a sentient program perceive happiness or love? It's not necessarily the same perception that humans have.

They know our emotions and what they mean... They see someone in total fear and suffering, start to think at an incredible speed, and eventually figure it out.
 
Yes, but it's still a human PROGRAMMING a MACHINE to ACT... it never actually thinks for itself. It may get EXTREMELY close to thinking for itself.

But it's still going to be a bunch of 'if', 'then', 'or', 'this' statements and reactions.
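For anyone curious what that looks like in practice, here's a minimal, purely illustrative Python sketch of scripted reaction logic (the states, thresholds, and actions are all invented for the example):

```python
# Illustrative sketch: a scripted NPC "reacts" by walking a fixed rule table.
# Every behaviour below was decided by a programmer in advance; nothing here
# is the NPC thinking for itself.

def npc_react(health: int, player_visible: bool, player_armed: bool) -> str:
    """Return a canned action chosen by hard-coded if/then rules."""
    if health < 20:
        return "flee"            # "self-preservation" scripted, not felt
    if player_visible and player_armed:
        return "take_cover"
    if player_visible:
        return "greet"
    return "idle"

# Whatever the inputs, the output is one of four pre-written reactions.
print(npc_react(health=80, player_visible=True, player_armed=True))  # take_cover
```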
 
If a true AI were in games, playing out as the NPCs you interact with, would it be wrong to kill them?
Well, it's a good question. I don't believe in souls (or anything supernatural, for that matter), so I won't use that argument. However, I do still feel that it's okay to kill a true AI if it were a character in a game. The AI, while freely thinking, does not feel pain and suffering, and it never could - this is where the line is drawn. If something feels pain and suffering, using it for gameplay would be considered morally wrong.

Also, a true AI could learn, and at a fast rate, right? I mean, the human brain is supposedly similar to a computer in that everything comes down to true and false, 1 and 0 (that can get heavily philosophical compared to this), but the computer brain (the true AI) would have definite control over itself - its learning process, movement, decision making, and even its memory storage. Wouldn't that make the computer brain much more effective? Our brain and body run on water, chemicals, and electrical impulses (this is where we are similar), whereas a computer brain would be almost entirely wires, electrical impulses, and a silicon type of flesh.

I dunno, I'll stop here.
My answer to the question: no, it wouldn't be morally wrong to kill an NPC in a game with true AI, because it wouldn't feel pain and suffering from it. It's not a true being.
 
First, BlazeKun, I believe that the superior beings we will one day create will be kind to us, but since they will be the new "best", I could see humans trying to suppress them, thereby creating quite a problem. To Rasmoh: I could see us doing it for a number of reasons. Servants, slaves, or maybe just because we could. Creating for creation's sake, no particular reason.

Also, in response to Ares: a true A.I. could experience another kind of suffering and pain. Not physical pain, but mental or intellectual pain. The fear and pain of being erased or killed, and of watching that happen to other A.I.s, would be the same as our pain of being killed or the pain of watching others die.
 
Originally posted by qckbeam
To Rasmoh: I could see us doing it for a number of reasons. Servants, slaves, or maybe just because we could. Creating for creation's sake, no particular reason.
When you think of all the uproar that cloning, for example, caused, I think that, at least in the current moral and ethical climate, creating such hybrids just for creation's sake wouldn't happen. But that could change.

But if they were of some use, it could happen. Maybe we could use them as soldiers or rescue workers, or even for exploring other planets and space.

Originally posted by qckbeam
Also, in response to Ares: a true A.I. could experience another kind of suffering and pain. Not physical pain, but mental or intellectual pain. The fear and pain of being erased or killed, and of watching that happen to other A.I.s, would be the same as our pain of being killed or the pain of watching others die.

The question is, would survival instincts kick in? Would the A.I. somehow try to avoid its destiny of being erased?
 
I think that if we were successful in creating a true A.I., it would have survival instincts and would somehow find a way to stop its own death. Nothing that knows it's alive wants to die. This would be a very bad situation. I mean, a supercomputer that can think and has access to the internet would be one tough enemy to beat. The damage it could do would be remarkable, since everything in our world is run by computers and interconnected. Think Terminator 3.
 
In Terminator, Skynet "woke up". I don't think it would be a "waking up"; computer programmers will be working for years to make computers automatically collect information and react before anything would be capable of acting of its own "free will".
 
Of course, it would take a very long time to get to where I'm talking about, but if we created an A.I. powerful enough to become self-aware, then something like Terminator could happen if we tried to kill it.
 
Maybe the machines would need us for entertainment just as we need machines for entertainment :eek:
 
Lol, so playing a game with them would be like roughhousing with a small child. That could be kinda fun! :)
 
Originally posted by qckbeam
I think that if we were successful in creating a true A.I., it would have survival instincts and would somehow find a way to stop its own death. Nothing that knows it's alive wants to die. This would be a very bad situation. I mean, a supercomputer that can think and has access to the internet would be one tough enemy to beat. The damage it could do would be remarkable, since everything in our world is run by computers and interconnected. Think Terminator 3.
Also The Matrix, Starsiege, and other sci-fi themed media.
 
I tend to agree with Rasmoh: if we actually reached the capability to have AI with true reasoning and self-knowledge, in all reality, the last place we'd see it would be in a game.

Granted, this is all hypothetical, but once ANYTHING has enough intelligence for self-preservation, it becomes something other than an NPC. It becomes a "clone" of life itself.

On a side note, I haven't read the whole thread, but it's a little disturbing that the majority of people posting here would have no problem hurting or killing an AI with true emotions or feelings. It makes me feel a little differently about the gaming community.
 
Here's something I don't think people are quite getting.

If you're going to go through the trouble of creating something as immensely complex as a self-sentient AI for a video game, why on earth would you go and terminate it just because its avatar in the video game died? If the AI's character has been removed, it will no longer be able to affect the game, but that doesn't mean it's dead. In order to kill an AI like that, it would have to be deleted. Otherwise it would just be in stasis when not in use. Indeed, why would you have an AI that complex playing just a SINGLE character? Shoot, it's a self-sentient program... it could run ALL the characters in the game in a realistic manner if programmed correctly.
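One way to picture that separation is the following hypothetical Python sketch; the class and method names are invented for illustration, not taken from any real engine:

```python
# Hypothetical sketch: the AI and its in-game avatar are separate objects.
# Killing the avatar removes it from the game world, but the agent object
# keeps existing - it can idle in "stasis" or be given another character.

class Agent:
    """A long-lived AI; deleting this object would be the actual 'death'."""

    def __init__(self, name: str):
        self.name = name
        self.avatar = None              # the in-game body, if any

    def possess(self, avatar: str):
        self.avatar = avatar

    def on_avatar_killed(self):
        self.avatar = None              # back to stasis; the agent persists

ai = Agent("npc_mind")
ai.possess("guard")
ai.on_avatar_killed()                   # the guard is gone from the game...
assert ai.name == "npc_mind"            # ...but the agent still exists
```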

And my own personal rule on whether something is self-sentient or not: can it, with no outside programming, say it doesn't want to die? If it grows to the point where it actually doesn't want to die, we have no right to destroy the program (unless it's running around killing things; then its ass is grass, just like any human's :))

Sincerely,
Jeremy Dunn
 
I never got the assumption that if we created an A.I./Robot mind it'd somehow magically be better than us.

If you want that logic spelled out for you:

If humans are such massive **** ups, what makes you think they have it within them to make something better than themselves?

Besides, I've yet to see a really complex computer that can go a year without crashing or being turned off for maintenance. I'm not really afraid of an entire army of killbots powered by Windows Psyche.

As for A.I. in video games? Won't happen for a long long long long long long long long long time.

Why?

Because games don't allow enough options to require it. Take Deep Blue and the like: you can program every possible option in a chess game into it, but it's not going to balance your checkbook or read your kid a bedtime story that, by the way, it wrote.
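The chess case is essentially exhaustive game-tree search. Here's a toy minimax sketch in Python (hugely simplified; Deep Blue's real search was far more elaborate, and the tree here is invented):

```python
# Toy minimax: brute-force scoring of every option in a tiny game tree -
# the "straight through the numbers" style of computation. Enumeration,
# not insight.

def minimax(node, maximizing: bool) -> int:
    if isinstance(node, int):           # leaf: a pre-assigned position score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 tree: pick the branch with the best guaranteed worst case.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))   # 3
```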

Computers are really great at linear functions, and pretty shit at drawing conclusions.

And to the person that said they think faster than us?

Purely incorrect. They think straighter. We think in curves. And we do it really fast.
 
Originally posted by Boogaleeboo
I never got the assumption that if we created an A.I./Robot mind it'd somehow magically be better than us.

If you want that logic spelled out for you:

If humans are such massive **** ups, what makes you think they have it within them to make something better than themselves?

Besides, I've yet to see a really complex computer that can go a year without crashing or being turned off for maintenance. I'm not really afraid of an entire army of killbots powered by Windows Psyche.

As for A.I. in video games? Won't happen for a long long long long long long long long long time.

Why?

Because games don't allow enough options to require it. Take Deep Blue and the like: you can program every possible option in a chess game into it, but it's not going to balance your checkbook or read your kid a bedtime story that, by the way, it wrote.

Computers are really great at linear functions, and pretty shit at drawing conclusions.

And to the person that said they think faster than us?

Purely incorrect. They think straighter. We think in curves. And we do it really fast.

I think most of us know and agree with everything you've said. This whole scenario is hypothetical... we know it's a long way out.
 
I have barely read anything in this thread, but I know from the last thread where this is coming from. Now, if the person or "A.I." was bad (a mass murderer or something along those lines), I would have no problem shooting them in the face. On the other hand, if the person was nice or of no threat, it would depend on the mood I was in. Some of you might think this is horrible or unethical, but games are meant for what you would do in fantasy and never would think of doing, or want to do, in real life. Humans sometimes have unimaginable depths of unexplainable darkness.
 
Originally posted by KiNG
I have barely read anything in this thread, but I know from the last thread where this is coming from. Now, if the person or "A.I." was bad (a mass murderer or something along those lines), I would have no problem shooting them in the face. On the other hand, if the person was nice or of no threat, it would depend on the mood I was in. Some of you might think this is horrible or unethical, but games are meant for what you would do in fantasy and never would think of doing, or want to do, in real life. Humans sometimes have unimaginable depths of unexplainable darkness.

Good point, but if your victim, even if it is a character in a game, pleads for his life, calls your name, and starts rambling about things he hasn't had a chance to do in his life yet - will you still feel "ok" that it's just a game?
 
"humans somtimes have unimaginale depths of unexplainable darkness."

I am only human. Again, it would depend on my mood.
 
Originally posted by nbk-brando
This is in honor of the thread that was unintentionally hijacked (by me) here:

http://www.halflife2.net/forums/showthread.php?s=&threadid=10154&perpage=15&pagenumber=1

The ending discussion was, basically: what does it mean to create AI, how might it be possible, and what ethical issues will we be responsible for? If we eventually create something that can reason and has a sense of being, and these AIs are present in games, is it ethical to kill/hurt/maim them? Or have we crossed a line at that point?

Simple: it won't happen. Do you really think they would make a game (and spend all that time coding) with a SELF-AWARE AI in it? They won't; it will never happen. Plus, for it to be self-aware it would require memory, so the game would "grow" on your HD as it learns (which is pretty stupid). The only place real AIs will ever be, if anyone ever makes one, is in a controlled laboratory setting. (The most advanced AI that will ever be in games is the kind that can make decisions based on the game world, like outflanking you and using tactics in an FPS, etc.)
 
My TI-83 graphed a cubic function; x^3 is a cubic function. What do you mean by linear functions?

Nbk-brando, you shouldn't feel disheartened. This is currently hypothetical, and it's impossible to tell how you would truly react to that kind of situation without it actually happening. People might not understand your concepts of artificial intelligence having emotions or a sense of self-preservation. If people were actually presented with an electronic "organism" of a complex nature that is seemingly organic, it would be hard for me to imagine a human uncaringly killing it.
 
Not in the "linear function" sense. I am aware the term has other meanings, but the words themselves where the closest to what I mean.

Computers aren't good at complex moral and societal questions or intuitive reasoning. They do grunt work. You throw a bunch of numbers at them, and they go straight through and pop out an answer at the other side. People think of them as "smart" or "complex", but all they are is fast at one thing. It's a tool, nothing more or less.
 
Originally posted by nbk-brando
This is in honor of the thread that was unintentionally hijacked (by me) here:

http://www.halflife2.net/forums/showthread.php?s=&threadid=10154&perpage=15&pagenumber=1

The ending discussion was, basically: what does it mean to create AI, how might it be possible, and what ethical issues will we be responsible for? If we eventually create something that can reason and has a sense of being, and these AIs are present in games, is it ethical to kill/hurt/maim them? Or have we crossed a line at that point?

I really wish I could contribute to this thread, but unfortunately, with all the CS kiddies hanging around, it will become spam bait within minutes... anything remotely intelligent scares them. :flame:
 
Originally posted by Boogaleeboo
I never got the assumption that if we created an A.I./Robot mind it'd somehow magically be better than us.

If you want that logic spelled out for you:

If humans are such massive **** ups, what makes you think they have it within them to make something better than themselves? [...] And we do it really fast.

Yeah, it's just not clear what it means to be sentient. Would a computer-derived AI be able to achieve sentience and maintain its calculating prowess? We only have human minds to study for now, but studies of autism suggest this may not be the case.
 
Originally posted by Boogaleeboo
Not in the "linear function" sense. I am aware the term has other meanings, but the words themselves where the closest to what I mean.

Computers aren't good at complex moral and societal questions or intuitive reasoning. They do grunt work. You throw a bunch of numbers at them, and they go straight through and pop out an answer at the other side. People think of them as "smart" or "complex", but all they are is fast at one thing. It's a tool, nothing more or less.

Maybe you mean 'quantity' functions?
 
In a way, even the biggest of them is still just a really complex calculator.
 
Originally posted by richpull
Nbk-brando, you shouldn't feel disheartened. This is currently hypothetical, and it's impossible to tell how you would truly react to that kind of situation without it actually happening. People might not understand your concepts of artificial intelligence having emotions or a sense of self-preservation. If people were actually presented with an electronic "organism" of a complex nature that is seemingly organic, it would be hard for me to imagine a human uncaringly killing it.

Oh yeah?

Originally posted by Brutal_Implant
Any game that lets you kill kids is awesome (Deus Ex).

;(

But to be fair, you're right, richpull. My optimistic side tells me they would secretly have sympathy for them, and tell their friends they killed 500 of 'em.
 
Originally posted by sportz103
No, Link does, and you might not want people to know you know the Pokémon's names.

No, Zelda/Sheik does.

Topic: it's artificial... how the hell can it be unethical?
 
When someone finds out what "consciousness" really is, we can discuss the ethics of killing it. :)
 
I HOPE THERE ARE CHILDREN IN HALF LIFE 2, AND THE COMBINE MUTILATE AND EAT THEM, AND YOU CAN KILL THEM LIKE THE STUPID SCIENTISTS IN HALF LIFE
 
killing the AI or not...

Several things would affect my choice. First of all, if it is a character in a game, it will not be real; it will be programmed to act real and to act hurt or loving. Secondly, if there is no afterlife programmed into the game, that means there are no consequences for good or evil deeds (assuming you believe in creation). Either way, there should be no reason for regret after slowly mutilating a clone of Bill Clinton or Jennifer Aniston.

All of this holds true if I were to meet an artificial being in my reality. Artificial beings can't have a soul; therefore, I would not feel bad for hurting their pre-programmed emotions.

Good Day.
 
Err... so if the AI and graphics are really good, are we gonna get some funky VR attachments so we can 'interact'?
(The Red Dwarf-style groinal attachment comes to mind.)

That said, I'm at work, and they may notice if I start grinding away...
 
Re: Pain and suffering

Wouldn't a character have to suffer, and feel pain, in order to act realistically when you shoot it (or shoot someone it knows)?

Re: What is consciousness

I don't think this matters. Put it this way: when you can sit and talk to a person and not realise they are AI (unless told), then they are conscious enough to be considered real.

In real life, if there were a mother and child, could you shoot the child and stand by and watch the mother's reaction? Now, if you could do that in a game (say GTA) and the reaction was identical (not through scripted action but actual human-modelled AI), could you watch that? What is the difference?

The mother will, presumably, continue to be simulated (for the sake of consistency) and will consequently suffer the grief of losing a child, as would any human.

This IMO is completely immoral.
 
Re: killing the AI or not...

Originally posted by junkie
Several things would affect my choice. First of all, if it is a character in a game, it will not be real; it will be programmed to act real and to act hurt or loving.

The point is that it will not be programmed as you suggest.

In order to realistically simulate human behaviour, we would have to program it to learn the way humans do. Any actions that occur will be purely consequential (i.e. a result of its teachings), not directly programmed.
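As a rough illustration of that difference, here is a toy Python sketch of "consequential" behaviour (a bare-bones learning update; the actions, rewards, and learning rate are all invented for the example):

```python
# Toy learning sketch: only the update rule is programmed directly.
# What the agent ends up doing is a consequence of the rewards
# ("teachings") it experiences, not of any hand-written behaviour rule.

actions = ["approach", "avoid"]
q = {a: 0.0 for a in actions}       # learned preferences, initially neutral
alpha = 0.5                         # learning rate

def teach(action: str, reward: float) -> None:
    q[action] += alpha * (reward - q[action])

# "Teachings": approaching gets punished, avoiding gets rewarded.
for _ in range(20):
    teach("approach", reward=-1.0)
    teach("avoid", reward=+1.0)

# The resulting aversion was never written as a rule; it was learned.
print(max(q, key=q.get))            # avoid
```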




Anyway, how it is done is unimportant. If we create something which behaves exactly as we do, and is indistinguishable as such, then we have created a new sentient life (as we currently understand it).

The creature will be self-aware, will believe it has a soul, and will believe in an afterlife of some description (this must be true; otherwise we would be able to distinguish it from ourselves).
 
Good words, MrD. I'm not going to start a flame war over this, just gonna keep trying to explain myself. Have you ever seen the movie "13th Floor"? It's basically "The Matrix", only with good acting, a deeper plot, and less fighting. I recommend you watch it, and hopefully I haven't spoiled it for you. Anyway, I still feel that if a human created it, no matter how real it seems to me, the reality is that it is not like me. Someday, if I live long enough to see all of this for myself, I might change my mind. However, as far as gaming goes, I have no problem killing my enemies.

Now, if someday gaming involves me taking over another person's body, whether they're on another planet or my next-door neighbor, then I will have issues pretending it isn't real. Either way, my morals are probably not equal to most; I'm pretty liberal in my thinking.
 
"Absolutely, but what about a negative scenario? What if the player, who is immersed in a virtual real world, becomes numb to all the violence in that world that looks and works like the real world with the exception that there are no consequenses, and starts to act similiarily in the real world?"

Isn't that what the US gov does?
 