On Pain and Suffering

Mar 30, 2005 02:01

So last time I went off on one of the physical aspects of AI; this time I'm focusing more on one of the informational aspects of it.


joefredbob March 30 2005, 22:55:17 UTC
The biggest problem is that when people say "do computers feel" they don't mean "do computers react as if they feel". An example: let's say I was incapable of feeling pain, with no physical way of experiencing a painful sensation. I could still decide (or be forced to decide, if I were programmable) to react as if I felt pain. I could still choose not to touch hot stoves for the sole reason that if I could feel pain it would hurt. This is not the same as feeling pain, but it results in the same actions, just like your computer program.

foolswisdom March 31 2005, 00:14:24 UTC
But see, my point was that "feeling pain" is not an accurate description of what's happening. The sensation we call "pain" is just a description/translation by our brain of some negative stimulus. I can very easily build that same negative stimulus into a computer program by creating a "feeling" that certain stimuli provoke. I don't see how that is at all different from what we as humans do. The neurons that feel pain in our body could just as easily (in theory, at least) be wired into a computer. The computer would then be told, "When you receive a stimulus along any one of these wires, add some quantity to your pain buffer."

When we jerk our hands off a hot stove (by reflex, not consciously, I mean), it's basically a buffer overflow in the "pain" region of our brain, which causes our brain to involuntarily react.
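
Something like this toy sketch, maybe (Python rather than actual neurons; the threshold, decay, and intensities are numbers I'm making up just to show the shape of the idea):

```python
# A toy version of the "pain buffer" idea: every stimulus arriving on one of
# the "wires" adds some quantity to a buffer, and when the buffer overflows a
# threshold, an involuntary reaction fires. All the numbers are placeholders.

class PainBuffer:
    def __init__(self, threshold=10.0, decay=0.5):
        self.level = 0.0            # accumulated "pain"
        self.threshold = threshold  # overflow point for the reflex
        self.decay = decay          # pain fades a little each tick

    def stimulus(self, intensity):
        """A signal arrives along one of the wires."""
        self.level += intensity
        if self.level >= self.threshold:
            self.reflex()

    def reflex(self):
        # The involuntary part: triggered by the overflow itself,
        # not by any deliberation about whether it "hurts".
        print("jerk hand off the stove")
        self.level = 0.0

    def tick(self):
        self.level = max(0.0, self.level - self.decay)

hand = PainBuffer()
hand.stimulus(3.0)   # warm mug: nothing happens
hand.stimulus(9.0)   # hot stove: 3 + 9 >= 10, the reflex fires
```

Whether that deserves to be called "feeling" is the open question, of course; the sketch only shows that the mechanics are easy enough to copy.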

joefredbob March 31 2005, 07:57:08 UTC
No, there's one more thing about pain. It hurts.


savfan104 March 31 2005, 02:00:09 UTC
One more addition - it seems like it would be much more appropriate to simulate the human brain in a nondeterministic fashion.

Prolog is inherently the wrong language to describe the human brain - it's not based on logic.

Instead of saying "if threshold > some_value, then do this; otherwise, do something else," a better representation would be a probability of doing each action - this probability would be influenced by various factors, including but clearly not limited to the "pain level" of that action.

Or at least so it seems to me.
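
A rough sketch of what I mean (in Python rather than Prolog; the actions, pain levels, and exponential weighting are placeholders I'm inventing, not anything from the post):

```python
import math
import random

# Instead of "if pain > threshold, do X; otherwise do Y", every action gets a
# weight, and the expected "pain level" of an action just pushes its weight
# down. The choice is then a random draw over those weights, so even the
# "wrong" action gets picked once in a while. All values here are invented.

def choose_action(actions, pain_levels, temperature=1.0):
    weights = [math.exp(-pain_levels.get(a, 0.0) / temperature) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

actions = ["touch the stove", "pull back", "poke it with a stick"]
pain = {"touch the stove": 8.0, "pull back": 0.0, "poke it with a stick": 2.0}
print(choose_action(actions, pain))  # usually "pull back", but not always
```

The temperature knob stands in for all the other factors: turn it up and the choice gets more erratic, turn it down and it collapses back toward the deterministic rule.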

foolswisdom March 31 2005, 02:56:21 UTC
"Prolog is inherently the wrong language to describe the human brain - it's not based on logic."

Some might argue (myself included) that that quality of Prolog makes it exactly the RIGHT language to describe the human brain. Humans are not particularly logical beasts.

aibrainy March 31 2005, 06:03:11 UTC
I think that he meant that Prolog is based on logic, while our brains are not.


aibrainy March 31 2005, 06:17:24 UTC
I would argue that there is a metaphysical level on which "feeling pain" (and, more so, "feeling" an emotion) is a reality beyond that of mere neurological stimulus. That is, the neurological stimulus-response system is not the ultimate reality, but the representation of that reality in the material world. The fact that a computer might be able to copy this representation (in a way) does not mean that the actual quality of what is going on is really of the same nature.

Now involuntary reactions to painful stimuli (by reflex) may or may not fall into the same category. The fact that we call both "pain" is (in my view) something of a misnomer: they happen to be described by similar (though not exactly the same) neurological processes, and the conscious perception of pain often follows an unconscious reaction, but this close association does not imply that we ought to equate them. We need to be careful which of these kinds of "pain" we're talking about.

avenger337 March 31 2005, 07:03:34 UTC
So would you then argue that artificial intelligence (not the *imitation* of human intelligence, but an actual computer that is intelligent independent of humans) is impossible? And why?


Do rules work? anonymous May 3 2005, 18:25:18 UTC
You've suggested that we sometimes override pain messages, but what about when we actually find pleasure in pain? It seems to me that what separates the workings of the human brain from AI is that humans, while measuring the physical senses, tend to be much more interested in intent. How else might you explain a child's irrational fear of a moth in her hair or a worm on her face? She wrongly attributes evil intent to the harmless movements of the moth and worm. There is no pain. Take another example: the boy with a bloody nose. He is playing, having a good time, enjoying himself . . . until someone points out that he is bleeding. Depending on the boy and how old he is, he might start screaming in fear or just get angry. I've seen both responses. Neither one makes sense if you only rely on rules associated with physical inputs. Much more is at work.

Eric Muhr
