The biggest problem is that when people say "do computers feel," they don't mean "do computers react as if they feel." An example: suppose I were incapable of feeling pain, with no physical way of experiencing a painful sensation. I could still decide (or be forced to decide, if I were programmable) to react as if I felt pain. I could still choose not to touch hot stoves for the sole reason that, if I could feel pain, it would hurt. This is not the same as feeling pain, but it results in the same actions, just like your computer program.
But see, my point was that "feeling pain" is not an accurate description of what's happening. The sensation we call "pain" is just our brain's description/translation of some negative stimulus. I can very easily program that same negative stimulus into a computer program by creating a "feeling" that certain stimuli provoke. I don't see how that is at all different from what we as humans do. The neurons that sense pain in our body could just as easily (in theory, at least) be wired into a computer. The computer would then be told, "When you receive a stimulus along any one of these wires, add some quantity to your pain buffer."
When we jerk our hands off a hot stove (by reflex, not consciously, I mean), it's basically a buffer overflow in the "pain" region of our brain, which causes our brain to involuntarily react.
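The "pain buffer" and buffer-overflow reflex described above can be sketched in a few lines. This is only an illustration of the idea, not a claim about real neurology; the class name, the capacity value, and the discharge-on-reflex behavior are all assumptions made for the example.

```python
# Sketch of the "pain buffer" idea: negative stimuli accumulate in a
# buffer, and when the buffer "overflows" its capacity, an involuntary
# reflex fires. All names and values here are illustrative.

class PainBuffer:
    def __init__(self, capacity=10.0):
        self.level = 0.0          # accumulated "pain"
        self.capacity = capacity  # overflow point that triggers a reflex
        self.reflex_fired = False

    def stimulate(self, intensity):
        """Add a negative stimulus; fire a reflex on overflow."""
        self.level += intensity
        if self.level > self.capacity:
            self.reflex()

    def reflex(self):
        # Involuntary reaction, e.g. "jerk hand off the stove".
        self.reflex_fired = True
        self.level = 0.0  # assume the reflex discharges the buffer

buf = PainBuffer(capacity=10.0)
for _ in range(3):
    buf.stimulate(4.0)  # three moderate stimuli: 4 + 4 + 4 = 12 > 10
print(buf.reflex_fired)  # True
```

Note that no single stimulus exceeds the capacity; it is the accumulation that triggers the reflex, which matches the overflow metaphor.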
One more addition - it seems like it would be much more appropriate to simulate the human brain in a nondeterministic fashion.
Prolog is inherently the wrong language to describe the human brain - it's not based on logic.
Instead of saying "if threshold > some_value, then do this; otherwise, do something else," a better representation would be a probability of doing each action. This probability would be influenced by various factors, including but clearly not limited to the "pain level" of that action. Or at least so it seems to me.
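The probabilistic alternative suggested above might look like the following sketch: rather than a hard threshold rule, each candidate action gets a probability that shrinks as its associated "pain level" grows. The weighting scheme (`1 / (1 + pain)`) and the action names are illustrative choices, not a claim about how brains actually weigh pain.

```python
import random

def choose_action(pain_levels, rng=random):
    """Pick an action at random, biased away from painful ones."""
    actions = list(pain_levels)
    # Higher pain -> lower weight, but never zero: painful actions
    # remain possible, unlike under a deterministic threshold rule.
    weights = [1.0 / (1.0 + pain_levels[a]) for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

# Hypothetical pain levels for two actions.
pain = {"touch_stove": 9.0, "keep_hand_away": 0.0}
counts = {a: 0 for a in pain}
for _ in range(10_000):
    counts[choose_action(pain)] += 1
print(counts["keep_hand_away"] > counts["touch_stove"])  # True
```

The key nondeterministic property is that "touch_stove" still gets chosen occasionally (weight 0.1 vs. 1.0, so roughly one trial in eleven), whereas an `if threshold` rule would forbid it outright.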
"Prolog is inherently the wrong language to describe the human brain - it's not based on logic."
Some might argue (myself included) that this very quality of Prolog makes it exactly the RIGHT language to describe the human brain. Humans are not particularly logical beasts.
I would argue that there is a metaphysical level on which "feeling pain" (and, more so, "feeling" an emotion) is a reality beyond that of mere neurological stimulus. That is, the neurological stimulus-response system is not the ultimate reality, but the representation of that reality in the material world. The fact that a computer might be able to copy this representation (in a way) does not mean that the actual quality of what is going on is really of the same nature.
Now involuntary reactions to painful stimuli (by reflex) may or may not fall into the same category. The fact that we call both "pain" is (in my view) something of a misnomer: they happen to be described by similar (though not exactly the same) neurological processes, and the conscious perception of pain often follows an unconscious reaction, but this close association does not imply that we ought to equate the two. We need to be careful which of these kinds of "pain" we're talking about.
So would you then argue that artificial intelligence (not the *imitation* of human intelligence, but an actual computer that is intelligent independent of humans) is impossible? And why?
You've suggested that we sometimes override pain messages, but what about when we actually find pleasure in pain? It seems to me that what separates the workings of the human brain from AI is that humans, while measuring the physical senses, tend to be much more interested in intent. How else might you explain a child's irrational fear of a moth in her hair or a worm on her face? She wrongly attributes evil intent to the harmless movements of the moth and worm. There is no pain. Take another example: the boy with a bloody nose. He is playing, having a good time, enjoying himself . . . until someone points out that he is bleeding. Depending on the boy and how old he is, he might start screaming in fear or just get angry. I've seen both responses. Neither one makes sense if you only rely on rules associated with physical inputs. Much more is at work.
Eric Muhr