It is not possible to have a conscious AI

Apr 25, 2016 11:12


The Chinese Room demonstrates that the outward appearance of understanding is by no means an actual indicator of understanding.
Bona fide understanding is a central feature of conscious thought. If something is not conscious, it cannot understand.
What goes on inside the Chinese Room is an analog of programming. AI is ( … )
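The Chinese Room setup can be sketched as pure table lookup. This is a minimal illustration of my own, not from the post; the rule book and phrases are invented for the example. The point is that the "room" produces fluent-looking replies by matching symbols against rules, with no access to what any symbol means:

```python
# A toy Chinese Room: input symbols are mapped to output symbols by
# table lookup alone. Nothing in this program represents meaning.

RULE_BOOK = {
    "你好吗": "我很好",   # "How are you?" -> "I am fine"
    "你是谁": "我是人",   # "Who are you?" -> "I am a person"
}

def room(symbols: str) -> str:
    """Return the scripted reply, or a stock fallback, without
    'understanding' either the question or the answer."""
    return RULE_BOOK.get(symbols, "请再说一遍")  # "Please say that again"

print(room("你好吗"))  # 我很好
```

To an outside observer exchanging notes with this program, the replies can look like competence in Chinese; inside, there is only the lookup.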


Comments 50

korchevoi April 25 2016, 18:50:34 UTC
What do we mean by AI? Is it something made from electronic components? Think about the internet as a whole. The internet is certainly created, and could conceivably pass the Turing test and the Chinese Room test.

Reply

nanikore April 26 2016, 05:09:29 UTC
An artificial intelligence is a machine or machine software program which is capable of exhibiting intelligent behavior.

Edit: The Chinese Room thought experiment already assumes that the Room passes the test 100%, yet it demonstrates how it does so without the faculty of understanding. Passing the test is an indictment of the Chinese Room as a non-conscious mechanism under a mask of conscious appearance, not support for its conscious status.

Reply

korchevoi April 26 2016, 08:15:38 UTC
Are you kidding me? Thank you for the clarification; I am an idiot who didn't know that. Think about my thesis: who said that an AI cannot include human beings as its parts? How can you discern who is answering you right now?

PS: If you cannot imagine or understand something, that does not mean it does not exist.

Reply

nanikore April 27 2016, 10:03:07 UTC
"Who said that AI cannot include human being as its parts?"

I could only assume that you're not talking about cyborgs.

The internet is a communication medium. It has as much or as little consciousness as any communication medium. Consciousness resides not in the medium itself but in the end users OF the medium. Let's say that there are millions of people communicating via paper mail; we can call it "Snailmailnet". The consciousness of Snailmailnet resides in the people communicating via the mail, not in the mail itself.

Reply


redslime April 25 2016, 19:53:44 UTC

Likewise, humans are not able to demonstrate consciousness to other humans. You are most likely a bot pretending to be human.

Reply

nanikore April 26 2016, 05:13:22 UTC
You are correct that we could not distinguish between a p-zombie and a bona fide sentient being.

However, that distinction holds only prima facie, on the condition that we do not know the origin of said p-zombie.

If we know a p-zombie to be a p-zombie, then its identity as a non-sentient thing would be established.

If we know, for example, that something is programmed rather than the result of a life process (e.g. birth, which instills native rather than artificial intelligence), then we would know that it is not sentient.

None of us would engage in such self-deception and deception of others as to knowingly miscategorize it; at least I hope not.

Reply

esl April 26 2016, 20:35:08 UTC
You seem to believe that the programming language used to program biological machines (DNA encoding) is somehow more powerful than any of the general purpose programming languages we use to program general purpose computers.
Is there a reason you think that?

Reply

nanikore April 27 2016, 09:47:26 UTC
Research into the nature of consciousness has not yet had the final say on what exactly it is, so I'm not ready to treat consciousness as part of a machine. We can view the human body as analogous to a machine, but not the mind. That's for another discussion, but I'm pointing out the built-in assumptions of your question: I don't treat the mind as such.

There is another categorical difference: that of performance. I'm not thinking of the issue in terms of performance; I'm concerned with whether x is conscious or not conscious, not with whether x is somehow more or less powerful than y.

Reply


Conflating two concepts dragonlord66 May 12 2016, 06:38:06 UTC
I believe that you are conflating two concepts.

1) the actuality that a computer could be sentient

2) a thought experiment that shows that something can seem intelligent while actually operating according to a set of rules.

An implication of what you're saying is that as soon as we understand how the brain works to a sufficient level, humans will stop being sentient, because they will be (theoretically) deterministic.

What the Chinese Room does mean is that we can't prove sentience, just as the brain in a jar means we can't prove the world is real. We then have to take on faith both that we are sentient and that the world is real; by extension, that everyone else is real is also an act of faith.

Reply

Programming AI does not entail an understanding of the brain nanikore May 14 2016, 00:37:55 UTC
There is no equivalence between programming and "a sufficient understanding of the brain", whatever that means. In another thread with forum member esl, I questioned this understanding by raising the issues of underdetermination and exhaustiveness.

The Chinese Room is a demonstration against what is dubbed "strong AI" by Searle: http://www.iep.utm.edu/chineser/

Reply

Re: Programming AI does not entail an understanding of the brain dragonlord66 May 14 2016, 09:16:58 UTC
The brain works by massively parallel computation of the inputs supplied from the body. Memories are formed by neurons making permanent paths that reproduce the processing that occurred. Sentience arises out of this increasing complexity (less complex brains are less sentient).

Therefore any computer program that becomes sufficiently complex will start to show signs of sentience.

Take the Chinese Room to its logical extent. We (outside observers) know that there is a person in that room; we also know that people can learn languages. Therefore it is not outside the realm of possibility that the person will learn written Chinese, especially if they start playing around with the responses.

That's before we start getting into things like true AI programming, which uses analogues of brain neurons. These systems are trained, not programmed, and are already showing sufficient complexity that we can't identify how they arrived at certain answers.
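The "trained, not programmed" point can be illustrated with a toy sketch of my own (not the commenter's; the data and learning rate are invented): a single perceptron learns the OR function from examples, so its final behavior comes from weight updates rather than hand-written rules.

```python
# "Trained, not programmed": no rule for OR is written anywhere below.
# The behavior emerges from adjusting weights against examples.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, initially blank
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    """Fire (1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                    # a few passes over the examples
    for x, target in examples:
        error = target - predict(x)    # learning signal, not a rule
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # [0, 1, 1, 1] after training
```

Even in this tiny case, the trained weights are just numbers; nothing in them is labeled "OR", which is a miniature of the opacity the comment describes in larger trained systems.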

Which brings us back to the brain in a jar: how can you prove that you're not just a computer?

Reply

Re: Programming AI does not entail an understanding of the brain nanikore May 14 2016, 10:51:25 UTC
If complexity gives rise to consciousness, then cell phones today would be conscious ( ... )

Reply

