Philosophy Word of the Day – Chinese Room

[Image: Chinese room, via Wikipedia]

This is the definition in a nutshell:

The Chinese Room argument comprises a thought experiment and associated arguments by John Searle (1980), which attempts to show that a symbol-processing machine like a computer can never be properly described as having a “mind” or “understanding”, regardless of how intelligently it may behave.

Here’s the more detailed explanation:

Searle’s thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All of the questions that the human asks it receive appropriate responses, such that any Chinese speaker would be convinced that he or she is talking to another Chinese-speaking human being.
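The setup can be made concrete with a toy sketch. This is purely illustrative — the rule table below is invented, and a program that actually passed the Turing test would be vastly more complex — but the principle is the same: output symbols are produced by matching input symbols against rules, with no step that involves knowing what any of them mean.

```python
# Purely illustrative: a rule-following symbol manipulator.
# The rule table is invented for this sketch; a real Turing-test-passing
# program would be far larger, but it would still only match and emit symbols.

RULES = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",     # "What's your name?" -> "My name is Xiaoming."
}

def chinese_room(input_symbols: str) -> str:
    """Look up the input symbols in the rule book and return the prescribed
    output symbols. No step in this process requires understanding Chinese."""
    # Default reply: "Sorry, I don't understand."
    return RULES.get(input_symbols, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # -> 我很好，谢谢。
```

Whether the lookup is executed by a CPU or, as in Searle's scenario, by a person working through the rule book by hand makes no difference to what the procedure itself involves — which is exactly the point of the thought experiment.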

Some proponents of artificial intelligence would conclude that the computer “understands” Chinese. This conclusion, a position Searle refers to as “strong AI”, is the target of his argument.

Searle then asks the reader to suppose that he is in a closed room and that he has a book with an English version of the aforementioned computer program, along with sufficient paper, pencils, erasers and filing cabinets. He can receive Chinese characters (perhaps through a slot in the door), process them according to the program’s instructions, and produce Chinese characters as output. Since the computer passed the Turing test this way, it is fair, says Searle, to deduce that he would be able to pass it as well, simply by running the program manually.

And yet, Searle points out, he does not understand a word of Chinese. He asserts that there is no essential difference between the role the computer plays in the first case and the role he plays in the latter. Each is simply following a program, step by step, which simulates intelligent behavior. Since it is obvious that he does not understand Chinese, Searle argues, we must infer that the computer does not understand Chinese either.

Searle argues that without “understanding” (what philosophers call “intentionality”), we cannot describe what the machine is doing as “thinking”. Because it does not think, it does not have a “mind” in anything like the normal sense of the word, according to Searle. Therefore, he concludes, “strong AI” is mistaken.

(Via Wikipedia)
