Philosophy Word of the Day – Turing Test

[Image: Alan Turing, via Wikipedia]

“A test devised by Alan M. Turing (1912-1954), first described in his 1950 article, and intended to determine machine intelligence. The basic setup of the test includes two people and the machine to be tested. One person is an interrogator, and the other person and the machine are respondents. The interrogator and the respondents are in different rooms and thus physically separated. The interrogator can ask questions only via a keyboard (e.g., a teletype or computer terminal). Both respondents attempt to convince the interrogator that they are the human respondent. Turing suggested that the test should run for five minutes or so, but the precise length is somewhat irrelevant. This, then, is an imitation game for the machine.

“The machine is said to pass the test if the interrogator cannot tell the difference between the respondents, or does no better than chance at guessing their identities. The machine fails the test if the interrogator can tell the difference. Turing thought that any machine which passes the test should be considered intelligent, or more precisely, should be considered to ‘think’.

“In other words, Turing proposed the test as a sufficient criterion for machine intelligence. He felt it was not a necessary condition because intelligent creatures might be unable, for some physical reason, to participate in the game. However, as Block (1995) points out, it is possible to satisfy the Turing test with an unintelligent but physically possible machine, so the test does not seem to be a sufficient criterion either. If the test is neither necessary nor sufficient, perhaps it can be considered a ‘mark’ of intelligence rather than criterial for intelligence.”
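To make the setup concrete, here is a minimal sketch of the imitation game's structure in Python. Everything in it (the stand-in respondents, the judge, the trial loop) is hypothetical scaffolding for illustration; Turing's paper prescribes no implementation.

    import random

    # Toy sketch of the imitation game: an interrogator questions two
    # unseen respondents and must guess which one is the machine.
    # All names here are invented for illustration.

    def human_respondent(question):
        return "I'd answer: " + question.lower()

    def machine_respondent(question):
        # A real entrant would try to produce human-like answers; here
        # it simply mimics the human stand-in exactly.
        return "I'd answer: " + question.lower()

    def run_trial(questions, judge):
        players = [("human", human_respondent), ("machine", machine_respondent)]
        random.shuffle(players)  # the judge must not know which room is which
        transcripts = [[(q, answer(q)) for q in questions]
                       for _, answer in players]
        guess = judge(transcripts)  # index of the suspected machine
        truth = [label for label, _ in players].index("machine")
        return guess == truth

    def chance_judge(transcripts):
        # With indistinguishable transcripts, the judge can only guess.
        return random.randrange(2)

    # A machine "passes" when judges identify it at roughly chance level.
    results = [run_trial(["What is your favourite colour?"], chance_judge)
               for _ in range(10000)]
    print("judge accuracy: %.1f%%" % (100.0 * sum(results) / len(results)))

Run enough trials and the judge's accuracy settles near 50%, which is exactly the guesses-at-chance condition in the definition above.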

Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

Block, N. (1995). The mind as the software of the brain. In D. Osherson, L. Gleitman, S. Kosslyn, E. Smith, and S. Sternberg (eds.), An Invitation to Cognitive Science. MIT Press.

— Chris Eliasmith at Dictionary of Philosophy of Mind


Philosophy Word of the Day – Chinese Room

[Image: Chinese room, via Wikipedia]

This is the definition in a nutshell:

The Chinese Room argument comprises a thought experiment and associated arguments by John Searle (1980), which attempts to show that a symbol-processing machine like a computer can never be properly described as having a “mind” or “understanding“, regardless of how intelligently it may behave.

Here’s the more detailed explanation:

Searle’s thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All of the questions that the human asks it receive appropriate responses, such that any Chinese speaker would be convinced that he or she is talking to another Chinese-speaking human being.

Some proponents of artificial intelligence would conclude that the computer “understands” Chinese. This conclusion, a position Searle calls “strong AI“, is the target of his argument.

Searle then asks the reader to suppose that he is in a closed room and that he has a book with an English version of the aforementioned computer program, along with sufficient paper, pencils, erasers and filing cabinets. He can receive Chinese characters (perhaps through a slot in the door), process them according to the program’s instructions, and produce Chinese characters as output. Since the computer passed the Turing test this way, it is fair, says Searle, to deduce that he would be able to do so as well, simply by running the program manually.
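Searle's rulebook can be pictured as pure symbol lookup. The toy Python sketch below makes the point vivid: the program maps input symbols to output symbols without representing their meaning anywhere. The rule table and phrases are invented; an actually convincing system would need an astronomically larger one.

    # Toy picture of the Chinese Room: replies are produced by matching
    # incoming symbols against a rule table and copying out the prescribed
    # symbols. The entries below are invented for illustration.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
    }

    def room(symbols_in):
        # Nothing here "knows" Chinese: it is lookup and copying, whether
        # executed by a CPU or by a man with pencil and filing cabinets.
        return RULEBOOK.get(symbols_in, "请再说一遍。")  # "Please say that again."

    print(room("你好吗？"))  # prints 我很好，谢谢。

Whether the table is consulted by silicon or by hand changes nothing about the computation being performed, which is the intuition Searle's scenario trades on.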

And yet, Searle points out, he does not understand a word of Chinese. He asserts that there is no essential difference between the role the computer plays in the first case and the role he plays in the latter. Each is simply following a program, step by step, which simulates intelligent behavior. Since it is obvious that he does not understand Chinese, Searle argues, we must infer that the computer does not understand Chinese either.

Searle argues that without “understanding” (what philosophers call “intentionality“), we cannot describe what the machine is doing as “thinking”. Because it does not think, it does not have a “mind” in anything like the normal sense of the word, according to Searle. Therefore, he concludes, “strong AI” is mistaken.

(Via Wikipedia)
