Boethius Translations

Glosses from the AI Age

Alan Turing, one of the pioneering theoreticians of computing, proposed what is now known as the Turing Test: if a computer can pass for a human in a text-based interaction, we should grant that it is intelligent. By the late 1970s some AI researchers claimed that computers already understood at least some natural language. But in 1980 the philosopher John Searle introduced a short and widely discussed argument – the Chinese Room argument – intended to show conclusively that it is impossible for computers to understand language or think, no matter how powerful or complex they become, or how much data is pumped into them.

Searle summarised his Chinese Room argument concisely:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a database) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese, but he does not understand a word of Chinese.

The point of the argument is this: if the man in the room does not understand Chinese when following the instructions to produce Chinese output, then neither does any computer, because no computer has anything that the man in the room does not have.
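Searle's scenario can be caricatured in a few lines of code, as a sketch only: the "rule book" becomes a bare lookup table that maps input symbols to output symbols with no representation of meaning anywhere. The Chinese strings and their pairings below are invented for illustration, not taken from Searle.

```python
# Toy model of the Chinese Room: pure symbol manipulation.
# The "rule book" is just a lookup table. The symbol pairings
# here are invented for illustration only.
RULE_BOOK = {
    "你好吗？": "我很好。",        # "How are you?" -> "I am fine."
    "你会说中文吗？": "会。",      # "Can you speak Chinese?" -> "Yes."
}

def room(input_symbols: str) -> str:
    """Return whatever output the rule book dictates for the input.

    Nothing in this function grasps what the symbols mean; it only
    matches shapes, exactly as the man in the room does.
    """
    return RULE_BOOK.get(input_symbols, "？")  # unknown input -> placeholder

print(room("你好吗？"))  # a "correct answer" produced without understanding
```

Of course, a two-entry table could never pass the Turing Test; the sketch only makes vivid Searle's claim that however large the table or elaborate the program, the process remains symbol-shuffling of this kind.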

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. DOI: https://doi.org/10.1017/S0140525X00005756