Will computers one day be able to understand language?

December 6th, 2012

Over the past 40 years, computers have grown exponentially in power while their cost has steadily decreased. This access to low-cost, powerful machines has allowed researchers all over the world to expand the field of artificial intelligence (AI), the branch of computer science that aims at creating intelligent machines. The field of AI is divided into two branches: strong AI and weak AI. The former aims at creating artificial intelligence that matches or exceeds human intelligence and is associated with traits such as consciousness, understanding and self-awareness. The latter focuses on using machines for specific problem-solving and reasoning tasks. While early and recent successes in the field have produced self-driving vehicles, master-level chess programs and Jeopardy! champions, all of these accomplishments remain far from the goal of strong AI. Despite the difficulty, research in the field continues, and so do the debates surrounding it. One of the classic open questions in AI and philosophy remains whether computers will ever be able to understand language.

This philosophical question has divided people, and those who reject the idea of computers being able to understand language will often make reference to John Searle’s Chinese room argument, which can be summarized as follows:

A person is placed inside a room with no windows. There is only a small slit underneath the locked entrance door, through which Chinese speakers on the outside can pass small pieces of paper carrying meaningful Chinese sentences (an input). The person inside the room has a book containing a list of Chinese symbols (a database) and a set of instructions that tells him how to combine those symbols when given a certain sentence (a program). For example, the people outside write the Chinese equivalent of “How are you today?” on a piece of paper and slide it under the door. The person looks through his book, finds that the Chinese equivalent of “Good, thank you!” is an appropriate answer, writes it down (without knowing what the symbols mean) and slides his answer back under the door (an output). The people outside are satisfied with the answer and conclude that the person must have understood their question, when in reality the person inside the room had no idea what he replied (although he gave the impression that he did).
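
The mechanics of the room amount to pure symbol lookup. As a minimal, purely illustrative sketch (the phrasebook entry and the fallback reply here are hypothetical, not from Searle), the rule book could be modeled like this:

```python
# A toy "rule book": input symbols map to output symbols.
# The entries are hypothetical; nothing here represents meaning.
RULE_BOOK = {
    "你今天好吗？": "很好，谢谢！",  # "How are you today?" -> "Good, thank you!"
}

def room(note: str) -> str:
    """Return the scripted reply for a note slid under the door."""
    # Pure pattern matching: the "person" never interprets the symbols.
    return RULE_BOOK.get(note, "请再说一遍。")  # fallback: "Please say that again."

print(room("你今天好吗？"))  # prints 很好，谢谢！ with no understanding involved
```

The point of the sketch is that the lookup succeeds, and satisfies the people outside, without any step in which the symbols’ meanings play a role.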

Searle finishes his argument by saying that “if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have” (Cole). He suggests that syntax alone is insufficient for achieving semantics.

Several replies have been given against the Chinese room argument, and one of these is the virtual mind reply. The virtual mind reply accepts that the person inside the room does not understand Chinese. However, it holds that the running system might nonetheless give rise to a distinct mind that does understand Chinese. The claim is that the question should not be posed as “can computers understand language?” but as “can computers create a mind that understands language?”. It has been difficult to find a good counter-argument to this reply. Toward the end I will present my point of view, which will include a critique of the virtual mind reply.

Another reply to the Chinese room argument is the robot reply. This reply says that a computer trapped in a room has no way of understanding language or grasping what each word signifies. As humans we are able to understand things because we have experienced them, or because we have heard of other people’s experiences. If, instead of placing a computer inside a room, we place it inside the body of a robot equipped with visual, auditory and kinetic sensors, that robot will be able to interact with the environment and learn just as a child does. This reply agrees that syntax alone is not sufficient for semantics, but holds that adding causal connections to the world can give the symbols meaning. In response, Searle replied that sensors merely provide additional input to the computer, making its task only more difficult.

Another counter to the Chinese room argument is provided in the systems reply (described by Searle as “perhaps the most common reply”). The systems reply, like the robot reply, grants that the person inside the room has no understanding of language, but argues that the man does not represent the system as a whole; he is just a part of it. The man does not understand language, but the system as a whole does. Searle’s response is that the man could in theory memorize every symbol and instruction, thereby becoming the whole system, walk outside of the room and converse in Chinese, yet he would still have no understanding of Chinese.

The brain simulator reply brings up a quite interesting scenario: suppose a computer was instructed to replicate the sequence of nerve activity that happens in the brain of an actual Chinese speaker. Because the computer now works exactly like the brain of a Chinese speaker, the computer must be able to understand language. Searle’s response is simple: if the man in the room used water pipes and valves to simulate the firing of the neurons in such a brain, he would still have no understanding of language.

A reply related to the brain simulator reply is the other minds reply, which says that we cannot tell whether humans understand Chinese either; we can only observe their behavior and assume that they do. So if a computer is able to behave in a way that makes us think it understands Chinese, then we must grant that it understands Chinese. Searle replies that computers are only capable of syntactic manipulation, which is neither constitutive of, nor sufficient for, semantic content; so even if they can fool us into believing that they understand, they really do not.

One last reply to the Chinese room argument is the intuition reply. According to it, Searle’s argument stems from the intuition that computers cannot understand language; but intuitions can be misleading, and his idea of understanding might not hold up in a world in which computers are treated as humans.

AI researchers Simon and Eisenstadt noted that the debate over whether computers will be able to understand language cannot be settled until we find a definition of the term “understand” that “can provide a test for judging whether the hypothesis is true or false” (Cole).

Who is right? Who is wrong? Searle’s argument is tied to the definition of “computer” in the traditional sense. Take a classical definition: “(a computer is) an electronic device which is capable of receiving information in a particular form and of performing a sequence of operations in accordance with a predetermined but variable set of procedural instructions to produce a result in the form of information or signals” (Oxford Dictionaries). Computers process information by executing instructions serially, one step at a time, which makes their processing significantly different from that of a human brain. In a human brain, neurons form an intricate network of multi-connected layers where “processing” happens in a multi-directional, heavily parallelized manner. The fact that multiple computers, or computers with multicore CPUs, can execute several instruction streams at once does not make them more similar to the human brain: each stream is still processed serially (there are simply several streams at a time). Given the assumption that by “computer” we mean a machine that executes sequences of instructions in this way, Searle’s argument is sound. Claims such as the ones made in the virtual mind reply are too vague when they refer to the possibility of computers being able to “create a mind that understands language”. Even if a process could theoretically lead to the creation of a “virtual mind”, that mind would be the product of a serially executed sequence of instructions, and for it to exist within the computer it would have to be a serial instruction-processing system itself, which is not how the brain works.
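
At bottom, this serial model of computation is the familiar fetch-decode-execute cycle. A minimal, purely illustrative sketch (the toy instruction set is invented for this example) makes the contrast with the brain’s parallelism concrete:

```python
# A toy instruction set and a fetch-decode-execute loop (invented for
# illustration). Exactly one instruction is processed at a time.
program = [
    ("LOAD", "x", 2),        # x = 2
    ("LOAD", "y", 3),        # y = 3
    ("ADD", "z", "x", "y"),  # z = x + y
    ("PRINT", "z"),          # prints 5
]

registers = {}
pc = 0  # program counter: points at the single active instruction
while pc < len(program):
    op, *args = program[pc]  # fetch and decode
    if op == "LOAD":
        name, value = args
        registers[name] = value
    elif op == "ADD":
        dest, a, b = args
        registers[dest] = registers[a] + registers[b]
    elif op == "PRINT":
        print(registers[args[0]])
    pc += 1  # strictly serial: the next step waits for this one to finish
```

A multicore machine runs several such loops side by side, but each loop still advances one instruction at a time; nothing in the model resembles the brain’s multi-directional, massively parallel signaling.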

Note that if we shift away from the classical definition of “computer” to a broader one, Searle’s argument falls apart. Suppose that in the near future the internal architecture of computers changed to a hybrid mechanism: the logical processing of instructions would still be done via traditional computing machinery (CPUs, GPUs, memory, etc.), while a biological subsystem designed to replicate the functioning of the human brain, with all of its multi-directional parallel characteristics, together with replicas of biological sensory organs (ears, eyes), would take care of the “thinking”. If we called such a machine a “computer”, then it would be easy to refute Searle: a human is capable of understanding language because it has a brain and ways to perceive symbols and sounds; a computer (under this futuristic definition) has a brain as well as ways to perceive symbols and sounds; thus a computer could be capable of understanding language.

Cole, David. “The Chinese Room Argument.” Stanford Encyclopedia of Philosophy. N.p., 22 Sept. 2009. Web. 28 Nov. 2012. <https://plato.stanford.edu/entries/chinese-room/>.

Oxford Dictionaries. “Computer.” Definition Of: Computer. N.p., n.d. Web. 29 Nov. 2012. <https://oxforddictionaries.com/definition/english/computer>.