A philosophical look at why machines may follow rules without truly understanding them.
Written by: Sidra Tariq and Tehreem Zameer
John Searle's Chinese Room: Why AI Still Lacks Real Understanding

Introduction

The role of AI in today's world is beyond question. People place great trust in AI in almost every domain, and with this extensive use many concerns arise. Can AI have human-like consciousness? Does it really understand anything? These questions are the main concern of this review. John Searle was a renowned American philosopher, best known for his work in the philosophy of language and the philosophy of mind. Regarding AI and understanding, I will discuss his Chinese Room Argument.
John Searle's Chinese Room Argument was originally presented in the 1980 paper "Minds, Brains, and Programs." It remains one of the most influential and controversial articles in the field. The thesis Searle attacks, which he calls strong AI, claims that an appropriately programmed computer does not merely simulate a mind but literally has one, with genuine cognitive states such as understanding. Let's take a look at his argument.
Suppose I am locked in a room and given a large batch of Chinese writing. I do not understand Chinese; to me, the symbols are just meaningless strokes on paper. Then I am given a second batch of Chinese script, together with rules written in English for correlating the second batch with the first. Because the rules are in English, I can follow them: they let me match one set of symbols with another purely by their shapes. Finally, I am given a third batch of Chinese symbols, again with English instructions telling me which symbols to hand back in response. In Searle's terms, the first batch is the script, the second is the story, and the third contains the questions; the symbols I hand back are the answers, and the English instructions are the program.
The main point of this argument is that a computer only manipulates the symbols it is given; it has no understanding of what those symbols mean. In Searle's view, programs are neither constitutive of nor sufficient for minds.
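Searle's point can be made concrete with a toy program. The sketch below is only an illustration of the idea, not anything from Searle's paper; the rule table and the symbols in it are invented for the example. The program answers "questions" by pure table lookup on symbol strings, and nothing in it represents meaning anywhere:

```python
# A toy "Chinese room": map input symbol strings to output symbol strings
# by table lookup alone. The program manipulates uninterpreted tokens,
# which is exactly what Searle means by syntax without semantics.

# Hypothetical rulebook: pairs of (question symbols, answer symbols).
RULEBOOK = {
    "你好吗": "我很好",    # the program never "knows" this means "How are you?"
    "你是谁": "我是机器",
}

def room(question: str) -> str:
    """Return whatever answer the rulebook dictates, or a default token."""
    return RULEBOOK.get(question, "不明白")

print(room("你好吗"))  # emits the paired symbols; no understanding is involved
```

From the outside, the exchange can look like a conversation in Chinese, yet by construction there is no component to which the symbols mean anything, which is precisely the intuition the thought experiment trades on.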
Syntax and Semantics in Searle’s View
Searle's whole argument revolves around one key distinction. He defines syntax as the formal, structural properties of symbols, entirely independent of meaning; this is all a computer actually processes. Semantics, by contrast, is the grasp of real meaning: the interpretation and understanding of words in context. For Searle, semantics is essential for understanding language, and it is precisely what the purely syntactic operations of a computer lack.
Strong AI vs. Weak AI Debate
Strong AI claims that a suitably programmed computer can literally have a human-like mind. This notion traces back to Alan Turing, who was optimistic about the capabilities of computers. In his paper "Computing Machinery and Intelligence," Turing proposed what is now called the Turing Test, which measures a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. In the test, a human evaluator holds natural-language conversations with both a human and a machine without knowing which is which. If the evaluator cannot reliably tell them apart, the machine is said to have passed. The test therefore evaluates a machine's ability to simulate human conversation.
Searle rejects this test as a criterion of understanding. According to him, a computer that passes it is only imitating human language; it does not understand a single word. It is merely manipulating the symbols given to it by its program. Searle argues that genuine understanding requires semantics, whereas the Turing Test measures only syntactic performance.
Biological Naturalism (Can Consciousness Be Produced Artificially?)
Searle argues that consciousness is a biological phenomenon, like digestion or lactation. Mental states are real, and they are caused by the brain in the same way that the heart pumps blood and the stomach digests food: they arise from the causal powers of a natural system. A computer program, on the other hand, can only imitate human behavior. It does exactly what its program orders it to do, following rules over symbols without knowing what they mean.
So, when a machine acts intelligently or gives human-like responses, it is only manipulating symbols without actual understanding. No matter how good a program is, it cannot truly understand. Searle argues that strong AI is mistaken in assuming that copying human behavior is the same as having a human mind.
The System Reply
Supporters of strong AI reply that even if the man inside the Chinese room does not understand Chinese, the system as a whole does: the man, the rulebook, the stock of symbols, and the room taken together. On this view, understanding does not belong to the individual; it belongs to the whole system working in concert.
Searle rejects this reply. Suppose the person memorizes all the rules and performs all the operations in his head; he is then the entire system, with nothing left outside him. Yet the problem remains: he still does not understand a word of Chinese. If the person, now incorporating the whole system, cannot understand, then the system cannot understand either.
The Robot Reply
This reply suggests that if we put the program inside a robot with cameras, sensors, and the ability to move around, the robot could interact with the real world and behave intelligently, and would thereby gain real understanding of the world.
Searle's response is that even if the program controls a robot, it is still just manipulating symbols. He extends the thought experiment: imagine Searle himself inside the robot, receiving Chinese symbols as input and sending Chinese symbols out as output; he still has no idea what any of it means. Adding sensors and motors to a computer does not create real understanding.
The Brain Simulator Reply
The Brain Simulator Reply suggests that if we create a computer program that copies the exact pattern of neuron activity in a Chinese speaker’s brain while they understand Chinese stories, then the computer should also understand Chinese. The idea is that if the machine’s internal processes look just like a real brain’s processes, the machine must understand in the same way a human does.
However, Searle argues that even this is not enough to produce real understanding. He gives another example: imagine a person operating a huge system of water pipes and valves, in which each valve represents a neuron. The person opens and closes valves according to instructions written in English, and after the correct valves have been turned, Chinese answers come out of the system.
Even though the system behaves like a Chinese-speaking brain, neither the person nor the pipes understand Chinese. They are simply following rules and moving signals around. It only looks like understanding from the outside.
Searle’s main point: simulating a process is not the same as actually having that process.
Conclusion
Searle's views on AI are clear: computers can exhibit human-like behavior and interaction, but they cannot have human-like understanding of anything, whether language, emotion, or any other human attribute. In my view, Searle is largely correct, and I agree with his conclusion. Humans are natural beings, while computers are human inventions; the two cannot be equated.
The writers, Sidra Tariq and Tehreem Zameer, are researchers in the Department of Philosophy, University of the Punjab, Lahore.
Author: linkedin.com/in/sidra-tariq-a4b834270
Email: Sidratariq741@gmail.com