What can programs written for computers tell us about how the mind works? Are computers valid tools to use for testing cognitive theories? If they are, does this imply that it is possible for computers to be "intelligent" in the same way that people are? Can computers ever "think" or be "conscious"?
Artificial Intelligence
The goal of AI is to write computer programs that exhibit human behavior, such as accurately perceiving objects or understanding speech.
This goal is exemplified by the Turing test: a computer passes if a human judge, conversing with both it and a person through text alone, cannot reliably tell which is which.
Expert systems: AI programs that attempt to encapsulate how an expert operates within a particular domain, such as a doctor making a diagnosis.
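To make the idea concrete, here is a minimal sketch of a rule-based diagnostic system. The rules, symptoms, and diagnoses are invented for illustration; real expert systems such as MYCIN used hundreds of rules along with certainty factors.

```python
# A toy rule-based diagnostic system. Every rule, symptom, and
# diagnosis below is hypothetical.

RULES = [
    ({"fever", "cough", "body_aches"}, "flu"),
    ({"sneezing", "runny_nose"}, "common cold"),
    ({"fever", "stiff_neck", "headache"}, "possible meningitis (refer!)"),
]

def diagnose(symptoms: set[str]) -> list[str]:
    """Return every diagnosis whose rule conditions are all present."""
    return [diagnosis for conditions, diagnosis in RULES
            if conditions <= symptoms]

print(diagnose({"fever", "cough", "body_aches", "sneezing"}))
# ['flu']
```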
Computational modeling
Computational modeling has as its goal the construction of computer programs that not only exhibit human-like behavior, but do so using the same types of processes and representations that the human mind uses.
Put another way, a computational model is the instantiation of a theory of cognition.
These models can then be tested to see if the pattern of data they exhibit on a particular task matches the human data. If so, then the argument is that you have a successful theory of cognition.
Competing models can be differentiated by having them perform novel tasks and seeing how the predictions they make differ. Then you have people perform the task and see which model matches the data better.
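As a sketch of how that comparison works in practice (every number below is fabricated), fitting two models often reduces to asking whose predictions lie closer to the human data on the novel task:

```python
# Toy model comparison: which model's predictions on a novel task
# better match human data? All values are fabricated for illustration.

human_rt = [520, 560, 610, 700]   # mean reaction times (ms) per condition
model_a  = [515, 570, 600, 690]   # predictions from hypothetical model A
model_b  = [530, 540, 650, 640]   # predictions from hypothetical model B

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

print(f"Model A RMSE: {rmse(model_a, human_rt):.1f} ms")
print(f"Model B RMSE: {rmse(model_b, human_rt):.1f} ms")
# The model with the lower error provides the better account of the data.
```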
Information processing machines
It is taken as a given that a computer is an information processing machine. Cognitive modeling then further assumes that the human mind is also, at its core, an information processing machine.
This carries the significant implication that anything a person can do, a computer is also, in theory, capable of doing.
In other words, machines can, in theory, be intelligent.
Can machines exhibit intelligence?
Some arguments against:
Intuition: Computers can only do what they are programmed to do. They cannot step outside the bounds of their program. Thus, they cannot show what we would call "intuition" or "insight" about completely novel problems.
Having intelligence vs. mimicking intelligence: Computers may be able to display behavior that seems human-like (e.g., PARRY, a program that mimicked the conversation of a paranoid patient), but they won't actually be "intelligent." For one, they won't necessarily be self-aware or conscious.
Searle’s Chinese Room
John Searle has proposed a simple thought experiment that captures the essence of the mimicry argument.
Imagine a man locked in a room. There are no windows, no doors…no interaction with the outside world except through a slot in one wall. The only things in the room are a pencil, a large stack of paper, and a book.
The book contains two columns. One column corresponds to symbols that might be written on pieces of paper slid into the room through the slot. The other column contains the set of symbols that should be written on a piece of paper and slid out of the room through the slot.
Now imagine that these symbols are Chinese characters, and that the person locked in the room has no knowledge of Chinese. To observers outside, the room appears to understand Chinese: for every question slid in, a sensible reply comes out. Yet the man inside understands nothing; he is simply matching shapes and copying symbols according to rules, exactly as a computer executes its program.
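The purely mechanical character of the man's job is easy to make concrete. In this sketch the rule book is reduced to a hypothetical two-entry lookup table; real conversation would require astronomically many rules, which is part of Searle's point:

```python
# The Chinese Room as pure symbol manipulation. The "man" needs no
# understanding of what the symbols mean; he only matches shapes.
# The rule book below is a hypothetical stand-in.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天天气如何？": "天气很好。",  # "How's the weather?" -> "It's nice."
}

def the_room(slip_of_paper: str) -> str:
    """Look up the incoming symbols and copy out the listed reply."""
    return RULE_BOOK.get(slip_of_paper, "对不起。")  # default: "Sorry."

print(the_room("你好吗？"))  # Fluent-looking Chinese, zero understanding.
```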
How the Chinese Room relates to cognitive modeling
Searle is arguing that, while we may be able to write computer programs that act like people, these programs will never actually be intelligent in their own right.
Thus, if you accept Searle's argument, then the notion of the computer and the human mind both being information processing machines cannot be accurate. Therefore, while cognitive modeling may be able to tell us a story about what the mind appears to be doing, it will not allow us to build a complete theory of the mind.
Flaws in the Chinese Room
The best-known counterargument is the systems reply: the man by himself does not understand Chinese, but the system as a whole (man, rule book, paper, and room) arguably does, just as no single neuron understands English. Searle's rejoinder: let the man memorize the rule book; he still would not understand Chinese.
Where do we go from here?
The upshot is that, while no one in the AI or cognitive psychology community has rushed to embrace Searle's argument, no one has produced an irrefutable argument against it, either.
Practically, it is still probably reasonable to assume that, for the limited tasks around which most models are built, the mind is, for all intents and purposes, an information processing machine.
Cognitive models
Any cognitive model must address two issues.
On the one hand, the theory must make claims about the nature of the representations that the cognitive mechanisms being modeled use.
On the other, it must also make claims about the processes that manipulate these representations.
Marr (1982) posits three levels on which models must be considered:
Computational level: what problem the system is solving, and why.
Algorithmic level: what representations the system uses and what processes operate on them.
Implementational level: how those representations and processes are physically realized (e.g., in neurons or in silicon).
Different approaches to modeling
Two primary classes:
Symbolic processing models: These are models that use relatively high-level symbols or tokens, such as concepts, words, objects, etc., as their basic units. Information is thus segmented into these discrete symbols.
Manipulation of information generally involves addition and deletion of units, changing their properties, or creating new procedures for manipulating them.
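A minimal sketch of that kind of manipulation, with invented propositions and a single invented production rule:

```python
# A toy symbolic model: working memory holds discrete propositions,
# and a production rule adds whole symbols at a time.
# The contents and the rule are hypothetical.

working_memory = {("bird", "isa", "animal"), ("tweety", "isa", "bird")}

def apply_isa_rule(memory):
    """Production: if X isa Y and Y isa Z, add the new symbol (X, isa, Z)."""
    new_facts = {
        (x, "isa", z)
        for (x, r1, y1) in memory
        for (y2, r2, z) in memory
        if r1 == r2 == "isa" and y1 == y2
    }
    return memory | new_facts

print(apply_isa_rule(working_memory))
# Now also contains ('tweety', 'isa', 'animal'): a new discrete unit.
```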
Connectionist models: Also called neural networks. These are models that attempt to use basic units that have neuron-like properties. Information is captured by patterns of activation across sets of these units. Manipulation of information involves the flow of activation across the network.
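And a minimal sketch of connectionist activation flow; the weights and input pattern here are set by hand for illustration, whereas real networks learn them:

```python
import math

# A toy connectionist layer: activation flows from input units to
# output units through weighted connections. All values are invented.

def sigmoid(x: float) -> float:
    """Squash a unit's net input into an activation between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

inputs  = [1.0, 0.0, 1.0]                          # activation pattern on input units
weights = [[0.8, -0.3], [0.1, 0.9], [-0.5, 0.7]]   # weights[i][j]: input i -> output j

# Each output unit's activation is a weighted sum of the inputs, squashed.
outputs = [
    sigmoid(sum(inputs[i] * weights[i][j] for i in range(len(inputs))))
    for j in range(2)
]
print(outputs)
# Information is carried by the *pattern* of activation, not by any single symbol.
```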