The Representation of Knowledge
How is our information about the world
(semantic information) stored?
In particular, do we store information
in a picture format, a word format, some abstract format, or a combination?
The Debate
Unitary Content Hypothesis
Claims that all information in semantic memory is stored
in terms of an amodal, abstract representation. Percepts of a concept from
different sensory modalities activate the same set of information in semantic
memory. This hypothesis is instantiated in the Propositional Hypothesis.
Propositional hypothesis:
Posits that semantic information is stored in an abstract,
amodal (i.e., non-perceptual) format called a proposition.
A proposition encapsulates the relationship between two
concepts, such as Color[Stop sign, Red].
The feeling of having mental imagery is simply an emergent
property of processing propositions referring to visual properties.
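To make the propositional format concrete, here is a minimal Python sketch (not from Sternberg or the lecture notes) of propositions as amodal predicate-argument structures; the Proposition type, the lookup function, and the example facts are all illustrative assumptions.

    from collections import namedtuple

    # A proposition relates concepts via a predicate, e.g. Color[Stop sign, Red].
    # Nothing in this representation is pictorial or verbal; it is amodal.
    Proposition = namedtuple("Proposition", ["predicate", "subject", "value"])

    semantic_memory = {
        Proposition("Color", "stop sign", "red"),
        Proposition("Shape", "stop sign", "octagon"),
        Proposition("Capital", "Delaware", "Dover"),
    }

    def lookup(predicate, subject):
        """Return the stored value for a predicate/subject pair, or None."""
        for p in semantic_memory:
            if p.predicate == predicate and p.subject == subject:
                return p.value
        return None

    # On the unitary view, seeing a stop sign and hearing the words
    # "stop sign" would activate the same proposition:
    print(lookup("Color", "stop sign"))  # -> red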
Multiple semantics hypothesis
In general, claims that semantic memory is divided into
modality-specific subsystems that store information related to their particular
modality.
Information perceived in a particular sensory modality
has direct access to that modality’s semantic subsystem, and indirect access
to the other subsystems.
The best-known example of this theory is Dual-code
theory.
Dual-code theory:
Puts forth the proposition that we store information in
two types of formats, or codes:
- Verbal codes: where we store abstract, non-perceptually based information such as "Dover is the capital of Delaware."
- Imagistic codes: where we store perceptually based information, such as the color of a stop sign or the sound of a trumpet.
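By contrast with the propositional sketch above, a dual-code store keeps two separate representations, with each input modality having direct access to its own code and only indirect access to the other. Here is a minimal sketch in the same hypothetical style; the store names, keys, and the direct/indirect tags are assumptions for illustration.

    # Two separate stores, one per code.
    verbal_store = {"capital of Delaware": "Dover"}
    imagistic_store = {"color of stop sign": "red",
                       "sound of trumpet": "<auditory image>"}

    def retrieve(key, input_modality):
        """Direct access to the store matching the input modality;
        indirect (mediated) access to the other store."""
        direct = verbal_store if input_modality == "verbal" else imagistic_store
        other = imagistic_store if input_modality == "verbal" else verbal_store
        if key in direct:
            return direct[key], "direct"
        if key in other:
            return other[key], "indirect"
        return None, "not found"

    # A verbal question about a perceptual fact must cross codes:
    print(retrieve("color of stop sign", "verbal"))  # -> ('red', 'indirect')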
The Data
Mental Imagery: Do we or don’t
we have the ability to manipulate actual pictures in our minds?
Evidence for:
- Mental rotation: When people are asked to determine whether two objects presented at different orientations are mirror images of each other, the time it takes them is a function of how many degrees one object must be rotated so that it matches the other object. (See Sternberg, Fig. 7.6 and 7.7; a reaction-time sketch follows this list.)
- Relative image size: When subjects are asked to make judgements about one of a pair of imagined objects, it takes them longer to make judgements about the smaller objects than the larger objects. (See Sternberg, Fig. 7.8)
- Image scaling: Subjects respond faster to questions about larger features of images than smaller ones.
- Image scanning: When imaging a map, subjects take longer to report information about objects that are imagined as farther apart than objects that are closer together. (See Sternberg, Fig. 7.9)
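Both the rotation and scanning results are typically summarized as a linear reaction-time law: time grows with the angle to be rotated or the distance to be scanned. Below is a minimal Python sketch of the rotation case; the intercept and slope values are made-up illustrations, not estimates from the actual experiments.

    # Illustrative linear model: RT(angle) = intercept + slope * angle.
    INTERCEPT_MS = 1000.0     # hypothetical baseline (encoding + responding)
    SLOPE_MS_PER_DEG = 20.0   # hypothetical cost per degree of mental rotation

    def predicted_rt(angle_deg):
        """Predicted reaction time in ms for a given angular disparity."""
        return INTERCEPT_MS + SLOPE_MS_PER_DEG * angle_deg

    for angle in (0, 60, 120, 180):
        print(f"{angle:3d} deg -> {predicted_rt(angle):.0f} ms")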
Evidence against mental imagery:
- Ambiguous pictures: Subjects who are presented with one interpretation of an ambiguous figure are unable to report the alternative interpretation using only mental imagery. (Sternberg, Fig. 7.4)
- Detecting novel objects: Subjects are unable to report novel interpretations of the objects composing a familiar picture using imagery, such as reporting the presence of a parallelogram in a Star of David. (Sternberg, Fig. 7.3)
- Semantic labeling: Semantic labels have significant effects on interpretations of objects. (Sternberg, Fig. 7.5)
Other evidence
A range of findings purports to address
which of these classes of models is the right one:
- Naming vs. categorization: People are faster at naming words than pictures, but faster at categorizing pictures than words.
- Picture superiority effect: People both recognize and recall previously presented pictures better than previously presented words.
- Cross-modal priming: Words prime both pictures and words in a categorization task, but pictures prime only pictures. Conversely, pictures prime words better than words prime pictures in a recognition task.
Cog Neuro Evidence
Double dissociation: If one
thinks that two different mechanisms are responsible for two different
phenomena, one must find complementary patients: one who is selectively
impaired at the first phenomenon but spared at the second, and another
who shows the reverse pattern.
For multiple semantics:
- Hemispheric specialization: There seems to be evidence that the left hemisphere is specialized for language processing and the right hemisphere for visuo-spatial processing.
- Optic aphasia: Patients who are unable to name objects that are presented visually, but are able to name them from other modalities such as touch. These patients still seem to have access to semantic information from vision, as evidenced by priming and by an ability to properly mime object use and draw objects on verbal request. Normal vision is also preserved. Example: Beauvois’ (1982) color aphasic.
- Other aphasias: Other patients have been found who show patterns of deficits similar to those of optic aphasics, but in different sensory modalities, e.g. tactile aphasics or auditory aphasics.
- Semantic access dyslexia: These patients are impaired at reading words. With some of these patients, performance can be improved by giving a semantically related verbal prompt, whereas showing them a semantically related picture does not help.
For the Unitary Content Hypothesis:
- Category-specific impairments: Patients have been found who are selectively impaired at retrieving semantic information about living things vis-à-vis man-made objects, and vice versa. Because the impairment appears regardless of input modality, it has been taken to suggest a single, amodal store organized by semantic category.