What shape are Snoopy's ears? Which is darker green, a Christmas tree or a frozen pea? If you were driving to the airport and the road was blocked halfway there, what would be the shortest detour? When most people try to answer questions like these, they report visualizing the objects or scene - for example, "seeing" Snoopy's head and droopy ears. Perception is the registration of physically present stimuli, whereas imagery is "seeing" patterns that arise from memory, in the absence of the appropriate sensory input.
As the examples illustrate, visual mental imagery plays an important role in memory and spatial reasoning. It also aids linguistic comprehension, learning motor skills (visualizing oneself practicing a new skill can actually improve one's later performance of it), and even symbolic reasoning (as many mathematicians and physicists, including Albert Einstein, have attested). These functions of imagery require the brain to reconstruct, present for "recognition," and put to further use information stored from past, first-hand experience. Imagery stands at the crossroads of perception, memory, reasoning and emotion; by studying imagery, we gain insights into the commonalities and differences among these faculties.
Because imagery plays such a central role in so many types of activities, it has long been a topic of interest in psychology, psychiatry, and philosophy, and has recently become a focus of one of the most rapidly growing branches of neuroscience: cognitive neuroscience. An understanding of the neural bases of imagery will not only bear on fundamental issues about mind and brain (such as how conscious experience arises from neural activity), but will also illuminate the specific nature of the mental deficits that follow brain disorders, and will have practical implications for teaching in the classroom and for eliciting testimony in legal proceedings.
Until recently imagery was studied by asking people to introspect (to "look within"), and there was no way to validate or verify their reports. The study of this topic began to move ahead with the advent of modern cognitive tests, and took a giant leap forward with the availability of modern brain-scanning techniques.
The cognitive revolution
The 1970s were a watershed period in the study of cognition. During this
period, researchers treated the mind as if it were a complex computer program.
Such programs not only store information in specific ways, but operate on
this information to perform specific tasks. This conceptualization led researchers
to develop new behavioral techniques to study mental events. The logic was
the same as that of using a cloud chamber to study cosmic rays: rather than
observing the phenomena directly, one records their footprints. Instead
of tracks through a cloud chamber, cognitive researchers measured response
times, error rates, and judgments as subjects performed specific tasks.
Consider three such findings from my laboratory.
Visualize a horse, as seen from the side, and mentally fix your gaze on its tail. Now decide whether its ears protrude above the top of the skull. If we had measured your response time, it would have been longer than if you had started at the center of the horse's body instead of its tail. Indeed, you would have taken even less time if you began by focusing your mind's eye on the horse's head. Response time increases linearly with the distance to be scanned - even though subjects have their eyes closed. Subjects report that they shift their mental gaze over objects, just as they would when scanning a scene.
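The scanning finding amounts to a simple linear relationship between scan distance and response time. The sketch below illustrates it; the intercept and slope are hypothetical values chosen only for illustration, not measurements from the experiments described here.

```python
# Illustrative linear model of mental scanning time.
# The intercept and slope are hypothetical, chosen only to show the relationship.

def scan_time(distance_cm, intercept_ms=500.0, slope_ms_per_cm=30.0):
    """Predicted response time grows linearly with the distance scanned."""
    return intercept_ms + slope_ms_per_cm * distance_cm

# Starting at the tail (far from the ears) predicts a longer response
# than starting at the head (near the ears).
tail_to_ears = scan_time(20.0)
head_to_ears = scan_time(2.0)
assert tail_to_ears > head_to_ears
```

The key point is only the linearity: doubling the distance to be scanned adds a fixed increment of time, just as it would if attention swept across a real scene at a constant rate.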
Now try to visualize a honeybee as if you were holding it at arm's length (this is a friendly bee, with no intention of stinging). What color is its head? If we had measured your response time, it would probably have been longer than if you had begun by visualizing the bee very close up. When an object is "seen" at a very small size or at a distance, people often report that they have to "zoom in" to "see" a detail.
Finally, consider the following: Visualize an elephant off in the distance, standing so that you see it from the side. Now imagine that you are walking up to it, so that the animal seems to loom. Is there a point at which the edges seem to blur, and the elephant begins to "overflow" your image? Now try the same thing with a rabbit. Most people report that they can get "closer" to the rabbit than to the elephant before it starts to overflow. When subjects are asked to position a tripod at the distance from a wall at which an object seems to overflow the image, they position the tripod farther for larger objects. Indeed, the scope of the object (the "visual angle" it subtends) at the point of overflow remains constant, just as if there were a fixed-size "screen" on which the images are displayed.
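The constant-angle finding follows from elementary trigonometry: if the "screen" subtends a fixed maximal angle, the distance at which an object just fills it must grow in direct proportion to the object's size. The sketch below works this out; the particular maximal angle and the object sizes are hypothetical, used only to show the geometry.

```python
import math

# Hypothetical maximal visual angle of the mental "screen" (illustrative value).
MAX_ANGLE_DEG = 20.0

def overflow_distance(object_size_m, max_angle_deg=MAX_ANGLE_DEG):
    """Distance at which an object of the given size just fills the fixed angle.

    From the geometry of a viewed object: angle = 2 * atan(size / (2 * distance)),
    so at a fixed angle, distance = size / (2 * tan(angle / 2)).
    """
    half_angle = math.radians(max_angle_deg) / 2.0
    return object_size_m / (2.0 * math.tan(half_angle))

elephant = overflow_distance(3.0)   # a roughly 3 m tall elephant
rabbit = overflow_distance(0.3)     # a roughly 0.3 m tall rabbit

# The larger object overflows at a greater distance, and the distance
# scales linearly with size: 10x the size means 10x the distance.
assert elephant > rabbit
assert abs(elephant / rabbit - 10.0) < 1e-9
```

This is exactly the pattern in the tripod experiment: overflow distance is proportional to object size, which is what a fixed-angle "screen" predicts.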
The metaphor of a screen can be used to interpret all three sets of results. If visual mental images are spatial patterns, like those on a TV screen, then it makes sense that one would have to shift attention incrementally to get from one place to another. Similarly, if the screen has a grain (defined by the number of pixels per inch on a TV screen), then parts of smaller objects will be more difficult to make out. And if the screen has fixed edges, then larger objects will seem farther away when they just fill the screen.
However, it is clear that there is no actual screen in the head. If there were, who would look at it? There is no "little man" in the head, nor is there light to see by. The screen metaphor is seductive, but misleading. Turning to the brain offers a better way to understand such results, and allows us to extend them far more deeply.
The cognitive neuroscience revolution
In the late 1980s positron emission tomography (PET) began to be used to
study cognitive phenomena. For the first time, researchers could discover
which specific parts of the brain were activated when subjects performed
particular tasks. It has long been known, for example, that some parts
of the brain are "topographically mapped." Patterns that fall on the retina
are physically laid out on those parts of the brain. This is intriguing
because, like the screen metaphor, these parts of the brain are spatially
organized, have a grain (which arises because the neurons do not distinguish
among inputs from nearby locations), and the patterns can extend only so
far before they "overflow" the structure. We can begin to explain the behavioral
results obtained by cognitive investigators if images rely on such structures.
To test the explanation, our group at the Massachusetts General Hospital PET laboratory took advantage of a finding by Peter Fox, now of the University of Texas at Austin, and his colleagues. They asked subjects to look at different-sized patterns while their brains were scanned, and found that the larger the pattern, the more anterior the activation was along the calcarine sulcus. This crease in the brain is in the middle of the first cortical area that processes visual input, which is organized spatially (with more peripheral parts of the retina projecting to more anterior parts of the region).
We reasoned that when the subjects visualized letters at a tiny size, the parts of the brain that process input from the small central part of the eye would have to preserve greater spatial variation than when the letters were visualized at the larger size. Thus, we expected more activation for the small images than the large ones in these regions. On the other hand, when the subjects visualized letters at a large size, the parts of the brain that process more peripheral input should be activated - and these areas are not activated by the small images.
And in fact, precisely these results were obtained, even though the subjects kept their eyes closed throughout the task. Indeed, the precise loci of activation were about what one would expect if subjects visualized the small letters at about the size of letters on a typical page held at arm's length, and visualized the large images at about the sizes we measured in our earlier, behavioral studies of image overflow. Moreover, the greater the blood flow in the visual cortex, the faster the subjects performed this task.
Paradox lost
Turning to the brain allows us to dispel an age-old conundrum: What "looks"
at the image? Given that the parts of the brain activated during early stages
of perception are also activated in imagery, there is no problem; the brain
processes signals, and if imagery engenders the same kinds of signals as perception,
they can be processed the same way in both cases.
But how do mental images arise? In other experiments, we directly compared the areas of the brain activated during perception and imagery. In the perceptual experiment, subjects decided whether names were appropriate for objects that were seen from normal or atypical points of view. We predicted that subjects would have to search for additional properties when the view was atypical, and that this search process would be very similar to the process used to form images.
As evident in the illustration below, similar brain areas are in fact recruited during the two activities. Such findings explain why subjects can confuse having visualized an object with actually having seen it, which is of interest to those who study the reliability of eyewitness testimony.
Mental imagery is a bridge from perception to the mind. It is the cognitive faculty "closest to the neurology" because so much is now known about the neural mechanisms of perception. Given its long history, it seems fitting that the study of imagery is now one of the topics producing insights into how the brain gives rise to the mind.*
Dr. Kosslyn is Professor of Psychology at Harvard University and author of Image and Brain: The Resolution of the Imagery Debate (MIT Press, 1994).