Introduction
Historical Precedents
Theories
Nomic Covariation
Advantages of Covariation Theories
Covariation Theories and the Disjunction Problem
Functional Role Semantics
Advantages of Functional Role Semantics
Problems for Functional Role Semantics
Causal Theories and Cognitive Science
Bibliography
Other Internet Resources
Related Articles
Introduction
Very few people deny the platitude that minds causally interact with the world. Causal theories
of mental content suppose that such causal connections not only facilitate the
mind's interactions with the world, they provide the basis of mentality. A
large part of understanding the
nature of the mind is understanding how it comes to be aware of the world.
Theories of mental representation develop in the context of some or other representational
theory of mind (RTM). A representational theory of mind holds that
mental states have intentionality (are about the world) in virtue of a representational
relationship holding between the mental state and the object. Most
philosophical theories
of mental content hold that mental states represent the world in virtue of
the sorts of causal relationships those states have within the mind and/or with
the world. Historically, causal theories have attracted monistic physicalists: philosophers interested in understanding all mental properties, states, and/or events (and all of existence) in terms of physical properties, states, and/or events.
Such naturalistic philosophers seek to understand the mind as purely
physical in origin and nature. The main body of this article discusses the historical precedents of contemporary
causal theories of mental content as well as the two major categories of causal theories of mental content current in the philosophical
literature. In each case the article outlines the theory, discusses
its advantages and strengths, raises commonly perceived problems for the
theory, and presents responses by proponents of the theory.
Historical Precedents
Though many theories of mental content are causal theories, not all theories of mental representation are causal in nature. For example, Locke supposes that ideas of primary qualities represent qualities in an object not through reliable causation, but because they are similar to those qualities (share the same properties). Contemporary theories of mental content have predecessors dating back to Book III of John Locke's Essay Concerning Human Understanding, possibly Aristotle's De Anima, or even to Plato's Theaetetus. Locke's notion of secondary qualities (a quality or power of the object to cause particular ideas in us which bear no similarity to the object, e.g., color) looks very much like a contemporary causal theory based upon reliable, causally mediated covariation (Cummins 1989).
Aristotle’s discussion in Book II of De Anima suggests that,
for example, color in the object is different from our sensations of color,
which are nevertheless reliably caused by light hitting the object.
Similarly, Plato’s analogy of perception as the matching of sensations caused
by the world to the impressions (knowledge) upon a ball of wax also suggests
that Plato entertained the notion that the sharing of essential natures between
mental representations and objects in the world allows one to represent those
objects as instances of kinds. Contemporary causal theories of mental representation
have developed in the theoretical context of explanation in cognitive
science. It can be helpful to view the intended role of such causal
theories in computational explanations
in cognitive science. However, contemporary causal theories can be
understood independently of their role in cognitive science.
Sidebar on Explanations in Cognitive Science
Theories
There are two general causal theoretical approaches to a theory of representation
in the contemporary literature.
Neither theoretical approach regarding the representation relation has gained general
acceptance. However, there are several
agreed-upon constraints for a theory of representation.
First, the representation relation must be consistent with the physicalistic
nature of cognitive science and science generally. Second, the relation
must be present and explanatory in accepted explanations within cognitive
science. (An appropriate characterization of "accepted" explanations is somewhat controversial.) Philosophers in particular would also like
to see the accepted theory of representation explain many of the widely
accepted properties of belief. Block (1986) and Fodor (1988) provide
discussions of the potential problems, the solutions to which
many philosophers hope to find in a theory of mental representation.
One theoretic approach to articulating the representation relation, "Nomic
(law-like) covariation" or "covariation," postulates a simple causal relationship between the object or
property and the state that represents the object or property. The
other approach, "functional role semantics," hypothesizes that a state has
content in virtue of the state’s occupying a particular position in a complex
web of causal relationships characterizing the cognizer’s functioning.
Candidate content-fixing causal relationships include causal relationships
within the cognizer (i.e., relations between brain states) and/or without
(i.e., relations to the distal environment). Both functional role semantics
and covariation theories satisfy the first, physicalistic constraint above by hypothesizing that a cognizer's states represent
the distal environment solely as a result of the specific sorts of causal
connections had by those states. The theories diverge in terms of the
specific causal relationships each emphasizes.
Covariation
Covariation theories hypothesize that a state represents an object or property in
virtue of a causal connection between the object or property in the world and the state that represents that object
or property within the cognizer. Information theory, as well as work within psychophysics, inspires contemporary covariation theorists. Fodor, for instance, notes that:
...there are circumstances such that red instantiations control ’red’ tokenings
whenever those circumstances obtain; and it’s plausible that ’red’ expresses
the property red in virtue of the fact that red instantiations cause
’red’ tokenings in those circumstances;
and the circumstances are nonsemantically, nonteleologically, and nonintentionally
specifiable.
In fact, they’re psychophysically specifiable. (Fodor 1988,
p.112)
Covariation theories, as a result, seem to satisfy the
second constraint
upon theories of representation, i.e., that the relation must be present
and explanatory in accepted explanations within cognitive science.
For example, Hubel and Wiesel (1977) investigate the representational content
of cells in the striate (visual) cortex by monitoring the activity of those
cells, looking for preferential relationships between the activity of these cells and the presence of properties in the visual field. According to covariation theories, a system's states represent those objects and/or properties of the distal world with which they covary.
Specifically, covariation theories assign contents to states via some
version of the following definitions:
A state represents an object or
property iff the
probability of the object or property given the state equals 1. (Dretske 1981)
"...a symbol
expresses a property if it's nomologically necessary that all and only
instances of the property cause tokenings of
the symbol." (Fodor 1988, p. 100)
For any state, call it Sc, and object or property, call it C:
State Sc represents C
iff the system tokens Sc when, only when, and because of
C. (Cummins 1989)
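The following sketch restates these three definitions in a single notation, with $S_C$ for the candidate state and $C$ for the represented object or property. It is an illustrative rendering only, not a formalism any of these authors themselves use:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Illustrative restatement of the three covariation definitions above
% in one notation: S_C is the candidate state, C the object or property.
\begin{align*}
&\text{Dretske (1981):}  && S_C \text{ represents } C \iff \Pr(C \mid S_C \text{ is tokened}) = 1.\\
&\text{Fodor (1988):}    && S_C \text{ expresses } C \iff \text{it is nomologically necessary that}\\
&                        && \quad \forall x\,\bigl(x \text{ causes a tokening of } S_C \leftrightarrow x \text{ instantiates } C\bigr).\\
&\text{Cummins (1989):}  && S_C \text{ represents } C \iff S_C \text{ is tokened when, only when,}\\
&                        && \quad \text{and because } C \text{ is instantiated.}
\end{align*}
\end{document}
```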
Advantages of Covariation Theories
Intuitive Appeal
The intuitive appeal of covariation theories stems from the idea that a cognizer
represents an object or property by being causally "in tune" with that object
or property. The causal connection provides a concrete connection between
the object/property and the state which represents it. It also ensures
that the state represents elements of the cognizer's distal
environment.
Punctate Content
Covariationists such as Jerry Fodor advocate the approach because
covariation assigns content to individual states
independent of how the cognizer operates upon those states (inferences it
can make) and/or the content of other states with which the state might interact. Fodor refers to this assignment of content as "Punctate
Content," or "atomic content." Among the advantages claimed by Fodor
for punctate content is that it allows for content identity across individuals
who have quite different theories regarding some object or property.
Similarly, Fodor argues, only punctate content allows people to refer to real objects even when they have great numbers of false beliefs about them. For instance, Fodor claims (1992) that Aristotle thought about
and talked about the same things we refer to by "stars" even though he falsely
believed stars to be relatively close and rotating around the Earth on glassy
spheres.
Systematicity and Compositionality
Whereas the punctate
content argument for covariation theories supposes that fixing the content of
representational states in isolation from inferences in which the state might
participate lends nomic covariation an explanatory advantage with regard to
content identity, the systematicity and compositionality
argument supposes that covariation theories better explain
perceived truths about the combinatorial properties of language and
concepts. Compositionality is
the theory that the meaning of a complex expression in a language results
from the meanings of its constitutive elements and their relationship within the
complex expression. Compositionality
plays a central role in many linguistic theories, since its supposition for
both language and thought provides a fairly straightforward explanation of
the human ability to grasp an enormous number of different thoughts of varying
complexity and their corresponding linguistic expressions. For instance,
because we understand the individual elements like "cup" and "coffee", we
understand the complex expressions "cup of coffee," "hot coffee," "coffee
gives me the shakes," etc. One explains our understanding by noting
that the meaning of these complex sentences is built-up from the meaning
of their constitutive elements and their combination within the complex sentence. Furthermore, languages and thoughts seem
to have a systematic structure to their compositionality. For example,
one seemingly cannot have the thought that "coffee has more kick than
tea" without also being capable of thinking that "tea has more kick
than coffee." Proponents of covariation theories (Fodor and Pylyshyn 1988, Fodor and McLaughlin 1991) argue that such seeming facts about
thought and language are naturally explained by covariationist semantics.
See Cummins et al. (2001) for a forceful rebuttal.
Epistemology and Representation
A final perceived advantage of covariation approaches is that they provide a clear-cut mechanism through which a cognizer
can come to know about an object or property in the world in virtue of their representational capacities. A cognizer that represents an object or
property in the environment does so insofar as the cognizer can reliably
detect the object’s presence. (Dretske, 1989)
Covariation Theories and the Disjunction Problem
The most widely discussed problem for covariation theories
is called the "Disjunction Problem." According to covariationists, a state
represents an object or property when the system tokens (enters into) that
state when confronted with the object or property and only when confronted
with that property. However, suppose that a cognizer tokens a state, Sc, in all of its cat interactions. According to the "when" clause of the covariationist definition, the cognizer's state appears to represent cathood. Unfortunately, one night the cognizer tokens an Sc when seeing a skunk in the darkness. The "only when" clause of the definition prima facie dictates that Sc never
really represented cathood simpliciter. Rather, the cognizer has always
represented a property that we can only describe using the disjunction, "cat
or skunk."
The disjunction problem poses two related difficulties for covariation theories: First, the theory seems to dictate counterintuitive representational
contents for states. If one has beliefs about cats (e.g., cats are domesticated
felines that have a number of distinct breeds), then covariation seems
to dictate that those beliefs have always been about things having the property cat or skunk. Second, the disjunction
problem seems to demonstrate that covariation cannot account for misrepresentation,
since any seeming case of misrepresentation by a cognizer becomes a correct
representation of a disjunctive property under covariation. In other
words, one never mistakenly believes that one sees one's mother at the corner; one always correctly believes that "mother/other" is at the corner.
The Two-Case Strategy
Theorists have explored a number of solutions to the disjunction problem.
However, all of these solutions rely upon some form of the idealization strategy: A
given solution will separate cases of the tokening (occurrence) of a state
into two groups, one class in which content is already fixed and
representational error can occur (normal conditions), and a second class (ideal circumstances)
in which content is fixed by perfect covariation. There are several important
versions of this solution.

Ideal Conditions
To solve the disjunction problem,
one must modify the basic covariation theory so that the cognizer represents
"cat" and not "cat/skunk" despite occasionally mistaking
skunks for cats. Advocates of the ideal conditions solution to the disjunction problem
hold that under ideal perceptual conditions (i.e., in good
light, at close distance, etc.) one can distinguish cats from skunks. Thus,
they suggest cognizers represent cats as cats because of their discriminatory
capacities under ideal circumstances. Idealization in mainstream science is both the inspiration behind the covariationist’s
idealization strategy and its source of tacit plausibility. In normal cases
of scientific idealization theorists ignore certain negligible parameters
in real systems to formulate a law. The resulting law, though not strictly
true of any actual system, proves predictively adequate, and quantifies a
real relation in actual physical systems. For example, real gases are
not composed of Newtonian molecules in that they are not point masses, nor
are their collisions perfectly elastic. Likewise, heat is not the only energy
source present in gases. In most cases, however, other parameters (like electromagnetic
forces) prove negligible. One idealizes away from such negligible parameters
to quantify the inverse relationship between, on the one side, the pressure
and volume of a gas, and on the other side, the temperature and number of
moles. Idealizing away from other parameters results in the ideal gas law:
PV = nRT.
The move toward idealization in response to the disjunction problem attempts
to emulate noncontroversial cases of scientific idealization like the above-described
ideal gas law. The basic idealization move defines representation as follows:
A state Sc represents Cs as Cs iff under
ideal circumstances the system tokens an Sc when, only when, and because Cs are present.
On the idealization line, Sc represents cats as cats
since, under ideal circumstances, the cognizer tokens Sc when, only when, and because of the presence of instances of cathood. State
Sc does not represent "cat or skunk" because under ideal
circumstances skunks do not cause the cognizer to token Sc. Such an idealization from error, admits the covariationist, breaks
down in abnormal circumstances. Nevertheless, the idealization allows
psychological laws to capture a real relation (the representation relation)
in actual systems. Moreover, since most cases are close to ideal circumstances,
psychological laws utilizing idealized covariation prove predictively
adequate.
The covariationist’s ideal conditions solution divides tokenings
of a state by a system into content-imbuing (ideal) and content-fixed (malfunctioning/atypical)
classes. Covariationists assert that one legitimately idealizes away
from error because errors prove coextensive with the cases of malfunction and/or
atypical situations from which one legitimately idealizes. For example,
if one pulls a weighted spring hard enough to bend or break the spring, it
will no longer obey the standard harmonic oscillation equation. Such
spring systems do not undermine the basic physics of damped harmonic oscillatory
systems because the circumstances are such that the spring system
cannot operate qua spring system. The same holds for cognitive
malfunctions: One's tokening of an Sc as a result of an amphetamine-induced psychosis does not introduce disjunctive content because amphetamine-induced psychosis is a case of mental malfunction.
Of course, not all skunk-caused instances of Sc will trace their origins to malfunction. One could token an Sc as a result of the same cat-like features that prompt tokenings of Sc for cats. Dismissing such look-alike cases as malfunctions requires that one also dismiss cat cases as malfunctions. If one focuses exclusively upon malfunction in specifying ideal conditions, all cognition becomes malfunction. Consider another example.
[Figure: the assimilation effect]
Subjects normally judge the right side of the above figure to have a much
darker shade of purple than the left side. In fact, the surface reflectance
of the purple areas of the figure remains uniform (it is all the same color
of purple). Psychologists refer to the mistaken judgment as "the assimilation
effect". Psychologists explain the assimilation effect by attributing
the mistake to the pooling together of signals from several retinal cells.
In most circumstances, pooling reduces equivocation (error or noise) arising
from false signals by individual retinal cells. However, when one
closely intersperses the dark and light patterns throughout a visual field,
pooling serves to inhibit or to excite cell firing, resulting in a misrepresentation
of relative shading. So, a case of normal functioning--in fact, functioning
that acts to reduce error--results in misrepresentation.
Covariance theorists respond to such normal functioning cases by
appealing to atypical conditions. For instance, when the pressure upon
a gas exceeds atmospheric level (1.013 x 10^5 N/m^2), or when its temperature
becomes too great, the negligible parameters ignored by the idealized gas
law become non-negligible. The predictive accuracy of the idealized
gas law plummets, though the gas does not malfunction. Nevertheless,
these cases do not count against the idealized gas law because the circumstances
are atypical. The covariationist likewise claims that circumstances
are perceptually atypical, that is, less than ideal, when the system tokens
an Sc in response to a skunk.
The covariationists’ move looks suspiciously circular unless they can specify
an independent means of ruling out cases of similar features as atypical.
In other words, one must define ideal conditions so that malfunction and atypical
circumstances prove coextensive with error. But one cannot avail oneself
of the notion of error, nor of other intentional or semantic notions in formulating
and motivating the definition. One must have a reason for labeling
the skunk to Sc cases as less than ideal, and the reason
cannot be that skunks cause cat tokens (Sc) in feature similarity
cases. For example, in the movie "The Crying Game" an Irishman trying
to escape from the IRA becomes romantically involved with a woman.
However, much to the surprise of the Irishman (and the audience) this woman
is actually a man. The reaction of the Irishman clearly shows
he misrepresented the gender of his romantic partner. Yet, the Irishman
did see his partner in good light, at close distance, etc. The natural
solution to this difficulty attempts to take advantage of the idea that there
are conditions under which such a sex difference would not escape notice.
However, in specifying such conditions as ideal conditions for this case,
one must be guided in a circular manner by one’s knowledge of the property
that the state actually represents.
Additionally, critics (Wallis 1994 and 1994a) argue that
appealing to idealization to defeat the disjunction problem is strongly and
negatively disanalogous with successful uses of idealization in science.
For example, in order to explain misrepresentation in cases of malfunction
or atypical circumstances, the covariationist must rely upon the content
dictated by idealized covariation. When a skunk causes an
Sc because of bad lighting, the covariationist must suppose
that Sc represents cats in order to explain why the person
misrepresented the skunk as a cat. Thus, for the covariationist the
results of the idealization are applicable in conditions which violate the
presuppositions of the idealization. In contrast, when the pressure
upon a gas exceeds atmospheric level (1.013 x 10^5 N/m^2), or when its temperature
becomes too great, the negligible parameters ignored by the idealized gas
law become non-negligible. In these atypical circumstances physicists
will agree that the ideal gas law does not apply since the presuppositions
of the idealization do not hold. Thus, while the mainstream use of
idealization is limited to ideal or close to ideal circumstances, the covariationist
cannot accept such limitations.
Learning Periods
Fred Dretske (1981, 1989) suggests that content gets
fixed for a state during a learning period. In this period, a cognizer
develops a perfect causal connection between the state and the object or
property with the help of an instructor who provides examples and corrections.
Once the learning period ends, the cognizer’s state has a fixed content and
tokenings of the state in cases where the object or property is not present
count as misrepresentations.
A state Sc represents Cs as Cs iff
in the learning situation the system tokens an Sc when, only when, and because Cs are present.
Or, to adopt a probabilistic formulation more consistent with Dretske's work:
A state Sc represents Cs as Cs iff in the learning situation the probability of Cs given that the system tokens an Sc equals 1.
Fodor (1988) and others have criticized this approach on
two grounds. First, Fodor argues against the existence of a principled distinction between
learning and non-learning periods. Hence, if Dretske cannot offer a
principled ground for
distinguishing between learning and non-learning periods, then Dretske
has no basis upon which to call some tokenings of the state content-imbuing and others representing
or misrepresenting. Second, even if one could rigorously specify a learning period,
learning during this period seems to provide no principled distinction between the univocal and disjunctive
causal connections. Critics ask, "if one tokens Sc for both cat and skunk in the post-learning period, why should one suppose that the causal connection created in the learning period actually holds between cats and Sc and not between cat/skunk and Sc?" Worse still, why isn't the content "learning period cat or post-learning period cat/skunk"?
Philosophers also offer the following standard criticisms of the
learning periods approach. First, a cognizer can prima facie
only represent those objects/properties for which it has had a learning
situation. Thus, on the learning approach there can be no innate
knowledge. Second, cognizers do not seem to learn to identify properties
and objects with perfect reliability as required by Dretske. Finally,
since learning is a historical fact about a cognizer and a duplicate does not share one's learning
history, the states of one's
molecule-for-molecule duplicate would seem to lack the content had by one's own states. (Such duplicate examples appear in Burge 1979, Cummins 1989, Davidson 1987, and Putnam 1975.)
Teleological Accounts
Advocates of a teleological solution to the disjunction
problem, such as Dretske (1988), Millikan (1983 and 1986), Neander
(1995 and 1996), Papineau (1984), Shapiro (1992 and 1996), and Sterelny (1990), suggest that evolutionary history (evolution or learning for Dretske) determines content. For example,
a frog captures and eats any ambient moving dot in its visual field.
As a result, hungry frogs eat ball-bearings rolled in front of them.
Advocates of the teleological account claim that frogs misrepresent ball-bearings
as flies because the function of a frog’s visual cells is determined by the
uses of those cells responsible for a frog’s ability to propagate its genome,
i.e., uses involving fly capturing. Such an account would define
representation as follows:
A state Sc represents Cs as Cs
iff a function of tokens of Sc in the cognizer is to indicate
when Cs are present.
One caveat to the above definition: the notion of representation for teleosemanticists might be more like representing the presence of C at a time.
Two major distinctions appear among teleological theorists: (1) whether the content-determining functions are those of
belief-generating mechanisms generically or of individual beliefs specifically and (2) whether
one should assess the function of states relative to producers of those states, or to the consumers of the states.
For example, Millikan holds that adaptational history determines the function
(and hence, the content) of
specific beliefs. The frog's visual state "fly" has an indicative function and the content fly in virtue of the state's past co-occurrence with flies.
Others (e.g., Papineau) hold that selection determines the function of belief-making mechanisms. Thus, the state means fly because the mechanism that produced the state, the frog's visual system, has the selective function of indicating the presence of objects in the distal environment. Some teleologists (e.g., Millikan)
likewise emphasize that the
function of the state for consumers of the state determines
content. Others suppose that the function of the producers of the
state determines the state's content. Thus, for Papineau the function of
the visual system (the producer of the state) determines its content.
In contrast, the fact that Sc triggers the frog's
capturing mechanism (the consumer) gives Sc its
indicative function on Millikan's account.
Fodor (1988) criticizes teleological accounts on the ground
that evolution does not select with sufficient precision to account for typical
univocal content claims. If a frog represents ambient dots as "fly or
ball-bearing" in a fly-rich, ball-bearing-poor environment, then that disjunctive
representational content would account for the frog’s ability to propagate
its genome. Hence, claims Fodor, evolutionary history will not favor
"fly" over "fly-or-ball-bearing" as the content of frog visual cells.
Two other objections to teleological accounts appear in
the literature. First, most artifacts (e.g., compact discs, SUVs, etc.)
in the human environment were not present during a significant portion of
human evolutionary history. As a result, advocates of teleological solutions
to the disjunction problem must explain a huge percentage of the representational
capacities of humans, including the representations of many ordinary objects
like chairs or beer, by definition. Second, because teleological accounts
appeal to the evolutionary history of a cognizer to explain representational
abilities, any seeming representational abilities without such a history
would not have representational content on the teleological account. For example, if a molecule-for-molecule duplicate of Millikan were to spontaneously appear, it would seem to have all of her cognitive/representational
abilities. Yet, on Millikan’s account, her exact double would not have
any states with representational content. (Cummins, 1989, Davidson, 1987).
Asymmetric Dependence
Fodor’s (1990) asymmetric dependence theory suggests another
twist on the general idealization strategy. In addition to a nomic
connection between a state, Sc, and an object/property
(cathood), asymmetric dependence theories define representation in terms of
two counterfactual scenarios:
If skunks cause Sc tokenings, then (1) breaking the skunk to Sc connection does not break the cat to Sc connection, and (2) breaking the cat to Sc connection does break the skunk to Sc connection.
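Rendered schematically, the two conditions look as follows. This is an illustrative sketch, not Fodor's own notation; the box-arrow here stands in for the counterfactual conditional, and the double arrows merely abbreviate "cause tokenings of":

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
% Illustrative rendering of the asymmetric dependence conditions.
% \cf abbreviates the counterfactual conditional; not Fodor's own notation.
\newcommand{\cf}{\mathrel{\Box\!\!\rightarrow}}
\begin{document}
Let $\mathit{cat} \Rightarrow S_c$ abbreviate ``cat instantiations cause $S_c$ tokenings''
and $\mathit{skunk} \Rightarrow S_c$ abbreviate ``skunks cause $S_c$ tokenings.''
Then $S_c$ represents \emph{cat} rather than \emph{cat-or-skunk} only if:
\begin{align*}
(1)\quad & \neg(\mathit{skunk} \Rightarrow S_c) \cf (\mathit{cat} \Rightarrow S_c)
  && \text{(the cat connection survives breaking the skunk connection)}\\
(2)\quad & \neg(\mathit{cat} \Rightarrow S_c) \cf \neg(\mathit{skunk} \Rightarrow S_c)
  && \text{(breaking the cat connection breaks the skunk connection)}
\end{align*}
\end{document}
```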
As above, asymmetric dependence rules out potentially troublesome cases because
of the existence of a
counterfactual set of conditions in which subjects can distinguish cats from
skunks. Critics (Cummins, 1989, 1997, and Wallis 1995) argue that asymmetric
dependence theories fare no better than other versions of nomic covariation.
One difficulty is that the brain recognizes higher level concepts through
the detection of features. As a consequence, it appears that either there is no asymmetric dependence between the cases or, worse, the dependence runs the wrong way, from the look-alike to the representation. For instance, if I find my car by looking
for features x,y,z, then I can break the car to Scar connection
by altering the appearance of my car. But that will not break the look-alike
car to Scar connection (violating 2). If, on the
other hand, I break the look-alike car to Scar connection
(say by altering my beliefs about my car’s appearance), then it seems I do
break the car to Scar connection (violating 1). One can
tell similar
stories in terms of the normal, albeit somewhat noisy, functioning
of cells in the visual system. (Wallis 1995).
Other philosophers have advanced further objections. Adams and Aizawa
(1997) argue that Fodor's conditions commit him to the retinal projections
of properties and objects instead of the properties and objects
themselves. Antony and Levine (1991) as well as Wallis (1995) argue
that Fodor's theory runs counter to other aspects of his own account of
mentality.
Other Problems for Nomic Covariation
Semantic Reduction and Verificationism
In addition to the disjunction problem, another objection challenges the ability of covariation theories to explain the representational capacities of states with regard to objects/properties beyond low-level perceptual properties (e.g., color). One can make one's best case for law-like covariation determining the content of states representing low-level properties like red or zero-crossing. One might even extend the notion to ordinary, medium-sized objects/properties like table or cow. However, at some point, probably early on, one must suppose that states represent properties and objects of higher levels of abstraction, at least ones like electron, in virtue of their being defined (perhaps even implicitly) in terms of the representational properties of lower level states.
So
what's proposed is a sort of foundationalism. The semantics of observation
concepts is indeed special: First, in that--given an intact observer--the
nomologically sufficient and semantically relevant conditions for their
tokenings are specifiable 'purely externally'; viz., purely psychophysically.
And second, in that all the other semantically relevant symbol/world linkages
run via the tokening of observation concepts. 'Horse' means horse if
'horse' tokenings are reliably caused by tokenings of psychophysical concepts
that are in turn caused by instantiations of psychophysical properties for which
instantiations of horse are in fact causally responsible. The
causal chain runs from horses in the world to horsy looks in the world to
psychophysical concepts in the belief box to 'horse' in the belief box. 'Horse'
means horse because that chain is reliable. (Fodor 1988,
p.122)
Robert Cummins (1989, 1997) objects to this strategy, claiming that the covariation
of mental states with objects/properties requires that the cognizer represent
heuristic information either implicitly or explicitly. Thus,
covariation will not explain the representational properties of any states
but the most primitive representational states. All higher level concepts
from dog to democracy must get meaning through definition. At first glance, it seems plausible to suppose that states that represent
high-level theoretical properties like electron get their meaning
through definition in terms of lower level properties. After all, such properties
and objects often have theoretical definitions. However, the failure of such
semantic reductions has shaped the history of western philosophy and mathematics
(Quine 1951). The logical positivists, for instance, held that the meaning of
terms was just the conditions for their verification (the verificationist
theory of meaning). However, no generally accepted means of spelling out
such conditions emerged. Covariation’s critics
ask, "if the state one uses to represent crow gets its meaning from
being defined in terms of lower level properties, then why does one have
difficulty in articulating that definition in any but the most superficial
sense?" Psychological research also appears to run contrary to semantic reductionism
in that it indicates that object recognition and categorization are strongly
influenced by perceptual features, the exact make-up of which is somewhat
variable and fluid.
According to Cummins, mental states achieve covariation with cats because
we already have explicitly represented or implicit knowledge about cats (e.g.,
they are domesticated felines). Further, if covariation is not the simple,
unmediated causal relationship between the property/object and the state,
but rather the result of often complex causal interactions within the system,
then this suggests that covariation is really just another instance
of its competitor, functional role semantics. Similarly, Wallis argues
that visual object recognition is itself a cognitive task and as such appeals to
rather than explains mental representation.
Failure of Univocal Feature Detection for Low-Level Properties
Finally, some theorists like Kathleen Akins (1996) (also Churchland and
Sejnowski 1992 and Wallis 1995) argue that the various cells that perform
sensory transduction are not properly characterized as the sorts of feature
detectors required by nomic covariation. Such sensory cells do not
respond exclusively to the presence of some particular object or property,
but can and do respond to other properties. Their responses are not
all or nothing. Rather, they show "selective sensitivity" to properties
in that they respond to a wider range of stimuli, but respond more strongly
to certain stimuli. For example, rods and cones in the retina are often
portrayed as representing a certain wavelength of light. The response
curves of these cells in fact overlap dramatically.
Thus, the same response
can be elicited from a red cone by similar levels of blue or green light.
Additionally, perception is riddled by often useful confabulations by the
sensory systems. For example, rod and cone activation is represented
by averages of pooled cells almost immediately in the visual system.
This has a good effect in that it eliminates noise from sources like body
heat, but can also lead to the assimilation effect
noted earlier. Similarly, the visual system compensates for the blind
spots in each retina where the cells leave the eye by "filling in" that area.
The Problem of Depth
Covariationists intend
their definition to capture a representation relation between states of the
cognizer and objects and properties in the immediate distal environment.
However, causal covariation occurs at a number of stages in the causal chain
leading to the tokening of a state by a cognizer. At one extreme, all
states of a cognizer happen when, only when, and because of the big bang. At
the other extreme, Sc will covary with some set of patterns of
retinal stimulation. Covariationists must find a way to rule out all links
in the causal chain except the object/property in the immediate distal
environment. For example, retinal stimulation patterns might be ruled out
as candidates for content because of the multiplicity of patterns that cause a
single state.
Non-existent Objects
Explaining the content of states that intuitively represent non-existent objects
or properties poses yet another challenge for
covariation theorists. If people have states that
represent the property of being a unicorn, it would seem that those states must
covary with unicorns. Since there are no unicorns with which a candidate
state can covary, covariation theories do not seem to have a natural way to
account for such representational abilities. Fodor (1990) claims that his asymmetric dependence theory can handle such cases. Another possible strategy for the covariationist would be to define representations of non-existent objects in terms of representations of existing objects and properties.
Absent Qualia
Some philosophers, for example,
Searle (1980), Maudlin (1989), and BonJour (1991) have also argued against
covariation theories (in fact, against all causal theories) on the grounds that
these theories are satisfied in cases where, from the internal perspective of an agent, the agent lacks the necessary conscious awareness of content. For
these philosophers mental states that have intentionality (are about something)
have a qualitative aspect to them.
By
'consciousness' I mean subjective phenomenal states or modes of
awareness. The most obvious examples of conscious episodes are
sensory: tickles, pain, visual experiences, and so on. But they are not
confined to straightforwardly sensory events. There is a certain
phenomenology associated, for example, with my pondering the fact that ice is
made of water. (Maudlin 1989, p.3)
Searle (1980), for instance, argues
that one can manipulate Chinese symbols having the appropriate causal
connections to objects or properties in the world without thereby coming to
understand Chinese.
Functional Role Semantics
Whereas covariationists focus upon a single causal
connection in fixing content, advocates of functional role semantics (Block
1986 and 1987, Field 1977 and 1978, Harman 1987) suggest that the overall
network of causal relations into which a state can enter fixes its content.
Often causal roles are specified functionally/computationally. Versions of
functional role semantics include "Conceptual Role Semantics," "Procedural Semantics," and "Inferential Role Semantics." There are two versions
of this approach, designated "wide" or "long-armed" and "narrow" or "short-armed".
Narrow functional role theorists limit the causal relations that determine
content to those occurring between mental states. Wide theorists allow
connections to the distal environment and even social contexts to delimit
the content of a state.
Advantages of Functional Role Semantics
Unified Account of Content
Functional role theories have several attractive features.
For instance, functional role theorists do not need bifurcated accounts of representational content.
Recall that the law-like connection between state and property required by
covariation theories seemed the most plausible for low-level properties like
color or shape. Therefore, covariationists had difficulties accounting
for a state's ability to represent higher-level properties and uninstantiated
properties. Thus, theorists often offer alternative accounts of the
representation of these properties (like definition through low-level
properties). Functional role semanticists hypothesize that all states get their content in the same manner, via their functional
role.
Hence, functional role theorists avoid difficulties in explaining the representation of high-level
and uninstantiated properties. They need not depend upon bifurcated
accounts of representational properties.
Belief Change
Second,
functional role theorists can easily accommodate the observation that changes
in beliefs can result in changes in representational content, since changing
beliefs will often change the functional roles of states. For example,
prior to the discovery of Multiple Sclerosis, theorists thought that Polio was
the only disease associated with fever, headache, stiff neck and back, muscle pain and
tenderness, and, if there is involvement of the central nervous system, paralysis of essential muscles, such as those controlling swallowing, heartbeat, and respiration.
As a result, researchers were baffled by the seemingly vaccine-resistant cases of
Polio. The discovery of MS seemed to change the representational content
of the mental representation of Polio.
Rationality Constraints on Belief Ascriptions
Finally,
functional role theorists can capture the intuition that in trying to understand
the representational content of another cognizer’s state, one is constrained
by the heuristic that the overall set of content ascriptions must "make sense,"
i.e., be consistent with the supposition that overall, the cognizer’s interactions
with the world are intelligent or rational. For example, if someone saw an
actual UFO the covariationist might well suppose that UFO was the content of
their representational state. However, if the person were the head of the UFO-Sceptics association and had a theory debunking the UFO interpretation of their experience, the functional role theorist would likely not attribute
the content "UFO" to the person's state.
Problems for Functional Role Theories
Semantic Holism
Theorists typically raise three distinct but related objections
to all versions of functional role semantics. First, to identify
particular states in disparate individuals as states having the same representational
content, functional role semantics prima facie requires that the states
have identical functional roles. Fodor (1992, 1987), among others, has termed this view holism about content (also semantic holism).
For example, suppose that it appears that both Bill and Bob believe that
"Wallis’ article is enlightening." That is, they both point to the
same article, proclaim that it is enlightening, assign it in their class,
etc. However, the causal roles of their respective states differ in
a single respect; Bill believes "Wallis is a pompous know-it-all," while
Bob believes "Wallis is a precocious windbag." Prima facie,
functional role semantics dictates that Bill’s belief that "Wallis’ article
is enlightening," has a different content than Bob’s. Similarly, people
with disparate cognitive or perceptual abilities seem to have disparate representational
contents. Thus, the theory appears to imply that the average human
has a different concept of traffic light than color-blind humans do.
Functional role semanticists generally adopt one of two responses to the above objection. One response strategy--by
far the predominant strategy--merely takes a page from the covariation
playbook and adopts a functional role version of the two-case strategy. The functional role version of this strategy would prune
the number and/or kind of causal or computational connections necessary for
belief/content identity, thereby allowing for belief/content identity across
individuals with somewhat different causal/computational roles. In
the above case, differences in beliefs that are peripheral to the belief
that "Wallis’ article is enlightening" would not necessarily constitute the
basis for content non-identity.

While the just-rehearsed response has intuitive
appeal, critics (Fodor 1988) point to the potential difficulties in distinguishing
core (central) beliefs or other causal links from peripheral ones in a manner
that is not hopelessly unsystematic and ad hoc. For instance, what causal connections (beliefs, desires, dispositions to take action, etc.) constitute the core of one's belief that the colorless, tasteless, odorless liquid before one is water? Need one know, to take a case, of the existence of deuterium oxide (heavy water), make appropriate inferences with regard to D2O, discriminate between D2O and H2O, etc.? In distinguishing core from peripheral beliefs and inferences, one might opt for the common strategy of including only analytic beliefs and corresponding inferences in the core. Analytic truths are conceptual
truths, those things true solely in virtue of the nature of the
concept. Such a suggestion requires a real distinction between analytic
beliefs and non-analytic (synthetic) beliefs. Many philosophers feel
that Quine (1953/1970) has effectively undermined the robustness of such
a distinction. Similarly, Stich (1983) argues that judgments of belief/content
identity are not as intuitively precise as advocates of the analytic distinction
would seem to predict.
The second response to the prima facie difficulty of intuitively
identical content across differing functional roles claims that while such
beliefs are in fact non-content-identical, they are strongly content-similar.
The key idea here is that belief identity is not binary,
but ranges from completely non-identical to completely identical (Cummins
1989). The strength (or weakness) of this response lies in its ability
to accommodate the intuition that Bill’s and Bob’s beliefs have the same
content while simultaneously acknowledging the theoretical constraint that
differences in functional roles dictate differences in content.

Critics of functional role semantics (e.g., Fodor 1988)
argue that a graded notion of content identity, in addition to being counterintuitive,
undermines the ability to formulate psychological generalizations and subsume
particular cases under those generalizations. Cognitive science, they
claim, would be reduced to the unworkable notion that Bill’s and Bob’s beliefs
are, say, 97% content similar to the belief that Wallis’ article is enlightening
and hence can be, say, 97% subsumed under generalizations regarding the belief
that Wallis’ article is enlightening. Furthermore, Fodor (1992 and
2001) claims that colloquial notions of belief similarity, like "His notion of mental representation is similar to mine," as well as their theoretical counterparts, presuppose a notion of belief identity that cannot be provided
by holistic theories in any cases where beliefs diverge.
Content Fixing, Error, and Univocal Contents
A closely related objection to that of holism is the argument
that functional role theories have no non-arbitrary way of fixing content. Specifically, functional role semanticists either cannot distinguish
representational error from veridical representation, their theory results
in non-univocal content ascriptions, and/or the theory results in multiple
content ascriptions applying simultaneously to a given state. For example,
suppose that there are two worlds: one, call it Earth1, in
which there is no water, but in which D2O (or some chemically
different, but phenomenally similar substance) is plentiful; the other, call
it Earth2, where H2O is plentiful, but there is no
D2O. Bob grows up on Earth1, where he forms beliefs,
etc. about "water" as a result of interacting with D2O.
Bill grows up on Earth2, where he develops the exact same set of
beliefs, etc. about "water" as Bob, but based upon interacting with H2O. If one is a narrow functional role theorist (using only causal
connections "internal" to the cognizer), then the two men must have identical
belief content when thinking about "water". They either form beliefs, make
inferences, etc., using states that represent what we would describe using the
disjunction, "H2O or D2O", or their states represent
both H2O and also D2O (as distinct entities)
simultaneously. Most philosophers consider the latter even less intuitively
plausible than the former.
Long-armed or two-factor theories can distinguish Bill’s
and Bob’s beliefs since they include causal connections to environmental objects
and/or properties. However, suppose that Bob also has the beliefs that
"water is H2O," that "I live on Earth2," and "Earth
2 has only H2O on it’s surface." Is Bob
representing "water" as D2O, but forming a false belief
about the chemical structure of D2O? Or, is he representing
"water" as H2O and forming false beliefs about the D2O he finds in his environment. Critics assert that functional role semantics
lacks the resources to disambiguate such scenarios.
Representational Structure
Finally, Fodor and LePore (1992), Fodor and McLaughlin (1991), and Fodor and Pylyshyn (1988) raise the objection that functional role semantics seems to violate truths about the structure of language and thought like compositionality and systematicity. Compositionality is
the theory that the meaning of a complex expression in a language results
from the meanings of its constitutive elements. Compositionality
plays a central role in many linguistic theories, since its supposition for
both language and thought provides a fairly straightforward explanation of
the human ability to grasp an enormous number of different thoughts of varying
complexity and their corresponding linguistic expressions. For instance,
because we understand the individual elements like "cup" and "coffee", we
understand the complex expressions "cup of coffee," "hot coffee," "coffee
gives me the shakes," etc. One explains our understanding by noting
that the meaning of these complex sentences is built-up from the meaning
of their constitutive elements.
Fodor and others claim that the functional role of a complex,
non-idiomatic representation is not always a function of the functional roles
of its parts. As a result, on functional role theories one could represent "cup of coffee" without having the concepts and associated inferences of "cup" or "coffee". This alleged aspect of functional role representation schemes
would result in the possibility that one could represent and think about
"cup of coffee," but could not represent or think about "iced coffee," "hot
coffee," etc.. Nor could one understand or evaluate the sentence, "Coffee
is the legal drug for which we exploit the people and natural resources of
many third world countries."
Block (forthcoming) has suggested that functional role semanticists can account for compositionality by adopting context-sensitive rules of use for words. That is, meanings of words can be understood as sets of inferences (functional roles) to sentences containing the word and sets of inferences from sentences containing the word. Thus, "cup of coffee" does get its meaning from "cup" and is consistent with one's understanding "iced coffee", etc. Connectionists
with commitments to functional roles and others (Smolensky 1991 and 1991a)
have argued that connectionist systems can have constituents satisfying many
of the criteria of compositionality without adopting traditional compositional
representational frameworks.
Causal Theories and Cognitive Science
Cognitive science proceeds in the absence of a resolution of the debate over
theories of representation. However, the adoption of either nomic covariance
theories or functional role semantics does shape the research of individual
cognitive scientists. Many important debates in cognitive science have
resulted from allegiance to one or the other causal theory. For example,
one aspect of the debate between connectionist and Turing-compatible approaches to computation
is that advocates of connectionist frameworks tend to adopt functional role
semantics, while advocates of Turing-compatible frameworks tend to adopt
nomic covariation. Similarly, the debate over the nature of mental
imagery was drawn along covariance vs functional role lines. It is
widely supposed that the adoption of one or the other theory of mental content
by the majority of cognitive scientists will have a profound impact on the
field.
Bibliography
- Akins, K. (1996) "Of Sensory Systems and the 'Aboutness' of Mental
States." Journal of Philosophy. 93, 337-372.
- Allen, C. (forthcoming) "A Tale of Two Froggies." Canadian
Journal of Philosophy.
- Almog, J., Perry, J. and Wettstein, H., eds. Themes from Kaplan
, New York: Oxford University Press (1989).
- Aristotle. De Anima, in The Complete Works of Aristotle:
The Revised Oxford Translation, Oxford: Oxford University Press (1984).
- Antony, L. & Levine, J. " The nomic and the robust," In (B. Loewer & G. Rey, eds)
Meaning in Mind: Fodor and his Critics. Blackwell (1991).
- Beaney, M., ed. The Frege Reader, Oxford: Blackwell Publishers
(1997)
- Berkeley, G. Principles of Human Knowledge, in M.R. Ayers,
ed., Berkeley: Philosophical Writings, London: Dent (1975).
- Block, N. (ed.). Readings in Philosophy of Psychology, Vol. 2
, Cambridge, Mass.: Harvard University Press (1981).
- Block, N. (ed.) Imagery, Cambridge, Mass.: The MIT Press (1982).
- Block, N. "Mental Pictures and Cognitive Science," Philosophical
Review 93 (1983): 499-542.
- Block, N. "Advertisement for a Semantics for Psychology," in P.A.
French, T.E. Uehling and H.K. Wettstein, eds., Midwest Studies in Philosophy,
Vol. X, Minneapolis: University of Minnesota Press (1986): 615-678.
- Block, N. "Functional Role and Truth Conditions," Proceedings
of the Aristotelian Society suppl. (1987) 61: 157-181.
- Block, N. "Mental Paint and Mental Latex," in E. Villanueva, ed.,
Philosophical Issues, 7: Perception (1996): 19-49.
- Boghossian, P.A. "Content," in J. Kim and E. Sosa, eds., A Companion
to Metaphysics, Oxford: Blackwell Publishers Ltd. (1995): 94-96.
- BonJour, L. "Is Thought a Symbolic Process?" Synthese 89
(1991) 331-352.
- Burge, T. "Individualism and the Mental," in P.A. French, T.E. Uehling
and H.K. Wettstein, eds., Midwest Studies in Philosophy, Vol. IV, Minneapolis:
University of Minnesota Press (1986): 73-121.
- Chalmers, D. The Conscious Mind, New York: Oxford University
Press (1996).
- Chalmers, D. "Connectionism and Compositionality: Why Fodor and Pylyshyn
Were Wrong," Philosophical Psychology 6 (1993): 305-319.
- Chisholm, R. and Sellars, W. "The Chisholm-Sellars Correspondence
on Intentionality," in H. Feigl, M. Scriven and G. Maxwell, eds., Minnesota
Studies in the Philosophy of Science, Vol. II, Minneapolis : University
of Minnesota Press (1958): 529-539.
- Chomsky, N. Aspects of the Theory of Syntax, Cambridge, Mass.:
The MIT Press (1965).
- Churchland, P.M. "Eliminative Materialism and the Propositional Attitudes,"
Journal of Philosophy 78 (1981): 67-90.
- Churchland, P.M. "On the Nature of Theories: A Neurocomputational
Perspective," in W. Savage, ed., Scientific Theories: Minnesota Studies
in the Philosophy of Science, Vol. 14, Minneapolis: University of Minnesota
Press (1989): 59-101.
- Churchland, P and Sejnowski, T (1992) The Computational
Brain. Cambridge, MA: MIT Press.
- Cummins, R (1989) Meaning and Mental Representation. Cambridge:
MIT Press.
- Cummins, R. (1996) Representations, Targets, and Attitudes.
Cambridge: MIT Press.
- Cummins, R. "The Lot of the Causal Theory of Mental Content," Journal of Philosophy 94 (1997): 535-542.
- Davidson, D. "Radical Interpretation," Dialectica 27 (1973):
313-328.
- Davidson, D. "Belief and the Basis of Meaning," Synthese 27
(1974): 309-323.
- Davidson, D. "Thought and Talk," in S. Guttenplan, ed., Mind and
Language, Oxford: Clarendon Press (1975): 7-23.
- Davidson, D. "Rational Animals," Dialectica 4 (1982): 317-327.
- Davidson, D. "Knowing One's Own Mind", Proceedings of the American Philosophical
Association (1987).
- Dennett, D. Content and Consciousness, London: Routledge and
Kegan Paul (1969).
- Dennett, D. "The Nature of Images and the Introspective Trap," pages
132-141 of Dennett 1969, reprinted in Block 1981 (1981): 128-134.
- Dennett, D. The Intentional Stance, Cambridge, Mass.: The MIT
Press (1987).
- Dennett, D. "True Believers: The Intentional Strategy and Why it Works,"
in Dennett 1987 (1987a): 13-35.
- Dennett, D. "Reflections: Real Patterns, Deeper Facts, and Empty Questions,"
in Dennett 1987 (1987b): 37-42.
- Dennett, D. "Quining Qualia," in A.J. Marcel and E. Bisiach, eds.,
Consciousness in Contemporary Science, Oxford: Clarendon Press
(1988): 42-77.
- Dennett, D. "Real Patterns," The Journal of Philosophy LXXXVII
(1991): 27-51.
- Devitt, M. Coming to Our Senses: A Naturalistic Program for Semantic
Localism, Cambridge: Cambridge University Press (1996).
- Dretske, F. Seeing and Knowing, Chicago: The University of
Chicago Press (1969).
- Dretske, F. Knowledge and the Flow of Information, Cambridge,
Mass.: The MIT Press (1981).
- Dretske, F. Explaining Behavior: Reasons in a World of Causes
, Cambridge, Mass.: The MIT Press (1988).
- Dretske, F. "The Need to Know," in Lehrer, K. and Clary,
M. eds., Theory of Knowledge: The State of the Art Tucson, AZ:
University of Arizona Press (1989).
- Dretske, F. Naturalizing the Mind, Cambridge, Mass.: The MIT
Press (1994).
- Evans, G. The Varieties of Reference, Oxford: Oxford University
Press (1982).
- Field, H. "Logic, Meaning, and Conceptual Role," Journal
of Philosophy (1977) 74: 379-409.
- Field, H. "Mental representation," Erkenntnis 13 (1978): 9-61.
- Flanagan, O. Consciousness Reconsidered, Cambridge, Mass.:
The MIT Press (1992).
- Fodor, J.A. The Language of Thought, Cambridge, Mass.: Harvard
University Press (1975).
- Fodor, J.A. "Propositional Attitudes," The Monist 61 (1978):
501-523.
- Fodor, J.A. Representations, Cambridge, Mass.: The MIT Press
(1981).
- Fodor, J.A. "Introduction," in Fodor 1981 (1981a): 1-31.
- Fodor, J.A. "Methodological Solipsism Considered as a Research Strategy
in Cognitive Psychology," in Fodor 1981 (1981b): 225-253.
- Fodor, J.A. "The Present Status of the Innateness Controversy," in
Fodor 1981 (1981c): 257-316.
- Fodor, J.A. "Cognitive Science and the Twin-Earth Problem," Notre
Dame Journal of Formal Logic 23 (1982): 98-118.
- Fodor, J.A. Psychosemantics, Cambridge, Mass.: The MIT Press
(1987).
- Fodor, J.A. A Theory of Content and Other Essays, Cambridge,
Mass.: The MIT Press (1990a).
- Fodor, J.A. "Psychosemantics or: Where Do Truth Conditions Come From?"
in W.G. Lycan, ed., Mind and Cognition: A Reader, Oxford: Blackwell
Publishers (1990b): 312-337.
- Fodor, J. and McLaughlin, B. (1991). "Connectionism and the Problem
of Systematicity: Why Smolensky's Solution Doesn't Work," in T. Horgan
and J. Tienson (eds.) Connectionism and the Philosophy of Mind.
Boston: Kluwer Academic Publishers.
- Fodor, J. and LePore, E. Holism: A Shopper's Guide, Oxford:
Blackwell Publishers (1992).
- Fodor, J.A. The Elm and the Expert, Cambridge, Mass.: The MIT
Press (1994).
- Fodor, J. The Mind Doesn't Work That Way, Cambridge, Mass.:
The MIT Press (2001).
- Fodor, J.A. and Pylyshyn, Z. "How Direct is Visual Perception?: Some
Reflections on Gibson?s ?Ecological Approach?,"Cognition 9 (1981):
207-246.
- Fodor, J.A. and Pylyshyn, Z. "Connectionism and Cognitive Architecture:
A Critical Analysis," Cognition 28 (1988): 3-71.
- Frege, G. The Foundations of Arithmetic, trans. J.L. Austin,
New York: Philosophical Library (1953).
- Frege, G. "On Sinn and Bedeutung" (1892) in Beany 1997:
151-171.
- Frege, G. "Thought" (1918) in Beany 1997: 325-345.
- Geach, P. Mental Acts: Their Content and Their Objects, London:
Routledge & Kegan Paul (1957).
- Goldman, A. "The Psychology of Folk Psychology," Behavioral and
Brain Sciences 16 (1993): 15-28.
- Goodman, N. Languages of Art (2nd ed.), Indianapolis: Hackett
(1976).
- Grice, H.P. "Meaning," Philosophical Review, 66 (1957): 377-388;
reprinted in Studies in the Way of Words, Cambridge, Mass.: Harvard
University Press (1989): 213-223.
- Harman, G. Thought, Princeton: Princeton University Press (1973).
- Harman, G. "(Non-Solipsistic) Conceptual Role Semantics," in E. Lepore,
ed., New Directions in Semantics, London: Academic Press (1987): 55-81.
- Harman, G. "The Intrinsic Quality of Experience," in J. Tomberlin,
ed., Philosophical Perspectives 4: Action Theory and Philosophy of Mind
, Atascadero: Ridgeview Publishing Company (1990): 31-52.
- Haugeland, J. "Analog and analog," Philosophical Topics 12
(1981): 213-226.
- Heil, J. "Being Indiscrete," in J. Greenwood, ed., The Future of
Folk Psychology, Cambridge: Cambridge University Press (1991): 120-134.
- Horgan, T. and Tienson, J. Connectionism and the Philosophy of
Psychology, Cambridge, Mass: The MIT Press (1996).
- Horst, S. Symbols, Computation, and Intentionality, Berkeley:
University of California Press (1996).
- Hubel, D. and Wiesel, T. "Ferrier Lecture: Functional
Architecture of Macaque Monkey Visual Cortex," Proc. R. Soc. Lond. 198 (1977): 1-59.
- Hume, D. A Treatise of Human Nature, L.A. Selby-Bigge, ed.,
revised P.H. Nidditch, Oxford: Oxford University Press (1978).
- Jackendoff, R. Consciousness and the Computational Mind, Cambridge, Mass.:
The MIT Press (1987).
- Johnson-Laird, P.N. Mental Models, Cambridge, Mass.: Harvard
University Press (1983).
- Johnson-Laird, P.N. and Wason, P.C. Thinking: Readings in Cognitive
Science, Cambridge: Cambridge University Press (1977).
- Kaplan, D. "Demonstratives," in Almog, Perry and Wettstein 1989 (1989):
481-614.
- Kosslyn, S.M. Image and Mind, Cambridge, Mass.: Harvard University
Press (1980).
- Kosslyn, S.M. "The Medium and the Message in Mental Imagery," in Block
1982 (1982): 207-246.
- Kosslyn, S. Ghosts in the Mind's Machine, New York: W.W. Norton
& Co. (1983).
- Kosslyn, S.M. and Pomerantz, J.R. "Imagery, Propositions, and the
Form of Internal Representations," Cognitive Psychology 9 (1977):
52-76.
- Leeds, S. "Qualia, Awareness, Sellars," Noûs XXVII (1993):
303-329.
- Lerdahl, F. and Jackendoff, R. A Generative Theory of Tonal Music
, Cambridge, Mass.: The MIT Press (1983).
- Levine, J. "On Leaving Out What It?s Like," in M. Davies and G. Humphreys,
eds., Consciousness, Oxford: Blackwell Publishers (1993): 121-136.
- Levine, J. "On What It Is Like to Grasp a Concept," in E. Villanueva,
ed., Philosophical Issues 6: Content, Atascadero: Ridgeview Publishing
Company (1995): 38-43.
- Lewis, D. "Analog and Digital," Noûs 5 (1971): 321-328.
- Lewis, D. "Radical Interpretation," Synthese 23 (1974): 331-344.
(Reprinted, with Postscript, in Lewis 1983: 108-121.)
- Lewis, D. Philosophical Papers, Vol. I, New York: Oxford University
Press (1983).
- Loar, B. Mind and Meaning, Cambridge: Cambridge University
Press (1981).
- Locke, J. An Essay Concerning Human Understanding, P.H. Nidditch,
ed., Oxford: Oxford University Press (1978).
- MacDonald, C. and MacDonald, G. Connectionism: Debates on Psychological
Explanation, Oxford: Blackwell Publishers (1995).
- Marr, D. Vision, New York: W.H. Freeman and Company (1982).
- Martin, C.B. "Proto-Language," Australasian Journal of Philosophy
65 (1987): 277-289.
- Maudlin, T. "Computation and Consciousness," Journal
of Philosophy 86 (1989) 407-432.
- McCulloch, W.S. and Pitts, W. "A Logical Calculus of the Ideas Immanent
in Nervous Activity," Bulletin of Mathematical Biophysics 5 (1943):
115-33.
- McGinn, C. "The Structure of Content," in A. Woodfield, ed., Thought
and Content, Oxford: Oxford University Press (1982): 207-258.
- McGinn, C. Mental Content, Oxford: Blackwell Publishers (1989).
- McGinn, C. "Content and Consciousness," in C. McGinn, The Problem
of Consciousness, Oxford: Blackwell Publishers (1992): 23-43.
- Millikan, R. Language, Thought and other Biological Categories
, Cambridge, Mass.: The MIT Press (1984).
- Millikan, R. "Thought Without Laws: Cognitive Science Without
Content." Philosophical Review (1986) 95:47-80.
- Neander, K. "Misrepresenting and Malfunctioning." Philosophical Studies
79:109-41 (1995).
- Neander, K. "Dretske's Innate Modesty," in Australasian Journal of Philosophy
74:258-74 (1996).
- Newell, A. and Simon, H.A. Human Problem Solving, New York:
Prentice-Hall (1972).
- Newell, A. and Simon, H.A. "Computer Science as Empirical Inquiry:
Symbols and Search," Communications of the Association for Computing Machinery
19 (1976): 113-126.
- Osherson, D.N., Kosslyn, S.M. and Hollerbach, J.M. Visual Cognition
and Action: An Invitation to Cognitive Science, Vol. 2, Cambridge, Mass.:
The MIT Press (1990).
- Papineau, D. "Representation and Explanation," in Philosophy of Science
51:550-72 (1984).
- Papineau, D. Reality and Representation, Oxford: Blackwell
Publishers (1987).
- Peacocke, C. Sense and Content, Oxford: Clarendon Press (1983).
- Peacocke, C. "Perceptual Content," in Almog, Perry and Wettstein 1989
(1989): 297-329.
- Peacocke, C. "Scenarios, Concepts and Perception," in T. Crane, ed.,
The Contents of Experience, Cambridge: Cambridge University Press
(1992): 105-35.
- Pinker, S. Learnability and Cognition, Cambridge, Mass.: The
MIT Press (1989).
- Port, R. and Van Gelder, T. Mind as Motion: Explorations in the
Dynamics of Cognition, Cambridge, Mass.: The MIT Press (1995).
- Putnam, H. "The Meaning of 'Meaning'," in Philosophical Papers,
Vol. 2, Cambridge: Cambridge University Press (1975): 215-271.
- Pylyshyn, Z. "The Rate of ?Mental Rotation? of Images: A Test of a
Holistic Analogue Hypothesis," Memory and Cognition, 7 (1979): 19-28.
- Pylyshyn, Z. "Imagery and Artificial Intelligence," in Block 1981
(1981a): 170-194.
- Pylyshyn, Z. "The Imagery Debate: Analog Media versus Tacit Knowledge,"
Psychological Review 88 (1981b): 16-45.
- Pylyshyn, Z. Computation and Cognition, Cambridge, Mass.: The
MIT Press (1984).
- Quine, W. "Two Dogmas of Empricism" in Philosophical Review
60: 20-43 (1951).
- Quine, W. From a Logical Point of View. Cambridge: Harvard
University Press. (1953).
- Quine, W. Word and Object. Cambridge: MIT Press. (1960).
- Raffman, D. "The Persistence of Phenomenology," in T. Metzinger, ed.,
Conscious Experience, Paderborn: Schöningh/Imprint Academic
(1995): 293-308.
- Ramsey, W., Stich, S. and Garon, J. "Connectionism, Eliminativism
and the Future of Folk Psychology," Philosophical Perspectives 4 (1990):
499-533.
- Rey, G. "Introduction: What Are Mental Images?" in Block(1981): 117-127.
- Rey, G. "Sensations in a Language of Thought," in E. Villaneuva, ed.,
Philosophical Issues 1: Consciousness, Atascadero: Ridgeview Publishing
Company (1991): 73-112.
- Rumelhart, D.E. "The Architecture of the Mind: A Connectionist Approach,"
in M.I. Posner, ed., Foundations of Cognitive Science, Cambridge, Mass.:
The MIT Press (1989): 133-159.
- Rumelhart, D.E. and McClelland, J.L. Parallel Distributed Processing,
Vol. I, Cambridge, Mass.: The MIT Press (1986).
- Schiffer, S. Remnants of Meaning, Cambridge, Mass.: The
MIT Press (1987).
- Schiffer, S. "Introduction to the Paperback Edition," in Meaning
, Oxford: Clarendon Press (1988): xi-xxix.
- Searle, J. "Minds, Brians, and Programs," Brain and Behavioral
Sciences 3 (1980) 417-457.
- Searle, J.R. Intentionality, Cambridge: Cambridge University
Press (1983).
- Searle, J.R. The Rediscovery of the Mind, Cambridge, Mass.:
The MIT Press (1992).
- Sellars, W. "Empiricism and the Philosophy of Mind," in H. Feigl and
M. Scriven, eds., Minnesota Studies in the Philosophy of Science, Vol. I,
Minneapolis: University of Minnesota Press (1956): 253-329.
- Shapiro, L. "Darwin and Disjunction: Foraging Theory and Univocal Assignments
of Content," Philosophy of Science Association 1992, Vol. 1 (1992): 469-480.
- Shapiro, L. "Representation from Bottom to Top," Canadian Journal of
Philosophy 26 (1996): 523-542.
- Shapiro, L. "Multiple Realizations," The Journal of Philosophy
97 (2000): 635-654.
- Shepard, R.N. and Cooper, L. Mental Images and their Transformations
, Cambridge, Mass.: The MIT Press (1982).
- Shoemaker, S. "Qualities and Qualia: What's in the Mind?"
Philosophy and Phenomenological Research 50 (1990): 109-131.
- Smolensky, P. "On the Proper Treatment of Connectionism,"
Behavioral and Brain Sciences, 11 (1988): 1-74.
- Smolensky, P. "Connectionist Modeling: Neural Computation/Mental
Connections," in L. Nadel, L.A. Cooper, P. Culicover and R.M. Harnish, eds.,
Neural Connections, Mental Computation Cambridge: The MIT Press (1989):
49-67.
- Smolensky, P. "Connectionism, Constituency, and
the Language of Thought," in G. Rey and B. Loewer (eds.) Meaning in Mind
. Cambridge: Blackwell (1991): 201-227.
- Smolensky, P. "The Constituent Structure of Connectionist
Mental States: A Reply to Fodor and Pylyshyn," in T. Horgan and J. Tienson
(eds.) Connectionism and the Philosophy of Mind. Boston: Kluwer
Academic Publishers. (1991a).
- Sterelny, K. "Fodor's Nativism," in Philosophical Studies
55 (1989): 119-141.
- Sterelny, K. The Representational Theory of Mind: An Introduction.
Cambridge: Blackwell (1990).
- Stich, S. "Autonomous Psychology and the Belief-Desire
Thesis," The Monist 61 (1978): 573-591.
- Stich, S. From Folk Psychology to Cognitive Science,
Cambridge, Mass.: The MIT Press (1983).
- Strawson, G. Mental Reality, Cambridge, Mass.: The MIT
Press (1994).
- Tye, M. The Imagery Debate, Cambridge, Mass.: The MIT
Press (1991).
- Tye, M. Ten Problems of Consciousness, Cambridge, Mass.:
The MIT Press (1995).
- Tye, M. Consciousness, Color, and Content, Cambridge,
Mass.: The MIT Press (2000).
- Van Gelder, T. "What Might Cognition Be, if not Computation?" Journal of Philosophy XCII (1995): 345-381.
- Von Eckardt, B. What Is Cognitive Science?, Cambridge,
Mass.: The MIT Press (1993).
- Wallis, C. "Ceteris Paribus Laws and Psychological Explanation," Philosophy of Science Association 1994. (1994)1: 388-397.
- Wallis, C. "Representation and the Imperfect Ideal,"
Philosophy of Science. (1994a) 61: 407-428.
- Wallis, C. "Asymmetric Dependence, Representation, and
Cognitive Science," Southern Journal of Philosophy. (1995) 33:373-401.
Related Articles
cognitive science | computing, modern history of | connectionism | consciousness: and intentionality | consciousness: representational theories of | folk psychology: as a theory | folk psychology: as mental simulation | information | language of thought hypothesis | mental content | mental content: externalist theories of | mental content: narrow | mental content: nonconceptual | mental content: teleological theories of | mental imagery | mental representation | neuroscience, philosophy of | perception | qualia | reference | teleological notions in biology