Reliabilism


Introduction

A school of thought linking knowledge either directly or indirectly to reliability emerged in the early seventies and thrives today. Reliabilism is one of the most influential approaches in epistemology, rivaling Gettier's famous paper in impact. Indeed, a great deal of the epistemic literature of the last two decades defends or criticizes various versions of reliabilism, with distinguished philosophers making contributions on both sides of the debate. Reliabilists differ from many epistemologists in that they require a reliable mechanism for belief formation, but do not require any awareness or evidence of the mechanism's reliability, or of the mechanism's level of reliability. So, a reliabilist might allow your visual beliefs to count as knowledge on the basis of the reliability of your visual belief forming mechanisms, even if you did not consciously know the mechanisms involved, nor their reliability. Thus, reliabilists differ from, for example, Bayesian theorists, since Bayesians require that one accept or discard a belief from one's rational belief corpus on the basis of one's calculation of its likelihood of being true given one's evidence. Similarly, many epistemologists (e.g. Lehrer 1990), while recognizing the value of a reliable belief mechanism, are not reliabilists because they require a belief that the mechanism of belief formation is reliable.

Because reliabilism bases epistemic evaluations upon factors that hold true of the knower, but which the knower need not have psychologically available in forming and/or retaining a belief, reliabilism is an externalist theory.  In fact, reliabilism is often equated with externalism, though reliabilism certainly does not exhaust the externalist possibilities.  In contrast, most traditional epistemic theories base the ascription of knowledge and justification upon the knower's subjective conception (or potential subjective conception) of the evidence and evidential relations for their belief.  Traditional theories, as a result, are internalist theories.  

This notion that the fact of reliability, and not one's awareness of reliability, is sufficient for knowledge is both the reliabilist's strength and the source of contention between reliabilists and other epistemologists (see the criticisms section).  Since reliabilists do not require beliefs about the belief forming process, they have a means by which potential epistemic regresses can terminate.  Thus, reliabilism represents one solution to the infamous regress problem in epistemology.  Similarly, reliabilism constitutes a powerful response to sceptical arguments.  Reliabilism allows one to have knowledge in virtue of actually having a reliable belief forming mechanism, thus rendering such sceptical hypotheses as the evil demon, the brain in the vat, etc. irrelevant.  Finally, reliabilism is consistent with belief formation of a non-inferential nature (that is, not based upon prior beliefs).  This consistency makes reliabilism congenial to contemporary theories of perception in cognitive science.  It also allows for the attribution of epistemically praiseworthy states to creatures like dogs and cats, which seem to know about the world but are not obvious candidates for second-order beliefs about the reliability of their belief forming mechanisms.

Perhaps the earliest statement of the reliabilist thesis can be found in Frank Ramsey's (1931) The Foundations of Mathematics and Other Logical Essays. Systematic formulations of reliabilism appear in the early 1970s. In Belief, Truth, and Knowledge (1974), D.M. Armstrong formulates and defends an account of non-inferential perceptual knowledge comparing the non-inferential knower to an absolutely reliable instrument. At approximately the same time, Fred Dretske presents a similar conditional analysis of perceptual knowledge in his Seeing and Knowing (1968) and "Conclusive Reasons" (1971). Likewise, Alvin Goldman presents an explicitly reliabilist account of justification in his "What is Justified Belief?" (1979). Other epistemologists have followed Armstrong, Dretske, and Goldman in using reliability in a diverse set of projects.
 

Types of Reliabilism

All versions of reliabilism link knowledge to reliability. However, there are numerous differences among epistemologists regarding the manner in which they forge that link. A complete reliabilist theory must answer six foundational questions:

1.) What is the subject of the analysis?
2.) What is the basic unit of the analysis?
3.) What is the level of reliability required for knowledge?
4.) What type of reliability does knowledge require?
5.) How does one individuate processes or indicators?
6.) How does one determine the set of situations relevant to the level of reliability?

It is convenient to structure the explication of reliabilism around these issues. Later sections address criticisms of reliabilism and responses by important reliabilists.
 

The Subject of Analysis

The first issue one must address in formulating a reliabilist theory consists simply in specifying the subject of one's analysis or one's explanandum (thing to be explained). Reliabilists have forwarded theories concerning numerous objects of epistemic appraisal. The chief subjects of reliabilist analyses are: justification of belief, social justification, belief-based knowledge, special subtypes of belief-based knowledge (e.g. a priori knowledge), and know how (i.e., one's knowledge of how to do things).  For example, Alvin Goldman has formulated reliabilist theories of both personal justification and social justification. Since Goldman adheres to the general framework of defining knowledge as justified true belief, his theories of justification are a key component in theories of their respective types of knowledge. Kornblith (1980) and Sosa (1980 and 1991) have also made important contributions to the reliabilist literature on justification. Some reliabilists (Armstrong 1974, Dretske 1981) do not define knowledge in terms of justification. If one seeks to define knowledge directly, there still remains the issue of the type of knowledge. D.M. Armstrong focuses upon non-inferential perceptual knowledge. Dretske offers a very influential theory of de re, non-inferential perceptual knowledge. Wallis (1994) has advocated a reliabilist theory of knowledge and of know how.
 

The Unit of Analysis

Once one chooses an explanandum (thing to be explained), one must choose an explanans (thing that does the explaining). In other words, one must specify what has to be reliable in order for a person to gain knowledge. There are two general approaches to applying reliability in epistemology. On the one hand, reliable process theories tie justification and/or knowledge to the reliability of the process (or processes) that generates the belief.  Goldman (1979 and 1986) outlines such an approach. For instance, one might know that the cat is in the trash because a reliable perceptual process generated that belief. On the other hand, reliable indicator theories tie knowledge and/or justification to the reliability of the indicator causing the belief or from which one infers the belief. For example, Dretske ties de re non-inferential perceptual knowledge to the conditional probability of the state of affairs holding in the world given the signal which results in the belief.  So, one might know that the cat is in the trash because one came to believe that the cat was in the trash as the result of a particular visual stimulus, where the probability of the cat being in the trash given that visual sensation equals 1.
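As a toy illustration of Dretske's indicator condition, one can check P(state | signal) = 1 against a frequency table. The observation counts below are invented purely for illustration:

```python
# Toy frequency data (invented) for Dretske's reliable-indicator condition:
# a belief formed from a signal can count as knowledge only if
# P(state | signal) = 1, i.e., the signal never occurs without the state.
observations = (
    [("signal", "cat in trash")] * 40       # signal present, cat present
    + [("no signal", "cat in trash")] * 10  # cat present, but no signal
    + [("no signal", "no cat")] * 50        # neither
)

signal_cases = [pair for pair in observations if pair[0] == "signal"]
p_state_given_signal = sum(
    1 for pair in signal_cases if pair[1] == "cat in trash"
) / len(signal_cases)

print(p_state_given_signal)  # 1.0: the signal occurs only when the cat is there
```

Note that the converse need not hold: the cat is sometimes present without the signal, but that does not violate Dretske's condition, which concerns only beliefs actually formed from the signal.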

 

Levels of Reliability

In order to articulate a reliabilist theory of knowledge one must specify the minimum level of reliability required for knowledge. Nevertheless, many reliabilists do not commit themselves to a level of reliability; Dretske's requirement of perfect reliability for de re non-inferential perceptual knowledge is one of the few explicitly stated (and argued) specifications of a particular level in the literature. The reason for this lack of commitment probably lies in the nature of reliability itself. Since reliability is measured in percentages, measures can range from 0 to 100%. It seems somewhat arbitrary to pick out, say, 84%, as opposed to 83% or 85%. Worse still, there seems to be no epistemically relevant difference between 84% and 83.9%. The generally accepted scientific minimum level of reliability for a result is 95%, a convention which most scientists admit is somewhat arbitrary.

Another strategy for choosing a level might require the level of reliability to be better than chance. So, for coin tosses, any true belief formed by a process that has a higher reliability than 50% would count as knowledge. For Russian roulette, any process that had a higher reliability than 16.7% could produce knowledge. The idea here is that any process or indicator that beats chance gives the believer an edge that can only be explained by their epistemic perspective (i.e., their being "in tune" with the world). Of course, 16.8% reliability does not strike one as knowledge when one is looking down the barrel of a gun during a game of Russian roulette. Thus, many reliabilists (Dretske 1981 and Goldman 1986) hold that there are other, more pragmatic considerations, like the importance of the belief, the speed at which it was formed, and the power of the belief forming mechanism, that might affect the minimum reliability level.
 

Types of Reliability

The particular notion of reliability that an epistemologist adopts constitutes another factor impacting the level of reliability. There are two elements to choosing an appropriate notion of reliability. First, one must determine the proper sense of "high truth-ratio." Reliabilists generally understand "high truth-ratio" as the probability of the belief being true, so that determining the proper sense of "high truth-ratio" amounts to deciding which notion of probability to adopt. Second, after choosing a notion of "high truth-ratio," one must determine the relevant means of measurement. That is, one must decide upon the manner in which reliability of a process or indicator will be measured.

If one adopts a classical notion of probability as one's theoretical framework for evaluating reliability, then the likelihoods of situations a believer might encounter are assigned based upon the principle of indifference (the assumption that all logically possible outcomes are equiprobable). For example, one might evaluate the reliability of one's belief that one's lottery ticket is a winner using the odds of winning (in lotto, 1 in 80 million). Since the chance of any given ticket winning seems equal to the chance of any other ticket winning, this seems like a good estimate. A classical framework might work well for determining the reliability of processes involving games of chance, but would prove inadequate in a world where situations have different frequencies of occurrence. For example, suppose a nonsmoker wants to evaluate the reliability of believing that they have lung cancer based upon classical probability theory. There are two possibilities, cancer and no cancer; hence the reliability of simply believing one has lung cancer comes out as an unrealistic 50%.

Most reliabilists think of reliability under the propensity or frequency interpretation of probability. The frequency interpretation understands probability in terms of the ratio f/n, where f is the number of observed favorable events (events of interest) and n is the total number of observations. Specifically, frequentists assert that the likelihood of some event Y can be understood as the number to which f/n converges (the limit) as n, the total number of observations, approaches infinity: P(Y) = fY/n as n → ∞.
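A minimal simulation makes the frequentist idea concrete. Assuming a hypothetical belief forming process whose true truth-ratio is stipulated to be 0.8, the estimate f/n settles toward that underlying probability as n grows:

```python
import random

random.seed(0)

# Frequentist estimate of a hypothetical process's truth-ratio: count the
# favorable outcomes f (true beliefs) among n uses and take f/n. As n grows,
# f/n converges toward the underlying probability, here stipulated to be 0.8.
TRUE_RELIABILITY = 0.8

for n in (10, 100, 10_000, 1_000_000):
    f = sum(random.random() < TRUE_RELIABILITY for _ in range(n))
    print(f"n = {n:>9,}: f/n = {f / n:.4f}")
```

With small n the estimate can wander well away from 0.8; by a million trials it is within a fraction of a percent, which is the behavior the limit definition abstracts from.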

Once one determines the relevant sense of a "high truth-ratio," one must choose among possible ways in which a process or indicator can be reliable.  For example, one might require absolute or overall reliability for knowledge or justification. The truth-ratio for the indicator or process would then be the probability of the process or indicator generating a true belief whenever it is used (within some class of specified situations). An absolutist notion of reliability seems intuitively plausible in that it requires of those who know or are justified that they consistently form veridical (true or accurate) beliefs.  However, considering cases where a high overall reliability looks spurious or accidental weakens one's absolutist intuition.  For instance, suppose that during an outbreak of tuberculosis a 19th century doctor in New York City tries to diagnose tuberculosis in its earliest stages by palpating (feeling) the patient's tongue. He circulates among the patients in the hospital wards using his diagnostic method, thereby finding that most of the patients have the disease. The doctor gains some renown due to his very high reliability in predicting who would eventually come down with the disease. Does he know? Are his beliefs justified? Once one knows the details of the case it seems unlikely. Tuberculosis is readily transmitted by saliva. The doctor, not knowing the cause or pathways of infection, was passing saliva from one patient to the next. Thus, since he was very likely to run across someone who had tuberculosis in his rounds, the doctor literally created his own success by acting as the disease's vehicle of transmission. In such cases overall reliability cannot be traced to the process or indicator being "in tune" with the state of affairs in the world.

Similarly, the reliability of a process or indicator need not be symmetric for an event or property and its absence. Suppose that doctors develop a test to diagnose the presence or absence of a certain disease. The test has a high reliability in detecting the disease's absence. Since the disease occurs in only one in one hundred persons, the test proves highly reliable overall. However, it is worse than chance at indicating the presence of the disease. Doctors would be foolish to believe the disease present in a case, even when the highly reliable test so indicates. For these reasons, reliabilists often require high conditional reliability as well as absolute reliability. That is, the new test would fail to generate justified belief or knowledge of the presence of the disease, since its reliability given the presence of the disease is extremely low. Likewise, the New York physician does not deserve his reputation as a great diagnostician: the probability that a patient had tuberculosis before the examination that led the doctor to conclude they had the disease is significantly lower.
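The asymmetry can be made concrete with invented numbers for such a test: a disease with 1% prevalence, and a test that flags only 10% of diseased patients while misflagging just 1% of healthy ones:

```python
# Invented numbers illustrating overall vs. conditional reliability.
prevalence = 0.01            # 1 in 100 persons has the disease
p_pos_given_disease = 0.10   # the test detects only 10% of actual cases
p_pos_given_healthy = 0.01   # false positive rate among the healthy

# Overall reliability: probability that the test's verdict is correct.
overall = (prevalence * p_pos_given_disease
           + (1 - prevalence) * (1 - p_pos_given_healthy))
print(round(overall, 4))  # 0.9811: highly reliable overall

# Conditional reliability of a "disease present" verdict: P(disease | positive).
p_positive = (prevalence * p_pos_given_disease
              + (1 - prevalence) * p_pos_given_healthy)
p_disease_given_positive = prevalence * p_pos_given_disease / p_positive
print(round(p_disease_given_positive, 4))  # 0.0917: a positive verdict is
                                           # nearly always wrong
```

The test gets roughly 98% of all verdicts right, because "absent" is almost always the correct answer; yet a "present" verdict is right less than 10% of the time, which is why conditional reliability matters.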
 

Process Characterization

Reliabilism ties knowledge to the reliability of the process or indicator generating the output. However, one can characterize a process in different ways, each having potentially different levels of reliability. Reliabilists must, therefore, forward a plausible hypothesis as to the appropriate manner of process characterization for the purposes of epistemic evaluation. For example, does my ability to see my hand result from the highly specific process linking the particular pattern of retinal stimulation to the particular activity pattern of my visual cortex and beyond? Or is my ability to see my hand an instance of a process with wider applicability? Many reliabilists defer to psychology as the proper source of process characterization (Wallis 1994).  However, others (Goldman 1992 and Sosa 1991) adopt a notion of intellectual virtues and vices: ordinarily understood belief forming processes, such as vision, that are held to be either reliable (virtues) or unreliable (vices).
 

The Relevance Class

Epistemologists have often conflated the issue of finding a general formula for characterizing the relevant process for the purposes of determining reliability with the next issue: determining the relevance class. Goldman (1974 and 1986) refers to this problem as the "Generality problem." Feldman discusses the problem in terms of the "no distinction" and "single case" problems.

Just as different process characterizations potentially have divergent reliabilities, a given process also has potentially divergent reliabilities when evaluated over different classes of applications. For example, color vision is reliable when applied to cases of close and careful observation in full natural light. However, in cases where the full spectrum of wavelengths present in natural light is absent, say in red or black light, color vision becomes unreliable. Thus, in addition to determining the basic unit of reliability, one must also determine the set of alternatives relevant to the determination of a process's or indicator's reliability.

There are two views regarding what would constitute an adequate solution to the relevance class and process characterization issues. Most philosophers (Goldman 1986, Feldman 1985, 1985a) conceive of the relevance class and process characterization issues as issues for the reliabilist definition/theory of knowledge. Therefore, any ambiguities or errors in specifying the relevance class or characterizing the process trace to the definition/theory of knowledge. That is, they result from the inadequate nature of the theory/definition. Some reliabilists (Wallis 1994) view the relevance class and process characterization issues differently. On this view, these issues are issues for the ascription of knowledge or justification and the prescription of epistemic methodology. Errors or ambiguities associated with ascription or methodology need not reflect negatively on a theory/definition of knowledge, but may merely indicate limitations on the part of investigators in applying the theory to particular situations. Thus, unlike errors due to definition/theory, which always weigh heavily against that definition/theory, errors due to ascription/methodology only undermine the usefulness of the theory/definition if they exist in principle and/or are very severe in nature.

When thinking of the relevance class and process characterization issues as definitional problems, one's task is to determine the general rules for specifying how systems operate and the sorts of situations in which they operate independent of empirical investigation (a priori). The theory of knowledge determines the sorts of situations relevant to understanding a system's performance as well as the sorts of operations which characterize the system's performance. Moreover, because solutions to the characterization and relevance class issues form part of the definition of knowledge, the presupposition is that the solution will provide unique answers to the questions "What process-type?" and "What set of situations?"

When one construes the relevance class and characterization issues in terms of ascription and methodology, one sees the role of a definition of knowledge as the explication of the relationship between the system's processes and its potential and/or actual environment of habitual performance. The definition of knowledge delimits a framework for formulating the relevance class and the process characterization without actually providing such formulations independent of empirical investigation and argumentation. On the ascription/methodology approach, the definition of knowledge dictates a definitive answer as to whether a given belief counts as knowledge once a determinate process characterization and relevance class are supplied. However, the definition of knowledge only acts to impose constraints on the specification of the processes and relevance classes themselves.

To take an example, the application of the idealized pendulum law to a pendulum yielding an anomalous result does not necessarily refute the idealized pendulum law.  The system in question may violate the conditions of the idealization. For instance, it may have non-negligible frictional resistance at the arm pivot. Hence, the pendulum violates the prescriptions for the application of the law. Whether an anomalous result will undermine the idealized law depends on whether scientists determine that the application was consistent with the prescribed methodology for applying the law. If not, theorists must consider whether the application conditions can be reformulated in a non-trivial manner to exclude such cases. But whether the prescribed methodology was followed, or is amendable, is often a subject of heated debate.

Most approaches to the relevance class presume a definitional approach in supposing that to adequately address this issue one must formulate a general rule for determinately specifying the relevant class of alternatives used to evaluate the reliability of a given process. Overall, reliabilists have explored six basic strategies for determining relevance classes: actual applications, subjunctive conditionals, local reliability, global reliability, normal worlds reliability, and task-specific reliability.
 

Actual Applications

In evaluating reliability one might take into account only the actual applications of the process or instances of the indicator. For example, Freddy the frog identifies and captures flying insects by using what one might call the "dot-to-shot" process of inferring their presence from ambient black dots. All of Freddy's uses of the dot-to-shot process occur in his natural environment. Assuming that one can say of Freddy that he knows, advocates of the actual applications strategy would say that Freddy knows when a flying insect goes by because his actual uses of dot-to-shot are reliable. Actualists do not consider counterfactual cases. For example, if Freddy were taken to a lab and used dot-to-shot, he would falsely infer the presence of flying insects when experimenters roll ball-bearings in front of him. Admirers of counterfactual notions of relevance classes would find the fact that Freddy will strike at ball-bearings until transformed into a living Beanie Baby sufficient reason to include at least some counterfactual situations in relevance classes. Actualists would argue that Freddy's counterfactual behavior is not epistemically relevant to his actual performance.

Critics of the actual applications approach argue that in many cases what would have happened, counterfactuals, are relevant.  For instance, suppose that in order to form a belief as to whether your car has been stolen you rely upon the following process: flip a coin. If the coin lands on heads, believe your car has been stolen. If the coin lands on tails, believe your car has not been stolen.  As it turns out, your coin lands on heads and you form the true belief that your car has been stolen.  Since you never use the process again, it is perfectly reliable.  Hence, your belief about your car's theft counts as justified or known.  Of course, the odds that your car has been stolen, even in Los Angeles, are dramatically smaller than the .5 probability that your coin will land on heads.  In this case, it seems as if the reliability of your process in counterfactual uses, i.e., uses beyond the single actual one, is relevant to the epistemic evaluation of your belief.
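A quick simulation (with an invented theft rate) brings out why the single actual use misleads: the coin is statistically independent of the theft, so over counterfactual uses the process is right only about half the time:

```python
import random

random.seed(1)

P_THEFT = 0.01  # invented rate at which a parked car is actually stolen

def coin_process():
    """Flip a coin: heads -> believe 'stolen', tails -> believe 'not stolen'."""
    stolen = random.random() < P_THEFT      # the fact of the matter
    believe_stolen = random.random() < 0.5  # the coin flip
    return believe_stolen == stolen         # did the belief come out true?

# The single actual use happened to come out true, so reliability over
# actual uses is 1/1 = 1.0. Over many counterfactual uses, however, the
# process is correct only about half the time, since the coin is
# independent of whether the car was stolen.
n = 100_000
long_run = sum(coin_process() for _ in range(n)) / n
print(round(long_run, 2))
```

Whatever the actual theft rate, P(correct) = P(stolen)·0.5 + P(not stolen)·0.5 = 0.5, which is the counterfactual unreliability the critics point to.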
 

Subjunctive Conditionals

One possible means of avoiding the pitfalls of epistemic happenstance is to allow counterfactuals to determine the relevance class as well as actual applications. Robert Nozick (1981) and Dretske (1968 and 1971) have suggested the use of subjunctive conditionals in their epistemic theories. In the case of Freddy the frog, Nozick and Dretske would ask what would happen if there were an ambient black dot but no fly.  More generally, subjunctivists require that if it were false that there was a fly, Freddy would not believe a fly present.  Freddy's propensity to use dot-to-shot indifferently in such cases would preclude his knowing on a subjunctive conditionals approach.

However, critics of the subjunctive conditionals approach claim that it too has its Achilles heel. One can rather easily specify circumstances where the process or indicator satisfies the subjunctive conditional, but which seem to be undermined by possible alternatives not ruled out by the subjunctive conditional. Imagine that Professor I. M. Guileless suspects one of his students, Dudley Doright, of plagiarism. Guileless pulls Doright aside and asks him about the matter. Doright, an impeccably honest and principled person, truthfully denies having plagiarized the work in question. Guileless accepts Doright's testimony and correctly believes that Doright did not plagiarize the paper. Unbeknownst to Guileless, Doright is the only student at the school who would not plagiarize and lie about plagiarizing when interrogated. It appears that Doright's case satisfies the subjunctivist's counterfactual criterion: if Doright had plagiarized, he would have, as is his character, confessed to Guileless. Nevertheless, it seems that Guileless fails to know, since only the happy coincidence of having Doright as the suspect prevents Guileless from forming a false belief based upon the testimony of the accused. Any other student would have plagiarized and lied to Guileless.
 

Local and Global Reliability

Still another class of strategies delimits the relevance class in terms of a class of relevant alternatives. Neither global nor local reliabilism represents a general rule for specifying the relevant class of alternatives used to evaluate the reliability of a given process. Rather, they represent guidelines for specifying such a rule. Those who espouse local reliability claim that the appropriate relevance class is the class of situations characteristic of the use of the process or indicator from which the particular belief arose. For instance, suppose that one wants to evaluate Bob's belief that the colorless, odorless, tasteless liquid in his glass is water. Bob determines that a sample is water through its phenomenological properties. On Earth the only odorless, colorless, tasteless liquid is water. However, on distant worlds there are other, undiscovered chemical compounds that would prove phenomenologically indiscriminable to Bob. Localists would assert that Bob knows that the liquid is water because in all cases of phenomenal discrimination in an exclusively water environment, phenomenal discrimination is reliable.

Globalists suppose that the relevance class is constituted of all or many of the sorts of situations in which the believer uses the process or indicator. Globalism has intuitive pull in that we seem to expect knowers to employ processes that work all or much of the time. For example, theories of human object recognition must explain more than object recognition at close range and in good light. Humans can recognize objects across a range of changing light levels, in cases of rotation, and in cases of partial occlusion.

Furthermore, suppose that one forms a belief on the basis of a well-defined set of successful uses of a process. The process, however, is generally unreliable and used indiscriminately. Many reliabilists would like to rule out knowledge or justification on the basis of such a generally unreliable process. For instance, suppose that Sigmund Fraud believes that all psychological illnesses, whatever their symptoms, are caused by psycho-sexual conflicts between parent and child. For the vast majority of illnesses Fraud's belief forming process yields false beliefs. However, there is a mental illness, call it "hate-my-dadosis," that does have its origins in such conflicts. Calling Fraud's diagnoses of hate-my-dadosis justified or knowledge, given that these diagnoses are merely an instance of an uninformed application of an indiscriminate and generally unreliable process, seems incorrect.
 

Normal Worlds

Normal worlds approaches specify the class of relevant uses of a process not merely in terms of the reliability of the process in actual world situations characteristic of one or more uses, but also in terms of situations nomologically consistent with the actual world. Normal worlds are those worlds consistent with our beliefs about the sorts of objects and properties existent in the actual world and how the actual world operates. Goldman (1986) explored a normal worlds approach to justification in which processes needed to have both high local and high global reliability in all normal worlds. Normal worlds approaches differ from other relevant alternatives approaches in that the relevance class is dictated, not necessarily by actual world situations, but by situations consistent with our beliefs about the objects, properties, and causal relationships in the actual world. If we have true beliefs about the actual world, then the relevance class will be populated by situations consistent with the actual world. However, if the actual world differs from the way we believe it to be, the relevance class, and hence the justification of our beliefs, will be based upon our conception of the world and not the actual world.

Among other objections to normal worlds one need only note that people have a number of epistemically troublesome false beliefs about the world (Wallis 1994). For example, large numbers of people believe in extrasensory perception (ESP). On the normal worlds view, then, our true beliefs that ESP does not exist based upon our experiences would be evaluated for reliability in worlds where ESP does exist. Such a normal worlds evaluation would likely find our beliefs unjustified.
 

Task-Specific Reliability

A final option explored by reliabilists as a means of fixing the appropriate relevance class, the task-specific approach, ties the specification of the relevant class of alternatives to the cognitive task specification. Specifying a cognitive task involves three elements: (1) articulating the precise nature of the problem to be solved, (2) specifying any law-like relationships or statistical regularities that underlie the performance of the cognitive task, and (3) specifying the actual process used to solve the problem. This tripartite task specification determines the system's environment of habitual performance, which in turn determines the relevance class. One evaluates the effectiveness of the process (of the strategy) relative to one's best hypothesis as to the situations that the cognizer encounters in performing this task.

Of course, the task-specific approach, with its intimate relationship to explanation in cognitive science, is not a theory of ordinary knowledge ascription. Epistemologists may well be wary of the possibility of adapting the general principles of the approach to explain ordinary knowledge ascriptions.
 

Arguments Against Reliabilist Theories

Critics of the reliabilist approach have formulated a number of arguments intended to demonstrate the inadequacy of reliabilism. Though they appear under a number of names, this article refers to them as: (1) the characterization/relevance class arguments, (2) the irrationality arguments, (3) the lottery paradox/deductive closure arguments, and (4) the evil demon argument.

Characterization/Relevance Class Arguments

Feldman (1985, 1985a) and Pollock (1984) have criticized reliabilism on the grounds that no satisfactory specification of the "relevant uses" of a process can be made. Feldman discusses this problem in terms of the "generality problem" and the "single case problem." Feldman charges that if one opts for a broad characterization of a psychological process, then one cannot distinguish between intuitively legitimate (justification or knowledge conferring) uses of a psychological process and uses one intuits to constitute illegitimate applications. To take a case, suppose that one characterizes one's visual beliefs as all being generated by one process, vision.  One cannot then distinguish normal cases of veridical (accurate) shape recognition from cases like the "cafe wall illusion" shown below.

 

The tiles in the above picture are NOT wavy and irregular, as they appear to be. In fact, the lines form parallel strips of uniform width.

At the other extreme, if one specifies one's belief formation process very narrowly, the corresponding relevance class will include only the actual use in question. This would give every true belief a reliability of 1. Such reliability assessments also seem incorrect, as they would make all true beliefs count as knowledge. Consider the following case: Flip Wrongly wishes to evaluate his true belief that his next coin toss will land on heads. Since the toss is number 169, he specifies his belief forming process on this occasion as "believing the toss will land on heads because it is toss number 169." This process has a reliability of 1, as the belief proves true and the process has only one instance. Flip concludes that he knew that the coin would land on heads. Of course, Flip always believes that a given toss will land on heads. His overall reliability on the 200 tosses he makes is actually only 40% due to the slight bias of the coin. Feldman asserts that epistemic evaluations such as Flip's would clearly be incorrect. Furthermore, Feldman and Pollock argue that there is no principled and non-arbitrary manner in which to specify a process somewhere between the extremes of the no-distinction specification and the single-case specification.
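The generality problem can be made concrete with a small calculation. The sketch below uses the article's illustrative figures from the Flip Wrongly case (200 tosses, a 40% heads rate, toss number 169); it shows how the measured reliability of "the" process swings entirely with the chosen process type:

```python
# Sketch of the generality problem using the Flip Wrongly case.
# All figures are the article's illustrative numbers, not real data.

tosses = 200
heads = 80  # the coin's slight bias: only 40% of tosses land heads

# Broad typing: "believing a toss will land on heads" -- every toss
# instantiates this process, so its truth-ratio is the overall hit rate.
broad_reliability = heads / tosses  # 0.4

# Narrow typing: "believing toss #169 will land on heads because it is
# toss #169." The process has exactly one instance, which happened to be true.
narrow_instances = 1
narrow_truths = 1
narrow_reliability = narrow_truths / narrow_instances  # 1.0

print(f"broad typing:  {broad_reliability:.2f}")   # far too low for knowledge
print(f"narrow typing: {narrow_reliability:.2f}")  # every true belief counts as knowledge
```

The same token belief thus scores 0.40 or 1.00 depending solely on how the process is typed, which is exactly the arbitrariness Feldman and Pollock press.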

While this objection has enjoyed a great deal of attention, reliabilists generally find it unconvincing. Goldman (1986) and Wallis (1994) have suggested specific criteria for specifying a relevance class which neither Pollock nor Feldman has addressed in the literature. Furthermore, the crux of these objections seems to depend upon assuming the above-mentioned definitional approach. No writer has explored the applicability of this objection if one does not assume that a definition of knowledge must include a general rule for determinately specifying the relevant class of alternatives used to evaluate the reliability of a given process.
 

Irrationality Arguments

A second and widely accepted class of arguments (BonJour 1980, Lehrer 1990, and Putnam 1983) against reliabilism seeks to show that reliabilist theories sanction as justified or known beliefs that are clearly irrational from the believer's own perspective. If correct, these arguments would show that reliability is insufficient for knowledge and/or justification. That is, the examples would show that a belief can satisfy the reliabilist definition of knowledge or justification without being justified or known. For example, suppose that I. M. Gullible has a second cognitive foible: in addition to believing student testimony, Gullible also believes that he can predict the weather by pains in his knee. Unbeknownst to Gullible, his knee is a perfectly reliable predictor of the weather. What Gullible does know is that his colleagues have presented him with massive amounts of cogent scientific evidence establishing that knee pain cannot predict weather. Gullible's belief that it will rain on Tuesday satisfies the reliabilist definition of knowledge or justification. Nevertheless, the objection goes, Gullible clearly does not know or have a justified belief.

Goldman has offered several responses to irrationality objections. In his 1986 book, Goldman disallows beliefs such as Gullible's rain belief because such cases violate his nonundermining clause. In brief, the rain belief does not satisfy Goldman's 1986 theory because it is undermined by Gullible's awareness of the evidence against his knee's predictive power. In his 1992 book, Goldman argues that such cases do not constitute counterexamples to his current theory because the process by which Gullible maintains his rain belief is an intellectual vice. That is, because ignoring massive amounts of cogent scientific evidence is not generally reliable, it would widely be counted among the cognitive vices (processes that do not produce justification or knowledge). Thus, Gullible and other evaluators of his rain belief would reject the belief as justified or known because it is maintained by a process that is a cognitive vice.
 

Lottery Paradox/Deductive Closure Arguments

Another, less widely discussed argument offered against reliabilism concerns the notion of deductive closure. Simply put, many epistemologists feel that a knower should also know or justifiably believe many or most deductive consequences of his or her known or justified beliefs. Reliabilists allow beliefs with a probability of less than 1 to count as knowledge (because the belief's probability, while less than 1, is greater than the minimum level prescribed by the theory). Therefore, beliefs formed using multiple known (in the reliabilist sense) beliefs, even those formed using deductive reasoning, will have a lower level of reliability than the original beliefs. The likely result of such a net loss of reliability through deductive inference is that the reliabilist seems committed to the notion that a belief formed through deductive reasoning from known beliefs will, at least occasionally, fail to count as knowledge.
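The arithmetic behind this worry is simple. The sketch below assumes, purely for illustration, that beliefs are probabilistically independent, that each premise has a reliability of 0.97, and that the theory's (hypothetical) minimum level for knowledge is 0.95; none of these numbers appears in the literature under discussion.

```python
# Illustration of reliability loss through conjunction, under the
# hypothetical assumptions of independence and a 0.95 knowledge threshold.
THRESHOLD = 0.95
belief_reliability = 0.97  # each individual premise clears the threshold

conjunction = 1.0
for n in range(1, 6):
    # For independent beliefs, the conjunction's probability is the product.
    conjunction *= belief_reliability
    status = "known" if conjunction >= THRESHOLD else "NOT known"
    print(f"conjunction of {n} beliefs: {conjunction:.3f} -> {status}")
```

Already at two conjuncts the product (0.97 x 0.97 = 0.941) falls below the threshold, so a belief validly deduced from two known premises would fail to count as knowledge.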

The lottery paradox argument is a particular example of such a deductive closure argument. Philosophers generally credit Kyburg (1961) with first noticing the tension between accepting beliefs into one's rational belief corpus (accepting them as rational) based upon high, but imperfect (less than 1), probability and the effects of inference using probabilistic reasoning. Epistemologists (Armstrong 1974, Dretske 1981, and Pollock 1985) have adapted Kyburg's lottery paradox to try to prove that reliability is insufficient for knowledge or justification by showing that cases counting as justified or known on the reliabilist account violate their principle of deductive closure for knowledge or justification.
 

The epistemic version of the lottery paradox argument proceeds as follows:

If a person believes of each ticket in an 80-million-ticket lottery that it will lose, then the reliabilist asserts that these beliefs are justified or known (since each belief's reliability is 79,999,999/80,000,000, with only a 1/80,000,000 chance of error).

If each of a set of beliefs is justified or known by a person, and he or she correctly uses deductive reasoning to reason from those beliefs to a conclusion, then the conclusion is also justified or known.

But, if for each and every ticket the person justifiably believes or knows that that ticket will lose, then he or she could justifiably believe or know that all the tickets will lose.

Since the belief that all the tickets will lose is false, and the believer knows that belief to be false (because the person uses the odds to make inferences about individual tickets), the universal loser belief cannot be known or justified.

Hence, Reliabilism cannot be true.
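A quick calculation shows why these steps clash. Each single-ticket belief is almost perfectly reliable, yet the ticket outcomes are not independent: exactly one ticket wins, so the conjunction "every ticket loses" is guaranteed to be false.

```python
# Lottery paradox arithmetic: an 80-million-ticket lottery with exactly
# one guaranteed winner (the case described in the argument above).
N = 80_000_000

# Reliability of the belief "ticket i will lose," for any single ticket i:
# i loses in all draws except the one where i wins.
per_ticket = (N - 1) / N

# The outcomes are not independent -- exactly one ticket must win -- so the
# probability that ALL tickets lose is zero, not per_ticket ** N.
all_lose = 0.0

print(f"per-ticket reliability:        {per_ticket:.10f}")  # just shy of 1
print(f"'all tickets lose' reliability: {all_lose}")        # certainly false
```

Any reliability threshold below 1 thus admits each single-ticket belief as knowledge while the deductively derived universal belief has reliability 0, which is the conflict the argument exploits.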
 

Some reliabilists (Dretske 1981 and Armstrong 1974) accept the lottery paradox argument, and as a result require perfect (100%) reliability (at least for all relevant alternatives) for knowledge or justification. Others (Goldman 1986, Nozick 1981) reject the closure principle for knowledge. Suppose, for example, that Sheila knows that A and she knows that B. Suppose further that Sheila comes to believe the conjunction of A and B, not on the basis of her belief in A or in B, but on the basis of her beliefs that C and that (A & B) → C [if A and B, then C]. Since Sheila has committed the fallacy of affirming the consequent, it does not follow that she knows that (A & B)--even if (A & B) is merely the conjunction of two beliefs of which she independently has knowledge. Goldman holds that examples such as the Sheila case show that having knowledge of the conjuncts does not imply that one has knowledge that the conjunction holds. Instead, such examples suggest that knowledge of the conjunction depends upon the manner in which one generates the conjunctive belief.
 

Evil Demon Argument

The final objection to reliabilism in the literature is directed against reliabilist theories of justification. Suppose that humanity lived in a world where all of our experiences are exactly as they are now, but we are actually systematically deceived by an evil demon. In this evil demon world, all our beliefs are false. Yet, since all of our experiences are exactly the same, most philosophers feel that our beliefs would still be justified. But reliabilism seems to dictate that our beliefs would fail to be justified due to the low reliability of our cognitive processes. For instance, our perceptual beliefs would seem justified even if we actually lived in an evil demon world, since all of our perceptual experiences would be exactly the same as they would be in the world we believe to exist. Yet reliabilism dictates that these beliefs lack justification, since perception is unreliable in the evil demon world.

There are two general lines of response to the evil demon example. On the one hand, some philosophers deny the validity of the intuition in evil demon cases. On the other hand, some philosophers attempt to grant the intuition without allowing it to undermine their theories by arguing that the set of relevant alternatives for the evil demon case is such that the processes used to form beliefs are reliable. Consider the latter case first.

Goldman's 1986 appeal to normal worlds was an effort to respond to this argument. Suppose one adopts a normal worlds approach to specifying the relevance class. In cases where the actual world differs from the way we believe it to be, the relevance class and the justification of our beliefs will be based upon our conception of the world and not the actual world. One can exploit this potential difference between situations in the relevance class and actual situations in the evil demon case. Since on the normal worlds account the relevance class which determines perception's reliability in the evil demon world remains the same as in the non-demon world, perception is still justification conferring in the evil demon world. Unfortunately, as mentioned above, this strength of the normal worlds view is also its weakness. In his 1992 article, Goldman responds to the evil demon argument by asserting that perceptual belief formation would be a cognitive virtue (a process that is perceived to be generally reliable). Hence, perceptual beliefs would be justified in the evil demon case, because beliefs generated or maintained by a virtuous cognitive process are justified. Adopting alternative (to actual) characterizations of reliability has been a strategy of Bach (1985), Horgan and Henderson (2001), and Comesana (2002), as well as Goldman.

Philosophers who deny the intuition of the evil demon case have offered two arguments. Chandler (1987) and others note that externalism about mental content rules out evil demon scenarios, since the person deceived is not forming beliefs with the same content as you or I. Since externalism about content asserts that content is fixed by causal relationships between the individual and the environment, radically different environments such as the normal world and the evil demon world will result in radically different content. Others, for instance Brewer (1997), hold that the intuition is in fact illusory. Victims of the evil demon (or of hallucinations) lack adequate reasons to endorse their perceptual beliefs; it only seems as if they have them because philosophers are not careful to distinguish justified belief from belief that is not blameworthy. Evil demon victims fall under the latter and not the former case.

Conclusion

Reliabilist theories of knowledge continue to enjoy the support of some epistemologists and the condemnation of others. Reliabilists are unified by a general theoretical commitment: a process or indicator can result in knowledge or justified belief (or both) purely in virtue of the objective fact that the process or indicator is reliable. That is, conscious awareness of the reliability of the process or indicator is not necessary (though it might often accompany the belief) for knowledge or justification. Within the reliabilist framework, many permutations have emerged as a result of the many questions that a reliabilist must answer to properly formulate a reliabilist theory of knowledge or justification. Likewise, objections to reliabilism have targeted both the answers that reliabilist theorists give to these foundational questions (such as the Characterization/Relevance Class arguments) and the consequences of the general theoretical framework (such as the deductive closure arguments).
 

Bibliography
 

 

Armstrong, D. (1974). Belief, Truth, and Knowledge. Cambridge: Cambridge University Press.

Bach, K. (1985). "A Rationale for Reliabilism," in The Monist. 68: 246-263.

BonJour, L. (1980). "Externalist Theories of Empirical Knowledge," in French, P., Uehling, T., and Wettstein, H. (eds.) Midwest Studies in Philosophy, vol. 5. Minneapolis: University of Minnesota Press.

Brewer, B. (1997). "Foundations of Perceptual Knowledge," in American Philosophical Quarterly. 34(1): 41-55.

Chandler, H. (1987). "Cartesian Semantics," in Canadian Journal of Philosophy. 17: 63-69.

Cohen, S. (1984). "Justification and Truth," in Philosophical Studies. 46: 279-296.

Comesana, J. (2002). "The Diagonal and the Demon," in Philosophical Studies. 110(3): 249-266.

Derksen, A. (1978). "The Alleged Lottery Paradox Resolved," in American Philosophical Quarterly. 15: 67-74.

DeRose, K. (1996). "Knowledge, Assertion and Lotteries," in Australasian Journal of Philosophy. 74: 568-579.

Dretske, F. (1969). Seeing and Knowing. Chicago: University of Chicago Press.

Dretske, F. (1970). "Epistemic Operators," in The Journal of Philosophy. 67: 1007-1023.

Dretske, F. (1971). "Conclusive Reasons," in Australasian Journal of Philosophy. 49: 1-23.

Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge: MIT Press.

Dretske, F. (1981a). "The Pragmatic Dimension of Knowledge," in Philosophical Studies.  40:363-78.

Faust, D. (1984). The Limits of Scientific Reasoning. Minneapolis, MN: University of  Minnesota Press.

Feldman, R. (1985). "Reliability and Justification," in The Monist. 68: 159-74.

Feldman, R. (1985a). "Schmitt on Reliability, Objectivity, and Justification," in Australasian Journal of Philosophy. 63: 354-60.

Goldman, A. (1979). "What is Justified Belief?" in (ed.) Pappas, G. Justification and Knowledge. Dordrecht: D. Reidel. pp.1-23.

Goldman, A. (1986). Epistemology and Cognition. Cambridge, MA: Harvard University Press.

Goldman, A. (1992). Liaisons: Philosophy Meets the Cognitive and Social Sciences. Cambridge: MIT Press.

Greco, J. (1994). "Virtue Epistemology and the Relevant Sense of 'Relevant Possibility'," in Southern Journal of Philosophy. 32(1): 61-77.

Heidelberger, H. (1963). "Knowledge, Certainty, and Probability," in Inquiry 6: 242-252.

Horgan, T., and Henderson, D. (2001). "Practicing Safe Epistemology," in Philosophical Studies. 102(3): 227-258.

Klein, P. (1981). Certainty: A Refutation of Scepticism. Minneapolis, MN: University of  Minnesota Press.

Kornblith, H. (1980) "Beyond Foundationalism and the Coherence Theory," in Journal of Philosophy 72: 597-612.

Kyburg, H. (1961). Probability and the Logic of Rational Belief. Middletown, CT: Wesleyan University Press.

Kyburg, H. (1983). Epistemology and Inference. Minneapolis, MN: University of Minnesota Press.

Lehrer, K. (1974). Knowledge. Oxford: Clarendon Press.

Lehrer, K. (1990). Theory of Knowledge. Boulder: Westview Press.

Nozick, R. (1981). Philosophical Explanations. Cambridge: Harvard University Press.

Piattelli-Palmarini, M. (1991). "Probability: Neither Rational nor Capricious," in Bostonia March/April: 28-35.

Pollock, J. (1983). "Epistemology and Probability," in Synthese. 55: 231-252.

Pollock, J. (1984). "Reliability and Justified Belief," in Canadian Journal of Philosophy.  14: 103-114.

Putnam, H. (1983). "Why Reason Can't be Naturalized," in Realism and Reason. Cambridge: Cambridge University Press.

Ramsey, F. (1931) The Foundations of Mathematics and Other Logical Essays. London: Routledge.

Sanford, D. (1981). "Knowledge and Relevant Alternatives: Comments of Dretske," in  Philosophical Studies. 40: 379-388.

Sosa, E. (1980). "The Raft and the Pyramid: Coherence Versus Foundations in the Theory of Knowledge", in French, P., Uehling, T., and Wettstein, H. (eds) Midwest Studies in Philosophy, vol. 5 Minneapolis: University of Minnesota Press.

Sosa, E. (1991). Knowledge in Perspective. Cambridge: Cambridge University Press.

Swain, M. (1981). Reasons and Knowledge. Ithaca, NY: Cornell University Press.

Wallis, C. (1994). "Truth-Ratios, Process, Task, and Knowledge," in Synthese 98:243-69.

Yourgrau, P. (1983). "Knowledge and Relevant Alternatives," Synthese. 55: 175-190.