I. Introduction
1.1 In this chapter, Pollock and Cruz (1999) argue for direct realism as a viable answer to the question, “what are the actual norms governing human epistemic competence?” (191). Their argument stems from their rejection of the doxastic assumption and their claim that epistemic norms conform to nondoxastic internalism. P&C argue that holistic coherence theories fail because they cannot differentiate between justified and justifiable beliefs, and they conclude that reasons play an essential role in justification.
P&C critique foundationalism and other doxastic theories because they cannot accommodate perceptual knowledge or describe human rational cognition. In vision, beliefs about physical objects can be derived either (A) directly from the percept or (B) by inference from beliefs, produced by the percept, that describe the percept. Unlike foundationalism, which holds (A) to be impossible, P&C argue that such epistemic norms are possible and that the detour through (B) is unnecessary. The fundamental principle of direct realism is that inferences and beliefs are derived directly from the percept. Therefore, direct realism is similar to foundationalism, except that the foundations are percepts rather than beliefs.
1.2 Levels of Epistemological Theorizing
In this section, P&C distinguish three levels of epistemological theorizing. At the low level, philosophers engage in bottom-up theorizing by investigating particular kinds of knowledge claims. The intermediate level involves the investigation of topics that pertain to all the kinds of knowledge claims explored at the low level. The highest level is top-down epistemological theorizing because it harbors general epistemological theories that try to describe “how knowledge in general is possible” (192). P&C claim that epistemological theorizing requires both bottom-up and top-down processes. Specifically, one must first use top-down theorizing to argue for a high-level theory and then form conformable low-level theories to support it. If no such low-level theories can be found, the high-level theory should be abandoned. In Section II, P&C argue that defeasible reasoning “provides the inferential machinery upon which to build low-level theories of epistemic norms governing specific kinds of knowledge” (200).
1.3 Filling Out Direct Realism
P&C argue for the high-level theory of direct realism by constructing compatible low-level theories. Construction is defined as “describing the various species of reasoning that can lead to justified beliefs about different subject matter” (195). This is the main goal of the OSCAR project. The creation of an artilect depends on a successful and detailed low-level depiction of our epistemic norms, so that a computer system can encode those norms (194). P&C are open to epistemologists who disagree with direct realism, but they doubt whether an artilect can be built on opposing theories of epistemic foundations.
II. Reasoning
Direct realism requires epistemic norms that can appeal to perceptual states themselves, not necessarily our beliefs about those states. P&C state that there can be “half-doxastic connections” between beliefs and nondoxastic states that are analogous in structure to ordinary defeasible reasons. The only difference lies in the reason-for relation, because different states with similar content can support different inferences.
P&C define a reason as follows:
A state M of a person S is a reason for S to believe Q if and only if it is logically possible for S to become justified in believing Q on the basis of being in the state M.
In other words, the state M need not be a belief. The fact that the ball looks red to Bob (P) is enough reason for Bob to believe that it is red (Q).
2.2 Defeaters
Defeaters for half-doxastic connections operate like the defeaters proposed in foundations theories. It is important for low-level accounts to characterize the defeaters for each defeasible reason (201). P&C describe two kinds of defeaters; the second is redefined to include nondoxastic states.
(1) REBUTTING DEFEATER: If M is a defeasible reason for S to believe Q, M* is a rebutting defeater for this reason if and only if M* is a defeater (for M as a reason for S to believe Q) and M* is a reason for S to believe ~Q.
(2) UNDERCUTTING DEFEATER: If M is a nondoxastic state that is a defeasible reason for S to believe Q, M* is an undercutting defeater for this reason if and only if M* is a defeater (for M as a reason for S to believe Q) and M* is a reason for S to doubt or deny that he or she would not be in state M unless Q were true.
A rebutting defeater (1) is a reason to deny the conclusion (Q). For example, if Bob is colorblind and believes that his colorblindness is such that whenever something looks red it is actually green, he has a reason to believe the ball is not red. An undercutting defeater (2) is a reason that leads a person to no longer believe Q without negating Q. It attacks the connection between the evidence and the conclusion by showing that one’s reason for believing Q does not establish that Q is true. Here, Q is “the ball is red.” If Bob is informed that the ball is being irradiated by red lights, Bob no longer has a reason to believe that the ball that appears red (P) is actually red, but he does not thereby gain a reason to believe that it is not red. Undercutting defeaters are reasons for “P does not guarantee Q,” abbreviated (P ⊗ Q) (197). In other words, the irradiation means that the ball’s looking red does not guarantee that it is red.
2.3 Justified Beliefs
In direct realism, beliefs are justified by reasoning (197). P&C define reasoning as constructing longer arguments out of shorter, subsidiary arguments. An argument is a sequence of beliefs and nondoxastic mental states ordered such that each member is either (1) a nondoxastic mental state or (2) supported by a proposition or nondoxastic state earlier in the sequence that is a reason for it (197). An argument is instantiated if the person is in the relevant nondoxastic states and believes each proposition on the basis of the earlier members.
Inference-graphs represent sets of arguments and record how arguments are constructed from one another. Each node receives a status-assignment, which marks the corresponding inference as defeated or undefeated. A partial status-assignment assigns “defeated” or “undefeated” to a subset of the nodes according to the following rules:
- if A is a one-line argument (i.e., a single percept), A is assigned “undefeated”;
- if some defeater for A is assigned “undefeated”, or some member of the basis of A is assigned “defeated”, A is assigned “defeated”;
- if all defeaters for A are assigned “defeated” and all members of the basis of A are assigned “undefeated”, A is assigned “undefeated”.
In other words, an argument A is undefeated relative to the inference graph if and only if every status assignment assigns “undefeated” to A.
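The three rules above amount to a small algorithm. The following toy sketch (not the OSCAR implementation; the graph encoding and all names are our own) enumerates every total status assignment consistent with the rules and counts a node as undefeated only if every assignment marks it so:

```python
from itertools import product

def status_assignments(nodes, basis, defeaters):
    """Enumerate the total status assignments consistent with the rules.

    nodes     -- list of node names
    basis     -- dict: node -> list of nodes it is inferred from
                 (empty list = an initial node, e.g. a percept)
    defeaters -- dict: node -> list of nodes that defeat it
    """
    assignments = []
    for values in product(["undefeated", "defeated"], repeat=len(nodes)):
        sigma = dict(zip(nodes, values))
        consistent = True
        for n in nodes:
            bad_basis = any(sigma[b] == "defeated" for b in basis[n])
            live_defeater = any(sigma[d] == "undefeated" for d in defeaters[n])
            # The rules force "defeated" exactly when some basis member is
            # defeated or some defeater is undefeated; "undefeated" otherwise.
            required = "defeated" if (bad_basis or live_defeater) else "undefeated"
            if sigma[n] != required:
                consistent = False
                break
        if consistent:
            assignments.append(sigma)
    return assignments

def undefeated(node, nodes, basis, defeaters):
    """A node is undefeated iff every status assignment marks it 'undefeated'."""
    return all(sigma[node] == "undefeated"
               for sigma in status_assignments(nodes, basis, defeaters))
```

On a graph with two conclusions that defeat each other, this sketch yields exactly two consistent assignments and marks neither conclusion undefeated, which mirrors the treatment of collective defeat described below.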
Figure 7.1 illustrates the importance of defeasibility in the justification of arguments. Each arrow represents an inference from one node to the next. Both P1 and Q1 are nondoxastic states.
Because the conclusion of the second argument is an undercutting defeater for the final step of the first, Bob is not justified in believing P3, and the first argument is assigned “defeated.” (P2 ⊗ P3) defeats the first argument because it supports a defeater for its final step. However, if Bob finds a defeater for some part of the second argument, the first argument is reinstated. An argument can be defeated if it (1) is based on a defeated subsidiary argument or (2) has an undefeated defeater. Arguments are therefore “provisional vehicles of justification”: arguments can defeat each other, and a defeated argument can be reinstated.
Figure 7.2 illustrates the concept of collective defeat.
Collective defeat is the situation in which two or more arguments defeat each other. In this example, we have good reasons for believing both that it is raining and that it is not. Since each conclusion is defeated in one of the two possible status assignments, both arguments are defeated relative to the inference graph, and we should accept neither conclusion.
(1) The first assignment assigns “undefeated” to P1, P2, “Jones says that it is raining,” “Smith says that it is not raining,” and “It is raining,” and “defeated” to “It is not raining.”
(2) The second assignment assigns “undefeated” to P1, P2, “Jones says that it is raining,” “Smith says that it is not raining,” and “It is not raining,” and “defeated” to “It is raining.”
An argument is provisionally defeated if one status assignment assigns “defeated” to it and another assigns “undefeated.” Unlike an argument that is defeated outright, a provisionally defeated argument can still defeat other arguments.
Figure 7.3 illustrates provisional defeat.
Smith and Jones accuse each other of lying. One status assignment assigns “defeated” to “Smith is a liar” and “undefeated” to “Jones is a liar”; the other does the reverse. If “Smith is a liar” is undefeated, then the inference from Smith’s testimony to “it is raining” is defeated. Because “Smith is a liar” is defeated in one status assignment and undefeated in the other, it is only provisionally defeated; yet it can still defeat the inference that it is raining, so the conclusion “it is raining” is not undefeated relative to the graph.
III. Perception
P&C claim that direct realism can solve the problem of perception: how we can gain knowledge of the external world through perception. P&C consider the ability of perception to provide reasons for judgments about the world to be the fundamental principle of direct realism (201). As Section I noted, the inference is made directly from the percept, not indirectly through beliefs about the percept.
PERCEPTION:
Having a percept at time t with the content P is a defeasible reason for the cognizer to believe P-at-t.
P-at-t is the proposition that P obtains at time t. P&C claim that this principle is the most basic component of rational cognition and cannot itself be justified; it must be present because it is “an essential ingredient of the rational architecture of any rational agent” (201). Reliability defeaters are undercutting defeaters for PERCEPTION: they show that the inference from a percept is unreliable under the present circumstances.
Perceptual-reliability:
Where R is projectible, “R-at-t, and the probability is low of P’s being true given R and that I have a percept with content P” is an undercutting defeater for PERCEPTION.
P&C stress the importance of the projectibility constraint on R: without it, gerrymandered circumstances, such as disjunctions, would generate spurious defeaters. Consider the following example:
There are two circumstances. C1: Bob was born in 1998. C2: Bob is wearing rose-colored glasses. If Bob is wearing the glasses, C2 is a reliability defeater: it is unlikely that the ball that appears red to Bob is actually red. However, if Bob was merely born in 1998, the disjunction (C1 ∨ C2) is still true of him. If, among those who satisfy (C1 ∨ C2), there is a high probability of wearing rose-colored glasses, then it is likewise improbable that the ball is actually red, so the disjunction would defeat PERCEPTION even though Bob’s birth year is irrelevant to his color vision. This example shows why non-projectible, disjunctive circumstances must be excluded by the projectibility constraint.
IV. Implementation
This section illustrates how reason schemas are implemented in OSCAR. A few terms need clarifying first. OSCAR reasons by applying both deductive inference rules and defeasible reason-schemas (203). The premises are input together with queries, or “epistemic interests.” Reasoning from the premises toward the queries produces conclusions, which are recorded in inference graphs.
OSCAR performs bidirectional reasoning: the agent reasons forward from the premises (using forward-reasons) and backward from the queries (using backward-reasons) (203). Simple forward-reasons have no backward-premises, and simple backward-reasons have no forward-premises. Mixed reasons contain both forward- and backward-premises. With simple reasons, the conclusion can be inferred directly. With mixed reasons, the conclusion is drawn only if (1) the reasoner adopts interest in the backward-premises, and (2) those interests are discharged. Interest in the backward-premises is adopted only when inference nodes supporting the forward-premises have been constructed, and vice versa. This interplay between the two kinds of premises gives OSCAR control over how reasoning proceeds.
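As a rough illustration of this control structure (a drastic simplification of OSCAR’s actual machinery; all names and the rule encoding here are hypothetical), forward rules fire as soon as their premises are established, while backward rules propagate interest down from the query and discharge it once their premises are met:

```python
def bidirectional_reason(premises, query, forward_rules, backward_rules):
    """Toy bidirectional reasoner.

    forward_rules  -- list of (frozenset of premises, conclusion): fire
                      as soon as all their premises are established
    backward_rules -- list of (frozenset of premises, conclusion): adopt
                      interest in their premises only when the conclusion
                      is itself of interest
    """
    conclusions = set(premises)
    interests = {query}
    changed = True
    while changed:
        changed = False
        # Forward reasoning: derive conclusions from established premises.
        for prem, concl in forward_rules:
            if prem <= conclusions and concl not in conclusions:
                conclusions.add(concl)
                changed = True
        # Backward reasoning: propagate interest; discharge when satisfied.
        for prem, concl in backward_rules:
            if concl in interests:
                if prem <= conclusions:
                    if concl not in conclusions:
                        conclusions.add(concl)
                        changed = True
                else:
                    new_interests = prem - interests
                    if new_interests:
                        interests |= new_interests
                        changed = True
    return query in conclusions
```

The backward rules never fire on propositions nobody is interested in, which is the point of interest-driven control: reasoning effort is focused on the queries rather than on everything derivable from the premises.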
The problem lies in how to implement perceptual reliability. The definition of perceptual-reliability is adjusted to take reason strengths into account:
Perceptual-reliability:
Where R is projectible, r is the strength of PERCEPTION, and s < 0.5 ⋅(r + 1), “R-at-t, and the probability is less than or equal to s of P’s being true given R and that I have a percept with content P” is an undercutting defeater for PERCEPTION.
Reason strengths range over the interval (0, 1) but are mapped to probabilities in the interval (0.5, 1) (206). P&C first propose this as a backward-reason, but because there are no constraints on R, the reasoner would spend too much time attempting to determine reliability given everything about the situation. P&C instead propose it as a degenerate backward-reason with no backward-premises, taking the probability premise as a forward-premise. A problem remains: how can we implement perceptual reliability if we need to know whether R is true at the time of the percept, but we can only infer that from the fact that R was true earlier?
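The strength-to-probability mapping and the defeater threshold can be stated compactly. A minimal sketch (the function and variable names are ours, not P&C's):

```python
def strength_to_probability(r):
    """Map a reason strength r in (0, 1) to a probability in (0.5, 1)
    via the linear mapping p = 0.5 * (r + 1)."""
    return 0.5 * (r + 1)

def undercuts_perception(s, r):
    """Per the adjusted definition of perceptual-reliability, a report
    bounding the probability by s undercuts PERCEPTION of strength r
    if and only if s < 0.5 * (r + 1)."""
    return s < strength_to_probability(r)
```

For example, a PERCEPTION strength of 0.8 corresponds to a probability of 0.9, so a reliability report of 0.7 undercuts it while a report of 0.95 does not.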
V. Temporal Projection
This section of the chapter opens with a discussion of the strengths of PERCEPTION while acknowledging its major shortcoming: perception, at best, is nothing more than a “form of sampling.” That is, it is not possible for a cognizer to continually perceive and process the state of everything in his or her surrounding environment. Rather, individuals perceive small “space-time chunks” of their environments and make perceptual inferences about the state of the world at large by combining these chunks. The problem with this process of forming inferences, P & C argue, is that there is a surprising difficulty in drawing accurate conclusions about the world at large based on combinations of perceptual samples.
A large part of this difficulty involves the lack of time-sensitive stability exhibited by the majority of objects in the natural world. Making inferences based on single perceptual samples of given objects presupposes that the properties observed are stable over time, which is often not the case. Theoretically, an individual would need to observe the same object at multiple points in time in order to determine whether or not its properties had changed; only when affirming that they had remained unchanged could its stability be inferred, and broader inferences about its nature be made. However, making observations of the same object at various times requires the observer to accurately reidentify the object, a task that can become impossible when the object at hand rapidly or unpredictably changes its properties.
Thus, in forming inferences about the world, an agent must assume some stability in objects' properties. P & C argue that a property is considered stable if, given that it is observed to hold at an initial time, the probability is high that it will continue to hold at a later time. They further argue that this probability decreases as a function of the length of the time interval. P & C call this defeasible inference, from a property's holding at one time to its holding at a later time, temporal projection. Temporal projection, they argue, is essential to the rational assessment of property stability, and thus to forming inferential conclusions about one's surrounding environment. What does temporal projection look like when applied? In other words, how should it be implemented?
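As a toy illustration of this decay, one might model the strength of a temporal projection as falling from certainty toward indifference (0.5) as the interval grows. The exponential form and the decay constant below are our assumptions for illustration, not P&C's formula:

```python
import math

def temporal_projection_strength(delta_t, decay=0.01):
    """Hypothetical strength of the inference 'P held at time t, so P
    still holds at t + delta_t'. Decays from 1.0 toward indifference
    (0.5) as the interval grows; exponential form is illustrative only."""
    return 0.5 + 0.5 * math.exp(-decay * delta_t)
```

The key qualitative features match the text: the strength is maximal for a zero interval, strictly decreases with the length of the interval, and never supports the conclusion more weakly than sheer indifference.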
VI. Implementing Temporal Projection
In this section, P & C present a number of complex algorithms and atomic formulas, which will not be discussed in detail here. It suffices to note that their implementation of temporal projection depends on temporal projectibility. Intuitively, in order to temporally project the stability of an object's property over time, that property must be temporally projectible; i.e., it must be possible for a rational agent to project its constancy on the basis of probability. The remainder of the section analyzes the atomic formulas and algorithms used in temporal projection, all of which P & C argue are, in fact, temporally projectible.
VII. Reasoning About Change
In a vein similar to that of Section V, this section discusses the need of a cognizer to account for the tendency of most, if not all, objects in the natural world to change as a function of time and other variables. Moreover, P & C address the need of a rational agent to consider this tendency to change when making broader inferences about his or her surrounding environment. In their discussion of this need to account for change, P & C identify four kinds of reasoning:
- First, they argue that the agent must be capable of acquiring perceptual information about the surrounding world. This implies proper cognitive functioning and reliable sensory interactions.
- Second, the agent must be able to combine isolated perceptual chunks of his or her surrounding environment into a coherent picture of the broader world.
- Third, the agent must be capable of perceptually detecting changes in previously identified components of his or her broader picture of the world, and of amending that picture accordingly.
- Lastly, the agent must be capable of acquiring causal information about “how the world works” and to use this information to efficiently predict patterns of change that may result in the future, either from uncontrollable, natural circumstances or from the agent’s own actions.
The remainder of this section discusses the fourth type of reasoning in depth, noting that the ability to foresee change necessitates the ability to foresee non-change; P & C write on page 219 that “…reasoning about what will change if an action is performed or some other event occurs generally presupposes knowing what will not change.” The rest of the section elaborates on the logic of this claim, with a focus on the argument that predicting what will likely occur depends largely on knowledge of what is unlikely or impossible to occur.
VIII. The Statistical Syllogism
In this section, P & C continue to build on their foundational claim that an individual’s ability to rationally navigate through the world depends heavily on his or her ability to make reasonable predictions about changes that may take place in his or her environment under various circumstances. They argue that, in order to function in a complex environment, such as the natural world as we know it, a rational agent must be equipped with rules that:
- enable the agent to form beliefs in statistical generalizations, and;
- enable the agent to make inferences based on those statistical generalizations that are applicable to individual circumstances. (pp. 229-230)
P & C provide an archetypal, non-numerical version of the statistical syllogism in their “most Fs are Gs” example:
Most F’s are G’s
This is an F
_____________
This is a G.
P & C explain that, because human beings often reason this way, a rational execution of such logic is essential in making reasonable predictions about the state of one’s surrounding environment. The remainder of this section involves a series of statistical and algorithmic examples supporting the validity and applicability of this claim.
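The "most Fs are Gs" schema can be rendered as a small defeasible rule. In the sketch below (our own illustration, not P&C's formalism), the strength of the defeasible conclusion is taken to be prob(G/F), and the threshold for "most" is an assumed parameter:

```python
def statistical_syllogism(prob_G_given_F, is_F, threshold=0.5):
    """'Most Fs are Gs; c is an F; so, defeasibly, c is a G.'
    Returns the defeasible conclusion with prob(G/F) as its strength,
    or no conclusion if c is not an F or 'most' does not hold.
    The threshold for 'most' is an illustrative assumption."""
    if is_F and prob_G_given_F > threshold:
        return ("defeasibly-G", prob_G_given_F)
    return ("no-conclusion", None)
```

Note that the conclusion is defeasible, not deductive: a stronger reason for thinking this particular F is not a G (e.g., more specific statistical information) would defeat it.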
IX. Induction
This section builds on the claims made in Section VIII about the need for a rational agent to make reasonable predictions about the world based on generalizations. These generalizations, P & C argue, can be either exceptionless (all Fs are Gs) or statistical, varying in probability (the probability of an A being a B is high). We become justified in believing statistical generalizations through a process of induction.
P & C begin with an explanation of the simplest kind of reasoning, enumerative induction, which involves a process of generalization based on sampling (i.e. “all As in sample X are Bs, so all As are likely Bs”). The most important defeater to consider when evaluating this line of reasoning is the possibility that X is not a reasonably “fair” or accurate sample; that is, it does not accurately encapsulate or characterize the population that it supposedly represents. The “fairness” or accuracy of a sample, i.e. its reliability in the formation of conclusions about a represented population, depends on a number of factors, including sample size and sample diversity.
P & C argue that a second kind of induction, termed statistical induction, is much more important for rational agents in their process of forming conclusions about the world based on observation of samples. P & C succinctly summarize the principles of statistical induction in 7.6:
“If B is projectible with respect to A, then ‘X is a sample of n A’s r of which are B’s’ is a defeasible reason for ‘prob(B/A) is approximately equal to r/n’”
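The quoted principle has a direct computational reading: observe a sample, count the A's and the A's that are also B's, and take r/n as the defeasibly supported estimate of prob(B/A). A minimal sketch (function names ours):

```python
def statistical_induction(sample, is_A, is_B):
    """Estimate prob(B/A) from a sample, per the quoted principle:
    a sample of n A's, r of which are B's, is a defeasible reason
    for 'prob(B/A) is approximately r/n'."""
    a_items = [x for x in sample if is_A(x)]
    n = len(a_items)
    if n == 0:
        return None  # no A's observed: no reason either way
    r = sum(1 for x in a_items if is_B(x))
    return r / n
```

As with the statistical syllogism, the conclusion is only defeasible: a fair-sample defeater, showing that the sample is biased or unrepresentative, would undercut the estimate.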
The remainder of the section analyzes a number of algorithmic expressions and likelihood ratios that illustrate the process of statistical induction. One might assume that, like temporal projection, the justification of an inductive argument depends on factors such as the constancy of observed properties over time, the representativeness of the sample, and so on. However, P & C conclude the section by arguing that the strength of induction lies in its lack of need for justification (p. 237). They add, however, that “…principles of induction are philosophically trouble free. Major questions remain about the precise form of the epistemic norms governing induction. Suggestions have been made here about some of the details of those epistemic norms, but we do not yet have a complete account.” (p. 237).
Discussion Questions
1) P&C state that the task at hand is to construct low-level theories that support direct realism (192). How does bias fit into this claim? Specifically, are the low-level theories constructed only to match the high-level theory, rather than to accurately describe reality?
2) Does direct realism really avoid the problems of justification that foundationalism runs into?
3) The original example for the statistical syllogism is: “no one believes everything in the newspaper to be true – but I do believe that most is true and that justifies me in believing individual newspaper reports” (230). Can justification be based on the “most is true” concept?
4) In their discussion of perception in the role of forming inferences about the world, P&C lend a great deal of weight to the agent’s ability to integrate numerous perceptual “space-time chunks” into a coherent, broader view of one’s surroundings. Can you give any real-world examples of this process?
5) P & C claim that the strength of induction lies in its lack of need for justification. Do you agree with this? Is induction an infallible process?