I. Introduction

 

1.1 In this chapter, Pollock and Cruz (1999) argue for direct realism as a viable answer to the question, “what are the actual norms governing human epistemic competence?” (191). Their argument for direct realism stems from their rejection of the doxastic assumption and their claim that the correct epistemic norms are a form of nondoxastic internalism. P&C argue that holistic coherence theories fail because they cannot differentiate between justified and merely justifiable beliefs, and they conclude that reasons play an essential role in justification.

P&C critique foundationalism and other doxastic theories because they cannot accommodate perceptual knowledge or describe human rational cognition. In vision, beliefs about physical objects can be formed either (A) directly from the percept or (B) by inference from beliefs that describe the percept. Unlike foundationalism, which treats (A) as impossible, P&C argue that epistemic norms licensing (A) are possible, and that the detour through (B), though available, is unnecessary. The fundamental principle of direct realism is that inferences and beliefs can be drawn directly from the percept. Direct realism is therefore similar to foundationalism, except that the foundations are percepts rather than beliefs.

 

1.2 Levels of Epistemological Theorizing

 

In this section, P&C explain three levels of epistemological theorizing. At the low level, philosophers engage in bottom-up theorizing by investigating particular kinds of knowledge claims. The intermediate level involves the investigation of topics that pertain to all of the kinds of knowledge claims explored at the lowest level. The highest level is top-down epistemological theorizing, because it harbors general epistemological theories that try to describe “how knowledge in general is possible” (192). P&C claim that epistemological theorizing requires both bottom-up and top-down processes. Specifically, one must first use top-down theorizing to argue for a high-level theory and then construct compatible low-level theories to support it. If one cannot find such low-level theories, the high-level theory should be abandoned. In Section II, P&C argue that defeasible reasoning “provides the inferential machinery upon which to build low-level theories of epistemic norms governing specific kinds of knowledge” (200).

 

1.3 Filling Out Direct Realism

 

P&C argue for the high-level theory of direct realism by constructing compatible low-level theories. Construction is defined as “describing the various species of reasoning that can lead to justified beliefs about different subject matter” (195). This is the main goal of the OSCAR project: the creation of an artilect depends on a successful, detailed low-level description of our epistemic norms, so that a computer system can encode those norms (194). P&C are open to epistemologists who disagree with direct realism, but they doubt whether an artilect could be built on opposing theories of epistemic foundations.

II. Reasoning

 

Direct realism requires epistemic norms that can appeal to perceptual states themselves, not necessarily to our beliefs about those states. P&C state that there can be “half-doxastic connections” between beliefs and nondoxastic states that are analogous in structure to ordinary defeasible reasons. The only difference lies in the reason-for relation, because different states with similar content can support different inferences.

 

P&C define a reason as follows:

 

A state M of a person S is a reason for S to believe Q if and only if it is logically possible for S to become justified in believing Q by believing it on the basis of being in the state M.

In other words, the state M does not need to be a belief. The fact that the ball looks red to Bob (P) is enough of a reason for Bob to believe that it is red (Q).

2.2 Defeaters

Defeaters for half-doxastic connections operate like the defeaters proposed in foundations theories. It is important to characterize the defeaters for defeasible reasons in low-level accounts (201). P&C describe two kinds of defeaters; the second is redefined to accommodate nondoxastic states.

(1)     REBUTTING DEFEATER: If M is a defeasible reason for S to believe Q, M* is a rebutting defeater for this reason if and only if M* is a defeater (for M as a reason for S to believe Q) and M* is a reason for S to believe ~Q.

(2)     UNDERCUTTING DEFEATER: If M is a nondoxastic state that is a defeasible reason for S to believe Q, M* is an undercutting defeater for this reason if and only if M* is a defeater (for M as a reason for S to believe Q) and M* is a reason for S to doubt or deny that he or she would not be in state M unless Q were true.

A rebutting defeater (1) is a reason that denies the conclusion (Q). For example, if Bob is colorblind and believes that his colorblindness is such that whenever something looks red it is actually green, he now has a reason to believe the ball is not red. An undercutting defeater (2) is a reason that causes a person to no longer be justified in believing Q without negating Q. It attacks the connection between the reason and the conclusion by showing that one’s reason for believing Q does not guarantee that Q is true. In this example, Q is “the ball is red.” If Bob is informed that the ball is being irradiated by red lights, he no longer has a reason to believe that the ball is red on the basis of P (its appearing red), but he does not thereby gain a reason to believe that it is not red. Undercutting defeaters are reasons for “P does not guarantee Q,” which is abbreviated (P ⊗ Q) (197). In other words, the irradiation means that the ball’s looking red to Bob does not guarantee that it is red.

2.3 Justified Beliefs

In direct realism, beliefs are justified by reasoning (197). P&C define reasoning as the construction of longer arguments out of shorter ones, or subsidiary arguments. Each argument is a sequence of beliefs and nondoxastic mental states ordered so that each member is either (1) a nondoxastic mental state or (2) such that some proposition(s) or nondoxastic state earlier in the sequence is a reason for it (197). An argument is instantiated if the person is in the relevant nondoxastic states and believes each proposition on the basis of the earlier members.

Inference graphs are sets of arguments that display how arguments are constructed. Each node is given a status assignment, which marks which inferences are defeated or undefeated. A partial status assignment assigns “defeated” or “undefeated” to a subset of the nodes according to the following rules:

  1. if A is a one-line argument (i.e., a single percept), A is assigned “undefeated”;
  2. if some defeater for A is assigned “undefeated”, or some member of the basis of A is assigned “defeated”, A is assigned “defeated”;
  3. if all defeaters for A are assigned “defeated” and all members of the basis of A are assigned “undefeated”, A is assigned “undefeated”.

In other words, an argument A is undefeated relative to the inference graph if and only if every status assignment assigns “undefeated” to A.
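
The three rules above can be read as a small algorithm. The following is a minimal, purely illustrative Python sketch (OSCAR itself is far more elaborate, and this simplification treats status assignments as total rather than partial): each node is represented by its basis and its defeaters, candidate assignments are enumerated by brute force, and a conclusion counts as justified only if every surviving assignment marks it “undefeated”. All node names and helper functions here are invented for illustration, not taken from P&C.

    from itertools import product

    # Each node maps to (basis, defeaters).  A percept has an empty basis and,
    # by rule 1, always comes out "undefeated".  This toy treats assignments as
    # total (every node marked), a simplification of P&C's partial assignments.

    def status_assignments(graph):
        """Return every assignment of True (undefeated) / False (defeated)
        that satisfies rules 1-3 for all nodes."""
        nodes = list(graph)
        result = []
        for values in product([True, False], repeat=len(nodes)):
            status = dict(zip(nodes, values))
            def ok(node):
                basis, defeaters = graph[node]
                should_be_undefeated = (all(status[b] for b in basis) and
                                        all(not status[d] for d in defeaters))
                return status[node] == should_be_undefeated
            if all(ok(n) for n in nodes):
                result.append(status)
        return result

    def undefeated(graph):
        """A node is undefeated only if every status assignment marks it so."""
        assignments = status_assignments(graph)
        return {n for n in graph if assignments and all(a[n] for a in assignments)}

    # Hypothetical encoding of the red-ball case: the percept supports "the ball
    # is red", while a second argument concludes the undercutter (P2 ⊗ P3).
    ball = {
        "percept: ball looks red": ([], []),
        "ball is red": (["percept: ball looks red"],
                        ["looks-red does not guarantee red"]),
        "percept: told the light is red": ([], []),
        "looks-red does not guarantee red": (["percept: told the light is red"], []),
    }
    print(undefeated(ball))  # "ball is red" is excluded: its undercutter is undefeated

On this toy encoding only one assignment survives and “ball is red” drops out, which mirrors the discussion of Figure 7.1 below.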

Figure 7.1 illustrates the importance of defeasibility in the justification of arguments. The arrows represent inferences from node to node. Both P1 and Q1 are nondoxastic states.

 

Because the conclusion of the second argument is an undercutting defeater, Bob is not justified in believing P3, and the first argument is assigned “defeated”. (P2 ⊗ P3) is a defeater for the argument because it supports a defeater for its final step. However, if Bob finds a defeater for some part of the second argument, the first argument can be reinstated. An argument is defeated if it (1) is based on a defeated subsidiary argument or (2) has an undefeated defeater. Arguments are therefore “provisional vehicles of justification”: arguments can defeat one another, and a defeated argument can be reinstated.

Figure 7.2 illustrates the concept of collective defeat.

Collective defeat is the situation in which two or more arguments defeat each other. In this example, we have good reasons for believing both that it is raining and that it is not. There are two possible status assignments, listed below, and each conclusion is assigned “defeated” by one of them; since neither conclusion is undefeated in every assignment, both arguments are defeated relative to the inference graph, and we should accept neither conclusion.

(1) One assignment assigns “undefeated” to P1, P2, “Jones says it is raining,” “Smith says it is not raining,” and “It is raining,” and “defeated” to “It is not raining.” Conclusion: it is raining.

(2) The other assigns “undefeated” to P1, P2, “Jones says it is raining,” “Smith says it is not raining,” and “It is not raining,” and “defeated” to “It is raining.” Conclusion: it is not raining.
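
Continuing the hypothetical sketch from Section 2.3 (and assuming the status_assignments and undefeated helpers defined there), the rain example can be encoded directly; exactly the two assignments above survive, so neither conclusion is undefeated in all of them:

    # Hypothetical encoding of Figure 7.2; node names are ours, not P&C's.
    rain = {
        "P1": ([], []),
        "P2": ([], []),
        "Jones says it is raining": (["P1"], []),
        "Smith says it is not raining": (["P2"], []),
        "It is raining": (["Jones says it is raining"], ["It is not raining"]),
        "It is not raining": (["Smith says it is not raining"], ["It is raining"]),
    }
    print(len(status_assignments(rain)))  # 2 -- assignments (1) and (2) above
    print(undefeated(rain))               # neither "It is raining" nor its negation appears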

An argument is provisionally defeated if one status assignment assigns “defeated” to it and another assigns “undefeated” to it. Unlike an argument that is defeated outright, a provisionally defeated argument can still defeat other arguments.

 

Figure 7.3 illustrates provisional defeat.

 

Smith and Jones accuse each other of lying. One status assignment assigns “defeated” to “Smith is a liar” and “undefeated” to “Jones is a liar”; the other assignment does the reverse. If “Smith is a liar” is undefeated, the inference from Smith’s testimony that it is raining is defeated. But even where “Jones is a liar” is undefeated and “Smith is a liar” is defeated, the inference that it is raining is still defeated. “Smith is a liar” is thus provisionally defeated: separate status assignments assign it “defeated” and “undefeated,” and, as a provisionally defeated conclusion, it can still defeat the inference that it is raining.

 

III. Perception

 

P&C claim that direct realism can solve the problem of perception: how we can gain knowledge of the external world through perception. They take the fundamental principle of direct realism to be that perception itself provides reasons for judgments about the world (201). As noted in Section I, the inference is made directly from the percept, not indirectly through beliefs about the percept.

 

PERCEPTION:

Having a percept at time t with the content P is a defeasible reason for the cognizer to believe P-at-t.

 

Here P-at-t is the proposition that P is true at time t. P&C claim that this principle is the most basic component of rational cognition; it cannot itself be justified. It must simply be present, because it is “an essential ingredient of the rational architecture of any rational agent” (201). Reliability defeaters are undercutting defeaters for PERCEPTION: they indicate that the inference from the percept is unreliable under the present circumstances.

 

Perceptual-reliability:

Where R is projectible, “R-at-t, and the probability is low of P’s being true given R and that I have a percept with content P” is an undercutting defeater for PERCEPTION.

 

P&C stress the importance of the projectibility constraint on R: without it, gerrymandered circumstances (for instance, disjunctions) would generate spurious reliability defeaters. Consider the example:

 

Consider two circumstances. C1: Bob was born in 1998. C2: Bob is wearing rose-colored glasses. C2 is a genuine reliability defeater: if Bob is wearing the glasses, it is unlikely that a ball that appears red to him is actually red. C1, by contrast, is irrelevant to the reliability of color perception. Now take the disjunctive circumstance (C1 v C2). Bob satisfies it merely by having been born in 1998, yet the probability of the ball’s being red, given a red percept and the disjunction, can still be low, because that probability is dominated by the glasses-wearing cases. So if disjunctive circumstances were allowed as values of R, Bob’s harmless birth year would indirectly supply a defeater for perception. This is why P&C require R to be projectible.
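
A small worked calculation (with invented numbers; these probabilities are not P&C’s) shows how a disjunctive circumstance can inherit a low conditional probability from its bad disjunct, which is exactly the worry the projectibility constraint is meant to block:

    # Invented numbers purely for illustration.
    p_red_given_glasses = 0.10   # C2: rose-colored glasses make red percepts unreliable
    p_red_given_born98  = 0.95   # C1: birth year is irrelevant; percepts stay reliable

    # Suppose 90% of red-percept cases satisfying (C1 v C2) satisfy it via C2.
    w_glasses, w_born98 = 0.9, 0.1
    p_red_given_disjunction = (w_glasses * p_red_given_glasses +
                               w_born98 * p_red_given_born98)
    print(p_red_given_disjunction)  # 0.185 -- "low", so (C1 v C2) would count as a
                                    # reliability defeater even for someone, like Bob,
                                    # who satisfies only the harmless C1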

 

IV. Implementation

 

This section shows how reason schemas are implemented in OSCAR. A few terms need clarifying. OSCAR reasons by means of both deductive inference rules and defeasible reason-schemas (203). Along with the premises, OSCAR is given queries, which generate “epistemic interests.” Reasoning from the premises toward those interests produces conclusions, which are recorded in inference graphs.

OSCAR performs bidirectional reasoning: the agent reasons forward from the premises (using forward-reasons) and backward from the queries (using backward-reasons) (203). Simple forward-reasons have no backward-premises, and simple backward-reasons have no forward-premises; mixed reasons have premises of both kinds. With simple reasons, the conclusion can be inferred directly from the premises. With mixed reasons, the conclusion is drawn only if (1) the reasoner adopts interest in the backward-premises and (2) those interests are discharged. Interest in the backward-premises is adopted only once inference nodes instantiating the forward-premises have been constructed. This interplay between the two kinds of premise gives OSCAR control over how reasoning proceeds.
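
As a rough illustration of this control structure (not OSCAR’s actual code; the rule format and the example are our assumptions), forward-premises are matched against established conclusions, while backward-premises are pursued only once they become of interest:

    def reason(premises, queries, rules):
        """Toy bidirectional reasoner.  Each rule is (forward_premises,
        backward_premises, conclusion).  Facts grow forward from the premises;
        interests grow backward from the queries."""
        facts, interests = set(premises), set(queries)
        changed = True
        while changed:
            changed = False
            for fwd, bwd, conclusion in rules:
                # Adopt interest in the backward-premises only once the
                # forward-premises are established and the conclusion is wanted.
                if set(fwd) <= facts and conclusion in interests:
                    new = set(bwd) - interests
                    if new:
                        interests |= new
                        changed = True
                # Discharge the rule once every premise is established.
                if set(fwd) <= facts and set(bwd) <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts, interests

    # Hypothetical mixed reason: a percept (forward-premise) plus "conditions are
    # normal" (backward-premise) yields the conclusion.
    rules = [(["percept: the door is open"], ["conditions are normal"], "the door is open")]
    facts, interests = reason({"percept: the door is open", "conditions are normal"},
                              {"the door is open"}, rules)
    print("the door is open" in facts)            # True
    print("conditions are normal" in interests)   # interest was adopted along the way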

 

The problem lies in how to implement perceptual reliability. The definition of PERCEPTUAL-RELIABILITY is adjusted to take reason strengths into account:

Perceptual-reliability:

Where R is projectible, r is the strength of PERCEPTION, and s < 0.5 ⋅(r + 1), “R-at-t, and the probability is less than or equal to s of P’s being true given R and that I have a percept with content P” is an undercutting defeater for PERCEPTION.

 

Reason strengths range over the interval (0, 1) but are mapped onto probabilities in the interval (0.5, 1) (206). P&C first propose this as a backward-reason, but they subsequently note that, because there are no constraints on R, the reasoner would spend too much time trying to assess reliability relative to everything true of the situation. They instead propose it as a degenerate backward-reason with no backward-premises, taking R-at-t and the probability premise as forward-premises. A remaining difficulty: how can we implement perceptual reliability if we need to know that R holds at the time of the percept, but we can only infer this from the fact that R held earlier?
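
A small arithmetic sketch of this mapping (our reading of the passage, not P&C’s code): a reason strength r in (0, 1) corresponds to the probability 0.5 · (r + 1), and a reported reliability below that value is low enough to undercut the perceptual inference.

    def strength_to_probability(r):
        """Map a reason strength in (0, 1) to a probability in (0.5, 1)."""
        return 0.5 * (r + 1)

    def undercuts_perception(s, r):
        """Illustrative reading: a reliability of at most s defeats a PERCEPTION
        inference of strength r when s < 0.5 * (r + 1)."""
        return s < strength_to_probability(r)

    print(strength_to_probability(0.8))     # 0.9
    print(undercuts_perception(0.7, 0.8))   # True: 0.7 < 0.9, so the defeater applies
    print(undercuts_perception(0.95, 0.8))  # False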

 

V. Temporal Projection

 

This section of the chapter opens with a discussion of the strengths of PERCEPTION while acknowledging its major shortcoming: perception, at best, is nothing more than a “form of sampling.” That is, it is not possible for a cognizer to continually perceive and process the state of everything in his or her surrounding environment. Rather, individuals perceive small “space-time chunks” of their environments and make perceptual inferences about the state of the world at large by combining these chunks. The problem with this process of forming inferences, P & C argue, is that there is a surprising difficulty in drawing accurate conclusions about the world at large based on combinations of perceptual samples.

 

A large part of this difficulty involves the lack of time-sensitive stability exhibited by the majority of objects in the natural world. Making inferences based on single perceptual samples of given objects presupposes that the properties observed are stable over time, which is often not the case. Theoretically, an individual would need to observe the same object at multiple points in time in order to determine whether or not its properties had changed; only when affirming that they had remained unchanged could its stability be inferred, and broader inferences about its nature be made. However, making observations of the same object at various times requires the observer to accurately reidentify the object, a task that can become impossible when the object at hand rapidly or unpredictably changes its properties.

Thus, an agent must assume some stability in the objects it uses when forming inferences about the world. P & C argue that a property is stable if, given that it is observed to hold at an initial time, the probability is high that it will continue to hold at a later time; and they argue that this probability decreases as the length of the time interval grows. P & C call the defeasible inference from a property’s holding at one time to its holding at a later time temporal projection. Temporal projection, they argue, is essential to the rational assessment of property stability, and thus to forming inferential conclusions about one’s surrounding environment. What does temporal projection look like when applied? In other words, how should temporal projection be implemented?

VI. Implementing Temporal Projection

 

In this section, P & C present a number of complex algorithms and atomic formulas, which will not be reproduced in detail here. It suffices to note that temporal projection depends on temporal projectibility: to project the stability of an object’s property over time, that property must be temporally projectible, i.e., it must be possible for a rational agent to assess its constancy probabilistically. The remainder of the section analyzes the atomic formulas and algorithms used in temporal projection, arguing that the properties involved are, in fact, temporally projectible.
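
One simple way to picture temporal projection is as a defeasible reason whose strength starts near the strength of the original evidence and decays toward indifference (0.5) as the time interval grows. The exponential form and decay constant below are our illustrative assumptions, not the formula P&C actually implement:

    import math

    def projection_strength(initial_strength, dt, decay=0.05):
        """Illustrative only: strength of inferring P-at-(t + dt) from P-at-t,
        decaying from initial_strength toward 0.5 (indifference) as dt grows."""
        return 0.5 + (initial_strength - 0.5) * math.exp(-decay * dt)

    for dt in (0, 1, 10, 50):
        print(dt, round(projection_strength(0.95, dt), 3))
    # 0 0.95, 1 0.928, 10 0.773, 50 0.537 -- longer intervals, weaker projection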

 

VII. Reasoning About Change

 

In a vein similar to that of Section V, this section discusses the need of a cognizer to account for the tendency of most, if not all, objects in the natural world to change as a function of time and other variables. Moreover, P & C address the need of a rational agent to consider this tendency to change when making broader inferences about his or her surrounding environment. In their discussion of this need to account for change, P & C identify four kinds of reasoning:

  1. First, they argue that the agent must be capable of acquiring perceptual information about the surrounding world. This implies a need for proper cognitive functioning and reliable sensory interactions.
  2. Second, the agent must be able to combine isolated perceptual chunks of his or her surrounding environment into a coherent picture of the broader world.
  3. Third, the agent must be capable of perceptually detecting changes in previously identified components of his or her broader picture of the world, and of amending this picture accordingly.
  4. Lastly, the agent must be capable of acquiring causal information about “how the world works” and to use this information to efficiently predict patterns of change that may result in the future, either from uncontrollable, natural circumstances or from the agent’s own actions.

The remainder of this section discusses the fourth kind of reasoning in depth, noting that the ability to foresee change requires the ability to foresee non-change; P & C write on page 219 that “…reasoning about what will change if an action is performed or some other event occurs generally presupposes knowing what will not change.” The rest of the section elaborates on the logic of this claim, focusing on the argument that predicting what is likely to occur depends largely on knowledge of what is unlikely or impossible to occur.

 

 

VIII. The Statistical Syllogism

 

            In this section, P & C continue to build on their foundational claim that an individual’s ability to rationally navigate through the world depends heavily on his or her ability to make reasonable predictions about changes that may take place in his or her environment under various circumstances. They argue that, in order to function in a complex environment, such as the natural world as we know it, a rational agent must be equipped with rules that:

  • enable the agent to form beliefs in statistical generalizations, and;
  • enable the agent to make inferences based on those statistical generalizations that are applicable to individual circumstances. (pp. 229-230)

P & C provide an archetypal, non-numerical version of the statistical syllogism in their “most Fs are Gs” example:

 

Most F’s are G’s

This is an F

_____________

This is a G.

 

P & C explain that, because human beings often reason this way, a rational execution of such logic is essential in making reasonable predictions about the state of one’s surrounding environment. The remainder of this section involves a series of statistical and algorithmic examples supporting the validity and applicability of this claim.
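
Read computationally, the statistical syllogism treats “most F’s are G’s” as a defeasible reason whose strength tracks prob(G/F). The representation and the 0.5 threshold below are our illustrative assumptions, not P&C’s implementation:

    def statistical_syllogism(prob_G_given_F, x_is_F):
        """Illustrative sketch: if x is an F and prob(G/F) > 0.5, return a
        defeasible conclusion that x is a G, with strength prob(G/F)."""
        if x_is_F and prob_G_given_F > 0.5:
            return True, prob_G_given_F
        return False, 0.0

    # Newspaper example: most reports are true, and this is a report, so we are
    # defeasibly justified (with strength 0.9) in believing this report.
    print(statistical_syllogism(0.9, True))   # (True, 0.9)
    print(statistical_syllogism(0.3, True))   # (False, 0.0) -- "most" fails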

 

IX. Induction

            This section relies heavily on the claims made in section VIII about the need for a rational agent to effectively make reasonable predictions about the world based on generalizations; these generalizations, they argue, can be either exceptionless (all Fs are Gs) or statistical, varying in probability (the probability of A being B is high). We become justified in believing the statistical generalizations we make through a process of induction. 

P & C begin with an explanation of the simplest kind of reasoning, enumerative induction, which involves a process of generalization based on sampling (i.e. “all As in sample X are Bs, so all As are likely Bs”). The most important defeater to consider when evaluating this line of reasoning is the possibility that X is not a reasonably “fair” or accurate sample; that is, it does not accurately encapsulate or characterize the population that it supposedly represents. The “fairness” or accuracy of a sample, i.e. its reliability in the formation of conclusions about a represented population, depends on a number of factors, including sample size and sample diversity.

P & C argue that a second kind of induction, termed statistical induction, is much more important for rational agents in their process of forming conclusions about the world based on observation of samples. P & C succinctly summarize the principles of statistical induction in 7.6:

“If B is projectible with respect to A, then ‘X is a sample of n A’s r of which are B’s’ is a defeasible reason for ‘prob(B/A) is approximately equal to r/n’” (7.6).
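
As a rough computational gloss on (7.6) (our sketch, not P&C’s), the observed sample licenses a defeasible estimate of prob(B/A), and a fair-sample defeater would attack the inference by giving reason to think the sample unrepresentative:

    from fractions import Fraction

    def induced_probability(n, r):
        """Defeasible estimate from (7.6): a sample of n A's, r of which are B's,
        is a reason to believe prob(B/A) is approximately r/n."""
        if n == 0:
            raise ValueError("no sample, no induction")
        return Fraction(r, n)

    # e.g. 47 of 50 observed A's were B's
    print(float(induced_probability(50, 47)))  # 0.94
    # A fair-sample defeater would not dispute this arithmetic; it would give
    # reason to think these 50 A's are not representative of A's generally.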

The remainder of the section analyzes a number of algorithmic expressions and likelihood ratios that illustrate the process of statistical induction. One might assume that, like temporal projection, the justification of an inductive argument depends on a number of factors such as the constancy of observed properties over time, the representativeness of the sample, and so on. P & C conclude the section, however, by arguing that a strength of induction is its lack of need for justification (p. 237). They add that “…principles of induction are philosophically trouble free. Major questions remain about the precise form of the epistemic norms governing induction. Suggestions have been made here about some of the details of those epistemic norms, but we do not yet have a complete account” (p. 237).

 

Discussion Questions

1) P&C state that the task at hand is to construct low-level theories that support direct realism (192). How does bias fit into this claim? Specifically, are the low-level theories constructed only to match the high-level theory, rather than to accurately describe reality?

2) Does direct realism really avoid the problems of justification that foundationalism runs into?

3) The original example for the statistical syllogism is: “no one believes everything in the newspaper to be true – but I do believe that most is true and that justifies me in believing individual newspaper reports” (230). Can justification be based on the “most is true” concept?

4) In their discussion of perception in the role of forming inferences about the world, P&C lend a great deal of weight to the agent’s ability to integrate numerous perceptual “space-time chunks” into a coherent, broader view of one’s surroundings. Can you give any real-world examples of this process?

5) P & C claim that the strength of induction lies in its lack of need for justification. Do you agree with this? Is induction an infallible process?

 

10 thoughts on “Pollock and Cruz, Ch. 7: ‘Direct Realism’” (Todd Hunsaker and Emily Vicks)

  1. As Olivia touched on, the initial question that I had after this reading was- at what point does a percept become a belief? Where is the distinguishing point between these two concepts? In order to perceive something, isn’t it necessary to (at least subconsciously) form some kind of baseline belief about it? For example, if you perceive an object as blue, does that not also entail believing the color blue and believing in the way that color looks? In this way, I think the distinction between percept and belief may be overstated in P&C’s description of direct realism. How can a perception be the reason for a belief when it seems that these two terms are inextricably tied at the onset?

    Along with Chelsea’s question, how would emotion affect this idea of perception and belief? By P&C’s definition, would emotions be able to affect perception a priori? In this way, it seems that perception of the world around you would be less direct and more founded in previous conceptions than P&C acknowledged.

    I found P&C’s section “Reasoning about Change” to be particularly interesting. I think it is very accurate that in order to be rational, cognizers need to account for the natural change that happens daily. The physical world around you is situationally dependent, and different truths can be evident depending on those surroundings. It is important to be able to account for these changes- that is the base of learning and the base of evolutionary progress. However, the lifespan of a human is obviously short. Could it be that our rational ability has a cap based on the temporal limit of our lives? It seems to me that we could be making many incorrect conclusions due to the fact that we are basing many of our perceptions on a limited time frame. In other words, could it be that we can never achieve perfect rationality given the time constraints of our life?

  2. To address your second question, I am not so sure that direct realism really avoids the problems of justification that foundationalism runs into. Direct realism suggests that an inference used for reasoning judgment “is produced directly from the percept, not indirectly through beliefs about the percept”. I find it hard to accept this claim. I feel as though this theory, like foundationalism, fails to acknowledge cases of perceptual variation (i.e. illusions and hallucinations). In the case of illusion, an individual mistakenly perceives something that is not true. For example, it is possible that when two people look at a red apple from different angles, one sees the apple as purple and the other sees it as orange. This perception of the wrong color is due to the way the light reflects off the object and is seen by the person. However, there must be a way that our minds compare this visual input to previous beliefs for each person to correctly infer that the apple is red. Do we really produce inferences directly and solely from perception? Is it more likely that we do not observe the world directly and instead that our minds construct representations of the world that we then observe?
    Another objection to foundationalism involving perception that direct realism fails to address is the possibility of cognitive penetrability. For example, when we are shown a fragmented image of a common object, we are able to mentally fill in the blanks using our prior experiences/beliefs and infer what the image is. These theories claim that we prioritize perception; however, if vision is cognitively penetrable, is it possible that we produce unreliable inferences?

  3. Although confusing and sometimes hard to follow, I found P&C’s account of temporal projection and temporal projectability quite intriguing and thought-provoking in considering time’s role in proper reasoning and rationale. That is, the perception of P-at-t0 does not simply lead to justifiable reason to believe P-at-t1, or P-at-t2 for that matter. First you must consider which objects and properties are temporally projectable, and then you must consider which objects and properties have the potential to be temporally stable. When analyzing the stability of such properties, though, we run into a number of problems. P&C declare that it is “epistemically impossible to investigate the stability of perceptible properties inductively without presupposing that most of them tend to be stable” (208). However, if you accept the need to assume, then you allow yourself the ability to use induction and discover that “some perceptible properties are more stable than others [and] that particular properties tend to be unstable under specifiable circumstances” (208). For the objects and properties considered temporally projectable, we must also use Bayesian reasoning and probability to identify which are more likely to remain constant over time.

    Maybe I am interpreting this all in a negative light, but to me this just highlights our inability to make inferences and predictions for the future. Obviously we can use probability to get closer to the answer, but even then, most of these probabilistic inferences are unreasonably based on other inferences made at some point in time. This brings up the problem of regress stretching farther and farther from the initial inference and hopeful answer. With this in mind, how can we be confident in anything? Is it possible for the objects and properties that we observe to be changing faster than we realize? How does all of this play into the discussion of climate change? Furthermore, have we ever discovered an object or material to be completely stable over time? If not, maybe it is fair to instead assume all matter to be constantly changing (or changing at least one point in time). Therefore, it would only be necessary to justify stability rather than instability of an object or property, unless that is already what we, as humans, do…

  4. I appreciate what Pollock and Cruz are trying to do with the idea of direct realism, but I’m not sure I completely buy into it. Their model of knowledge seems to avoid the skepticism-related issues of the doxastic assumption, while at the same time drawing on a foundationalism-like system of reasoning that is also adaptable to contradictions and change. This is really appealing, but the incorporation of so many different things opens it up for attack from a number of different directions, and in addition I’m struggling to fit direct realism into my current view of epistemology. For example, Pollock and Cruz reject ideas like coherence theories because they lack a level of “justification,” but it seems that they can’t accomplish similar goals without incorporating “statistical syllogism,” which seems similarly lacking in justification. I also don’t see, for example, how this perception-centric approach can be applied to math, morality, or other abstract contexts in which a rigorous system of reasoning is well-suited.
    Over the course of the semester I’ve become attached to the idea that there may not just be one “correct” model of knowledge and reasoning, but potentially many that are more applicable or descriptive in different contexts. This has been brought up a few times in class, especially by Eric (as an “epistemic toolbox”) and Porter, and I think it helps explain why everything we’ve looked at so far seems to be reasonable at first but then breaks when applied to broader epistemological questions. Maybe we haven’t (or can’t) capture all of what reasoning is in one model, like the story of the blind men and the elephant. It seems to me that direct realism tries to account for understanding too rigorously in regards to concrete situations and in a way that’s not applicable to more abstract ideas, and is therefore less useful than other models that at least account for certain things really well.

  5. In my response to the article and summary above I attempt to answer the discussion questions posed by Emily and Todd: specifically question 1 and 3.

    Question 1:
    P&C’s discussion on low-level theories vs. high-level theories leaves me questioning what exactly their argument is. I’m not sure if they are entirely arguing that construction of low-level theories, alone, supports direct realism. It seems like P&C claim that neither low-level theories nor high-level theories can independently exist or let alone be constructed in the first place without the supporting framework of the other (192-193). P&C do explicitly state, “it seems likely that little progress can be made on low-level theories without presupposing something about high-level theories” (193). They claim that low-level theories are used to “fill” in arguments for a high-level theory. But then they follow that claim by stating that the construction of the low-level theories themselves can modify the high-level theory. Which one comes first? Low or high theory? How does this relationship/exchange between the two levels work exactly in application?

    This leads me to wonder then exactly how bias plays into both levels, not just low-level theories. If, in fact, low-level theories “fill” in the arguments for high-level theories, then doesn’t this imply that if one level theory has biases the other level will inevitably also hold the same biases?

    Question 3:
    I find this “most is true” concept both convincing and challenging. I think it would be hard to claim that people never reason using the “most is true” concept. In fact, I think more often than not we, or at least I, find ourselves ultimately breaching a forked road in our decision making process by submitting to this inference-based, perhaps less certain, form of justification – the “most is true” concept. I think the question though is whether or not from an epistemic point of view this form of reasoning would be considered a means of justification. It may not be justification in that it makes our reasonings free of contradictions or our beliefs infallible. However, this “most is true” concept may still be a viable way to reason based on P&C’s definition of reason (196). Going with the ticket example on pg 230: your choice to believe that your ticket may win based on knowing that some ticket will win, is a good enough reason for that belief. But I wonder if it being a “good enough reason” is the same as justification? Are justification and reasons completely different? Independent from one another?

  6. I was impressed with Pollock & Cruz’s account of direct realism; it was very well-detailed, but it left me with a strange sense of doubt. I agree with many of their assertions, especially the sections on reasoning (194-201), perception (201-203), reasoning about change (219-229), and probability (229-234). (Temporal projection lost me a bit towards the end, but I do agree with presumptions of stable properties.) P&C allow for adaptation to changes in the environment and for beliefs to conflict and be amended to cohere. However, all of this, while making sense, seemed a bit mechanical, which I guess makes sense if this is meant to be processed by a computer. But it seems a bit idealist to imagine humans working under this model, when there are so many other factors and limitations.

    In their conclusion, P&C highlight this concern: “we have not addressed knowledge of other minds, a priori knowledge, or the kind of means-end reasoning required in searching for plans for achieving goals. A complete procedural epistemology must eventually address all these topics…” (239). I wonder how these topics would fit in. How does emotion play a role? Do we all have the capacity for what P&C have proposed, and if so, are they taking place subconsciously? Do we use all of P&C’s proposed model at once or pieces depending on the task at hand?

  7. I also agree that this was a good chapter in that P&C did a sufficient job summing up their book to provide a better understanding of everything we have discussed over the course of the semester. I found myself very intrigued by the levels of epistemological theorizing in section 1.2. Their statement “neither top-down nor bottom-up theorizing can be satisfactory by itself” (193), makes complete sense, as we want to make sure we factor in both our senses from the stimulus (bottom up), and the use of contextual information. We want to use the most basic responses in order to build up to higher levels of knowledge, but at the same time, understand that we need to be able to break down a high level system to understand its components. I found a good example to make sense of this: http://openpsyc.blogspot.com/2014/06/bottom-up-vs-top-down-processing.html

    In order to make proper use of these theorizing methods, we have to have effective reasoning strategies as well as understand how our perception plays a role. The example of Bob and the sunglasses helps. To approach the second discussion question that Todd and Emily bring up, I think direct realism provides a valid means of justification and does avoid the problems that foundationalism runs into, but only if we can verify the reliability of the senses/the person perceiving these inputs. I think this then goes back to foundationalism, where we must be able to believe and trust in the ability to perceive based on previous beliefs about the person we are questioning the reliability of. I’m not sure I can restate that very coherently, hopefully someone can understand where I am trying to go with this!

    Given that, I think my question to bounce back to our class discussion is how do we quantify direct realism when one object can have different responses on different perceivers? In this case, how might we use top down and bottom up theorizing strategies to get there?

  8. Along with Ryan I also had issues with P&C’s discussion of perception. On page 201, P&C discuss perception and conclude that “perception provides reasons for judgments about the world, and the inference is made directly from the percept rather than being mediated by a (basic) belief about the percept.” I had trouble with this statement because, going back to the red room example (something looks red, but there is a red light in the room, and you are either aware of it or not), wouldn’t a belief about the percept (i.e. you know that there is a red light) influence the reason for believing something (that the object is not red)? To me it doesn’t seem feasible to disassociate belief from perception, as at some level, consciously or subconsciously, our beliefs about the percepts will influence our reasoning.

    According to P&C, perception is essentially the product of sampling the world around us, and when something does not change quickly over time, it is considered a stable property (208). However, who or what determines if something is stable? It would seem that P&C support the idea that it is the cognizer who determines which properties are stable through inductive reasoning; however, can this process ever be deductive? Wouldn’t the concept of stability be subjective, as cognizers may have different experiences that cause them to believe that something is either more or less stable? Also, why does stability decay as time increases? (209) Why do we have to conclude that something is defeasible? Can’t it just be considered to be changing, and accept fluidity in updating our perceptions? (Also apologies if this doesn’t make sense, I don’t quite fully understand temporal-projection…)

    Lastly, I took issue with P&C’s defeat diagrams (7.1-7.3; 198, 200, 201 respectively) and the Smith and Jones raining/not raining example. I understand how equal arguments can cancel each other out in a sense; however, very rarely are there not additional inputs (i.e. biases, how much you trust Smith or Jones, whether it’s a time of the year when it is likely or unlikely to rain) that factor in. Therefore, the functionality of the defeater processes does not seem applicable in reality and seems only to exist in the hypothetical sense. How can they be modified to incorporate such external inputs?

  9. This chapter was a really effective last reading as it allows us to connect what Pollock and Cruz conclude their book with to most of the psychological studies that we have covered. Direct realism seems to represent the idea that we “directly” experience external material objects. Humans do not normally form beliefs about percepts, and instead move directly from percepts to beliefs. I can sympathize with Pollock and Cruz’s reasoning of the plausibility and motivations of direct realism, but this may be due to the fact that they spend almost all of their previous chapters arguing against any other option. However, I question the level of the ‘directness’ that P&C stand by. If direct realism holds that physical objects can be direct objects of perception, how would direct realism address the complex series of events involving vision and cognition in the brain before perception can even take place? Also, because direct realism seems to share similarities with foundationalism, what would Pollock and Cruz have to say about Pylyshyn’s study from the beginning of the semester and the existence of early vision?

    The implementation of direct realism through OSCAR is interesting to consider as well. OSCAR acts as a deductive and defeasible reasoning agent. How does OSCAR compare and contrast to ECHO, Thagard’s connectionist AI model that relies on explanatory coherence? Additionally, I found an outside report about an update on OSCAR called “OSCAR-MDA,” which would attempt to replicate the methodological reasoning of a doctor in an emergency room if given the time to properly reflect on the situation (http://www.princeton.edu/~jchosea/oscar.pdf). What do you think of this, and what are other potential applications for OSCAR beyond implementing P&C’s epistemological theory?
