Pollock and Cruz, Ch. 7: “Direct Realism” (Todd Hunsaker and Emily Vicks)

I. Introduction

 

1.1 In this chapter, Pollock and Cruz (1999) argue for direct realism as a viable answer to the question, “what are the actual norms governing human epistemic competence?” (191). Their argument for direct realism stems from their rejection of the doxastic assumption and their claim that our epistemic norms are captured by nondoxastic internalism. P&C hold that holistic coherence theories fail because they cannot differentiate between justified and justifiable beliefs, and they conclude that reasons play an essential role in justification.

P&C critique foundationalism and other doxastic theories because they cannot accommodate perceptual knowledge or describe human rational cognition. In vision, beliefs about physical objects can be formed either (A) directly from the percept or (B) by inference from beliefs that describe the percept. Unlike foundationalism, which treats (A) as impossible, P&C argue that epistemic norms licensing (A) are possible, and that the intermediate beliefs required by (B) are unnecessary. The fundamental principle of direct realism is that inferences and beliefs are drawn directly from the percept. Direct realism is therefore similar to foundationalism, except that the foundations are percepts rather than beliefs.

 

1.2 Levels of Epistemological Theorizing

 

In this section, P&C distinguish three levels of epistemological theorizing. At the low level, philosophers engage in bottom-up theorizing by investigating particular kinds of knowledge claims. The intermediate level involves topics that pertain to all the kinds of knowledge claims explored at the lowest level. The highest level is top-down epistemological theorizing: general epistemological theories that try to describe “how knowledge in general is possible” (192). P&C claim that epistemological theorizing requires both bottom-up and top-down processes. Specifically, one must first use top-down theorizing to argue for a high-level theory and then construct compatible low-level theories to support it. If no such low-level theories can be found, the high-level theory should be abandoned. In Section II, P&C argue that defeasible reasoning “provides the inferential machinery upon which to build low-level theories of epistemic norms governing specific kinds of knowledge” (200).

 

1.3 Filling Out Direct Realism

 

P&C argue for the high-level theory of direct realism by constructing compatible low-level theories. Construction is defined as “describing the various species of reasoning that can lead to justified beliefs about different subject matter” (195). This is the main goal of the OSCAR project. The creation of an artilect depends on a successful and detailed low-level description of our epistemic norms, so that a computer system can encode those norms (194). P&C are open to epistemologists who disagree with direct realism, but they doubt whether an artilect can be built on opposing theories of epistemic foundations.

II. Reasoning

 

Direct realism requires epistemic norms that can appeal to perceptual states themselves, not merely to our beliefs about those states. P&C state that there can be “half-doxastic connections” between beliefs and nondoxastic states that are analogous in structure to ordinary defeasible reasons. The only difference lies in the reason-for relation, because different states with similar content can support different inferences.

 

P&C define a reason as follows:

 

A state M of a person S is a reason for S to believe Q if and only if it is logically possible for S to become justified in believing Q by believing it on the basis of being in the state M.

In other words, the state M does not need to be a belief. The fact that the ball looks red to Bob (P) is reason enough for Bob to believe that it is red (Q).

2.2 Defeaters

Defeaters for half-doxastic connections operate like the defeaters proposed in foundations theories. It is important to characterize the defeaters for defeasible reasons in low-level accounts (201). P&C describe two kinds of defeaters; the second is redefined to include nondoxastic states.

(1)     REBUTTING DEFEATER: If M is a defeasible reason for S to believe Q, M* is a rebutting defeater for this reason if and only if M* is a defeater (for M as a reason for S to believe Q) and M* is a reason for S to believe ~Q.

(2)     UNDERCUTTING DEFEATER: If M is a nondoxastic state that is a defeasible reason for S to believe Q, M* is an undercutting defeater for this reason if and only if M* is a defeater (for M as a reason for S to believe Q) and M* is a reason for S to doubt or deny that he or she would not be in state M unless Q were true.

A rebutting defeater (1) is a reason that denies the conclusion (Q). For example, if Bob is colorblind and believes that his colorblindness is such that whenever something looks red it is actually green, he has a reason to believe the ball is not red. An undercutting defeater (2) is a reason that deprives a person of the reason to believe Q without negating Q. It attacks the connection between the evidence and the conclusion by showing that one’s reason for believing Q does not guarantee that Q is true. In our example, Q is “the ball is red.” If Bob is informed that the ball is being irradiated by red lights, Bob no longer has reason to believe that the ball that appears red (P) actually is red, but he does not thereby gain a reason for saying that it is not red. Undercutting defeaters are reasons for “P does not guarantee Q,” abbreviated (P ⊗ Q) (197). In other words, given the irradiation, the ball’s looking red to Bob does not guarantee that it is red.

2.3 Justified Beliefs

In direct realism, beliefs are justified by reasoning (197). P&C define reasoning as constructing longer arguments out of shorter, subsidiary arguments. Each argument is a sequence of beliefs and nondoxastic mental states ordered so that each member is either (1) a nondoxastic mental state or (2) supported by propositions or nondoxastic states earlier in the sequence that constitute a reason for it (197). An argument is instantiated if a person is in the relevant nondoxastic states and believes each proposition on the basis of the earlier members.

An inference-graph is a set of arguments that records how arguments are constructed from one another. Each node receives a status-assignment, which marks inferences as defeated or undefeated. A partial status-assignment assigns “defeated” or “undefeated” to a subset of the nodes according to the following rules:

  1. if A is a one-line argument (i.e., a single percept), A is assigned “undefeated”;
  2. if some defeater for A is assigned “undefeated”, or some member of the basis of A is assigned “defeated”, A is assigned “defeated”;
  3. if all defeaters for A are assigned “defeated” and all members of the basis of A are assigned “undefeated”, A is assigned “undefeated”.

In other words, an argument A is undefeated relative to the inference-graph if and only if every status-assignment assigns “undefeated” to A.

Figure 7.1 illustrates the importance of defeasibility in the justification of arguments. The arrows represent inferences between nodes. Both P1 and Q1 are nondoxastic states.

 

Because the conclusion of the second argument is an undercutting defeater for the final step of the first, Bob is not justified in believing P3, and the first argument is assigned “defeated.” (P2 ⊗ P3) defeats the first argument because it supports a defeater for its final step. However, if Bob finds a defeater for some part of the second argument, the first argument can be reinstated. An argument can be defeated if it (1) is based on a defeated subsidiary argument or (2) has an undefeated defeater. Arguments are therefore “provisional vehicles of justification”: arguments can defeat one another, and a defeated argument can be reinstated.

Figure 7.2 illustrates the concept of collective defeat.

Collective defeat is the situation in which two or more arguments defeat each other. In this example, we have equally good reasons for believing that it is raining and that it is not. Since each conclusion is assigned “defeated” by one of the two possible status-assignments, both arguments are defeated relative to the inference-graph, and we should accept neither conclusion.

(1) We assign “undefeated” to P1, P2, “Jones says it is raining,” “Smith says it is not raining,” and “It is raining,” and “defeated” to “It is not raining.” Conclusion: it is raining.

(2) We assign “undefeated” to P1, P2, “Jones says it is raining,” “Smith says it is not raining,” and “It is not raining,” and “defeated” to “It is raining.” Conclusion: it is not raining.

An argument is provisionally defeated if one status-assignment assigns “defeated” to it and another assigns “undefeated” to it. Unlike an argument that is defeated outright, a provisionally defeated argument can still defeat other arguments.
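
To make the three status-assignment rules concrete, here is a minimal Python sketch, our own illustration rather than anything from P&C or OSCAR, that brute-forces the status-assignments for the rain example above. It checks total assignments against the rules (P&C officially use partial assignments, but total ones suffice for this small graph); the graph encoding and names are ours.

from itertools import product

# Each node has a basis (the earlier steps it is inferred from) and,
# possibly, defeaters. P1 and P2 are percepts (one-line arguments).
nodes = ["P1", "P2", "Jones says it is raining",
         "Smith says it is not raining", "It is raining",
         "It is not raining"]
basis = {"P1": [], "P2": [],
         "Jones says it is raining": ["P1"],
         "Smith says it is not raining": ["P2"],
         "It is raining": ["Jones says it is raining"],
         "It is not raining": ["Smith says it is not raining"]}
defeaters = {"It is raining": ["It is not raining"],   # rebutting defeaters
             "It is not raining": ["It is raining"]}

def consistent(status):
    # Check a total status-assignment against the three rules.
    for n in nodes:
        if not basis[n]:                 # rule 1: percepts are undefeated
            if status[n] != "undefeated":
                return False
            continue
        beaten = (any(status[d] == "undefeated" for d in defeaters.get(n, []))
                  or any(status[b] == "defeated" for b in basis[n]))
        clear = (all(status[d] == "defeated" for d in defeaters.get(n, []))
                 and all(status[b] == "undefeated" for b in basis[n]))
        if beaten and status[n] != "defeated":      # rule 2
            return False
        if clear and status[n] != "undefeated":     # rule 3
            return False
    return True

assignments = [s for s in
               (dict(zip(nodes, vals))
                for vals in product(["defeated", "undefeated"],
                                    repeat=len(nodes)))
               if consistent(s)]

for n in nodes:
    print(n, "->",
          "undefeated" if all(a[n] == "undefeated" for a in assignments)
          else "defeated (at least provisionally)")

Exactly two status-assignments survive, the two listed above, so the testimony nodes come out undefeated while both rain conclusions come out defeated relative to the graph, reproducing collective defeat.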

 

Figure 7.3 illustrates provisional defeat.

 

Smith and Jones accuse each other of lying. One status-assignment assigns “defeated” to “Smith is a liar” and “undefeated” to “Jones is a liar”; the other assignment does the reverse. If “Smith is a liar” is undefeated, the inference from Smith’s testimony that it is raining is defeated. But even in the assignment on which “Jones is a liar” is undefeated and “Smith is a liar” is defeated, the inference to “it is raining” does not come out undefeated relative to the graph. “Smith is a liar” is thus provisionally defeated: it is assigned “defeated” by one status-assignment and “undefeated” by the other, and yet it still suffices to defeat the inference that it is raining.

 

III. Perception

 

P&C claim that direct realism can solve the problem of perception: how we can gain knowledge of the external world through perception. They take the fundamental principle of direct realism to be that perception provides reasons for judgments about the world (201). As Section I states, the inference is drawn directly from the percept, not indirectly through beliefs about the percept.

 

PERCEPTION:

Having a percept at time t with the content P is a defeasible reason for the cognizer to believe P-at-t.

 

P-at-t is the proposition that P obtains at time t. P&C claim that this principle is the most basic component of rational cognition; it cannot itself be justified. It must simply be present, because it is “an essential ingredient of the rational architecture of any rational agent” (201). Reliability defeaters are undercutting defeaters for PERCEPTION: they show that the inference from the percept is unreliable under the present circumstances.

 

Perceptual-reliability:

Where R is projectible, “R-at-t, and the probability is low of P’s being true given R and that I have a percept with content P” is an undercutting defeater for PERCEPTION.

 

P&C stress the importance of the projectibility constraint on R: without it, gerrymandered (e.g., disjunctive) circumstances would generate spurious reliability defeaters. Consider the example:

 

Consider two circumstances. C1: Bob was born in 1998. C2: Bob is wearing rose-colored glasses. If Bob is wearing the glasses, C2 is a genuine reliability defeater: it is unlikely that a ball that appears red to Bob is actually red. Now suppose only that Bob was born in 1998. Then the disjunctive circumstance (C1 v C2) is true of him, and the probability of the ball’s being red given a red percept and (C1 v C2) can still be low, because the C2 cases dominate that probability. If disjunctions were projectible, the irrelevant fact that Bob was born in 1998 would thus indirectly defeat his perceptual judgment, which is why the projectibility constraint excludes disjunctive circumstances.

 

IV. Implementation

 

This section illustrates how reason-schemas are implemented in OSCAR. A few terms need clarifying. OSCAR reasons with both deductive inference rules and defeasible reason-schemas (203). Premises and queries, or “epistemic interests,” are input; reasoning from the premises toward the queries yields conclusions, which are recorded in inference-graphs.

OSCAR performs bidirectional reasoning: the agent reasons forward from the premises (using forward-reasons) and backward from the queries (using backward-reasons) (203). Simple forward-reasons have no backward-premises, and simple backward-reasons have no forward-premises; mixed reasons contain both kinds of premises. With simple reasons, conclusions can be inferred directly. With mixed reasons, conclusions are drawn only if (1) the reasoner adopts interest in the backward-premises and (2) those interests are discharged. Interest in backward-premises is adopted only when inference nodes supporting the forward-premises have been constructed, and vice versa. This interplay between the two kinds of premises gives OSCAR control over how reasoning proceeds, as the sketch below illustrates.
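
Here is a toy Python sketch of this control structure. It is our illustration, not OSCAR's code; it omits defeat and mixed reasons, and the rule contents are invented. Forward rules fire from established conclusions, backward rules adopt interest in their premises, and a query is answered when the two processes meet.

# forward rules: (premises) -> conclusion; backward rules: goal -> subgoals
forward_rules = [(("A",), "B"), (("B",), "C"), (("A",), "D")]
backward_rules = {"E": ["C", "D"]}

premises, query = {"A"}, "E"
conclusions = set(premises)
interests = {query}

changed = True
while changed:
    changed = False
    # backward: adopt interest in the subgoals of current interests
    for goal in list(interests):
        for subgoal in backward_rules.get(goal, []):
            if subgoal not in interests:
                interests.add(subgoal)
                changed = True
    # forward: draw conclusions whose premises are all established
    for prems, concl in forward_rules:
        if concl not in conclusions and all(p in conclusions for p in prems):
            conclusions.add(concl)
            changed = True
    # discharge: an interest is answered once all its subgoals are concluded
    for goal in list(interests):
        subgoals = backward_rules.get(goal)
        if subgoals and goal not in conclusions \
           and all(s in conclusions for s in subgoals):
            conclusions.add(goal)
            changed = True

print(query in conclusions)  # True: the two directions meet at C and D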

 

The problem lies in how to implement perceptual-reliability. The definition is adjusted to take account of reason strengths:

Perceptual-reliability:

Where R is projectible, r is the strength of PERCEPTION, and s < 0.5 ⋅(r + 1), “R-at-t, and the probability is less than or equal to s of P’s being true given R and that I have a percept with content P” is an undercutting defeater for PERCEPTION.

 

Reason strengths range over (0, 1) but are mapped onto probabilities in the interval (0.5, 1) (206). P&C first propose PERCEPTUAL-RELIABILITY as a backward-reason, but subsequently observe that, because there are no constraints on R, the reasoner would spend too much time trying to assess reliability relative to everything true in the situation. They instead propose it as a degenerate backward-reason with no backward-premises, taking the probability premise as a forward-premise. A difficulty remains: implementing perceptual-reliability requires knowing that R is true at the time of the percept, but we can often only infer this from the fact that R was true earlier.
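
A small Python sketch of the adjusted defeater condition, assuming only the strength-to-probability mapping just described; the function names and numbers are ours:

def strength_to_probability(strength):
    # Map a reason strength in (0, 1) to a probability in (0.5, 1): 0.5*(r + 1).
    return 0.5 * (strength + 1)

def is_reliability_defeater(prob_p_given_r, perception_strength):
    # "prob(P / R & I have a percept with content P) <= s", for some
    # s < 0.5*(r + 1), undercuts PERCEPTION of strength r.
    return prob_p_given_r < strength_to_probability(perception_strength)

# PERCEPTION with strength 0.8 corresponds to probability 0.9, so learning
# that the percept is right only 70% of the time in circumstances R defeats it.
print(is_reliability_defeater(0.7, 0.8))   # True
print(is_reliability_defeater(0.95, 0.8))  # False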

 

V. Temporal Projection

 

This section of the chapter opens with a discussion of the strengths of PERCEPTION while acknowledging its major shortcoming: perception, at best, is nothing more than a “form of sampling.” That is, it is not possible for a cognizer to continually perceive and process the state of everything in his or her surrounding environment. Rather, individuals perceive small “space-time chunks” of their environments and make perceptual inferences about the state of the world at large by combining these chunks. The problem with this process of forming inferences, P & C argue, is that there is a surprising difficulty in drawing accurate conclusions about the world at large based on combinations of perceptual samples.

 

A large part of this difficulty involves the lack of time-sensitive stability exhibited by the majority of objects in the natural world. Making inferences based on single perceptual samples of given objects presupposes that the properties observed are stable over time, which is often not the case. Theoretically, an individual would need to observe the same object at multiple points in time in order to determine whether or not its properties had changed; only when affirming that they had remained unchanged could its stability be inferred, and broader inferences about its nature be made. However, making observations of the same object at various times requires the observer to accurately reidentify the object, a task that can become impossible when the object at hand rapidly or unpredictably changes its properties.

Thus, in forming inferences about the world, an agent must assume some stability in the properties of objects. P & C treat a property as stable if, given that it is observed to hold at an initial time, the probability is high that it will continue to hold at a later time; they argue, further, that this probability decreases as a function of the length of the time interval. P & C call the resulting inference, from a property's holding at one time to its continuing to hold at a later time, temporal projection. Temporal projection, they argue, is essential to the rational assessment of property stability, and thus to forming inferential conclusions about one's surrounding environment. What does temporal projection look like when applied? In other words, how should it be implemented?
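
As a rough sketch of how such a principle might be quantified, consider strength decaying with the time interval. The exponential form and the decay constant below are illustrative assumptions on our part; the text says only that the probability decreases as the interval lengthens.

import math

def projection_strength(initial_strength, decay, dt):
    # Strength of the defeasible inference "P held at t, so P holds at t + dt";
    # longer intervals yield weaker reasons.
    return initial_strength * math.exp(-decay * dt)

# A property observed an hour ago is still a strong reason; one observed a
# week (168 hours) ago is a much weaker one.
print(projection_strength(0.95, decay=0.01, dt=1))    # ~0.94
print(projection_strength(0.95, decay=0.01, dt=168))  # ~0.18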

VI. Implementing Temporal Projection

 

In this section, P & C present a number of complex algorithms and atomic formulas, which will not be covered in detail here. It suffices to note that their account of temporal projection depends on temporal projectibility. Intuitively, in order to temporally project the stability of an object's properties over time, those properties must be temporally projectible; i.e., it must be possible for a rational agent to assess their constancy probabilistically. The remainder of the section analyzes the atomic formulas and algorithms used in temporal projection, all of which P & C argue are, in fact, temporally projectible.

 

VII. Reasoning About Change

 

In a vein similar to that of Section V, this section discusses the need of a cognizer to account for the tendency of most, if not all, objects in the natural world to change as a function of time and other variables. Moreover, P & C address the need of a rational agent to consider this tendency to change when making broader inferences about his or her surrounding environment. In their discussion of this need to account for change, P & C identify four kinds of reasoning:

  1. First, they argue that the agent must be capable of acquiring perceptual information about the surrounding world. This presupposes proper cognitive functioning and reliable sensory interactions.
  2. Second, the agent must be able to combine isolated perceptual chunks of his or her surrounding environment into a coherent picture of the broader world.
  3. Third, the agent must be capable of perceptually detecting changes in previously identified components of his or her broader picture of the world, and of amending this picture accordingly.
  4. Lastly, the agent must be capable of acquiring causal information about “how the world works” and to use this information to efficiently predict patterns of change that may result in the future, either from uncontrollable, natural circumstances or from the agent’s own actions.

The remainder of this section discusses the fourth type of reasoning in depth, noting that the ability to foresee change necessitates the ability to foresee non-change; P & C write on page 219 that “…reasoning about what will change if an action is performed or some other event occurs generally presupposes knowing what will not change.” The rest of the section elaborates on the logic of this claim, focusing on the argument that predicting what will likely occur depends largely on knowledge of what is unlikely or impossible.

 

 

VIII. The Statistical Syllogism

 

            In this section, P & C continue to build on their foundational claim that an individual’s ability to rationally navigate through the world depends heavily on his or her ability to make reasonable predictions about changes that may take place in his or her environment under various circumstances. They argue that, in order to function in a complex environment, such as the natural world as we know it, a rational agent must be equipped with rules that:

  • enable the agent to form beliefs in statistical generalizations, and;
  • enable the agent to make inferences based on those statistical generalizations that are applicable to individual circumstances. (pp. 229-230)

P & C provide an archetypal, non-numerical version of the statistical syllogism in their “most Fs are Gs” example:

 

Most F’s are G’s

This is an F

_____________

This is a G.

 

P & C explain that, because human beings often reason this way, a rational execution of such logic is essential in making reasonable predictions about the state of one’s surrounding environment. The remainder of this section involves a series of statistical and algorithmic examples supporting the validity and applicability of this claim.
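
A minimal Python sketch of the schema, our illustration with invented numbers; P & C's full account adds a projectibility constraint and defeaters (e.g., more specific reference classes):

def statistical_syllogism(prob_g_given_f, is_f, threshold=0.5):
    # From "prob(G/F) = p" and "c is an F", defeasibly conclude "c is a G",
    # with a strength tied to p, provided p clears the threshold.
    if is_f and prob_g_given_f > threshold:
        return True, prob_g_given_f
    return False, 0.0

# "Most F's are G's" (say prob(G/F) = 0.9); this is an F.
accepted, strength = statistical_syllogism(0.9, True)
print(accepted, strength)   # True 0.9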

 

IX. Induction

            This section relies heavily on the claims made in section VIII about the need for a rational agent to effectively make reasonable predictions about the world based on generalizations; these generalizations, they argue, can be either exceptionless (all Fs are Gs) or statistical, varying in probability (the probability of A being B is high). We become justified in believing the statistical generalizations we make through a process of induction. 

P & C begin with an explanation of the simplest kind of reasoning, enumerative induction, which involves a process of generalization based on sampling (i.e. “all As in sample X are Bs, so all As are likely Bs”). The most important defeater to consider when evaluating this line of reasoning is the possibility that X is not a reasonably “fair” or accurate sample; that is, it does not accurately encapsulate or characterize the population that it supposedly represents. The “fairness” or accuracy of a sample, i.e. its reliability in the formation of conclusions about a represented population, depends on a number of factors, including sample size and sample diversity.

P & C argue that a second kind of induction, termed statistical induction, is much more important for rational agents in their process of forming conclusions about the world based on observation of samples. P & C succinctly summarize the principles of statistical induction in 7.6:

“If B is projectible with respect to A, then ‘X is a sample of n A’s r of which are B’s’ is a defeasible reason for ‘prob(B/A) is approximately equal to r/n’”
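
A minimal Python sketch of 7.6 (ours; the sample is invented, and the estimate remains defeasible, subject to fair-sample defeaters):

def estimate_prob(sample):
    # sample: for each observed A, whether it was a B. Observing r B's among
    # n A's is a defeasible reason for "prob(B/A) is approximately r/n".
    n = len(sample)
    r = sum(sample)
    return r / n

observations = [True, True, False, True]   # n = 4 A's, r = 3 of them B's
print(estimate_prob(observations))         # 0.75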

The remainder of the section analyzes a number of algorithmic expressions and likelihood ratios that illustrate statistical induction. One might assume that, like temporal projection, the justification of an inductive argument depends on a number of factors, such as the constancy of observed properties over time and the representativeness of the sample. However, P & C conclude the section by arguing that the strength of induction lies in its lack of need for independent justification (p. 237). They add, however: “…principles of induction are philosophically trouble free. Major questions remain about the precise form of the epistemic norms governing induction. Suggestions have been made here about some of the details of those epistemic norms, but we do not yet have a complete account” (p. 237).

 

Discussion Questions

1) P&C claim that the task at hand is to construct low-level theories that support direct realism (192). How does bias fit into this claim? Specifically, are the low-level theories constructed only to match the high-level theory, rather than to accurately describe reality?

2) Does direct realism really avoid the problems of justification that foundationalism runs into?

3) The original example for the statistical syllogism is: “no one believes everything in the newspaper to be true – but I do believe that most is true and that justifies me in believing individual newspaper reports” (230). Can justification be based on the “most is true” concept?

4) In their discussion of perception in the role of forming inferences about the world, P&C lend a great deal of weight to the agent’s ability to integrate numerous perceptual “space-time chunks” into a coherent, broader view of one’s surroundings. Can you give any real-world examples of this process?

5) P & C claim that the strength of induction lies in its lack of need for justification. Do you agree with this? Is induction an infallible process?

 

Open Peer Commentary and Author’s Response: Why do Humans Reason? Arguments for an Argumentative Theory – Mercier and Sperber (Erick Masias, Devon Tomasi, Mary Thomas)

Peer Commentary:

Arguing, reasoning, and the interpersonal (cultural) functions of human consciousness

Baumeister, Masicampo, and DeWall agree with M&S that reasoning is an interpersonal rather than individual exercise, and that reasoning was evolutionarily advantageous for humans. The purpose of reasoning, they posit, was to advance culture: individuals who could reason well (and thus propagate culture by creating interpersonal connections) were selected for. B, M, & D also argue that thinking and consciousness exist in order for us to share them with others: “much of thinking is for talking” (74).

Regret and justification as a link from argumentation to consequentialism

Connolly & Reb support M&S’s ideas on the evolutionary role of argumentation, but believe that emotions can be a critical link between argument making and consequential decision making. The phenomena of regret, regret avoidance, and justification are not obstacles to good reasoning but can actually facilitate decision making. The attraction effect is the phenomenon in which an option is seen as more attractive when compared to a separate, irrelevant option (75). External accountability can exacerbate the attraction effect, while regret priming – demanding to justify one’s decision to oneself – can eliminate this effect as the goal is to arrive at a conclusion that aligns with one’s values rather than just persuade others.

The freak in all of us: logical truth seeking without argumentation

De Neys attacks M&S’s presentation of people’s performance on classical, non-argumentative logical tasks. De Neys concedes that people do perform poorly on non-argumentative tasks, but argues that the fact that people reason better in argumentative contexts does not mean they don’t try to reason logically outside them. Citing a number of psychology and neuroscience studies, De Neys presents data showing that in classical reasoning tasks, even though people often answer wrongly, there is evidence that they are trying to be logical. In pointing this out, De Neys argues that classical reasoning tasks are less artificial than M&S suggest (76).

Reasoning as a lie detection device

Dessalles argues against the biological function of argumentation presented by M&S. M&S argue that the optimization of communication derived from argumentation occurs at a group level, but Dessalles points out that evolution works at the individual level rather than the group level. Reasoning, then, is for lie detection and aims to restore consistency to communication – remarkably similar to M&S’s proposed function of reasoning. According to Dessalles, the social benefit to the individual of exposing inconsistencies and restoring consistency allows the individual to compete well with their peers. There are side effects to this model of reasoning for social benefit, including invoking evidence that cannot be verified.

Reasoning is for thinking, not just arguing

Evans argues that while reasoning may have evolved primarily for argumentation, it is now used for other functions. He states that M&S are dismissive and limited in addressing dual-process theories, overlooking the importance of general, heritable intelligence in novel reasoning and decision making. To support this, Evans provides evidence that the ability to solve novel problems is related to intelligence and the ability to reason, contrasting with M&S’s argument that intuition is better. Evans also disagrees with M&S’s ideas about the evolution of reasoning, pointing out that if the evolution of higher cognitive abilities had been driven by Darwinian pressures alone, other animals would have evolved them as well.

Artificial cognitive systems: where does argumentation fit?

Fox brings findings from research on artificial intelligence systems into the discussion of the mechanisms and functions of argumentation in social interactions. He argues that reflective reasoning has more than just social benefits for the cognitive agent (human or artificial), and uses a formal model to further clarify M&S’s distinction between intuition and reasoning. He also argues that the argumentative theory can clarify what counts as evidence by determining what kinds of statements can and cannot be used in argumentation.

Reasoning, argumentation, and cognition

Frankish agrees with M&S that arguments enhance communication, but suggests that this is perhaps primarily for enhancing collective cognition, and that arguing serves other social functions as well, such as finding a mate. Reasoning, he argues, may have evolved for public use but has been co-opted to serve individual cognition, in which social as well as epistemic motives are at play. Individual reasoning is just an internalized version of the public argumentation that precedes it. M&S’s account of personal-level reasoning, Frankish argues, requires intuitions about rules of inference, which can be either abstract (requiring explicit learning) or linguistic (learned through exposure to the arguments of others).

Reasoning as a deliberative function but dialogic in structure and origin.

Godfrey-Smith and Yegnashankaran argue that reasoning is an internalized form of interpersonal exchange of ideas, and has several functions but is primarily for deliberation. As a persuasive tool, reasoning in the form of dialogue is more adept at reaching conclusions and justifications. G-S & Y also find tension in M&S’s claims that people are poor individual reasoners but function well in groups, but concede that M&S’s theory better explains the existence of confirmation bias.

 

Understanding, evaluating, and producing arguments: training is necessary for reasoning skills

Harrell refutes the claim by M&S that people are good arguers by citing experimental evidence to the contrary, including some of the same studies cited by M&S. She argues that the evidence presented by M&S only vaguely defines argumentation skills and provides poor evidence that people can understand, evaluate, and produce arguments. The literature, according to Harrell, actually shows that these skills are poor in untrained people, but that following formal argumentation training performance improves.

The argumentative theory of reasoning applies to scientists and philosophers, too
Johnson explores the implications of M&S’s theory of argumentation for the roles of professional reasoners such as scientists and philosophers. If the function of reasoning is the same for every person in society, as M&S implicitly state, does this mean that scientists and philosophers are in the business of persuasive argumentation, convincing people of their ideas, rather than seeking the truth? This must include M&S themselves in order to avoid the non-reflexive fallacy. M&S challenge the idea of scientists and philosophers as “elite thinkers”, so does every human have this capacity? Additionally, does this mean that scientists and philosophers are governed by confirmation biases like the rest of us? “Research does not begin dispassionately” (81-82), and Johnson believes they should accept this as a reality of their profession.

True to the power of one? Cognition, argument, and reasoning
Khlentzos and Stevenson support the argumentative theory of reasoning presented by M&S, but question the way M&S have divided functions between systems 1 and 2, specifically how system 2 is a “backup” for system 1. K&S propose an alternative role for S2, as a reasoner that filters S1 outputs and independently produces conclusions that are subject to revision given new evidence. They concede that S2 plays some kind of regulatory role because S1 is both probabilistic and deductive. K&S disagree with M&S that confirmation bias is not a flaw and state that it can polarize conversation and dissuade argumentation.

What people may do versus can do
Kuhn seeks to expand on the power of human reasoning supported by the argumentative theory. She states that while people can reason fairly well under ordinary conditions, numerous training studies with adolescents have shown that in supportive environments people can display much stronger argument skills. This supportive environment is “sustained engagement of adolescents in dialogic argumentation” (83), akin to a longitudinal study rather than the more cross-sectional studies cited by M&S. With such sustained training, argumentation improves greatly in participants over a few months. Kuhn is hopeful about applications of this finding, about the possibility of universal improvement in argumentation through formalized training programs, and about the establishment of a new norm of argumentative skill.

The need for a broad and developmental study of reasoning
Narvaez argues that by focusing purely on rhetorical use, M&S overlook the practical uses of reasoning and therefore take too narrow a view of its functions. She notes that M&S overlook sociopolitical reasoning used to design laws, everyday reasoning used to take appropriate courses of action, and goal-motivated reasoning used to plan and to reflect on failure and success. Her other main point takes issue with the fact that most of M&S’s research findings come from college students, noting that older adults may have developed better inductive reasoning, and pointing out the problem of generalizing about human nature from the Western, Educated, Industrialized, Rich and Democratic (WEIRD) population.

Putting reasoning and judgment in their proper argumentative place

Oaksford generally agrees with the thesis provided by M&S, but would like to see a more probabilistic analysis when judging the strength of an argument. He uses the fallacy of denying the antecedent to provide an example in which the change in degree of belief brought about by an argument is useful in judging that argument.

On the design and function of rational arguments

Opfer and Sloutsky propose three obstacles to reasoning as an argumentative tool. The first is that belief and attitude change are often not driven by solidly reasoned argumentation; they cite studies showing that people who are less confident in their beliefs often yield to their more confident peers. The second is that emotionally charged examples often persuade more than reason does, even though reason takes up more cognitive resources. The final obstacle is their distinction between linguistic and argumentative “operators” and “receivers”: a more proficient language user (operator) can bring a lesser one (receiver) to a better use of language, but the same does not hold between more and less skilled argumenters.

What is argument for? An adaptationist approach to argument and debate

Pietraszewski agrees with M&S’s thesis but asks what purpose argument itself serves, if reasoning serves argument. He strays from the classical answer of seeking truth and accuracy and instead proposes that argument and communication exist to affect behavior, altering the future actions of others with respect to what is communicated. This yields two classes of argument psychology: dealing with conflicts of interest and coordinating socially. On the second class, he holds that in evaluating an argument, “who is arguing should be just as important as what they are saying when considering the ‘goodness’ of the argument” (87).

The importance of utilities in theories of reasoning

Poletiek proposes that M&S’s theory does not properly address reasoning contexts other than argumentative ones. She uses a hypothesis-testing experiment, in which participants produced different outcomes under an argumentative motive than under other motives (other than determining the truth), to highlight the need for other utilities, such as signal-detection theory, in reasoning across a variety of contexts.

When Reasoning is persuasive but wrong

Sternberg takes issue with M&S’s claim that reasoning could have evolved out of a function to argue. He imagines two individuals arguing about the existence of a threat: their reasoning was purely argumentative, but independent of who argued better, it was the individual who was correct who survived. Put in the context of global warming, Sternberg contends that a purely argumentative model of reasoning will doom human survival, and that reasoning should function to serve “veridicality” (89).

The chronometrics of confirmation bias: Evidence for the inhibition of intuitive judgments

Stupple and Ball use chronometric data from participants in reasoning tasks to challenge M&S’s idea that people first try to reason in the service of their existing beliefs. They found that participants spent the most time reasoning through a believable but invalid proposition, resisting their intuitive judgments. From this evidence they conclude that participants could in fact reason in search of the truth while inhibiting their intuitions.

Spontaneous inferences provide intuitive beliefs on which reasoning proper depends

Uleman, Kressel, and Rim take issue with M&S’s oversimplified treatment of intuitive beliefs and provide evidence that spontaneous inferences underlie what M&S regard as intuitive beliefs. They present evidence that these spontaneous, unconscious inferences feed the formation of conscious reasoning, citing an experiment in which participants who familiarized themselves with the sentence “John returned the wallet with all the money in it” consistently associated John with honesty in other tasks (90).

Query Theory: Knowing what we want by arguing with ourselves

Weber and Johnson argue that M&S oversimplify query theory by treating it as merely an example of reason-based choice; they see it instead as evidence that implicit memories are retrieved to evaluate choices, leading to decisions that can then be deployed in arguments. Ultimately, the processes described by QT imply that argumentation is not only interpersonal but also drives intrapersonal, implicit preference construction; both have implications in argumentative contexts.

Reasoning, robots, and navigation: Dual roles for deductive and abductive reasoning

Wiles explores another aspect of cognition related to reasoning, arguing for a more primitive function, navigation, which is shared with other animals and has been modeled in robots and mice. Wiles finds that systems that navigate well are abductive reasoners, adding another aspect of reasoning not accounted for when reasoning is viewed as purely argumentative.

Some empirical qualifications to the arguments for an argumentative theory

Wolfe argues that data not accounted for by M&S indicate that people aren’t actually as good at evaluating arguments as M&S say they are, citing several studies M&S overlooked. Wolfe then points out a key distinction not made by M&S: “confirmation bias typically refers to a biased search for or weighing of evidence, whereas myside bias refers to biases in generating reasons or arguments” (93). Research has shown that myside bias can be reduced with training. Wolfe ultimately proposes that we do have the resources to form reasoned arguments on our own; otherwise argumentation would not work.

Deliberative democracy and epistemic humility

Chien-Chang Wu wishes to apply M&S’s ideas to a political framework called deliberative democracy, in order to bring group reasoning into public policy decisions. He notes that M&S’s theory fits well with deliberative democracy and focuses on its epistemic aspect, pointing out complicated issues embedded in applying M&S’s theory that they do not address. He lays out three conditions he sees as necessary for deliberative democracy to produce epistemic goods: consideration of ethics, a “deflationist” definition of truth (noting that realist truth is unlikely to be achievable), and consideration of framing powers.

 

Author’s Response. Argumentation: Its adaptiveness and efficacy

R1. Different Definitions of Reasoning

M&S explain how reasoning, as they describe it, is a form of higher-order intuitive inference with a specialized domain and task, which contrasts with ordinary intuitive inference. Some commentaries defend a different definition of reasoning. Khlentzos & Stevenson suggest that some type of system 2 reasoning must have evolved to arbitrate between contradictory system 1 outputs (for instance, when perception contradicts memory). They argue that reasoning is specifically geared toward this end, which M&S agree would be true under a much broader definition of reasoning. Poletiek and Narvaez both argue that reasoning guides strategy and action choice, which M&S regard as a function of intuitions that falls outside the scope of their definition. A few commentaries raise additional System 2 mechanisms (hypothetical thinking, elaborative planning, and avoiding decisions we would regret) that they argue directly lead to good outcomes without involving argumentation. M&S again reply that these mechanisms do not qualify as reasoning under their definition. They do think it would be interesting for researchers to consider System 2 as comprising several mechanisms other than reasoning, because this could explain the covariation of traits measured by various measures of cognitive ability. M&S end the section by saying that offering another reasonable and useful definition of reasoning is not enough to object to theirs.

R2. Evolution and function of reasoning

M&S argue that a number of objections were based on misunderstandings of their hypothesis about the evolution and function of reasoning. They assert that they never argued that reasoning is designed only to find arguments to persuade others, or that epistemic goals should be poorly served by reasoning, or that mere rhetoric is all it takes to influence people, or that people hardly ever change their minds, as many commentators believed. M&S apologize for devoting more space in their article to the production of arguments by communicators (rhetoric) than to the evaluation of arguments by the audience (epistemic). They say the argumentative theory would not make evolutionary sense if arguments were addressed to people who were wholly unable to evaluate them from a sound epistemic perspective.

R2.1. The double-sided argumentative function of reasoning

M&S argue that communication has evolved to be advantageous to both communicators and receivers: receivers gain rich information they could not have obtained on their own, and communicators achieve desirable effects on receivers. In response to Dessalles’s concerns, M&S assert that the main function of reasoning is social but that it serves the social needs of the individual. They argue that receivers need to exercise epistemic vigilance to benefit from communication and agree with Opfer & Sloutsky that the main heuristic consists in assessing a communicator’s trustworthiness, though they argue it is not the only heuristic used. Coherence checking is also important for receivers, but it can be exploited by communicators. M&S concede that argumentation can be misused and abused to serve the communicator’s interests; this does not work, however, with receivers who care to be well informed. When people are motivated to reason, they do a better job of accepting only sound arguments.

R2.2 Other functions of reasoning?

Many commentaries agree with argumentation but suggest that it may serve additional social functions or functions contributing to individual cognition. M&S recognize this possibility and explain their claim was that argumentation was the main function of reasoning but any evolved mechanism can be put to a variety of uses. Dessalles and Frankish suggest argumentation could have evolved as a means to display one’s intellectual skills. M&S agree argumentation could be put to such use but only occasionally, usually in academic milieus, and has actually evolved to be efficient rather than impressive. Pietraszewski distinguishes two classes of reasoning that show argumentation is not used just in the defense of factual claims but also of claims that are matters of choice or social alignment. M&S welcome this observation but argue it simply highlights that communication involves a mix of means and goals. Baumeister et al. draw attention to consciousness and culture. M&S acknowledge the need for more research on the connection between consciousness and reasoning but are not convinced culture contributes to the function of reasoning. Godfrey-Smith & Yegnashankaran suggest that reasoning is individualistic in function but dialogic in structure. Evans and Frankish also argue that reasoning has evolved to serve individual cognitive goals, including anticipating the future and strengthening resolve. M&S do not dispute these claims but dispute that individual cognition is the main function of reasoning. They argue that the main contribution of reasoning to individual cognition is in helping people evaluate other people’s arguments, and that argumentation is therefore the main function.

R3. Strength and biases of reasoning and argumentation

R3.1. Are we really good at argumentation?

In this section, M&S address commentaries arguing that argumentative skills can be improved with training, and critiques of the data M&S used as evidence of people’s basic argumentative abilities. Overall, M&S concede that spontaneous argumentation skills are imperfect and can be improved by teaching (which is linked to the variable importance given to argumentation in different cultures and institutions), but maintain that these skills display a “remarkable superiority” to the reasoning skills elicited in non-argumentative contexts.

R3.2. How efficient is group reasoning?

This question elicited contrary opinions from commentators. M&S stress that the argumentative theory does not predict that groups will always make better decisions, merely that reasoning should work better in the context of a genuine debate. They agree that many factors other than reasoning can affect the outcome of a discussion and that reasoning in a group can produce poor outcomes when there is no genuine deliberation. They also concede that sometimes the best arguments will point in the wrong direction.

R3.3. The strength of confirmation bias

M&S argue that when we look for arguments in a debate, we are mostly interested in arguments for our side or against the other side, which is why they say confirmation bias is a feature of reasoning. Poletiek questions the evidence from hypothesis testing, which M&S understand, but they emphasize that reasoning remains unable to correct our own intuitions even though it can readily try to correct those of others. Wolfe presents studies on myside bias, which M&S argue merely reflect a belief that it is better to provide arguments for one’s own side than for both sides. De Neys and Stupple & Ball critique M&S’s interpretation of the belief-bias data because some people do engage in logical reasoning when faced with such problems. M&S agree that in reasoning tasks people try to give the correct, logically valid answer, but note that it is telling that most of them fail; they argue this indicates that reasoning is not geared toward pure logical validity.

R4. On the workings of reasoning

R4.1. The algorithmic level

M&S acknowledge a limitation of their theory: it does not address the algorithmic implementation of reasoning (Khlentzos & Stevenson). They appreciate the commentators’ contributions on this issue. Most notably, Weber & Johnson offer a process-level specification of how reasoning works in decision making. M&S argue that this theory predicts reason-based choice and confirmation bias, and that it does not compete with the argumentative theory because it concerns the workings, rather than the function, of reasoning.

R4.2. Reasoning outside the lab

M&S applaud Narvaez for pointing out the limitations of the target article’s focus on laboratory experiments. Many argue that WEIRD people (Western, educated, industrialized, rich, democratic) behave differently from the rest of the world, but M&S argue that the available data do not show that any culture is deprived of reasoning and argumentative skills; even illiterate societies can solve logical problems in the proper contexts. M&S acknowledge the importance of developmental data to the study of the argumentative theory, although the target article did not focus on them. Narvaez and Wu provide further support for the argumentative theory by drawing attention to the political sphere, and M&S argue that their theory can explain both the successes and the failures of political debates. M&S concede that their theory applies to scientists and philosophers, including themselves (Johnson). Finally, M&S argue that people should be somewhat receptive to moral arguments while evaluating them on the basis of their own moral intuitions.

R5. Conclusion

M&S explain that the commentaries have not led them to revise their theory in any major way, but have pointed to fascinating directions for future research. They concede that more needs to be done to link their ultimate-level theory with process theories of reasoning. M&S suggest that other mechanisms besides reasoning might benefit from being viewed as having a social function, and they hope their article adds to the growing body of research showing that the human mind is a social mind.

Discussion Questions:

  1. Are M&S not accommodating enough toward other functions of reasoning, independent of whether argumentation is the true main function?
  2. Do you agree with Johnson that argumentative theory should also be applied to professional reasoners like scientists and philosophers? Are they “seeking truth” or merely building their own arguments?
  3. De Neys asserts that just because people reason well in argumentative contexts doesn’t mean that they don’t try to reason logically outside of this context. How well, if at all, do you think people reason outside of an argumentative context?
  4. Do you accept M&S’s dismissal of Narvaez’s concern with their focus on WEIRD people? How do you think testing non-WEIRD people’s argumentative skills would be different and how would the results impact M&S’s theory?

Hugo Mercier & Dan Sperber (2011) “Why do humans reason? Arguments for an argumentative theory.” – Eliza Jaeger and Kristin Corbett

Introduction

Reasoning is a special kind of inference that consciously produces new mental representations from other consciously held representations, and it is unique to human beings. Mercier and Sperber specifically define reasoning as the production of a consciously produced conclusion from other consciously held premises (i.e. consciously held reasons), with an intuitive premise-conclusion component. In their article, they set out to investigate both how and why human beings engage in reasoning, although they focus most closely on the question of why (57).

 

  1. Reasoning: Mechanism and Function

 

1.1 Intuitive Inference and Argument

Mercier and Sperber set out to create their own dual-process approach, distinguishing between intuitions (system 1) and reflective reasoning (system 2): processes of inference and reasoning proper (58). The outputs of processes of inference are intuitive beliefs, formed at a “sub-personal,” unconscious level, for which we are not aware of our reasons. The outputs of reasoning proper are reflective beliefs, which we do have conscious reasons for holding. Reasoning proper allows us to represent our own mental representations as well as the representations of others (metarepresentations). It does, however, contain intuitive elements at a fundamental level.

 

M&S give Descartes’ Cogito argument (“I think, therefore I am”) as an example of reasoning proper (59). Although thinkers can give reasons for believing the argument that they exist, the fundamental reasons for accepting this as an intuitively good argument are much cloudier. Here the clear-cut dual system blurs, because the outputs of what was originally considered pure system 2 reasoning must contain elements of system 1 (intuition). The premise-conclusion relationships in an argument must be intuited at an unconscious level if one is to evaluate the argument’s merit, and the function of this ability is evolutionarily salient.

 

1.2 The function of reasoning

M&S reject the classical view of the evolutionary function of reasoning as an enhancement of individual cognition. Under this classical view, system 2 reasoning is achieved by correcting mistakes in system 1 intuitions. They also reject the hypothesized function of reasoning as a mechanism by which organisms can react to novel environments favorably, which they argue is simple learning. Instead, they propose that reasoning evolved as a form of epistemic vigilance, which allows the senders and receivers involved in interpersonal communication to evaluate the information and arguments being exchanged (60).

 

This potential evolutionary origin of reasoning stems from the psychological concepts of trust calibration and coherence checking. Put simply, individuals must be able to quickly and accurately evaluate new information received from other individuals, to avoid being misled. With a mechanism that allows communicators to effectively communicate and evaluate new ideas, the information that humans are able to share increases in both quantity and epistemic quality. This ability would be strongly selected for in an environment in which the fast exchange of accurate knowledge is essential. In other words, the main function of reasoning is argumentative (60).

 

  2. Argumentative Skills

 

2.1 Understanding and Evaluating Arguments

M&S state that there is a common conception that people in general are not very skilled arguers, which, if accurate, would pose insurmountable obstacles to their theory that the primary function of human reasoning is argumentation (61). However, studies on persuasion and attitude change have shown that “when they are motivated, participants are able to use reasoning to evaluate arguments accurately.” Logical performance in reasoning research has been notoriously poor, but M&S argue this is due to the abstract, decontextualized nature of the tasks; in an argumentative context, people perform much better on the same kinds of tasks. Fallacies of argumentation differ from logical fallacies, and people generally perform well both in identifying argumentative fallacies and in rejecting or accepting them as appropriate in context.

 

2.2 Producing Arguments

Previous studies would indicate people are generally unskilled in argument production as well. M&S argue that these apparent deficiencies actually stem from the context of the experimental tasks, which were ill-suited to reasoning’s argumentative function. In fact, people will use relevant data when they have access to it, they will develop more complex arguments if they anticipate any challenge to their assertions, and they are capable of formulating counterarguments so long as the argument being challenged is not their own (62).

 

2.3 Group Reasoning

M&S claim that prior reasoning research shows that in group settings the dominant scheme is “truth wins.” Individuals show large improvements on reasoning tasks after group debates (an increase from 10% to 80% correct responses on the Wason selection task) (63). Transcripts of group discussions, and the assembly bonus effect, suggest that this improvement is due to genuine reasoning improvement, not simply to some members following other, “smarter” ones. M&S’s theory also predicts a strengthening of group opinion in artificial, nonoptimal group settings of prior agreement.

 

  3. The Confirmation Bias: A Flaw of Reasoning or a Feature of Argument Production?

According to M&S’s theory, confirmation bias is not a flaw of reasoning but rather a feature. In some cases of “confirmation bias,” people are simply trusting their beliefs by drawing positive inferences from them; no proper reasoning has occurred. According to their theory, true confirmation bias should only occur in argumentative situations and only in argument production. It is not a general confirmation bias, but rather a bias toward confirming one’s own arguments and refuting those of others (64).

 

3.1 Hypothesis Testing: No Reasoning, No Reasoning Bias

M&S argue that people’s poor performance in hypothesis testing is not due to reasoning at all. Lacking an argumentative setting, participants simply adopt a “positive test strategy” in an intuitive manner. If, instead of producing their own hypothesis, people are given one from someone else, they do employ reasoning and are better able to falsify it.

 

3.2 The Wason Selection Task

M&S claim that poor performance on the Wason selection task occurs because certain concepts in the rule itself trigger intuitive mechanisms of comprehension, which cause people to focus on certain cards and infer an answer; subsequent reasoning processes only justify the answer already formulated. Participants in argumentative group settings perform much better on the task, as do people who are highly motivated to disprove the rule provided (65).
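
The normative analysis behind the task is compact enough to state directly: only cards that could falsify the rule need to be turned, namely the p card and the not-q card. A minimal sketch in Python, using the classic vowel/even-number version of the task (the card faces and rule here are the standard textbook illustration, not drawn from M&S's text):

    # Rule: "if a card has a vowel on one side, it has an even number on the other."
    # Only cards whose hidden side could violate the rule are worth turning:
    # the p card ("A") and the not-q card ("7").
    def must_turn(face: str) -> bool:
        """True if this visible face could conceal a violation of the rule."""
        if face.isalpha():
            return face.upper() in "AEIOU"  # p card: hidden side might be odd
        return int(face) % 2 == 1           # not-q card: hidden side might be a vowel

    print([c for c in ["A", "K", "4", "7"] if must_turn(c)])  # -> ['A', '7']

Most individual participants instead pick the p and q cards; on M&S's account, the intuitive focus on the cards named in the rule comes first, and reasoning is then recruited to defend that choice.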

 

3.3 Categorical Syllogisms

People perform poorly on categorical syllogisms because solving them correctly requires producing counterexamples to one’s own conclusion. They perform much better if the conclusion is unbelievable or if it is provided by someone else.

 

3.4 Rehabilitating the Confirmation Bias

The confirmation bias is traditionally viewed as a dangerous defect in reasoning caused by cognitive limitations. But the fact that people are quite adept at falsifying propositions when motivated casts doubt on that causal story, and the consequences of the confirmation bias are disastrous only in abnormal contexts of prior agreement (whether inter- or intra-personal). In the more felicitous context of a group resolving a disagreement, the confirmation bias amounts to an efficient division of cognitive labor (65). High performance in group reasoning tasks suggests that the confirmation bias is present primarily in argument production, not evaluation.

 

  4. Proactive Reasoning in Belief Formation

Most of our beliefs go unchallenged, as they are unexpressed or relevant only to ourselves. If we identify a particular belief of ours as potentially contentious, we regard it as an opinion and may search proactively for arguments to justify it, a phenomenon studied as “motivated reasoning” (66).

 

4.1 Motivated Reasoning

In a study in which participants were given a fake medical result, they tended either to discount the rate of false positives provided or to use it to undermine the test, depending on whether their result was positive or negative. If this were mere wishful thinking, participants could have dismissed the test entirely; instead they produced arguments to support their opinion. For M&S, such motivated reasoning is targeted at justifying beliefs to others; any personal belief revision in the name of truth-seeking is incidental.
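
The leverage that a false-positive rate gives participants is easy to see with Bayes' rule. A worked sketch with hypothetical numbers of our own choosing (the summary does not quantify the study): take a base rate P(D) = 0.05, sensitivity P(+|D) = 0.9, and false-positive rate P(+|¬D) = 0.2. Then

    \[
      P(D \mid +) \;=\; \frac{P(+ \mid D)\,P(D)}
                             {P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
                  \;=\; \frac{0.9 \times 0.05}{0.9 \times 0.05 + 0.2 \times 0.95}
                  \;\approx\; 0.19 .
    \]

With numbers like these, taking the false-positive rate seriously really can “undermine the test,” while discounting it makes the same result look far more conclusive; which move a participant makes tracks which conclusion they want to defend.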

 

4.2 Consequences of Motivated Reasoning

 

4.2.1 Biased Evaluation and Attitude Polarization

When participants are presented with a study either confirming or attacking their prior position on the death penalty, they are more likely to criticize the methodology if the conclusion reached differs from their own. M&S interpret this as evidence that people’s goals in such a situation are “argumentative rather than epistemic.” (67) Additionally, people spend more time evaluating an argument contrary to their own opinion, as rejecting the argument requires justification that accepting it does not.

 

4.2.2 Polarization, Bolstering, and Overconfidence.

When people think about a stimulus regarding which they have a prior decided opinion, they tend to polarize and strengthen their existing attitude, rather than reevaluating it. This tendency increases with time spent thinking, motivation to think, and the number (if any) of explicit arguments the person puts forth supporting their opinion (67). Being publicly committed to the opinion results in bolstering, an increased pressure to justify the opinion rather than change it, an effect which is strengthened by heightened accountability. Providing an answer to a question causes people spontaneously to produce justifications for their answer, resulting in subsequent overconfidence.

 

4.2.3 Belief Perseverance

Belief perseverance depends on the orientation of people’s intuitive inferences, and whether evidence presented supports these inferences, rather than on the order of evidence presented, indicating that belief perseverance is simply a special type of motivated reasoning (68).

 

4.2.4 Violation of Moral Norms

The study of moral hypocrisy shows that reasoning is better suited to justifying people’s actions than guiding them (serving an argumentative rather than moral or epistemic goal) (69). The effect of moral hypocrisy in certain judgments can be eliminated by introducing cognitive load during the judgment process and therefore interfering with reasoning.

 

  5. Proactive reasoning in decision making

Mercier and Sperber argue that much of reasoning's work is done in anticipation of the need to defend a decision. This process does not always amount to weighing pros and cons in a reliable way (the classical view of reasoning), as extensive empirical evidence has shown (69).

 

5.1 To what extent does reasoning help in deciding?

Many studies have shown that decisions based on careful, conscious reasoning are actually poorer than those produced by unconscious decision-making processes not based on carefully stated reasons (69). Most decisions are made intuitively, and those made through reasoning are often easy to justify but not necessarily the best decisions (69).

 

5.2 Reason-based choice

This bias toward making more readily justifiable decisions leads people to a number of classically “irrational” choices, made to avoid the risk of criticism. The phenomenon, termed reason-based choice, causes people to make “mistakes” on tasks designed to measure rationality: choices backed by articulable reasons are favored regardless of how good those reasons are, which can result in irrational decisions (70).

 

5.3 What reason-based choice can explain

Reason-based choice, M&S argue, can explain a great number of the well-known challenges to human rationality. They list the disjunction effect, the sunk-cost fallacy, framing effects, and preference inversion as examples from empirical psychology of reason-based choice in action (70). What all of these examples have in common is that they provide significant evidence of cognitively unsound uses of reasoning (71). M&S characterize these deviations from rationality as the misuse of an evolutionarily favorable mechanism for decision making. As they argue throughout the text, reasoning most likely evolved to function in a social context, allowing people to anticipate the arguments they will need in order to have others take their beliefs seriously. At its core, their claim is that the function of reasoning is to lead people to justifiable decisions, not necessarily good decisions (as defined by a classical conception of rationality). The instances in which the two come apart (i.e., justifiable versus good) are rare, and therefore do not pose a significant threat to their argument.

 

  6. Conclusion: Reasoning and rationality

Reasoning allows human communication to be both reliable and potent, and benefits both senders and receivers in the exchange of information. The argumentative theory of reasoning shows that “irrationality,” as it is classically understood in psychology and philosophy, is merely the result of the human tendency to systematically look for arguments to justify beliefs and actions. As Mercier and Sperber demonstrate with their review of many reasoning tasks, people engaged in argumentation favor arguments that support their own views when they have “an axe to grind,” but truth wins when all participants have an equal interest in discovering the right answer to a problem (72). Strictly speaking, then, it is not truth that always wins, but the best arguments; with enough time and enough participants engaged in conversation, however, the best arguments should converge on the truth.

 

Discussion questions:

  1. Does an argumentative function of reason have disastrous moral or epistemic consequences?
  2. Can the biased features of argumentative reasoning be effectively modulated by group debate or is another solution in order?
  3. How does Mercier and Sperber’s model of reasoning differ from complex learning mechanisms? How might this be selected for in an evolutionary context?
  4. Does this conception of reasoning fit into a normative or a descriptive model of rationality?

Open Peer Commentary and Authors’ Response: Subtracting “ought” from “is”: Descriptivism versus normativism in the study of human thinking – Elqayam and Evans

Open Peer Commentary:

Throwing the normative baby out with the prescriptivist bath water

Achourioti, Fugard, & Stenning agree with E&E that it would be foolish to claim there is one formal model of rationality, but they disagree that descriptivism is the answer. These authors argue that two kinds of norms are integral to the process of human reasoning:

Constitutive norms: These norms answer the question of “what the reasoning is”

Regulative norms: These norms answer the question of “why the reasoning is the one that it is”

These authors argue that “thoroughgoing descriptivism” cannot explain how people reason or the level of people's understanding, and that normative concepts cannot be extricated from discussions about human reasoning.

 

Norms for reasoning about decisions

Bonnefon addresses the new “trend” in philosophy of looking at how reasoning intersects with decision-making, giving three examples of how this intersection provides opportunities for both normativist and descriptivist approaches. He argues that although the two fields (decision-making and reasoning) rely mostly on normativism as independent subjects, a strict normativist approach is weaker once they are combined.

 

The unbearable lightness of “Thinking”: Moving beyond simple concepts of thinking, rationality, and hypothesis testing

Brase & Shanteau argue that HTT is not sufficient to solve the problems posed by E&E; the focus should instead be on “a better conception of how theories are constructed and evaluated” (250). They prefer the method of “strong inference,” in which tests compare many “viable” hypotheses to see which can be excluded. On their view there should not be one model that encompasses how thinking works; rather, the emphasis should fall on the “domain-specificity” of reasoning, the idea that there are many different types of “thinking.”

 

Competence, reflective equilibrium, and dual-system theories

Buckwalter & Stich agree with E&E on the merits of descriptivism, but argue that the distinction between competence theory and normative theory is not well defended. John Rawls posited the idea of a “reflective equilibrium” in which moral principles and moral judgments align, and Cohen extended the concept to reasoning: if there are normative and descriptive models of reasoning, they must “coincide” (252). Buckwalter & Stich argue that if the dual-process model of reasoning is correct, it is unlikely that a single normative model could ever be right, since System 2 seems to vary from individual to individual and therefore cannot be captured by one model.

 

A role for normativism

Douven argues that there are merits to both normativism and descriptivism in conversations about human reasoning. He holds that “long-run accuracy” is a priori our “epistemic goal,” a point E&E entirely miss (252). Douven also disagrees with E&E's claim that empirical research cannot help distinguish between models of reasoning: it is helpful precisely when there is no known model for a given type of reasoning. He concludes that E&E's argument is not strong enough to throw out normativism entirely.

 

The historical and philosophical origins of normativism

Novaes asks, “is thinking a normative affair at all?” (253). She traces the history of the view that thinking is normative and that logic should be its normative system, connecting it to Kant's philosophy of “transcendental idealism” (254). Novaes argues that if we reject transcendental idealism, we can also reject the idea that thinking is normative.

 

Just the facts, and only the facts, about human rationality?

Foss agrees with E&E that there should be greater emphasis on scientific facts. However, he argues that they fail to define rationality, and that competence theory is itself a type of normativism, so normativism may not be entirely eliminable from psychological studies.

 

Overselling the case against normativism

Fuller & Samuels are sympathetic to E&E's argument but find two flaws in it. First, they think researchers will fall into “normative interpretations” even when relying on formal theories (255). Second, they argue that E&E define normativism too narrowly.

 

Undisputed norms and normal errors in human thinking

E&E state that there seem to be multiple norms for any given human reasoning task, making it impossible to determine which norm is correct. Girotto disagrees, holding that there can often be a single norm and that norms usefully guide individuals; norms are therefore necessary in conversations about human thinking.

 

Normative theory in decision making and human reasoning

Gold, Colman, & Pulford argue that the is-ought problem is not significant enough to reject normativism. Rather, normative theories can be used in “generating powerful descriptive theories” (257), and in conversations about moral reasoning and morals in general, norms are incredibly useful. Normativism should therefore not be completely thrown out.

 

Why rational norms are indispensable

Hahn argues that norms are important because they provide standards that serve as “interpretative tools” for evaluating and predicting behavior (257). Normative models, on this view, would be well complemented by descriptivism, and the “is-ought” distinction has nothing to do with normativity whatsoever. Normativism is therefore indispensable and should not be rejected.

 

Defending normativism

Hrotic, an anthropologist, shares his concern that “in practice, the distinction between ‘oughts’ is fuzzy” (258). He raises the question of how necessary it is, if at all, to be fully aware of whether one is using a directive or an evaluative ought. He goes on to discuss how biases are relevant to, and perhaps even useful in, understanding academic methods and human reasoning, despite our limited ability to articulate the reasons for our biases.

 

Cultural and individual differences in the generalization of theories regarding human thinking

Kim & Park agree with E&E's argument in favor of descriptivism over normativism, but argue that descriptivism in its current form still limits our ability to fully comprehend human cognitive processes. To address this limitation, they emphasize the importance of including cultural differences in descriptivist research, suggesting that individual differences across cultures influence behaviors unique to a particular group or culture. Kim & Park also suggest that while cognitive goals may differ across cultures, the motivational system (goal activation) by which those goals are pursued and achieved may be universal (259–260).

 

Norms and high-level cognition: Consequences, trends, and antidotes

Though McNair & Feeney agree with E&E, they are less critical of normativism. They make three points: (1) normativism has not been uniformly disastrous, since it has enabled us to understand behaviors grounded in normative (e.g., Bayesian) models; (2) normativism will remain increasingly relevant to the debate on human reasoning; and (3) focusing primarily on expert reasoners in studying human cognition is limiting and problematic. McNair & Feeney note that it is imperative to find ways of studying reasoning processes across both naive and expert reasoners.

 

Norms, goals, and the study of thinking

Nickerson argues that a balance must be found between normativism and descriptivism. Though he credits E&E's challenge to normativism and their suggestion that research focus on “how thinking is actually done” (261), he is less persuaded by their attempt to dismiss normativism from the field entirely. Nickerson proposes a middle ground: descriptivism should be used to learn how reasoning is actually done, in order to understand how we ought to reason.

 

The “is-ought fallacy” fallacy

Oaksford and Chater claim that E&E's “application of the ‘is-ought’ fallacy is itself fallacious” (262). O&C clarify that their original paper did not claim that Bayesian probability ought to be the normative system we follow, a claim they say E&E attribute to them. O&C also discuss how certain explanatory statements do not directly lead to an “ought” but rather derive what “is,” stating descriptive facts by means of normative theories.

 

Systematic rationality norms provide research roadmaps and clarity

Pfeifer argues that normative theories should not be eliminated from cognitive research. He claims that E&E's assertion that “conditional elimination inferences are single-norm paradigms” (263) fails to be conflict-free in the context of probability logic. Instead of doing without normativism entirely in psychological research, as E&E propose, Pfeifer suggests considering improvements and changes to current normative theories.

 

A case for limited prescriptive normativism

Pothos and Busemeyer argue that identifying when a certain cognitive process has been successfully applied is strictly contextual. They go on to explore quantum probability, which some researchers have claimed provides a foundation for understanding human reasoning and thinking. P&B conclude that it is practically impossible to conduct valuable research on cognitive processes in the absence of formal frameworks, and that the use of such frameworks will “inevitably… lead to some limited prescriptive normativism” (265).

 

Epistemic normativity from the reasoner’s viewpoint

Proust argues that E&E never fully consider how an individual's ability to assess his or her own reasoning performance affects how he or she carries out first-order reasoning tasks. She suggests that reasoners approach the same task in different ways, each interacting with it through his or her own set of experiences.

 

Naturalizing the normative and the bridges between “is” and “ought”

Quintelier and Fessler draw attention to the negative consequences that could result from applying E&E's approach to fields of scientific research beyond the cognitive sciences. They note that the meaning of a normative term depends heavily on “one's epistemological or meta-ethical views” (266). Q&F offer two examples from naturalistic ethics that show how “ought” can be used in proper context to reach a particular, relevant conclusion for a particular task.

 

Truth-conduciveness as the primary epistemic justification of normative systems of reasoning

Schurz appreciates E&E's efforts to challenge normativism but argues that their case falls short. He bases his response on two main points: (1) formality is not synonymous with normativity, and (2) evaluative norms are in no need of justification. Schurz emphasizes that a form of prescriptive normativism is necessary to our understanding of human rationality and psychology.

 

Reason is normative, and should be studied accordingly

Spurrett rejects E&E’s argument and claims that it is “always appropriate” to assess whether an individual’s reasoning reflects “truth” (268). He demonstrates how people can and do reason poorly, ultimately concluding that it is “nonsensical” to define these processes as reasoning in the way that a purely descriptive approach would require, and criticizes the authors of the target article for their vague and seemingly contradictory definition of normativism.

Normative models in psychology are here to stay

At the heart of Stanovich’s argument is the idea that normative theory has been, and continues to be, a useful metric in the study of human reasoning (268). He rips into what he sees as an artificial division between Bayesian theories and instrumental rationality, which is essential to E&E’s model but contradicts the prevailing idea that the latter incorporates the former. Stanovich also emphasizes the fact that people tend to be able to recognize that a normative strategy is a superior solution to a problem (269).

Understanding reasoning: Let’s describe what we really think about

Sternberg appreciates the underlying motivation behind the target article and expands on it, arguing that normative models have caused psychologists to focus on a “narrow sliver” of the problems and decisions that people commonly face (270).

Normative benchmarks are useful for studying individual differences in reasoning

Stupple and Ball are sympathetic to E&E’s point that matching reasoning decisions to seemingly consistent normative strategies can lead to an incorrect diagnosis of the “underlying analytic process” (270). However, they ultimately argue that such “normative responses” can be incredibly useful to researchers, as demonstrated by their proposed methodological triangulation approach (271), and that normativism and descriptivism are on a continuum with no clear division.

Probability theory and perception of randomness: Bridging “ought” and “is”

Sun and Wang are skeptical of E&E’s call to abandon normativism, which they claim to be as valuable as descriptivism to the study of streak patterns in Bernoulli trials (271), but agree that researchers should not throw around “ought inferences.”

Normativism versus mechanism

Thompson defends E&E’s position on the role of normative models in psychology, emphasizing that normativism is misleading and limited (272). Her main point is that it causes researchers to focus on whether reasoning is “good” or “bad” rather than explore the complicated processes that dictate reasoning.

Neurath’s ship: The constitutive relation between normative and descriptive theories of rationality

Waldmann acknowledges that it would be problematic to overstate how well a normative model fits empirical results, but thinks that normativism, used correctly, is very relevant to theoretical, causal, and practical rationality (273).

What is evaluative normativity, that we (maybe) should avoid it?

Weinberg takes issue with E&E’s assertion that psychological theory should not “substantively” incorporate evaluative normativity, arguing that normativism is already too well incorporated into current psychology (274).

Authors’ Response: Towards a descriptivist psychology of reasoning and decision making

R1. Introduction

E&E reiterate two of their underlying arguments: that “is-ought” inferences based on normative theories are not reliable and that approaching psychological theory from a normativist viewpoint leads to “systematic biases” in current research (275).

 

R2: Between normativism and descriptivism: Definitions and boundaries

E&E respond that instrumental rationality is not normative, and that there is no clear line between the evaluative and instrumental senses of “ought.” If formal theories are seen as simply computational, then dismissing normativism gives up only evaluation. An alternative to descriptivism could be “soft normativism,” which adds normative evaluation to the goals of descriptive research. We need to distinguish normative theory from competence theory while still appreciating why they are linked: formal theories belong to the (descriptive) level of computational analysis. E&E used HTT only as an example of a descriptive theory that employs formal theories, so the assumption that behavior ought to adapt to the environment can be read as normative. Bounded rationality, finally, cannot really be squared with radical forms of normativism.

R3: Epistemic rationality and self-knowledge

Psychological science should not focus on norms; researchers should spend their time conducting research. Constitutive rules are not normative, as they define what the system is (think chess rules), not what it ought to be, while regulative rules regulate behavior (think table manners). The authors believe we do not need a normative account of belief (compare ranges in memory and vision). We can take a descriptivist view of people's own interpretations of their epistemic goals and norms; what people deem correct need not be normative, only plausible. Fast intuition breeds confidence that one has behaved correctly, which can come apart from being normatively accurate, and justification does not equate to rationale: people do not acknowledge their prior biases when giving the reasoning for their actions.

R4: Normativism and descriptivism in dual-process research

On the two minds theory, the old mind (intuition, reinforcement of behavior) and the new mind (reflective, goal-oriented) both exert control over the brain, sometimes cooperatively, sometimes in competition. Abstract/normative thinking does not simply equal Type 2 processing, and contextual/cognitive biases do not simply equal Type 1. There cannot be a single normative account of reasoning with two separate systems, especially when the second varies with a person's environment and identity.

R5: The new paradigm psychology of reasoning

The deduction paradigm has been undergoing a Kuhnian revolution and paradigm shift, with a growing push toward integrating theories of reasoning and decision making. The “new” paradigm psychology of reasoning removes truth and deduction as constraints, focusing instead on probabilistic and Bayesian approaches, pragmatic factors, and degrees of belief. The new paradigm is divided, however, over whether to retain a normative framework, and Bayesianism is the most popular alternative.

Bayesianism allows for inferences over beliefs with varying degrees of certainty and has a direct connection between reasoning and decision. The question, then, is whether it is a prescriptive normative account or a descriptive one. The authors see Bayesianism as an accurate descriptive account: because beliefs and utilities are subjective, individual-level matters, descriptive Bayesianism is hard to disprove. Researchers have nonetheless leaned toward alternative descriptive accounts, because beliefs measured early differ from those measured late depending on the order in which evidence arrives (e.g., evidence presented in a courtroom), whereas on the Bayesian view belief should not change simply because of sequence.
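
The sequence point is worth pinning down. On the Bayesian view, the order in which conditionally independent pieces of evidence arrive cannot change the final degree of belief; a minimal Python sketch (our illustration, with hypothetical likelihoods, not an example from the authors):

    # One Bayes update for a binary hypothesis H.
    def update(prior, p_e_given_h, p_e_given_not_h):
        """Posterior P(H|e) from prior P(H) and the likelihoods of evidence e."""
        num = prior * p_e_given_h
        return num / (num + (1 - prior) * p_e_given_not_h)

    prior = 0.5
    A = (0.8, 0.3)  # hypothetical P(A|H), P(A|not-H)
    B = (0.6, 0.9)  # hypothetical P(B|H), P(B|not-H)

    # Assuming A and B are conditionally independent given H, order is irrelevant:
    print(round(update(update(prior, *A), *B), 6))  # A then B -> 0.64
    print(round(update(update(prior, *B), *A), 6))  # B then A -> 0.64

Order effects of the courtroom kind therefore count against a strictly Bayesian description of belief change.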

R6: Cognitive variability

Individual differences tend to be associated with normativism, and cultural differences with moderate relativism. There are individual and cultural differences in reasoning (e.g., age, IQ, working memory, Western vs. Eastern culture). Language is cognitively variable, making language-based cultural norms of rationality more apt. Cross-cultural research is important in order to rid research of researchers' own cultural biases. Assessments of normative behavior show change in what people do, whereas interpretations of the correctness of that behavior vary, and a descriptivist approach tends to reveal qualitative differences. The boundary between “can” and “cannot” is unclear, as “ought” and “can” do not imply each other as neatly as Kant may have originally thought.

R7: Descriptivism versus normativism in conduct of empirical research on thinking

Normativism and descriptivism can both play roles in research; despite its biases, normativism can also be valuable. There is support for looking both at how we reason and at how we should. There remains a difference between what we expect to be normative and what the behavior actually is, and the authors are unsure whether some of the commentators' arguments rest on normative accounts or on competence/computational ones.

R8: Conclusions

The authors suggest that any researcher take a step back before evaluating what they are doing. Whether or not the field moves toward a more descriptivist approach, the authors hope at least to have opened a space for thought about reasoning and decision making, fields long permeated by normative theory, and for conversation that acknowledges normativism along the way.

 

Discussion questions:

  1. Do you agree with Hahn that normativism has nothing to do with the is-ought distinction?
  2. Sternberg argues that normative models are not applicable to the vast majority of everyday decisions. Is this an accurate assessment, and does it change how you feel about E&E’s argument?
  3. How are Oaksford & Chater proposing the process of iteration? Do you think this is rational when looking at behavior and what is versus what ought?
  4. Is it possible to find a middle-ground/balance between normativism and descriptivism when trying to understand how it is we think and how we ought to think? Nickerson suggests that the “truth is somewhere in between” the two methods. Do you agree?

 

Elqayam & Evans (2011), “Subtracting ‘ought’ from ‘is’: Descriptivism versus normativism in the study of human thinking” – Bridget Instrum & Olivia Artaiz

I. Logicism and normativism and their disconnects

Norms play a special role in mental processing (reasoning, judgment, and decision making), directing how we ought to behave. There are two types of normativism: empirical and prescriptive (234).

Empirical normativism: Thinking reflects S.
Prescriptive normativism: Rational thinking should be measured against S as a normative system, and ought to conform to it.
*S represents any type of normative system, including logic, Bayesian probability, or decision theory.
*“Normativism” can be substituted with other formal normative systems, such as logicism or Bayesianism.

These two tenets make normativism a matter of varying degrees (diagram on 235). Researchers vary in where they stand across both the prescriptive and empirical spectra, though they tend to adhere to high prescriptive norms; interestingly, no one adheres to high empirical norms and low prescriptive norms simultaneously. Many factors bear on where a researcher falls, including the role of a priori knowledge in normativism.

E&E’s main problem is with prescriptive normativism, arguing that it is a problematic system and unnecessary in scientific studies regarding human thinking (235). In their article, they touch on what they mean by normativism, the problem of arbitration, the is-ought inference and its role in two research programs in human thinking, how normativism creates negative bias in the study of human thinking, and why normativism is unnecessary. In conclusion, they suggest that a descriptive approach should be taken when addressing mental processes.

II. Normativism, rationality, and the three senses of ought

There are a number of different types of rationality including instrumental, bounded, ecological, and evolutionary rationality. Normative rationality is yet another form of rationality and is unique because it implies what we ought to do. There are three main meanings for ought (236):

  1. Directive Deontic: selection-for and functional analysis, e.g., “You must turn left at the light.”
  2. Evaluative Deontic: normative, e.g., “You should not steal.”
  3. Epistemic: expressing a belief about probability, e.g., “He should be able to catch his flight.”

The normative ought is evaluative. The distinction between the different senses of ought bears on the controversy over instrumental rationality (acting in a way that achieves one's goals). Some philosophers, like Oaksford and Chater (O&C), do not draw the distinction between the evaluative and directive ought; they view instrumental rationality as something to be justified through norms, blurring the line between instrumental and normative rationality. E&E disagree and believe there should be a clear distinction between these two types of rationality. They also take the stance that normativism is neither necessary nor useful when discussing function, adaptation, and ecological or instrumental rationality: “if behavior is typically adapted and we typically achieve personal goals, we should be rational without the use of normativism” (236).


III. Normative systems and the problem of arbitration

Each paradigm requires a particular normative system, and only the one right and appropriate norm can be applied.

Despite the need for one right system, there is no universal “clear-cut norm,” and this weakens normativism (237). Normativism is weakened further if there happens to be multiple norms that fit into a paradigm. This phenomenon is called the normative system problem or inappropriate norm argument.

There are three norm paradigms (237):

Single: Only one norm can be applied. Examples include the signal detection task and conditional elimination inference. No conflict.
Alternative: One standard norm and at least one alternative can be applied. Examples include conditional induction inference and the Wason selection task. Conflict.
Multiple: There are several equally standard norms that can be applied. Examples include metadeduction. Conflict.

Single-norm paradigms are common in areas such as memory tasks; however, they are rare in reasoning and decision-making scenarios. This, paired with the frequency of alternative- and multiple-norm conflict, poses a major challenge for normativism.

IV. The computational, the competent, and the normative

E&E's main argument is not a complete rejection of formal systems in exchange for processing ones. Instead, they reject the use of these formal systems as normative ones, rejecting the deontic, evaluative ought. E&E draw a distinction between normative theory and competence theory in order to highlight the distinction between ought and is. Competence theory encompasses Chomsky's and Marr's parallel notions: for Chomsky, competence is the “structural description of abstract knowledge that is value free,” answering the question “what is…?” (239); likewise, Marr's computational level of analysis describes what is being computed and why. Both make up computational/competence theory (descriptive theories), while the question of what ought to be makes up normative theory. E&E believe it is critical to discern between these two theories, and between is and ought: without a clear distinction between descriptive and normative theories, people fall into a controversial type of inference, inferring ought from is.

V. Inferring ought from is

Normativism functions selectively: there is one “appropriate” norm for any given scenario. This becomes increasingly difficult in alternative- and multiple-norm scenarios, where one or many alternative norms are available. Choosing the correct norm is difficult, yet crucial for normativism to function properly.

Competence theories challenge normative theories. Competence theories are descriptive and supported by descriptive evidence, and E&E, like other researchers, hold that descriptive evidence cannot be used for normative purposes. When people derive a normative conclusion from descriptive premises, they are said to make the is-ought inference. Oftentimes the is-ought inference rests on an implicit normative premise, such as the internal belief that “we should act in line with our natural instincts” (240); if this premise is not made explicit, the argument becomes fallacious. Similar to the is-ought inference, the naturalistic fallacy draws evaluative conclusions from natural events; specifically, it occurs when ethical norms are inferred from natural phenomena such as evolution.

V.1 Oaksford and Chater's (O&C) Bayesian rational analysis: O&C propose that logicism, both empirical and prescriptive, should be rejected and replaced by Bayesianism, on which human thinking is based on Bayesian probability and normatively justified by it. E&E object that O&C do not separate is from ought. O&C support a circle of normativity, in which everyday rationality (successful, instrumental behavior) is justified by formal rationality (logic, Bayesian probability, etc.), while everyday rationality in turn provides the empirical evidence for choosing the formal normative system. But the empirical evidence does not clearly determine which normative system to use. E&E find O&C's complex model troubling precisely because it mixes a number of different oughts and relies on the is-ought inference.

V.2 The individual differences programme of Stanovich and West: S&W's earlier work also follows the is-ought inference by connecting normative and computational-level analysis. S&W suggested that correct reasoning reflects higher-quality reasoning and cognitive capacity, since higher-ability participants gave better answers on particular tasks in past studies; therefore, the system the higher-ability people endorse is the correct one and should become the norm. This falls into the trap of the is-ought inference: the is is that higher-ability people chose more correct answers, and the ought is that the norm they utilized is the correct one.

V.3 Evaluative ought versus directive ought: O&C's rational analysis and S&W's earlier work both incorporate the is-ought inference, yet they draw different normative conclusions. O&C focus on adaptationist learning and suggest that ‘gene-directed behavior’ is rational. S&W, by contrast, take the self-described Meliorist approach and suggest that people are not innately rational but can learn to be rational through education and training (242). The difference shows in their interpretations of the Wason selection task: S&W take logic to be the best normative system for the task, because higher-ability individuals solve the problem through logic, whereas O&C choose information theory, because the majority of individuals use it during the task.

 

VI. Normativist research biases

E&E argue that normativism has triggered three types of research bias in psychologists' approach to studying human reasoning, thinking, and JDM (Table 3).

First, using the logic-and-deduction paradigm and Bayesian rationality, they introduce the prior rule bias: after untrained individuals are instructed to accept a new normative system of logic, their thinking comes to reflect this “built-in” normative system (empirical normativism). The built-in system constrains reasoning and may permit participants to get answers wrong “rationally” (243).

Second, they introduce interpretation bias, the suggestion that normativism has negatively influenced the way results are recorded and interpreted. Evans suggests that to avoid this bias we should record exactly what people do, without interpreting their logical accuracy or worrying about what they ought to do. The ought-is fallacy is an interpretation bias involving the normative reading of dual-process theories of reasoning: it assumes that System 2 (i.e., analytic processing) is responsible for correct normative responding while System 1 (i.e., heuristics) is associated with cognitive bias, a case of inferring is from ought. Some authors suggest that System 2 is “necessary” for normative rationality, whereas E&E suggest this “rule-based reasoning” may itself result in normative error (245).

Third, the clear norms bias proposes that psychologists are biased toward selecting research questions involving single-norm paradigms, even though such paradigms are rare and largely inapplicable to questions about JDM. In response to these problems, E&E suggest that adopting a descriptivist approach in place of a normativist one would help eliminate these biases.

 

VII. Can we manage without a normative theory?

In the previous sections of the article, E&E identified the problems of normativism; however, they acknowledge that normatively based formal systems have motivated several valuable and productive research paradigms (e.g., the Wason selection task, the 2-4-6 task). These formal theories bear a range of important relations to psychological theories (Fig. 2). Given the “heuristic value” of formal theories, E&E argue that descriptivism is a viable alternative to normativism because it can maintain these relations without problematic inferences and research biases (246). This may be achieved by a dual-process framework dubbed hypothetical thinking theory (HTT), which extracts the aspects of subjectivity, belief, and uncertainty from Bayesian theory and proposes that System 2 is capable of hypothetical thinking using epistemic models.

 

Conclusions

E&E conclude that a normativist approach to the psychology of reasoning and JDM is both problematic and unnecessary. We should instead adopt a descriptivist approach in order to avoid research biases and circular is-ought inferences. They note, however, that norms can still play a role in applied science and in research regarding planning, policy, and development.

 

Discussion Questions:

  1. In section II E&E discuss the question regarding function and normativism (heart example). Do you agree with E&E that function falls under a different type of ought than normativism, or do you think that function should be classified with normativism?
  2. If E&E warn against inferring ought from is, where does the concept of ought originate from?
  3. E&E highlight the arbitration problem as something that weakens normativism, especially because of the limited application of the single-norm paradigm. Can you think of an instance (not mentioned in the article) in which single-norm paradigm can be useful?
  4. Although E&E do not suggest the complete elimination of normativism, do you think it is possible to approach reasoning and JDM without a normative theory?

Peer Commentary to Oaksford and Chater: Precis of Bayesian Rationality (Timmy Ogle, Carly Watson, Nosagie Asaolu)

  • Allott & Uchida
    • O&C claim that heuristics that involve information gain should be used. Allott & Uchida state that classical logic and O&C’s probabilistic account of conditionals and of inference must be supplemented by accounts of processing.
  • Brighton and Olsson
    • O&C discuss rational analysis as a process model used to develop optimal behavior. However, Brighton and Olsson believe that functional analysis can occur without a need for optimality.
  • Danks & Eberhardt
    • Danks & Eberhardt agree with O&C that teleological explanations of human behavior are desirable but need a stronger foundation. They contend that Bayesian inference is neither a normative principle nor guaranteed to be optimal, since people only approximate such explanations.
  • De Neys
    • De Neys notes that O&C's modeling focuses exclusively on output data, which could lead to biased conclusions. He indicates that people are constantly trying to meet the norm.
  • Evans
    • O&C state that individuals resemble Bayesian reasoners more closely than classical logical ones. Evans agrees that the Bayesian model suits real-world reasoning better than one based on truth-functional logic; however, he does not see why O&C need to fit a normatively rational model to human reasoning.
  • Griffiths
    • Griffiths further examines the strengths and weaknesses of Bayesian models of cognition. Strengths include the systematicity of rational explanations, transparent assumptions and combining symbolic representation with statistics. Some of the challenges include providing psychological mechanisms, explaining origins of knowledge and describing how people make new discoveries.
  • Hahn
    • Hahn believes that an increase in explanatory power can be achieved by restricting a psychological theory. Although cognitive neuroscience experiments can produce results, she takes them to be less significant given the success of O&C's opposite, restrictive approach.
  • Halford
    • O&C believe that confidence is a function of informativeness. Halford counters that confidence is inversely related to complexity and that Bayesian rationality should be replaced by principles regarding cognitive complexity.  
  • Khalil
    • Khalil examines the question of rationality and whether humans use classical deductive logic or probabilistic reasoning. He attests that organisms do process information and respond to the environment in ways that qualify them as rational.
  • Liu
    • Liu proposes that the conditional probability hypothesis holds when reasoners explicitly evaluate the probability of conditionals, but that it may not hold when they make Modus Ponens (MP) inferences.
  • McKenzie
    • O&C suggest that deductive reasoning is parsimonious at a local and global level. They focus on environmental structure at the computational and algorithmic levels.
  • Nelson
    • Nelson believes that naive heuristic strategies can perform better than “optimal models.” Thus the normative role of the theoretical model and the adaptiveness of human behavior should be reexamined.
  • Oberauer
    • O&C state that people use probabilistic information to reason about unknown information. Oberauer counters that the probabilistic view of human reasoning has high a priori rationality but that O&C's data are ambiguous.
  • O’Brien
    • O&C have rejected logic and supported probability theory. O’Brien explains that the mental-logic theory is based on logic that developed through bioevolutionary history to gain an advantage in making simple inferences.
  • Over and Hadjichristidis
    • O&H take issue with O&C's assumption that the minor premises in conditional inferences are always certain, and believe that Jeffrey's rule is not limited enough to account for actual probability judgements.
  • Pfeifer and Kleiter
    • They address O&C’s probabilistic approach from a probability logic standpoint. They discuss coherence, normativity, logic, and probability from this viewpoint.
  • Poletiek
    • Poletiek proposes an alternative falsification test to the logical falsification theory of testing. Conversely to logical falsification theory, the Severity of Test is an explanation that involves confirming evidence, instead of falsifying.
  • Politzer and Bonnefon
    • They agree with O&C that human reasoning cannot be based purely on logic. However, they have qualms with BR because it does not address how conclusions are formed, and they believe O&C ignore the importance of defining uncertainty.
  • Schroyens
    • Schroyens challenges the normativeness of BR by noting that a rational analysis depends largely on individuals' differing environments and goals as influences on their rationality. Furthermore, Schroyens believes it is misleading for O&C to ignore algorithmic-level specifications when comparing probabilistic and nonprobabilistic theories.
  • Stenning and van Lambalgen
    • They do not agree with O&C's claim that logical methods cannot encompass nonmonotonicity and that a probabilistic approach is therefore required. They give examples where BR fails to account for some forms of nonmonotonicity, and suggest that a non-Bayesian theory must be used in addition.
  • Straubinger, Cokely, Stevens
    • While O&C solely address adult reasoning, S/C/S approach reasoning as something that varies over an individual’s lifespan. Because of the variations between individuals and age groups, S/C/S believe that one model (BR) isn’t sufficient to describe human reasoning as a whole.
  • Wagenmakers
    • Wagenmakers agrees that the information gain model is the best model to describe the Wason card task. However, he questions why participants don’t select all four cards given the information gain model. He also wonders if incentive, like money, would change the results.

 

Authors’ Response:

 

R2.1:

O&C denounce Evans’ Dual Process view because it seems possible that System 1 and System 2 could contradict one another. Additionally, they claim that addressing individual differences in reasoning isn’t necessary for determining whether there is a single or multiple human reasoning systems.

R2.2:

The authors observe that certain, deductive reasoning is rarely found outside mathematics; their account of reasoning therefore involves making pragmatic choices under uncertainty. Rather than working from the “premises alone,” BR allows for “uncertain, knowledge rich” inference.

R2.3:

O&C counter Politzer & Bonnefon’s criticism by providing examples (algorithms and constraint satisfaction neural network implementation of the probabilistic approach) of how BR accounts for the generation of conclusions.

 

R3.1:

O&C respond to Pfeifer & Kleiter’s statement, that the probability theory inherently includes classic logic, by saying a Bayesian inductive perspective is necessary because classic logic isn’t very applicable to everyday life.

R3.2:

O&C claim that adding a condition of relevance does not address the uncertainty problems, because elements outside mathematics are inherently uncertain. Furthermore, O'Brien's system does not correctly capture the intuitions of relevance between antecedent and consequent.

R3.3:

The authors argue that resolving clashes between premises can only be obtained by differentiating between stronger and weaker arguments, and degrees of confidence in the premises of those arguments.

 

They posit that logical methods provide no natural means of expressing such matters of degree, whereas dealing with degrees of belief and strength of evidence is the primary business of probability theory.

 

R3.4:

O&C respond to objections regarding the generalization of probabilistic reasoning and existing conflicts between prior beliefs and logical reasoning. They posit that the “description of behavior in logical or probabilistic terms doesn’t mean that the behavior is governed by logical and probabilistic processes,” and conclude that without probabilistic reasoning, logic cannot accurately capture human patterns of thought.

 

R3.5:

O&C justify the Bayesian approach as a “pragmatic” choice given its wide application in the cognitive and brain sciences. They also assert that the Bayesian assumptions may be “too weak” insofar as they impose only “minimal coherence criteria” on beliefs. Lastly, they deflect objections regarding justification by proposing probability as a “better” (not the best) means of dealing with uncertainty.

 

R3.6:

O&C respond to concerns regarding the “rigidity” and “uncertainty” of Bayesian probability. First, they assert that BR doesn’t need to account for all uncertainty, regarding conditionals, as some uncertainty isn’t relevant to the data. Second, they explain that the apparent lack of “rigidity” is Bayesian as it accounts for “pragmatic utterances”. Lastly, they disagree that people can reason deductively about probability intervals as new information is always incorporated from world knowledge.

 

R3.7:

The authors distinguish “disinterested” and “goal oriented” methods of inquiry: the former aims to maximize the expected amount of information gained from a task, while the latter maximizes the expected utility of obtaining the information. By adopting a “goal oriented” method, they avoid having to postulate “specific machinery.”

 

R4: (comprehensive)

 

The authors criticize “algorithmic” models (e.g., connectionist models) insofar as they shed no light on “why” the modeled processes work as they do. They also argue that “ecological rationality” supplements normative rationality and that “rational analysis aims to explain ecological rationality.” Moreover, they posit that rational analysis is “goal specific” insofar as “rational” refers to information-processing systems.

 

They also acknowledge the challenges faced when attempting to implement BR at an algorithmic level. Notwithstanding, they assert that “understanding the rational solution to problems faced by the cognitive system crucially assists with explanation in terms of representations and algorithms”; rational analysis thus assists algorithmic explanation.

 

O&C also acknowledge that rational analysis may be challenged when there are many near-optimal rational solutions, and that insisting on exactly the optimal solution can be over-restrictive. In such cases, they suggest, rational analysis will select a solution based on its “relative goodness.” They also defend the simplicity of the naive Bayes model, which can itself be justified by Bayesian reasoning.

 

R5:

First, the authors draw a doxastic/factual distinction: changing degrees of belief does not entail a change in the real conditional probability. They also respond to objections regarding the BR model, most importantly stating that the experiments were performed “pragmatically” insofar as “it conforms to the current demand of most journals.”

 

Discussion Questions:

  1. Many commentators feel that BR doesn’t provide an adequate explanation for how people generate conclusions. Could fast and frugal heuristics serve as an explanation? In other words, to what extent do fast and frugal heuristics serve as the “specific machinery” for probabilistic reasoning?
  2. Is BR normative or descriptive? Are there any tensions between rational analysis and ecological rationality insofar as the former seems normative and the latter accounts for individual differences.
  3. Why is it that people seem more rational in the real world than in the laboratory? That is, why are there more violations of logic when in a controlled setting?

 

Oaksford & Chater (2009), “Précis of Bayesian Rationality: The Probabilistic Approach to Human Reasoning” — Steven Medina & Deniz Bingul

Oaksford and Chater challenge the logicist conception of human rationality. In its place, they advocate Bayesian rationality, which offers a framework for reasoning in the face of uncertainty. Bayesian rationality involves probabilistic reasoning. Here, probability describes an agent’s degrees of belief and is thus considered qualitative/subjective, not numerical.

 

1. Logic and the Western conception of mind

Oaksford and Chater describe an early approach to rationality, known as the logicist conception of the mind, according to which inferential relations maintain absolute certainty. Oaksford and Chater proceed to use syllogisms in demonstrating that logical arguments are truth preserving: To believe the premises of a logical argument is to believe its conclusion. So, denial of that conclusion is incoherent. O&C note that logical certainty prevents the addition of contingent facts (70). They then introduce two contemporary theoretical accounts of human reasoning—the mental logic and mental models theories. The mental logic view assumes reasoning involves logical calculations over symbolic representations; the mental models view takes reasoning to involve concrete representation of situations (71). In concluding this section, O&C introduce Bayesian rationality as the approach that best deals with the discovery of theory-refuting data.

2. Rationality and rational analysis

This section seeks to demonstrate why a Bayesian perspective is better than a logical one. O&C begin by outlining the six steps of rational analysis (71–72), which include normative theory within a larger account of empirical data concerning thought and behavior. Rational analysis aims to understand the structure of the problems facing the cognitive system, taking into account the relevant environmental and processing constraints. O&C address two caveats of Bayesian rationality: rational analysis is not a theory of psychological processes, and it does not measure performance on probabilistic or logical tasks (72). The authors assure us, though, that neither caveat poses any real inconvenience.

 

3. Reasoning in the real world: How much deduction is there?

This section challenges the use of the logical calculus in everyday reasoning. In reasoning about the everyday world, we usually have only bits of knowledge, some of which we believe only partially and/or temporarily. Moreover, the non-monotonicity of commonsense reasoning means that we can overturn virtually any conclusion upon learning additional information (72). Non-monotonic inferences cannot be accounted for by conventional logic. Importantly, classical logic fails to deal with the notorious “frame problem,” which refers to the difficulty of representing the effects of an action without having to enumerate obvious “non-effects” (73). BR thus addresses this “mismatch” between logic-based and commonsense reasoning probabilistically, rather than by constructing non-monotonic logics.

 

4. The probabilistic turn

O&C offer probabilism as the approach best suited to the uncertainty of everyday reasoning. The authors provide the example of court cases decided by jury: in these situations, new pieces of evidence can modify one's degree of belief regarding the guilt of the defendant. Here, probability is determined subjectively (74). In the remaining paragraphs, O&C present a modified version of the familiar conditional If A then B. This version, embraced within the cognitive sciences, takes B to be probable given that A is true (74). The authors conclude the section by noting the shift towards probability theory across a number of domains, including philosophy of science, AI, and cognitive psychology.

 

5. Does the exception prove the rule? How people reason with conditionals

This section deals with the first of three core areas of human reasoning—conditional inference. The authors identify four conditional inference patterns (Fig. 1): (1) Modus Ponens, (2) Modus Tollens, (3) Denying the Antecedent, and (4) Affirming the Consequent. Of the four, two are logical fallacies—Denying the Antecedent and Affirming the Consequent.

Figure 2, Panel A presents data from experiments that asked people whether they endorse each of the four inference patterns. The observed results diverge from the predictions of the standard logical model. Logicists attempt to account for the divergence by allowing that people adopt the pragmatic biconditional interpretation, though it is logically invalid (75).

The Bayesian approach, however, appeals only to probability theory. The probabilistic account of conditional inference involves four key ideas (75):

  1. P(if p then q) = P(q|p), aka “The Equation;”
  2. Probabilities are interpreted as degrees of belief, and this allows for belief updating;
  3. The Ramsey Test determines conditional probabilities; and
  4. By conditionalization (i.e., when the categorical premise is certain, not merely supposed), our new degree of belief in q equals our prior conditional degree of belief in q given p. Quantitatively: if P0(q|p) = 0.9 and P1(p) = 1, then P1(q) = 0.9. The takeaway is that, on the probabilistic account, we can update our degree of belief in q upon learning that p is true without making too strong a claim (see the sketch following this list).
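To make the probabilistic reading of the four inference patterns concrete, here is a minimal sketch in Python. The parameterization (a = P0(p), b = P0(q), e = P0(q|p)) and the example numbers are our own illustration, not values from the précis; each pattern's predicted endorsement is simply the conditional probability of its conclusion given its categorical premise, derived by ordinary probability theory.

```python
# Minimal sketch (illustrative parameterization, not O&C's published model).
# a = P0(p), b = P0(q), e = P0(q|p); endorsement of each inference pattern
# is the probability of its conclusion conditional on its categorical premise.

def endorsements(a, b, e):
    """Predicted endorsement rates for MP, DA, AC, and MT."""
    # Coherence constraints: P(p & q) = a*e <= b and P(p & ~q) = a*(1-e) <= 1-b.
    assert a * e <= b <= 1 - a * (1 - e), "incoherent parameter values"
    not_p_not_q = 1 - a - b + a * e          # P(~p & ~q)
    return {
        "MP": e,                             # P(q | p)
        "DA": not_p_not_q / (1 - a),         # P(~q | ~p)
        "AC": a * e / b,                     # P(p | q)
        "MT": not_p_not_q / (1 - b),         # P(~p | ~q)
    }

# Rare antecedent and consequent, strong conditional: the four patterns get
# graded endorsement levels instead of logic's all-or-none verdicts.
print(endorsements(a=0.1, b=0.2, e=0.9))
```

Note that these raw conditional probabilities alone do not reproduce the empirical magnitudes of the MP–MT and DA–AC asymmetries; as the next paragraph explains, O&C appeal to a rigidity violation (a lowered estimate of P0(q|p) carried into the DA, AC, and MT calculations) to recover them.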

The remainder of this section deals with biases observed in conditional inference. The first of these is the pair of inferential asymmetries (MP is more widely endorsed than MT, and AC more than DA). Though the probabilistic account can explain the asymmetries without invoking pragmatic inferences or cognitive limitations, it initially distorts their magnitudes (Panel C). However, learning that the categorical premise is true can alter one's degree of belief in the conditional, and this constitutes a violation of the rigidity condition. The violation lowers one's estimate of P0(q|p), and this lowered estimate, when carried into the calculations of the probabilities for DA, AC, and MT, explains the observed magnitudes of the asymmetries (76).

The second bias is the negative conclusion bias—that people endorse DA, AC, and MT more often when the conclusion contains a negation. Since the probability of an object being red, for instance, is lower than the probability of its not being red, P0(p) and P0(q) take on higher values when p or q is negated. The seemingly irrational negative conclusion bias thus reduces to a “high probability conclusion effect” (77).

The authors conclude the section with an overview of a small-scale implementation of the Ramsey test and question whether future implementations can explain the full range of empirical observations in conditional inference.

 

6. Being economical with the evidence: Collecting data and testing hypotheses

This section deals with the second of three core areas of human reasoning—data selection. Recall the Wason selection task. In testing the hypothesis if there is an A on one side of a card, then there is a 2 on the other, one should seek out falsifying examples (i.e., p, not-q cases). Accordingly, one should select the A and 7 cards. This is not what happens, however: participants more often select cases that confirm the conditional (confirmation bias) (77).

Bayesian hypothesis testing is comparative—not falsificationist. The optimal data selection (ODS) model assumes that people compare a dependence hypothesis (HD)—that P(q|p) is higher than the base rate of q—with an independence hypothesis (HI), according to which P(q|p) equals the base rate of q (78). Initially, people are taken to be equally uncertain about which hypothesis is true; the goal of the selection task, then, is to reduce this uncertainty. Using Bayes' theorem (see Note 2), one can calculate one's new degree of uncertainty about HD upon discovering, say, that a p card has a q on its other side.
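As a rough illustration of this comparison (with made-up numbers; the précis does not commit to particular values), the two-hypothesis Bayesian update can be sketched as follows, continuing in Python:

```python
# Sketch of the ODS hypothesis comparison. HD: P(q|p) exceeds the base rate
# of q; HI: P(q|p) equals the base rate. Equal priors encode the initial
# uncertainty; Bayes' theorem revises them in light of a card outcome.

def posterior_HD(p_data_given_HD, p_data_given_HI, prior_HD=0.5):
    """P(HD | data) for the two-hypothesis comparison, via Bayes' theorem."""
    joint_HD = p_data_given_HD * prior_HD
    joint_HI = p_data_given_HI * (1 - prior_HD)
    return joint_HD / (joint_HD + joint_HI)

# Suppose the turned p card shows q. If HD puts P(q|p) at 0.9 while HI puts
# it at the (rare) base rate 0.2, uncertainty shifts sharply toward HD:
print(posterior_HD(0.9, 0.2))  # ~0.818
```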

The ODS model is based on information gain, and participants in Wason's task are assumed to base their selections on expected information gain. The model also assumes the rarity of the properties named in the antecedent and consequent. The expected informativeness of the q card is therefore greater than that of the not-q card, since we would almost certainly learn nothing about our hypothesis by investigating not-q cases. This approach is at odds with the falsification perspective but agrees with the empirical data (78). The ODS model thus suggests that performance on Wason's task is in fact consistent with rational hypothesis-testing behavior.
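Expected information gain can then be sketched as the prior uncertainty (Shannon entropy over the two hypotheses) minus the expected posterior uncertainty, averaged over the possible outcomes of turning a card. Again the numbers are illustrative assumptions, and the real ODS model involves more machinery:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a two-hypothesis belief state (p, 1 - p)."""
    return -sum(x * math.log2(x) for x in (p, 1 - p) if x > 0)

def expected_gain(prior_HD, p_q_given_HD, p_q_given_HI):
    """Expected reduction in uncertainty about HD from turning a p card.

    The card's hidden face shows either q or not-q; each outcome yields a
    posterior by Bayes' theorem, and the gain is the prior entropy minus
    the outcome-weighted posterior entropy.
    """
    gain = entropy(prior_HD)
    outcomes = [
        (p_q_given_HD, p_q_given_HI),            # hidden face shows q
        (1 - p_q_given_HD, 1 - p_q_given_HI),    # hidden face shows not-q
    ]
    for like_HD, like_HI in outcomes:
        p_outcome = like_HD * prior_HD + like_HI * (1 - prior_HD)
        posterior = like_HD * prior_HD / p_outcome
        gain -= p_outcome * entropy(posterior)
    return gain

# With equal priors and a rare consequent, the p card is expected to be
# highly informative (~0.4 bits here):
print(expected_gain(prior_HD=0.5, p_q_given_HD=0.9, p_q_given_HI=0.2))
```

The same calculation, run with each hypothesis's predictions for the hidden faces of the other cards, underwrites the ranking described above, on which the q card beats the not-q card under rarity.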

The authors also address the apparently non-rational matching bias, whereby participants select the cards whose values are named in the conditional. Given the rule if A then not 2, people tend to make the falsifying response: they select the A and 2 cards (which, for this rule, are precisely the falsifying p and not-q cards). Because of the rarity assumption, however, the negated consequent not 2 is a high-probability category, and a high-probability consequent warrants the falsifying response (79).

In the remainder of the section, the authors discuss deontic selection tasks (which involve conditionals that express rules of conduct, not facts about the world). In such instances, people do select the “logical” cards (the p and not q cards). Here, it is not a hypothesis that is tested but a regulation; it is useless to confirm or disconfirm how people should act. Rather, participants seek out violators of the rule (79). Moreover, in such selection tasks, people select cards to maximize expected utility, and because only the p and not q cards have positive utilities, these are the cards chosen (80). This model has also been extended to rules with emotional content.

 

7. An uncertain quantity: How people reason with syllogisms

This section deals with the last of three core areas of human reasoning—syllogistic inference, which relates two quantified premises, of which there are four types: all, some, somenot, and none. Of the 64 possible syllogisms, 22 are logically valid (Table 1).

The Probability Heuristics Model (PHM) applies the probabilistic approach to syllogisms. PHM's most important feature is that it also covers generalized quantifiers, like most and few (82). PHM assigns probabilistic meanings to the terms of quantified statements (81). For instance, the meaning of the universally quantified statement All P are Q can be given as P(Q|P) = 1. Similarly, the generalized quantifier statement Most P are Q can be understood as, for instance, 0.8 < P(Q|P) < 1 (see the sketch below).
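A compact way to picture these probabilistic semantics is as constraints on the conditional probability P(Q|P). The interval bounds for Most and Few below are assumed example values, echoing the précis's own illustration for Most:

```python
# Illustrative encoding of PHM-style probabilistic semantics for quantifiers.
# Each quantifier becomes a constraint on the conditional probability P(Q|P).
# The interval bounds for Most and Few are assumed example values.

QUANTIFIER_SEMANTICS = {
    "All":     lambda p: p == 1.0,        # P(Q|P) = 1
    "Most":    lambda p: 0.8 < p < 1.0,   # high but short of certainty
    "Few":     lambda p: 0.0 < p < 0.2,   # low but nonzero (assumed bound)
    "Some":    lambda p: p > 0.0,         # P(Q|P) > 0
    "None":    lambda p: p == 0.0,        # P(Q|P) = 0
    "Somenot": lambda p: p < 1.0,         # P(not-Q|P) > 0
}

# "Most P are Q" is satisfied by, e.g., P(Q|P) = 0.9, but "All P are Q" is not:
print(QUANTIFIER_SEMANTICS["Most"](0.9))  # True
print(QUANTIFIER_SEMANTICS["All"](0.9))   # False
```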

These interpretations are then used to build simple dependency models of quantified premises, and these models can be parameterized to determine which inferences are probabilistically valid (81).

PHM also assumes that, because the probabilistic problems encountered by the cognitive system are in general very complex, people employ simple and effective heuristics to reach “good enough” probabilistic solutions (83).

There are two background ideas to keep in mind regarding heuristics (81): (1) the informativeness of a quantified claim and (2) probabilistic entailment between quantified statements. A claim is informative in proportion to how surprising (unlikely) it is: No P are Q, which is very likely to be true, is thus an uninformative statement, while All P are Q is the most informative. Regarding the second idea, the quantifier All probabilistically entails (p-entails) Some; Some and Somenot are mutually p-entailing.
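One standard way to cash out “informative in proportion to surprise” (our gloss, not a formula quoted from the précis) is information-theoretic surprisal: I(s) = log2(1 / P(s is true)). A statement that is almost always true, like No P are Q under the rarity assumption, then carries close to zero information, while one that is rarely true, like All P are Q, carries the most.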

There are two types of heuristics for syllogistic reasoning (82)—the generate heuristics (produce candidate conclusions) and the test heuristics (evaluate the plausibility of candidate conclusions).

There are three generate heuristics: (G1) the min-heuristic, (G2) p-entailments, and (G3) the attachment-heuristic.

The two test heuristics are (T1) the max-heuristic and (T2) the some_not-heuristic. In general, where there is a probabilistically valid conclusion, these heuristics identify it successfully; the authors offer experimental data in support of this claim.
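Here is a toy sketch of the min- and max-heuristics only (the attachment heuristic, p-entailments, and the some_not-heuristic are omitted). The informativeness ordering used is the one standardly assumed in PHM work, listed from most to least informative; treat both the ordering and the scoring as assumptions of the sketch:

```python
# Toy sketch of PHM's G1 (min-heuristic) and T1 (max-heuristic).
# Assumed informativeness ordering, most informative first.
INFORMATIVENESS = ["All", "Most", "Few", "Some", "None", "Somenot"]

def min_heuristic(premise_quantifiers):
    """G1: the candidate conclusion takes the quantifier of the least
    informative (the 'min') premise."""
    return max(premise_quantifiers, key=INFORMATIVENESS.index)

def max_heuristic(premise_quantifiers):
    """T1: confidence in the candidate conclusion scales with the
    informativeness of the most informative (the 'max') premise;
    returned here as a crude rank-based score in (0, 1]."""
    best = min(premise_quantifiers, key=INFORMATIVENESS.index)
    return 1 - INFORMATIVENESS.index(best) / len(INFORMATIVENESS)

premises = ["All", "Some"]       # e.g., All B are C; Some A are B
print(min_heuristic(premises))   # 'Some' -> candidate conclusion "Some A are C"
print(max_heuristic(premises))   # 1.0    -> high confidence (max-premise is All)
```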

 

8. Conclusion

Taken together, the empirical data reviewed support O&C's probabilistic approach to human rationality. In sum, the cognitive system is best understood as building qualitative probabilistic models of the uncertain world.

 

Questions:

  1. Does Bayesian rationality actually account for the uncertainty of everyday situations that logical methods ignore?
  2. Can logical rationality and Bayesian rationality exist in unison?
  3. If classical logic is indeed inadequate in its explanation of human rationality, what becomes of the normative/descriptive gap?
  4. How might a program like ECHO accommodate the Bayesian perspective, if at all?

Individual Differences in Reasoning… Peer Commentaries and Author Replies

Stanovich and West set out to address two major topics in their response: the role of individual differences in the normative/descriptive gap and the two-process model of evolutionary rationality (System 1) and normative rationality (System 2).

R1. Individual differences and the normative/descriptive gap

R1.1. Normative applications versus normative models.

First, they touch on the distinction between normative applications and normative models, claiming that many commentators misunderstood what their appeal to patterns of individual differences was meant to show. S&W clarify that they used such patterns not to determine whether normative models are adequate, but to determine when a given normative model applies to a particular situation. They stress the importance of empirical data in determining the applicability of normative models (rather than their correctness): with their data, S&W were trying to shed light on whether the norms applied in particular situations were appropriate to those situations (for example, they reference the cabs problem and the applicability of Bayes' Theorem to it).

R1.2. The understanding/acceptance assumption.

They also discuss the understanding/acceptance assumption and its efficacy as a tool for judging competing explanations of the normative/descriptive gap. They confirm Schneider's view that the assumption is a “necessary but not sufficient criterion” (702) for understanding the gap in behavior. Notably, the assumption posits that more intelligent people are more likely to apply the correct norm. S&W next turn to the Allais paradox, countering Ayton's and Hardman's belief that the assumption fails because it cannot adjudicate the normative model applied in this case. S&W argue that norm misapplication is at fault for the dispute arising from this paradox, and that the understanding/acceptance assumption should not be held accountable.

R1.3. Relativism and rational task construal.

In the next section, S&W look at rational task construal. They cite the Panglossian view that any and all task construals are rational: intransitivity, for instance, can be eliminated by construing the problem in a way that removes it, so that the task, the task responses, and the person involved all come out rational. However, alternative task construals do not protect one from irrationality, as one can be charged with irrationality in a different part of the process. Cognitive evaluation is crucial for determining the rationality of task construals.

R1.4. Cognitive ability and the SAT.

S&W strongly defend their use of tasks from the heuristics and biases literature and of SAT scores as distinct measures of cognitive ability. They claim that the tasks they measured were of a completely different nature than those on the SAT, and that, notably, the reasoning tasks lack an uncontroversially correct answer. Correlation, or the lack of it, can then be used to examine the normative model and task construal applied to particular problems. S&W refute the claims of those who criticize their use of SAT performance as an indicator of cognitive ability, drawing clear relationships between general intelligence, working memory, computational capacity, and fluid intelligence, all reflected in SAT scores. Notably, the authors mention that education does not appear to affect performance on heuristics and biases tasks. They also note their use of converging measures of cognitive ability, which measured intelligence differently than the SAT and showed the same correlations.

R1.5. Alternative interpretations of computational limitations

In their target article, S&W interpreted cognitive ability measures as indices of overall cognitive capacity. They admit, however, that there are alternatives: (1) cognitive ability measures may indicate an individual's tendency to respond to problems with appropriate strategies, and (2) cognitive ability may reflect the number of different problem representations with which an individual can cope. These alternative interpretations yield three distinct kinds of computational limitation: (1) limitations in how successfully one can process information, (2) limitations in the flexible deployment of strategies to solve a problem, and (3) limitations in the types of problem representations one can handle and, thus, in the types of strategies one can use.

R1.6. Alternative Construals as a Computational Escape Hatch.

Ball & Quayle raise the interesting idea of a computational escape hatch, which prompts S&W to blur the line between the seemingly distinct notions of alternative task construal and computational limitation. Adler suggests, and S&W agree, that tasks may be interpreted differently from what the experimenter intended because individuals cannot fully grasp the task, and thus the normative model. S&W note that alternative construals can serve as computational escape hatches either when one is consciously aware of all the alternative task interpretations and chooses the one with the lowest computational demands, or when one chooses it without being aware of the alternatives.

R1.7. Thinking Dispositions and Cognitive Style.

S&W agree with Kühberger and argue that thinking dispositions and cognitive capacities are distinct, distinguishable concepts, as they function at different levels of analysis: cognitive abilities index individual differences in the efficiency of processing at the algorithmic level, while thinking dispositions index individual differences at the intentional level. S&W have found that thinking dispositions explain variance independent of cognitive capacity, supporting the separability of the two.

R2. Rationality and Dual Process Models

As Friedrich and others point out, S&W take pride in their ability to “bridge between the evolutionary and traditional biases perspectives” (707), especially insofar as it reinforces their argument for dual-system processing. The goals of the two systems should be similar (to strive for normative rationality), but Frisch stresses that the two systems will not necessarily compute and conclude in the same way. Irrational behavior is often attributed to System 1 processing, or to a lack of processing altogether. However, S&W recognize the potential downside of over-analysis, citing research discussed by Hardman in which more analytical individuals made less effective decisions. They also continue to support the rationality behind the Meliorist framework.

R2.1. Short Leash, Long Leash Goals

S&W explain short-leash versus long-leash goals using the Mars Rover example, in which something that can no longer be controlled remotely (“short leash”) must be given the “long leash” ability to control itself (710). Giving System 1 short-leash goals of the form “if A, then B” allows it to remain functionally rational. System 2 is better equipped to pursue long-leash goals, which boil down to “Do whatever you think is best,” as Dawkins puts it. For System 2 to pursue this goal correctly, however, it must be given the tools to analyze and to recognize what “is best,” a point on which Greene & Levy comment. S&W here push back against Ayton's suggestion that a rational bee with long-leash goals should not sacrifice itself for the good of the hive, saying that evolutionary psychologists mistakenly presuppose that creatures with evolutionary rationality necessarily have individual rationality. System 2, unlike System 1, is capable of continuous goal evaluation.

R2.2. Cultural Evolution of Normative Standards

Schneider believes that the “cultural evolution of norms somehow present difficulties for our conceptualization” (712). S&W disagree: they hold that the cultural history of norm evolution shows that the individuals who create progressive change in standards are of high intelligence, and that others can then adopt the newly developed standards as learners. Panglossians often downplay the evolution of reasoning norms, since on their view an “incorrect” response should never happen; they fail to recognize that changing norms can make a once “incorrect” response “correct.”

R3. Process and Rationality

Commentators such as Hoffrage, Kahneman, and Reyna criticize the lack of algorithmic-level process models for many of the tasks mentioned in the target article. While S&W agree that such process models are important, they argue that providing them was not the point of their research program: rather than focusing entirely on algorithmic-level models, they chose to explore intentional-level models and their variance in rationality as well. They argue that exploring intentional-level constructs does not detract from the search for more extensive algorithmic-level specification, and that the two levels may actually interact synergistically.

R4. Performance Errors

Hoffrage treats many errors that Stanovich & West classify as computational as performance errors, most notably “recurring motivational and attentional problems” (713). S&W argue that this is not appropriate: such errors are “in fact like cognitive styles and thinking dispositions at the intentional level” (713), since their stability and predictability go against the random nature of performance errors. For Stanovich and West, the most important implication of calling something a performance error is that it is “trivial”; it becomes significant only when it is repeated and forms a pattern.

R5. The Fundamental Computational Bias and “Real Life”

Commentators criticise the authors for focusing on problems that are dissimilar to real life; the authors respond by arguing that “real life” is no longer real life, as technology has “presented evolutionarily adapted mechanisms with problems they were not designed to have” (714). They give examples such as the food people eat, communications, and advertisements. Commentators such as Ayton and Hardman therefore point out the value of the “fast and frugal” heuristics studied by Gigerenzer. Kahneman acknowledges that this kind of processing is important, but reminds readers that System 2 is still necessary to correct the associated biases. The authors largely agree with these analyses, think that most humans still live in the world of System 1, and lament that there are “very few situations where System 1/2 mismatches are likely to…have knock-on effects” (714).

R6. Potentially Productive Ideas

Stenning & Monaghan suggest other ways to reparse the System 1/System 2 difference, such as cooperative versus adversarial communication or explicit versus implicit knowledge. Moshman's distinction between “what a system computes” and “how the processing is implemented” may, with clarification, yield greater explanatory power. The authors acknowledge other ideas as well: a finer-grained scoring system by Hoffrage & Okasha, taking into account test subjects' decision-making history by Fantino, and the inclusion of further tools such as developmental data by Reyna & Klaczynski or the notions of rationality found in other literatures, such as philosophy, by Kahneman.

R7. Complementary Strengths and Weaknesses

The different “camps” have each advanced the field through their own lenses. The Panglossians have demonstrated that the answers to real-world problems are often “in human cognition itself” (717) and that humankind is already optimising in the real world; the only necessary task is to characterise that process and optimise it further. The Apologists have shed light on the power of evolution and its ability to shape cognition. The Meliorists have argued for the possibility of cognitive change and warned against the possible results of mismatches between the ways we think and the ways we should think in modern society.

Similarly, each camp has its weaknesses. Meliorists sometimes jump ship too quickly and blame flaws in reasoning, while Panglossians are often forced into uncomfortable positions in order to defend human rationality; each must therefore remain open to the other possibility and take it into account. The Apologists can be too backward-looking, failing to recognise the huge differences between modern society and the environment in which humans evolved.

Discussion Questions:

  1. Do you think the characterizations by Stanovich & West in R7 are valid?
  2. Did S&W defend their choice to use the SATs as a cognitive ability measure well?
  3. Do you agree with the criticism that the alternative task construal is a way to mask computational limitations?
  4. Does S&W's argument that today's society is too technologically advanced for System 1 to continue to adapt make sense?

Stanovich & West, “Individual differences in reasoning: Implications for the rationality debate?” – Audrey Goettl & Kendall Arthur

Stanovich and West (S&W) attempt to uphold the rationality of human thought by explaining the gaps between descriptive and normative models of decision-making. Human patterns of judgment often fail to follow the normative models of decision-making and rational judgment. There are two major schools of thought regarding how these inconsistencies should be accounted for: the Meliorists and the Panglossians. Meliorists believe that there is a deficiency in human cognition and that we should reevaluate and reform our way of thought to coincide with the normative model. Panglossians believe that these gaps are not a result of human irrationality and that the normative model should be changed rather than our way of thinking.

Rather than clearly identifying themselves as Meliorists or Panglossians, S&W aim to lay out the possible explanations for the differences between human responses and normative performance. They attribute these deviations from the normative to four possibilities: 1) performance error, 2) computational limitations, 3) application of the wrong normative model, and 4) alternative task construal, all of which preserve the rationality of human reasoning.

 

Performance Error

Performance error is unsystematic deviation from the normative model: essentially random lapses in judgment due to distraction, inattention, temporary memory failure, and the like. These momentary attention, memory, or processing lapses produce individual variation in judgment. Taken to its limit, this explanation would attribute all deviations from the normative model to performance error.

 

Computational Limitations

Computational limitations, unlike performance error, are systematic deviations from the normative model as a result of deficiencies in human cognition. The individual differences in human decision-making result from different levels of computational limitation (some people have more brain power than others). Variations in cognitive ability lead to variation in human judgment.

 

Application of the wrong normative model

Assigning a normative model to a problem is a complex and involved process, and it is extremely difficult to find a problem that can be clearly matched with a single normative model. This complexity leaves a lot of room for error in decision-making and for application of the wrong normative model, leading to individual variation, especially if not everyone can identify and use the same models for judgment.

S&W identify a correlation between the understanding and the acceptance of a normative model, otherwise known as the understanding/acceptance principle: the greater the understanding of a model, the higher its acceptance. With more understanding, an individual is driven toward the normative model, making individuals with higher cognitive ability more likely to fall in line with normative thought. It is not always clear which pattern of reasoning should be used in a specific decision-making situation; higher understanding decreases the possibility of applying the wrong normative model by increasing one's capacity for reflection on and evaluation of an issue, and thereby one's ability to identify the appropriate response.

 

Alternative task construal

Alternative task construal attributes individual variation in reasoning to the subject's interpreting a task differently than the experimenter intended, leading the subject to provide the appropriate normative answer to a different problem. This is unlike application of the wrong normative model, which concerns an individual's inability to identify the correct way to solve a given problem; with alternative task construal, the variation comes from individuals interpreting different problems altogether.

Given that normative reasoning is being used as a standard for decision-making, and because alternative task construal is a source of variation among individuals' reasoning, we have to evaluate which interpretations/construals are appropriate for that standard. The possibility of various interpretations implies that we need principles of rational construal, and finding these principles can proceed by the same methods of justification used in instrumental rationality. For example, a construal could be deemed appropriate or inappropriate based on how efficiently it helps a subject reach their goals.

S&W propose dual process theory as one possible explanation for why we interpret tasks differently. System 1 is a largely unconscious form of processing aimed at maximizing advantages tied to evolutionary interests; System 2 is a form of processing aimed at maximizing advantages for the individual, based on the interests of the whole person. The two systems tend to yield different construals and so cause variation in individual reasoning, because not everyone has the cognitive ability to deploy System 2 processing in the same manner.

 

Conclusion

After laying out the four possible sources of variation in individual reasoning, S&W conclude that the most compelling explanations of decision-making differences are application of the wrong normative model and alternative task construal. Ultimately, S&W side with the Meliorist conclusion that there are individual differences that do not fit the normative model of decision-making, and that human reasoning is not ideal “as is”.

 

Responses:

Note: We will focus on the responses up to pg 680 with Hunt.

 

One of the biggest points of concern for a lot of the commentaries we encounter in this section is S&W’s claim that failure to use a certain normative model in a given situation is indicative of human irrationality. Ayton argues that trying to determine what is “normative” is not useful because decisions cannot be evaluated so easily. Bees, for example, violate what we expect to be the normative model, but their behavior is still successful if they survive. So although bees do not follow the normative model, they are not considered “irrational.” Baron states that normative models should not be justified by consensus/intuitive judgements, but should instead come from analysis of specific situations that are supported by more evidence. Goodie & Williams question defining reasoning in terms of aptitude, because aptitude must be measured using reasoning and this is circular.

 

Others are compelled by the way S&W outline the dual-processing model, but think that it could be better developed. Ball & Quayle provide an especially interesting view by further explaining different scenarios in which the systems are used, highlighting that System 1 is often used as an escape hatch for when System 2 is overloaded. Friedrich takes issue with the distinction between the two systems, especially the overemphasis on System 2 that comes from the experimenter’s expectations and from the fact that most study samples come from elite universities where analytic processing is emphasized. Frisch offers that instead of trying to figure out which system is “better” or used more often, we should view the systems as working in tandem (like yin and yang) – and that a careful balance between the two would be most effective.

 

Finally, some commentators question how S&W's system would actually function in the “real world.” DeKay et al. and Greene & Levy argue that S&W need to consider the evolutionary aspect of our psychological mechanisms so that we can better understand our behavior and more clearly define what is normative. The evolutionary lens explains that some errors exist because they were advantageous in the past, and that variation is important for adapting to new and unpredictable environments. Girotto also explains that people in “real world” scenarios might choose non-normative responses in order to choose a response that is optimal for a group (i.e., a collectively optimal decision). Hardman questions the viability of the understanding/acceptance principle and provides examples where it has not actually held (sometimes people do not pick the option they know more about).

 

Discussion Questions:

  1. Of the four explanations that S&W provide for deviations from normative models, which do you think is most compelling?
  2. Are we satisfied by S&W’s use of the SAT as a way to measure intelligence? What would be a better way to account for different types of intelligence?
  3. Of the various critiques to S&W, which do you think is most problematic for them?
  4. S&W ultimately seem to favor the Meliorist perspective. Do you agree with this conclusion, or do you find the Panglossian view more accurate?

Links tangentially related to class discussions

So in Section B we sometimes reference sources oddly related to the discussion. This post is a place to put links to them so others can check them out if they want. Here are three I've mentioned in past classes.

Re: Can computers ever be rational in the way that humans are rational? What role does consciousness play in rationality?

Maybe you’ve seen the bumper stickers that say “Keep Vermont Weird”? Here’s an example: there’s an organization based here in Addison County, Vermont, that seeks to create conscious beings that are more than robots, but independent entities with rights. The organization is classified as a “church” in the town reports (meaning they don’t pay property tax) and they have weekly “services.” You can check out the website, but if I understand it, you can “upload your mind” into their computer program (for a fee, I believe!) and it will embody your soul into perpetuity.

Their website is www.terasemmovementfoundation.com. If you google Bina48, their spokesperson robot, you’ll see all kinds of links, including one in which “she” was interviewed by students for The Middlebury Campus.

 

Re: Discussion of Thagard’s ECHO program and questions about the impacts of input bias in computer programs, NPR last week had a whole series about computer bias. Very interesting stuff. There were articles during All Things Considered all week. Here’s the link to Monday’s:

http://www.npr.org/2016/03/14/470427605/can-computers-be-racist-the-human-like-bias-of-algorithms

 

Re: Importance of good use of science to formulate justified beliefs:  When decisions need to be made (legal, humanitarian, etc) how does one analyze the data? The Human Rights Data Analysis Group is a non-profit organization that analyzes data to illuminate truths.  From their website: “We believe truth leads to accountability, and at HRDAG, promoting accountability for human rights violations is our highest purpose. In the wake of mass killings and genocide, deportations and ethnic cleansing, and systematic detention and torture, accountability may mean many things. It could mean, simply, learning what really happened. Accountability could also mean a criminal trial for perpetrators. Or it might mean having the worst perpetrators removed from public office. Because accountability hinges on truth, we work toward discovering the most accurate “truth” possible. To this end, we apply statistical and scientific methods in the analysis of human rights data so that our partners—human rights advocates—can build scientifically defensible, evidence-based arguments that will result in outcomes of accountability.”

Just an example of real-world scenarios in which “justified true beliefs” are important.  (Full disclosure, the HRDAG founder Patrick Ball is my brother.)