Introduction

Reasoning is a special kind of inference that consciously produces new mental representations from other consciously held representations, and Mercier and Sperber argue it is unique to human beings. Specifically, they define reasoning as the production of a conscious conclusion from consciously held premises (i.e., consciously held reasons), a process that rests on an intuitive grasp of the premise-conclusion relationship. In their article, they set out to investigate both how and why human beings engage in reasoning, although they focus most closely on the question of why (57).

 

  1. Reasoning: Mechanism and Function

 

1.1 Intuitive Inference and Argument

Mercier and Sperber set out to create their own dual-process approach, distinguishing between intuition (system 1) and logical reasoning (system 2), which they call processes of inference and reasoning proper (58). The outputs of processes of inference are intuitive beliefs, which are formed at a “sub-personal,” unconscious level and for which we are not aware of our reasons for holding them. The outputs of reasoning proper are reflective beliefs, which we hold for conscious reasons. Reasoning proper allows us to represent our own mental representations as well as the representations of others (metarepresentations). It does, however, contain intuitive elements at a fundamental level.

 

M&S give Descartes’ Cogito argument (“I think, therefore I am”) as an example of reasoning proper (59). Although the thinker is able to give reasons for believing the argument that they exist, the fundamental reasons for accepting it as an intuitively good argument are much cloudier. Here the clear-cut dual system breaks down, because the outputs of what was originally considered pure system 2 reasoning must contain some elements of system 1 intuition. The premise-conclusion relationships in an argument must be intuited at an unconscious level if one is to evaluate the argument’s merit. This ability, M&S argue, has a function of considerable evolutionary significance.

 

1.2 The Function of Reasoning

M&S reject the classical view that the evolutionary function of reasoning is to enhance individual cognition. Under this classical view, system 2 reasoning works by correcting mistakes in system 1 intuitions. They also reject the hypothesis that reasoning functions as a mechanism by which organisms can respond favorably to novel environments, which they argue is simply learning. Instead, they propose that reasoning evolved as a form of epistemic vigilance, allowing the senders and receivers involved in interpersonal communication to evaluate the information and arguments being exchanged (60).

 

This proposed evolutionary origin of reasoning stems from the psychological concepts of trust calibration and coherence checking. Put simply, individuals must be able to quickly and accurately evaluate new information received from other individuals in order to avoid being misled. With a mechanism that allows communicators to effectively produce and evaluate arguments, the information that humans are able to share increases in both quantity and epistemic quality. This ability would be strongly selected for in an environment in which the fast exchange of accurate knowledge is essential. In other words, the main function of reasoning is argumentative (60).

 

  2. Argumentative Skills

 

2.1 Understanding and Evaluating Arguments

M&S note a common conception that people in general are not very skilled arguers, which, if accurate, would pose an insurmountable obstacle to their theory that the primary function of human reasoning is argumentation (61). However, studies on persuasion and attitude change have shown that “when they are motivated, participants are able to use reasoning to evaluate arguments accurately.” Logical performance in reasoning research has been notoriously poor, but M&S argue this is due to the abstract, decontextualized nature of the tasks; in an argumentative context, people perform much better on the same kinds of tasks. Fallacies of argumentation are different from logical fallacies, and people generally perform well both at identifying argumentative fallacies and at rejecting or accepting them as appropriate in context.

 

2.2 Producing Arguments

Previous studies would also seem to indicate that people are generally unskilled in argument production. M&S argue that these apparent deficiencies actually stem from the context of the experimental tasks, which were ill-suited to reasoning’s argumentative function. In fact, people will use relevant data when they have access to it, they will develop more complex arguments if they anticipate any challenge to their assertions, and they are capable of formulating counterarguments so long as the argument being challenged is not their own (62).

 

2.3 Group Reasoning

M&S claim that prior reasoning research shows that in group settings, the dominant scheme is “truth wins.” Individuals show large improvements on reasoning tasks after group debate (an increase from roughly 10% to 80% correct responses on the Wason selection task) (63). Transcripts of group discussions, along with the assembly bonus effect, suggest that this improvement reflects genuinely better reasoning and not simply some members deferring to other, “smarter” ones. M&S’s theory also predicts a strengthening of group opinion in artificial, nonoptimal group settings where members already agree.

 

  3. The Confirmation Bias: A Flaw of Reasoning or a Feature of Argument Production?

According to M&S’s theory, confirmation bias is not a flaw of reasoning but rather a feature. In some cases of “confirmation bias,” people are simply trusting their beliefs by drawing positive inferences from them; no proper reasoning has occurred. On their theory, true confirmation bias should occur only in argumentative situations and only in argument production. It is not a general confirmation bias, but rather a bias toward confirming one’s own claims, alongside an intact ability to refute those of others (64).

 

3.1 Hypothesis Testing: No Reasoning, No Reasoning Bias

M&S argue that people’s poor performance in hypothesis testing is not due to reasoning at all. Lacking an argumentative setting, participants simply adopt a “positive test strategy” in an intuitive manner. If, instead of producing their own hypothesis, people are given one from someone else, they do employ reasoning and are better able to falsify it.

 

3.2 The Wason Selection Task

M&S claim that poor performance on the Wason selection task occurs because the wording of the rule itself triggers intuitive mechanisms of comprehension that cause people to focus on certain cards and infer an answer; subsequent reasoning processes serve only to justify the answer already formulated. Participants in argumentative group settings perform much better on the task, as do people who are highly motivated to disprove the rule provided (65).

 

3.3 Categorical Syllogisms

People perform poorly on categorical syllogisms because solving them correctly requires producing counterexamples to one’s own conclusion. They perform much better if the conclusion is unbelievable or if it is provided by someone else.

 

3.4 Rehabilitating the Confirmation Bias

The confirmation bias is traditionally viewed as a dangerous defect in reasoning, caused by cognitive limitations. However, the fact that people are quite adept at falsifying propositions when motivated to do so casts doubt on that causal story, and the consequences of the bias are disastrous only in abnormal contexts of prior agreement (whether inter- or intrapersonal). In the more felicitous context of a group resolving a disagreement, the confirmation bias amounts to an efficient division of cognitive labor (65). High performance in group reasoning tasks suggests that the confirmation bias is present primarily in argument production, not evaluation.

 

  4. Proactive Reasoning in Belief Formation

Most of our beliefs go unchallenged because they remain unexpressed or are relevant only to ourselves. If we identify a particular belief of ours as potentially contentious, we regard it as an opinion and may search proactively for arguments to justify it, a phenomenon studied as “motivated reasoning” (66).

 

4.1 Motivated Reasoning

In a study in which participants were given a fake medical result, they tended either to discount the rate of false positives provided, or utilize it to undermine the test, depending on whether their result was positive or negative. If this were due to wishful thinking, participants could dismiss the test entirely, but instead they produced arguments to support their opinion. To M&S, this motivated reasoning is targeted at justifying beliefs to others; any personal belief revision in the name of truth-seeking is incidental.

 

4.2 Consequences of Motivated Reasoning

 

4.2.1 Biased Evaluation and Attitude Polarization

When participants are presented with a study either confirming or attacking their prior position on the death penalty, they are more likely to criticize the methodology if the conclusion reached differs from their own. M&S interpret this as evidence that people’s goals in such a situation are “argumentative rather than epistemic” (67). Additionally, people spend more time evaluating an argument contrary to their own opinion, as rejecting the argument requires justification that accepting it does not.

 

4.2.2 Polarization, Bolstering, and Overconfidence

When people think about a stimulus regarding which they have a prior decided opinion, they tend to polarize and strengthen their existing attitude, rather than reevaluating it. This tendency increases with time spent thinking, motivation to think, and the number (if any) of explicit arguments the person puts forth supporting their opinion (67). Being publicly committed to the opinion results in bolstering, an increased pressure to justify the opinion rather than change it, an effect which is strengthened by heightened accountability. Providing an answer to a question causes people spontaneously to produce justifications for their answer, resulting in subsequent overconfidence.

 

4.2.3 Belief Perseverance

Belief perseverance depends on the orientation of people’s intuitive inferences, and whether evidence presented supports these inferences, rather than on the order of evidence presented, indicating that belief perseverance is simply a special type of motivated reasoning (68).

 

4.2.4 Violation of Moral Norms

The study of moral hypocrisy shows that reasoning is better suited to justifying people’s actions than to guiding them (serving an argumentative rather than a moral or epistemic goal) (69). The effect of moral hypocrisy on certain judgments can be eliminated by introducing cognitive load during the judgment process, thereby interfering with reasoning.

 

  5. Proactive Reasoning in Decision Making

Mercier and Sperber argue that the main role of reasoning is performed in anticipation of the necessity of defending a decision. This process does not always result in the weighing of pros and cons in a reliable way (the classical view of reasoning), as has been shown by extensive empirical evidence (69).

 

5.1 To What Extent Does Reasoning Help in Deciding?

Many studies have shown that decisions based on careful, conscious reasoning are actually poorer than those produced by unconscious decision-making processes that are not based on carefully stated reasons (69). Most decisions are made intuitively, and those made through reasoning are often easy to justify but not necessarily the best available (69).

 

5.2 Reason-Based Choice

People’s bias toward decisions that are readily justifiable leads them to make a number of classically “irrational” choices in order to avoid the risk of criticism. This phenomenon, termed reason-based choice, causes people to make “mistakes” on tasks designed to measure rationality: options that can be supported by reasons are favored regardless of how good those reasons are, which can result in irrational decisions (70).

 

5.3 What Reason-Based Choice Can Explain

Reason-based choice, M&S argue, can explain a great number of the well-known challenges to human rationality. They list the disjunction effect, the sunk-cost fallacy, framing effects, and preference inversion as examples from empirical psychology of reason-based choice in action (70). What all of these examples have in common is that they provide significant evidence of cognitively unsound uses of reasoning (71). M&S characterize these deviations from rationality as the misuse of an evolutionarily favorable mechanism for decision making. As they argue throughout the text, reasoning most likely evolved to function in a social context, allowing people to anticipate the arguments they will need in order to have others take their beliefs and decisions seriously. At its core, their argument is that the function of reasoning is to lead people to justifiable decisions, not necessarily good decisions (as defined by a classical conception of rationality). The instances in which this distinction matters (i.e., in which the justifiable and the good come apart) are rare, and therefore do not pose a significant threat to their argument.

 

  6. Conclusion: Reasoning and Rationality

Reasoning allows human communication to be both reliable and potent, and it benefits both senders and receivers in the exchange of information. This argumentative theory of reasoning shows that “irrationality,” as it is classically understood in psychology and philosophy, is merely the result of the human tendency to systematically look for arguments to justify beliefs and actions. As Mercier and Sperber demonstrate with their review of many reasoning tasks, people engaged in argumentation favor arguments that support their own views when they have “an axe to grind,” but truth wins when all participants have an equal interest in discovering the right answer to a problem (72). This means that truth does not necessarily always win, but the best arguments do. With time and with enough participants engaged in conversation, however, the best arguments will tend to converge on the truth.

 

Discussion questions:

  1. Does an argumentative function of reason have disastrous moral or epistemic consequences?
  2. Can the biased features of argumentative reasoning be effectively modulated by group debate or is another solution in order?
  3. How does Mercier and Sperber’s model of reasoning differ from complex learning mechanisms? How might this be selected for in an evolutionary context?
  4. Does this conception of reasoning fit into a normative or a descriptive model of rationality?

11 thoughts on “Hugo Mercier & Dan Sperber (2011) ‘Why do humans reason? Arguments for an argumentative theory.’ – Eliza Jaeger and Kristin Corbett”

  1. I found M&S’s article to be very compelling. Their idea that reasoning evolved in a social context makes a lot of sense to me, as I’ve suggested several times in class. I had been imagining it developing as a tool for humans to use to better collaborate and build alliances, though, rather than, as M&S describe it, as an “argumentative” tool to persuade others or to justify one’s own beliefs (p60). Can reasoning skills serve both purposes: a base one of persuading or justifying and also a more altruistic one of collaboration/alliance building? Is the only difference between the two one of conscious practice and application? What role does education (generally, and in classic reasoning skills specifically) play in one’s ability to use reasoning for a higher purpose?

    I found the section on violation of moral norms especially troubling, where they give evidence to suggest that “epistemic or moral goals are not well served by reasoning. By contrast, argumentative goals are: People are better able to support their position or to justify their moral judgements” (p68). Yet they go on to say that in collective settings over many generations great achievements of moral and epistemic human thought can occur (p72). How do we leverage these successes in order to get more of these “good” outcomes in human interactions?

  2. There are definitely aspects of Mercier & Sperber’s argument for the argumentative theory that compel me. Per our class discussion last Tuesday, these authors also highlight the social component of rationality, especially in how they argue that the main role of reasoning happens in anticipation of the necessity of defending a decision, which does not always result in weighing the pros and cons per the classical (and arguably, normative) view of reasoning, and instead, has roots in evolution and allows people to form the arguments that they need socially–that is, as you said, for other people to take them seriously. I am struck by the distinction between justifiable decisions and “good” decisions (as defined by the classical definition of rationality), and how inextricable that is from social/cultural/contextual factors.
    Actually, with respect to the section “Group reasoning, 2.3” where M & S show research that in group settings, “truth wins”–the reason I won’t be in class for this presentation is an interview at this lab at Virginia Tech, and I was reading their research and came across a neuroscientific gambling study evaluating group decision making, and how our inherent risk-taking preferences affect how we view and act on information from other people. Based on their study, the article is quoted as saying “The [neuro]science behind choice becomes more complex depending on how much weight someone gives to the decisions of other people.
    No one in the testing group knows how the gambles will play out, but people still tend to conform with others. The likelihood that you will be nudged depends on how much you value what others say.” (Full article here, it’s actually an interesting read: http://research.vtc.vt.edu/news/2015/may/18/brain-scanning-reveals-birds-feather-really-do-flo/). Research like this makes me gravitate toward an argumentative theory a lot more than potential universal norms for rationality. Can we even consider rationality without taking setting and social factors into account?

  3. Like the others, I found this article particularly interesting in thinking and bringing a new perspective to human reasoning that we have not yet touched on (at least in very much depth) in class. My first question is how much of a role does intuition and instinct actually play in human reasoning and decision-making? Furthermore, would we consider this intuition normative or descriptive compared to the classical reasoning and reason-based choice? According to argumentative theory, reason-based choice “is what should happen when people are faced with decisions…that can be easily justified and are less at risk of being criticized” (69). If this fulfills the normative, then intuitions must fulfill the descriptive given the frequency that humans use ‘fast and frugal’ heuristics and intuitions in their reasoning. The only problem is that many psychological studies have found that often logic and reason-based choices neither lead to the correct answers nor the best decisions. When an individual is given the time to weigh and sort out the evidence, their justification becomes riddled with bias to confirm and defend their final decision. M&S argue that “people are good at assessing arguments and are quite able to do so in an unbiased way” (72), but such biases are imminent when it comes to producing arguments as a function of reasoning. Therefore, if humans tend to make better decisions based on intuition, don’t you think that this system of reasoning is more rational than reason-based choice? Maybe humans ought to reason based on intuitions alone. Perhaps there is a biological/evolutionary explanation or purpose for intuitive reasoning, such as the decision to run from a charging wooly mammoth. If so, what is the purpose for logical reasoning?

    I am still curious as to how and why we are able to rationally assess arguments, but not produce them? Is it solely because of our ability to communicate and interact with larger groups to fact-check the arguments and work together to reach truthful conclusions? Conversely, is it because the production of arguments requires working alone to defend your position against thousands (or more) of skeptics with different values and beliefs from your own, and therefore it is nearly impossible for your argument to be rational if it does not align with the beliefs and norms of everyone else? If so, it seems like an unfair system to me, and it seems like we should revise our definition of rationality and reconsider what we deem normative and what we deem descriptive in terms of intuition-based and reason-based choice among humans.

  4. I agree with some of the others that M&S’s theory of argumentative reasoning and ideas regarding the nature of human reasoning are very attractive and seem to characterize my experiences with the relationship between knowledge on a topic and human reasoning well. I was particularly intrigued by their section on group reasoning and its power in finding truth. Perhaps I am attracted to this because it supports the very romantic notion that “if we all work together, surely we can find the truth/make things right/solve the world’s problems” and that “we’re better than the sum of our parts” (or the “assembly bonus effect” found by a number of studies on page 63).

    For me this group effect is interesting to think about in regards to the first discussion question about incorporating the argumentative theory into epistemic or moral debates. Perhaps this is too big of a leap, but given the evidence on the ability of groups to come to rational conclusions, how do groups come to extremely immoral conclusions (as — pointed out by Professor Khalifa in class the other day — many have over the course of history)? Is it true that as long as both sides have adequate information, eventually the rational conclusion will surface? And how do these group effects play out across cultures with different standards of rapport between individuals?

    M&S also provide evidence in section 5.1 showing that often the best decisions arise from making intuitive rather than well-reasoned, stepwise judgements (69). Any group debate would involve more decision-making strategies and stepwise judgements, as they are easier to substantiate to others than intuitive judgements. What then is the value of decisions made by group consensus?

  5. While I also agree with much of what M&S point out, I’m not too convinced by their group reasoning claims (Section 2.3), and I found their confirmation bias predictions (Section 3) and their consequences of motivated reasoning (Section 4.2) limited and contradictory, respectively. One thing that M&S don’t mention is how biases play a role with the people themselves. M&S claim that it is not the case that people believe the “smartest” person, but that most participants don’t actually change their mind unless they are thoroughly convinced that their initial answer was wrong (63). However, I wonder if “cultural differences” such as gender, age, sexuality, race, perceived education/knowledge etc., were taken into account when doing those experiments. Or on another note, in relation to research, how do such biases come into play when learning about a new research method/topic?
    M&S also claim that confirmation bias can be used in two scenarios: when people want to confirm their own claims and when people have an “absence of reasoning proper” (63). M&S also propose three predictions: 1) that “genuine confirmation bias should occur only in argumentative situations”, 2) that “confirmation bias should only occur in the production of arguments”, and 3) that confirmation bias is a bias in favor of confirming one’s own claims (64). While I actually agree with these predictions, I considered these predictions examples of a normative model because they themselves admit that the (beneficial) usage of the confirmation bias is limited to particular scenarios, such as “among people who disagree but have a common interest in the truth” (65). I think that people are usually too tangled in their own biases, and because we are social beings, forming a “common interest in the truth” can be really hard. Also, M&S noted that participants produce/search for biased arguments more than they evaluate, even when their task is to evaluate an argument (67). With so much access to education and knowledge, can the thin line between biased production and biased evaluation become thinner or more distinctive?

  6. I also found myself agreeing quite strongly with M&S. While many of the authors we have read claim that poor performance on classical reasoning tests such as the Wason selection task is due to phenomena such as natural irrationality in humans or computational error, the view of these results as positive and purposeful is very interesting. However, while I understand their point that argumentative reasoning facilitates social interactions and communication in a manner that is evolutionarily beneficial, I struggled with their emphasis on how a group setting improves reasoning and leads to more correct conclusions. In Sections 2.2 and 2.3, original studies of reasoning asking participants to respond to a given topic showed that participants resorted to weak explanations such as “it makes sense” and “failed to anticipate counterarguments and generate rebuttals” (62). However, if they were challenged by the experimenter or placed in a group setting with people who had differing views, participants provided much stronger arguments supporting their claim, and the groups even came to better conclusions than individuals who completed the same task. For me, this seems a little counterintuitive. Evolutionarily, natural selection chooses those who are individually able to support themselves the most. How does this “group mentality” serve an evolutionary purpose? Furthermore, M&S contradict themselves later in the paper by saying that people are “proactive and anticipate situations in which they might have to argue to convince others that their claims are true or that their actions are justified.” Although participants in the original studies were asked questions based on topics they did not know too much about, wouldn’t this proactive attitude also encourage them to provide stronger arguments for their claims than just that they “make sense”?

    “Biased assimilation” occurs when people produce arguments to support their personal evaluation of a conclusion. We have recently discussed the effects of ideologies such as normativism and descriptivism on the history of research protocols and techniques. The epistemic shortcomings that can result from this “biased assimilation” make me question how this phenomenon influences interpretation of scientific research results. It is widely acknowledged that the desire to make the outcomes of an experiment fit neatly into the hypothesis of the study and results of previously published data can lead to biased reasoning that causes exaggerated or false conclusions. Does M&S’s discussion of “biased assimilation” provide further support for a descriptivist model in which conclusions on a normative standard should not be made?

    1. The quote about people being proactive and anticipating situations is from page 66, and biased assimilation is discussed on 67. My apologies for leaving out those page numbers.

  7. It seems to me that M&S have a Panglossian view of reasoning. I personally was more of a meliorist when it came to norms of rationality, but in the case of reassessing why we reason and what we try to accomplish with it, I can see myself switching over to a Panglossian point of view. This all being said, I am more interested in their idea presented in section 5 that unconscious thought may be superior to conscious thought, which would argue against reason-based choice. They concede that studies that prove this are hard to replicate (69). I am not necessarily skeptical even though this is the case, because of further literature from Malcolm Gladwell in his book Blink that further explores intuitive decision-making. For the most part, intuitive thinking can be very powerful in reaching the optimal choice. But in one extreme, when explaining the shooting of an innocent Amadou Diallo by a police officer, an intuition-based choice could have been affected by implicit biases and led to the incorrect decision. By their proposition, a “truth” might not be reached by reasoning since this would only support an argument, but could be reached perhaps with enough people engaged in the situation. Unfortunately, the constraints of such a situation would not allow for this… Is there a way to refine reasoning to reach truth, in a scenario where the reasoning is independent of group reasoning?

  8. This article is so interesting! I want very much to accept Mercier and Sperber’s (Panglossian?) theory of argumentative reasoning, but I do have some unanswered questions.

    The authors’ account of human reasoning as chiefly argumentative in function is pretty compelling: It does seem to explain much of the apparently irrational behavior observed on a number of reasoning tasks. We know that people do poorly on the Wason selection task. O&C have tried to explain performance on this task in terms of subjective probability and information gain. M&S suggest that the decontextualized nature (61) of the task is to blame. The Wason task creates an artificial scenario that involves no interaction: The researcher in no way tries to convince participants of something here. As a result, participants are not cued to evaluate the argument presented. Interestingly, when we replace p and q with less abstract terms, such that the new rule in question becomes “If you are drinking alcohol, then you are over 18,” people do make the correct (“rational”) choice. (Thanks Prof. Khalifa for this example.)

    This all makes sense to me, but what becomes of classical logic and its symbolic representations then? Is classical conditional inference/syllogistic reasoning useful only insofar as it structures debate? Is there no intrinsic value to a conditional statement, like If p then q? Don’t classical inference patterns (like modus ponens/tollens) inform non-argumentative situations, too? (For instance: “If a bear approaches me, I should run.” This statement does not really lend itself to debate, but it’s still important that we be able to evaluate it.) Maybe someone can offer a little clarification on this.

  9. Mercier and Sperber make very detailed arguments with so much evidence that it becomes difficult not to agree with them. The section that I thought was the most interesting was section 2.2, “Producing arguments.” While many of these arguments explain reasoning, this particular section explains argument production, which is a good indication of how we reason in the first place. The text explains, “The first is that people resort to mere explanations (“make sense” causal theories) instead of relying on genuine evidence (data) to support their views” (62). When not given evidence, it appears that people do not rely on logic to make decisions, meaning that they are not rational thinkers; they just believe an idea for no reason. Later on they were given evidence, which they did use to justify their decisions. What the authors are trying to get at is that people are capable of being skilled arguers, such as in those cases when they are presented with evidence, but do not always demonstrate those skills, such as is demonstrated in “other nonargumentative, settings.” This idea clearly relies on having resources, evidence, and knowledge beforehand, which, as is demonstrated in Bayesian Probability, is descriptive. Normative models of reality would imply that people can make good and rational arguments in the absence of knowledge (which is not demonstrated in the text), whereas a descriptive model is able to describe situations based only on the knowledge that is present.

    As discussed in previous classes, living in a technology-driven world with information always at our hands, are we progressing towards a more normative model? Does the fact that people can form good arguments when given any evidence mean that even the false information we gather from the internet can help us be more rational and reason how we ought to? On the other hand, like Devon mentioned, the authors argue that having more information means that humans can make more counter-arguments. Does this mean that more information can help us form better arguments, or that we bypass the epistemic process altogether? Which is more important in a normative model, argumentative rationality or epistemic rationality?

  10. Mercier and Sperber present an interesting argument and strong evidence in its favor throughout their article (in my opinion). I was particularly drawn to section 4.2, Consequences of Motivated Reasoning. The “Biased Evaluation and Attitude Polarization” section presented the death penalty debate example to explain a phenomenon that we all experience in our everyday lives: Reasoning is often used to produce supportive arguments to confirm our own already formed viewpoints rather than to assess them objectively. I have noticed this most recently with the presidential election. It is extremely rare to have a constructive argument or discussion with anyone about their viewpoints in this realm. It becomes a back and forth where both people are simply producing arguments rather than evaluating their views objectively. As M&S state, in these discussions people “…are not trying to form an opinion: They already have one” (67). The authors argue that the “goal is argumentative rather than epistemic, and it ends up being pursued at the expense of epistemic soundness” (67). What I found extremely interesting about this idea is that the authors suggest that a higher level of knowledge on a topic can increase the amount of biased evaluations one makes by making “it possible…to find more counterarguments” (67). Their knowledge therefore allows them to form a better argument for their viewpoint, but it can lead to “poor epistemic outcomes”. This differs from the other models we have discussed in class, which suggest that reasoning with a higher level of knowledge and intelligence would increase rationality and help create a normative “ideal” that we could reference in certain situations. What would Normative look like in this model? Because the authors are arguing that the primary function of reasoning is to produce and evaluate arguments in communication, would a Normative model consist of using reasoning to create the most persuasive arguments even if they have epistemic shortcomings?

    Furthermore, I really like how the authors conclude by saying, “Human reasoning is not a profoundly flawed general mechanism; it is a remarkably efficient specialized device adapted to a certain type of social and cognitive interaction at which it excels” (72). Throughout this section of the course, we have grappled with trying to understand the function of reasoning, how people deviate from norms of rationality, and whether this is important at all. I like how M&S connect their model to evolution and argue that people deviate from epistemic norms because our reasoning has evolved to serve a primarily communicative purpose, rather than an epistemic one. Regardless of whether you buy M&S’s argument or not, I think it is important to acknowledge that, although human reasoning is clearly imperfect, it is very successful at serving this argumentative function. Additionally, M&S’s views align with the well supported idea that humans are inherently social beings (as Alex argued in class). Would this model be an example of a truly descriptive model because it illustrates the value in how people actually reason, rather than how they “should” reason according to epistemic norms?
