Introduction

Paul Thagard’s article “Explanatory Coherence” presents a negative coherence theory based on seven principles. These principles establish relations of local coherence between hypotheses and other propositions. Thagard presents the principles through their implementation in a connectionist program called Echo, in which coherence and incoherence relations are encoded by excitatory and inhibitory links, respectively. Echo’s algorithm rests on considerations of explanatory breadth, simplicity, and analogy, and Thagard argues that Echo simulates human reasoning in accepting or rejecting hypotheses.

Explanatory Coherence

Thagard defines explanatory coherence as propositions “holding together because of explanatory relations” (436). The author’s definition of explanatory coherence can be understood as: “(a) a relation between two propositions, (b) a property of a whole set of related propositions, or (c) a property of a single proposition” (436). Thagard claims that “(a) is fundamental, with (b) depending on (a), and (c) depending on (b)” (436). The author focuses on the acceptability of a proposition, which depends on its coherence with the other propositions in its system: the greater the coherence of a proposition with other propositions, the greater its acceptability. Thagard also states that although explanation is sufficient for coherence, it is not necessary; two propositions can cohere for nonexplanatory reasons, as in deductive, probabilistic, and semantic coherence. Incoherence occurs when two propositions contradict each other.

Thagard establishes seven principles as the makeup of explanatory coherence (a minimal sketch of how the first five might translate into network links follows the list):

  1. Symmetry: asserts that pairwise coherence and incoherence are symmetric relations.
  2. Explanation: states (a) that “what explains coheres with what is explained” (437); (b) that two propositions cohere if together they provide an explanation (437); and (c) that theories with fewer propositions are preferred, since the degree of coherence is inversely proportional to the number of propositions in the explanation.
  3. Analogy: states (a) that analogous hypotheses that explain analogous evidence cohere; and (b) that “when similar phenomena are explained by dissimilar hypotheses, the hypotheses incohere” (437).
  4. Data Priority: assumes that propositions describing the results of observation have a degree of acceptability on their own, because they were obtained by methods that tend to lead to true beliefs (464); if such a proposition doesn’t fit with other beliefs, then it has to be “(un)explained away” (Professor Khalifa).
  5. Contradiction: propositions that contradict each other, whether syntactically or semantically, incohere.
  6. Acceptability: proposes (a) that “we can make sense of the overall coherence of a proposition in an explanatory system from the relations established by Principles 1-5” (438); and (b) that a hypothesis that accounts for only some of the relevant evidence in the system is a weak hypothesis.
  7. System Coherence: the local coherence of propositions dictates the explanatory coherence of the whole system.
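
To make the first five principles concrete, here is a minimal sketch of how they might be translated into symmetric network links. This is a hedged illustration, not Thagard’s program: the function names, proposition labels, and weight values are assumptions chosen for readability.

    # Sketch: turning Principles 1-5 into links. Names and weights are
    # illustrative assumptions, not Thagard's actual code.
    links = {}

    def cohere(p, q, weight):
        # Principle 1 (Symmetry): one undirected link per pair of propositions.
        key = frozenset((p, q))
        links[key] = links.get(key, 0.0) + weight

    def explain(hypotheses, evidence, excitation=0.04):
        # Principle 2: the hypotheses cohere with the evidence and with each
        # other; the weight is divided by the number of hypotheses, so
        # simpler explanations yield stronger links.
        w = excitation / len(hypotheses)
        for h in hypotheses:
            cohere(h, evidence, w)
        for h1 in hypotheses:
            for h2 in hypotheses:
                if h1 < h2:
                    cohere(h1, h2, w)

    def analogous(h1, h2, excitation=0.04):
        # Principle 3: hypotheses explaining analogous evidence cohere.
        cohere(h1, h2, excitation)

    def contradict(p, q, inhibition=-0.06):
        # Principle 5: contradictory propositions incohere (inhibitory link).
        cohere(p, q, inhibition)

    # Principle 4 (data priority) is handled by a special unit linked to the
    # evidence; see the settling sketch in the Echo section below.
    explain(["H1"], "E1")         # a one-proposition theory
    explain(["H2", "H3"], "E1")   # a two-proposition rival: weaker links
    contradict("H1", "H2")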

A Connectionist Model: Echo

Thagard introduces connectionist models to aid the reader’s understanding of Echo. Connectionist techniques describe networks in terms of units that excite or inhibit other units. For example, when viewing the Necker cube, if one perceives corner A as lying on the front face, one must also perceive corners B, C, and D as lying on that face (439). Focusing attention on A is termed “activating” A, and A in turn excites B, C, and D. Ultimately, the connectionist model reflects a holistic approach to perception (438).
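
As a concrete illustration of excitation and inhibition, the toy network below encodes the two rival Necker-cube interpretations as two clusters of corner units. This is a plausible reading of the example, not code from the article; the labels and weight values are assumptions.

    # Two rival readings of the Necker cube: corners A-D on the front face
    # versus corners E-H on the front face. Weights are illustrative.
    EXCITATORY = 0.1   # corners in the same interpretation support one another
    INHIBITORY = -0.2  # corners in rival interpretations suppress one another

    face1, face2 = ["A", "B", "C", "D"], ["E", "F", "G", "H"]
    links = []
    for face in (face1, face2):
        links += [(p, q, EXCITATORY) for p in face for q in face if p < q]
    links += [(p, q, INHIBITORY) for p in face1 for q in face2]

    # "Activating" A (attending to it as a front corner) spreads activation to
    # B, C, and D through excitatory links while suppressing E-H, so the
    # network settles into one coherent reading of the whole cube.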

Thagard argues that Echo simulates human reasoning. Like other connectionist models, Echo is built on excitatory and inhibitory links: if Principles 1-5 state that two propositions cohere, an excitatory link is established between them; if they incohere, an inhibitory link is established. Echo gives each link the same weight in both directions because coherence relations are symmetric (Principle 1). However, if an explanation involves a larger number of propositions, the degree of coherence between each pair of propositions decreases, and Echo proportionally lowers the weight of the excitatory links (Principle 2). Hypotheses that explain analogous evidence cohere with each other (Principle 3). When the network is run, activation spreads from a special unit that always has an activation of 1 and is linked to the evidence units, giving the data a degree of acceptability on their own (Principle 4). Units joined by inhibitory links compete for activation: as one unit’s activation rises, it suppresses the activation of the other (Principle 5) (439).
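
The settling process can be pictured with a short sketch. The update rule follows the standard connectionist scheme Thagard describes (activation decays toward zero while net input pushes it toward a maximum of 1 or a minimum of -1); the specific units, weights, and parameter values here are illustrative assumptions, not Thagard’s published settings.

    # Hedged sketch of an Echo-style settling loop. Positive weights encode
    # coherence, negative weights incoherence; all links are symmetric.
    weights = {
        ("H1", "E1"): 0.04,       # H1 explains E1, so they cohere
        ("H2", "E1"): 0.04,       # rival hypothesis H2 also explains E1
        ("H1", "H2"): -0.06,      # H1 and H2 contradict, so they incohere
        ("SPECIAL", "E1"): 0.05,  # data priority: evidence tied to the special unit
    }
    DECAY, MIN_A, MAX_A = 0.05, -1.0, 1.0
    activation = {"H1": 0.01, "H2": 0.01, "E1": 0.01, "SPECIAL": 1.0}

    def net_input(unit):
        # Weighted sum of neighboring activations (links run both ways).
        total = 0.0
        for (a, b), w in weights.items():
            if a == unit:
                total += w * activation[b]
            elif b == unit:
                total += w * activation[a]
        return total

    for _ in range(200):  # iterate until the network is (approximately) settled
        new = {}
        for unit, act in activation.items():
            if unit == "SPECIAL":
                new[unit] = 1.0  # the special unit is clamped at 1 (Principle 4)
                continue
            net = net_input(unit)
            if net > 0:   # positive net input pulls activation toward MAX_A
                new[unit] = act * (1 - DECAY) + net * (MAX_A - act)
            else:         # negative net input pulls it toward MIN_A
                new[unit] = act * (1 - DECAY) + net * (act - MIN_A)
            new[unit] = max(MIN_A, min(MAX_A, new[unit]))
        activation = new

    # Units settling above 0 count as accepted, below 0 as rejected.
    print({u: round(a, 3) for u, a in activation.items()})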

Echo’s parameter values are set by a programmer, so the model is not fully objective. In particular, the variability of Echo’s parameters (tolerance, simplicity impact, analogy impact, skepticism, and data excitation) appears arbitrary. However, Thagard insists that if a fixed set of default parameters applies across a large range of cases, the arbitrariness is diminished (443).
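
Thagard’s defense amounts to freezing one configuration and reusing it across all case studies. The sketch below names the five parameters mentioned above; the numeric values and the glosses are rough placeholder assumptions, not Thagard’s published defaults.

    # Illustrative fixed defaults, reused unchanged across every simulation.
    # Parameter names are from the article; values and glosses are guesses.
    DEFAULT_PARAMS = {
        "tolerance": 0.05,         # how strongly inhibition weighs against excitation
        "simplicity_impact": 1.0,  # how sharply link weights shrink for larger explanations
        "analogy_impact": 1.0,     # scaling applied to analogy-based excitatory links
        "skepticism": 0.05,        # decay pulling every unit's activation toward zero
        "data_excitation": 0.05,   # weight of links from the special unit to evidence
    }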

Applications and Implications of Echo

The reading describes several examples of how Echo supports scientific theories and legal reasoning. The Craig Peyer trial is particularly interesting because it is the only example provided in which Echo’s findings were inconsistent with the actual outcome: the jury’s decision in Peyer’s trial was not unanimous, yet Echo found it easier to reject Peyer’s innocence hypothesis than Chambers’s (452). Ultimately, this example demonstrates a disconnect between Echo and actual human reasoning.

Thagard explores how connectionist models compare to other programs used to develop artificial intelligence (AI). Echo’s major limitation for AI development is that it represents only one method of exploring rationality and cognition (457). Thagard encourages collaboration across many disciplines, such as neuroscience and mathematics, in order to fully understand the mind. He specifically contrasts connectionist models with probabilistic models and explanation-based learning. Thagard acknowledges that probabilistic models are attractive because they are based on axioms; however, probabilistic models cannot evaluate scientific phenomena that lack a statistical foundation (459). Furthermore, connectionist models may resolve issues with explanation-based learning in AI. Some explanation-based learning systems perform hypothetical experiments to identify causal relationships, but this method is not practical for complex theories. Connectionist models can enhance these systems by comparing multiple explanations to select the strongest relationship (459).

Thagard also discusses the role of connectionist models in psychology, specifically in attribution theory, discourse processing, and conceptual change (459-461). He highlights that Echo models how individuals attribute causes to their observations of their surroundings (460). However, Echo cannot simulate how individuals consider alternative reasons for another individual’s behavior, such as that the individual was coerced (460). Thagard commends Echo for mapping human discourse processing, including interpreting a question as a request or decoding a story; he argues that individuals constantly evaluate hypotheses about others’ intentions and about meanings in texts (460). Lastly, Thagard praises Echo for simulating the shifts in subjects’ beliefs as they learn a new phenomenon and are given more evidence to aid their understanding (460).

In terms of their implications for philosophy, connectionist models conflict with several other theories. Echo employs a holistic approach because it evaluates a hypothesis in relation to other propositions within a larger system; however, it departs from strict holism in that it can also consider local relations between propositions (463). Second, Thagard acknowledges that Echo conflicts with probabilistic theories: Echo characterizes propositions not by their support under the probability axioms but by degrees of activation. As in his discussion of probabilistic models in AI, Thagard claims that some concepts cannot be assigned a probability (464). Third, Thagard asserts that explanatory coherence conflicts with confirmation theories, because these theories evaluate a hypothesis by the number of observed instances of its claim; he believes this method has limitations similar to those of the probabilistic models used in psychology and philosophy (464). By considering Echo’s implications for various fields of study and real-world scenarios, we may better understand its advantages and weaknesses.

Critiques of Echo

Thagard acknowledges that Echo’s limitations include programmer bias, the lack of a definition of “explanation,” the normative-descriptive dichotomy, and dependence on causal relationships and on the seven principles. As previously discussed, a significant limitation of Echo is that the programmer encodes the initial data; this individual therefore has agency in how the claims are framed, which might reduce the reliability of the outputs (453). Another restriction is that Echo is based on explanatory relationships between propositions, yet there is no consensus on the definition of an “explanation.” Furthermore, Thagard advocates viewing explanatory coherence as both descriptive and normative, that is, as describing both how people do reason and how people should reason (465); it is unclear whether Echo itself is descriptive or normative. Additionally, Echo can only analyze causal relationships (454). Lastly, the quality of Echo depends on the quality of the seven principles; since these principles could be disproven or expanded upon, Echo in its current state might not be the best version of itself (456).

Questions

  1. What are the advantages of using Echo and its parameters?
  2. Thagard supports explanatory coherence on the basis of the theory’s connection with acceptability. How can we be sure that an explanation is acceptable? What is an explanation?
  3. In supporting a hypothesis, Echo distinguishes between the strength of propositions and the number of propositions. How do humans evaluate this difference?
  4. Consider the example of Craig Peyer’s trial and how the outcome of Echo was inconsistent with the outcome of the jury. What differences does this example suggest between Echo and human rationality? Is Echo a prescriptive or descriptive model?

 

11 thoughts on “Paul Thagard, ‘Explanatory Coherence’ (Ariana Mills and Jess Gutierrez)”

  1. Sorry for the rather late reply. I am optimistic about our ability as humans to eventually generate a model of our own thought processes. It is a difficult task, as Alex points out, since there are many components that we ourselves are not even aware of when we think. But the models serve specifically to shed light on those unknown processes, as we, as philosophers and scientists, are able to make educated guesses about how thinking functions; and empirical evidence is the best support for validation. I do share the concern that the system might be arbitrary, as initial calibrations are indeed made by the engineer of the system him/herself; however, that is common across all systems, and moreover it is the foundation of personal preferences. I fail to see why the system should not have some bias, since we have many biases in our daily lives as human beings; my dislike of mushrooms does not necessarily translate into someone else’s web of coherent propositions.

    The issues I take, therefore, are more specifically with the principles that Thagard has proposed. For instance, the inclusion of Occam’s razor in the explanation principle (that theories with fewer propositions are preferred) is, for me, rather strange, as the “degree of coherence” should not be “inversely proportional to the number of propositions” (437); coherence is not a mutually exclusive thing that sums to a finite number. I believe that it might in fact be the opposite: an increasing number of coherent propositions together creates a stronger relationship among all of them, with regard to other clusters of nodes in the system. This demonstrates that the proposition itself is better documented, and that there is a range of information that we know about it. To propose an inverse relationship between the two seems to be a reductionist move, in my opinion.

    1. Similarly, the incoherence between two dissimilar hypotheses that explain “analogous” (437) phenomena strikes me as strange, since there is no necessity for them to “resist holding together” (436), especially given Thagard’s harsher definition. It is illogical to assume that similar results must be produced by similar causes, as countless counterexamples can be found.

      Apart from these singular objections, which can be altered easily with further developments of the model, I find Thagard’s explanation of the advantages of a coherence-based system very compelling, as indeed human thinking is rarely based on pure Bayesian evaluation of probabilistic outcomes. Thus AIs based on this system seem to have a greater degree of flexibility and an increased ability to “learn,” rather than having to accept initial coding in order to function.

  2. I share a lot of the same concerns about the ECHO model that have been brought up here, especially regarding the disparity between the system and how people actually reason. Thagard talks about the model as “a natural measure of global system coherence” (438) and yet, if it is attempting to reflect cognition, it lacks so much of the global system of human reasoning as mentioned by others on this blog. I’m quite doubtful that all cognitive processes we undergo are able to be quantified, weighed, and predicted by any system we create. Does it seem possible to others that we could ever be aware enough of our own reasoning so as to break it down into component parts? Isn’t this awareness required before we can tackle representing the interactions realistically via artificial intelligence?

    I think it will always be problematic in attempting to model human reasoning that the input production cannot be automated, as there will always be some form of programmer bias involved (if not the direct input, then the automated process itself will be influenced by some programmer). Is there any way for this to not be the case? Won’t the “natural language system” or the “integrated system of scientific reasoning” (454) mentioned by Thagard always be limited by the creator? Just as with how attention affects our “basic” visual processes, won’t the focus of the programmer always have some effect on the outcome of the system?

  3. As Max first pointed out and many of my classmates have vehemently agreed, Thagard’s claim that ECHO is an accurate representation of how individuals reason is something I’d classify as a BS claim in class. He claims that the seven principles that make up explanatory coherence represent the way in which individuals process their reasoning. Most humans do not coldly calculate the weight and strength of certain pieces of evidence and hypotheses. When we process things, it is very difficult to eliminate the biases that we hold; biases that have formed from our past experiences, beliefs, and emotions. A certain piece of evidence may hold a substantial amount of strength for one individual while holding practically none for another. ECHO does not, and theoretically cannot, account for the different ways in which individuals reason, given that humans are diverse and flawed creatures. Perhaps Thagard’s coherence model reflects how humans ‘should’ think, but it is not a correct representation of the reality. Given that humans vary so much and, at least I would claim, are flawed in some way, could ECHO claim to be a better model for reasoning and drawing logical conclusions?

    In regard to court cases, such as those of Chambers and Peyer, I am not completely certain how ECHO weighs hypotheses and their subsequent evidence. This may reflect my lack of understanding of how ECHO functions, but do different propositions hold varying strengths? I understand how forensic evidence can be applied; my concern is with other forms of evidence, such as eyewitness accounts. I believe, and there is plenty of research that supports this claim, that eyewitness accounts are unreliable. How would ECHO process this type of information? Since this is a proposition based on an individual’s observations, Thagard asserts that it “has a degree of acceptability on its own” through the principle of data priority (437). This would give an eyewitness statement more ‘strength’ than another “hypothesis whose sole justification is what it explains” (438). Would something as unreliable as an eyewitness statement carry more weight in ECHO than other lines of evidence? If so, perhaps this reflects that ECHO does not accurately distinguish between the strengths of certain propositions.

  4. In reading the blog post and classmates’ responses, I am most interested in fleshing out where technology like Echo fits in with the conversations we’ve been having in class and in the two prior units. For instance, I’m curious about the relationship between Echo and the neural nets we read about in the naturalizing epistemology unit; we learn, for instance, that Echo’s values are set by a programmer, whereas the neural nets take in a large quantity of data and “set their own values” as a result. Outside of programmer bias, which is discussed in the article, and with more of a focus on the different ways and reasons for creating an AI in the first place, how does this discrepancy of models speak to the various ways and theories of how humans learn and reason, both examined in this article and examined previously in class?

  5. I agree wholeheartedly with the claims that have already been put forth. I think the principles of explanatory coherence underlying the ECHO system appear to be a reasonable basis on which to run the system. However, as Jess and Ariana mention, the principles still have room for improvement and alteration (or could be disproven completely) (456), which makes me question the reliability of the system. Also, as Max mentions, for the system to mirror human reasoning, it becomes necessary for humans to adopt the 7 principles and reason accordingly.

    I also take issue with the “data priority” principle. Thagard asserts that “a proposition describing the results of observation has a degree of acceptability on its own…[and] it can stand on its own more successfully than can a hypothesis whose sole justification is what it explains” (437-438). This brings me back to foundationalism and the idea that there are some basic beliefs that are in a class of their own. Why do the propositions that arise from observation get priority? Thagard goes on to say that, based on background knowledge and experience, we know that certain observations are likely to be true unless proven otherwise. Two things come to mind here: the role of perception and its reliability in forming a basis for our beliefs, and the role of previous knowledge and how it informs our beliefs. How do previous knowledge and the concept of occurrent and non-occurrent beliefs work into the ECHO system? Can they work into the system? How do you determine in the ECHO system which propositions should be linked with the special data priority unit?

    In addition, as Neve mentioned, emotions, prior beliefs, and many external factors deeply affect people’s beliefs and reasoning. Although ECHO is adaptable, it does not appear to be able to take into account many necessary “human” factors that go into reasoning.

    Thagard did not speak to the concepts of belief and justification, and I am wondering how those play into his theory of coherence. It could be that he just called beliefs by another name, but it seemed to me that he wanted to distance himself from that way of thinking to take a broader/different perspective.

    Lastly, the jargon threw me off when considering how everything was connected within the system. What are the relationships between hypotheses, propositions, evidence, explanations and which are synonymous?

  6. Thagard rejects the dichotomy between descriptive and normative (prescriptive) in his work, claiming that the seven principles of explanatory coherence applied in ECHO represent the way in which people should think and in fact do think. I have several questions/concerns with Thagard’s claim. My primary concern is that ECHO does not actually give a very good descriptive account of how people reason. While Thagard claims his coherence model applies to scientific, legal, and daily reasoning, it is hard to accept the idea that daily reasoning looks anything like this model of explanatory coherence. To accept this claim, it would need to be the case that the majority of people employ the seven principles of coherence when reasoning. The corollary to this claim is that people, in general, actually reason as they should. Both claims seem pretty unlikely.
    Therefore, it seems like Thagard’s model could only represent a very small number of instances of reasoning in the realms of science and law.

    However, are Thagard’s examples of reasoning in science and law useful? Both the phlogiston example and the Darwin example seem to have proposition sets that are heavily skewed toward what is established as the “obvious” conclusion; these are not close cases. Furthermore, the example of legal reasoning, the Peyer trial, does not result in the same conclusion as the jury’s. Does this damage Thagard’s claim that his account is truly descriptive?

    My next question involves clarification:

    Do Thagard’s principles of explanatory coherence use the principles of both negative and positive coherence? Principle four, the principle of data priority, seems to be a form of negative justification, while the other principles seem designed for positive coherence. Perhaps I just need some clarification here. Honestly, I’m not 100% clear on how ECHO works.

    Finally:

    Given that ECHO is a computer program does it really address the problem of perception inherent in the doxastic assumption?

    1. Like Max, I also had difficulty accepting the claim that ECHO accurately explains the way humans actually reason. Similar to the way I felt about the Churchland article relating to connectionist models, I find using a computer program like ECHO to be a pretty reductionist way of viewing human cognition and reasoning. Not only do I find it hard to believe that the “usual” way most people reason is by using the seven principles he outlined (like Max), but I also feel it is missing a lot of other factors that tend to influence the way we reason. I feel that emotion, for example, plays a huge role in the way a belief may be accepted or rejected (i.e., we might really want to believe something is true because it benefits our happiness, so we convince ourselves that it coheres with our other beliefs). Is it possible that emotion played a role in the Peyer case, causing the human jury to be undecided but ECHO (which does not include emotion) to determine a guilty verdict? Aren’t there other things that influence the way that we accept things to be true?

      Thagard argues that this theory and ECHO simulate actual human reasoning. But if the “acceptability” principle of this theory is based on the idea that the more evidence there is, the more acceptable a hypothesis will be, how can we explain something like religion? Religion is based on faith, which means that we accept it to be true even when there isn’t a lot of clear-cut evidence to support it. I see this theory working more in the scientific field, but I do find it difficult to apply to actual human reasoning and beliefs.

    2. Like Max and Audrey, my main problem with the Echo system is the claim that it accurately portrays human decision-making/reasoning. Much of the way humans view the world is shaped by our past experience, and this experience influences whether or not we believe a hypothesis coheres with our already held beliefs. How can Echo, as a computer program, accurately demonstrate the path toward explanatory coherence when humans evaluate coherence in light of their own experiences and this program has none?
      Furthermore, as Ariana and Jess mention in their summary essay, Echo’s values were input by a human and are therefore biased in terms of assessment, and while we can generalize trends in human decision making, we can never separate ourselves from our personal experience. Using Echo as a basis for explanatory coherence, given its restrictions, does the program make Thagard’s theory more or less acceptable? How do we lay out the parameters of acceptability, and when a hypothesis only explains some of a system, does that necessarily mean it is a weak hypothesis, or do the unexplainable aspects of the system just not fit?
      Another problem I have with Echo comes when there happen to be several explanations for the same phenomenon. Does the program take that into account? Does it weigh one explanation more than the other? Theoretically yes, it will weigh the explanation that coheres best, but is that coherence always clear?
      Essentially, I’m skeptical that Echo accurately mimics how humans reason and I’m also just really confused about how the system chooses to weigh certain factors and explanations more than others, what is its basis?

  7. Similar to Emily, I also question the principle of data priority, in that it requires no more justification than “results of observation” for a proposition to be considered acceptable to some degree (437). I am more in favor of the explanatory coherence requirement, which states that “we should not take propositions based on observation as independently acceptable without any explanatory relations to other propositions” (438), as Thagard himself acknowledges.

    I think that this is a good example of why in the case of Craig Peyer the jury was hung but ECHO found him guilty. Much of the evidence was circumstantial, which humans know to process as merely possible rather than justifying. However, the observations that ECHO processes allow it to form much more concrete beliefs given data priority. Additionally, jurors are subject to emotions, to beliefs prior to and outside this trial that may impact their perspective, and to convincing/swaying arguments from both sides of the court. Since ECHO does not factor these into its decision, the data priority principle allows it to come to more rapid conclusions. Whether this is a positive contribution, I am torn. Generally I find myself to be a “facts over emotion” type of person, yet when it comes to murder cases, ECHO fails to acknowledge the judicial system’s “beyond a reasonable doubt” requirement.

    That being said, I do believe that many other things can be explained by ECHO, since they do not incorporate any subjective processing, such as many scientific or mathematical beliefs. If we were to decrease reliance on human reasoning, is ECHO capable of providing rationalization and explanation for us? Can ECHO function properly and be justified if the data priority principle is undermined or disproven?

  8. Echo offers a scientific way to justify a proposition on the basis of its connections with other propositions. It is strong in that explanations are accepted when their relationships to many other propositions cohere: the more activation in the system, the stronger the assurance that the proposition fits. The seven principles it is based on allow Echo to be applied to scientific, philosophical, and psychological reasoning. However, this reasoning is formed from how Echo literally defines the seven principles in the computational program, and the fact that the system can be set to different parameters for degrees of freedom and tolerance calls into question the value of the explanations the connectionist models are producing (439).

    One factor that calls into question whether Echo can provide acceptable explanations is the way in which acceptability is defined and addressed. Thagard claimed that acceptability depends on but is detachable from explanatory coherence, yet whether the propositions are activated in the system appears to depend on their coherence with other propositions. In the context of Echo, acceptability depends on the weight of the links between the special unit and the data units. However, I question whether Echo is accurate, because it seems that the system relies on the inherent acceptability of the propositions themselves. In the phrase “acceptability of P in system S depends on coherence with the proposition S,” can you rely on S (436)? Are there certain propositions that can be inherently accepted, and if so, doesn’t that mean that some propositions are privileged over others? As a result, I am also skeptical of the principle of data priority, which states that “propositions that describe the results of observations have a degree of acceptability on their own” (437). In addition to Ariana and Jess’s question of what an explanation is, I ask whether Thagard’s definition of acceptability in the context of Echo is a reliable way to support or refute explanations themselves.

    Ariana and Jess illustrate that probabilistic models cannot evaluate scientific phenomena that lack statistical foundations (464). Thagard argues that connectionist models are more psychologically plausible because comparisons can be made between multiple explanations, which in turn strengthen the relationship. This notion closely resembles holistic theories, in which beliefs are justified based on their relation to all the other beliefs a person has. However, is it necessary for Echo to also consider propositions on the basis of their support under the probability axioms (464)? I think that probability takes into account past experience and reasons to form a supportive claim. Thagard acknowledges that Echo is limited in that it represents one method for exploring rationality. Echo provides a systematic way to analyze causal relationships, but what about relationships that aren’t causal? Echo relies on degrees of activation, but is this quality sufficient to include the experience and processing that are influential in cognition? Since Echo cannot simulate how individuals consider alternative reasons for another individual’s behavior, do the activations fail to represent which propositions are truly justified, or in this context, “explained” (460)?
