Monthly Archives: March 2018

Evaluating Theories of Truth

While trying to evaluate various theories of truth, I have found that most theories are self-serving to one’s personal interpretation of truth, rather than distinct ways of viewing the same concept.  Compare these two quotes regarding truth: “Truth is rarely writ in ink.  It lives in nature.” (Martin H. Fischer), and “Every truth bends and is reshaped by other forces.” (Leslie Woolf Hedley).  Fischer, a psychologist, presents an understanding of truth very compatible with realism.  Truth lives independently of us, regardless of our (mostly unsuccessful) attempts to capture or describe it.  He would agree with correspondence theories of truth, although here he contends that most human propositions fail to fully capture these truths.  But if truth is something that exists independently of humans, what does that say about the “truth” of anything intangible that humans create?  Realism states that reality, the state of the world, is consistent for everyone.  However, pragmatists counter that one’s reality is not the same as another’s, and therefore two truths can hold simultaneously because they are relative to the conceptual schemes in which they exist.  This understanding of truth is what Hedley was getting at.  On a spectrum that ranges from “fact” to “opinion”, Hedley’s interpretation of truth would land much closer to “opinion” than Fischer’s.  To me, it is unclear whether either truth theory is more valid than the other.  Each is completely acceptable and sound given its own definition of truth, and each becomes implausible when applied to the other’s understanding of truth.

I’m wondering what course a debate between a realist and a pragmatist would take if they were to agree on a common conception of truth beforehand, or if that would even be possible.  As people with less stake in the outcome of the debate, are we allowed to hold various definitions of truth and change them to fit our situation as needed?  It seems necessary to follow one truth theory consistently, but is it possible to separate out the situations in which the various theories are applicable?

A Paradox of Ecological Rationality?

I just wanted to share a paradox that occurred to me as we concluded our discussion of Gigerenzer’s theory of ecological rationality.

If we take rationality to be the use of heuristics that produce “actual success in solving problems” (Gigerenzer, Bounded and Rational, 123), it may lead us down a slippery slope. Humans are not the only organisms that exhibit rational heuristics by this definition; take a dog, for example. A dog acts very friendly towards its owner, which is a very reliable heuristic for securing a stable food supply and a place to live. Thus dogs are also rational. We could continue to examine increasingly simpler beings and find heuristics that we could call rational. But at what point could we draw the line? Can we draw a line at all? And if a line can be drawn, wouldn’t the definition of rationality consist in (or be defined by) that line?

Here’s a phrasing of the paradox that mirrors the sorites paradox. Humans are rational when they use effective ecological heuristics. A slightly less complex being is rational when using effective ecological heuristics. By induction, a rock is rational when using effective ecological heuristics. One such heuristic is being hard, which solves the problem of persistence, since on Earth hard things tend to exist the longest.

The paradox is that it is exceedingly implausible that a rock is to any extent rational. I’ve thought of two different resolutions: (1) accept that rationality admits of degrees, or (2) claim that rationality also depends on adaptability to different environments.

Supposing (1), we could say that less complex beings are less rational. In effect, rationality would be a function of the successfulness of ecological heuristics and of cognitive complexity. From this position, it would follow that we could conceive of some extremely rational being with equally extreme computational power. But this entity would be essentially the same as the omniscient being from the unbounded rationality camp. Supposing (2) takes us down a similar path, where an infinitely adaptable being would be infinitely rational. But I think infinite adaptability approaches content-blind rationality: isn’t a rule for determining which set of rules (corresponding to a particular environment) to apply itself environment-independent, after all?

Thoughts on rationality & content-blind norms

I found Gigerenzer’s argument for the ecological rationality of heuristics and his argument against content-blind norms to be compelling. Specifically, I’d like to share some of my thoughts about content-blind norms.

Intuitively, it seems to make sense that rationally parsing meaning should be a content-blind process. After all, if it weren’t, wouldn’t content-dependent interpretation already require a preconception of the content’s meaning? If so, this would amount to an infinite regress, or a recursively defined meaning-generating function that accepts the meaning of the sentence as input.

Let M be a function that maps from a sentence s to a meaning m (so we could say M(s)=m). Let D be the incomplete meaning-generating function from the above paragraph, where D maps a proposition (P) and meaning (m) to a (hopefully more complete) meaning. With this, we could write M(s) = D(P, M(s)). To clarify, proposition P corresponds to the content-blind syntactic structure of sentence s, while the meaning is the complementary context-sensitive component of sentence s.
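To make the circularity concrete, here is a minimal Python sketch of D. Every name here (M, D, syntactic_structure) and the tuple-based “meanings” are my own illustrative assumptions, not anything from the readings:

```python
# Illustrative sketch of the circular definition M(s) = D(P, M(s)).
# All names and data structures here are hypothetical stand-ins.

def syntactic_structure(s):
    """Content-blind component P of sentence s (stand-in for a real parser)."""
    return tuple(s.split())

def D(P, m):
    """Combine syntactic structure P with an already-computed meaning m."""
    return (P, m)

def M(s):
    """Meaning of sentence s, defined circularly via D."""
    P = syntactic_structure(s)
    return D(P, M(s))  # needs M(s) in order to compute M(s)!

# Calling M("the cat sat") recurses forever: Python raises RecursionError
# before D is ever applied, mirroring the infinite regress in the text.
```

As the comment notes, D is never actually reached; the recursion on M(s) itself is the whole computation, which is exactly the regress worry.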

Clearly, function D doesn’t do anything except call itself ad infinitum. It does, however, indicate what properties a correct semantic interpretation function C must possess: a base case for generating meaning that requires no precomputed meaning, and a means of reducing the extent of precomputed meaning required. That is, C(P, ε) must be defined, and C must satisfy M(s) = C(P, M(s’)), where s’ is a “less meaningful” sentence than s in some way.
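Here is a toy, well-founded version of the interpretation function C described above, with simplify(s) standing in for whatever makes s’ “less meaningful” than s. Again, every name and data structure is an assumption made purely for illustration; the point is only the structure: a defined base case plus recursion on a strictly smaller sentence.

```python
# Toy sketch of a well-founded interpretation function C.
# simplify() and the tuple-based meanings are illustrative assumptions.

EPSILON = ()  # the empty meaning

def syntactic_structure(s):
    """Content-blind component P of sentence s (stand-in for a real parser)."""
    return tuple(s.split())

def simplify(s):
    """Return s': the sentence minus its last word, i.e. 'less meaningful'."""
    return " ".join(s.split()[:-1])

def C(P, m):
    """Extend the precomputed meaning m using content-blind structure P."""
    return m + (P[len(m)],)  # attach one more word's contribution

def M(s):
    """M(s) = C(P, M(s')) with base case M("") = EPSILON."""
    if not s:
        return EPSILON  # base case: no precomputed meaning required
    P = syntactic_structure(s)
    return C(P, M(simplify(s)))

print(M("the cat sat"))  # ('the', 'cat', 'sat')
```

Unlike D, this recursion terminates, because each call operates on a strictly shorter sentence and bottoms out at C’s base case.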

The necessary existence of a base case suggests that there must be some class of sentences whose meanings are entirely content-blind. I take this to mean that rationality is ultimately rooted in some fundamental set of processes that are entirely content-blind. That is not to say, of course, that rationality can be defined by content-blind norms; only members of this fundamental set can be defined by them.

I do not think this conclusion contradicts Gigerenzer’s argument against viewing rationality in terms of content-blind norms. As Matheson puts it, “that cognitive virtue cannot be located entirely within the mind does not imply that it is located entirely outside the mind” (143). Supposing content-blind norms alone cannot be used to measure rationality, this question arises: can content-sensitive norms, together with content-blind norms, suffice to measure rationality? For those two types of norms to suffice would require rationality to be purely self-blind. That is, for a given sentence, there would be one unique rational way to interpret it (ignoring situations of ambiguity). But is this the case? Each individual obviously interprets identical sentences in nonidentical ways. If rationality is self-blind, then, although no one may actually be 100% rational, we could conceive of someone who is: the epitome of human rationality norms. Moreover, this would suggest that no two people could be fully rational yet have different minds.

I think it likely that rationality is also self-sensitive; that is, rationality needs to be evaluated with respect to the (ir)rational agent.