Naturalism and the Grounding Metaphor

  • In Hopes of Demarcating Scientific Ontology

What distinguishes scientific ontology from non-scientific ontology? Chakravartty proposes the norm of naturalized metaphysics (NNM):

…the principle that scientific ontology is properly delimited by metaphysical inferences and propositions that are sufficiently informed by or sensitive to scientific-empirical investigation as to provide or constitute ontological knowledge relating to the sciences. (67)

However, consonant with his voluntarism, there can be substantial differences about what counts as “sufficiently informed by or sensitive to scientific-empirical investigation.” As he puts it, “no stance in, no ontology out” (65-66).

At least one place where these differences play out is in delimiting which subject matter of the sciences is germane to scientific ontology. This affects how the NNM is applied in at least two ways:

  • The explicit subject matters of the sciences are those that fall out of reading the sciences at face value. For instance, molecular biology’s explicit subject matter includes gene transcription, DNA, and RNA.
  • The implicit subject matters of the sciences are “things whose natures are not the face-value targets of scientific work, but which are rather mentioned in passing” (69). These include properties, causal relations, laws of nature, possibilities, and necessities.

Here are some of the questions that arise which Chakravartty appears to think are stance-dependent. (1) Should scientific ontology restrict itself to explicit subject matters? (2) If scientific ontology is not restricted to explicit subject matters, then which implicit subject matters are proper targets of analysis? Different scientific ontologists may diverge in their applications of the NNM in answering these questions.

  • On Conflating the A Priori with That Which is Prior

Some claim that science presupposes metaphysics; others, such as Chakravartty, speak of metaphysical inferences. What’s at stake in this distinction?

A scientific domain D presupposes a metaphysical claim M if scientific investigation in D would not be possible without M.

  • Examples: measurement presupposes that quantities exist. Chakravartty also mentions classical mechanics presupposing that physical space obeys the axioms of Euclidean geometry.

Some who appear skeptical of metaphysical inference accept that science frequently presupposes metaphysics. (Who? Chakravartty does not say.) How is this a coherent position? Chakravartty considers three possibilities:

  • First Possibility: “Metaphysical” presuppositions aren’t really metaphysical

The first possibility runs as follows:

P1.       Metaphysical presuppositions are not a priori.

P2.       The conclusions and criteria of evaluation in metaphysical inferences are a priori.

P3.       Only a priori claims are “really” metaphysical.

C.         So, metaphysical inferences, but not metaphysical presuppositions, are “really” metaphysical.

Chakravartty criticizes P1, arguing that presuppositions cannot be directly observed: “it was not because the geometry of spacetime was somehow empirically detected to be non-Euclidean that Einstein ushered in a new way of thinking about spacetime with his theory of general relativity” (74).

  • Note: Chakravartty isn’t very clear about this, but the implicit point seems to be that there is a metaphysical inference from the predictions of relativity theory to the metaphysical presupposition that spacetime is non-Euclidean.
  • Second Possibility: Presupposed metaphysics is less problematic than inferred metaphysics, version 1

Here’s the second way of cleaving metaphysical presuppositions from metaphysical inferences:

P1.       Metaphysical presuppositions do not concern ontology.

P2.       Metaphysical inferences do concern ontology.

P3.       Only metaphysics that concerns ontology is problematic.

C.         So, metaphysical inference, but not metaphysical presupposition, is problematic.

Chakravartty argues that the most plausible way of making P1 true is to adopt a deflationary metaphysics, in which putatively metaphysical claims are really about something non-metaphysical (for instance, merely about social practices in the scientific community). However, deflationists will also reject P2. So, deflationism cannot fund this argument.

  • Third Possibility: Presupposed metaphysics is less problematic than inferred metaphysics, version 2

P1.       Metaphysical presuppositions are often tacit.

P2.       Metaphysical inferences are often explicit and deliberate.

P3.       Only explicit and deliberate metaphysics is problematic.

C.         So, metaphysical inference, but not metaphysical presupposition, is problematic.

The problematic assumption here is P3. Why think that something’s being explicit makes it problematic while something’s being tacit does not?

  • How Not to Naturalize Metaphysical Inferences

What exactly is the relationship between science and scientific ontology, such that the latter can be distinguished from other “non-scientific” ontologies? Chakravartty considers two proposals.

  • The Heuristic Conception

This approach to naturalized metaphysics, associated with Quine, sees philosophy as doing important conceptual preparatory work before handing off a topic of research to the empirical sciences.

Chakravartty objects that the heuristic conception is always out of time, as it were. We would have no way of knowing whether we were doing scientific ontology now, since it could only be redeemed by future science.

  • The Continuity Conception

Philosophy should be continuous with science’s aims, methods, subject matters, and criteria of evaluation.

Chakravartty thinks that this is more or less correct, save for the continuity of external subject matters with internal ones that are more philosophical in nature. Roughly, this means that science’s claims about explicit subject matters “ground” scientific ontology’s claims about its implicit subject matters. As Chakravartty notes, this grounding metaphor is in need of unpacking.

For instance, there is not a tidy division of labor with science (in say, its explicit subject matter) providing a posteriori constraints on ontology’s a priori claims. As we’ve already seen, science itself is fraught with a priori claims.

  • Unpacking the Metaphors: “Grounding” and “Distance”

The “ground” of scientific ontology is empirical inquiry.

“Distance” from this ground can be construed in terms of epistemic risk—given the empirical inquiry in question, what is the probability that the conclusion drawn from it (via metaphysical inference) is false?

Epistemic risk is a function of two things:

  • Empirical vulnerability: “how susceptible a proposition is to empirical testing.” (85)[1]
  • Explanatory power: “a measure of how well a metaphysical inference or resulting proposition satisfies the criteria typically associated with good explanations of the data of observation and experience,” such as “simplicity, internal consistency, coherence with other knowledge, and the capacity to unify otherwise disparate phenomena.” (87)

The more empirically vulnerable a statement is, the lower its explanatory power and the lower its epistemic risk.

These two virtues pull us in opposite directions: greater empirical vulnerability is good, but greater explanatory power is also good. So they trade off against each other.
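Schematically, the trade-off can be glossed as follows (this notation is mine, not Chakravartty’s, and the functional form is left unspecified):

```latex
% Schematic gloss: V = empirical vulnerability, E = explanatory power,
% R = epistemic risk. Chakravartty gives no equations; this only records
% the directions of dependence described in the text.
R = f(V, E), \qquad \frac{\partial f}{\partial V} < 0,
\qquad V \uparrow \;\Rightarrow\; E \downarrow
% Raising V lowers risk, but only at the cost of explanatory power.
```

That is, one can lower epistemic risk by making a claim more testable, but only by sacrificing some of the explanatory virtues that make the claim attractive in the first place.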

  • Theorizing versus Speculating

Naturalized metaphysicians frequently pride themselves on doing something akin to high-level scientific theorizing. They contrast this activity with the speculation characteristic of more traditional metaphysics.

Chakravartty contends that “There is no objective distinction between theorizing and speculating in the context of scientific ontology.” (89)

He argues for this as follows:

P1. If an objective distinction between theorizing and speculating exists, then there is a fact of the matter about the appropriate level of epistemic risk, i.e., about the balance between testability and explanatory power, when drawing inferences from the empirical content of science.

P2. There is no fact of the matter about the appropriate level of epistemic risk when drawing inferences from the empirical content of science.

C.   Therefore, no objective distinction between theorizing and speculating exists.

The contentious premise is P2, so he offers some examples to motivate it. The only one that he really discusses in any detail is novel prediction. A novel prediction is an unexpected (typically precise) prediction that turns out, upon subsequent investigation, to be true. When a theory makes a novel prediction, its empirical vulnerability increases, so the epistemic risk in accepting it decreases. Does this mean that the following is true?

All and only theories that make novel predictions exhibit an acceptable level of epistemic risk.

If so, there would be a fact of the matter about the appropriate level of epistemic risk (P2 in the previous argument would be false). However, Chakravartty argues that many good theories, such as the theory of natural selection, make no novel predictions yet are explanatorily powerful. So the absence of novel predictions cannot be used to rule out some metaphysical approaches as non-scientific ontologies.


[1] A more standard word for this is “testability.” This more conventional word choice seems preferable in my opinion, since, for Chakravartty, empirical vulnerability is a good thing, yet in ordinary language vulnerability is often a bad thing (vulnerability to attack, for example).

One thought on “Naturalism and the Grounding Metaphor”

  1. Kenzo Okazaki

    I just wanted to briefly comment after a first read on the premise P2 regarding novel prediction. I do not mean to be too picky about the example he offers, but it seems that the theory of evolution does make enough predictions to count as making novel ones. For example, even if it does not describe with very good accuracy what kind of adaptations will develop within a species, it does offer a prediction that such change will occur under a certain set of conditions (genetic mutation, selective advantage with regard to a given resource, etc.). Is there a limit on the specificity of prediction that a theory has to meet to make it novel? This was not immediately clear to me from the reading, but I do not see why it would be the case.
