I just wanted to share a paradox that occurred to me as we concluded our discussion of Gigerenzer’s theory of ecological rationality.
If we define rationality as the use of heuristics that produce “actual success in solving problems” (Gigerenzer, Bounded and Rational, 123), it may lead us down a slippery slope. Humans are not the only organisms that exhibit rational heuristics by this definition — take a dog, for example. A dog acts very friendly toward its owner, which is a very reliable heuristic for securing a stable food supply and a place to live. Thus dogs are also rational. We could continue down to increasingly simple beings and find heuristics that we could call rational. But at what point could we draw the line? Can a line be drawn at all? And if one can, wouldn’t the definition of rationality consist in (or be defined by) that line rather than in problem-solving success?
Here’s a phrasing of the paradox that mirrors the sorites paradox. Humans are rational when they use effective ecological heuristics. A slightly less complex being is likewise rational when it uses effective ecological heuristics, and nothing in the definition tells us where the descent stops. By induction, then, even a rock is rational when using effective ecological heuristics. One such heuristic is being hard, which solves the problem of persistence, since on Earth hard things tend to exist the longest.
The paradox is that it seems exceedingly implausible that a rock is rational to any extent. I’ve thought of two different resolutions: (1) accept that rationality admits of degrees, or (2) claim that rationality also depends on adaptability to different environments.
Supposing (1), we could say that less complex beings are less rational. In effect, rationality would be a function of both the success of a being’s ecological heuristics and its cognitive complexity. From this position, it would follow that we could conceive of some extremely rational being with correspondingly extreme computational power. But this entity would be essentially the same as the omniscient being posited by the unbounded rationality camp. Supposing (2) takes us down a similar path, where an infinitely adaptable being would be infinitely rational. Yet I think infinite adaptability collapses into content-blind rationality: a rule for determining which set of rules (each corresponding to a particular environment) to apply is, after all, itself environment-independent.