Rationality, Logic, and Heuristics

 

Raymundo Morado

Institute for Philosophical Research

National University of Mexico

Mexico City, Mexico

morado@servidor.unam.mx

Leah Savion

Department of Philosophy

Indiana University

Bloomington, Indiana, U.S.A.

lsavion@indiana.edu

 



Abstract:  The notion of rationality is crucial to Computer Science and Artificial Intelligence, Economics, Law, Philosophy, Psychology, Anthropology, etc. Most if not all of these disciplines presuppose the agent's capacity to infer in a logical manner. Theories of rationality tend toward two extremes: either they presuppose an unattainable logical capacity, or they minimize the role of logic in light of the vast data on fallacious inferential performance. We analyze some presuppositions of the classical view of logic, and offer empirical and theoretical evidence for the place of inferential heuristics in a theory of rationality. We propose (1) to outline a new theory of rationality that includes the key notion of logical capacity as a necessary but realistic factor, (2) to expand the notion of inference to include non-deductive, specifically non-monotonic, inference, and (3) to emphasize the logical role of inferential heuristics and of constraints such as cognitive economy.

                 

Keywords: Defeasible, Logic, Rationality, Heuristics, Cognitive Economy

 

 


1.     Logical acuity as a necessary factor in rationality

 

The concept of rationality is highly complex, and often involves distinct constructs based on principles borrowed from physics, social science, psychology, evolution, economics, political science, philosophy, etc. Adopting a belief, drawing inferences from it, constructing a value system, and acting based on beliefs and desires can all be judged as rational or irrational (see [1]). This paper focuses on the rationality of inference, leaving aside other factors in belief formation and action. The rationality of inference has logical acuity as a necessary component.

            Logical acuity involves much more than the ability to draw correct conclusions. The logical agent makes plans, discerns alternatives, discards the irrelevant, argues, negotiates, understands arguments from different points of view, engages in counterfactual reasoning, evaluates evidence and accepts obvious consequences of its beliefs. Logical inference is but a part of being logical, which is in turn only a fraction of what it is to be rational. Still, the analysis of logical inference, its structure, and its contribution to the understanding of actual human reasoning is a good starting point for a theory of rationality, since it has been a subject of rigorous examination for centuries.

           

Traditionally, logic was considered a normative description of the workings of an ideal mind. We have known since Aristotle that people do not reason in perfect accordance with any accepted logical system. Nevertheless, until recently, philosophers, as well as psychologists, sociologists, and anthropologists, accepted a highly idealized model of human reasoning (see [2]). The laws of classical logic were considered, at least implicitly, to be the laws of thought.

            The gap between the dictates of logical theories and actual human inference has been studied extensively since the 1960s. The results of the experiments, though controversial in interpretation, have shaken the traditional view of man as the “rational animal” (Aristotle), “noble in reason, infinite in faculties” (Shakespeare), by displaying unequivocally a disturbing picture of human inferential abilities.

            The most common response was to apply the “Competence-Performance” distinction, according to which we have an innate, perfect logical competence, capturable by a set of rational rules, marred only by imperfect performance in the application of these rules due to human biological, cognitive, and perhaps social limitations (see [3]). The normative logical dimension appeared as descriptive of a competence level hidden behind performance obstacles. The competence was often described as algorithmic, constituted by a set of rules that guarantee the deductive validity of an inference. This model is used by a large number of “mental logic” theories, which disagree about the precise core of logical competence rules[1] and their nature (syntactic vs. contentful rules) (as discussed in [4]). Many predictions made by this class of theories have clashed with further experimental observation, prompting a new model in which heuristics replace algorithms as the core devices and procedures for inference. This has led some authors to claim that logic cannot be a necessary factor in rationality, since logic is thought of as always algorithmic, and heuristics are considered non-algorithmic (e.g., [5]).

            We want to resist the temptation to circumscribe logic to the algorithmic paradigm of inference. Given our conviction that logical acuity is an essential part of rationality, we prefer to use an extended notion of logic as a general theory of inference that includes heuristics and allows fallibility without compromising formality and rigor. Classical logic turns out to be an extreme example of a heuristic method.

 

2.     Traditional presuppositions for metalogical properties of rational inference

 

The framework of the research on human reasoning in the last few decades is a by-product of the popular linguistic paradigm, the formal logic paradigm, and the computer paradigm. For a while, logicians basked in the glow of the achievements of Frege, Russell, and Whitehead. The relation of reasoning to formal logical theories suggests a picture of human thought made of atomic, discernible components, containing operations that function recursively on well-defined semantic structures. It is common to examine the "syntax" of a natural language, and of inferential thinking, independently of other aspects of the subject matter. The computer paradigm strengthens the picture of the mind as analogous to a program (database + operations), where explanations can be adequately supplied at either the program or the implementation level.

            Logicality was equated with the ability to follow classical rules of inference to generate theorems from axiomatic bases. But, as was pointed out in [6], classical logic is insufficient in many areas of artificial intelligence and cognitive science: planning, searching, pattern recognition, the Closed World Assumption (CWA), schemas, scripts, frames, etc.

            The following are some of the presuppositions about the nature of rational inference that stem from these paradigms:

1.       "Logical omniscience"

2.       Infallibility

3.       Consistency

4.       Context-free rules

5.       No time, space or other resource limitations on the execution of an inference

 

LOGICAL OMNISCIENCE

 

Classically, a rational belief system is logically closed. Logical closure is the property of a set of propositions of being closed under logical consequence. That is, all the logical consequences of subsets of propositions are included in the set. To this logical property of sets corresponds an epistemic property of agents. An agent is "Logically Omniscient" if and only if her belief set is logically closed.[2]  For instance, in classical logic, logical omniscience entails the belief in all logical truths, no matter how abstruse, since they are vacuously logical consequences of any beliefs.
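
In standard notation, with Cn the classical consequence operator and B the agent's belief set, the two notions can be stated compactly (a restatement in standard symbols, not a formalism drawn from a particular cited source):

```latex
% Logical closure of a set X, and logical omniscience of an agent.
\[
\mathrm{Cn}(X) = \{\varphi : X \vdash \varphi\},
\qquad
X \text{ is logically closed} \iff \mathrm{Cn}(X) = X
\]
\[
\text{Logical omniscience:}\quad
\varphi \in \mathrm{Cn}(B) \implies \varphi \in B,
\quad\text{hence}\quad
{\vdash \varphi} \implies \varphi \in B
\]
```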

            Logical Omniscience would not be desirable in the presence of inconsistencies. If we use classical logic, a contradiction entails everything and trivializes any system of beliefs. A rational agent, upon discovery of any contradiction, would have to stop making inferences until consistency is restored, which might never occur. How rational would such a course of action be?

We are normally able to continue operating in the presence of belief conflicts and only refuse to draw (some) conclusions in small areas, as localized as possible. We learn to live with errors, set priorities for conflict-resolution and establish emergency mechanisms to ensure a graceful degradation of output if we cannot mask the problem (see [8]).

As mentioned before, a rational agent is expected to draw obvious consequences, but even if logical omniscience could be part of a model of implicit beliefs, it is certainly too strong for the explicit ones, and it is not a rational desideratum if the agent is prone to inconsistency.

 

INFALLIBILITY

 

An infallible inferential system starts with a set of logical truths and processes them through valid rules of inference that preserve truth. Unfortunately, we do not always start from necessary assumptions; our information is often false and almost always incomplete. Furthermore, infallible rules are insufficient for many daily situations and tend to be too expensive computationally. As a result, we have to resort to approximations, estimates, and heuristics. The ideal of infallibility gives way to the idea of plausibility.

 

CONSISTENCY

 

In database systems, a lot of effort goes into ensuring and maintaining consistency (sometimes misnamed "truth" maintenance). This is of paramount importance since, as mentioned above, in classical logic anything follows from a contradiction. A contradiction would therefore render one's system “trivial”, capable of inferring every proposition as true. But triviality is not a necessary consequence of inconsistency.
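
The classical derivation behind this threat, ex falso quodlibet, takes only two standard rules:

```latex
\begin{align*}
1.\quad & p          && \text{premise}\\
2.\quad & \neg p     && \text{premise}\\
3.\quad & p \lor q   && \text{from 1, disjunction introduction}\\
4.\quad & q          && \text{from 2 and 3, disjunctive syllogism}
\end{align*}
```

Paraconsistent systems, discussed below, block this route, typically by restricting disjunctive syllogism.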

            If we use only classical logic, we cannot escape the intractable task of maintaining consistency (already NP-complete at the propositional level). A more economical strategy could be to distinguish between rational and irrational conclusions drawn from the same set of inconsistent beliefs. Therefore, classical logic cannot be the only measuring stick of rationality.

            A system that can contain a contradiction without triviality is called “paraconsistent”. Such systems have been proposed in [9], [10] and [11] to model scientific theories, where known inconsistencies do not entail triviality, and to model multi-agent interactions (dialogues, incompatible evidence, conflicting information, etc.).

            Human inferential systems are paraconsistent in the sense that we hold contradictory beliefs, yet reasoning continues, through the use of heuristics, without collapsing into triviality.
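
As an illustrative sketch of how triviality can be avoided (our toy rendering of the four-valued logic of [8], not a formalism proposed in this paper), truth values can be coded as pairs of flags, "told true" and "told false"; a contradiction then receives the value Both without every other proposition becoming assertible:

```python
# Sketch of Belnap's four-valued logic [8]. A value is a pair
# (told_true, told_false): T=(1,0), F=(0,1), Both=(1,1), Neither=(0,0).
# A proposition is assertible ("designated") iff it has been told true.

T, F, BOTH, NEITHER = (True, False), (False, True), (True, True), (False, False)

def neg(v):
    t, f = v
    return (f, t)                          # negation swaps the two flags

def conj(v, w):
    return (v[0] and w[0], v[1] or w[1])   # told true iff both conjuncts are

def designated(v):
    return v[0]

p, q = BOTH, NEITHER                       # p is contradictory, q independent
premise = conj(p, neg(p))                  # p AND NOT p
print(designated(premise))                 # True:  the contradiction is accepted
print(designated(q))                       # False: yet q is not thereby assertible
```

The last two lines exhibit a counter-model: the contradictory premise is designated while an arbitrary q is not, so "p and not-p" does not entail q and the belief system is not trivialized.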

 

CONTEXT-FREE RULES

 

Classically, logic has eschewed semantic content in favor of formal context-free treatments of inference. But logic can be informal and rationality has to deal with semantic inferential properties, as exemplified by the Greek and Roman treatment of “topoi”, fallacies, and sophisms.

            Many heuristics are content-specific or domain-specific. Some heuristics are learned from experience and many successful executions are due to familiarity with contextual parameters. These parameters are important if an agent is to react rationally to highly contextual “environment variables”, for instance those involved in natural language processing.

 

NO TIME, SPACE OR RESOURCE LIMITATIONS

 

Traditionally, no provision is made in logic for the resource limitations of the agent concerning categorization and retrieval, the time needed to execute an inference, the size of working-memory space, or selective attention. As argued in [12], a theory of rationality must take all this into account. In many situations it is not rational to engage in calculations that exceed a prudent allocation of resources. Spending time in extensive reflection might be rational in itself, but often can kill you.
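
A minimal sketch of one way to honor such limits (the rule format and budget are our illustrative assumptions, not a formalism from the cited literature): a forward-chaining reasoner that stops when its step budget runs out, returning a sound but possibly incomplete set of conclusions.

```python
# Resource-bounded forward chaining (illustrative sketch).
# Rules are (premises, conclusion) pairs; 'budget' caps rule firings.

def bounded_closure(facts, rules, budget):
    beliefs = set(facts)
    while budget > 0:
        fired = False
        for premises, conclusion in rules:
            if conclusion not in beliefs and premises <= beliefs:
                beliefs.add(conclusion)
                budget -= 1
                fired = True
                if budget == 0:
                    break
        if not fired:          # fixed point reached before the budget ran out
            break
    return beliefs

rules = [({"a"}, "b"), ({"b"}, "c"), ({"c"}, "d")]
print(bounded_closure({"a"}, rules, budget=2))   # {'a', 'b', 'c'}: 'd' is never derived
```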

 

3.     Heuristics

 

A realistic understanding of logical acuity requires the inclusion of heuristic factors. Heuristics are intuitive, sometimes preconscious, cognitive processes or principles that generally promote rapid and efficient encoding, inference, retrieval, and production of information (see [13]).

 

The term "heuristics" has become popular in the last few decades as a blanket term for non-algorithmic mental processes (see [5]). In this paper the term is expanded to include any inferential strategy, automatic or consciously adopted. The inferential products of heuristics can be viewed on a continuum expressing the degree of certainty of the conclusion given the available premises.

            Under this conception, the presence and employment of cognitive heuristics are unavoidable for our intelligent life. The world presents itself to us in a messy array of bits of data that is ambiguous, often unrepresentative, and without a clear structure that enables correct logical inferences. In order to survive we need to reason about our environment from incomplete, fragmented information, within a short amount of time, and with a rather limited computational ability (see [14]). Algorithmic methods for drawing conclusions, offered by deductive theories, are bound to yield correct results when applied properly, but are notoriously slow and demanding in terms of cognitive work and memory space. The evolutionary need for fast accumulation of information dictates the existence of inferential heuristics. Speed of knowledge gathering is probably at least as important for our survival as the precision of the information we gather, our interpretations of it, and the inferences we draw. This fact may explain why we are generally so bad at calculating, but so brilliant at estimating quantities, distances, and outcomes. The speed/accuracy trade-off is the result of employing information-processing heuristics and inferential strategies that allow the selection and simplification of issues within a reasonable amount of time and resources.

            Inferential heuristics, such as prototypicality (a categorization mechanism for classifying new concepts by their degree of similarity to a typical, core concept already known), representativeness (application of a simple resemblance criterion to a new task), availability (the rule that dictates reasoning with information readily available) and anchoring (sticking to an initial presentation in the process of problem solving or comprehension), provide a necessary survival tool for processing information while ensuring cognitive economy.
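
A minimal sketch of the prototypicality heuristic, under deliberately crude assumptions (binary features, overlap counting as similarity; the feature names are hypothetical):

```python
# Prototypicality heuristic (sketch): classify a new item by its
# feature overlap with stored prototypes, not by strict definitions.

PROTOTYPES = {
    "bird":   {"flies", "feathers", "lays_eggs", "small"},
    "mammal": {"fur", "live_birth", "four_legs"},
}

def categorize(features):
    # Pick the category whose prototype shares the most features.
    return max(PROTOTYPES, key=lambda c: len(PROTOTYPES[c] & features))

print(categorize({"flies", "feathers"}))          # 'bird'
print(categorize({"fur", "four_legs", "small"}))  # 'mammal'
```

The same mechanism that makes this fast is what misfires on atypical members: a penguin shares few features with the bird prototype, as the Tweety example below illustrates.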

            As we understand and apply the concept, the principle of cognitive economy says that our brain is designed to cope with our needs, the world, and the limitations of our cognitive tools by attempting to minimize cognitive work while accumulating needed information fast, directly or by inference, often at the expense of accuracy. Since it is not rational to expect a finite being to employ only algorithms that would severely limit the information obtained in a given time frame, a theory of rationality must sometimes count as rational the use of economical inferences that lose precision in favor of quantity and speed.

            The effect of heuristic rules is demonstrated at one extreme by logically infallible methods (like complete induction, or syllogisms), and at the other extreme by appallingly dubious and persistent ways to jump to logically unjustified conclusions.

            The loss of accuracy associated with a heuristic is often called a “bias” (see [13]). A bias marks the boundaries of the unsuccessful application of the relevant heuristic, and a systematic tendency to err. For instance, the principle of conservation represents the assumption of invariability through some transformations. This principle is used effectively when its application is either theoretically correct ("p or q" makes the same logical contribution as does "q or p") or factually correct (pouring liquid into a container that differs only in shape does not affect its quantity). When the same cognitive device causes one to accept that "p if q" is logically equivalent to "q if p", or that changing the liquid's temperature does not affect its quantity, it is considered a bias. Prototypicality, which is a necessary and often effective heuristic for coping with deficient information, becomes a liability when Tweety does not fly.

            The most familiar examples of biases associated with inferential heuristics appeared in the literature as early as the 1970s: experiments and observations show that people chronically misconstrue random events as representative, mistake confirming for disconfirming evidence, and commit deductive fallacies such as “affirming the consequent” while avoiding the application of valid rules such as Modus Tollendo Tollens. People interpret irrelevant events as substantiating their well-entrenched misconceptions, over-rely on anecdotes that support their beliefs while disregarding hostile evidence, and cement their surprisingly “commonsensical” naïve theories with ad hoc and fragile explanations.

            These seemingly inevitable biases were used to support two incompatible (and wrong) claims: (i) heuristics cannot be part of rationality because of their bias-prone, non-algorithmic nature; (ii) heuristics can be part of rationality, but logic must be sacrificed, since heuristics cannot be formalized; furthermore, the prevalence of biases indicates that illogicality is inevitable.

            Our position is that we are more rational the more (and better) heuristics, and the fewer (and less damaging) biases, we have. The conclusions of the previous section showed the inapplicability of the classical presuppositions to a rational inferential system. Heuristics are not only needed as tools for coping rapidly, and at less expense in cognitive work, with large amounts of information -- they may also avoid the cost of triviality discussed above. Instead of eliminating logicality from theories of rationality (in favor of slippery talk about survival, adherence to social rules, etc.), we prefer to bring logical inference and heuristics closer together with the help of bridging notions such as cognitive economy.

 

 

4.     Remarks about formalization

 

An epistemic agent capable of facing even minimal challenges in the real world (be it a computer or a human) needs to be able to handle incomplete and/or inconsistent descriptions of what states of affairs actually hold. Normally, we use rules that, though defeasible, guarantee a minimum of rationality in our reasoning. Classical logic provides guidelines for increasing explicit information through logical consequence, and even allows us to retract information with its principles of Reductio ad Absurdum and Modus Tollendo Tollens. Unfortunately, most formalizations emphasize a traditional deductive model of rational inference in which we simply add beliefs when information is increased, but never subtract them.

            We say that a consequence operator Cn is nonmonotonic if and only if a belief set X can be a subset of Y and yet Cn(X) not be a subset of Cn(Y). That is, adding information to X might eliminate previous consequences. By contrast, classical, intuitionistic, and modal logics are monotonic, because the addition of information does not affect the validity of the inferences previously drawn.
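
In symbols, with the Tweety example from above as the standard witness of failure:

```latex
% Monotonicity of a consequence operator Cn:
\[
X \subseteq Y \;\Longrightarrow\; \mathrm{Cn}(X) \subseteq \mathrm{Cn}(Y)
\]
% A nonmonotonic Cn violates this; e.g., with t for Tweety:
\[
\mathit{flies}(t) \in \mathrm{Cn}(\{\mathit{bird}(t)\}),
\qquad
\mathit{flies}(t) \notin \mathrm{Cn}(\{\mathit{bird}(t),\, \mathit{penguin}(t)\})
\]
```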

            Traditional examples of nonmonotonic formalisms include those for scientific induction and abduction, probability and statistics. Examples of nonmonotonic inference in Computer Science are Negation as Failure in logic programming and the Closed World Assumption (CWA) in database management. The CWA has a parallel in the human ability to jump to conclusions on the basis of insufficient information, treating it as if it were complete. Reasoning from ignorance is often a good strategy, because many facts are so salient that the absence of their report counts as evidence against their occurrence. People continuously infer from information that might even be in principle incapable of completion. In such cases the unreasonable behavior might be not to infer. A mark of rationality is the ability to revise and bracket our provisional conclusions without halting the inferential process.
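
A minimal sketch of the CWA in database querying (the facts are hypothetical): whatever the database does not support is answered "no", so a negative conclusion is retracted when new information arrives; this is the nonmonotonic signature.

```python
# Closed World Assumption (sketch): a query not supported by the
# database is answered False, as if the information were complete.

def cwa_holds(fact, database):
    return fact in database                       # absence counts as falsity

db = {("flight", "MEX", "JFK")}
print(cwa_holds(("flight", "MEX", "CDG"), db))    # False, inferred from absence
db.add(("flight", "MEX", "CDG"))                  # new information arrives
print(cwa_holds(("flight", "MEX", "CDG"), db))    # True: the old conclusion is retracted
```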

            We can even make the normative claim that for an agent with cognitive limitations to be rational, some of its conclusions must be retractable. So, we need models that incorporate the provisional status of our inferred beliefs. A model for rationality that does not countenance retractability, a purely monotonic model, fails to answer this need.

            Since the early 1980s we have had additional formalisms to model different aspects of nonmonotonic inference. For instance, [15] adds Circumscription Schemas to classical logic to produce the effect of the Closed World Assumption. Circumscription limits the domain or the extension of a predicate and selects minimal models.

[16] uses a modal non-monotonic logic with a logical operator M that marks something as "possible as far as the system knows". Popular alternatives include the use of Default logic in [17], Autoepistemic logic in [18] and Preferential Models in [19].
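
In the notation of [17], for instance, the Tweety inference is a normal default, applicable only so long as its justification is consistent with everything else that is known:

```latex
% Reiter-style normal default: if x is a bird, and it is consistent
% to assume x flies, then conclude that x flies.
\[
\frac{\mathit{bird}(x) \;:\; \mathit{flies}(x)}{\mathit{flies}(x)}
\]
```

Learning penguin(t), together with an axiom that penguins do not fly, makes the justification inconsistent and the default inapplicable, so flies(t) is retracted.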

Heuristics often exemplify “nonmonotonic reasoning” because in many cases they produce defeasible beliefs, retractable in the face of new evidence. Since this behavior is at least partially formalized already in the aforementioned non-monotonic logics, the charge that heuristics are not formalizable loses credence.

 

5.     Consequences for the notion of rationality

 

The project of constructing a new theory of rationality must strive for an account of the underlying inferential mechanisms in terms of multiple theoretical constructs. Such a model accommodates the use of contentful rules of inference as well as syntactic rules; allows for the employment of pragmatic devices (such as mental models, imagery, schemas, scripts) and defeasible heuristics; takes notice of a large variety of cognitive limitations (not only those associated with memory capacity and computation time); and recognizes general biases and provides an account of "deviant reasoning" in terms of non-monotonic procedures.

            It is possible to discern, within the phenomenon of reasoning, the "encoding", the "representation", the "strategies", the "competence" and the "performance". We submit that the unifying source of constraint on the whole inferential process is the biological principle of Cognitive Economy. The brain does not merely record and represent aspects of external reality and our reactions to it; it categorizes information, reduces its complexity within conceptual structures to a manageable scale, and organizes it to allow effective retrieval. These processes are possible thanks to the (mostly) cost-effective mechanism we call cognitive economy.

The effect of this crucial need for economy in reasoning has not been sufficiently explored in the literature. Cognitive economy, we believe, plays a significant role in shaping the "knowledge" aspect of the inferential process (information, representation, conceptualization) and its manipulation. The human cognitive architecture has design features that promote power and speed at the expense of some reliability. A rational system should be capable of producing a large number of conclusions, in order to overcome ignorance that is detrimental to survival.

 

6.     Conclusions

 

In this paper we suggested a view of rationality that assigns a logical role to heuristic reasoning. We started by claiming that any theory of (rational) reasoning includes the notion of logical ability. Traditionally this notion has been confined to algorithmic (deductive) inference. A critical examination of the presuppositions underlying this tradition exhibited the insufficiency of any theory of rationality that limits itself to classical logic. As a result, we expanded the notion of logicality to include non-deductive reasoning, and mentioned recent formalizations of non-monotonic consequence relations. A theory of rationality does not have to give up the ideal of formalization merely because it accommodates heuristic inferences, even if not all heuristics are formalizable.

            The still relatively bare structure of a dynamic conception of the inferential heuristics people employ, together with the explanatory power of the principle of cognitive economy, can serve as foundations for a theory of rational inference.

 

BIBLIOGRAPHY

[1]         Gilovich, T. (1991). How We Know What Isn't So. The Free Press, New York.

[2]         Johnson-Laird, P. N. and Byrne, R. (1991). Deduction. Lawrence Erlbaum Associates, Hillsdale, NJ.

[3]         Manktelow, K. and Over, D. (1990). Inference and Understanding. Routledge, London and New York.

[4]         Nisbett, R. and Ross, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Prentice Hall, Englewood Cliffs, NJ.

[5]         Kahneman, D., Slovic, P. and Tversky, A. (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge.

[6]         Minsky, M. (1974). “A Framework for Representing Knowledge”. Artificial Intelligence Memo 306, MIT AI Lab.

[7]         Hintikka, J. (1989). The Logic of Epistemology and the Epistemology of Logic. Kluwer, Dordrecht.

[8]         Belnap, N. D., Jr. (1976). “A Useful Four-Valued Logic”. In Epstein, G. and Dunn, J. M. (eds.), Modern Uses of Multiple-Valued Logic: Proceedings of the 1975 International Symposium on Multiple-Valued Logic. Reidel, Dordrecht.

[9]         da Costa, N. C. A. (1974). “On the Theory of Inconsistent Formal Systems”. Notre Dame Journal of Formal Logic, vol. XV, no. 4, October, pp. 497-510.

[10]     da Costa, N. C. A. (1982). “The Philosophical Import of Paraconsistent Logic”. The Journal of Non-Classical Logic, vol. I, no. 1, pp. 1-19.

[11]     Marconi, D. (1981). “Types of Non-Scotian Logic”. Logique et Analyse (NS), vol. XXIV, no. 95-96, pp. 407-414.

[12]     Cherniak, C. (1986). Minimal Rationality. The MIT Press, Cambridge, MA.

[13]     Evans, J. (1989). Bias in Human Reasoning: Causes and Consequences. Lawrence Erlbaum Associates, Hillsdale, NJ.

[14]     Gardner, H. (1991). The Unschooled Mind. Basic Books, New York.

[15]     McCarthy, J. (1980). “Circumscription: A Form of Non-Monotonic Reasoning”. Artificial Intelligence, vol. 13, pp. 27-39.

[16]     McDermott, D. and Doyle, J. (1978). “Non-Monotonic Logic I”. MIT Technical Report, Memo 486.

[17]     Reiter, R. (1980). “A Logic for Default Reasoning”. Artificial Intelligence, vol. 13, no. 1-2, pp. 81-132.

[18]     Moore, R. C. (1985). “Semantical Considerations on Nonmonotonic Logic”. Artificial Intelligence, vol. 25, no. 1, pp. 75-94. A shorter version appeared in IJCAI (1983), pp. 272-279. See also SRI AI Center Technical Note 284 (1983). Reprinted in Ginsberg (1987), pp. 127-136.

[19]     Kraus, S., Lehmann, D. and Magidor, M. (1990). “Nonmonotonic Reasoning, Preferential Models and Cumulative Logics”. Artificial Intelligence, vol. 44, pp. 167-207.

[20]     Macnamara, J. (1986). A Border Dispute: The Place of Logic in Psychology. The MIT Press, Cambridge, MA.



[1] The gamut of postulated innate syntactic logical rules ranges from three or four to several hundred (see [20]).

[2] A logically omniscient agent believes all logical consequences of her beliefs, and that includes all logical truths. Real omniscience also includes all empirically true beliefs. For more on this, see [7].