Symposium: Selmer Bringsjord & John Licato, New-Millennium Logic, Computing, & the Mind

1 Historical Context

Aristotle inaugurated formal deductive logic before, and for exploration in, the first millennium.[1] The Organon provided principally two things: an initial (and painfully inadequate[2]) stab at formalizing the stunningly seminal reasoning of Euclid, and another at formalizing the capacity of humans to acquire, and reason deductively over, inferentially interconnected declarative knowledge and belief.

The first of these two achievements eventuated, near the end of the second millennium, in perhaps modern mathematical logic’s greatest triumph: a formalization not only of Euclid’s reasoning, but of the reasoning of mathematicians through the ages. In addition, since this formalization allowed the Entscheidungsproblem to be rigorously posed, and settled, general-purpose computation at the level of standard Turing machines (and their equivalents) arrived on the scene; soon these purely abstract machines were physicalized. These remarkable developments eclipsed the second-millennium invention of, progress in, and great promise of inductive logic.[3]

The second trajectory established by Aristotle eventuated, also near the end of the second millennium, in the modern, classical conception of AI: viz., AI as the attempt to engineer artificial intelligence by building a machine able itself to use logic at the human level.[4] Such machines were often, and sometimes still are, called “knowledge-based” systems.

In addition, formal logic and logicist disciplines such as decision theory have traditionally been devoted to confronting, and at least attempting to resolve, paradoxes. Aristotle, in this regard, was active: he took on, for instance, Zeno’s paradox of the arrow. His proposed solution, while seminal, was ultimately inadequate; but when Leibniz and Newton invented the differential and integral calculus in the second millennium, the paradox of the arrow was put to rest. It’s reasonable to say the same sanguine thing about other deductive paradoxes, for example the Liar. Yet many paradoxes remain unsolved to this day; particularly challenging ones produce their counter-intuitive results on the strength not just of deduction, but of induction as well, and they often involve propositional attitudes like knowledge and belief.
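
To see, in outline, why the calculus did the trick (the compressed rendering here is ours, and elides much philosophical detail): Zeno observes that at any single instant the arrow occupies a region exactly its own size, and hence does not move at that instant; therefore, he concludes, it never moves. The differential calculus dissolves the inference by defining motion at an instant as a limit taken over neighboring instants, rather than as a property of the instant in isolation:
\[
v(t) \;=\; \lim_{\Delta t \to 0} \frac{x(t+\Delta t)-x(t)}{\Delta t},
\]
a quantity that can be nonzero even though the displacement within any single instant is zero.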

We are now firmly into the third millennium. Today, AI isn’t just an interesting thread within the march forward of science and engineering; no, AI seems to dominate the headlines. But there are some noteworthy twists: One, robots, only creatures of fiction for the vast majority of the second millennium, are here, and fast becoming ubiquitous. Two, that part of logic-less AI based in statistical learning is all the rage: “deep learning,” if media reports of today are to be trusted, will bring us computing machines of superhuman intelligence in just a few short years.

2 On-Theme Papers & Attendees

Within the historical context set out above, and specifically under the themes brought out in its articulation, our symposium is composed of a 15-minute overview by Bringsjord, and then the following four papers. The enumeration of the quartet that follows includes authors in each case, with bolded text for an author’s name indicating that the researcher in question is already committed to coming to Ferrara.

“Formalizing Confidence Propagation in Analogico-Inductive Reasoning”; John Licato, Maxwell Fowler (Indiana University/Purdue University Fort Wayne) et al.

In the third millennium, inductive logic will be revived into all of the glory that its advent in the second one quietly indicated. Alongside the probabilistic and statistical machinery so popular now in AI, inductive logic is driven by an insistence that cogent argumentation is crucial. This means movement beyond deductive proof to analogical, inductive, and abductive argumentation, and use not only of probability theory, but of other formal frameworks for uncertainty (e.g., confidence levels/strength factors). At the same time, these innovations will be applied not just to the extensional realm of classical mathematics, but to the realm of human mentation. The paper from Licato and Fowler constitutes a detailed case in point.
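
To give a flavor of what such propagation can look like (the particular rule below is purely illustrative, and is not claimed to be the formalism of the paper), suppose an analogico-inductive argument rests on premises \(\varphi_1,\dots,\varphi_n\) carrying confidence levels \(c(\varphi_1),\dots,c(\varphi_n)\) drawn from an ordered scale, and that the analogical mapping underwriting the inference is itself assigned a quality score \(q\) on that scale. A natural weakest-link constraint on the conclusion \(\psi\) is then
\[
c(\psi) \;\le\; \min\bigl\{\,c(\varphi_1),\dots,c(\varphi_n),\,q\,\bigr\},
\]
so that no conclusion ever inherits more confidence than its shakiest premise, or than the analogy that carries it.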

“Normative Conflicts and Moral Robots: Challenges and Prospects”; Paul Bello (Naval Research Laboratory) & Matthias Scheutz (Tufts University)

Contradictions had for Euclid great utility, and their role in his work has been sustained to this day in many parts of the formal sciences. For example, his justly famous reductio proof that there are infinitely many primes still routinely serves as an exemplar for secondary-level students of mathematics across the globe. But logic in the third millennium will increasingly explore, for domains other than classical mathematics, more flexible kinds of “conflict,” such as those that arise when norms, including ethical ones, collide. Bello and Scheutz provide their own case in point, showing that third-millennium deontic logics can handle such conflict, and can do so in connection not merely with intelligent software, but with real-world robots.
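
Why such conflicts are formally troublesome can be seen in standard deontic logic (SDL); the observation below is standard background, not a result of the paper. In SDL obligations agglomerate, and axiom D forbids obligatory contradictions, so a genuine pair of conflicting obligations is not merely awkward but outright inconsistent:
\[
\mathbf{O}\varphi \wedge \mathbf{O}\neg\varphi \;\vdash_{\mathrm{SDL}}\; \mathbf{O}(\varphi \wedge \neg\varphi),
\qquad\text{yet}\qquad
\vdash_{\mathrm{SDL}}\; \neg\mathbf{O}(\varphi \wedge \neg\varphi).
\]
Any deontic logic adequate for moral robots must therefore weaken agglomeration, weaken D, or otherwise make room for conflicting obligations without collapse.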

“Toward a Logic-Based, AI-Powered Interrogation System”; Will Bridewell, Paul Bello (Naval Research Laboratory), Rikhiya Ghosh (RPI) et al.

Aristotle, as mentioned, laid a foundation for a focus on the logic of knowledge and belief, and AI, buoyed by advances in philosophical logic as the second millennium ended, began to build artifacts that (at least in some formal sense) know, believe, desire, and intend: so-called “BDI” systems. But today, logic for AI and cognitive robotics is pressing ahead with implemented formalizations of extremely nuanced concepts in the realm of human psychology and communication. Could an AI system, armed with computational logic able to model such phenomena as mendacity, paltering, and bullshitting, detect instances of such concepts in human interrogatees, and prove that such detection is correct? Bridewell et al. establish the affirmative response to this query.
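
To indicate the flavor of what must be formalized (this is one traditional first pass from the philosophical literature, not the authors’ own analysis), consider mendacity: agent \(a\) lies to agent \(b\) in uttering \(\varphi\) just in case \(a\) utters \(\varphi\) to \(b\), \(a\) believes \(\varphi\) to be false, and \(a\) intends that \(b\) come to believe \(\varphi\). In BDI-style notation:
\[
\mathrm{Lies}(a,b,\varphi) \;\leftrightarrow\; \mathrm{Utters}(a,b,\varphi) \,\wedge\, \mathbf{B}_a\neg\varphi \,\wedge\, \mathbf{I}_a\,\mathbf{B}_b\,\varphi.
\]
Paltering and bullshitting require subtler clauses still (literal truth deployed to mislead, and indifference to truth, respectively), which is precisely why expressive intensional logics are needed here.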

“Solving the Lottery Paradox in a Cognitive Calculus”; Kevin O’Neill, Peter Kassimis (Rensselaer Polytechnic Institute) et al.

The Lottery Paradox (LP) involves not just deductive reasoning, but inductive reasoning as well. A pioneer in the intersection of computing and philosophy, the late John Pollock, proposed a solution using his OSCAR system. Unfortunately, OSCAR fell into desuetude. It has now been resurrected, and, courtesy of more expressive metalogical machinery than what Pollock originally devised, his intractable approach now paves the way to a tractable one. In addition, a second, Bringsjordian solution to LP, via a so-called “cognitive calculus” that is a third-millennium successor to BDI logics, is provided by O’Neill et al.
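
In bare-bones form (the standard Kyburgian rendering, not the particular formalization pursued in the paper), LP runs as follows. Consider a fair lottery with \(N\) tickets, exactly one of which wins. For each ticket \(i\),
\[
\Pr(\neg W_i) \;=\; \frac{N-1}{N},
\]
which exceeds any fixed acceptance threshold \(t < 1\) once \(N\) is large enough, so it is (inductively) rational to accept each \(\neg W_i\). But if rational acceptance is closed under conjunction, one must then accept \(\bigwedge_{i=1}^{N} \neg W_i\), which (deductively) contradicts the accepted premise \(\bigvee_{i=1}^{N} W_i\) that some ticket wins. Hence the interplay of induction and deduction noted above.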

1. There were some antecedents, yes, but for present purposes they can be blamelessly left aside.

2. Courtesy of the theory of the syllogism, the fixed, limited quantification of which Euclid had greatly exceeded.

3. A robust treatment would cite Pascal, Boole, Kolmogorov, and that indefatigable inductive logician, Carnap.

4. We thus have Newell and Simon’s LOGIC THEORIST, which stole the show at the original 1956 Dartmouth Conference that marked the inception of the field of AI, by revealing to the world that some of the theorems in Russell and Whitehead’s Principia Mathematica could be machine-proved.