The Rise and Fall of GOFAI

Recently various pundits (including myself) have announced the end of Good Old Fashioned AI (GOFAI). But GOFAI has an impressive history, and it encountered failure only on the verge of what would have been its greatest triumph.

What is GOFAI? Some say it is whatever grew out of the 1956 Dartmouth AI meeting. Others, however, describe it as AI based on “symbolic reasoning”, which has a history reaching well before 1956. We’ll adopt the latter definition.

Numbers and the Abacus

The invention of numbers and numerals was probably the first triumph of GOFAI. Numbers and their symbols, numerals, make possible reliable symbolic reasoning about quantities.

The next triumph was arguably the invention of the abacus and similar mechanisms. The abacus was clearly a machine, even though it was not self-powered. It automated symbolic computation with numerals.

Syllogisms

Aristotle’s classification of valid syllogisms was a big step forward for symbolic logic. He even used variables. For example, the syllogism

  All A’s are B’s
  All B’s are C’s
  —————–
  All A’s are C’s

is valid but

  Some A’s are B’s
  Some B’s are C’s
  ——————–
  Some A’s are C’s

is not valid.
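Modern readers can check such schemas directly. A small Python sketch, using finite sets as stand-ins for the A’s, B’s and C’s (the particular sets are just illustrative choices):

```python
# The valid schema: "all X's are Y's" is set inclusion, which is transitive.
A, B, C = {1}, {1, 2}, {1, 2, 3}
assert A <= B and B <= C and A <= C

# The invalid schema: "some X's are Y's" means the sets overlap.
# A single counterexample shows the conclusion doesn't follow.
A, B, C = {1}, {1, 2}, {2}
some = lambda X, Y: bool(X & Y)   # the sets share at least one element
assert some(A, B) and some(B, C) and not some(A, C)
```

One counterexample suffices to refute a schema, which is exactly how Aristotle’s classification proceeds.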

It was nearly two thousand years before symbolic logic made a further significant advance.

The invention of mechanical calculators was a near triumph, except that they were not reliable: they had trouble with long carry chains.

Calculus

The calculus was a real breakthrough – now symbol manipulation could derive results about changing quantities and irregular shapes. About the same time as the calculus was emerging, Euler and others perfected the mathematical notation we still use today.

This mathematical notation made possible Boolean algebra, which extended the domain of symbolic reasoning to logic itself. At the end of the 19th century came the development of the predicate calculus and full first order logic, which Frege and Russell applied to set theory and the foundations of mathematics.

Russell’s Paradox

The greatest triumph of all was almost within reach: Frege’s axiomatization of set theory, and with it all of mathematics. Then disaster struck. Bertrand Russell’s attention was drawn to a seemingly harmless Frege axiom (actually a schema): that for every property there is a set of all objects having that property. In symbols, for every φ there is a set

  {x: φ(x)}

(call it s) such that x ∈ s iff φ(x).

Let φ(x) be x∉x and let s be

  {x: x ∉ x}

the set s of all sets which are not members of themselves. Simple manipulations show that both s ∈ s and s ∉ s, which is impossible.
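The derivation uses nothing about sets beyond the membership relation itself. A sketch of the argument in Lean 4, where `mem` is an arbitrary binary relation standing in for ∈:

```lean
-- No relation `mem` admits a "Russell set" s with mem x s ↔ ¬ mem x x.
theorem no_russell_set {α : Type} (mem : α → α → Prop) :
    ¬ ∃ s : α, ∀ x : α, mem x s ↔ ¬ mem x x := by
  intro ⟨s, hs⟩
  -- Specialize the schema to s itself: mem s s ↔ ¬ mem s s.
  have h : mem s s ↔ ¬ mem s s := hs s
  -- Membership would refute itself, so s is not a member of itself...
  have hn : ¬ mem s s := fun hm => (h.mp hm) hm
  -- ...but then the equivalence puts it back in. Contradiction.
  exact hn (h.mpr hn)
```

Note the proof is constructive – no appeal to excluded middle is needed.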

The End of Frege’s Project

This was curtains for Frege’s project: an axiom schema has to be consistent, because from a contradiction you can prove anything.

There were various proposals for repairing the problematic axiom. What eventually won out was a restricted comprehension schema: for every property φ and every set E there is a set s

  {x ∈ E: φ(x)}

of all elements of E with property φ.
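Python’s set comprehension happens to have exactly this restricted form, which makes a convenient illustration (`phi` here is an arbitrary illustrative property):

```python
# Restricted comprehension: you may only collect elements of an
# already-given set E.
E = set(range(10))
phi = lambda x: x % 2 == 0         # the property φ
s = {x for x in E if phi(x)}       # {x ∈ E : φ(x)}
assert s == {0, 2, 4, 6, 8}

# There is no way to write the unrestricted {x : φ(x)} -- there is no set
# of "everything" to draw from, so Russell's construction can't get started.
```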

This seems to avoid the paradox, but who can be sure? The alternative, Russell’s theory of types, was a hierarchy of sets, with comprehension over sets of level 𝛼 yielding a set of level 𝛼+1. Not very attractive.

The consensus was to proceed with the axiomatization of set theory without the assurance of consistency.

Gödel’s completeness/incompleteness

There was soon a triumph due to Kurt Gödel, who proved the completeness of first order logic: every formula true in all models has a finite proof. But disaster struck again with Gödel’s second result, confusingly called his incompleteness theorem. He showed that any consistent formal system powerful enough to formalize arithmetic is incomplete: there will exist true formulas that it cannot prove.

Gödel in particular produced a formula that is logically equivalent to the assertion of its own unprovability. Proving it would produce a contradiction, so unless the system is already inconsistent, the formula is unprovable but true. With a little extra work one can show that a system can’t prove its own consistency – unless it’s already inconsistent.
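The self-reference at the heart of the proof is the same trick that lets a program print its own source. A minimal Python quine, offered purely as an analogy for Gödel’s construction:

```python
# A program whose output is exactly its own source -- the computational
# analogue of a formula that refers to itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints its own two lines of code; Gödel’s construction does the analogous thing by encoding formulas as numbers so that arithmetic can talk about its own sentences.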

ZFC

That was the end of the attempt to produce a provably consistent axiomatization of set theory. Any axiomatization could at best be hoped to be consistent, and would certainly be incomplete. Eventually logicians settled on Zermelo-Fraenkel set theory (ZF). ZF is incomplete: it can’t prove the Axiom of Choice (AC), though Gödel showed that AC is consistent with ZF. We can therefore add AC to ZF, yielding ZFC, without increasing the risk of inconsistency.

Is ZFC consistent? There is something to be said in its favour: it has been used and heavily studied for more than a century without any contradiction showing up. This is strong experimental evidence for the consistency of ZFC.

But that is already an admission of failure for GOFAI. GOFAI is supposed to rely on symbolic calculation, not experimental evidence. This suggests that with the consistency of ZFC we’ve reached the limit of what GOFAI can achieve.

One Step Forward, Two Steps Back

GOFAI certainly didn’t shut down after the bad news about consistency. In the 1930s Church invented the λ calculus, which enabled symbolic reasoning about functions, including higher order functions. And Turing machines enabled symbolic reasoning about machine computation.
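In modern terms, any language with first-class functions can replay the λ calculus directly. A sketch using Church numerals in Python (the names `succ`, `add` and `to_int` are just illustrative choices):

```python
# Numbers as higher-order functions: the Church numeral n applies f n times.
zero = lambda f: lambda x: x                       # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
to_int = lambda n: n(lambda k: k + 1)(0)           # decode for inspection

two = succ(succ(zero))
assert to_int(two) == 2
assert to_int(add(two)(two)) == 4
```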

With Turing machines and the lambda calculus it was possible to ask whether certain problems have mechanical solutions. The news here was uniformly bad.

A procedure to determine if a predicate logic formula is a tautology? Nope.

To determine if a formula of arithmetic is true? Nope.

To determine if two λ calculus formulas are equivalent? Nope.

If two Turing machines are equivalent? Nope.

And so on.
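All of these results rest on the same diagonal construction as the halting problem. A hedged Python sketch, with an exception standing in for an infinite loop:

```python
def diagonal(halts):
    """Given any claimed halting decider, build a program it misjudges."""
    def g():
        if halts(g):                    # decider says g halts...
            raise RuntimeError("loop")  # ...so g "loops" (stand-in for non-termination)
        return "halted"                 # decider says g loops, so g halts
    return g

# A decider that claims nothing halts is refuted by its own diagonal program:
g = diagonal(lambda prog: False)
assert g() == "halted"   # g halts, contradicting the decider
```

Whatever the decider answers about its diagonal program, the program does the opposite – so no total, correct decider can exist.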

LISP and Eliza

By the time of the 1956 Dartmouth conference the limits of GOFAI were clear. Out of that milieu came a new tool: the programming language LISP, introduced by John McCarthy shortly after the conference and based closely on the λ calculus. LISP was a success and allowed people to program algorithms based on symbolic reasoning.

Even LISP, however, did not lead to real breakthroughs. One of the best known programs of the era was ELIZA, an example of what’s now known as a chatbot. It imitated a therapist responding to the user’s input, using heuristics rather than algorithms. For example, if the user mentions that they have a brother, it asks “tell me more about your brother”.
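The heuristic style is easy to sketch. The rules below are illustrative inventions, not Weizenbaum’s actual script:

```python
import re

# Toy ELIZA-style keyword heuristics: scan for a pattern, echo a canned
# response built from the match, fall back to a stock phrase.
RULES = [
    (re.compile(r"\bmy (brother|sister|mother|father)\b", re.IGNORECASE),
     lambda m: f"Tell me more about your {m.group(1).lower()}."),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     lambda m: f"Why do you say you are {m.group(1)}?"),
]

def respond(text):
    """Answer with the first matching keyword rule, else a stock reply."""
    for pattern, make_reply in RULES:
        m = pattern.search(text)
        if m:
            return make_reply(m)
    return "Please go on."

assert respond("I have a problem with my brother") == "Tell me more about your brother."
assert respond("The weather is nice") == "Please go on."
```

There is no understanding anywhere in this loop – which is why its convincingness surprised even its author.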

It proved very convincing and popular with users. Ironically, ELIZA wasn’t written in LISP itself, but in SLIP, a LISP-like list-processing language.

A lot of effort went into machine translation and game playing. Finally IBM’s Deep Blue was able to play chess at world-champion level using traditional GOFAI game tree search. That was a high-water mark for GOFAI.
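The symbolic core of such engines is game tree search. A minimal negamax sketch in Python – real engines add alpha-beta pruning, move ordering and elaborate evaluation functions, but the recursion below is the essence; the toy Nim game is just there to exercise it:

```python
# Minimal negamax game-tree search: the score of a position is the best
# move's score, which is the negation of the opponent's score afterwards.
def negamax(state, depth, moves, apply_move, evaluate):
    """Score `state` from the point of view of the player to move."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)
    return max(-negamax(apply_move(state, m), depth - 1,
                        moves, apply_move, evaluate)
               for m in ms)

# Toy game: one-pile Nim, take 1 or 2 each turn, taking the last one wins.
moves = lambda n: [m for m in (1, 2) if m <= n]
apply_move = lambda n, m: n - m
evaluate = lambda n: -1 if n == 0 else 0   # no moves left: side to move lost

assert negamax(2, 10, moves, apply_move, evaluate) == 1    # winning position
assert negamax(3, 10, moves, apply_move, evaluate) == -1   # losing position
```

The search finds that piles divisible by 3 are lost for the player to move – pure symbol manipulation, no learning anywhere.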

GOFAI Superseded

It was only when researchers abandoned GOFAI and adopted machine learning (ML) and similar approaches not based on symbolic reasoning that progress was made in games and translation.

In retrospect we can plot the rise of GOFAI from the very dawn of writing and numbers to the end of Frege’s dream around 1900. After 1900 researchers discovered new and more powerful tools, like the λ calculus, but uncovered new and even more difficult problems, like deciding arithmetic truth. In the end almost every interesting problem is beyond the power of purely symbolic reasoning.

The same situation recurred with practical problems like machine translation – totally beyond the reach of purely symbolic computation.

The logjam was not broken till about 2000, when computers became powerful enough to support ML and similar approaches. Since then there have been dramatic advances in machine translation, game playing, image generation and other applications. GOFAI has been superseded in these areas by techniques not based on symbolic reasoning.

NOTE As many readers have pointed out, symbolic reasoning is not dead, and they bring up in particular the Lean theorem prover. But of how much interest is theorem proving compared to language translation? And how much of True Arithmetic can Lean cover?

About Bill Wadge

I am a retired Professor in Computer Science at UVic.
