The Rise and Fall of GOFAI [2600 views]

Recently various pundits (myself included) have announced the end of Good Old Fashioned AI (GOFAI). But it has an impressive history, and it encountered failure only on the verge of what would have been its greatest triumph.

What is GOFAI? Some say it is what grew out of the 1956 Dartmouth AI meeting. However, others have described it as based on “symbolic reasoning”, which has a history stretching back well before 1956. We’ll adopt this latter definition.

Numbers and the Abacus

The invention of numbers and numerals was probably the first triumph of GOFAI. Numbers and their symbols, numerals, make possible reliable symbolic reasoning about quantities.

The next triumph was arguably the invention of the abacus and similar mechanisms. The abacus was clearly a machine, even though it was not self-powered. It automated symbolic computation with numerals.

Syllogisms

Aristotle’s classification of valid syllogisms was a big step forward for symbolic logic. He even used variables. For example, the syllogism

  All A’s are B’s
  All B’s are C’s
  —————–
  All A’s are C’s

is valid but

  Some A’s are B’s
  Some B’s are C’s
  ——————–
  Some A’s are C’s

is not valid: for example, some mammals are pets and some pets are fish, but no mammals are fish.
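As an aside, the valid form (the medieval logicians’ “Barbara”) is a one-line theorem in a modern symbolic tool such as the Lean prover mentioned in the note at the end of this post. A minimal sketch, in Lean 4 syntax:

  -- Barbara: all A’s are B’s, all B’s are C’s, therefore all A’s are C’s.
  theorem barbara {α : Type} (A B C : α → Prop)
      (h₁ : ∀ x, A x → B x) (h₂ : ∀ x, B x → C x) :
      ∀ x, A x → C x :=
    fun x ha => h₂ x (h₁ x ha)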

It was more than a thousand years before symbolic logic made another significant advance.

The invention of mechanical calculators was a near triumph, except that they were not reliable: they had trouble with long carry chains.

Calculus

The calculus was a real breakthrough – now symbol manipulation could derive results about changing quantities and irregular shapes. In the decades after the calculus emerged, Euler and others perfected the mathematical notation we still use today.

This mathematical notation made possible Boolean algebra, extending the domain of symbolic reasoning to logic itself. The end of the 19th century saw the development of the predicate calculus and full first order logic, which Frege and Russell applied to set theory and the foundations of mathematics.

Russell’s Paradox

The greatest of triumphs was almost within reach: Frege’s axiomatization of set theory and, with it, all of mathematics. Then disaster struck. Bertrand Russell’s attention was drawn to a seemingly harmless Frege axiom (actually a schema): for every property there is a set of all objects having that property. In symbols, for every φ there is a set

  {x: φ(x)}

(call it s) such that x ∈ s iff φ(x).

Let φ(x) be x∉x and let s be

  {x: x ∉ x}

the set s of all sets which are not members of themselves. Taking x to be s itself, the definition gives s ∈ s iff s ∉ s – impossible.
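The manipulations reduce to a purely logical fact: no proposition can be equivalent to its own negation. With P standing for s ∈ s, the comprehension axiom delivers exactly P ↔ ¬P. A minimal sketch of the refutation, in Lean 4 syntax:

  -- The logical core of Russell’s paradox: P ↔ ¬P is contradictory.
  theorem no_self_negation (P : Prop) : ¬(P ↔ ¬P) :=
    fun h =>
      have hnp : ¬P := fun hp => h.mp hp hp
      hnp (h.mpr hnp)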

The End of Frege’s Project

This was curtains for Frege’s project, because an axiom scheme has to be consistent; otherwise you can prove anything.

There were various proposals for what to do about the problematic axiom. What eventually won out was a restricted comprehension schema: for every property φ and every set E there is a set s

  {x ∈ E: φ(x)}

of all elements of E with property φ.
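In standard first-order notation (rendered here in LaTeX) the schema reads: for each formula φ,

  \forall E \, \exists s \, \forall x \, \bigl( x \in s \leftrightarrow x \in E \land \varphi(x) \bigr)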

This seems to avoid the paradox, but who can be sure? The alternative was a hierarchy of sets, with comprehension on sets of level 𝛼 yielding a set of level 𝛼+1. Not very attractive.

The consensus was to proceed with the axiomatization of set theory without the assurance of consistency.

Gödel’s completeness/incompleteness

There was soon a triumph due to Kurt Gödel, who proved the completeness of first order logic: every formula true in all models has a finite proof. But disaster struck again with Gödel’s second result, confusingly called his incompleteness theorem. He showed that any consistent formal system powerful enough to formalize arithmetic is incomplete: there will exist true formulas that are not provable in the system.

In particular, Gödel produced a formula that is provably equivalent to the assertion of its own unprovability. Proving it would produce a contradiction, so unless the system is already inconsistent, the formula is unprovable but true. With a little extra work you can show that a system can’t prove its own consistency – unless it’s already inconsistent.
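Schematically, writing Prov for the arithmetized provability predicate and ⌜G⌝ for the numeral coding the formula G, the Gödel sentence satisfies the fixed-point equivalence (in LaTeX notation):

  G \leftrightarrow \neg\,\mathrm{Prov}(\ulcorner G \urcorner)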

ZFC

That was the end of the attempt to produce a provably consistent axiomatization of set theory. Any axiomatization could at best be hoped to be consistent, and would certainly be incomplete. Eventually logicians settled on Zermelo–Fraenkel set theory (ZF). ZF is demonstrably incomplete: it can’t prove the Axiom of Choice (AC), as Paul Cohen later showed, though Gödel showed that AC is consistent with ZF. We can therefore add AC to ZF, yielding ZFC, without increasing the risk of inconsistency.

Is ZFC consistent? There is something to be said for it: ZFC has been used and heavily studied for more than a century without any contradiction showing up. This is strong experimental evidence for the consistency of ZFC.

But that is already an admission of failure for GOFAI. GOFAI is supposed to rely on symbolic calculation, not experimental evidence. This suggests that with the consistency of ZFC we’ve reached the limit of what GOFAI can achieve.

One Step Forward, Two Steps Back

GOFAI certainly didn’t shut down after the bad news about consistency. In the 1930s Alonzo Church invented the λ calculus, which enabled symbolic reasoning about functions, including higher order functions. And Turing machines enabled symbolic reasoning about machine computation.
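To give the flavour of symbolic reasoning about functions: Church’s encoding represents the number n as the higher-order function that applies f exactly n times. A toy rendering in Python (an illustration, not historical code):

  # Church numerals: the number n is the function that iterates f n times.
  zero = lambda f: lambda x: x
  succ = lambda n: lambda f: lambda x: f(n(f)(x))

  def to_int(n):
      # Decode a Church numeral by iterating “add one” starting from 0.
      return n(lambda k: k + 1)(0)

  three = succ(succ(succ(zero)))
  print(to_int(three))   # prints 3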

With Turing machines and the lambda calculus it was possible to ask whether certain problems have mechanical solutions. The news here was uniformly bad.

A procedure to determine if a predicate logic formula is valid (true in all models)? Nope.

To determine if a formula of arithmetic is true? Nope.

To determine if two λ calculus formulas are equivalent? Nope.

If two Turing machines are equivalent? Nope.

And so on.
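All of these trace back, one way or another, to Turing’s halting problem. The diagonal argument behind it fits in a few lines of Python. Note that halts below is a hypothetical decider – the whole point is that no correct implementation can exist, so the stub is only there to make the sketch self-contained:

  def halts(program, argument):
      # Hypothetical decider for “does program(argument) halt?”.
      # No correct implementation can exist.
      raise NotImplementedError

  def diagonal(program):
      # Do the opposite of whatever halts predicts about program(program).
      if halts(program, program):
          while True:        # predicted to halt, so loop forever
              pass
      # predicted to loop forever, so halt immediately

  # diagonal(diagonal) halts exactly when halts says it doesn’t –
  # a contradiction, so halts cannot be implemented.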

LISP and Eliza

By 1956 and the Dartmouth conference the limits of GOFAI were clear. Shortly after the conference, John McCarthy introduced a new tool, the programming language LISP, based closely on the λ calculus. LISP was a success and allowed people to program algorithms based on symbolic reasoning.

Not even LISP led to real breakthroughs. Ironically, one of the best-known programs of the era was ELIZA, an example of what’s now known as a chatbot. It imitated a therapist responding to the user’s input. It used heuristics, not algorithms. For example, if the user mentions that they have a brother, it asks “tell me more about your brother”.

It proved very convincing and popular with users. Ironically, ELIZA wasn’t written in LISP itself but in SLIP, a LISP-like list-processing language.
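The flavour of those heuristics is easy to convey. A minimal sketch in Python – the two rules here are invented for illustration, and Weizenbaum’s actual script was far richer:

  import re

  # Each rule pairs a keyword pattern with a template echoing the match.
  RULES = [
      (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
      (re.compile(r"\bI feel (\w+)", re.IGNORECASE), "Why do you feel {0}?"),
  ]

  def respond(utterance):
      for pattern, template in RULES:
          match = pattern.search(utterance)
          if match:
              return template.format(match.group(1))
      return "Please go on."   # default when no rule fires

  print(respond("I worry about my brother"))   # Tell me more about your brother.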

A lot of effort went into machine translation and game playing. Finally IBM’s Deep Blue was able to play chess at world-champion level using traditional GOFAI game-tree search. That was a high-water mark for GOFAI.
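Game-tree search is itself a textbook piece of GOFAI – purely symbolic computation over board positions. A toy minimax sketch in Python, playing take-1-or-2-stones (an illustration only; Deep Blue used a heavily optimized alpha-beta search running partly in custom hardware):

  # Toy game: a pile of stones; players alternately remove 1 or 2 stones,
  # and the player who cannot move (facing an empty pile) loses.
  def moves(pile):
      return [m for m in (1, 2) if m <= pile]

  def minimax(pile, maximizing):
      # Return +1 if the maximizer wins with best play, -1 otherwise.
      if not moves(pile):                 # no moves: the player to move loses
          return -1 if maximizing else 1
      results = [minimax(pile - m, not maximizing) for m in moves(pile)]
      return max(results) if maximizing else min(results)

  print(minimax(4, True))   # 1: the first player wins (take 1, leaving 3)
  print(minimax(3, True))   # -1: multiples of 3 are losing for the mover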

GOFAI Superseded

It was only when researchers abandoned GOFAI and adopted machine learning (ML) and similar approaches not based on symbolic reasoning that progress was made in games and translation.

In retrospect we can plot the rise of GOFAI from the very dawn of writing and numbers to the end of Frege’s dream around 1900. After 1900 researchers discovered new and more powerful tools, like the λ calculus, but uncovered new and even more difficult problems, like deciding arithmetic truth. In the end almost every interesting problem is beyond the power of purely symbolic reasoning.

The same situation recurred with practical problems like machine translation – totally beyond the reach of purely symbolic computation.

The logjam was not broken till about 2000, when computers became powerful enough to support ML and similar approaches. Since then there have been dramatic advances in machine translation, game playing, image generation and other applications. GOFAI has been superseded in these areas by techniques not based on symbolic reasoning.

NOTE: As many readers have pointed out, symbolic reasoning is not dead, and they bring up in particular the Lean theorem prover. But of how much interest is theorem proving compared to language translation? And how much of True Arithmetic can Lean cover?


A Bad Trip to Infinity [5000 views]

The fear of infinity is a form of myopia that destroys the possibility of seeing the actual infinite.
~ Georg Cantor

Recently NETFLIX released a documentary on the mathematical concept of infinity, titled A Trip to Infinity. NETFLIX’s trip is a bad one.

Continue reading


GOFAI is dead – long live (NF) AI! [10,000 views]

Art is what you can get away with.
– Marshall McLuhan

[All the images in this post were produced with generative AI – Midjourney, DALL-E 2, Stable Diffusion.]

I’d like to give you my thoughts on the recent amazing developments in AI (Artificial Intelligence).

I’m a retired (emeritus) professor of computer science at the University of Victoria, Canada. I ought to know a bit about AI because I taught the Department’s introduction to AI course many times. 

All I can say is thank God I’m retired. I couldn’t have kept up with the breakthroughs in translation, game playing, and especially generative AI.

Continue reading


50 Years of Wow – I lived through 5 decades of computing milestones [4500 views]

Everyone’s all, “Wow, ChatGPT, amazing, a real milestone, everything will change from now on”. And they’re right – but they probably don’t realize that this is not the first time something like this has happened. In fact there’s been wave after wave of technological innovation in computing ever since the industry got started in the 1950s. Here are some of the waves I’ve experienced personally.

The Stone Age of computing

When I got into computing, in 1965 at UBC in Vancouver, it was all very primitive. I took a numerical analysis course while doing a math degree.

IBM 7040

I learned Fortran and later, that summer, IBM 7040 assembler, which I got pretty good at. However, when I moved on to Berkeley to do a math PhD I almost completely dropped computing, because I thought it was primitive – punched cards and Fortran – and not really going anywhere. But I kept my hand in.

LISP

While I was at Berkeley I was introduced to LISP and again my mind was boggled. I was very taken by recursively defined algorithms working on hierarchical structures. FORTRAN didn’t support recursion, nor did it have hierarchical data structures.

Given that FORTRAN was my first language, these omissions could have scarred me for life, but instead I really took to LISP. In retrospect LISP was a major milestone because it brought recursion into the mainstream.

However, the first LISP systems were pretty sluggish and I mistakenly dismissed LISP as impractical. I continued my math research into what would become known as Wadge degrees.

I was wise to pursue my interest in computing because by the time I finished my PhD the math job market had collapsed. My first teaching position was in computer science, at the University of Waterloo. In those days there weren’t enough CSC PhDs to staff a department so they hired mathematicians, physicists, engineers etc.

Time Sharing

Between my time at UBC and my arrival at Waterloo there was one big milestone – time sharing. Dozens of people could share connections to mainframes, which had become bigger and more powerful. This was mind-boggling, because before, you had to wait your turn for computer access. At UBC I punched my program onto cards and left the deck at reception. I’d come back one or two hours later (the “turnaround time”), when the programs in the “batch” had been run, and pick up the output – which would typically contain an error report.

It took forever to produce a correct program. Time sharing cut turnaround time to seconds and greatly sped up the software development process. Thus was born “interactive” computing.

Time sharing needed a lot of supporting technology: an operating system with files, terminals for users, file browsers, compilers, and file editors.

UNIX

All this technology quickly arrived with the UNIX operating system, an unofficial project at Bell Labs developed on an abandoned computer. UNIX conquered the world and still dominates to this day.

When UNIX arrived in a department it was like Christmas because there was a whole tape full of goodies. You got the C language and its compiler, the vi screen editor, and utilities like yacc, awk, sed, and grep. UNIX was a huge leap forward and an unforgettable milestone.

Soon there was a whole ecosystem of software written in C by UNIX users. Unix was a real game changer.

Terminals

One of the UNIX utilities was the shell (sh), a simple CLI. It was easy to set up a terminal, like a VT100, to run the shell. The shell allowed users to define their own shell commands, and even allowed recursive definitions. An indispensable command was vi, the screen editor. It replaced line editors, which were very difficult to use.

The end result was that anyone with a UNIX terminal had at their disposal the equivalent of their own powerful computer. At first the terminals were put in terminal rooms but gradually they were moved into individual offices.

Digital typesetting

About the time UNIX showed up, departments started receiving laser printers that could print any image. In particular they could print beautifully typeset documents, including ones with mathematical formulas. The only question was, how to author them?

UNIX had the answer, a utility called (eventually) troff. To author a document you created a file with a mixture of your text and what we now call markup. You ran the file through troff and sent the output to the printer.

The only drawback was that you had to print the document to see it – a VT100 could only display characters. This problem was solved by the next milestone, namely the individual (personal) computer.

Workstations

A personal computer (often called a “workstation”) was a stand-alone computer designed to be used by one individual at a time – a step back from timesharing. The workstations were still networked and usually ran their own UNIX. The crucial point is that they had a graphics screen and could display documents with graphs, mathematics, and even photographs and other images.

As workstations became more common, timesharing decreased in importance till only communication between stations was left.

EMAIL

Originally communication was by file transfer but this was quickly replaced by email as we know it. At first senders had to provide a step-by-step path to the receiver but then the modern system of email addresses was introduced.

At this point many computer people relaxed, thinking we’d finally reached a point where there were few opportunities for innovation – boy were they wrong!

The Web

At CERN in Switzerland Tim Berners-Lee decided that email lists were inefficient for distributing abstracts and drafts of physics papers. He devised a system whereby each department could mount a sort of bulletin board. This was the origin of the Web.

Unfortunately, at first no one used it – until Berners-Lee put the CERN phone directory on the Web and its popularity took off. Soon there were thousands of web sites opening every day.

If anything the Web was a more significant milestone than even UNIX or timesharing. I remember playing around and discovering Les Très Riches Heures du Duc de Berry – medieval images in glorious color on the Web (via a Sun workstation). My mind was, of course, boggled, and obviously not for the first time.

Minor milestones

After timesharing, UNIX and the Web, the other important milestones seem almost minor by comparison. There’s Microsoft’s Office suite, including the spreadsheet Excel. Then there’s the iPhone and other smartphones. People with a phone carry around a computer thousands of times more powerful than the old 7040 I learned computing on.

And yet the innovation doesn’t stop. For a long time AI was a bit of a joke amongst computer scientists – we called it “natural stupidity”. The translations were of poor quality, and although computers mastered chess, a more complex game like Go (Baduk) was beyond their reach. Periodically industry and granting agencies would get discouraged and an “AI winter” would set in.

AI Summer

Then, a very few years ago, this changed dramatically. Computers mastered Go, and the translations became almost perfect (at least between European languages). What I call the “AI Summer” had arrived.

This was all thanks to employing new strategies (ML, neural nets) and using the vast stores of data on the internet.

And now we have Midjourney etc. and ChatGPT. Very significant milestones, but not, as we have seen, the first – or the most significant.


Just How Smart are You, ChatGPT? I quiz ChatGPT about math. [7600 views]

Everyone’s heard about ChatGPT, the latest and most sophisticated chatbot to date. We all know it can BS proficiently about ‘soft’ topics like English literature. I decided to quiz it about a hard topic: mathematics. As you probably know, I have a PhD in math, so I won’t go easy.

Continue reading

To Be or Not to Be – Mathematical Existence and the Axiom of Choice [4200 views]

The axiom of choice (AC) seems harmless enough. It says that given a family of non-empty sets, there is a choice function that assigns to each set an element of that set.
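In symbols (LaTeX notation): for every family 𝓕 of non-empty sets there is a function f with

  f : \mathcal{F} \to \bigcup \mathcal{F}, \qquad f(S) \in S \ \text{ for every } S \in \mathcal{F}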

AC is practically indispensable for doing modern mathematics. It is an existential axiom that implies the existence of all kinds of objects. But it gives no guidance on how to find examples of these objects. So in what sense do they exist?

Continue reading

We Demand Data – the story of Lucid and Eduction [1800 views]

Power concedes nothing without a demand.
-Frederick Douglass

When the late Ed Ashcroft and I invented Lucid, we had no idea what we were in for.

Continue reading


Hyperstreams – Nesting in Lucid [1100 views]

When Ed Ashcroft and I invented Lucid, we intended it to be a general-purpose language like Pascal (very popular at the time).

Pascal had while loops, and we managed to do iteration with equations (a flavour of which is sketched below). However, in Pascal you can nest while loops and have iterations that run their course while the enclosing loops are frozen. This was a problem for us.
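For readers who haven’t seen Lucid: an equation like n = 0 fby n + 1 defines n as the entire stream 0, 1, 2, …. A rough Python-generator analogue of that single, unnested iteration (my illustration, not PyLucid code):

  from itertools import islice

  # n = 0 fby n + 1: the stream whose first value is 0, followed by n + 1.
  def n():
      value = 0
      while True:
          yield value            # current element of the stream
          value = value + 1      # “n + 1”: the rest of the stream

  print(list(islice(n(), 5)))    # [0, 1, 2, 3, 4]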

Continue reading


PyLucid – where to get it, how to use it [150 views]

Recently there was a post on the HN front page pointing to a GitHub repository containing an old (2019) version of the source for PyLucid (I don’t know who posted it). It generated a lot of interest in Lucid, but unfortunately the 2019 version is crude and out of date, and probably put people off.

I’m going to set things right by releasing an up-to-date version of PyLucid (Python-based Lucid) together with up-to-date instructions on how to use it to run PyLucid programs. The source can be found at pyflang.com, and this blog post is the instructions. (The source also has a brief README text file.)

Continue reading


Shennat dissertation: Dimensional analysis of Lucid programs [420 views]

I recently graduated my 17th and last PhD student, Monem Shennat. This is the abstract of his dissertation with my annotations (the abstract of a University of Victoria dissertation is limited to 500 words).

The problem he tackled was dimensional analysis of multidimensional Lucid programs. This means determining, for each variable in the program, the set of relevant dimensions – those whose coordinates are necessary for evaluating individual components.

Objective: to design Dimensional Analysis (DA) algorithms for the multidimensional dialect PyLucid of Lucid, the equational dataflow language. 

Continue reading
