I’ve already written about the origins of Lucid but that was a dry, technical, and incomplete post. Here is the real story, with all the drama and passion, the thrill of victory, the agony of defeat.
Well maybe not quite. But with the human element.
I arrive in style
I arrived in Waterloo nearly 50 years ago for my first academic job, as an assistant professor in the young Computer Science department. (On a “definite term” contract, whose meaning I would eventually discover.)
I arrived in a limousine – like everybody who came in from the Toronto airport. I’d flown in from Berkeley, California, where I’d spent five idyllic years as a Math grad student. Boy, those days were over; talk about culture shock.
Nevertheless I fit in pretty quickly and soon made many good friends in both Computer Science and Pure Math. I’m still in touch with some of them.
I didn’t know much about computer science, just assembly and FORTRAN programming, at which I was pretty good. But I had ideas.
I see the light
I was very lucky in that many of my new colleagues knew a lot about computer science and, more importantly, were happy to share their knowledge. One in particular was Al Davis from Utah, who told me about this character Edsger Dijkstra and this radical “structured programming” idea.
I read Dijkstra’s “Goto considered harmful”. I was an instant convert.
Al and I wrote a paper (my first) about a structured language DPL which we invented (but didn’t implement). It had some interesting ideas, including variable binding operators.
I thought more and more about structured programming and became more radical in my views, eventually concluding that the assignment statement itself was “harmful” (Al bowed out at this point).
Meanwhile, little did I realize that I was becoming a huge disappointment to the powers that be in the Department. When they hired me they thought I would work on computational complexity, which was a big deal (and still is). In fact when I was at Berkeley, Steve Cook was there as a young assistant professor. He gave a course on complexity which I took.
Unfortunately (for me and complexity) I never took to it. Not Cook’s fault – he gave a good course. I just wasn’t interested. Pity really (at least careerwise), because I could have been on the ground floor of an area with a huge future. But it seemed to me, rightly, that inefficiency was not the cause of the software crisis; structured programming seemed more relevant.
Assignment considered harmful
In particular, statements like I=I+1 really bothered me, since the variable I seemed to denote different things on either side of the so-called equation. This offended my mathematical sensibilities and also made reasoning about the statement difficult.
I came up with a solution, which was to preface these pseudo-equations with the keywords first and next. Thus if I is initially 1 and increases by 1 on each iteration we write
first I = 1
next I = I+1
I’m not sure why I thought this solved anything. Note that originally the keywords applied to the whole assignments, as if they were parenthesized
first (I = 1)
next (I = I+1)
Anyway one day I was sitting in my red-doored windowless office … Yes, windowless. Waterloo had windowless inner corridor offices. Even for assistant professors. You had to earn a window, and I never did in my three years there. Also the low status offices had red doors, whereas the more prestigious ones had smart black doors. I had a lot to learn about academia.
Anyway there I am in my stuffy office behind my low status red door looking at these lines when I saw them with new eyes. As if they were parenthesized like this:
first(I) = 1
next(I) = I+1
and it occurred to me that this might actually solve something. The second equation no longer claims that I is itself plus one; it says that something called next(I) is I + 1. As an equation it no longer implies 0=1.
I was so excited I left my cramped office and hurried down the Pure Math corridor. (My office was in the Pure Math area. I was never moved to the computer science area).
I crossed into the computer science area and went straight to Ed Ashcroft’s black-doored office. Ed (who I’d been friends with for a while) welcomed me in and offered me a seat in the glorious natural light of his window. As I shaded my eyes from the unaccustomed illumination, I explained my discovery. You can define iteration with consistent equations!
Ed thought it was intriguing and asked me what exactly was “next(I)”. I answered that … err … it was “the next value of I”. This didn’t seem to satisfy him, and he explained he’d made a resolution not to work on programming language ideas that don’t have a formal semantics.
Semantics? I’m not sure I’d even heard the word before (in a computer science context; in logic, obviously). Ed encouraged me to work on this.
Back in my broom closet I mulled over the meaning of next. It seemed to act like a function: the expression next(next(I)) made sense, it was I+2. As a function next was linear; clearly next(I+J) was equal to next(I)+next(J), and next(2*I) must be 2*next(I).
On the other hand next(0) is definitely 0, since 0 doesn’t change. And next(1) must be 1, and next(2) must be 2, and so on. Normally, this means next is the identity and so next(I) is I.
But it isn’t. The whole point is next(I) is I+1. Stalemate.
Finally, finally, it dawned on me that I is neither 0, nor 1, nor 2 … it’s not a constant, it’s a variable that changes with time. It contains multitudes.
Let’s equate I with the series of values it takes on,
I = <1,2,3,4,…>
so that next(I) is the same series shifted left, <2,3,4,5,…>. And sure enough next(I) is I+1, provided
1 = <1,1,1,1,…>
and addition works pointwise. Looked like semantics to me.
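This stream reading lends itself to a quick sketch in code. Here is a minimal rendering in Python (my modeling choices, not Lucid’s notation or implementation): a variable denotes an infinite stream, modeled as a zero-argument function returning a fresh iterator; nxt drops the head, const lifts a constant to an unchanging stream, and add works pointwise.

```python
from itertools import islice

def const(c):
    """Lift a constant to the stream <c, c, c, ...>."""
    def s():
        while True:
            yield c
    return s

def nxt(stream):
    """next(I): the stream with its first value dropped."""
    def s():
        it = stream()
        next(it)          # discard the head
        yield from it
    return s

def add(a, b):
    """Pointwise addition of two streams."""
    def s():
        yield from (x + y for x, y in zip(a(), b()))
    return s

def I():
    """I = <1, 2, 3, 4, ...>"""
    n = 1
    while True:
        yield n
        n += 1

def take(stream, k):
    """A finite prefix, for inspecting infinite streams."""
    return list(islice(stream(), k))

# next(I) equals I + 1, checked on a finite prefix:
print(take(nxt(I), 5))              # [2, 3, 4, 5, 6]
print(take(add(I, const(1)), 5))    # [2, 3, 4, 5, 6]
```

The streams are lazy, so the equations describe infinite objects while the program only ever materializes finite prefixes.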
I’d written these equations on a file card that was at hand (I saved it for many years). I grabbed the card, shot down the pure math corridor, crossed the little lobby, entered the computer science section, knocked at Ed’s black door and sat blinking in the glare of his window.
“Looks like semantics to me”, he said. “Let’s get to work!” Or words to that effect.
By then I’d found out what “definite term” meant. Two years and out, with no right to be considered for renewal. I was not tenure track.
However they generously considered me for renewal anyway … and rejected it. But Pure Math gave me a one-year postdoc, for which I was grateful.
Ironically, it was during that third year, in Pure Math, that Ed and I got to work on what a Pure Math friend suggested we call “Lucid”.
Ed was a wonderful collaborator. He was technically even stricter than me (with my logic background) and also inventive. Two examples will make this clear.
At one point I came up with the fby operator so that the two first/next equations for I above could be combined as
I = 1 fby I+1
As a result every program could be made definitional: every variable having a single unique defining equation. Ed pointed out that each such program has a unique least fixed point, which we can take as the meaning of the program (more semantics). I’d assumed we’d need syntactic constraints of some kind and was very relieved.
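One way to see how a single fby equation defines a whole stream is to unfold it lazily; that unfolding is exactly the least fixed point. A hedged sketch in Python (generators stand in for Lucid’s lazy streams; for simplicity my fby takes a plain value on the left, whereas in full Lucid the left operand is itself a stream whose first value is taken):

```python
from itertools import islice

def fby(head, rest):
    """x fby y: head's value first, then everything from rest.
    rest is a thunk so the recursive definition stays lazy."""
    def s():
        yield head
        yield from rest()
    return s

# I = 1 fby I + 1, written as a recursive stream definition.
def I():
    return fby(1, lambda: (x + 1 for x in I()))()

print(list(islice(I(), 6)))   # [1, 2, 3, 4, 5, 6]
```

Because evaluation is demand-driven, the self-reference in the definition of I never loops: each element needed forces only the finitely many elements before it.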
Another time I wanted to write a bubble sort and was lamenting the lack of arrays. Ed pointed out that we didn’t need them. We simply pass the unsorted stream through a series of bubble filters till they’re in order. (Al Davis had filled us in about dataflow).
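Ed’s trick can be sketched like this (Python; the filter decomposition is from the story above, the code details are my own guess at one reasonable rendering): each bubble filter holds back the largest value it has seen so far and lets smaller values flow past, so chaining n filters over an n-value stream delivers the values sorted, with no array anywhere.

```python
def bubble(stream):
    """One bubble pass as a stream filter: hold back the largest
    value seen so far, emitting the smaller of each incoming pair."""
    it = iter(stream)
    held = next(it)
    for x in it:
        if x < held:
            yield x          # x is smaller: let it through
        else:
            yield held       # x is the new largest so far
            held = x
    yield held               # the pass's maximum comes out last

def bubble_sort(values):
    """Sort by piping the stream through n bubble filters."""
    stream = iter(values)
    for _ in range(len(values)):   # n passes suffice for n values
        stream = bubble(stream)
    return list(stream)

print(bubble_sort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]
```

Each call to bubble wraps the previous stage lazily, so the chain of filters is a little dataflow pipeline: values trickle through all the stages one at a time rather than pass by pass over an array.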
And finally I admired his work ethic. He got us to start writing a paper about stuff we were working on even before we’d figured it out. Then when the research was done, the paper was done. I should have done this with my own work.
After I was definitely terminated I moved to Warwick in the UK where, amazingly, I immediately got an office to myself with a window and – I believe – a black door.
I’d made it.
Is the implementation ‘pLucid’ a play on words of the word ‘pellucid’ by chance?
I really enjoyed reading your book on Lucid, partly because it brings the reader along on this process of discovering the language much the way you invented it. ‘Lifting’ ordinary functions and individual constants to pointwise operators on streams and constant streams is one of those classic mind-expanding ideas. Another one is the realization that assignments run afoul of Knuth’s “premature optimization is the root of all evil” philosophy — with a pure functional language, the result of a function could be cached, but it isn’t necessary — that’s an optional trade-off between space and time complexity, as your “retirement schedule” illustrates nicely. Imperative programming mixes memory management into the ‘business logic.’ Another one is putting the human inside the loop: “the person sitting at the terminal will be able to act as a `human filter’. Values of x will be sent one by one to the terminal, and values of y will be read from the keyboard.” So many gems.
Also, “the order of the statements within the given block is irrelevant — does not affect the meaning of the program.” The nice thing about graphical representations of pure functional languages is that they *force* this to be true of the representation. I wish someone would write a nice graphical editor as a supplement to parenthesis-oriented programming. It’s always nice to have more than one way to think about a language.
Thanks for the encouraging comments
I don’t remember if your door at Warwick was black or not, but I’ve always remembered the sign you used to have behind your desk: ‘Bugs are for losers’. It influenced my early career in software development more than you would imagine! Thank you.