PyLucid: Calculating Dimensionalities with Yaghi Code

When Lucid first came out decades ago it was a very primitive language. It had only program variables and built-in operators and functions, like next or fby. Users could not define their own functions (or “subroutines” as they were often called). Yaghi code would change all that.

Nevertheless this simple (in modern terminology “0-order”) language was rich enough to make the point that iterative algorithms could usefully be expressed in terms of equations.

However Ed Ashcroft and I keenly felt the absence of subroutines/procedures/functions (or whatever you want to call them) and agonized over the form they should take and the semantics they should have. Finally it dawned on us that in their most general form they denoted functions from streams to streams – what are now called “filters”, after UNIX.

For example, the square function maps the stream

1, 2, 3, 4, 5, …

into the stream

1, 4, 9, 16, 25, …

But filters like these – which correspond to data functions – are only a tiny fraction of the possibilities. More interesting is the running sum filter which transforms, say,

1, 2, 0, 3, 1, 5, …

into

1, 3, 3, 6, 7, 12, …

In terms of computation, it acts like an OO object with an internal memory that stores the sum so far.

All this alarmed us, because the question was: how were we going to implement these filters? Implementing OO is a big job. Would we need coroutines and concurrency? It didn’t look very hopeful.

We’d already run into a problem with the 0-order language. We thought we could translate programs into dataflow nets, but the standard dataflow model is eager (data-push), which is inconsistent with the interpretation of programs as sets of equations.

Eduction

Eventually David May (at Warwick) and Tom Cargill (at Waterloo), both grad students at the time, independently came up with the solution, namely time-stamped data-pull. Their interpreters worked by demanding values of variables at specified times, these demands usually generating demands for possibly different variables at possibly different times. The whole computation is driven by demands for output at times 0, 1, 2, 3, …

This works without any memory but is hopelessly inefficient. Acceptable performance requires a cache of (variable, timetag) → result entries in what we called the “warehouse”. We named the whole process “eduction”.
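
To make eduction concrete, here is a minimal sketch in Python of a demand-driven evaluator for a 0-order program computing the running sum above. The names demand and warehouse (and the hard-coded equations) are illustrative only, not code from the May or Cargill interpreters.

# Sketch of eduction: time-stamped, demand-driven evaluation with a warehouse.
X = [1, 2, 0, 3, 1, 5]        # the input stream from the running sum example

warehouse = {}                # (variable, timetag) -> result

def demand(var, t):
    # Demand the value of variable `var` at time `t`, consulting the cache first.
    key = (var, t)
    if key in warehouse:
        return warehouse[key]
    if var == "x":            # x = the input stream
        val = X[t]
    elif var == "s":          # s = x fby s + next x   (running sum of x)
        val = demand("x", 0) if t == 0 else demand("s", t - 1) + demand("x", t)
    else:
        raise NameError(var)
    warehouse[key] = val
    return val

# The whole computation is driven by demands for output at times 0, 1, 2, ...
print([demand("s", t) for t in range(6)])   # [1, 3, 3, 6, 7, 12]

Without the warehouse, each output demand would recompute all the earlier sums from scratch; with it, each (variable, time) pair is evaluated exactly once.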

Eduction solves the problem of implementing the 0-order language, but not that of “UDFs” (User Defined Filters/Functions). What does it mean to demand the 5th component of foo(X,Y)? This must produce demands for elements of X and Y; but how?

The Ostrum Interpreter 

The solution showed up one day in the form of a large C program written by Calvin Ostrum, then an undergraduate student at Waterloo. It was an interpreter for what Ostrum called “Luthid”, a version of Lucid based on Lisp datatypes. It had limited error handling, no storage management of the warehouse, and the source was incomprehensible. But it supported UDFs.

At that time my first student, Azio (Tony) Faustini, was a Warwick postdoc. We studied the Luthid interpreter very carefully and discovered that Ostrum had worked out a coordinate system for the calling trees of programs with UDFs. These coordinates constituted a second dimension (which we called the “place” dimension), and the place tag together with the time tag and variable name forms the demand. A user-defined function foo(p,q) becomes an ordinary variable that varies over place and time, as do its formals p and q. In other words, a first-order program is compiled into a 0-order program with extra operators. Ostrum did not use the OO object interpretation of filters.

The pLucid Interpreter

Faustini cleaned up the Luthid interpreter, added excellent error handling and reporting, and implemented our “retirement scheme” cache management system. We also converted it to use, not Lisp, but the data types of POP-2, an AI language invented at Edinburgh. POP-2 had basically the same data types and operations as Lisp but used a more user-friendly infix notation. The resulting language, which we called “pLucid” (POP-2 based Lucid), was widely used and was the language discussed in the book Lucid, the Dataflow Language. (Incidentally, POP-2 is itself dead but lives on as POP-11.)

Yaghi Code

At about the same time my PhD student Ali A. G. Yaghi formalized and extended Ostrum’s place coordinates. He described them algebraically in terms of intensional logic, the logic of context sensitive expressions.

I’ll describe the simplest case. Suppose we have the program

fib(a+b)
   where
    a = 1 fby a+1;
    b = 5;
    fib(n) = if n<2 then 1 else fib(n-1)+fib(n-2) fi;
   end

To transform this into Yaghi code we number the occurrences of calls to fib: call 0 is the call with argument (actual parameter) a+b, and calls 1 and 2 have actual parameters n-1 and n-2 respectively. We replace the ith call with the operator calli applied to fib and define n to be the operator actuals applied to the actuals listed in order. The program becomes

call0(fib)
    where
     a = 1 fby a+1;
     b = 5;
     fib = if n<2 then 1 else call1(fib)+call2(fib) fi;
     n = actuals(a+b,n-1,n-2)
    end

and the magic is done – we have transformed an order 1 program into an order 0 program (calli and actuals are built-in operators, not UDFs).

The transformed program is evaluated in the context of a place code, a finite sequence of small natural numbers (in this case 0, 1, or 2) that can be thought of as coding a position in the calling tree of fib.

The basic rules are that

calli(f) @ <p0, p1, p2, …> = f @ <i, p0, p1, p2, …>
actuals(a0, a1, a2, …) @ <p0, p1, p2, …> = a(p0) @ <p1, p2, p3, …>
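
Here a(p0) denotes the p0-th actual in the list. To see the rules in action, below is a minimal demand-driven sketch in Python of the transformed fib program, where a demand is now a (variable, time, place code) triple. The names and representation (place codes as Python tuples) are assumptions made for illustration, not PyLucid internals.

warehouse = {}                           # (variable, timetag, place) -> result

def demand(var, t, place):
    key = (var, t, place)
    if key in warehouse:
        return warehouse[key]
    if var == "a":                       # a = 1 fby a+1
        val = 1 if t == 0 else demand("a", t - 1, place) + 1
    elif var == "b":                     # b = 5
        val = 5
    elif var == "fib":                   # fib = if n<2 then 1 else call1(fib)+call2(fib) fi
        if demand("n", t, place) < 2:
            val = 1
        else:                            # calli prepends i to the place code
            val = (demand("fib", t, (1,) + place) +
                   demand("fib", t, (2,) + place))
    elif var == "n":                     # n = actuals(a+b, n-1, n-2)
        p0, rest = place[0], place[1:]   # actuals pops the head of the place code
        if p0 == 0:
            val = demand("a", t, rest) + demand("b", t, rest)
        elif p0 == 1:
            val = demand("n", t, rest) - 1
        else:
            val = demand("n", t, rest) - 2
    else:
        raise NameError(var)
    warehouse[key] = val
    return val

# The output call0(fib) prepends 0 to the (initially empty) place code:
print([demand("fib", t, (0,)) for t in range(5)])
# [13, 21, 34, 55, 89], i.e. fib(6) .. fib(10), with fib(0) = fib(1) = 1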

Recursive programs like those above can require arbitrarily long place codes. However, the interpreter does not have to drag around arbitrarily long sequences. We use a technique called hash-consing that gives every sequence a unique natural-number code, and only this code is used, say, as part of the warehouse tag.
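
A minimal sketch of such hash-consing, assuming each place code is built by prepending a call index to an already-coded tail; intern, codes, and EMPTY are illustrative names, not PyLucid internals.

codes = {}        # (call index, code of tail) -> unique natural number
EMPTY = 0         # the code of the empty place sequence
next_code = 1

def intern(i, tail_code):
    # Return the unique natural-number code of the sequence <i, tail...>.
    global next_code
    key = (i, tail_code)
    if key not in codes:
        codes[key] = next_code
        next_code += 1
    return codes[key]

# The place code <2, 1, 0> built inside out; warehouse tags need only store
# the resulting small integer, never the sequence itself.
c = intern(2, intern(1, intern(0, EMPTY)))
assert c == intern(2, intern(1, intern(0, EMPTY)))   # same sequence, same code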

pLucid resurrected

The language pLucid fell out of use in the 1980s and was succeeded by GLU (granular Lucid), a Lucid dialect designed and implemented at SRI in California. GLU’s most novel feature was unlimited user-declared dimensions, in addition to place and time. (pLucid had multiple space dimensions but this was undocumented). GLU was developed by Faustini, Jagannathan, and Ashcroft.

Unfortunately GLU also fell out of use when its inventors all left SRI. That was also the end of the available Lucid interpreters. The GLU interpreter was privately funded and proprietary. Even I have no access to it. It will likely never see the light of day again.

As for the pLucid interpreter, it was written in a day when C compilers were much more forgiving. Contemporary attempts to compile it produce warnings or errors approximately every 10 lines. A lost cause.

Thus began a dark age – a Lucid winter – of about two decades in which there was no Lucid interpreter available. I’d started a project based on what I called the “pop shop” written in C (a lot of C) but it was seriously over-engineered and in the end came to nothing.

Flash forward to 2019. Having recently retired I had time on my hands and decided (foolishly?) to redo the popshop but in Python and minus the over engineering. What made me choose Python was simplicity, popularity, and dictionaries (which incorporate hash consing).

It all went quicker than I had any right to hope for, and by early 2020 (otherwise known as the annus horribilis) I had a full-featured pLucid interpreter which, of course, I had no alternative but to call PyLucid. PyLucid extended pLucid by the tiny step of having a single space dimension. (I’m still not sure how to implement pLucid’s (unofficial) infinitely many space dimensions.)

Dimensionality Analysis

One problem remained, namely that of determining the dimensionality of variables and expressions: the set of dimensions actually needed to compute a value, leaving out those that are irrelevant. This is important because the tag (label) under which a value is stored in the warehouse must not contain irrelevant dimensions; otherwise the same result may be stored multiple times under tags differing only in those irrelevant dimensions.

My current student Monem Shennat has been working on this problem for 0-order programs. The basic idea is to iteratively refine approximations to the dimensionalities by recomputing them according to rules such as: the next approximation to the dimensionality of A fby B is the union of the current approximations for A and B, together with {t}. Recomputation continues until the approximations settle down, i.e. there is no change anywhere from one pass to the next.
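
As an illustration, here is a minimal sketch in Python of this refinement for a small 0-order program (x = 1 fby x+1; y = 5; z = x+y). The expression encoding and the rule set are assumptions made for the example, not Shennat’s actual implementation.

# Definitions as tiny expression trees: ("fby", A, B), ("plus", A, B), etc.
defs = {
    "x": ("fby", ("const",), ("plus", ("var", "x"), ("const",))),
    "y": ("const",),
    "z": ("plus", ("var", "x"), ("var", "y")),
}

def dim(expr, approx):
    op = expr[0]
    if op == "const":
        return set()                     # constants need no dimensions
    if op == "var":
        return approx[expr[1]]           # current approximation for the variable
    if op == "fby":                      # fby introduces the time dimension t
        return dim(expr[1], approx) | dim(expr[2], approx) | {"t"}
    if op == "plus":                     # data operators just merge dimensionalities
        return dim(expr[1], approx) | dim(expr[2], approx)
    raise ValueError(op)

# Start from the empty approximation and recompute until nothing changes.
approx = {v: set() for v in defs}
changed = True
while changed:
    changed = False
    for v, e in defs.items():
        new = dim(e, approx)
        if new != approx[v]:
            approx[v], changed = new, True

print(approx)    # {'x': {'t'}, 'y': set(), 'z': {'t'}}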

We both got stuck, however, on programs with UDFs. How does the dimensionality of foo(A,B) depend on the dimensionalities of A and B? That depends on the definition of foo, but how? One of the ideas was to abstract some sort of matrix that captures the dimensionality behaviour of foo, but how?

Then it occurred to me that we could evaluate the Yaghi-coded version of the program over the (small) domain of dimensionalities, {{}, {t}, {s}, {s,t}} with the subset ordering. There was a big snag with this, namely that for programs with recursion the evaluation does not terminate. The reason is that over dimensionalities there are no actual Boolean values to decide a test, so if-then-else-fi must make a worst-case analysis that explores both branches.

Finally it dawned on me that we could instrument the interpreter with counts that measure how far the (nonterminating) evaluation has proceeded. These counts could include the number of evaluations and the length of the longest place code (the recursion depth). Then we could run the interpreter with small bounds on these measures and rerun it with larger bounds until it settles down.
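
The outer loop might look like the following sketch, where abstract_eval(bound) is a hypothetical stand-in for the instrumented evaluator: it evaluates the Yaghi-coded program over dimensionalities, cutting off (say) place codes longer than bound.

def settle(abstract_eval, max_bound=16):
    # Rerun the bounded abstract evaluation with growing bounds until
    # two consecutive runs agree, i.e. the result has settled down.
    prev = None
    for bound in range(1, max_bound + 1):
        cur = abstract_eval(bound)
        if cur == prev:
            return cur
        prev = cur
    raise RuntimeError("no fixed point within max_bound")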

I adapted the evaluator to a version that runs over dimensionalities. It worked better than I expected, settling down for very small values, e.g. place codes of length less than 4. When I ran it on the example given above, it produced the output {t}, meaning that the program varies in time, which is correct. Notice that fib has no temporal operators; the alternate evaluator correctly detected that fib passes on the time sensitivity of its actual a+b (and + in turn passes on the time sensitivity of a).

Of course the alternate evaluator also works fine on 0-order programs without Yaghi operators. This saved Shennat the trouble of implementing his 0-order rules.

One more thing

With Yaghi code, 40-year-old technology saved the day. But in spite of its age it still has a lot to offer. For example, in the 1990s P. Rondogiannis and I discovered that a simple extension of Yaghi code could allow us to compile some programs of arbitrarily high order.

The idea is simple enough: use Yaghi operators to eliminate 0-order arguments, turning a (say) order-5 program into an order-4 program. Then use a duplicate set of Yaghi operators to eliminate what were order-1 arguments. In this way we can compile an order-5 program into a 0-order program with 5 sets of Yaghi operators using 5 isomorphic place dimensions.

Unfortunately this doesn’t work for all programs, or even for all typed higher-order programs. It can’t handle partial application, for example a function which returns another function as its value. Nevertheless it will handle many useful programs, for example one with a second-order function which takes a first-order function as its argument and returns an approximation to the integral of this function over a given range. Typically, numerical analysis programs are at most second order and amenable to this extended Yaghi treatment.

So why doesn’t PyLucid have such higher-order functions? The problem is the form such programs must take. We all worked hard to make PyLucid dynamic: type errors are caught at runtime and programmers do not have to provide cumbersome, complex type declarations. If we could find a way to write higher-order programs without them, we would lose no time adding higher-order functions to PyLucid. One idea is to have the interpreter infer the necessary types; we’re looking into this.

Almost Forgot …

… that I don’t know how to handle the place dimension in computing dimensionalities. We need to know whether or not the place dimension p is relevant to a variable, to avoid warehouse duplication etc. Obviously this means using the set-inclusion partial order over the domain

{{}, {s}, {t}, {p}, {s, t}, {s, p}, {t, p}, {s, t, p}}

but it’s not clear what the rules are. Definitely actuals(x,y,z,…) is sensitive to p, but calli(f) may not be (the example program’s output is call0(fib), which is not sensitive to p). The interpreter should compute the dimensionality of the program as {t}, not {t, p}.

This is related to the problem of infinite dimensions. Faustini’s pLucid allowed arrays of arrays of arrays … and this implied an indefinite series

s0, s1, s2, …

of space dimensions. How do you evaluate over an infinite domain? Perhaps by incrementally restricting it to finite subsets?

I’m looking into it …
