[This reports on research carried out in collaboration with my PhD student Monem Shennat and former student Omar Alaqeeli]

Dimensionality is a big issue with multidimensional Lucid – it means figuring out which dimensions are relevant to calculating the values of a given variable.

It’s a simple idea but surprisingly subtle. In this post I’ll deal with the simplest case: time sensitivity in ‘classical’ time-only Lucid – calculating which variables need to know the value of the time parameter *t*.

Here’s a simple program to compute the square root of the input (a floating point number) using Newton’s method:

A asa abs(A*A - X) < eps

where

X = first input;

eps = 0.0001;

A = 1 fby (X+A/X)/2;

end

The first step (which the pyLucid interpreter takes) is to convert it to a set of *atomic equations*, in which each variable is defined as a single operator applied to arguments. The conversion involves introducing extra ‘temporary’ variables *T00, T01, T02*, …. etc:

output = A asa T00;

T00 = T01 < eps;

T01 = abs(T02);

T02 = T03 - X;

T03 = A * A;

X = first input;

eps = real 0.0001;

A = 1 fby T04;

T04 = T05 / 2;

T05 = X + T06;

T06 = A / X;

The algorithm is iterative. It begins by assuming that none of the variables other than input are time sensitive and works through the equations accumulating variables that can’t be assumed to be constant.

In the first pass we notice that *A* is defined by *fby*, and any such definition is assumed to be time sensitive. (It may not be but this is a worst case analysis.)

On the other hand, on this first pass no other variables are added to our list. For example, *eps* is defined as a constant and *T02* is the difference of two variables that at this stage are still assumed to be constant.

On the next pass the time sensitivity of *A* is passed on to *T03* and *T06*. Again, to be on the safe side, we have to assume that an arithmetic operation applied to variables at least one of which is time sensitive is itself time sensitive.

At this stage our list of possibly time sensitive variables is *A*, *T03*, *T06*. The next pass adds *T02* and *T05* to the list and the pass after that adds *T04* and *T01*. Another pass adds *T00* but the next pass makes no changes.

We’re finished, and at this point we can be sure that the variables not in the list – *output*, *X*, and *eps* – are time constant. If they weren’t we would have found out by now.

Already this information is useful. It tells us the output of the program is a single constant. There is no need for the program to produce successive values of *output* because they will all be the same.

The general rules for the evaluation steps are very simple. Sensitivity is propagated (or not) depending on the operator. In particular

if V = first X then V remains insensitive

if V = next X then V becomes sensitive if X is sensitive

if V = X fby Y then V becomes sensitive

if V = X asa P then V remains insensitive

if V = X + Y and either X or Y is sensitive then V becomes sensitive

(and the same for multiplication, division, and any other data operations including if-then-else-fi)

input is time sensitive

The reevaluation proceeds until it settles down – that is, until a pass finds no new sensitive variables.
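The rules above can be sketched in a few lines of Python. This is only a sketch of the analysis, not the actual pyLucid code; the atomic equations are represented as (variable, operator, arguments) triples, with all the data operations lumped under a single tag:

```python
# A sketch of the sensitivity analysis (not the actual pyLucid code).
# Each atomic equation is a (variable, operator, arguments) triple;
# all data operations (+, -, *, /, abs, <, ...) are tagged "data".
EQUATIONS = [
    ("output", "asa",   ["A", "T00"]),
    ("T00",    "data",  ["T01", "eps"]),   # T01 < eps
    ("T01",    "data",  ["T02"]),          # abs(T02)
    ("T02",    "data",  ["T03", "X"]),     # T03 - X
    ("T03",    "data",  ["A", "A"]),       # A * A
    ("X",      "first", ["input"]),
    ("eps",    "const", []),
    ("A",      "fby",   ["1", "T04"]),
    ("T04",    "data",  ["T05"]),          # T05 / 2
    ("T05",    "data",  ["X", "T06"]),     # X + T06
    ("T06",    "data",  ["A", "X"]),       # A / X
]

def sensitive_vars(equations):
    sensitive = {"input"}                  # input is time sensitive
    changed = True
    while changed:                         # repeat until it settles down
        changed = False
        for var, op, args in equations:
            if var in sensitive:
                continue
            if op == "fby":
                found = True               # fby is always sensitive
            elif op in ("first", "asa", "const"):
                found = False              # these stay insensitive
            else:                          # next and data operations
                found = any(a in sensitive for a in args)
            if found:
                sensitive.add(var)
                changed = True
    return sensitive

print(sorted(sensitive_vars(EQUATIONS)))
# ['A', 'T00', 'T01', 'T02', 'T03', 'T04', 'T05', 'T06', 'input']
```

Run on the square root program, it discovers exactly the list derived by hand above, leaving *output*, *X*, and *eps* constant.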

In addition to allowing us to simplify output when the variable *output* is constant, this analysis lets us save space when caching. The values of variables like *A* above are cached, and each cached value is tagged with the value of the *t* coordinate, because different values correspond to different timepoints. But this is unnecessary for a variable like *X*, which is known to be constant. If we don’t take this information into account we will store many copies of the same value.
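The saving can be illustrated with a hypothetical cache (the class and its methods are my own sketch, not pyLucid’s actual store): sensitive variables are keyed by (name, *t*), constants by the bare name, so a constant occupies one slot no matter how many timepoints are evaluated.

```python
# Hypothetical cache sketch (not pyLucid's actual store): values of
# time-sensitive variables are tagged with the time coordinate t,
# while constants are stored under the bare variable name, once.
class Cache:
    def __init__(self, sensitive):
        self.sensitive = sensitive     # set of time-sensitive variables
        self.store = {}

    def _key(self, name, t):
        return (name, t) if name in self.sensitive else name

    def put(self, name, t, value):
        self.store[self._key(name, t)] = value

    def get(self, name, t):
        return self.store.get(self._key(name, t))

cache = Cache(sensitive={"A"})
for t in range(1000):
    cache.put("X", t, 2.0)             # constant: reuses one slot
    cache.put("A", t, float(t))        # sensitive: one entry per t
print(len(cache.store))                # 1001 entries, not 2000
```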

Dimensionality analysis is simple enough when we’re dealing just with the time dimension. Adding even one extra dimension – say space *s* – seriously complicates the analysis, as we will see.

It seems odd to me that you phrase the algorithm as an iterative process that eventually “settles down”. It seems to me that a more natural way to view it would be that the variables are nodes in a certain graph, and you’re finding nodes to which there is no path from a known-sensitive node. Of course, algorithms to compute this will involve iteration, but by taking the more abstract view of the problem you may be able to find understandings that may not have been obvious before. For instance, your description implies that the search is done breadth-first. By framing the question as one about graph traversal, you can pull in the enormous body of known results about such problems.

Have I misunderstood something?