By Bill Wadge
In the original Lucid language, the index domain (the set of natural numbers) was thought of as a set of timepoints: Lucid was designed as a temporal functional programming language.
Of course, by choosing the set of natural numbers as the set of timepoints we are at the same time choosing a very simple model of time. In this model there is a first instant, and every instant has a unique successor. This is the bare minimum required to formulate conventional iterative constructs.
The intensional paradigm, however, has no commitment to any particular model of time or to any corresponding particular set of timepoints. This would suggest investigating temporal programming based on a different notion of time.
In a previous post we described a very modest variant in which there are negative timepoints – allowing pre-initialization of forward loops and even iteration into the past.
In this post we examine the consequences of dropping the assumption that each timepoint has a unique successor. Instead, we will allow (some) timepoints to have multiple incomparable successors. We will also assume that the descendants of distinct timepoints remain distinct, so that the set of all timepoints forms a tree.
This notion of branching time may seem counterintuitive, but it is mathematically simple and has been studied by logicians. And, as we shall see, it has already proved to be very useful from the intensional programming point of view.
Let’s begin with the simplest branching-time model, one in which every instant has exactly two successors. The set of timepoints then forms a binary tree. If we let 0 denote the first successor and 1 the second, every timepoint is uniquely identifiable by a finite binary sequence. The first digit in the sequence identifies which successor of the starting timepoint was taken; the second, which successor of the successor; and so on. The length of the binary sequence corresponds to the linear notion of “elapsed” time.
We can easily adapt the traditional linear-time primitives (first, next, and fby) to binary branching time. The operator first is unchanged, because there is still a unique initial timepoint. Instead of a single next, however, we need two: next0 and next1. Finally, we still have only a single fby operator, but it must take two right-hand arguments.
If we identify the timepoints with the corresponding binary sequences, we can give the formal semantics of these operators as follows (here t ranges over binary sequences, ε is the empty sequence, and A_t is the value of A at timepoint t):
(first A)_t = A_ε
(next0 A)_t = A_{t0}
(next1 A)_t = A_{t1}
(A fby (X, Y))_ε = A_ε
(A fby (X, Y))_{t0} = X_t
(A fby (X, Y))_{t1} = Y_t
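For readers who want to experiment, here is a minimal sketch of these semantics in Haskell, modeling an intension as a function from timepoints to values. (The names Path, Intension, firstB, fbyB and the helper splitLast are mine, chosen for illustration; they are not part of any Lucid implementation.)

  -- A timepoint is the sequence of binary choices made so far,
  -- earliest choice first; an intension maps timepoints to values.
  type Path        = [Int]       -- each element is 0 or 1
  type Intension a = Path -> a

  firstB :: Intension a -> Intension a
  firstB a _ = a []                    -- (first A)_t = A_ε

  next0, next1 :: Intension a -> Intension a
  next0 a t = a (t ++ [0])             -- (next0 A)_t = A_{t0}
  next1 a t = a (t ++ [1])             -- (next1 A)_t = A_{t1}

  fbyB :: Intension a -> (Intension a, Intension a) -> Intension a
  fbyB a (x, y) t =
    case splitLast t of
      Nothing     -> a []              -- (A fby (X,Y))_ε    = A_ε
      Just (s, 0) -> x s               -- (A fby (X,Y))_{t0} = X_t
      Just (s, _) -> y s               -- (A fby (X,Y))_{t1} = Y_t
    where
      splitLast :: Path -> Maybe (Path, Int)
      splitLast [] = Nothing
      splitLast u  = Just (init u, last u)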
Here are two simple iterative branching-time definitions:
N = 0 fby (N + 1, N + 1);
S = 0 fby (S, S + N);
The first defines N to be a simple counter: N is initially 0, and on each step it is increased by 1, no matter which choice is made. The second defines S to be a running sum of the values of N, except that the current value of N is added to S only when we take the second branch. Thus, for example, the value of S at the timepoint corresponding to the sequence 0110101 is 13: the second branch is taken at steps 2, 3, 5, and 7, when N has the values 1, 2, 4, and 6, and 1 + 2 + 4 + 6 = 13.
One way of understanding branching-time programs is to think of them as nondeterministic iterative programs. For example, we can understand the definition of S as saying that S is initialized to 0, and that on each step S either remains the same or has the current value of N added on, depending on the choice made.
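Under the same sketch, and with two more illustrative helpers (konst for pointwise constants, plus for pointwise addition, both my own naming), the two definitions transcribe directly:

  konst :: a -> Intension a
  konst v _ = v                        -- a constant: the same at every timepoint

  plus :: Intension Int -> Intension Int -> Intension Int
  plus a b t = a t + b t               -- pointwise addition of intensions

  n, s :: Intension Int
  n = konst 0 `fbyB` (plus n (konst 1), plus n (konst 1))  -- N = 0 fby (N + 1, N + 1)
  s = konst 0 `fbyB` (s, plus s n)                         -- S = 0 fby (S, S + N)

  -- s [0,1,1,0,1,0,1] evaluates to 13, matching the example above.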
Branching time is useful for writing search programs, in particular searches over a binary tree of choices. A good example is the “knapsack” problem: we are given a list L of natural numbers, and we must discover whether a target number T is the sum of a sublist of L. Here is a branching-time program for the knapsack problem.
first F
where
  K = L fby (tail(K), tail(K));
  H = head(K);
  S = 0 fby (S, S + H);
  F = if S eq T then true
      elsif S > T then false
      elsif K eq [] then false
      else next0 F or next1 F fi;
end
The program can be explained quite easily in terms of nondeterministic iteration. The definitions of K and H arrange that H enumerates the elements of the list L; after n steps, H is the nth element of L (counting from zero), no matter which choices have been made.
S is defined as a nondeterministic running sum of some values of H; at each step, H is added to S only if the second branch is taken.
The search of the possibilities takes place during the evaluation of F, which is defined in terms of its own future(s). F will be true at a given timepoint if and only if the target T has been “hit” at that timepoint or will be hit at some timepoint in that point's future(s). The value of F at the origin is the required result of the search.
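As a cross-check, here is the clause transcribed into the same Haskell sketch. The pointwise tl and hd are my stand-ins for Lucid's tail and head (hd returns an arbitrary default on the empty list, which the guards ensure is never demanded):

  knapsack :: [Int] -> Int -> Bool
  knapsack l tgt = firstB f []
    where
      k = konst l `fbyB` (tl k, tl k)    -- K = L fby (tail(K), tail(K))
      h = hd k                           -- H = head(K)
      s = konst 0 `fbyB` (s, plus s h)   -- S = 0 fby (S, S + H)
      f t                                -- F, defined by future recursion
        | s t == tgt = True              -- S eq T: target hit
        | s t >  tgt = False             -- S > T: overshot, prune this branch
        | null (k t) = False             -- K eq []: list exhausted
        | otherwise  = next0 f t || next1 f t
      tl a t = drop 1 (a t)              -- pointwise (total) tail
      hd a t = case a t of { v:_ -> v; [] -> 0 }  -- pointwise head; default unused

  -- knapsack [2,3,5] 8 == True (3 + 5); knapsack [2,3,5] 4 == False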
Informally, the program determines whether or not, in some possible future sequence of events, the running sum hits the target—or, more precisely, hits the target before the list is exhausted. This formulation suggests introducing general-purpose temporal operators for searching backwards and forwards in time, so that the subject of the clause could be replaced by the expression
S eq T before S > T or K eq []
Then P before Q could be defined using future recursion, as in the program, or in terms of other general-purpose operators, for example as
eventually (P and hitherto not Q)
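One plausible reading of before, sketched here with the same future recursion the knapsack program uses (the name and formulation are my reconstruction of the suggestion, not a standard Lucid operator): P before Q holds at a timepoint if some branch of the future reaches a point where P holds without Q having held first.

  before :: Intension Bool -> Intension Bool -> Intension Bool
  before p q t
    | p t       = True                   -- P holds: success on this branch
    | q t       = False                  -- Q holds first: this branch fails
    | otherwise = before p q (t ++ [0])  -- otherwise search both futures
               || before p q (t ++ [1])

  -- NB: on the infinite tree this search terminates only along branches
  -- that P or Q eventually cuts off, as the knapsack guards do.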
The knapsack problem is usually solved using recursion. This would suggest some connection between (first-order) recursion and branching-time iteration, and this is exactly the case. In fact, GLU and other Lucid implementations handle recursion by translating it into branching-time iteration; in a sense, the tree of function calls is mapped onto the tree of time points.
To see how the translation works, consider the following recursive “schema”:
F(A)
where
  F(X) = if p(X) then q(X) else h(F(k0(X)), F(k1(X))) fi;
end
We can translate it into branching time as:
first F
where
  F = if p(X) then q(X) else h(next0 F, next1 F) fi;
  X = A fby (k0(X), k1(X));
end
Notice that the function and its formal parameter both become simple variables varying in branching time. The formal parameter X is no longer a bound variable (a dummy); it now denotes an indexed family of actual parameters. Likewise, the function variable F has become a simple variable denoting the indexed family of the corresponding results.
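To see the translation run, here is a hypothetical instance of the schema in the Haskell sketch, with p, q, h, k0 and k1 chosen (my choice, purely for illustration) so that the schema computes Fibonacci numbers:

  -- Instantiation: p x = x <= 1, q x = x, h = (+), k0 x = x - 1, k1 x = x - 2.

  fRec :: Int -> Int                   -- the recursive schema, directly
  fRec x = if x <= 1 then x else fRec (x - 1) + fRec (x - 2)

  lift1 :: (a -> b) -> Intension a -> Intension b
  lift1 g a t = g (a t)                -- pointwise application

  fBT :: Int -> Int                    -- its branching-time translation
  fBT a = firstB f []
    where
      -- X = A fby (k0(X), k1(X)): the indexed family of actual parameters
      x = konst a `fbyB` (lift1 (subtract 1) x, lift1 (subtract 2) x)
      -- F = if p(X) then q(X) else h(next0 F, next1 F) fi
      f t = if x t <= 1 then x t else next0 f t + next1 f t

  -- fBT n == fRec n for every n >= 0: the tree of calls is mapped onto
  -- the tree of timepoints.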
P. Rondogiannis has provided a correctness proof for this translation and has shown that it can be extended to (typed) higher-order programs. To translate a third-order program, for example, we first apply this procedure to “de-functionalize” all the functions with second-order parameters. The result is a second-order program with branching time. We then eliminate the second-order functions in the same way, using a new (orthogonal) branching-time dimension. After one more step, we have translated the third-order program into a zero-order (function-free) program that uses three orthogonal branching-time dimensions. The translated program can then be implemented using eduction in the normal way.
Another context in which branching dimensions arise naturally is in the study of attribute grammars: an attribute can clearly be formalized as an intension that varies over the (branching) space of all nodes in the parse tree. S. Tao has used this idea as the basis of her Intensional Attribute Grammar (IAG) system. The IAG approach has several advantages: the definition language is declarative, the attributes may vary in other dimensions (such as linear time), and the trees can be evaluated using eduction.