# Concatenative with a Side of Variables

Concatenative language theory, or at least what little there is of it, has generally been advanced in the form of various combinator languages, starting with Joy. Most concatenative enthusiasts seem happy with this state of affairs, and the goal of many concatenative languages is to enable and encourage a point-free style. So much so that concatenative programming is often thought of as a subset of point-free programming. However, the two are distinct concepts, despite the variable-free code present in so much Factor and Joy.

I will present a small, mostly informal concatenative calculus in the style of the lambda calculus, rather than in the style of combinatory calculi. For a great introduction to concatenative combinatory calculi, see Brent Kerby’s The Theory of Concatenative Combinators. Also note that I came up with the majority of this in a bar and haven’t worked on it much since, so it can probably be improved upon.

## Syntax

First we will need to define the syntax of the language, enabling us to tell whether a sentence is a term in the concatenative calculus or not. Assume V is a countably infinite set of variables `V = {a,b,c,...}`. We can then inductively define the set of all terms in the concatenative calculus to be:

```
T :=
1. (Empty) => ε ∈ T
2. (Var)   => If v ∈ V and t ∈ T, then tv ∈ T
3. (App)   => If t ∈ T, then t! ∈ T
4. (Abs)   => If v1,...,vn ∈ V (n >= 0) and t1, t2 ∈ T, then t1(v1...vn.t2) ∈ T
```

The intuition behind this syntax is that a term is a list, which is why we need rule 1. The lambda calculus structures application as a binary tree. That works for how application is defined there, but in every iteration I tried, it made specifying the reduction rules for a concatenative language a little too complicated: with a binary tree, some sort of flattening rule is needed. It is much easier to enforce the flatness of the syntax by modeling an `abstract syntax list` rather than the more usual `abstract syntax tree`, and in return we get a simpler set of reduction rules.

There are also two major differences in the App and Abs constructions that may not be immediately clear. Note that the Abs construction allows multiple binding variables (`parameters` or `arguments`). This is not strictly necessary, but it matches the intuition that concatenative functions act on a set of inputs to generate a set of outputs. This is a crucial difference from the lambda calculus: a function in the concatenative calculus may return zero, one, or many values. This often leads to the idea that concatenative functions take a stack and return a stack, but that is more of an implementation detail than a core concept, and in fact the basic reduction rules for our calculus will make no mention of a stack.

The App construction will become much clearer when the reduction rules are introduced. If it helps for now, think of the `!` as saying, “apply the function that precedes me to the terms that precede it”. That’s a high level view and good enough for hand waving, but as we’ll see, what it really does is a little bit different.

Moving forward, here are some example terms in our new syntax:

```
Example 1: εa
Example 2: ε(.ε)
Example 3: εa(.εb)a
```
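To make the abstract-syntax-list idea concrete, here is one possible Python encoding of terms. The tagged tuples and their names are my own choice of representation, not part of the calculus itself:

```python
# A term is a Python list of items, mirroring the inductive definition:
#   ε               -> []                                  (Empty)
#   tv              -> t + [("var", v)]                    (Var)
#   t!              -> t + [("app",)]                      (App)
#   t1(v1...vn.t2)  -> t1 + [("abs", [v1, ..., vn], t2)]   (Abs)

empty = []                                       # ε
ex1 = [("var", "a")]                             # εa
ex2 = [("abs", [], [])]                          # ε(.ε)
ex3 = [("var", "a"), ("abs", [], [("var", "b")]), ("var", "a")]  # εa(.εb)a
```

The flatness of the syntax is enforced for free: a list has no nesting except inside abstraction bodies, exactly as the grammar requires.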

In the rest of the post I will elide all instances of the empty string (ε); they are still there implicitly, just not written, for brevity. Here are Kerby’s eight basic combinators translated into our concatenative calculus with variables:

```
1. dup  := (a.aa)!
2. swap := (ba.ab)!
3. zap  := (a.)!
4. unit := (a.(.a))!
5. cons := (ba.(.ba!))!
6. i    := (a.a!)!
7. dip  := (ba.a!b)!
8. cat  := (ba.(.b!a!))!
```

Readers familiar with Kerby’s combinator paper may notice that the functions almost mirror the `stack effect` rewrite rules of Kerby’s basic combinators.

## Reduction

So now, how do we evaluate terms in our new calculus? In the combinatory calculi, you need at least one rewrite rule for each of the combinators. In the lambda calculus, there is the almighty beta reduction. I found it easier in this calculus to split the equivalent of beta reduction into two steps, each of which is independent of the other: application and substitution.

But before that, we will need a way to combine two terms into one term. Any ideas? That’s right, we’ll use concatenation! We’ll define the concatenation operation on terms like so:

```
(t1 ++ t2) :=
1. If t2 is ε        => t1
2. If t2 is t3v      => (t1 ++ t3)v
3. If t2 is t3!      => (t1 ++ t3)!
4. If t2 is t3(vs.s) => (t1 ++ t3)(vs.s)
```

This is just regular list concatenation with the arguments flipped, complicated by the fact that we have three different `cons` constructs. Now that we can concatenate two terms, we can define function application:

`(Application) t1(.t2)! ==> t1 ++ t2`

Well this is a bit unusual! With this application rule, we can only apply functions which take no arguments! This seems highly counterproductive, but remember we haven’t yet seen the substitution rules:

```
(SubVar) t1v(a1...an.t2)            ==> t1 ++ (a1...an-1.t2[an/(εv)])
(SubAbs) t1(b1...bn.t2)(a1...an.t3) ==> t1 ++ (a1...an-1.t3[an/(ε(b1...bn.t2))])
```

This is a lot to take in, but it’s really very similar to beta reduction, only with some nuances introduced by the fact that we’re working on a list rather than a tree. Both rules replace every occurrence in the function body of the last variable in the argument list with the variable or function that immediately precedes the function. We splice that argument out of the list and concatenate the new function, which now has one fewer argument, with all the preceding terms. When a function no longer has arguments, the substitution rules no longer apply, and only the application rule can yield further reductions.

There is another subtlety that should be addressed: the substitution method doesn’t merely replace the variable with the substituted term, because the way we defined our terms doesn’t allow that to happen. Rather, it concatenates the substituted term with everything that preceded that occurrence of the variable in the function body.
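The application rule and the two substitution rules fit in a few lines of code. The sketch below is one possible interpreter, under assumptions of my own: terms are Python lists of `("var", name)`, `("app",)`, and `("abs", params, body)` items, and the leftmost available redex is reduced first (the calculus itself fixes no strategy):

```python
def substitute(body, name, item):
    """t2[name/(ε item)]: splice `item` in place of each occurrence of the
    variable `name` in `body`, descending into abstractions unless `name`
    is shadowed by one of their parameters."""
    out = []
    for it in body:
        if it == ("var", name):
            out.append(item)
        elif it[0] == "abs" and name not in it[1]:
            out.append(("abs", it[1], substitute(it[2], name, item)))
        else:
            out.append(it)
    return out

def step(term):
    """Apply one reduction rule; return (new_term, True), or (term, False)
    if no redex exists."""
    for i in range(1, len(term)):
        prev, cur = term[i - 1], term[i]
        # (Application): t1 (.t2) ! ==> t1 ++ t2
        if cur == ("app",) and prev[0] == "abs" and not prev[1]:
            return term[:i - 1] + prev[2] + term[i + 1:], True
        # (SubVar)/(SubAbs): t1 x (a1...an.t2) ==> t1 ++ (a1...an-1.t2[an/x])
        if cur[0] == "abs" and cur[1] and prev[0] in ("var", "abs"):
            body = substitute(cur[2], cur[1][-1], prev)
            return term[:i - 1] + [("abs", cur[1][:-1], body)] + term[i + 1:], True
    return term, False

def normalize(term, max_steps=1000):
    """Reduce until no rule applies (may not terminate, hence max_steps)."""
    for _ in range(max_steps):
        term, changed = step(term)
        if not changed:
            return term
    raise RuntimeError("no normal form found (possibly divergent)")
```

For instance, `normalize` takes the encoding of `a(b.bc)!` to the encoding of `ac`, and `max_steps` is what saves us from the divergent term we will meet shortly.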

Some basic examples to (hopefully) make things a little clearer:

```
Example 1: a(b.bc)!
==> (.ac)!
==> ac
Example 2: a(bc.b)!
==> (b.b)!
Example 3: a(b.(.b))!
==> (.(.a))!
==> (.a)
Example 4: (.(a.aa)!(a.a!)!)(a.aa)!(a.a!)!
==> (.(.(a.aa)!(a.a!)!)(.(a.aa)!(a.a!)!))!(a.a!)!
==> (.(a.aa)!(a.a!)!)(.(a.aa)!(a.a!)!)(a.a!)!
==> (.(a.aa)!(a.a!)!)(.(.(a.aa)!(a.a!)!)!)!
==> (.(a.aa)!(a.a!)!)(.(a.aa)!(a.a!)!)!
==> (.(a.aa)!(a.a!)!)(a.aa)!(a.a!)!
==> to infinity and beyond...
```

The last example is the equivalent of the omega combinator in our concatenative calculus, written in the lambda calculus as `(\x.xx)(\x.xx)`. It would be written in Kerby’s system as `[dup i] dup i`.

That’s all for now on this subject. I haven’t written any proofs about this system, and I don’t really intend to, since I’m not sure how useful it would be compared to how much work would be necessary. It seems pretty clear that this calculus shares many of the properties of the lambda calculus, but, until it’s proven, we can’t be certain. The usefulness of the system itself is debatable, but maybe it could provide a solid foundation for the concatenative languages of the future. In any case, I’m happy with how it turned out, though no doubt it could be improved. Maybe there’s a better solution than encoding terms as lists?