My previous few posts have been about cartesian closed categories (CCCs). In *From Haskell to hardware via cartesian closed categories*, I gave a brief motivation: typed lambda expressions and the CCC vocabulary are equally expressive, but have different strengths:

- In Haskell, the CCC vocabulary is overloadable and so can be interpreted more flexibly than lambda and application.
- Lambda expressions are friendlier for human programmers to write and read.

By automatically translating lambda expressions to CCC form (as in *Overloading lambda*), I hope to get the best of both options.

An interpretation I’m especially keen on—and the one that inspired this series of posts—is circuits, as described in this post.

**Edits:**

- 2013-09-17: “defined for all products with categories” ⇒ “defined for all categories with products”. Thanks to Tom Ellis.
- 2013-09-17: Clarified the first CCC/lambda contrast above: “In Haskell, the CCC vocabulary is overloadable and so can be interpreted more flexibly than lambda and application.” Thanks to Darryl McAdams.

First a reminder about CCCs, taken from *From Haskell to hardware via cartesian closed categories*:

You may have heard of “cartesian closed categories” (CCCs). CCC is an abstraction having a small vocabulary with associated laws:

- The “category” part means we have a notion of “morphisms” (or “arrows”), each having a domain and codomain “object”. There is an identity morphism and an associative composition operator. If this description of morphisms and objects sounds like functions and types (or sets), it’s because functions and types are one example, with `id` and `(∘)`.
- The “cartesian” part means that we have products, with projection functions and an operator to combine two functions into a pair-producing function. For Haskell functions, these operations are `fst`, `snd`, and `(△)`. (The latter is called “`(&&&)`” in `Control.Arrow`.)
- The “closed” part means that we have a way to represent morphisms as objects, referred to as “exponentials”. The corresponding operations are `curry`, `uncurry`, and `apply`. Since Haskell is a higher-order language, these exponential objects are simply (first-class) functions.

As mentioned in *Overloading lambda*, I also want coproducts (corresponding to sum types in Haskell), extending CCCs to *bicartesian closed* categories, or “biCCCs”.

Normally, I’d formalize a notion like (bi)CCC with a small collection of type classes (e.g., as in Edward Kmett’s categories package). Due to a technical problem with associated constraints (to be explored in a future post), I’ve so far been unable to find a satisfactory such formulation. Instead, I’ll convert the biCCC term representation given in the post *Overloading lambda*.

How might we think of circuits? A simple-sounding idea is that circuits are directed graphs of components (logic gates, adders, flip-flops, etc.), in which the graph edges represent wires. Each component has some input pins and some output pins, and each wire connects an output pin of some component to an input pin of another.

On closer examination, some questions arise:

- How to identify intended inputs and outputs?
- How to ensure that the graphs are fully connected, apart from the intended external inputs and outputs?
- How to ensure that input pins are driven by at most one output pin, while allowing output pins to drive any number of input pins?
- How to sequentially compose graphs, matching up and consuming free outputs and inputs?

Note that similar questions arose in the design of programming languages. In functional languages (or even semi-functional ones like Fortran, ML, and Haskell+IO), we answer the connectivity/composition questions by nesting function applications. Overall inputs are identified syntactically as function parameters, while output corresponds to the body of the function.

We can adapt this technique to the construction of circuits as follows. Instead of building graph fragments directly and then adding edges/wires to connect those fragments, let’s build recipes that *consume* output pins, build a graph fragment, and indicate the output pins of that fragment. A circuit (generator) is thus a function that takes some output pins as arguments, instantiates a collection of components, and yields some output pins. The passed-in output pins come from other component instantiations, *or* are the top-level external inputs to a circuit. The number and arrangement of the pins consumed and yielded vary and so will appear as type parameters. Since distinct pins are generated as needed, a circuit will also consume part of a given supply of pins, passing on the remainder for further component construction.

`type CircuitG b = PinSupply → (PinSupply, [Comp], b) -- first try`

Note that the input type is absent, because it can show up as part of a function type: `a → CircuitG b`. This factoring is typical in monadic formulations.

Of course, we’ve seen this pattern before, in writer and state monads. Moreover, the writer will want to append component lists, so for efficiency we’ll replace `[a]` with an append-friendly data type, namely sequences represented as finger trees (`Seq` from `Data.Sequence`).

`type CircuitM = WriterT (Seq Comp) (State PinSupply)`

One very simple operation is generation of a single pin (producing no components).

```
newPin ∷ CircuitM Pin
newPin = do { (p:ps') ← get ; put ps' ; return p }
```

In fact, this definition has a considerably more general type, because it doesn’t use the `WriterT` aspect of `CircuitM`. The `get` and `put` operations come from the `MonadState` class in the `mtl` package, so we can use any `MonadState` instance with `PinSupply` as the state. For convenience, define a constraint abbreviation:

`type MonadPins = MonadState PinSupply`

and use the more general type:

`newPin ∷ MonadPins m ⇒ m Pin`

We’ll need this extra generality later.

I know I promised you a category rather than a monad. We’ll get there.

Each pin will represent a channel to convey one bit of information, but varying with time, i.e., a signal. The values conveyed on these wires will not be available until the circuit is realized in hardware and run. While constructing the graph/circuit, we’ll only need a way of distinguishing the pins and generating new ones. Given these simple requirements, we’ll represent pins simply as integers, but `newtype`-wrapped for type safety:

```
newtype Pin = Pin Int deriving (Eq,Ord,Show,Enum)
type PinSupply = [Pin]
```
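Putting these pieces together, here is a minimal, self-contained sketch of drawing pins from a supply (it assumes the `mtl` package for `Control.Monad.State`; the `demo` value is mine, for illustration):

```haskell
import Control.Monad (replicateM)
import Control.Monad.State (State, get, put, evalState)

newtype Pin = Pin Int deriving (Eq, Ord, Show)
type PinSupply = [Pin]

-- As in the post, but specialized to State PinSupply.
newPin :: State PinSupply Pin
newPin = do { (p:ps') <- get ; put ps' ; return p }

-- Draw three pins; each call consumes one pin from the supply.
demo :: [Pin]
demo = evalState (replicateM 3 newPin) (map Pin [0 ..])
-- demo == [Pin 0, Pin 1, Pin 2]
```

Running `demo` against the infinite supply `map Pin [0 ..]` shows the state threading: each `newPin` hands back the head of the supply and leaves the tail for later calls.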

Each circuit component is an instance of an underlying primitive and has three characteristics:

- the underlying “primitive”, which determines the functionality and interface (type of information in and out),
- the pins carrying information into the instance (and coming from the outputs of other components), and
- the pins carrying information out of the instance.

Components can have different interface types, but we’ll have to combine them all into a single collection, so we’ll use an existential type:

```
data Comp = ∀ a b. IsSource2 a b ⇒ Comp (Prim a b) a b
deriving instance Show Comp
```

For now, a primitive will be identified simply by a name:

```
newtype Prim a b = Prim String
instance Show (Prim a b) where show (Prim str) = str
```

The `IsSource2` constraint is an abbreviation for `IsSource` constraints on the domain and range types:

`type IsSource2 a b = (IsSource a, IsSource b)`

Sources will be structures of pins. We’ll need to flatten them into sequences, generate them for the outputs of a new instance, and query the number of pins based on the type alone (i.e., without evaluation):

```
class Show a ⇒ IsSource a where
  toPins    ∷ a → Seq Pin
  genSource ∷ MonadPins m ⇒ m a
  numPins   ∷ a → Int
```

Instances of `IsSource` are straightforward to define. For instance,

```
instance IsSource () where
  toPins () = ∅
  genSource = return ()
  numPins _ = 0

instance IsSource Pin where
  toPins p = singleton p
  genSource = newPin
  numPins _ = 1

instance IsSource2 a b ⇒ IsSource (a × b) where
  toPins (sa,sb) = toPins sa ⊕ toPins sb
  genSource = liftM2 (,) genSource genSource
  numPins ~(a,b) = numPins a + numPins b
```
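The lazy pattern in the pair instance is what makes type-directed counting possible. A self-contained sketch of just the counting method (plain pairs stand in for `(×)`, and the class here is a cut-down stand-in for `IsSource`, with only `numPins`):

```haskell
newtype Pin = Pin Int deriving Show

class PinCount a where
  numPins :: a -> Int        -- the argument is used only for its type

instance PinCount ()  where numPins _ = 0
instance PinCount Pin where numPins _ = 1
instance (PinCount a, PinCount b) => PinCount (a,b) where
  numPins ~(a,b) = numPins a + numPins b  -- lazy pattern: never forces the pair

-- The argument is undefined, yet the count is computed from the type:
count :: Int
count = numPins (undefined :: ((Pin,Pin),(Pin,())))
-- count == 3
```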

Note that we’re taking care never to evaluate the argument to `numPins`, which will be `⊥` in practice.

I promised you a circuit category but gave you a monad. There’s a standard construction to turn monads into categories, namely `Kleisli` from `Control.Category`, so you might think we could simply define

`type a ⇴ b = Kleisli CircuitM a b -- first try`

What I don’t like about this definition is that it requires parameter types like `Pin` and `Pin × Pin`, which expose aspects of the implementation. I’d prefer to use `Bool` and `Bool × Bool` instead, to reflect the conceptual types of information *flowing through* circuits. Moreover, I want to generate computations parametrized over the underlying category (and indeed to generate these category-generic computations automatically from Haskell source). Explicit mention of representation notions like `Pin` would thwart this genericity, restricting it to circuits.

To get type parameters like `Bool` and `Bool × Bool`, we’ll have to convert value types to pin types. Type families give us this ability:

`type family Pins a`

Now we can say that circuits pass a `Bool` value by means of a single pin:

`type instance Pins Bool = Pin`

We can pass the unit value with no pins at all:

`type instance Pins () = ()`

The pins for `a × b` comprise pins for `a` and pins for `b`:

`type instance Pins (a × b) = Pins a × Pins b`

Sum types are trickier. We’ll get there in a bit.

Now we can define our improved circuit category:

`newtype a ⇴ b = C (Kleisli CircuitM (Pins a) (Pins b))`

As we saw above, the `Pins` type family distributes over `()` and pairing. The same is true for every fixed-shape type, i.e., every type in which all values have the same representation shape, including $n$-tuples, length-typed vectors, and depth-typed perfect leaf trees.

The canonical example of a type whose elements can vary in shape is sums, represented in Haskell by the `Either` algebraic data type, for instance `Either Bool (Bool,Bool)`, which I’ll write instead as `Bool + Bool × Bool`. Can `Pins` distribute over `+`, i.e., can we define

`type instance Pins (a + b) = Pins a + Pins b -- ??`

We cannot use this definition, because it implies that we must choose a shape *statically*, i.e., when constructing the circuit. The data may, however, change shape *dynamically*, so no one static choice suffices.

I’ll give a solution, which seems to work out okay. However, it lacks the elegance and inevitability that I always look for, so if you have other ideas, please leave suggestions in comments on this post.

The idea is that we’ll use enough pins for the larger of the two representations. Since the two `Pins` representations (`Pins a` vs `Pins b`) can be arbitrarily different, flatten them into a common shape, namely a sequence. To distinguish the two summands, throw in an additional bit/pin:

```
data a :++ b = UP { sumPins ∷ Seq Pin, sumFlag ∷ Pin }
type instance Pins (a + b) = Pins a :++ Pins b
```

Now we’ll want to define an `IsSource` instance. Recall the class definition:

```
class Show a ⇒ IsSource a where
  toPins    ∷ a → Seq Pin
  genSource ∷ MonadPins m ⇒ m a
  numPins   ∷ a → Int
```

It’s easy to generate a sequence of pins:

```
instance IsSource2 a b ⇒ IsSource (a :++ b) where
  toPins (UP ps f) = ps ⊕ singleton f
```

The number of pins in `a :++ b` is the maximum of the number of pins in `a` and the number in `b`, plus one for the flag bit:

```
  numPins _ = (numPins (⊥ ∷ a) `max` numPins (⊥ ∷ b)) + 1
```
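A quick worked instance of this size calculation (the arithmetic only; the numbers correspond to a one-pin summand such as `Bool` and a two-pin summand such as `Bool × Bool`):

```haskell
-- Pins for a sum: enough for the larger summand, plus one flag pin
-- saying which summand is present (a sketch of the arithmetic only).
sumPinCount :: Int -> Int -> Int
sumPinCount na nb = max na nb + 1

-- sumPinCount 1 2 == 3, e.g., for Bool + Bool × Bool
```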

To generate an `a :++ b`, generate this many pins, using one for `sumFlag` and the rest for `sumPins`:

```
  genSource =
    liftM2 UP (Seq.replicateM (numPins (⊥ ∷ (a :++ b)) - 1) newPin)
              newPin
```

where the `Seq.replicateM` function comes from `Data.Sequence`:

`replicateM ∷ Monad m ⇒ Int → m a → m (Seq a)`

This `genSource` definition is one motivation for the `numPins` method. Another is coming up in the next section.

I’m working toward a representation of circuits that is both simple and able to implement the standard collection of operations for a cartesian closed category, plus coproducts (i.e., a bicartesian closed category, IIUC). Here, I’ll show how to implement these operations, which are also mentioned in my recent post *Overloading lambda*.

A category has an identity and sequential composition. As defined in `Control.Category`:

```
class Category k where
  id  ∷ a `k` a
  (∘) ∷ (b `k` c) → (a `k` b) → (a `k` c)
```

The required laws are that `id` is both a left- and right-identity for `(∘)` and that `(∘)` is associative.

Recall that our circuit category `(⇴)` is *almost* the same as `Kleisli CircuitM`, where `CircuitM` is a monad (defined via standard monadic building blocks). Thus we *almost* have for free that `(⇴)` is a category, but we still need a little bit of work.

`newtype a ⇴ b = C (Kleisli CircuitM (Pins a) (Pins b))`

Since this representation wraps `Kleisli CircuitM`, which is already a category, we need only do a little unwrapping and wrapping:

```
instance Category (⇴) where
  id = C id
  C g ∘ C f = C (g ∘ f)
```

The category laws for `(⇴)` follow easily. For instance,

`id ∘ C f ≡ C id ∘ C f ≡ C (id ∘ f) ≡ C f`

I’ll leave the other two (right-identity and associativity) as a simple exercise.

There’s an idiom I like to use for definitions such as the `Category` instance above, to automate the unwrapping and wrapping:

```
instance Category (⇴) where
  id  = C id
  (∘) = inC2 (∘)
```

where

```
inC = C ↜ unC
inC2 = inC ↜ unC
```

The `(↜)` operator here adds post- and pre-processing:

`(h ↜ f) g = h ∘ g ∘ f`

Next, let’s add product types and a minimal set of associated operations. One simple formulation:

```
class Category k ⇒ ProductCat k where
  exl ∷ (a × b) `k` a
  exr ∷ (a × b) `k` b
  (△) ∷ (a `k` c) → (a `k` d) → (a `k` (c × d))
```

If you’ve used `Control.Arrow`, you’ll recognize `(△)` as “`(&&&)`”. The `exl` and `exr` methods generalize `fst` and `snd`. There are other `Arrow` operations that can be defined in terms of these primitives, including `first`, `second`, and `(×)` (called “`(***)`” in `Control.Arrow`):

```
(×) ∷ ProductCat k ⇒ (a `k` c) → (b `k` d) → ((a × b) `k` (c × d))
f × g = f ∘ exl △ g ∘ exr

first ∷ ProductCat k ⇒ (a `k` c) → ((a × b) `k` (c × b))
first f = f × id

second ∷ ProductCat k ⇒ (b `k` d) → ((a × b) `k` (a × d))
second g = id × g
```
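For ordinary functions, these derived combinators specialize to the familiar `Control.Arrow` ones, which gives a quick sanity check (the names `cross` and `firstF` are mine, chosen to avoid clashes with the Prelude):

```haskell
import Control.Arrow ((&&&))

-- The derived (×) at the function instance: f × g = f ∘ exl △ g ∘ exr
cross :: (a -> c) -> (b -> d) -> ((a,b) -> (c,d))
cross f g = (f . fst) &&& (g . snd)

-- first f = f × id
firstF :: (a -> c) -> ((a,b) -> (c,b))
firstF f = cross f id

-- cross (+1) show (3,4) == (4,"4")
-- firstF (*2) (5,"x")   == (10,"x")
```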

Notably missing is the `Arrow` class’s `arr` method, which converts an arbitrary Haskell function into an arrow. If I could implement `arr`, I’d have my Haskell-to-circuit compiler. I took the names “`exl`”, “`exr`”, and “`(△)`” (pronounced “fork”) from Jeremy Gibbons’s delightful paper *Calculating Functional Programs*.

Again, it’s easy to define a `ProductCat` instance for `(⇴)` using the `ProductCat` instance for the underlying `Kleisli CircuitM` (which exists because `CircuitM` is a monad):

```
instance ProductCat (⇴) where
  exl = C exl
  exr = C exr
  (△) = inC2 (△)
```

There is a subtlety in type-checking this instance definition. In the `exl` definition, the RHS `exl` has type

`Kleisli CircuitM (Pins a × Pins b) (Pins a)`

but the definition requires type

`Kleisli CircuitM (Pins (a × b)) (Pins a)`

Fortunately, these two types are equivalent, thanks to the `Pins` instance for products given above:

`type instance Pins (a × b) = Pins a × Pins b`

The product laws are given in *Calculating Functional Programs* (p. 155) and are also straightforward to verify. For instance,

`exl ∘ (u △ v) ≡ u`

Proof:

```
exl ∘ (C f △ C g)
≡ C exl ∘ C (f △ g)
≡ C (exl ∘ (f △ g))
≡ C f
```

The coproduct/sum operations are exactly the duals of the product operations. The method signatures thus result from those of `ProductCat` by inverting the category arrows and replacing products by coproducts:

```
class Category k ⇒ CoproductCat k where
  inl ∷ a `k` (a + b)
  inr ∷ b `k` (a + b)
  (▽) ∷ (a `k` c) → (b `k` c) → ((a + b) `k` c)
```

The coproduct laws are also exactly dual to the product laws, i.e., the operations are replaced by their counterparts, and the compositions are reversed. For instance,

`exl ∘ (u △ v) ≡ u`

becomes

`(u ▽ v) ∘ inl ≡ u`
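At the ordinary function instance this vocabulary is already in the Prelude (`inl` is `Left`, `inr` is `Right`, and `(▽)` is `either`), so the law can be spot-checked directly (`checkLeft`/`checkRight` are my names):

```haskell
-- (f ▽ g) ∘ inl ≡ f, with f = negate and g = (*2), checked at one input
checkLeft :: Bool
checkLeft = either negate (*2) (Left 7) == negate (7 :: Int)

-- Dually, (f ▽ g) ∘ inr ≡ g
checkRight :: Bool
checkRight = either negate (*2) (Right 7) == (7 :: Int) * 2
```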

Just as the `IsSource` definition for sums above is more complex than the one for products, the `CoproductCat` instance I’ve found is much trickier than the `ProductCat` instance. I’d really love to find much simpler definitions, as the extra complexity worries me. If you think of simpler angles, please do suggest them in comments on this post. Alternatively, if you understand the essential cause of the loss of simplicity in going from products to coproducts, please chime in as well.

For the left injection, `inl ∷ a ⇴ a + b`, flatten the `a` pins, pad to the longer of the two representations as needed, and add a flag of `False` (left):

```
inl = C ∘ Kleisli $ λ a →
  do x ← constM False a
     let na  = numPins (⊥ ∷ Pins a)
         nb  = numPins (⊥ ∷ Pins b)
         pad = Seq.replicate (max na nb - na) x
     return (UP (toPins a ⊕ pad) x)
```

Similarly for `inr`. (The implementation refactors to remove the redundancy.)

There is a problem with this definition, however. Its type is

`inlC ∷ IsSourceP2 a b ⇒ a ⇴ a + b`

where

```
type IsSourceP  a   = IsSource (Pins a)
type IsSourceP2 a b = (IsSourceP a, IsSourceP b)
```

In contrast, the `CoproductCat` class definition insists on full generality (unconstrained `a` and `b`). I don’t know how to resolve this problem. We could change the `CoproductCat` class definition to add associated constraints, but when I tried, the types of derived operations (definable via the class methods) became terribly complicated. For now, I’ll settle for a near miss, implementing operations like those of `CoproductCat` but with the extra constraints that thwart the instance definition I’m seeking.

For the `(▽)` operation, let’s assume we have a conditional operation, taking two values and a boolean, with the `False`/`else` case coming first:

`condC ∷ IsSource (Pins c) ⇒ ((c × c) × Bool) ⇴ c`

Now, given an `a :++ b` representation,

- extract the `sumFlag` for the `Bool`,
- extract pins for `a` and feed them to `f`,
- extract pins for `b` and feed them to `g`, and
- feed these three results into `condC`:

`f ▽ g = condC ∘ ((f × g) ∘ extractBoth △ pureC sumFlag)`

The `(×)` operation here is simple parallel composition and is defined for all categories with products:

```
(×) ∷ ProductCat k ⇒ (a `k` c) → (b `k` d) → ((a × b) `k` (c × d))
f × g = f ∘ exl △ g ∘ exr
```

The `pureC` function wraps a pins-to-pins function as a circuit and is easily defined thanks to our use of the `Kleisli` arrow:

```
pureC ∷ (Pins a → Pins b) → (a ⇴ b)
pureC = C ∘ arr
```

The `extractBoth` function extracts *both* interpretations of a sum:

```
extractBoth ∷ IsSourceP2 a b ⇒ a + b ⇴ a × b
extractBoth = pureC ((pinsSource △ pinsSource) ∘ sumPins)
```

Finally, `pinsSource` builds a source from a pin sequence, using the `genSource` method from `IsSource`:

```
pinsSource ∷ IsSource a ⇒ Seq Pin → a
pinsSource pins = Mtl.evalState genSource (toList pins)
```

It is for this function that I wanted `genSource` to work with monads other than `CircuitM`. Here, we’re using simply `State PinSupply`.

So here we have circuits as a category with coproducts. It “seems to work”, but I have a few points of dissatisfaction:

- We don’t quite get the `CoproductCat` instance, because of the `IsSource` constraints imposed by all three would-be method definitions.
- The definitions are considerably more complex than the `ProductCat` instance and don’t exhibit an apparent duality to those definitions.
- The use of `extractBoth` frightens me, as it implies a sort of dynamic cast between any two types. (Consider `exr ∘ extractBoth ∘ inl`.)

My compilation scheme relies on translating Haskell programs to biCCC (bicartesian closed category) form. We’ve seen above how to interpret the category, cartesian, and cocartesian (coproduct) aspects as circuits. What about closed? I don’t have a precise and implemented answer. Below are some thoughts.

Recall from *Overloading lambda* that a (cartesian) *closed* category is one with exponential objects `b ⇨ c` (often written “${c}^{b}$”) with the following operations:
```
apply ∷ (a ⇨ b) × a ↝ b
curry ∷ (a × b ↝ c) → (a ↝ (b ⇨ c))
uncurry ∷ (a ↝ (b ⇨ c)) → (a × b ↝ c)
```

What is a circuit exponential, i.e., a “hardware closure”?

We could operate on lambda expressions, removing inner lambdas, as is traditional (I think) in defunctionalization. (See, e.g., *Defunctionalization at Work* and *Polymorphic Typed Defunctionalization*.) In this case, `curry` would not appear in the generated CCC term, and application (other than by statically known primitives) would be replaced by pattern matching and invocation of statically known functions/circuits.

Alternatively, first convert to CCC form, simplify, and then look at the remaining uses of `curry`, `uncurry`, and `apply`. I’m not sure I really need to handle `uncurry`, which is not generated by the lambda-to-CCC translation. I think I currently use it only for uncurrying primitives. In any case, focus on `curry` and `apply`.

As in defunctionalization, do a global sweep of the code, and extract all of the closure formations. If we’ve already translated to CCC form, those formations are just the explicit arguments to `curry` applications. Assemble all of these `curry` applications into a single GADT:

```
data b ⇨ c where
⋯
```

For every application `curry f` in our program, where `f ∷ A × B ↣ C` for some types `A`, `B`, and `C`, generate a GADT constructor/tag like the following:

`Clo_xyz ∷ A → (B ⇨ C)`

where “`xyz`” is generated automatically for distinctness. Note that we cannot use simple algebraic data types, because the type `B ⇨ C` is restricted. Furthermore, if `f` is polymorphic, we may have an existential constructor. Since there are only finitely many occurrences of `curry`, we can represent the GADT constructors with finitely many bits, generalizing the treatment of coproducts described above. If we monomorphize, then we can use several different closure data types for different types `b` and `c`, reducing the required number of bits.

Now consider `apply`. Each occurrence will have some type of the form `((b ⇨ c) × b) ↝ c`. The implementation of `apply` will extract the closure constructor and its argument of some type `a`, use the constructor to identify the intended circuit `f ∷ a × b ↣ c`, and feed the `a` and `b` into `f`, yielding `c`.

What about `uncurry`?

`uncurry ∷ (a ↝ (b ⇨ c)) → (a × b ↝ c)`

The constructed circuit would work as follows: given `(a,b)`, feed `a` to the argument morphism to get a closure of type `b ⇨ c`, which has an `a'` and a tag that refers to some `a' × b → c`. Feed the `a'` and the `b` into that circuit to get a `c`, which is returned.

I have not yet made the general plan for exponentials precise enough to implement, so I expect some surprises. And perhaps there are better approaches. Please offer suggestions!

Recursive types need thought. If simply translated to sums and products, we’d get infinite representations. Instead, I think we’ll have to use indirections through some kind of memory, as is typically done in software implementations. In this case, dynamic memory management seems inevitable. Indirection might best be used for sums whether they appear in a recursive type or not, depending on the disparity and magnitude of the representation sizes of the summand types.

In the post *Overloading lambda*, I gave a translation from a typed lambda calculus into the vocabulary of cartesian closed categories (CCCs). This simple translation leads to unnecessarily complex expressions. For instance, the simple lambda term “`λ ds → (λ (a,b) → (b,a)) ds`” translated to a rather complicated CCC term:

`apply ∘ (curry (apply ∘ (apply ∘ (const (,) △ (id ∘ exr) ∘ exr) △ (id ∘ exl) ∘ exr)) △ id)`

(Recall from the previous post that `(∘)` binds more tightly than `(△)` and `(▽)`.)

However, we can do much better, translating to

`exr △ exl`

which says to pair the right and left halves of the argument pair, i.e., swap.

This post applies some equational properties to greatly simplify/optimize the result of translation to CCC form, including the example above. First I’ll show the equational reasoning and then how it’s automated in the lambda-ccc library.

First, use the identity/composition laws:

```
f ∘ id ≡ f
id ∘ g ≡ g
```

Our example is now slightly simpler:

`apply ∘ (curry (apply ∘ (apply ∘ (const (,) △ exr ∘ exr) △ exl ∘ exr)) △ id)`

Next, consider the subterm `apply ∘ (const (,) △ exr ∘ exr)`:

```
apply ∘ (const (,) △ exr ∘ exr)
≡ {- definition of (∘) -}
λ x → apply ((const (,) △ exr ∘ exr) x)
≡ {- definition of (△) -}
λ x → apply (const (,) x, (exr ∘ exr) x)
≡ {- definition of apply -}
λ x → const (,) x ((exr ∘ exr) x)
≡ {- definition of const -}
λ x → (,) ((exr ∘ exr) x)
≡ {- η-reduce -}
(,) ∘ (exr ∘ exr)
```

We didn’t use any properties of `(,)` or of `(exr ∘ exr)`, so let’s generalize:

```
apply ∘ (const g △ f)
≡ λ x → apply ((const g △ f) x)
≡ λ x → apply (const g x, f x)
≡ λ x → const g x (f x)
≡ λ x → g (f x)
≡ g ∘ f
```
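Interpreting `apply` as uncurried application and `(△)` as `(&&&)` at the function instance, this law can be spot-checked (the names `applyF`, `lhs`, and `rhs` are mine):

```haskell
import Control.Arrow ((&&&))

-- apply, at the function instance
applyF :: (a -> b, a) -> b
applyF (f, x) = f x

-- apply ∘ (const g △ f) ≡ g ∘ f, with g = (*2) and f = (+3)
lhs, rhs :: Int -> Int
lhs = applyF . (const (*2) &&& (+3))
rhs = (*2) . (+3)

-- lhs 5 == 16 and rhs 5 == 16
```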

(Note that I’ve cheated here by appealing to the *function* interpretations of `apply` and `const`. *Question:* Is there a purely algebraic proof, using only the CCC laws?)

With this equivalence, our example simplifies further:

`apply ∘ (curry (apply ∘ ((,) ∘ exr ∘ exr △ exl ∘ exr)) △ id)`

Next, let’s focus on `apply ∘ ((,) ∘ exr ∘ exr △ exl ∘ exr)`. Generalize to `apply ∘ (h ∘ f △ g)` and fiddle about:

```
apply ∘ (h ∘ f △ g)
≡ λ x → apply (h (f x), g x)
≡ λ x → h (f x) (g x)
≡ λ x → uncurry h (f x, g x)
≡ uncurry h ∘ (λ x → (f x, g x))
≡ uncurry h ∘ (f △ g)
```

Apply to our example:

`apply ∘ (curry (uncurry (,) ∘ (exr ∘ exr △ exl ∘ exr)) △ id)`

We can simplify `uncurry (,)` as follows:

```
uncurry (,)
≡ λ (x,y) → uncurry (,) (x,y)
≡ λ (x,y) → (,) x y
≡ λ (x,y) → (x,y)
≡ id
```

Together with the left identity law, our example now becomes

`apply ∘ (curry (exr ∘ exr △ exl ∘ exr) △ id)`

Next use the law that relates `(∘)` and `(△)`:

`f ∘ r △ g ∘ r ≡ (f △ g) ∘ r`

In our example, `exr ∘ exr △ exl ∘ exr` becomes `(exr △ exl) ∘ exr`, so we have

`apply ∘ (curry ((exr △ exl) ∘ exr) △ id)`

Let’s now look at how `apply`, `(△)`, and `curry` interact:

```
apply ∘ (curry h △ g)
≡ λ p → apply ((curry h △ g) p)
≡ λ p → apply (curry h p, g p)
≡ λ p → curry h p (g p)
≡ λ p → h (p, g p)
≡ h ∘ (id △ g)
```

We can add more variety for other uses:

```
apply ∘ (curry h ∘ f △ g)
≡ λ p → apply ((curry h ∘ f △ g) p)
≡ λ p → apply (curry h (f p), g p)
≡ λ p → curry h (f p) (g p)
≡ λ p → h (f p, g p)
≡ h ∘ (f △ g)
```

With this rule (even in its more specialized form),

`apply ∘ (curry ((exr △ exl) ∘ exr) △ id)`

becomes

`(exr △ exl) ∘ exr ∘ (id △ id)`

Next use the universal property of `(△)`, which is that it is the unique solution of the following two equations (universally quantified over `f` and `g`):

```
exl ∘ (f △ g) ≡ f
exr ∘ (f △ g) ≡ g
```

(See *Calculating Functional Programs*, Section 1.3.6.)
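At the function instance (`exl`/`exr` as `fst`/`snd`, `(△)` as `(&&&)`), both equations can be spot-checked at a sample input:

```haskell
import Control.Arrow ((&&&))

-- exl ∘ (f △ g) ≡ f and exr ∘ (f △ g) ≡ g, checked at one input
prop1, prop2 :: Bool
prop1 = fst ((negate &&& (+1)) (41 :: Int)) == negate 41
prop2 = snd ((negate &&& (+1)) (41 :: Int)) == 41 + 1
```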

Applying the second rule to `exr ∘ (id △ id)` gives `id`, so our `swap` example becomes

`exr △ exl`

By using a collection of equational properties, we’ve greatly simplified our CCC example. These properties and more are used in `LambdaCCC.CCC` to simplify CCC terms during construction. As a general technique, whenever building terms, rather than applying the GADT constructors directly, we’ll use so-called “smart constructors” with built-in optimizations. I’ll show a few smart constructor definitions here. See the `LambdaCCC.CCC` source code for others.

As a first simple example, consider the identity laws for composition:

```
f ∘ id ≡ f
id ∘ g ≡ g
```

Since the top-level operator on the LHSs (left-hand sides) is `(∘)`, we can easily implement these laws in a “smart constructor” for `(∘)`, which handles special cases and uses the plain (dumb) constructor if no simplifications apply:

```
infixr 9 @∘
(@∘) ∷ (b ↣ c) → (a ↣ b) → (a ↣ c)
⋯ -- simplifications go here
g @∘ f = g :∘ f
```

where `(↣)` is the GADT that represents biCCC terms, as shown in *Overloading lambda*.

The identity laws are easy to implement:

```
f @∘ Id = f
Id @∘ g = g
```
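To see the idiom in miniature, here’s a toy three-constructor fragment of a term GADT with its smart composition operator (a sketch; the real `(↣)` in `LambdaCCC.CCC` has many more constructors and simplification rules):

```haskell
{-# LANGUAGE GADTs #-}

-- Toy term type: identity, composition, and named primitives.
data C a b where
  Id   :: C a a
  (:.) :: C b c -> C a b -> C a c
  Prim :: String -> C a b

infixr 9 @.
-- Smart composition: identity laws first, dumb constructor otherwise.
(@.) :: C b c -> C a b -> C a c
f  @. Id = f
Id @. g  = g
g  @. f  = g :. f

-- Count constructors, to observe the simplification.
size :: C a b -> Int
size Id       = 1
size (g :. f) = size g + size f
size (Prim _) = 1

-- size (Id @. Prim "not" @. Id) == 1, whereas the dumb
-- constructors give size (Id :. (Prim "not" :. Id)) == 3
```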

Next, the `apply`/`const` law derived above:

`apply ∘ (const g △ f) ≡ g ∘ f`

This rule translates fairly easily:

`Apply @∘ (Const g :△ f) = prim g @∘ f`

where `prim` is a smart constructor for `Prim`.

There are some details worth noting:

- The LHS uses only dumb constructors and variables, except for the smart constructor being defined (here `(@∘)`).
- Besides variables bound on the LHS, the RHS uses only smart constructors, so that the constructed combinations are optimized as well. For instance, `f` might be `Id` here.

Even with these details attended to, this definition is inadequate in many cases. Consider the following example:

`apply ∘ ((const u △ v) ∘ w)`

*Syntactically*, the LHS of our rule *does not* match this term, because the two compositions are associated to the right instead of the left. *Semantically*, the rule does match, since composition is associative. In order to apply this rule, we can first left-associate and then apply the rule.

We could associate *all* compositions to the left during construction, in which case this rule will apply purely via syntactic matching. However, there will be other rewrites that require *right*-association in order to apply. Instead, for rules like this one, let’s explicitly left-decompose.

Suppose we have a smart constructor `composeApply`, where `composeApply g` constructs an optimized version of `apply ∘ g`. This reading implies the following type:

`composeApply ∷ (z ↣ (a ⇨ b) × a) → (z ↣ b)`

Thus

```
apply ∘ (g ∘ f)
≡ (apply ∘ g) ∘ f
≡ composeApply g ∘ f
```

Now we can define a general rule for composing `apply`:

`Apply @∘ (decompL → g :∘ f) = composeApply g @∘ f`

The function `decompL` (defined below) does a left decomposition and is conveniently used here in a view pattern. It decomposes a given term into `g ∘ f`, where `g` is as small as possible, but not `Id`. Where `decompL` finds such a decomposition, it yields a term with a top-level `(:∘)` constructor, and `composeApply` is used. Otherwise, the clause fails.

The implementation of `decompL`:

```
decompL ∷ (a ↣ c) → (a ↣ c)
decompL Id = Id
decompL ((decompL → h :∘ g) :∘ f) = h :∘ (g @∘ f)
decompL comp@(_ :∘ _) = comp
decompL f = f :∘ Id
```

There’s also `decompR` for right-factoring, similarly defined.

Note that I broke my rule of using only smart constructors on RHSs, since I specifically want to generate a `(:∘)` term.

With this re-association trick in place, we can now look at compose/apply rules.

The equivalence

`apply ∘ (const g △ f) ≡ g ∘ f`

becomes

`composeApply (Const p :△ f) = prim p @∘ f`

Likewise, the equivalence

`apply ∘ (h ∘ f △ g) ≡ uncurry h ∘ (f △ g)`

becomes

`composeApply (h :∘ f :△ g) = uncurryE h @∘ (f △ g)`

where `(△)` is the smart constructor for `(:△)`, and `uncurryE` is a smart constructor for `Uncurry`:

```
uncurryE ∷ (a ↣ (b ⇨ c)) → (a × b ↣ c)
uncurryE (Curry f) = f
uncurryE (Prim PairP) = Id
uncurryE h = Uncurry h
```

Two more `(∘)`/`apply` properties:

```
apply ∘ (curry (g ∘ exr) △ f)
≡ λ x → curry (g ∘ exr) x (f x)
≡ λ x → (g ∘ exr) (x, f x)
≡ λ x → g (f x)
≡ g ∘ f
```

```
apply ∘ first f
≡ λ p → apply (first f p)
≡ λ (a,b) → apply (first f (a,b))
≡ λ (a,b) → apply (f a, b)
≡ λ (a,b) → f a b
≡ uncurry f
```

The `first` combinator is not represented directly in our `(↣)` data type, but rather is defined via simpler parts in `LambdaCCC.CCC`:

```
first ∷ (a ↣ c) → (a × b ↣ c × b)
first f = f × Id

(×) ∷ (a ↣ c) → (b ↣ d) → (a × b ↣ c × d)
f × g = f @∘ Exl △ g @∘ Exr
```

Implementations of these two properties:

```
composeApply (Curry (decompR → g :∘ Exr) :△ f) = g @∘ f
composeApply (f :∘ Exl :△ Exr) = uncurryE f
```

These properties arose while examining CCC terms produced by translation from lambda terms. See the `LambdaCCC.CCC` module for more optimizations. I expect that others will arise with more experience.

As Haskell is a higher-order functional language in the heritage of Church’s (typed) lambda calculus, it also supports “lambda abstraction”.

Sadly, however, these two forms of abstraction don’t go together. When we use the vocabulary of lambda abstraction (“`λ x → ⋯`”) and application (“`u v`”), our expressions can only be interpreted as one type (constructor), namely functions. (Note that I am not talking about parametric polymorphism, which is available with both lambda abstraction and type-class-style overloading.) Is it possible to overload lambda and application using type classes, or perhaps in the same spirit? The answer is yes, and there are some wonderful benefits of doing so. I’ll explain the how in this post and hint at the why, to be elaborated in future posts.

First, let’s look at a related question. Instead of generalized interpretation of the particular *vocabulary* of lambda abstraction and application, let’s look at re-expressing functions via an alternative vocabulary that can be generalized more readily. If you are into math or have been using Haskell for a while, you may already know where I’m going: the mathematical notion of a *category* (and the embodiment in the `Category` and `Arrow` type classes).

Much has been written about categories, both in the setting of math and of Haskell, so I’ll give only the most cursory summary here.

Recall that every function has two associated sets (or types, CPOs, etc) often referred to as the function’s “domain” and “range”. (As explained elsewhere, the term “range” can be misleading.) Moreover, there are two general building blocks (among others) for functions, namely the identity function and composition of compatibly typed functions, satisfying the following properties:

*left identity:*`id ∘ f ≡ f`

*right identity:*`f ∘ id ≡ f`

*associativity:*`h ∘ (g ∘ f) ≡ (h ∘ g) ∘ f`

Now we can separate these properties from the other specifics of functions. A *category* is something that has these properties but needn’t be function-like in other ways. Each category has *objects* (e.g., sets) and *morphisms/arrows* (e.g., functions), and two building blocks `id` and `(∘)` on compatible morphisms. Rather than “domain” and “range”, we usually use the terms (a) “domain” and “codomain” or (b) “source” and “target”.

Examples of categories include sets & functions (as we’ve seen), restricted sets & functions (e.g., vector spaces & linear transformations), preorders, and any monoid (as a one-object category).

The notion of category is very general and correspondingly weak. By imposing so few constraints, it embraces a wide range of mathematical notions (including many appearing in programming) but gives correspondingly little leverage with which to define and prove more specific ideas and theorems. Thus we’ll often want additional structure, including products, coproducts (with products distributing over coproducts) and a notion of “exponential”, which is an object that represents a morphism. For the familiar terrain of sets/types and functions, products correspond to pairing, coproducts to sums (and choice), and exponentials to functions as things/values. (In programming, we often refer to exponentials as the types of “first class functions”. Some languages have them, and some don’t.) These aspects—together with associated laws—are called “cartesian”, “cocartesian”, and “closed”, respectively. Altogether, we have “bicartesian closed categories”, more succinctly called “biCCCs” (or “CCCs”, without the cocartesian requirement).

The *cartesian* vocabulary consists of a product operation on objects, `a × b`, plus three morphism building blocks:

`exl ∷ a × b ↝ a`

`exr ∷ a × b ↝ b`

`f △ g ∷ a ↝ b × c` where `f ∷ a ↝ b` and `g ∷ a ↝ c`

I’m using “`↝`” to refer to morphisms.

We’ll also want the dual notion of coproducts, `a + b`, with building blocks and laws exactly dual to products:

`inl ∷ a ↝ a + b`

`inr ∷ b ↝ a + b`

`f ▽ g ∷ a + b ↝ c` where `f ∷ a ↝ c` and `g ∷ b ↝ c`

You may have noticed that (a) `exl` and `exr` generalize `fst` and `snd`, (b) `inl` and `inr` generalize `Left` and `Right`, and (c) `(△)` and `(▽)` come from `Control.Arrow`, where they’re called “`(&&&)`” and “`(|||)`”. I took the names above from *Calculating Functional Programs*, where `(△)` and `(▽)` are also called “fork” and “join”.
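For the category of Haskell functions these generalized operations are ordinary library functions, which makes the correspondence easy to try out. A quick sketch using the `Control.Arrow` names (`splitNum` and `describe` are made-up examples):

```haskell
import Control.Arrow ((&&&), (|||))

-- exl/exr for functions are fst/snd; inl/inr are Left/Right.

-- (△) ("fork"): feed one input to two functions and pair the results.
splitNum :: Int -> (Int, Int)
splitNum = (+ 1) &&& (* 2)

-- (▽) ("join"): eliminate a sum by handling each injection.
describe :: Either Int String -> String
describe = show ||| id
```

For instance, `splitNum 3` yields `(4, 6)`, and `describe` renders either kind of input as a string.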

For product and coproduct laws, see *Calculating Functional Programs* (pp 155–156) or *Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire* (p 9).

The *closed* vocabulary consists of an exponential operation on objects, `a ⇨ b` (often written “${b}^{a}$”), plus three morphism building blocks:

`uncurry h ∷ a × b ↝ c` where `h ∷ a ↝ (b ⇨ c)`

`curry f ∷ a ↝ (b ⇨ c)` where `f ∷ a × b ↝ c`

`apply ∷ (a ⇨ b) × a ↝ b` (sometimes called “`eval`”)

Again, there are laws associated with `exl`, `exr`, `(△)`, `inl`, `inr`, `(▽)`, and with `curry`, `uncurry`, and `apply`.

In reading the signatures above, the operators `×`, `+`, and `⇨` all bind more tightly than `↝`, and `(∘)` binds more tightly than `(△)` and `(▽)`.

Keep in mind the distinction between morphisms (“`↝`”) and exponentials (“`⇨`”). The latter is a sort of data/object representation of the former.

I suggested that the *vocabulary* of the lambda calculus—namely lambda abstraction and application—can be generalized beyond functions. Then I showed something else, which is that an *alternative* vocabulary (biCCC) that applies to functions can be overloaded beyond functions. Instead of overloading the lambda calculus notation, we could simply use the alternative algebraic notation of biCCCs. Unfortunately, doing so leads to rather ugly results. The lambda calculus is a much more human-friendly notation than the algebraic language of biCCC.

I’m not just wasting your time and mine, however; there is a way to combine the flexibility of biCCC with the friendliness of lambda calculus: *automatically translate from lambda calculus to biCCC form*. The discovery that typed lambda calculus can be interpreted in any CCC is due to Joachim Lambek. See pointers on John Baez’s blog. (Coproducts do not arise in translation unless the source language has a construct like `if-then-else` or definition by cases with pattern matching.)

We’re going to need a few pieces to complete this story and have it be useful in a language like Haskell:

- a representation of lambda expressions,
- a representation of biCCC expressions,
- a translation of lambda expressions to biCCC, and
- a translation of Haskell to lambda expressions.

This last step (which is actually the first step in turning Haskell into biCCC) is already done by a typical compiler. We start with a syntactically rich language and desugar it into a much smaller lambda calculus. GHC in particular has a small language called “Core”, which is much smaller than the Haskell source language.

I originally intended to convert from Core directly to biCCC form, but I found it difficult to do correctly. Core is dynamically typed, so a type-correct Haskell program can manipulate Core in type-incorrect ways. In other words, a type-correct Haskell program can construct type-incorrect Core. Moreover, Core representations contain an enormous amount of type information, since all type inference has already been done and recorded, so it is tedious to get all of the type information correct and thus likely to get it incorrect. For just this reason, GHC includes an explicit type-checker, “Core Lint”, for catching type inconsistencies (but not their causes) after the fact. While Core Lint is much better than nothing, it is less helpful than static checking, which points to inconsistencies in the source code (of the Core-manipulation).

Because I want static checking of my source code for lambda-to-biCCC conversion, I defined my own alternative to Core, using a generalized algebraic data type (GADT). The first step of translation then is conversion from GHC Core into this GADT.

The source fragments I’ll show below are from the Github project lambda-ccc.

In Haskell, pair types are usually written “`(a,b)`”, sums as “`Either a b`”, and functions as “`a → b`”. For the categorical generalizations (products, coproducts, and exponentials), I’ll instead use the notation “`a × b`”, “`a + b`”, and “`a ⇨ b`”. (My blogging software typesets some operators differently from what you’ll see in the source code.)

```
infixl 7 ×
infixl 6 +
infixr 1 ⇨
```

For reasons to become clearer in future posts, I’ll want a typed representation of types. The data constructors are named to reflect the types they construct:

```
data Ty ∷ * → * where
  Unit ∷ Ty Unit
  (×)  ∷ Ty a → Ty b → Ty (a × b)
  (+)  ∷ Ty a → Ty b → Ty (a + b)
  (⇨)  ∷ Ty a → Ty b → Ty (a ⇨ b)
```

Note that `Ty a` is a singleton or empty for every type `a`. I could instead use promoted data type constructors and singletons.

Next, names and typed variables:

```
type Name = String
data V a = V Name (Ty a)
```

Lambda expressions contain binding patterns. For now, we’ll have just the unit pattern, variables, and pairs of patterns:

```
data Pat ∷ * → * where
  UnitPat ∷ Pat Unit
  VarPat  ∷ V a → Pat a
  (:#)    ∷ Pat a → Pat b → Pat (a × b)
```

Finally, we have lambda expressions, with constructors for variables, constants, application, and abstraction:

```
infixl 9 :^

data E ∷ * → * where
  Var    ∷ V a → E a
  ConstE ∷ Prim a → Ty a → E a
  (:^)   ∷ E (a ⇨ b) → E a → E b
  Lam    ∷ Pat a → E b → E (a ⇨ b)
```

The `Prim` GADT contains typed primitives. The `ConstE` constructor accompanies a `Prim` with its specific type, since primitives can be polymorphic.

The data type `a ↣ b` contains biCCC expressions that represent morphisms from `a` to `b`:

```
data (↣) ∷ * → * → * where
  -- Category
  Id    ∷ a ↣ a
  (:∘)  ∷ (b ↣ c) → (a ↣ b) → (a ↣ c)
  -- Products
  Exl   ∷ a × b ↣ a
  Exr   ∷ a × b ↣ b
  (:△)  ∷ (a ↣ b) → (a ↣ c) → (a ↣ b × c)
  -- Coproducts
  Inl   ∷ a ↣ a + b
  Inr   ∷ b ↣ a + b
  (:▽)  ∷ (b ↣ a) → (c ↣ a) → (b + c ↣ a)
  -- Exponentials
  Apply   ∷ (a ⇨ b) × a ↣ b
  Curry   ∷ (a × b ↣ c) → (a ↣ (b ⇨ c))
  Uncurry ∷ (a ↣ (b ⇨ c)) → (a × b ↣ c)
  -- Primitives
  Prim  ∷ Prim (a → b) → (a ↣ b)
  Const ∷ Prim b → (a ↣ b)
```

The actual representation has some constraints on the type variables involved. I could have used type classes instead of a GADT here, except that the existing classes do not allow polymorphism constraints on the methods. The `ConstraintKinds` language extension allows instance-specific constraints, but I’ve been unable to work out the details in this case.

I’m not happy with the similarity of `Prim` and `Const`. Perhaps there’s a simpler formulation.

We’ll always convert terms of the form `λ p → e`, and we’ll keep the pattern `p` and expression `e` separate:

`convert ∷ Pat a → E b → (a ↣ b)`

The pattern argument gets built up from patterns appearing in lambdas and serves as a variable binding “context”. To begin, we’ll strip the pattern off of a lambda, eta-expanding if necessary:

```
toCCC ∷ E (a ⇨ b) → (a ↣ b)
toCCC (Lam p e) = convert p e
toCCC e = toCCC (etaExpand e)
```

(We could instead begin with a dummy unit pattern/context, giving `toCCC` the type `E c → (() ↣ c)`.)

The conversion algorithm uses a collection of simple equivalences.

For constants, we have a simple equivalence:

`λ p → c ≡ const c`

Thus the implementation:

`convert _ (ConstE o _) = Const o`

For applications, split the expression in two (repeating the context), compute the function and argument parts separately, combine with `(△)`, and then `apply`:

`λ p → u v ≡ apply ∘ ((λ p → u) △ (λ p → v))`

The implementation:

`convert p (u :^ v) = Apply :∘ (convert p u :△ convert p v)`

For lambda expressions, simply curry:

`λ p → λ q → e ≡ curry (λ (p,q) → e)`

Assume that there is no variable shadowing, so that `p` and `q` have no variables in common. The implementation:

`convert p (Lam q e) = Curry (convert (p :# q) e)`

Finally, we have to deal with variables. Given `λ p → v` for a pattern `p` and variable `v` appearing in `p`, either `v ≡ p`, or `p` is a pair pattern with `v` appearing in the left or the right part. To handle these three possibilities, appeal to the following equivalences:

```
λ v → v ≡ id
λ (p,q) → e ≡ (λ p → e) ∘ exl -- if q not free in e
λ (p,q) → e ≡ (λ q → e) ∘ exr -- if p not free in e
```

By a pattern not occurring freely, I mean that no variable in the pattern occurs freely.
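Since these equivalences are plain facts about functions, they can be checked directly. For instance the second one, with `exl = fst` and a made-up body `e` that ignores `q`:

```haskell
-- λ (p,q) → e  ≡  (λ p → e) ∘ exl, when q is not free in e
viaPair :: (Int, Int) -> Int
viaPair (p, _q) = p + 1

viaExl :: (Int, Int) -> Int
viaExl = (\p -> p + 1) . fst
```

Both versions agree on every input pair, since the body never consults the second component.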

These properties lead to an implementation:

```
convert (VarPat u) (Var v) | u ≡ v = Id
convert (p :# q) e | not (q `occurs` e) = convert p e :∘ Exl
convert (p :# q) e | not (p `occurs` e) = convert q e :∘ Exr
```

There are two problems with this code. The first is a performance issue: the recursive `convert` calls will do considerable redundant work due to the recursive nature of `occurs`.

To fix this performance problem, handle only `λ p → v` (variables), and search through the pattern structure only once, returning a `Maybe (a ↣ b)`. The return value is `Nothing` when `v` does not occur in `p`.

```
convert p (Var v) =
  fromMaybe (error ("convert: unbound variable: " ++ show v)) $
  convertVar v p
```

If a sub-pattern search succeeds, tack on the `(∘ Exl)` or `(∘ Exr)` using `(<$>)` (i.e., `fmap`). Backtrack using `mplus`.

```
convertVar ∷ ∀ b a. V b → Pat a → Maybe (a ↣ b)
convertVar u = conv
 where
   conv ∷ Pat c → Maybe (c ↣ b)
   conv (VarPat v) | u ≡ v     = Just Id
                   | otherwise = Nothing
   conv UnitPat  = Nothing
   conv (p :# q) = ((:∘ Exr) <$> conv q) `mplus` ((:∘ Exl) <$> conv p)
```

(The explicit type quantification and the `ScopedTypeVariables` language extension relate the `b` in the signatures of `convertVar` and `conv`.) Note that we’ve solved the problem of redundant `occurs` testing, eliminating those tests altogether.

The second problem is more troubling: the definitions of `convert` for `Var` above do not type-check. Look again at the first try:

```
convert ∷ Pat a → E b → (a ↣ b)
convert (VarPat u) (Var v) | u ≡ v = Id
```

The error message:

```
Could not deduce (b ~ a)
...
Expected type: V a
Actual type: V b
In the second argument of `(==)', namely `v'
In the expression: u == v
```

The bug here is that we cannot compare `u` and `v` for equality, because their types may differ. The definition of `convertVar` has a similar type error.

There’s a trick I’ve used in many libraries to handle this situation of wanting to compare for equality two values that may or may not have the same type. For equal values, don’t return simply `True`, but rather a proof that the types do indeed match. For unequal values, we simply fail to return an equality proof. Thus the comparison operation on `V` has the following type:

`varTyEq ∷ V a → V b → Maybe (a :=: b)`

where `a :=: b` is populated only by proofs that `a` and `b` are the same type.

The type of type equality proofs is defined in Data.Proof.EQ from the ty package:

```
data (:=:) ∷ * → * → * where
  Refl ∷ a :=: a
```

The `Refl` constructor is named to suggest the axiom of reflexivity, which says that anything is equal to itself. There are other utilities for commutativity, associativity, and lifting of equality to type constructors.

In fact, this pattern comes up often enough that there’s a type class in the Data.IsTy module of the ty package:

```
class IsTy f where
  tyEq ∷ f a → f b → Maybe (a :=: b)
```
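To see the trick in isolation, here is a tiny, self-contained analogue (independent of the ty package) with a hypothetical two-type universe of representations. `tyEq` produces a `Refl` proof exactly when the representations match, and that proof licenses an otherwise ill-typed right-hand side:

```haskell
{-# LANGUAGE GADTs, TypeOperators #-}

data a :=: b where
  Refl :: a :=: a

-- A miniature singleton of type representations (made up for this sketch).
data Rep a where
  IntR  :: Rep Int
  BoolR :: Rep Bool

tyEq :: Rep a -> Rep b -> Maybe (a :=: b)
tyEq IntR  IntR  = Just Refl
tyEq BoolR BoolR = Just Refl
tyEq _     _     = Nothing

-- Matching on Just Refl brings a ~ b into scope,
-- so x :: a can be returned at type b.
cast :: Rep a -> Rep b -> a -> Maybe b
cast ra rb x | Just Refl <- tyEq ra rb = Just x
             | otherwise               = Nothing
```

For example, `cast IntR IntR 3` succeeds, while `cast IntR BoolR 3` fails at run time rather than at the type level.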

With this trick, we can fix our type-incorrect code above. Instead of

`convert (VarPat u) (Var v) | u ≡ v = Id`

define

`convert (VarPat u) (Var v) | Just Refl ← u `tyEq` v = Id`

During type-checking, GHC uses the guard (“`Just Refl ← u `tyEq` v`”) to deduce an additional *local* constraint to use in type-checking the right-hand side (here `Id`). That constraint (`a ~ b`) suffices to make the definition type-correct.

In the same way, we can fix the more efficient implementation:

```
convertVar ∷ ∀ b a. V b → Pat a → Maybe (a ↣ b)
convertVar u = conv
 where
   conv ∷ Pat c → Maybe (c ↣ b)
   conv (VarPat v) | Just Refl ← v `tyEq` u = Just Id
                   | otherwise              = Nothing
   conv UnitPat  = Nothing
   conv (p :# q) = ((:∘ Exr) <$> conv q) `mplus` ((:∘ Exl) <$> conv p)
```

To see how conversion works in practice, consider a simple swap function:

`swap (a,b) = (b,a)`

When reified (as explained in a future post), we get

`λ ds → (λ (a,b) → (b,a)) ds`

Lambda expressions can be optimized at construction, in which case an $\eta $-reduction would yield the simpler `λ (a,b) → (b,a)`. However, to make the translation more interesting, I’ll leave the lambda term unoptimized.

With the conversion algorithm given above, the (unoptimized) lambda term gets translated into the following:

`apply ∘ (curry (apply ∘ (apply ∘ (const (,) △ (id ∘ exr) ∘ exr) △ (id ∘ exl) ∘ exr)) △ id)`

Reformatted with line breaks:

```
apply
∘ ( curry (apply ∘ ( apply ∘ (const (,) △ (id ∘ exr) ∘ exr)
                     △ (id ∘ exl) ∘ exr ) )
  △ id )
```

If you squint, you may be able to see how this CCC expression relates to the lambda expression. The “`λ ds →`” got stripped initially. The remaining application “`(λ (a,b) → (b,a)) ds`” became `apply ∘ (⋯ △ ⋯)`, where the right “`⋯`” is `id`, which came from `ds`. The left “`⋯`” has a `curry` from the “`λ (a,b) →`” and two `apply`s from the curried application of `(,)` to `b` and `a`. The variables `b` and `a` become `(id ∘ exr) ∘ exr` and `(id ∘ exl) ∘ exr`, which are paths to `b` and `a` in the constructed binding pattern `(ds,(a,b))`.
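We can check the unsimplified term by interpreting the CCC vocabulary as ordinary functions (`applyF` standing in for `apply`, `(&&&)` for `(△)`, `fst`/`snd` for `exl`/`exr`) and confirming that it really swaps:

```haskell
import Control.Arrow ((&&&))

-- apply at the function instance
applyF :: (a -> b, a) -> b
applyF (f, x) = f x

-- The translated term above, transcribed with function-level operations.
bigSwap :: (Int, Int) -> (Int, Int)
bigSwap =
  applyF
  . ( curry (applyF . ( applyF . (const (,) &&& (id . snd) . snd)
                        &&& (id . fst) . snd ))
    &&& id )
```

Applying `bigSwap` to `(1, 2)` indeed yields `(2, 1)`.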

I hope this example gives you a feeling for how the lambda-to-CCC translation works in practice, *and* for the complexity of the result. Fortunately, we can simplify the CCC terms as they’re constructed. For this example, as we’ll see in the next post, we get a much simpler result:

`exr △ exl`

This combination is common enough that it pretty-prints as `swapP` when CCC desugaring is turned on. (The “`P`” suffix refers to “product”, to distinguish from coproduct swap.)

I’ll close this blog post now to keep it digestible. Upcoming posts will address optimization of biCCC expressions, circuit generation and analysis as biCCCs, and the GHC plugin that handles conversion of Haskell code to biCCC form, among other topics.

]]>Since fall of last year, I’ve been working at Tabula, a Silicon Valley start-up developing an innovative programmable hardware architecture called “Spacetime”, somewhat similar to an FPGA, but much more flexible and efficient. I met the founder, Steve Teig, at a Bay Area Haskell Hackathon in February of 2011. He described his Spacetime architecture, which is based on the geometry of the same name, developed by Hermann Minkowski to elegantly capture Einstein’s theory of special relativity. Within the first 30 seconds or so of hearing what Steve was up to, I knew I wanted to help.

The vision Steve shared with me included not only a better alternative for *hardware* designers (programmed in hardware languages like Verilog and VHDL), but also a platform for massively parallel execution of *software* written in a purely functional language. Lately, I’ve been working mainly on this latter aspect, and specifically on the problem of how to compile Haskell. Our plan is to develop the Haskell compiler openly and encourage collaboration. If anything you see in this blog series interests you, and especially if you have advice or you’d like to collaborate on the project, please let me know.

In my next series of blog posts, I’ll describe some of the technical ideas I’ve been working with for compiling Haskell for massively parallel execution. For now, I want to introduce a central idea I’m using to approach the problem.

I’m used to thinking of the typed lambda calculi as languages for describing functions and other mathematical values. For instance, if the type of an expression `e` is `Bool → Bool`, then the meaning of `e` is a function from Booleans to Booleans. (In non-strict pure languages like Haskell, both Boolean types include `⊥`. In hypothetical pure strict languages, the range is extended to include `⊥`, but the domain isn’t.)

However, there are other ways to interpret typed lambda-calculi.

You may have heard of “cartesian closed categories” (CCCs). CCC is an abstraction having a small vocabulary with associated laws:

- The “category” part means we have a notion of “morphisms” (or “arrows”) each having a domain and codomain “object”. There is an identity morphism and an associative composition operator. If this description of morphisms and objects sounds like functions and types (or sets), it’s because functions and types are one example, with `id` and `(∘)`.
- The “cartesian” part means that we have products, with projection functions and an operator to combine two functions into a pair-producing function. For Haskell functions, these operations are `fst` and `snd`, together with `(&&&)` from `Control.Arrow`.
- The “closed” part means that we have a way to represent morphisms via objects, referred to as “exponentials”. The corresponding operations are `curry`, `uncurry`, and `apply`. Since Haskell is a higher-order language, these exponential objects are simply (first class) functions.

A wonderful thing about the CCC interface is that it suffices to translate any lambda expression, as discovered by Joachim Lambek. In other words, lambda expressions can be systematically translated into the CCC vocabulary. Any (law-abiding) interpretation of that vocabulary is thus an interpretation of the lambda calculus.

Besides intellectual curiosity, why might one care about interpreting lambda expressions in terms of CCCs other than the one we usually think of for functional programs? I got interested because I’ve been thinking about how to compile Haskell programs to “circuits”, both the standard static kind and more dynamic variants. Since Haskell is a typed lambda calculus, if we can formulate circuits as a CCC, we’ll have our Haskell-to-circuit compiler. Other interpretations enable analysis of timing and demand propagation (including strictness).

- Converting lambda expressions to CCC form.
- Optimizing CCC expressions.
- Plugging into GHC, to convert from Haskell source to CCC.
- Applications of this translation, including the following:
- Circuits
- Timing analysis
- Strictness/demand analysis
- Type simplification (normalization)

to make strange things settled, so much as

to make settled things strange.

- G.K. Chesterton

Why is matrix multiplication defined so very differently from matrix addition? If we didn’t know these procedures, could we derive them from first principles? What might those principles be?

This post gives a simple semantic model for matrices and then uses it to systematically *derive* the implementations that we call matrix addition and multiplication. The development illustrates what I call “denotational design”, particularly with type class morphisms. On the way, I give a somewhat unusual formulation of matrices and accompanying definition of matrix “multiplication”.

For more details, see the linear-map-gadt source code.

**Edits:**

- 2012–12–17: Replaced lost $B$ entries in description of matrix addition. Thanks to Travis Cardwell.
- 2012–12–18: Added note about math/browser compatibility.

**Note:** I’m using MathML for the math below, which appears to work well on Firefox but on neither Safari nor Chrome. I use Pandoc to generate the HTML+MathML from markdown+lhs+LaTeX. There’s probably a workaround using different Pandoc settings and requiring some tweaks to my WordPress installation. If anyone knows how (especially the WordPress end), I’d appreciate some pointers.

For now, I’ll write matrices in the usual form: $$\left(\begin{array}{ccc}{A}_{11}& \cdots & {A}_{1m}\\ \vdots & \ddots & \vdots \\ {A}_{n1}& \cdots & {A}_{nm}\end{array}\right)$$

To add two matrices, we add their corresponding components. If $$A=\left(\begin{array}{ccc}{A}_{11}& \cdots & {A}_{1m}\\ \vdots & \ddots & \vdots \\ {A}_{n1}& \cdots & {A}_{nm}\end{array}\right)\phantom{\rule{0.167em}{0ex}}\phantom{\rule{0.167em}{0ex}}\mathrm{\text{and}}\phantom{\rule{0.333em}{0ex}}B=\left(\begin{array}{ccc}{B}_{11}& \cdots & {B}_{1m}\\ \vdots & \ddots & \vdots \\ {B}_{n1}& \cdots & {B}_{nm}\end{array}\right),$$ then $$A+B=\left(\begin{array}{ccc}{A}_{11}+{B}_{11}& \cdots & {A}_{1m}+{B}_{1m}\\ \vdots & \ddots & \vdots \\ {A}_{n1}+{B}_{n1}& \cdots & {A}_{nm}+{B}_{nm}\end{array}\right).$$ More succinctly, $$(A+B{)}_{ij}={A}_{ij}+{B}_{ij}.$$

Multiplication, on the other hand, works quite differently. If $$A=\left(\begin{array}{ccc}{A}_{11}& \cdots & {A}_{1m}\\ \vdots & \ddots & \vdots \\ {A}_{n1}& \cdots & {A}_{nm}\end{array}\right)\phantom{\rule{0.167em}{0ex}}\phantom{\rule{0.167em}{0ex}}\mathrm{\text{and}}\phantom{\rule{0.333em}{0ex}}B=\left(\begin{array}{ccc}{B}_{11}& \cdots & {B}_{1p}\\ \vdots & \ddots & \vdots \\ {B}_{m1}& \cdots & {B}_{mp}\end{array}\right),$$ then $$(A\bullet B{)}_{ij}=\sum _{k=1}^{m}{A}_{ik}\cdot {B}_{kj}.$$ This time, we form the dot product of each $A$ row and $B$ column.

Why are these two matrix operations defined so differently? Perhaps these two operations are *implementations* of more fundamental *specifications*. If so, then making those specifications explicit could lead us to clear and compelling explanations of matrix addition and multiplication.

Simplifying from matrix multiplication, we have transformation of a vector by a matrix. If $$A=\left(\begin{array}{ccc}{A}_{11}& \cdots & {A}_{1m}\\ \vdots & \ddots & \vdots \\ {A}_{n1}& \cdots & {A}_{nm}\end{array}\right)\phantom{\rule{0.167em}{0ex}}\phantom{\rule{0.167em}{0ex}}\mathrm{\text{and}}\phantom{\rule{0.333em}{0ex}}x=\left(\begin{array}{c}{x}_{1}\\ \vdots \\ {x}_{m}\end{array}\right),$$ then $$A\cdot x=\left(\begin{array}{ccccc}{A}_{11}\cdot {x}_{1}& +& \cdots & +& {A}_{1m}\cdot {x}_{m}\\ \vdots & & \ddots & & \vdots \\ {A}_{n1}\cdot {x}_{1}& +& \cdots & +& {A}_{nm}\cdot {x}_{m}\end{array}\right)$$ More succinctly, $$(A\cdot x{)}_{i}=\sum _{k=1}^{m}{A}_{ik}\cdot {x}_{k}.$$

We can interpret matrices *as* transformations. Matrix addition then *adds* transformations:

$$(A+B)\phantom{\rule{0.167em}{0ex}}x=A\phantom{\rule{0.167em}{0ex}}x+B\phantom{\rule{0.167em}{0ex}}x$$

Matrix “multiplication” *composes* transformations:

$$(A\bullet B)\phantom{\rule{0.167em}{0ex}}x=A\phantom{\rule{0.167em}{0ex}}(B\phantom{\rule{0.167em}{0ex}}x)$$
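These two specifications can be spot-checked against the textbook implementations, written here on lists of rows (a throwaway sketch for checking, not the representation developed below):

```haskell
import Data.List (transpose)

type Matrix = [[Double]]
type Vector = [Double]

-- Transform a vector: (A·x)_i = Σ_k A_ik · x_k
mulMV :: Matrix -> Vector -> Vector
mulMV a x = [sum (zipWith (*) row x) | row <- a]

-- Componentwise addition: (A+B)_ij = A_ij + B_ij
addM :: Matrix -> Matrix -> Matrix
addM = zipWith (zipWith (+))

-- Row-by-column dot products: (A•B)_ij = Σ_k A_ik · B_kj
mulMM :: Matrix -> Matrix -> Matrix
mulMM a b = [[sum (zipWith (*) row col) | col <- transpose b] | row <- a]
```

With `a = [[1,2],[3,4]]`, `b = [[5,6],[7,8]]`, and `x = [1,1]`, one can confirm both specifications: `mulMV (addM a b) x == zipWith (+) (mulMV a x) (mulMV b x)`, and `mulMV (mulMM a b) x == mulMV a (mulMV b x)`.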

What kinds of transformations?

Matrices represent *linear* transformations. To say that a transformation (or “function” or “map”) $f$ is “linear” means that $f$ preserves the structure of addition and scalar multiplication. In other words, $$\begin{array}{ccc}\hfill f\phantom{\rule{0.167em}{0ex}}\phantom{\rule{0.167em}{0ex}}(x+y)& \hfill =\hfill & f\phantom{\rule{0.167em}{0ex}}x+f\phantom{\rule{0.167em}{0ex}}y\hfill \\ \hfill f\phantom{\rule{0.167em}{0ex}}\phantom{\rule{0.167em}{0ex}}(c\cdot x)& \hfill =\hfill & c\cdot f\phantom{\rule{0.167em}{0ex}}x\hfill \end{array}$$ Equivalently, $f$ preserves all *linear combinations*: $$f\phantom{\rule{0.167em}{0ex}}({c}_{1}\cdot {x}_{1}+\cdots +{c}_{m}\cdot {x}_{m})={c}_{1}\cdot f\phantom{\rule{0.167em}{0ex}}{x}_{1}+\cdots +{c}_{m}\cdot f\phantom{\rule{0.167em}{0ex}}{x}_{m}$$

What does it mean to say that “matrices represent linear transformations”? As we saw in the previous section, we can use a matrix to transform a vector. Our semantic function will exactly be this use, i.e., the *meaning* of matrix is as a function (map) from vectors to vectors. Moreover, these functions will satisfy the linearity properties above.

For simplicity, I’m going to structure matrices in an unconventional way. Instead of a rectangular arrangement of numbers, use the following generalized algebraic data type (GADT):

```
data a ⊸ b where
  Dot   ∷ InnerSpace b ⇒
          b → (b ⊸ Scalar b)
  (:&&) ∷ VS3 a c d ⇒  -- vector spaces with same scalar field
          (a ⊸ c) → (a ⊸ d) → (a ⊸ c × d)
```

I’m using the notation “`c × d`” in place of the usual “`(c,d)`”. Precedences are such that “`×`” binds more tightly than “`⊸`”, which binds more tightly than “`→`”.

This definition builds on the `VectorSpace` class, with its associated `Scalar` type and `InnerSpace` subclass. Using `VectorSpace` is overkill for linear maps. It suffices to use modules over semirings, which means that we don’t assume multiplicative or additive inverses. The more general setting enables many more useful applications than vector spaces do, some of which I will describe in future posts.

The idea here is that a linear map results in either (a) a scalar, in which case it’s equivalent to `dot v` (partially applied dot product) for some `v`, or (b) a product, in which case it can be decomposed into two linear maps with simpler range types. Each row in a conventional matrix corresponds to `Dot v` for some vector `v`, and the stacking of rows corresponds to nested applications of `(:&&)`.

The semantic function, `apply`, interprets a representation of a linear map as a function (satisfying linearity):

```
apply ∷ (a ⊸ b) → (a → b)
apply (Dot b) = dot b
apply (f :&& g) = apply f &&& apply g
```

where `(&&&)` is from `Control.Arrow`:

`(&&&) ∷ Arrow (↝) ⇒ (a ↝ b) → (a ↝ c) → (a ↝ (b,c))`

For functions,

`(f &&& g) a = (f a, g a)`

Functions form a vector space, with scaling and addition defined “pointwise”. Instances from the vector-space package:

```
instance AdditiveGroup v ⇒ AdditiveGroup (a → v) where
  zeroV   = pure   zeroV
  (^+^)   = liftA2 (^+^)
  negateV = fmap   negateV

instance VectorSpace v ⇒ VectorSpace (a → v) where
  type Scalar (a → v) = Scalar v
  (*^) s = fmap (s *^)
```

I wrote the definitions in this form to fit a template for applicative functors in general. Inlining the definitions of `pure`

, `liftA2`

, and `fmap`

on functions, we get the following equivalent instances:

```
instance AdditiveGroup v ⇒ AdditiveGroup (a → v) where
  zeroV     = λ _ → zeroV
  f ^+^ g   = λ a → f a ^+^ g a
  negateV f = λ a → negateV (f a)

instance VectorSpace v ⇒ VectorSpace (a → v) where
  type Scalar (a → v) = Scalar v
  s *^ f = λ a → s *^ f a
```
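The pointwise reading is easy to exercise with ordinary numeric functions, standing in `(+)` and `(*)` for `(^+^)` and `(*^)` (a sketch for checking, not the class instances themselves):

```haskell
-- (f ^+^ g) a = f a ^+^ g a, specialized to Num
addF :: Num b => (a -> b) -> (a -> b) -> (a -> b)
addF f g = \a -> f a + g a

-- (s *^ f) a = s *^ f a, specialized to Num
scaleF :: Num b => b -> (a -> b) -> (a -> b)
scaleF s f = \a -> s * f a
```

For instance, `addF (+ 1) (* 2) 3` computes `(3 + 1) + (3 * 2) = 10`.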

In math, we usually say that dot product is “bilinear”, or “linear in each argument”, i.e.,

```
dot (s *^ u,v) ≡ s *^ dot (u,v)
dot (u ^+^ w, v) ≡ dot (u,v) ^+^ dot (w,v)
```

Similarly for the second argument:

```
dot (u,s *^ v) ≡ s *^ dot (u,v)
dot (u, v ^+^ w) ≡ dot (u,v) ^+^ dot (u,w)
```

Now recast the first of these properties in a curried form:

`dot (s *^ u) v ≡ s *^ dot u v`

i.e.,

```
dot (s *^ u)
≡ {- η-expand -}
λ v → dot (s *^ u) v
≡ {- "bilinearity" -}
λ v → s *^ dot u v
≡ {- (*^) on functions -}
λ v → (s *^ dot u) v
≡ {- η-contract -}
s *^ dot u
```

Likewise,

```
dot (u ^+^ v)
≡ {- η-expand -}
λ w → dot (u ^+^ v) w
≡ {- "bilinearity" -}
λ w → dot u w ^+^ dot v w
≡ {- (^+^) on functions -}
dot u ^+^ dot v
```

Thus, when “bilinearity” is recast in terms of curried functions, it becomes just linearity. (The same reasoning applies more generally to multilinearity.)

Note that we could also define function addition as follows:

`f ^+^ g = add ∘ (f &&& g)`

where

`add = uncurry (^+^)`

This uncurried form will come in handy in derivations below.
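A quick check that the uncurried form agrees with the pointwise definition (again with plain numbers in place of a general `AdditiveGroup`):

```haskell
import Control.Arrow ((&&&))

-- add = uncurry (^+^), at Num
add :: Num a => (a, a) -> a
add = uncurry (+)

-- f ^+^ g = add ∘ (f △ g), at functions
addViaFork :: Num b => (a -> b) -> (a -> b) -> (a -> b)
addViaFork f g = add . (f &&& g)
```

`addViaFork (+ 1) (* 2) 3` gives the same `10` as the pointwise definition.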

We’ll add two linear maps using the `(^+^)` operation from `Data.AdditiveGroup`:

`(^+^) ∷ (a ⊸ b) → (a ⊸ b) → (a ⊸ b)`

Following the principle of semantic type class morphisms, the specification simply says that the meaning of the sum is the sum of the meanings:

`apply (f ^+^ g) ≡ apply f ^+^ apply g`

which is half of the definition of “linearity” for `apply`.

The game plan (as always) is to use the semantic specification to derive (or “calculate”) a correct implementation of each operation. For addition, this goal means we want to come up with a definition like

`f ^+^ g = <rhs>`

where `<rhs>`

is some expression in terms of `f`

and `g`

whose *meaning* is the same as the meaning as `f ^+^ g`

, i.e., where

`apply (f ^+^ g) ≡ apply <rhs>`

Since Haskell has convenient pattern matching, we’ll use it for our definition of `(^+^)` above. Addition has two arguments, and our data type has two constructors, so there are at most four different cases to consider.

First, add `Dot` and `Dot`. The specification

`apply (f ^+^ g) ≡ apply f ^+^ apply g`

specializes to

`apply (Dot b ^+^ Dot c) ≡ apply (Dot b) ^+^ apply (Dot c)`

Now simplify the right-hand side (RHS):

```
apply (Dot b) ^+^ apply (Dot c)
≡ {- apply definition -}
dot b ^+^ dot c
≡ {- (bi)linearity of dot, as described above -}
dot (b ^+^ c)
≡ {- apply definition -}
apply (Dot (b ^+^ c))
```

So our specialized specification becomes

`apply (Dot b ^+^ Dot c) ≡ apply (Dot (b ^+^ c))`

which is implied by

`Dot b ^+^ Dot c ≡ Dot (b ^+^ c)`

and easily satisfied by the following partial definition (replacing “`≡`” by “`=`”):

`Dot b ^+^ Dot c = Dot (b ^+^ c)`

Now consider the case of addition with two `(:&&)` constructors.

The specification specializes to

`apply ((f :&& g) ^+^ (h :&& k)) ≡ apply (f :&& g) ^+^ apply (h :&& k)`

As with `Dot`, simplify the RHS:

```
apply (f :&& g) ^+^ apply (h :&& k)
≡ {- apply definition -}
(apply f &&& apply g) ^+^ (apply h &&& apply k)
≡ {- See below -}
(apply f ^+^ apply h) &&& (apply g ^+^ apply k)
≡ {- induction -}
apply (f ^+^ h) &&& apply (g ^+^ k)
≡ {- apply definition -}
apply ((f ^+^ h) :&& (g ^+^ k))
```

I used the following property (on functions):

`(f &&& g) ^+^ (h &&& k) ≡ (f ^+^ h) &&& (g ^+^ k)`

Proof:

```
(f &&& g) ^+^ (h &&& k)
≡ {- η-expand -}
λ x → ((f &&& g) ^+^ (h &&& k)) x
≡ {- (&&&) definition for functions -}
λ x → (f x, g x) ^+^ (h x, k x)
≡ {- (^+^) definition for pairs -}
λ x → (f x ^+^ h x, g x ^+^ k x)
≡ {- (^+^) definition for functions -}
λ x → ((f ^+^ h) x, (g ^+^ k) x)
≡ {- (&&&) definition for functions -}
(f ^+^ h) &&& (g ^+^ k)
```

The specification becomes

`apply ((f :&& g) ^+^ (h :&& k)) ≡ apply ((f ^+^ h) :&& (g ^+^ k))`

which is easily satisfied by the following partial definition

`(f :&& g) ^+^ (h :&& k) = (f ^+^ h) :&& (g ^+^ k)`

The other two cases are (a) `Dot` and `(:&&)`, and (b) `(:&&)` and `Dot`, but they don’t type-check (assuming that pairs are not scalars).

I’ll write linear map composition as “`g ∘ f`”, with type

`(∘) ∷ (b ⊸ c) → (a ⊸ b) → (a ⊸ c)`

This notation is thanks to a `Category` instance, which depends on a generalized `Category` class that uses the recent `ConstraintKinds` language extension. (See the source code.)

Following the semantic type class morphism principle again, the specification says that the meaning of the composition is the composition of the meanings:

`apply (g ∘ f) ≡ apply g ∘ apply f`

In the following, note that the `∘` operator binds more tightly than `&&&`, so `f ∘ h &&& g ∘ h` means `(f ∘ h) &&& (g ∘ h)`.

Again, since there are two constructors, we have four possible cases. We can handle two of these cases together, namely `(:&&)` and anything. The specification:

`apply ((f :&& g) ∘ h) ≡ apply (f :&& g) ∘ apply h`

Reasoning proceeds as above, simplifying the RHS of the constructor-specialized specification.

Simplify the RHS:

```
apply (f :&& g) ∘ apply h
≡ {- apply definition -}
(apply f &&& apply g) ∘ apply h
≡ {- see below -}
apply f ∘ apply h &&& apply g ∘ apply h
≡ {- induction -}
apply (f ∘ h) &&& apply (g ∘ h)
≡ {- apply definition -}
apply (f ∘ h :&& g ∘ h)
```

This simplification uses the following property of functions:

`(p &&& q) ∘ r ≡ p ∘ r &&& q ∘ r`

Sufficient definition:

`(f :&& g) ∘ h = f ∘ h :&& g ∘ h`

We have two more cases, specified as follows:

```
apply (Dot c ∘ Dot b) ≡ apply (Dot c) ∘ apply (Dot b)
apply (Dot c ∘ (f :&& g)) ≡ apply (Dot c) ∘ apply (f :&& g)
```

Based on types, `c` must be a scalar in the first case and a pair in the second. (`Dot b` produces a scalar, while `f :&& g` produces a pair.) Thus, we can write these two cases more specifically:

```
apply (Dot s ∘ Dot b) ≡ apply (Dot s) ∘ apply (Dot b)
apply (Dot (a,b) ∘ (f :&& g)) ≡ apply (Dot (a,b)) ∘ apply (f :&& g)
```

In the derivation, I won’t spell out as many details as before. Simplify the RHSs:

```
apply (Dot s) ∘ apply (Dot b)
≡ dot s ∘ dot b
≡ dot (s *^ b)
≡ apply (Dot (s *^ b))
```

```
apply (Dot (a,b)) ∘ apply (f :&& g)
≡ dot (a,b) ∘ (apply f &&& apply g)
≡ add ∘ (dot a ∘ apply f &&& dot b ∘ apply g)
≡ dot a ∘ apply f ^+^ dot b ∘ apply g
≡ apply (Dot a ∘ f ^+^ Dot b ∘ g)
```

I’ve used the following properties of functions:

```
dot (a,b) ≡ add ∘ (dot a *** dot b)
(r *** s) ∘ (p &&& q) ≡ r ∘ p &&& s ∘ q
add ∘ (p &&& q) ≡ p ^+^ q
apply (f ^+^ g) ≡ apply f ^+^ apply g
```

Implementation:

```
Dot s ∘ Dot b = Dot (s *^ b)
Dot (a,b) ∘ (f :&& g) = Dot a ∘ f ^+^ Dot b ∘ g
```

Another `Arrow` operation handy for linear maps is the parallel composition (product):

`(***) ∷ (a ⊸ c) → (b ⊸ d) → (a × b ⊸ c × d)`

The specification says that `apply` distributes over `(***)`. In other words, the meaning of the product is the product of the meanings.

`apply (f *** g) = apply f *** apply g`

where, on functions,

```
p *** q = λ (a,b) → (p a, q b)
≡ p ∘ fst &&& q ∘ snd
```

Simplify the specification’s RHS:

```
apply f *** apply g
≡ apply f ∘ fst &&& apply g ∘ snd
```

If we knew how to represent `fst` and `snd` via our linear map constructors, we’d be nearly done. Instead, let’s suppose we have the following functions.

```
compFst ∷ VS3 a b c ⇒ a ⊸ c → a × b ⊸ c
compSnd ∷ VS3 a b c ⇒ b ⊸ c → a × b ⊸ c
```

specified as follows:

```
apply (compFst f) ≡ apply f ∘ fst
apply (compSnd g) ≡ apply g ∘ snd
```

With these two functions (to be defined) in hand, let’s try again.

```
apply f *** apply g
≡ apply f ∘ fst &&& apply g ∘ snd
≡ apply (compFst f) &&& apply (compSnd g)
≡ apply (compFst f :&& compSnd g)
```

**`fst` and `snd`**

I’ll elide even more of the derivation this time, focusing reasoning on the meanings. Relating to the representation is left as an exercise. The key steps in the derivation:

```
dot a ∘ fst ≡ dot (a,0)
(f &&& g) ∘ fst ≡ f ∘ fst &&& g ∘ fst
dot b ∘ snd ≡ dot (0,b)
(f &&& g) ∘ snd ≡ f ∘ snd &&& g ∘ snd
```

Implementation:

```
compFst (Dot a) = Dot (a,zeroV)
compFst (f :&& g) = compFst f &&& compFst g
compSnd (Dot b) = Dot (zeroV,b)
compSnd (f :&& g) = compSnd f &&& compSnd g
```

where `zeroV` is the zero vector.

Given `compFst` and `compSnd`, we can implement `fst` and `snd` as linear maps simply as `compFst id` and `compSnd id`, where `id` is the (polymorphic) identity linear map.

This post reflects an approach to programming that I apply wherever I’m able. As a summary:

- Look for an elegant *what* behind a familiar *how*.
- *Define* a semantic function for each data type.
- *Derive* a correct implementation from the semantics.

You can find more examples of this methodology elsewhere in this blog and in the paper *Denotational design with type class morphisms*.

I’ve been thinking much more about parallel computation for the last couple of years, especially since starting to work at Tabula a year ago. Until getting into parallelism explicitly, I’d naïvely thought that my pure functional programming style was mostly free of sequential bias. After all, functional programming lacks the implicit accidental dependencies imposed by the imperative model. Now, however, I’m coming to see that designing parallel-friendly algorithms takes attention to minimizing the depth of the remaining, explicit data dependencies.

As an example, consider binary addition, carried out from least to most significant bit (as usual). We can immediately compute the first (least significant) bit of the result, but in order to compute the second bit, we’ll have to know whether or not a carry resulted from the first addition. More generally, the $(n+1)$*th* sum & carry require knowing the $n$*th* carry, so this algorithm does not allow parallel execution. Even if we have one processor per bit position, only one processor will be able to work at a time, due to the linear chain of dependencies.

One general technique for improving parallelism is *speculation*—doing more work than might be needed so that we don’t have to wait to find out exactly what *will* be needed. In this post, we’ll see a progression of definitions for bitwise addition. We’ll start with a linear-depth chain of carry dependencies and end with logarithmic depth. Moreover, by making careful use of abstraction, these versions will be simply different type specializations of a single, extremely terse polymorphic definition.

Let’s start with an adder for two one-bit numbers. Because of the possibility of overflow, the result will be two bits, which I’ll call “sum” and “carry”. So that we can chain these one-bit adders, we’ll also add a carry input.

`addB ∷ (Bool,Bool) → Bool → (Bool,Bool)`

In the result, the first `Bool` will be the sum, and the second will be the carry. I’ve curried the carry input to make it stand out from the (other) addends.

There are a few ways to define `addB` in terms of logic operations. I like the following definition, as it shares a little work between sum & carry:

```
addB (a,b) cin = (axb ≠ cin, anb ∨ (cin ∧ axb))
  where
    axb = a ≠ b
    anb = a ∧ b
```

I’m using `(≠)` on `Bool` for exclusive or.
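A quick way to gain confidence in this definition is to check it against ordinary addition on all eight input combinations. Here is a runnable ASCII sketch (standard Haskell, with `(/=)` playing the role of `(≠)`):

```haskell
-- ASCII transcription of addB, using (/=) for exclusive or.
addB :: (Bool, Bool) -> Bool -> (Bool, Bool)
addB (a, b) cin = (axb /= cin, anb || (cin && axb))
  where
    axb = a /= b
    anb = a && b

-- Compare against ordinary addition over the whole truth table:
-- sum bit + 2 * carry bit should equal a + b + cin as numbers.
fullAdderOK :: Bool
fullAdderOK = and
  [ let (s, c) = addB (a, b) cin
    in fromEnum s + 2 * fromEnum c == fromEnum a + fromEnum b + fromEnum cin
  | a <- [False, True], b <- [False, True], cin <- [False, True] ]
```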

Now suppose we have not just two bits, but two *sequences* of bits, interpreted as binary numbers arranged from least to most significant bit. For simplicity, I’d like to assume that these sequences have the same length, so rather than taking a pair of bit lists, let’s take a list of bit pairs:

`add ∷ [(Bool,Bool)] → Bool → ([Bool],Bool)`

To implement `add`, traverse the list of bit pairs, threading the carries:

```
add []     c = ([] , c)
add (p:ps) c = (s:ss, c'')
  where
    (s ,c' ) = addB p c
    (ss,c'') = add ps c'
```
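To see the carry threading in action, here is a self-contained ASCII version with a tiny example (1 + 2 = 3, bits least-significant first):

```haskell
-- ASCII transcription of the ripple-carry adder.
addB :: (Bool, Bool) -> Bool -> (Bool, Bool)
addB (a, b) cin = (axb /= cin, anb || (cin && axb))
  where
    axb = a /= b
    anb = a && b

add :: [(Bool, Bool)] -> Bool -> ([Bool], Bool)
add []     c = ([], c)
add (p:ps) c = (s:ss, c'')
  where
    (s , c' ) = addB p c
    (ss, c'') = add ps c'

-- 1 + 2, least significant bit first: bit pairs [(1,0),(0,1)].
example :: ([Bool], Bool)
example = add [(True, False), (False, True)] False
-- = ([True, True], False), i.e. 3 with no carry-out
```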

This `add` definition contains a familiar pattern. The carry values act as a sort of *state* that gets updated in a linear (non-branching) way. The `State` monad captures this pattern of computation:

`newtype State s a = State (s → (a,s))`

By using `State` and its `Monad` instance, we can shorten our `add` definition. First we’ll need a new full adder definition, tweaked for `State`:

```
addB ∷ (Bool,Bool) → State Bool Bool
addB (a,b) = do cin ← get
                put (anb ∨ cin ∧ axb)
                return (axb ≠ cin)
  where
    anb = a ∧ b
    axb = a ≠ b
```

And then the multi-bit adder:

```
add ∷ [(Bool,Bool)] → State Bool [Bool]
add []     = return []
add (p:ps) = do s  ← addB p
                ss ← add ps
                return (s:ss)
```

We don’t really need the `Monad` interface to define `add`. The simpler and more general `Applicative` interface suffices:

```
add [] = pure []
add (p:ps) = liftA2 (:) (addB p) (add ps)
```

This pattern also looks familiar. Oh — the `Traversable` instance for lists makes for a very compact definition:

`add = traverse addB`

Wow. The definition is now so simple that it doesn’t depend on the specific choice of lists. To find out the most general type `add` can have (with this definition), remove the type signature, turn off the monomorphism restriction, and see what GHCi has to say:

`add ∷ Traversable t ⇒ t (Bool,Bool) → State Bool (t Bool)`

This constraint is *very* lenient. `Traversable` can be derived automatically for *all* algebraic data types, including nested/non-regular ones.

For instance,

```
data Tree a = Leaf a | Branch (Tree a) (Tree a)
  deriving (Functor, Foldable, Traversable)
```

We can now specialize this general `add` back to lists:

```
addLS ∷ [(Bool,Bool)] → State Bool [Bool]
addLS = add
```
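Here is a runnable, self-contained ASCII sketch of this list specialization, with a minimal inline `State` (just `Functor` and `Applicative`, which is all `traverse` needs) standing in for the `State` type above:

```haskell
-- A minimal State applicative, standing in for the State type above,
-- so this sketch is self-contained.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))

instance Applicative (State s) where
  pure a = State (\s -> (a, s))
  State f <*> State x =
    State (\s -> let (g, s1) = f s
                     (a, s2) = x s1
                 in (g a, s2))

-- Full adder in State form (ASCII), written applicatively.
addB :: (Bool, Bool) -> State Bool Bool
addB (a, b) = State (\cin -> (axb /= cin, anb || (cin && axb)))
  where
    axb = a /= b
    anb = a && b

-- The whole adder is just a traversal.
add :: Traversable t => t (Bool, Bool) -> State Bool (t Bool)
add = traverse addB

-- 3 + 1 on two bits, LSB first: result 0 with carry out (i.e. 4).
example :: ([Bool], Bool)
example = runState (add [(True, True), (True, False)]) False
-- = ([False, False], True)
```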

We can also specialize for trees:

```
addTS ∷ Tree (Bool,Bool) → State Bool (Tree Bool)
addTS = add
```

Or for depth-typed perfect trees (e.g., as described in *From tries to trees*):

```
addTnS ∷ IsNat n ⇒
         T n (Bool,Bool) → State Bool (T n Bool)
addTnS = add
```

Binary trees are often better than lists for parallelism, because they allow quick recursive splitting and joining. In the case of ripple adders, we don’t really get parallelism, however, because of the single-threaded (linear) nature of `State`. Can we get around this unfortunate linearization?

The linearity of carry propagation interferes with parallel execution even when using a tree representation. The problem is that each `addB` (full adder) invocation must access the carry out from the previous (immediately less significant) bit position and so must wait for that carry to be computed. Since each bit addition must wait for the previous one to finish, we get linear running time, even with unlimited parallel processing available. If we didn’t have to wait for carries, we could instead get logarithmic running time using the tree representation, since subtrees could be added in parallel.

A way out of this dilemma is to speculatively compute the bit sums for *both* possibilities, i.e., for carry and no carry. We’ll do more work, but much less waiting.

Recall the `State` definition:

`newtype State s a = State (s → (a,s))`

Rather than using a *function* of `s`, let’s use a *table* indexed by `s`. Since `s` is `Bool` in our use, a table is simply a uniform pair, so we could replace `State Bool a` with the following:

`newtype BoolStateTable a = BST ((a,Bool), (a,Bool))`

*Exercise:* define `Functor`, `Applicative`, and `Monad` instances for `BoolStateTable`.

Rather than defining such a specialized type, let’s stand back and consider what’s going on. We’re replacing a function by an isomorphic data type. This replacement is exactly what memoization is about. So let’s define a general *memoizing state monad*:

`newtype StateTrie s a = StateTrie (s ⇰ (a,s))`

Note that the definition of memoizing state is nearly identical to `State`. I’ve simply replaced “`→`” by “`⇰`”, i.e., memo tries. For the (simple) source code of `StateTrie`, see the github project. (Poking around on Hackage, I just found monad-memo, which looks related.)

The full-adder function `addB` is restricted to `State`, but unnecessarily so. The most general type is inferred as

`addB ∷ MonadState Bool m ⇒ (Bool,Bool) → m Bool`

where the `MonadState` class comes from the mtl package.

With the type-generalized `addB`, we get a more general type for `add` as well:

```
add ∷ (Traversable t, Applicative m, MonadState Bool m) ⇒
      t (Bool,Bool) → m (t Bool)
add = traverse addB
```

Now we can specialize `add` to work with memoized state:

```
addLM ∷ [(Bool,Bool)] → StateTrie Bool [Bool]
addLM = add
addTM ∷ Tree (Bool,Bool) → StateTrie Bool (Tree Bool)
addTM = add
```

The essential tricks in this post are to (a) boost parallelism by speculative evaluation (an old idea) and (b) express speculation as memoization (new, to me at least). The technique wins for binary addition thanks to the small number of possible states, which then makes memoization (full speculation) affordable.

I’m not suggesting that the code above has impressive parallel execution when compiled under GHC. Perhaps it could with some `par` and `pseq` annotations. I haven’t tried. This exploration helps me understand a little of the space of hardware-oriented algorithms.

The conditional sum adder looks quite similar to the development above. It has the twist, however, of speculating carries on blocks of a few bits rather than single bits. It’s astonishingly easy to adapt the development above for such a hybrid scheme, forming traversable structures of sequences of bits:

```
addH ∷ Tree [(Bool,Bool)] → StateTrie Bool (Tree [Bool])
addH = traverse (fromState ∘ add)
```

I’m using the adapter `fromState` so that the inner list additions will use `State` while the outer tree additions will use `StateTrie`, thanks to type inference. This adapter memoizes and rewraps the state transition function:

```
fromState ∷ HasTrie s ⇒ State s a → StateTrie s a
fromState = StateTrie ∘ trie ∘ runState
```

A few recent posts have played with trees from two perspectives. The more commonly used I call "top-down", because the top-level structure is most immediately apparent. A top-down binary tree is either a leaf or a pair of such trees, and that pair can be accessed without wading through intervening structure. Much less commonly used are "bottom-up" trees. A bottom-up binary tree is either a leaf or a single such tree of pairs. In the non-leaf case, the pair structure of the tree elements is accessible by operations like mapping, folding, or scanning. The difference is between a pair of trees and a tree of pairs.

As an alternative to the top-down and bottom-up views on trees, I now want to examine a third view, which is a hybrid of the two. Instead of pairs of trees or trees of pairs, this hybrid view is of trees of trees, and more specifically of bottom-up trees of top-down trees. As we’ll see, these hybrid trees emerge naturally from the top-down and bottom-up views. A later post will show how this third view lends itself to an *in-place* (destructive) scan algorithm, suitable for execution on modern GPUs.

**Edits:**

- 2011-06-04: "Suppose we have a bottom-up tree of top-down trees, i.e., `t ∷ TB (TT a)`." Was backwards. (Thanks to Noah Easterly.)
- 2011-06-04: Notation: "`f ➶ n`" and "`f ➴ n`".

The post *Parallel tree scanning by composition* defines "top-down" and "bottom-up" binary trees as follows (modulo type and constructor names):

```
data TT a = LT a | BT { unBT ∷ Pair (TT a) } deriving Functor
data TB a = LB a | BB { unBB ∷ TB (Pair a) } deriving Functor
```

So, while a non-leaf `TT` (top-down tree) has a pair at the top (outside), a non-leaf `TB` (bottom-up tree) has pairs at the bottom (inside).

Combining these two observations leads to an interesting possibility. Suppose we have a bottom-up tree of top-down trees, i.e., `t ∷ TB (TT a)`. If `t` is not a leaf, then `t ≡ BB tt`, where `tt` is a bottom-up tree whose leaves are pairs of top-down trees, i.e., `tt ∷ TB (Pair (TT a))`. Each of those leaves of type `Pair (TT a)` can be converted to type `TT a` (single tree), simply by applying the `BT` constructor. Moreover, this transformation is invertible. For convenience, define a type alias for hybrid trees:

`type TH a = TB (TT a)`

Then the two conversions:

```
upT ∷ TH a → TH a
upT = fmap BT ∘ unBB

downT ∷ TH a → TH a
downT = BB ∘ fmap unBT
```

*Exercise:* Prove `upT` and `downT` are inverses where defined.

Answer:

```
upT ∘ downT
≡ fmap BT ∘ unBB ∘ BB ∘ fmap unBT
≡ fmap BT ∘ fmap unBT
≡ fmap (BT ∘ unBT)
≡ fmap id
≡ id

downT ∘ upT
≡ BB ∘ fmap unBT ∘ fmap BT ∘ unBB
≡ BB ∘ fmap (unBT ∘ BT) ∘ unBB
≡ BB ∘ fmap id ∘ unBB
≡ BB ∘ id ∘ unBB
≡ BB ∘ unBB
≡ id
```

Consider a perfect binary leaf tree of depth $n$, i.e., an $n$-deep binary tree with each level full and data only at the leaves (where a leaf is a depth-$0$ tree). We can view such a tree as top-down, or bottom-up, or as a hybrid.

Each of these three views is really $n+1$ views:

- Top-down: a depth $n$ tree, or a pair of depth $n-1$ trees, or a pair of pairs of depth $n-2$ trees, etc.
- Bottom-up: a depth $n$ tree, or a depth $n-1$ tree of pairs, or a depth $n-2$ tree of pairs of pairs, etc.
- Hybrid: a depth $n$ tree of depth $0$ trees, or a depth $n-1$ tree of depth $1$ trees, or, …, or a depth $0$ tree of depth $n$ trees.

In the hybrid case, counting from $0$ to $n$, the ${k}^{th}$ such view is a depth $n-k$ bottom-up tree whose elements (leaf values) are depth $k$ top-down trees. When $k=n$, we have a bottom-up tree whose leaves are all single-leaf trees, and when $k=0$, we have a single-leaf bottom-up tree containing a top-down tree. Imagine a horizontal line at depth $k$, dividing the bottom-up outer structure from the top-down inner structure. The `downT` function moves the dividing line downward, and the `upT` function moves the line upward. Both functions are partial.

The role of `Pair` in the tree types above is simple and regular. We can abstract out this particular type constructor, generalizing to an arbitrary functor. I’ll call this generalization "functor trees". Again, there are top-down and bottom-up versions:

```
data FT f a = FLT a | FBT { unFBT ∷ f (FT f a) } deriving Functor
data FB f a = FLB a | FBB { unFBB ∷ FB f (f a) } deriving Functor
```

And a hybrid version, with generalized versions of `upT` and `downT`:

```
type FH f a = FB f (FT f a)

upH ∷ Functor f ⇒ FH f a → FH f a
upH = fmap FBT ∘ unFBB

downH ∷ Functor f ⇒ FH f a → FH f a
downH = FBB ∘ fmap unFBT
```

These definitions specialize to the ones above (for binary trees) by substituting `Pair` for the parameter `f`.

The upward and downward view-changing functions above are partial, as they can fail at extreme tree views (at depth $0$ or $n$). We could make this partiality explicit by changing the result type to `Maybe (TH a)` for binary hybrid trees and to `Maybe (FH f a)` for the functor generalization. Alternatively, make the tree sizes *explicit* in the types, as in a few recent posts, including *A trie for length-typed vectors*. (In those posts, I used the terms "right-folded" and "left-folded" in place of "top-down" and "bottom-up", reflecting the right- or left-folding of functor composition. The "folded" terms led to some confusion, especially in the context of data type folds and scans.) In the depth-typed versions, "leaves" are zero-ary compositions, and "branches" are $(m+1)$-ary compositions for some $m$.

Top-down:

```
data (➴) ∷ (* → *) → * → (* → *) where
  ZeroT ∷ a → (f ➴ Z) a
  SuccT ∷ IsNat n ⇒ f ((f ➴ n) a) → (f ➴ S n) a

unZeroT ∷ (f ➴ Z) a → a
unZeroT (ZeroT a) = a

unSuccT ∷ (f ➴ S n) a → f ((f ➴ n) a)
unSuccT (SuccT fsa) = fsa

instance Functor f ⇒ Functor (f ➴ n) where
  fmap h (ZeroT a) = ZeroT (h a)
  fmap h (SuccT fs) = SuccT ((fmap∘fmap) h fs)
```

Bottom-up:

```
data (➶) ∷ (* → *) → * → (* → *) where
  ZeroB ∷ a → (f ➶ Z) a
  SuccB ∷ IsNat n ⇒ (f ➶ n) (f a) → (f ➶ S n) a

unZeroB ∷ (f ➶ Z) a → a
unZeroB (ZeroB a) = a

unSuccB ∷ (f ➶ S n) a → (f ➶ n) (f a)
unSuccB (SuccB fsa) = fsa

instance Functor f ⇒ Functor (f ➶ n) where
  fmap h (ZeroB a) = ZeroB (h a)
  fmap h (SuccB fs) = SuccB ((fmap∘fmap) h fs)
```

Hybrid:

`type H p q f a = (f ➶ p) ((f ➴ q) a)`

Upward and downward shift become total functions, and their types explicitly describe how the line shifts between $(p+1)/q$ and $p/(q+1)$:

```
up ∷ (Functor f, IsNat q) ⇒ H (S p) q f a → H p (S q) f a
up = fmap SuccT ∘ unSuccB

down ∷ (Functor f, IsNat p) ⇒ H p (S q) f a → H (S p) q f a
down = SuccB ∘ fmap unSuccT
```

Why care about the multitude of views on trees?

- It’s pretty.
- A future post will show how these hybrid trees enable an elegant formulation of parallel scanning that lends itself to an in-place, GPU-friendly implementation.

My last few blog posts have been on the theme of *scans*, and particularly on *parallel* scans. In *Composable parallel scanning*, I tackled parallel scanning in a very general setting. There are five simple building blocks out of which a vast assortment of data structures can be built, namely constant (no value), identity (one value), sum, product, and composition. The post defined parallel prefix and suffix scan for each of these five "functor combinators", in terms of the same scan operation on each of the component functors. Every functor built out of this basic set thus has a parallel scan. Functors defined more conventionally can be given scan implementations simply by converting to a composition of the basic set, scanning, and then back to the original functor. Moreover, I expect this implementation could be generated automatically, similarly to GHC’s `DeriveFunctor` extension.

Now I’d like to show two examples of parallel scan composition in terms of binary trees, namely the top-down and bottom-up variants of perfect binary leaf trees used in previous posts. (In previous posts, I used the terms "right-folded" and "left-folded" instead of "top-down" and "bottom-up".) The resulting two algorithms are expressed nearly identically, but differ significantly in the work performed. The top-down version does $\Theta(n \log n)$ work, while the bottom-up version does only $\Theta(n)$, and thus the latter algorithm is work-efficient, while the former is not. Moreover, with a *very* simple optimization, the bottom-up tree algorithm corresponds closely to Guy Blelloch’s parallel prefix scan for arrays, given in *Programming parallel algorithms*. I’m delighted with this result, as I had been wondering how to think about Guy’s algorithm.

**Edit:**

- 2011-05-31: Added `Scan` and `Applicative` instances for `T2` and `T4`.

In *Composable parallel scanning*, we saw the `Scan` class:

```
class Scan f where
  prefixScan, suffixScan ∷ Monoid m ⇒ f m → (m, f m)
```

Given a structure of values, the prefix and suffix scan methods generate the overall `fold` (of type `m`), plus a structure of the same type as the input. (In contrast, the usual Haskell `scanl` and `scanr` functions on lists yield a single list with one more element than the source list. I changed the interface for generality and composability.) The post gave instances for the basic set of five functor combinators.
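To make the shape of this interface concrete, here is a small sketch of my own: a list-only model of the two methods (not the post's composable instances), showing that the same-shape output holds exclusive partial folds, under the `Sum` monoid.

```haskell
import Data.Monoid (Sum (..))

-- A list-only model of the Scan interface: each scan returns the
-- total fold plus same-shape exclusive partial folds.
prefixScanL :: Monoid m => [m] -> (m, [m])
prefixScanL ms = (mconcat ms, init (scanl (<>) mempty ms))

suffixScanL :: Monoid m => [m] -> (m, [m])
suffixScanL ms = (mconcat ms, tail (scanr (<>) mempty ms))

examplePrefix :: (Int, [Int])
examplePrefix = let (t, xs) = prefixScanL (map Sum [1, 2, 3])
                in (getSum t, map getSum xs)
-- = (6, [0, 1, 3])

exampleSuffix :: (Int, [Int])
exampleSuffix = let (t, xs) = suffixScanL (map Sum [1, 2, 3])
                in (getSum t, map getSum xs)
-- = (6, [5, 3, 0])
```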

Most functors are not defined via the basic combinators, but as mentioned above, we can scan by conversion to and from the basic set. For convenience, encapsulate this conversion in a type class:

```
class EncodeF f where
  type Enc f ∷ * → *
  encode ∷ f a → Enc f a
  decode ∷ Enc f a → f a
```

and define scan functions via `EncodeF`:

```
prefixScanEnc, suffixScanEnc ∷
  (EncodeF f, Scan (Enc f), Monoid m) ⇒ f m → (m, f m)
prefixScanEnc = second decode ∘ prefixScan ∘ encode
suffixScanEnc = second decode ∘ suffixScan ∘ encode
```

As a first example, consider

```
instance EncodeF [] where
  type Enc [] = Const () + Id × []
  encode []       = InL (Const ())
  encode (a : as) = InR (Id a × as)
  decode (InL (Const ()))  = []
  decode (InR (Id a × as)) = a : as
```

And declare a boilerplate `Scan` instance via `EncodeF`:

```
instance Scan [] where
  prefixScan = prefixScanEnc
  suffixScan = suffixScanEnc
```

I haven’t checked the details, but I think with this instance, suffix scanning has okay performance, while prefix scan does quadratic work. The reason is that in the `Scan` instance for products, the two components are scanned independently (in parallel), and then the whole second component is adjusted for `prefixScan`, while the whole first component is adjusted for `suffixScan`. In the case of lists, the first component is the list head, and the second component is the list tail.

For your reading convenience, here’s that `Scan` instance again:

```
instance (Scan f, Scan g, Functor f, Functor g) ⇒ Scan (f × g) where
  prefixScan (fa × ga) = (af ⊕ ag, fa' × ((af ⊕) <$> ga'))
    where (af,fa') = prefixScan fa
          (ag,ga') = prefixScan ga
  suffixScan (fa × ga) = (af ⊕ ag, ((⊕ ag) <$> fa') × ga')
    where (af,fa') = suffixScan fa
          (ag,ga') = suffixScan ga
```

The lop-sidedness of the list type thus interferes with parallelization, and makes the parallel scans perform much worse than cumulative sequential scans.

Let’s next look at a more balanced type.

We’ll get better parallel performance by organizing our data so that we can cheaply partition it into roughly equal pieces. Tree types allow such partitioning.

We’ll try a few variations, starting with a simple binary tree.

`data T1 a = L1 a | B1 (T1 a) (T1 a) deriving Functor`

Encoding and decoding is straightforward:

```
instance EncodeF T1 where
  type Enc T1 = Id + T1 × T1
  encode (L1 a)   = InL (Id a)
  encode (B1 s t) = InR (s × t)
  decode (InL (Id a))  = L1 a
  decode (InR (s × t)) = B1 s t

instance Scan T1 where
  prefixScan = prefixScanEnc
  suffixScan = suffixScanEnc
```

Note that these definitions could be generated automatically from the data type definition.

For *balanced trees*, prefix and suffix scan divide the problem in half at each step, solve each half, and do linear work to patch up one of the two halves. Letting $n$ be the number of elements, and $W(n)$ the work, we have the recurrence $W(n)=2\phantom{\rule{0.167em}{0ex}}W(n/2)+c\phantom{\rule{0.167em}{0ex}}n$ for some constant factor $c$. By the Master theorem, therefore, the work done is $\Theta (n\phantom{\rule{0.167em}{0ex}}\mathrm{log}\phantom{\rule{0.167em}{0ex}}n)$. (Use case 2, with $a=b=2$, $f(n)=c\phantom{\rule{0.167em}{0ex}}n$, and $k=0$.)
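The same bound can be checked without the Master theorem; unrolling the recurrence for $n = 2^k$ (a sketch, using the same constant $c$ as above):

```latex
W(n) = 2\,W(n/2) + c\,n
     = 4\,W(n/4) + c\,n + c\,n
     = \cdots
     = n\,W(1) + c\,n\,\log_2 n
     = \Theta(n \log n)
```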

Again assuming a *balanced* tree, the computation dependencies have logarithmic depth, so the ideal parallel running time (assuming sufficient processors) is $\Theta (\mathrm{log}n)$. Thus we have an algorithm that is depth-efficient (modulo constant factors) but work-inefficient.

A binary tree as defined above is either a leaf or a pair of binary trees. We can make this pair-ness more explicit with a reformulation:

`data T2 a = L2 a | B2 (Pair (T2 a)) deriving Functor`

where `Pair`, as in *Composable parallel scanning*, is defined as

`data Pair a = a :# a deriving Functor`

or even

`type Pair = Id × Id`

For encoding and decoding, we could use the same representation as with `T1`, but let’s instead use a more natural one for the definition of `T2`:

```
instance EncodeF T2 where
  type Enc T2 = Id + Pair ∘ T2
  encode (L2 a)  = InL (Id a)
  encode (B2 st) = InR (O st)
  decode (InL (Id a)) = L2 a
  decode (InR (O st)) = B2 st
```

Boilerplate scanning:

```
instance Scan T2 where
  prefixScan = prefixScanEnc
  suffixScan = suffixScanEnc
```

for which we’ll need an applicative instance:

```
instance Applicative T2 where
  pure = L2
  L2 f <*> L2 x = L2 (f x)
  B2 (fs :# gs) <*> B2 (xs :# ys) = B2 ((fs <*> xs) :# (gs <*> ys))
  _ <*> _ = error "T2 (<*>): structure mismatch"
```

The `O` constructor is for functor composition.

With a small change to the tree type, we can make the composition of `Pair` and `T` more explicit:

`data T3 a = L3 a | B3 ((Pair ∘ T3) a) deriving Functor`

Then the conversion becomes even simpler, since there’s no need to add or remove `O` wrappers:

```
instance EncodeF T3 where
  type Enc T3 = Id + Pair ∘ T3
  encode (L3 a)  = InL (Id a)
  encode (B3 st) = InR st
  decode (InL (Id a)) = L3 a
  decode (InR st)     = B3 st
```

In the formulations above, a non-leaf tree consists of a pair of trees. I’ll call these trees "top-down", since visible pair structure begins at the top.

With a very small change, we can instead use a tree of pairs:

`data T4 a = L4 a | B4 (T4 (Pair a)) deriving Functor`

Again an applicative instance allows a standard `Scan` instance:

```
instance Scan T4 where
  prefixScan = prefixScanEnc
  suffixScan = suffixScanEnc

instance Applicative T4 where
  pure = L4
  L4 f <*> L4 x = L4 (f x)
  B4 fgs <*> B4 xys = B4 (liftA2 h fgs xys)
    where h (f :# g) (x :# y) = f x :# g y
  _ <*> _ = error "T4 (<*>): structure mismatch"
```

or a more explicitly composed form:

`data T5 a = L5 a | B5 ((T5 ∘ Pair) a) deriving Functor`

I’ll call these new variations "bottom-up" trees, since visible pair structure begins at the bottom. After stripping off the branch constructor, `B4`, we can get at the pair-valued leaves by means of `fmap`, `fold`, or `traverse` (or variations). For `B5`, we’d also have to strip off the `O` wrapper (functor composition).

Encoding is nearly the same as with top-down trees. For instance,

```
instance EncodeF T4 where
  type Enc T4 = Id + T4 ∘ Pair
  encode (L4 a) = InL (Id a)
  encode (B4 t) = InR (O t)
  decode (InL (Id a)) = L4 a
  decode (InR (O t))  = B4 t
```

We’ll need to scan on the `Pair` functor. If we use the definition of `Pair` above in terms of `Id` and `(×)`, then we’ll get scanning for free. For *using* `Pair`, I find the explicit data type definition above more convenient. We can then derive a `Scan` instance by conversion. Start with a standard specification:

`data Pair a = a :# a deriving Functor`

And encode & decode explicitly:

`instance EncodeF Pair where`

type Enc Pair = Id × Id

encode (a :# b) = Id a × Id b

decode (Id a × Id b) = a :# b

Then use our boilerplate `Scan`

instance for `EncodeF`

instances:

`instance Scan Pair where`

prefixScan = prefixScanEnc

suffixScan = suffixScanEnc

We’ve seen the `Scan`

instance for `(×)`

above. The instance for `Id`

is very simple:

`newtype Id a = Id a`

instance Scan Id where

prefixScan (Id m) = (m, Id ∅)

suffixScan = prefixScan

Given these definitions, we can calculate a more streamlined `Scan`

instance for `Pair`

:

` prefixScan (a :# b)`

≡ {- specification -}

prefixScanEnc (a :# b)

≡ {- prefixScanEnc definition -}

(second decode ∘ prefixScan ∘ encode) (a :# b)

≡ {- (∘) -}

second decode (prefixScan (encode (a :# b)))

≡ {- encode definition for Pair -}

second decode (prefixScan (Id a × Id b))

≡ {- prefixScan definition for f × g -}

second decode

(af ⊕ ag, fa' × ((af ⊕) <$> ga'))

where (af,fa') = prefixScan (Id a)

(ag,ga') = prefixScan (Id b)

≡ {- Definition of second on functions -}

(af ⊕ ag, decode (fa' × ((af ⊕) <$> ga')))

where (af,fa') = prefixScan (Id a)

(ag,ga') = prefixScan (Id b)

≡ {- prefixScan definition for Id -}

(af ⊕ ag, decode (fa' × ((af ⊕) <$> ga')))

where (af,fa') = (a, Id ∅)

(ag,ga') = (b, Id ∅)

≡ {- substitution -}

(a ⊕ b, decode (Id ∅ × ((a ⊕) <$> Id ∅)))

≡ {- fmap/(<$>) for Id -}

(a ⊕ b, decode (Id ∅ × Id (a ⊕ ∅)))

≡ {- Monoid law -}

(a ⊕ b, decode (Id ∅ × Id a))

≡ {- decode definition on Pair -}

(a ⊕ b, (∅ :# a))

Whew! And similarly for `suffixScan`

.

Now let’s recall the `Scan`

instance for `Pair`

given in *Composable parallel scanning*:

`instance Scan Pair where`

prefixScan (a :# b) = (a ⊕ b, (∅ :# a))

suffixScan (a :# b) = (a ⊕ b, (b :# ∅))

Hurray! The derivation led us to the same definition. A "sufficiently smart" compiler could do this derivation automatically.
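As a quick sanity check of the streamlined definitions, here is a hand-specialized sketch using the `Sum` monoid from `Data.Monoid` (standard `<>`/`mempty` stand in for the post's `⊕`/`∅`; the function names are mine):

```haskell
import Data.Monoid (Sum (..))

-- The Pair type and the derived scan definitions, specialized by hand.
data Pair a = a :# a deriving (Show, Eq)

prefixScanP :: Monoid m => Pair m -> (m, Pair m)
prefixScanP (a :# b) = (a <> b, mempty :# a)

suffixScanP :: Monoid m => Pair m -> (m, Pair m)
suffixScanP (a :# b) = (a <> b, b :# mempty)

main :: IO ()
main = do
  print (prefixScanP (Sum (3 :: Int) :# Sum 4))  -- total 7; prefixes 0 and 3
  print (suffixScanP (Sum (3 :: Int) :# Sum 4))  -- total 7; suffixes 4 and 0
```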

With this warm-up derivation, let’s now turn to trees.

Given the tree encodings above, how does scan work? We’ll have to consult `Scan`

instances for some of the functor combinators. The product instance is repeated above. We’ll also want the instances for sum and composition. Omitting the `suffixScan`

definitions for brevity:

`data (f + g) a = InL (f a) | InR (g a)`

instance (Scan f, Scan g) ⇒ Scan (f + g) where

prefixScan (InL fa) = second InL (prefixScan fa)

prefixScan (InR ga) = second InR (prefixScan ga)

newtype (g ∘ f) a = O (g (f a))

instance (Scan g, Scan f, Functor f, Applicative g) ⇒ Scan (g ∘ f) where

prefixScan = second (O ∘ fmap adjustL ∘ zip)

∘ assocR

∘ first prefixScan

∘ unzip

∘ fmap prefixScan

∘ unO

This last definition uses a few utility functions:

`zip ∷ Applicative g ⇒ (g a, g b) → g (a,b)`

zip = uncurry (liftA2 (,))

unzip ∷ Functor g ⇒ g (a,b) → (g a, g b)

unzip = fmap fst &&& fmap snd

assocR ∷ ((a,b),c) → (a,(b,c))

assocR ((a,b),c) = (a,(b,c))

adjustL ∷ (Functor f, Monoid m) ⇒ (m, f m) → f m

adjustL (m, ms) = (m ⊕) <$> ms

Let’s consider how the `Scan (g ∘ f)`

instance plays out for top-down vs bottom-up trees, given the functor-composition encodings above. The critical definitions:

`type Enc T2 = Id + Pair ∘ T2`

type Enc T4 = Id + T4 ∘ Pair

Focusing on the branch case, we have `Pair ∘ T2`

vs `T4 ∘ Pair`

, so we’ll use the `Scan (g ∘ f)`

instance either way. Let’s consider the work implied by that instance. There are two calls to `prefixScan`

, plus a linear amount of other work. The meanings of those two calls differ, however:

- For top-down trees (`T2`), the recursive tree scans are in `fmap prefixScan`, mapping over the pair of trees. The `first prefixScan` is a pair scan and so does constant work. Since there are two recursive calls, each working on a tree of half size (assuming balance), plus linear other work, the total work is $\Theta(n \log n)$, as explained above.
- For bottom-up trees (`T4`), there is only one recursive tree scan, which appears in `first prefixScan`. The `prefixScan` in `fmap prefixScan` is a pair scan and so does constant work, but it is mapped over the half-sized tree (of pairs), and so does linear work altogether. Since there is only one recursive tree scan, at half size, plus linear other work, the total work is proportional to $n + n/2 + n/4 + \dots \approx 2n = \Theta(n)$. So we have a work-efficient algorithm!

In addition to the simple analysis above of scanning over top-down and bottom-up trees, let’s look in detail at what transpires and how each case can be optimized. This section may well have more detail than you’re interested in. If so, feel free to skip ahead.

Beginning as with `Pair`

,

` prefixScan t`

≡ {- specification -}

prefixScanEnc t

≡ {- prefixScanEnc definition -}

(second decode ∘ prefixScan ∘ encode) t

≡ {- (∘) -}

second decode (prefixScan (encode t))

Take `T2`

, with `T3`

being quite similar. Now split into two cases for the two constructors of `T2`

. First leaf:

` prefixScan (L2 m)`

≡ {- as above -}

second decode (prefixScan (encode (L2 m)))

≡ {- encode for L2 -}

second decode (prefixScan (InL (Id m)))

≡ {- prefixScan for functor sum -}

second decode (second InL (prefixScan (Id m)))

≡ {- prefixScan for Id -}

second decode (second InL (m, Id ∅))

≡ {- second for functions -}

second decode (m, InL (Id ∅))

≡ {- second for functions -}

(m, decode (InL (Id ∅)))

≡ {- decode for L2 -}

(m, L2 ∅)

Then branch:

` prefixScan (B2 (s :# t))`

≡ {- as above -}

second decode (prefixScan (encode (B2 (s :# t))))

≡ {- encode for B2 -}

second decode (prefixScan (InR (O (s :# t))))

≡ {- prefixScan for (+) -}

second decode (second InR (prefixScan (O (s :# t))))

≡ {- property of second -}

second (decode ∘ InR) (prefixScan (O (s :# t)))

Focus on the `prefixScan`

application:

` prefixScan (O (s :# t))`

≡ {- prefixScan for (∘) -}

( second (O ∘ fmap adjustL ∘ zip) ∘ assocR ∘ first prefixScan

∘ unzip ∘ fmap prefixScan ∘ unO ) (O (s :# t))

≡ {- unO/O -}

( second (O ∘ fmap adjustL ∘ zip) ∘ assocR ∘ first prefixScan

∘ unzip ∘ fmap prefixScan ) (s :# t)

≡ {- fmap on Pair -}

(second (O ∘ fmap adjustL ∘ zip) ∘ assocR ∘ first prefixScan ∘ unzip)

(prefixScan s :# prefixScan t)

≡ {- expand prefixScan -}

(second (O ∘ fmap adjustL ∘ zip) ∘ assocR ∘ first prefixScan ∘ unzip)

((ms,s') :# (mt,t'))

where (ms,s') = prefixScan s

(mt,t') = prefixScan t

≡ {- unzip -}

(second (O ∘ fmap adjustL ∘ zip) ∘ assocR ∘ first prefixScan)

((ms :# mt), (s' :# t')) where ⋯

≡ {- first -}

(second (O ∘ fmap adjustL ∘ zip) ∘ assocR)

(prefixScan (ms :# mt), (s' :# t')) where ⋯

≡ {- prefixScan for Pair -}

(second (O ∘ fmap adjustL ∘ zip) ∘ assocR)

((ms ⊕ mt, (∅ :# ms)), (s' :# t')) where ⋯

≡ {- assocR -}

(second (O ∘ fmap adjustL ∘ zip))

(ms ⊕ mt, ((∅ :# ms), (s' :# t'))) where ⋯

≡ {- second -}

( ms ⊕ mt

, (O ∘ fmap adjustL ∘ zip) ((∅ :# ms), (s' :# t')) ) where ⋯

≡ {- zip -}

( ms ⊕ mt

, (O ∘ fmap adjustL) ((∅,s') :# (ms,t')) ) where ⋯

≡ {- fmap for Pair -}

( ms ⊕ mt

, O (adjustL (∅,s') :# adjustL (ms,t')) ) where ⋯

≡ {- adjustL -}

( ms ⊕ mt

, O (((∅ ⊕) <$> s') :# ((ms ⊕) <$> t')) ) where ⋯

≡ {- Monoid law (left identity) -}

( ms ⊕ mt

, O ((id <$> s') :# ((ms ⊕) <$> t')) ) where ⋯

≡ {- Functor law (fmap id) -}

( ms ⊕ mt

, O (s' :# ((ms ⊕) <$> t')) )

where (ms,s') = prefixScan s

(mt,t') = prefixScan t

Continuing from above,

` prefixScan (B2 (s :# t))`

≡ {- see above -}

second (decode ∘ InR) (prefixScan (O (s :# t)))

≡ {- prefixScan focus from above -}

second (decode ∘ InR)

( ms ⊕ mt

, O (s' :# ((ms ⊕) <$> t')) )

where (ms,s') = prefixScan s

(mt,t') = prefixScan t

≡ {- definition of second on functions -}

(ms ⊕ mt, (decode ∘ InR) (O (s' :# ((ms ⊕) <$> t')))) where ⋯

≡ {- (∘) -}

(ms ⊕ mt, decode (InR (O (s' :# ((ms ⊕) <$> t'))))) where ⋯

≡ {- decode for B2 -}

(ms ⊕ mt, B2 (s' :# ((ms ⊕) <$> t'))) where ⋯

This final form is as in *Deriving parallel tree scans*, changed for the new scan interface. The derivation saved some work in wrapping & unwrapping and method invocation, plus one of the two adjustment passes over the sub-trees. As explained above, this algorithm performs $\Theta(n \log n)$ work.

I’ll leave `suffixScan`

for you to do yourself.

What happens if we switch from top-down to bottom-up binary trees? I’ll use `T4`

(though `T5`

would work as well):

`data T4 a = L4 a | B4 (T4 (Pair a))`

The leaf case is just as with `T2`

above, so let’s get right to branches.

` prefixScan (B4 t)`

≡ {- as above -}

second decode (prefixScan (encode (B4 t)))

≡ {- encode for B4 -}

second decode (prefixScan (InR (O t)))

≡ {- prefixScan for (+) -}

second decode (second InR (prefixScan (O t)))

≡ {- property of second -}

second (decode ∘ InR) (prefixScan (O t))

As before, now focus on the `prefixScan`

call.

` prefixScan (O t)`

≡ {- prefixScan for (∘) -}

( second (O ∘ fmap adjustL ∘ zip) ∘ assocR ∘ first prefixScan

∘ unzip ∘ fmap prefixScan ∘ unO ) (O t)

≡ {- unO/O -}

( second (O ∘ fmap adjustL ∘ zip) ∘ assocR ∘ first prefixScan

∘ unzip ∘ fmap prefixScan ) t

≡ {- prefixScan on Pair (derived above) -}

(second (O ∘ fmap adjustL ∘ zip) ∘ assocR ∘ first prefixScan ∘ unzip)

(fmap (λ (a :# b) → (a ⊕ b, (∅ :# a))) t)

≡ {- unzip/fmap -}

(second (O ∘ fmap adjustL ∘ zip) ∘ assocR ∘ first prefixScan)

( fmap (λ (a :# b) → (a ⊕ b)) t

, fmap (λ (a :# b) → (∅ :# a)) t )

≡ {- first on functions -}

(second (O ∘ fmap adjustL ∘ zip) ∘ assocR)

( prefixScan (fmap (λ (a :# b) → (a ⊕ b)) t)

, fmap (λ (a :# b) → (∅ :# a)) t )

≡ {- expand prefixScan -}

(second (O ∘ fmap adjustL ∘ zip) ∘ assocR)

((mp,p'), fmap (λ (a :# b) → (∅ :# a)) t)

where (mp,p') = prefixScan (fmap (λ (a :# b) → (a ⊕ b)) t)

≡ {- assocR -}

(second (O ∘ fmap adjustL ∘ zip))

(mp, (p', fmap (λ (a :# b) → (∅ :# a)) t))

where ⋯

≡ {- second on functions -}

(mp, (O ∘ fmap adjustL ∘ zip) (p', fmap (λ (a :# b) → (∅ :# a)) t))

where ⋯

≡ {- fmap/zip/fmap -}

(mp, O (liftA2 tweak p' t))

where tweak s (a :# _) = adjustL (s, (∅ :# a))

(mp,p') = prefixScan (fmap (λ (a :# b) → (a ⊕ b)) t)

≡ {- adjustL, then simplify -}

(mp, O (liftA2 tweak p' t))

where tweak s (a :# _) = (s :# s ⊕ a)

(mp,p') = prefixScan (fmap (λ (a :# b) → (a ⊕ b)) t)

Now re-introduce the context of `prefixScan (O t)`

:

` prefixScan (B4 t)`

≡ {- see above -}

second (decode ∘ InR) (prefixScan (O t))

≡ {- see above -}

second (decode ∘ InR)

(mp, O (liftA2 tweak p' t))

where ⋯

≡ {- decode for T4 -}

(mp, B4 (liftA2 tweak p' t))

where p = fmap (λ (e :# o) → (e ⊕ o)) t

(mp,p') = prefixScan p

tweak s (e :# _) = (s :# s ⊕ e)

Notice how much this bottom-up tree scan algorithm differs from the top-down algorithm derived above. In particular, there’s only one recursive tree scan (on a half-sized tree) instead of two, plus linear additional work, for a total of $\Theta (n)$ work.

In *Programming parallel algorithms*, Guy Blelloch gives the following algorithm for parallel prefix scan, expressed in the parallel functional language NESL:

`function scan(a) =`

if #a ≡ 1 then [0]

else

let es = even_elts(a);

os = odd_elts(a);

ss = scan({e+o: e in es; o in os})

in interleave(ss,{s+e: s in ss; e in es})

This algorithm is nearly identical to the `T4`

scan algorithm above. I was very glad to find this route to Guy’s algorithm, which had been fairly mysterious to me. I mean, I could believe that the algorithm worked, but I had no idea how I might have discovered it myself. With the functor composition approach to scanning, I now see how Guy’s algorithm emerges as well as how it generalizes to other data structures.

Most of the recursive algebraic data types that appear in Haskell programs are *regular*, meaning that the recursive instances are instantiated with the same type parameter as the containing type. For instance, a top-down tree of elements of type `a`

is either a leaf or has two trees whose elements have that same type `a`

. In contrast, in a bottom-up tree, the (single) recursively contained tree is over elements of type `(a,a)`

. Such non-regular data types are called "nested". The two tree scan algorithms above suggest to me that nested data types are particularly useful for efficient parallel algorithms.

The post *Deriving list scans* gave a simple specification of the list-scanning functions `scanl`

and `scanr`

, and then transformed those specifications into the standard optimized implementations. Next, the post *Deriving parallel tree scans* adapted the specifications and derivations to a type of binary trees. The resulting implementations are parallel-friendly, but not work-efficient, in that they perform $n \log n$ work vs linear work as in the best-known sequential algorithm.

Besides the work-inefficiency, I don’t know how to extend the critical `initTs`

and `tailTs`

functions (analogs of `inits`

and `tails`

on lists) to depth-typed, perfectly balanced trees, of the sort I played with in *A trie for length-typed vectors* and *From tries to trees*. The difficulty I encounter is that the functions `initTs`

and `tailTs`

make unbalanced trees out of balanced ones, so I don’t know how to adapt the specifications when types prevent the existence of unbalanced trees.

This new post explores an approach to generalized scanning via type classes. After defining the classes and giving a simple example, I’ll give a simple & general framework based on composing functor combinators.

**Edits:**

- 2011-03-02: Fixed typo. "constant functor is easiest" (instead of "identity functor"). Thanks, frguybob.
- 2011-03-05: Removed final unfinished sentence.
- 2011-07-28: Replaced "`assocL`" with "`assocR`" in the `prefixScan` derivation for `g ∘ f`.

The left and right scan functions on lists have an awkward feature. The output list has one more element than the input list, corresponding to the fact that the number of prefixes (`inits`

) of a list is one more than the number of elements, and similarly for suffixes (`tails`

).

While it’s easy to extend a list by adding one more element, it’s not easy with other functors. In *Deriving parallel tree scans*, I simply removed the `∅`

element from the scan. In this post, I’ll instead change the interface to produce an output of exactly the same shape, plus one extra element. The extra element will equal a `fold`

over the complete input. If you recall, we had to search for that complete fold in an input subtree in order to adjust the other subtree. (See `headT`

and `lastT`

and their generalizations in *Deriving parallel tree scans*.) Separating out this value eliminates the search.

Define a class with methods for prefix and suffix scan:

`class Scan f where`

prefixScan, suffixScan ∷ Monoid m ⇒ f m → (m, f m)

Prefix scans (`prefixScan`

) accumulate moving left-to-right, while suffix scans (`suffixScan`

) accumulate moving right-to-left.
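For intuition, here is what (hypothetical) list versions of these methods might look like, written as plain functions rather than an instance, with standard `<>`/`mempty` for the post's `⊕`/`∅`. The second component has exactly the same length as the input, and the first component is the fold of the whole input:

```haskell
import Data.Monoid (Sum (..))

-- Hypothetical list versions of prefixScan and suffixScan.
-- scanl produces n+1 prefixes; dropping the last one leaves n
-- elements, matching the input shape.  Dually for scanr.
prefixScanL, suffixScanL :: Monoid m => [m] -> (m, [m])
prefixScanL ms = (mconcat ms, init (scanl (<>) mempty ms))
suffixScanL ms = (mconcat ms, tail (scanr (<>) mempty ms))

main :: IO ()
main = do
  print (prefixScanL (map Sum [1, 2, 3 :: Int]))  -- total 6; prefixes 0,1,3
  print (suffixScanL (map Sum [1, 2, 3 :: Int]))  -- total 6; suffixes 5,3,0
```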

To get a first sense of generalized scans, let’s see how to scan over a pair functor.

`data Pair a = a :# a deriving (Eq,Ord,Show)`

With GHC’s `DeriveFunctor`

option, we could also derive a `Functor`

instance, but for clarity, define it explicitly:

`instance Functor Pair where`

fmap f (a :# b) = (f a :# f b)

The scans:

`instance Scan Pair where`

prefixScan (a :# b) = (a ⊕ b, (∅ :# a))

suffixScan (a :# b) = (a ⊕ b, (b :# ∅))

As you can see, if we eliminated the `∅`

elements, we could shift to the left or right and forgo the extra result.

Naturally, there is also a `Foldable`

instance, and the scans produce the fold results as well as sub-folds:

`instance Foldable Pair where`

fold (a :# b) = a ⊕ b

The `Pair`

functor also has unsurprising instances for `Applicative`

and `Traversable`

.

`instance Applicative Pair where`

pure a = a :# a

(f :# g) <*> (x :# y) = (f x :# g y)

instance Traversable Pair where

sequenceA (fa :# fb) = (:#) <$> fa <*> fb

We don’t really have to figure out how to define scans for every functor separately. We can instead look at how functors are composed out of their essential building blocks.

To see how to scan over a broad range of functors, let’s look at each of the functor combinators, e.g., as in *Elegant memoization with higher-order types*.

The constant functor is easiest.

`newtype Const x a = Const x`

There are no values to accumulate, so the final result (fold) is `∅`

.

`instance Scan (Const x) where`

prefixScan (Const x) = (∅, Const x)

suffixScan = prefixScan

The identity functor is nearly as easy.

`newtype Id a = Id a`

`instance Scan Id where`

prefixScan (Id m) = (m, Id ∅)

suffixScan = prefixScan

Scanning in a sum is just scanning in a summand:

`data (f + g) a = InL (f a) | InR (g a)`

`instance (Scan f, Scan g) ⇒ Scan (f + g) where`

prefixScan (InL fa) = second InL (prefixScan fa)

prefixScan (InR ga) = second InR (prefixScan ga)

suffixScan (InL fa) = second InL (suffixScan fa)

suffixScan (InR ga) = second InR (suffixScan ga)

These definitions correspond to simple "commutative diagram" properties, e.g.,

`prefixScan ∘ InL ≡ second InL ∘ prefixScan`

Product scanning is a little trickier.

`data (f × g) a = f a × g a`

Scan each of the two parts separately, and then combine the final (`fold`

) part of one result with each of the non-final elements of the other.

`instance (Scan f, Scan g, Functor f, Functor g) ⇒ Scan (f × g) where`

prefixScan (fa × ga) = (af ⊕ ag, fa' × ((af ⊕) <$> ga'))

where (af,fa') = prefixScan fa

(ag,ga') = prefixScan ga

suffixScan (fa × ga) = (af ⊕ ag, ((⊕ ag) <$> fa') × ga')

where (af,fa') = suffixScan fa

(ag,ga') = suffixScan ga
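To see the product instance in action, here is a hand-specialized sketch (function names mine) in which both components are pairs, so the product holds four elements. The right half's results are adjusted by the left half's total:

```haskell
import Data.Monoid (Sum (..))

-- Component scan: a pair, as derived in the post.
prefixScanPair :: Monoid m => (m, m) -> (m, (m, m))
prefixScanPair (a, b) = (a <> b, (mempty, a))

-- Product of two pairs: scan each half, then shift the right half's
-- sub-results by the left half's total (the prefix case).
prefixScanProd :: Monoid m => ((m, m), (m, m)) -> (m, ((m, m), (m, m)))
prefixScanProd (fa, ga) = (af <> ag, (fa', mapBoth (af <>) ga'))
  where
    (af, fa') = prefixScanPair fa
    (ag, ga') = prefixScanPair ga
    mapBoth f (x, y) = (f x, f y)  -- stands in for fmap on the right half

main :: IO ()
main = print (prefixScanProd ((Sum 1, Sum 2), (Sum (3 :: Int), Sum 4)))
```

The result is the total 10 together with the exclusive prefix sums 0, 1, 3, 6 of the four elements, distributed across the two halves.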

Finally, composition is the trickiest.

`newtype (g ∘ f) a = O (g (f a))`

The target signatures:

` prefixScan, suffixScan ∷ Monoid m ⇒ (g ∘ f) m → (m, (g ∘ f) m)`

To find the prefix and suffix scan definitions, fiddle with types beginning at the domain type for `prefixScan`

or `suffixScan`

and arriving at the range type.

Some helpers:

`zip ∷ Applicative g ⇒ (g a, g b) → g (a,b)`

zip = uncurry (liftA2 (,))

unzip ∷ Functor g ⇒ g (a,b) → (g a, g b)

unzip = fmap fst &&& fmap snd

`assocR ∷ ((a,b),c) → (a,(b,c))`

assocR ((a,b),c) = (a,(b,c))

`adjustL ∷ (Functor f, Monoid m) ⇒ (m, f m) → f m`

adjustL (m, ms) = (m ⊕) <$> ms

adjustR ∷ (Functor f, Monoid m) ⇒ (m, f m) → f m

adjustR (m, ms) = (⊕ m) <$> ms

First `prefixScan`

:

`gofm ∷ (g ∘ f) m`

unO '' ∷ g (f m)

fmap prefixScan '' ∷ g (m, f m)

unzip '' ∷ (g m, g (f m))

first prefixScan '' ∷ ((m, g m), g (f m))

assocR '' ∷ (m, (g m, g (f m)))

second zip '' ∷ (m, g (m, f m))

second (fmap adjustL) '' ∷ (m, g (f m))

second O '' ∷ (m, (g ∘ f) m)

Then `suffixScan`

:

`gofm ∷ (g ∘ f) m`

unO '' ∷ g (f m)

fmap suffixScan '' ∷ g (m, f m)

unzip '' ∷ (g m, g (f m))

first suffixScan '' ∷ ((m, g m), g (f m))

assocR '' ∷ (m, (g m, g (f m)))

second zip '' ∷ (m, g (m, f m))

second (fmap adjustR) '' ∷ (m, g (f m))

second O '' ∷ (m, (g ∘ f) m)

Putting together the pieces and simplifying just a bit leads to the method definitions:

`instance (Scan g, Scan f, Functor f, Applicative g) ⇒ Scan (g ∘ f) where`

prefixScan = second (O ∘ fmap adjustL ∘ zip)

∘ assocR

∘ first prefixScan

∘ unzip

∘ fmap prefixScan

∘ unO

suffixScan = second (O ∘ fmap adjustR ∘ zip)

∘ assocR

∘ first suffixScan

∘ unzip

∘ fmap suffixScan

∘ unO

- What might not be easy to spot at this point is that the `prefixScan` and `suffixScan` methods given in this post do essentially the same job as in *Deriving parallel tree scans*, when the binary tree type is deconstructed into functor combinators. A future post will show this connection.
- Switch from standard (right-folded) trees to left-folded trees (in the sense of *A trie for length-typed vectors* and *From tries to trees*), which reduces the running time from $\Theta(n \log n)$ to $\Theta(n)$.
- Scanning in place, i.e., destructively replacing the values in the input structure rather than allocating a new structure.

The post *Deriving list scans* explored folds and scans on lists and showed how the usual, efficient scan implementations can be derived from simpler specifications.

Let’s see now how to apply the same techniques to scans over trees.

This new post is one of a series leading toward algorithms optimized for execution on massively parallel, consumer hardware, using CUDA or OpenCL.

**Edits:**

- 2011-03-01: Added clarification about "`∅`" and "`(⊕)`".
- 2011-03-23: Corrected "linear-time" to "linear-work" in two places.

Our trees will be non-empty and binary:

`data T a = Leaf a | Branch (T a) (T a)`

instance Show a ⇒ Show (T a) where

show (Leaf a) = show a

show (Branch s t) = "("++show s++","++show t++")"

Nothing surprising in the instances:

`instance Functor T where`

fmap f (Leaf a) = Leaf (f a)

fmap f (Branch s t) = Branch (fmap f s) (fmap f t)

instance Foldable T where

fold (Leaf a) = a

fold (Branch s t) = fold s ⊕ fold t

instance Traversable T where

sequenceA (Leaf a) = fmap Leaf a

sequenceA (Branch s t) =

liftA2 Branch (sequenceA s) (sequenceA t)

BTW, my type-setting software uses "`∅`

" and "`(⊕)`

" for Haskell’s "mempty" and "mappend".

Also handy will be extracting the first and last (i.e., leftmost and rightmost) leaves in a tree:

`headT ∷ T a → a`

headT (Leaf a) = a

headT (s `Branch` _) = headT s

lastT ∷ T a → a

lastT (Leaf a) = a

lastT (_ `Branch` t) = lastT t

*Exercise:* Prove that

`headT ∘ fmap f ≡ f ∘ headT`

lastT ∘ fmap f ≡ f ∘ lastT

Answer:

Consider the `Leaf`

and `Branch`

cases separately:

` headT (fmap f (Leaf a))`

≡ {- fmap on T -}

headT (Leaf (f a))

≡ {- headT def -}

f a

≡ {- headT def -}

f (headT (Leaf a))

` headT (fmap f (Branch s t))`

≡ {- fmap on T -}

headT (Branch (fmap f s) (fmap f t))

≡ {- headT def -}

headT (fmap f s)

≡ {- induction -}

f (headT s)

≡ {- headT def -}

f (headT (Branch s t))

Similarly for `lastT`

.

We can flatten trees into lists:

`flatten ∷ T a → [a]`

flatten = fold ∘ fmap (:[])

Equivalently, using `foldMap`

:

`flatten = foldMap (:[])`

Alternatively, we could define `fold`

via `flatten`

:

`instance Foldable T where fold = fold ∘ flatten`

`flatten ∷ T a → [a]`

flatten (Leaf a) = [a]

flatten (Branch s t) = flatten s ++ flatten t

We can also "unflatten" lists into balanced trees:

`unflatten ∷ [a] → T a`

unflatten [] = error "unflatten: Oops! Empty list"

unflatten [a] = Leaf a

unflatten xs = Branch (unflatten prefix) (unflatten suffix)

where

(prefix,suffix) = splitAt (length xs `div` 2) xs

Both `flatten`

and `unflatten`

can be implemented more efficiently.
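For instance, `flatten` as written concatenates with `(++)` at every branch, which can cost quadratic time on left-nested trees. A linear-time version (a sketch; `flattenFast` is my name) builds a difference list so each leaf is consed exactly once:

```haskell
data T a = Leaf a | Branch (T a) (T a)

-- Linear-time flatten: build a function [a] -> [a] and apply it to [],
-- so each leaf is consed exactly once and no lists are recopied.
flattenFast :: T a -> [a]
flattenFast t = go t []
  where
    go (Leaf a)     = (a :)
    go (Branch s u) = go s . go u

main :: IO ()
main = print (flattenFast (Branch (Leaf 1) (Branch (Leaf 2) (Leaf (3 :: Int)))))
```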

For instance,

`t1,t2 ∷ T Int`

t1 = unflatten [1‥3]

t2 = unflatten [1‥16]

`*T> t1`

(1,(2,3))

*T> t2

((((1,2),(3,4)),((5,6),(7,8))),(((9,10),(11,12)),((13,14),(15,16))))

The post *Deriving list scans* gave specifications for list scanning in terms of `inits`

and `tails`

. One consequence of this specification is that the output of scanning has one more element than the input. Alternatively, we could use non-empty variants of `inits`

and `tails`

, so that the input & output are in one-to-one correspondence.

`inits' ∷ [a] → [[a]]`

inits' [] = []

inits' (x:xs) = map (x:) ([] : inits' xs)

The cons case can also be written as

`inits' (x:xs) = [x] : map (x:) (inits' xs)`

`tails' ∷ [a] → [[a]]`

tails' [] = []

tails' xs@(_:xs') = xs : tails' xs'

For instance,

`*T> inits' "abcd"`

["a","ab","abc","abcd"]

*T> tails' "abcd"

["abcd","bcd","cd","d"]

Our tree functor has a symmetric definition, so we get more symmetry in the counterparts to `inits'`

and `tails'`

:

`initTs ∷ T a → T (T a)`

initTs (Leaf a) = Leaf (Leaf a)

initTs (s `Branch` t) =

Branch (initTs s) (fmap (s `Branch`) (initTs t))

tailTs ∷ T a → T (T a)

tailTs (Leaf a) = Leaf (Leaf a)

tailTs (s `Branch` t) =

Branch (fmap (`Branch` t) (tailTs s)) (tailTs t)

Try it:

`*T> t1`

(1,(2,3))

*T> initTs t1

(1,((1,2),(1,(2,3))))

*T> tailTs t1

((1,(2,3)),((2,3),3))

*T> unflatten [1‥5]

((1,2),(3,(4,5)))

*T> initTs (unflatten [1‥5])

((1,(1,2)),(((1,2),3),(((1,2),(3,4)),((1,2),(3,(4,5))))))

*T> tailTs (unflatten [1‥5])

((((1,2),(3,(4,5))),(2,(3,(4,5)))),((3,(4,5)),((4,5),5)))

*Exercise:* Prove that

`lastT ∘ initTs ≡ id`

headT ∘ tailTs ≡ id

Answer:

` lastT (initTs (Leaf a))`

≡ {- initTs def -}

lastT (Leaf (Leaf a))

≡ {- lastT def -}

Leaf a

lastT (initTs (s `Branch` t))

≡ {- initTs def -}

lastT (Branch (⋯) (fmap (s `Branch`) (initTs t)))

≡ {- lastT def -}

lastT (fmap (s `Branch`) (initTs t))

≡ {- lastT ∘ fmap f -}

(s `Branch`) (lastT (initTs t))

≡ {- trivial -}

s `Branch` lastT (initTs t)

≡ {- induction -}

s `Branch` t

Now we can specify prefix & suffix scanning:

`scanlT, scanrT ∷ Monoid a ⇒ T a → T a`

scanlT = fmap fold ∘ initTs

scanrT = fmap fold ∘ tailTs

Try it out:

`t3 ∷ T String`

t3 = fmap (:[]) (unflatten "abcde")

`*T> t3`

(("a","b"),("c",("d","e")))

*T> scanlT t3

(("a","ab"),("abc",("abcd","abcde")))

*T> scanrT t3

(("abcde","bcde"),("cde",("de","e")))

To test on numbers, I’ll use a handy notation from Matt Hellige to add pre- and post-processing:

`(↝) ∷ (a' → a) → (b → b') → ((a → b) → (a' → b'))`

(f ↝ h) g = h ∘ g ∘ f

And a version specialized to functors:

`(↝*) ∷ Functor f ⇒ (a' → a) → (b → b')`

→ (f a → f b) → (f a' → f b')

f ↝* g = fmap f ↝ fmap g

`t4 ∷ T Integer`

t4 = unflatten [1‥6]

t5 ∷ T Integer

t5 = (Sum ↝* getSum) scanlT t4

Try it:

`*T> t4`

((1,(2,3)),(4,(5,6)))

*T> initTs t4

((1,((1,2),(1,(2,3)))),(((1,(2,3)),4),(((1,(2,3)),(4,5)),((1,(2,3)),(4,(5,6))))))

*T> t5

((1,(3,6)),(10,(15,21)))

*Exercise*: Prove that we have properties similar to the ones relating `fold`

, `scanlT`

, and `scanrT`

on list:

`fold ≡ lastT ∘ scanlT`

fold ≡ headT ∘ scanrT

Answer:

` lastT ∘ scanlT`

≡ {- scanlT spec -}

lastT ∘ fmap fold ∘ initTs

≡ {- lastT ∘ fmap f -}

fold ∘ lastT ∘ initTs

≡ {- lastT ∘ initTs -}

fold

headT ∘ scanrT

≡ {- scanrT def -}

headT ∘ fmap fold ∘ tailTs

≡ {- headT ∘ fmap f -}

fold ∘ headT ∘ tailTs

≡ {- headT ∘ tailTs -}

fold

For instance,

`*T> fold t3`

"abcde"

*T> (lastT ∘ scanlT) t3

"abcde"

*T> (headT ∘ scanrT) t3

"abcde"

Recall the specifications:

`scanlT = fmap fold ∘ initTs`

scanrT = fmap fold ∘ tailTs

To derive more efficient implementations, proceed as in *Deriving list scans*. Start with prefix scan (`scanlT`

), and consider the `Leaf`

and `Branch`

cases separately.

` scanlT (Leaf a)`

≡ {- scanlT spec -}

fmap fold (initTs (Leaf a))

≡ {- initTs def -}

fmap fold (Leaf (Leaf a))

≡ {- fmap def -}

Leaf (fold (Leaf a))

≡ {- fold def -}

Leaf a

scanlT (s `Branch` t)

≡ {- scanlT spec -}

fmap fold (initTs (s `Branch` t))

≡ {- initTs def -}

fmap fold (Branch (initTs s) (fmap (s `Branch`) (initTs t)))

≡ {- fmap def -}

Branch (fmap fold (initTs s)) (fmap fold (fmap (s `Branch`) (initTs t)))

≡ {- scanlT spec -}

Branch (scanlT s) (fmap fold (fmap (s `Branch`) (initTs t)))

≡ {- functor law -}

Branch (scanlT s) (fmap (fold ∘ (s `Branch`)) (initTs t))

≡ {- rework as λ -}

Branch (scanlT s) (fmap (λ t' → fold (s `Branch` t')) (initTs t))

≡ {- fold def -}

Branch (scanlT s) (fmap (λ t' → fold s ⊕ fold t')) (initTs t))

≡ {- rework λ -}

Branch (scanlT s) (fmap ((fold s ⊕) ∘ fold) (initTs t))

≡ {- functor law -}

Branch (scanlT s) (fmap (fold s ⊕) (fmap fold (initTs t)))

≡ {- scanlT spec -}

Branch (scanlT s) (fmap (fold s ⊕) (scanlT t))

≡ {- lastT ∘ scanlT ≡ fold -}

Branch (scanlT s) (fmap (lastT (scanlT s) ⊕) (scanlT t))

≡ {- factor out defs -}

Branch s' (fmap (lastT s' ⊕) t')

where s' = scanlT s

t' = scanlT t

Suffix scan has a similar derivation.

` scanrT (Leaf a)`

≡ {- scanrT def -}

fmap fold (tailTs (Leaf a))

≡ {- tailTs def -}

fmap fold (Leaf (Leaf a))

≡ {- fmap on T -}

Leaf (fold (Leaf a))

≡ {- fold def -}

Leaf a

scanrT (s `Branch` t)

≡ {- scanrT spec -}

fmap fold (tailTs (s `Branch` t))

≡ {- tailTs def -}

fmap fold (Branch (fmap (`Branch` t) (tailTs s)) (tailTs t))

≡ {- fmap def -}

Branch (fmap fold (fmap (`Branch` t) (tailTs s))) (fmap fold (tailTs t))

≡ {- scanrT spec -}

Branch (fmap fold (fmap (`Branch` t) (tailTs s))) (scanrT t)

≡ {- functor law -}

Branch (fmap (fold ∘ (`Branch` t)) (tailTs s)) (scanrT t)

≡ {- rework as λ -}

Branch (fmap (λ s' → fold (s' `Branch` t)) (tailTs s)) (scanrT t)

≡ {- functor law -}

Branch (fmap (λ s' → fold s' ⊕ fold t) (tailTs s)) (scanrT t)

≡ {- rework λ -}

Branch (fmap ((⊕ fold t) ∘ fold) (tailTs s)) (scanrT t)

≡ {- scanrT spec -}

Branch (fmap (⊕ fold t) (scanrT s)) (scanrT t)

≡ {- headT ∘ scanrT -}

Branch (fmap (⊕ headT (scanrT t)) (scanrT s)) (scanrT t)

≡ {- factor out defs -}

Branch (fmap (⊕ headT t') s') t'

where s' = scanrT s

t' = scanrT t

Extract code from these derivations:

`scanlT' ∷ Monoid a ⇒ T a → T a`

scanlT' (Leaf a) = Leaf a

scanlT' (s `Branch` t) =

Branch s' (fmap (lastT s' ⊕) t')

where s' = scanlT' s

t' = scanlT' t

scanrT' ∷ Monoid a ⇒ T a → T a

scanrT' (Leaf a) = Leaf a

scanrT' (s `Branch` t) =

Branch (fmap (⊕ headT t') s') t'

where s' = scanrT' s

t' = scanrT' t

Try it:

`*T> t3`

(("a","b"),("c",("d","e")))

*T> scanlT' t3

(("a","ab"),("abc",("abcd","abcde")))

*T> scanrT' t3

(("abcde","bcde"),("cde",("de","e")))

Although I was just following my nose, without trying to get anywhere in particular, this result is exactly the algorithm I first thought of when considering how to parallelize tree scanning.

Let’s now consider the running time of this algorithm. Assume that the tree is *balanced*, to maximize parallelism. (I think balancing is optimal for parallelism here, but I’m not certain.)

For a tree with $n$ leaves, the work $W\,n$ will be constant when $n = 1$ and $2 \cdot W\,(n/2) + n$ when $n > 1$. Using the *Master Theorem* (explained more here), $W\,n = \Theta(n \log n)$.

This result is disappointing, since scanning can be done with linear work by threading a single accumulator while traversing the input tree and building up the output tree.
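That single-accumulator traversal can be sketched with `mapAccumL` from `Data.Traversable` (a sketch of mine; `scanlSeq` is not from the post, and `<>`/`mempty` stand in for `⊕`/`∅`):

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

import Data.Traversable (mapAccumL)
import Data.Monoid (Sum (..))

data T a = Leaf a | Branch (T a) (T a)
  deriving (Show, Eq, Functor, Foldable, Traversable)

-- Linear-work sequential prefix scan: thread one accumulator
-- left-to-right, replacing each element with the fold of the
-- elements before it, and return the total alongside.
scanlSeq :: (Traversable t, Monoid m) => t m -> (m, t m)
scanlSeq = mapAccumL (\acc m -> (acc <> m, acc)) mempty

main :: IO ()
main = print (scanlSeq (Branch (Leaf (Sum 1)) (Branch (Leaf (Sum 2)) (Leaf (Sum (3 :: Int))))))
```

This does linear work but is inherently sequential: each step waits on the accumulator from the previous one.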

I’m using the term "work" instead of "time" here, since I’m not assuming sequential execution.

We have a parallel algorithm that performs $n \log n$ work, and a sequential program that performs linear work. Can we construct a linear-work parallel algorithm?

Yes. Guy Blelloch came up with a clever linear-work parallel algorithm, which I’ll derive in another post.

**Generalizing `head` and `last`**

Can we replace the ad hoc (tree-specific) `headT`

and `lastT`

functions with general versions that work on all foldables? I’d want the generalization to also generalize the list functions `head`

and `last`

or, rather, to *total* variants (ones that cannot error due to empty list). For totality, provide a default value for when there are no elements.

`headF, lastF ∷ Foldable f ⇒ a → f a → a`

I also want these functions to be as efficient on lists as `head`

and `last`

and as efficient on trees as `headT`

and `lastT`

.

The `First`

and `Last`

monoids provide left-biased and right-biased choice. They’re implemented as `newtype`

wrappers around `Maybe`

:

`newtype First a = First { getFirst ∷ Maybe a }`

instance Monoid (First a) where

∅ = First Nothing

r@(First (Just _)) ⊕ _ = r

First Nothing ⊕ r = r

`newtype Last a = Last { getLast ∷ Maybe a }`

instance Monoid (Last a) where

∅ = Last Nothing

_ ⊕ r@(Last (Just _)) = r

r ⊕ Last Nothing = r

For `headF`, embed all of the elements into the `First` monoid (via `First ∘ Just`), fold over the result, and extract the final value, using the provided default value in case there are no elements. Similarly for `lastF`.

```
headF dflt = fromMaybe dflt ∘ getFirst ∘ foldMap (First ∘ Just)
lastF dflt = fromMaybe dflt ∘ getLast ∘ foldMap (Last ∘ Just)
```
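Modulo the Unicode operators, these definitions run as-is against the `First` and `Last` already provided by `Data.Monoid`. A self-contained ASCII version:

```haskell
import Data.Maybe  (fromMaybe)
import Data.Monoid (First (..), Last (..))

-- Total head/last for any Foldable, taking a default for the empty case.
headF, lastF :: Foldable f => a -> f a -> a
headF dflt = fromMaybe dflt . getFirst . foldMap (First . Just)
lastF dflt = fromMaybe dflt . getLast  . foldMap (Last  . Just)
```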

For instance,

```
*T> headF 3 [1,2,4,8]
1
*T> headF 3 []
3
```

When our elements belong to a monoid, we can use `∅` as the default:

```
headFM ∷ (Foldable f, Monoid m) ⇒ f m → m
headFM = headF ∅

lastFM ∷ (Foldable f, Monoid m) ⇒ f m → m
lastFM = lastF ∅
```

For instance,

```
*T> lastFM ([] ∷ [String])
""
```

Using `headFM` and `lastFM` in place of `headT` and `lastT`, we can easily handle addition of an `Empty` case to our tree functor in this post. The key choice is that `fold Empty ≡ ∅` and `fmap _ Empty ≡ Empty`. Then `headFM` will choose the first *leaf*, and `lastFM` the last.
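To make the `Empty` point concrete, here is a hypothetical leaf tree in the spirit of the post's `T` (not its exact definition), extended with an `Empty` constructor whose `foldMap` yields `mempty`, i.e. `fold Empty ≡ ∅`. Extraction via `First` then skips empty subtrees:

```haskell
import Data.Maybe  (fromMaybe)
import Data.Monoid (First (..))

-- Hypothetical leaf tree with an added Empty case.
data T a = Empty | L a | B (T a) (T a)

instance Foldable T where
  foldMap _ Empty   = mempty                      -- fold Empty ≡ ∅
  foldMap f (L a)   = f a
  foldMap f (B u v) = foldMap f u <> foldMap f v

-- headF as in the post, in ASCII.
headF :: Foldable f => a -> f a -> a
headF dflt = fromMaybe dflt . getFirst . foldMap (First . Just)
```

For example, `headF 0 (B Empty (B (L 3) (L 4)))` skips the empty left subtree and yields `3`.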

What about efficiency? Because `headF` and `lastF` are defined via `foldMap`, which is a composition of `fold` and `fmap`, one might think that we have to traverse the entire structure when used with functors like `[]` or `T`.

Laziness saves us, however, and we can even extract the head of an infinite list or a partially defined one. For instance,

```
  foldMap (First ∘ Just) [5 ‥]
≡ foldMap (First ∘ Just) (5 : [6 ‥])
≡ First (Just 5) ⊕ foldMap (First ∘ Just) [6 ‥]
≡ First (Just 5)
```

So

```
  headF d [5 ‥]
≡ fromMaybe d (getFirst (foldMap (First ∘ Just) [5 ‥]))
≡ fromMaybe d (getFirst (First (Just 5)))
≡ fromMaybe d (Just 5)
≡ 5
```

And, sure enough,

```
*T> foldMap (First ∘ Just) [5 ‥]
First {getFirst = Just 5}
*T> headF ⊥ [5 ‥]
5
```
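The short-circuiting is easy to reproduce with the `First` in `Data.Monoid`, whose `(<>)` never forces its second argument once the first is a `Just`, so the fold terminates even on an infinite list (an ASCII sketch, with `undefined` standing in for `⊥`):

```haskell
import Data.Maybe  (fromMaybe)
import Data.Monoid (First (..))

-- headF as in the post, in ASCII; laziness lets it stop at the first element.
headF :: Foldable f => a -> f a -> a
headF dflt = fromMaybe dflt . getFirst . foldMap (First . Just)

-- headF undefined [5 ..] yields 5 without forcing the default or the tail.
```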

- As mentioned above, the derived scanning implementations perform asymptotically more work than necessary. Future posts explore how to derive parallel-friendly, linear-work algorithms. Then we’ll see how to transform the parallel-friendly algorithms so that they work *destructively*, overwriting their input as they go, and hence are suitable for execution entirely in CUDA or OpenCL.
- The functions `initTs` and `tailTs` are still tree-specific. To generalize the specification and derivation of list and tree scanning, find a way to generalize these two functions. The types of `initTs` and `tailTs` fit with the `duplicate` method on comonads. Moreover, `tails` is the usual definition of `duplicate` on lists, and I think `inits` would be `extend` for "snoc lists". For trees, however, I don’t think the correspondence holds. Am I missing something?
- In particular, I want to extend the derivation to depth-typed, perfectly balanced trees, of the sort I played with in *A trie for length-typed vectors* and *From tries to trees*. The functions `initTs` and `tailTs` make unbalanced trees out of balanced ones, so I don’t know how to adapt the specifications given here to the setting of depth-typed balanced trees. Maybe I could just fill up the to-be-ignored elements with `∅`.