Overloading lambda

Haskell’s type class facility is a powerful abstraction mechanism. Using it, we can overload multiple interpretations onto a single vocabulary, with each interpretation corresponding to a different type. The class laws constrain these interpretations and allow reasoning that is valid over all (law-abiding) instances—even ones not yet defined.

As Haskell is a higher-order functional language in the heritage of Church’s (typed) lambda calculus, it also supports “lambda abstraction”.

Sadly, however, these two forms of abstraction don’t go together. When we use the vocabulary of lambda abstraction (“λ x → ⋯”) and application (“u v”), our expressions can only be interpreted as one type (constructor), namely functions. (Note that I am not talking about parametric polymorphism, which is available with both lambda abstraction and type-class-style overloading.) Is it possible to overload lambda and application using type classes, or perhaps in the same spirit? The answer is yes, and there are some wonderful benefits of doing so. I’ll explain the how in this post and hint at the why, to be elaborated in future posts.

Generalizing functions

First, let’s look at a related question. Instead of generalized interpretation of the particular vocabulary of lambda abstraction and application, let’s look at re-expressing functions via an alternative vocabulary that can be generalized more readily. If you are into math or have been using Haskell for a while, you may already know where I’m going: the mathematical notion of a category (and the embodiment in the Category and Arrow type classes).

Much has been written about categories, both in the setting of math and of Haskell, so I’ll give only the most cursory summary here.

Recall that every function has two associated sets (or types, CPOs, etc) often referred to as the function’s “domain” and “range”. (As explained elsewhere, the term “range” can be misleading.) Moreover, there are two general building blocks (among others) for functions, namely the identity function and composition of compatibly typed functions, satisfying the following properties:

  • left identity:  id ∘ f ≡ f
  • right identity:  f ∘ id ≡ f
  • associativity:  h ∘ (g ∘ f) ≡ (h ∘ g) ∘ f

Now we can separate these properties from the other specifics of functions. A category is something that has these properties but needn’t be function-like in other ways. Each category has objects (e.g., sets) and morphisms/arrows (e.g., functions), and two building blocks id and (∘) on compatible morphisms. Rather than “domain” and “range”, we usually use the terms (a) “domain” and “codomain” or (b) “source” and “target”.
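This is exactly the vocabulary captured by Haskell’s Category class from Control.Category, mentioned earlier. As a reminder, here is a minimal, self-contained rendering of that class together with the function instance; it’s only a sketch to fix ideas, not the library source:

import Prelude hiding (id, (.))

-- Identity and composition, the two morphism building blocks.
-- Every instance is expected to satisfy the three laws above.
class Category cat where
  id  ∷ cat a a
  (.) ∷ cat b c → cat a b → cat a c

-- Functions form the prototypical instance.
instance Category (→) where
  id x      = x
  (g . f) x = g (f x)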

Examples of categories include sets & functions (as we’ve seen), restricted sets & functions (e.g., vector spaces & linear transformations), preorders, and any monoid (as a one-object category).

The notion of category is very general and correspondingly weak. By imposing so few constraints, it embraces a wide range of mathematical notions (including many appearing in programming) but gives correspondingly little leverage with which to define and prove more specific ideas and theorems. Thus we’ll often want additional structure, including products, coproducts (with products distributing over coproducts) and a notion of “exponential”, which is an object that represents a morphism. For the familiar terrain of sets/types and functions, products correspond to pairing, coproducts to sums (and choice), and exponentials to functions as things/values. (In programming, we often refer to exponentials as the types of “first-class functions”. Some languages have them, and some don’t.) These aspects—together with associated laws—are called “cartesian”, “cocartesian”, and “closed”, respectively. Altogether, we have “bicartesian closed categories”, more succinctly called “biCCCs” (or “CCCs”, without the cocartesian requirement).

The cartesian vocabulary consists of a product operation on objects, a × b, plus three morphism building blocks:

  • exl ∷ a × b ↝ a
  • exr ∷ a × b ↝ b
  • f △ g ∷ a ↝ b × c where f ∷ a ↝ b and g ∷ a ↝ c

I’m using “↝” to refer to morphisms.

We’ll also want the dual notion of coproducts, a + b, with building blocks and laws exactly dual to products:

  • inl ∷ a ↝ a + b
  • inr ∷ b ↝ a + b
  • f ▽ g ∷ a + b ↝ c where f ∷ a ↝ c and g ∷ b ↝ c

You may have noticed that (a) exl and exr generalize fst and snd, (b) inl and inr generalize Left and Right, and (c) (△) and (▽) come from Control.Arrow, where they’re called “(&&&)” and “(|||)”. I took the names above from Calculating Functional Programs, where (△) and (▽) are also called “fork” and “join”.
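To make the correspondence concrete, here is how this vocabulary reads in the familiar category of Haskell types and functions, using the names above rather than the Control.Arrow ones (a sketch only):

-- Products: projections and “fork”.
exl ∷ (a, b) → a
exl = fst

exr ∷ (a, b) → b
exr = snd

(△) ∷ (a → b) → (a → c) → (a → (b, c))
(f △ g) x = (f x, g x)

-- Coproducts: injections and “join”.
inl ∷ a → Either a b
inl = Left

inr ∷ b → Either a b
inr = Right

(▽) ∷ (a → c) → (b → c) → (Either a b → c)
(f ▽ g) = either f g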

For product and coproduct laws, see Calculating Functional Programs (pp 155–156) or Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire (p 9).

The closed vocabulary consists of an exponential operation on objects, a ⇨ b (often written “bᵃ”), plus three morphism building blocks:

  • uncurry h ∷ a × b ↝ c where h ∷ a ↝ (b ⇨ c)
  • curry f ∷ a ↝ (b ⇨ c) where f ∷ a × b ↝ c
  • apply ∷ (a ⇨ b) × a ↝ b (sometimes called “eval”)

Again, there are laws associated with exl, exr, (△), inl, inr, (▽), and with curry, uncurry, and apply.

In reading the signatures above, the operators ×, +, and ⇨ bind more tightly than ↝, and (∘) binds more tightly than (△) and (▽).

Keep in mind the distinction between morphisms (“↝”) and exponentials (“⇨”). The latter is a sort of data/object representation of the former.
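In the concrete setting of types and functions, exponentials are ordinary Haskell function types, and the closed vocabulary specializes to function application together with the Prelude’s curry and uncurry (again a sketch; the primes merely avoid clashing with the Prelude names):

apply ∷ (a → b, a) → b
apply (f, x) = f x

curry' ∷ ((a, b) → c) → (a → (b → c))
curry' f a b = f (a, b)

uncurry' ∷ (a → (b → c)) → ((a, b) → c)
uncurry' h (a, b) = h a b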

Where are we going?

I suggested that the vocabulary of the lambda calculus—namely lambda abstraction and application—can be generalized beyond functions. Then I showed something else, which is that an alternative vocabulary (biCCC) that applies to functions can be overloaded beyond functions. Instead of overloading the lambda calculus notation, we could simply use the alternative algebraic notation of biCCCs. Unfortunately, doing so leads to rather ugly results. The lambda calculus is a much more human-friendly notation than the algebraic language of biCCC.

I’m not just wasting your time and mine, however; there is a way to combine the flexibility of biCCC with the friendliness of lambda calculus: automatically translate from lambda calculus to biCCC form. The discovery that typed lambda calculus can be interpreted in any CCC is due to Joachim Lambek. See pointers on John Baez’s blog. (Coproducts do not arise in translation unless the source language has a construct like if-then-else or definition by cases with pattern matching.)

Overview: from lambda expressions to biCCC

We’re going to need a few pieces to complete this story and have it be useful in a language like Haskell:

  • a representation of lambda expressions,
  • a representation of biCCC expressions,
  • a translation of lambda expressions to biCCC, and
  • a translation of Haskell to lambda expressions.

This last step (which is actually the first step in turning Haskell into biCCC) is already done by a typical compiler. We start with a syntactically rich language and desugar it into a much smaller lambda calculus. GHC in particular has a small language called “Core”, which is much smaller than the Haskell source language.

I originally intended to convert from Core directly to biCCC form, but I found it difficult to do correctly. Core is dynamically typed, in the sense that a type-correct Haskell program can manipulate Core in type-incorrect ways; in other words, it can construct type-incorrect Core. Moreover, Core representations contain an enormous amount of type information, since all type inference has already been done and recorded, so it is tedious to get all of the type information right and correspondingly easy to get it wrong. For just this reason, GHC includes an explicit type-checker, “Core Lint”, for catching type inconsistencies (but not their causes) after the fact. While Core Lint is much better than nothing, it is less helpful than static checking, which would point to inconsistencies in the source code of the Core manipulation itself.

Because I want static checking of my source code for lambda-to-biCCC conversion, I defined my own alternative to Core, using a generalized algebraic data type (GADT). The first step of translation then is conversion from GHC Core into this GADT.

The source fragments I’ll show below are from the Github project lambda-ccc.

A typeful lambda calculus representation

In Haskell, pair types are usually written “(a,b)”, sums as “Either a b”, and functions as “a → b”. For the categorical generalizations (products, coproducts, and exponentials), I’ll instead use the notation “a × b”, “a + b”, and “a ⇨ b”. (My blogging software typesets some operators differently from what you’ll see in the source code.)

infixl 7 ×
infixl 6 +
infixr 1 ⇨

For reasons that will become clearer in future posts, I’ll want a typed representation of types. The data constructors are named to reflect the types they construct:

data Ty ∷ * → * where
  Unit ∷ Ty Unit
  (×)  ∷ Ty a → Ty b → Ty (a × b)
  (+)  ∷ Ty a → Ty b → Ty (a + b)
  (⇨)  ∷ Ty a → Ty b → Ty (a ⇨ b)

Note that Ty a is either a singleton or empty for every type a. I could instead use promoted data type constructors and singletons.
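For instance, the type (Unit × Unit) ⇨ Unit is represented by a value that mirrors it exactly (shown in the blog’s typeset notation; the name is only for illustration):

pairFunTy ∷ Ty ((Unit × Unit) ⇨ Unit)
pairFunTy = (Unit × Unit) ⇨ Unit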

Next, names and typed variables:

type Name = String
data V a = V Name (Ty a)

Lambda expressions contain binding patterns. For now, we’ll have just the unit pattern, variables, and pairings of patterns:

data Pat ∷ * → * where
  UnitPat ∷ Pat Unit
  VarPat  ∷ V a → Pat a
  (:#)    ∷ Pat a → Pat b → Pat (a × b)
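For example, the binding pattern “(a,b)”, with both variables at type Unit, looks as follows (a sketch; the names are arbitrary):

abPat ∷ Pat (Unit × Unit)
abPat = VarPat (V "a" Unit) :# VarPat (V "b" Unit)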

Finally, we have lambda expressions, with constructors for variables, constants, application, and abstraction:

infixl 9 :^
data E ∷ * → * where
  Var    ∷ V a → E a
  ConstE ∷ Prim a → Ty a → E a
  (:^)   ∷ E (a ⇨ b) → E a → E b
  Lam    ∷ Pat a → E b → E (a ⇨ b)

The Prim GADT contains typed primitives. The ConstE constructor pairs a Prim with the specific type at which it is used, since primitives can be polymorphic.
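As a small example, the term “λ (f,x) → f x” looks as follows in this representation, exercising Lam, Var, pattern pairing, and application (a sketch at a monomorphic type, with arbitrary variable names):

applyE ∷ E (((Unit ⇨ Unit) × Unit) ⇨ Unit)
applyE = Lam (VarPat f :# VarPat x) (Var f :^ Var x)
 where
   f = V "f" (Unit ⇨ Unit)
   x = V "x" Unit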

A typeful biCCC representation

The data type a ↣ b contains biCCC expressions that represent morphisms from a to b:

data (↣) ∷ * → * → * where
  -- Category
  Id      ∷ a ↣ a
  (:∘)    ∷ (b ↣ c) → (a ↣ b) → (a ↣ c)
  -- Products
  Exl     ∷ a × b ↣ a
  Exr     ∷ a × b ↣ b
  (:△)    ∷ (a ↣ b) → (a ↣ c) → (a ↣ b × c)
  -- Coproducts
  Inl     ∷ a ↣ a + b
  Inr     ∷ b ↣ a + b
  (:▽)    ∷ (b ↣ a) → (c ↣ a) → (b + c ↣ a)
  -- Exponentials
  Apply   ∷ (a ⇨ b) × a ↣ b
  Curry   ∷ (a × b ↣ c) → (a ↣ (b ⇨ c))
  Uncurry ∷ (a ↣ (b ⇨ c)) → (a × b ↣ c)
  -- Primitives
  Prim    ∷ Prim (a → b) → (a ↣ b)
  Const   ∷ Prim       b → (a ↣ b)
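As a tiny example, swapping the two halves of a product, which will reappear at the end of this post, is expressed directly in this vocabulary (ignoring the constraints on type variables mentioned in the next paragraph):

swapP ∷ (a × b) ↣ (b × a)
swapP = Exr :△ Exl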

The actual representation has some constraints on the type variables involved. I could have used type classes instead of a GADT here, except that the existing classes do not allow polymorphism constraints on the methods. The ConstraintKinds language extension allows instance-specific constraints, but I’ve been unable to work out the details in this case.

I’m not happy with the similarity of Prim and Const. Perhaps there’s a simpler formulation.

Lambda to CCC

We’ll always convert terms of the form λ p → e, and we’ll keep the pattern p and expression e separate:

convert ∷ Pat a → E b → (a ↣ b)

The pattern argument gets built up from patterns appearing in lambdas and serves as a variable binding “context”. To begin, we’ll strip the pattern off of a lambda, eta-expanding if necessary:

toCCC ∷ E (a ⇨ b) → (a ↣ b)
toCCC (Lam p e) = convert p e
toCCC e = toCCC (etaExpand e)

(We could instead begin with a dummy unit pattern/context, giving toCCC the type E c → (Unit ↣ c).)

The conversion algorithm uses a collection of simple equivalences.

For constants, we have a simple equivalence:

λ p → c ≡ const c

Thus the implementation:

convert _ (ConstE o _) = Const o

For applications, split the expression in two (repeating the context), compute the function and argument parts separately, combine with (△), and then apply:

λ p → u v ≡ apply ∘ ((λ p → u) △ (λ p → v))

The implementation:

convert p (u :^ v) = Apply :∘ (convert p u :△ convert p v)

For lambda expressions, simply curry:

λ p → λ q → e  ≡  curry (λ (p,q) → e)

Assume that there is no variable shadowing, so that p and q have no variables in common. The implementation:

convert p (Lam q e) = Curry (convert (p :# q) e)

Finally, we have to deal with variables. Given λ p → v for a pattern p and variable v appearing in p, either v ≡ p, or p is a pair pattern with v appearing in the left or the right part. To handle these three possibilities, appeal to the following equivalences:

λ v → v      ≡ id
λ (p,q) → e  ≡ (λ p → e) ∘ exl  -- if q not free in e
λ (p,q) → e  ≡ (λ q → e) ∘ exr  -- if p not free in e

By a pattern not occurring freely, I mean that no variable in the pattern occurs freely.

These properties lead to an implementation:

convert (VarPat u) (Var v) | u ≡ v              = Id
convert (p :# q)   e       | not (q `occurs` e) = convert p e :∘ Exl
convert (p :# q)   e       | not (p `occurs` e) = convert q e :∘ Exr

There are two problems with this code. The first is a performance issue. The recursive convert calls will do considerable redundant work due to the recursive nature of occurs.

To fix this performance problem, handle only λ p → v (variables), and search through the pattern structure only once, returning a Maybe (a ↣ b). The return value is Nothing when v does not occur in p.

convert p (Var v) =
  fromMaybe (error ("convert: unbound variable: " ++ show v)) $
  convertVar v p

If a sub-pattern search succeeds, tack on the (:∘ Exl) or (:∘ Exr) using (<$>) (i.e., fmap). Backtrack using mplus.

convertVar ∷ ∀ b a. V b → Pat a → Maybe (a ↣ b)
convertVar u = conv
 where
   conv ∷ Pat c → Maybe (c ↣ b)
   conv (VarPat v) | u ≡ v     = Just Id
                   | otherwise = Nothing
   conv UnitPat  = Nothing
   conv (p :# q) = ((:∘ Exr) <$> conv q) `mplus` ((:∘ Exl) <$> conv p)

(The explicit type quantification and the ScopedTypeVariables language extension relate the b in the signatures of convertVar and conv.) Note that we’ve solved the problem of redundant occurs testing, eliminating those tests altogether.

The second problem is more troubling: the definitions of convert for Var above do not type-check. Look again at the first try:

convert ∷ Pat a → E b → (a ↣ b)
convert (VarPat u) (Var v) | u ≡ v = Id

The error message:

Could not deduce (b ~ a)
...
Expected type: V a
  Actual type: V b
In the second argument of `(==)', namely `v'
In the expression: u == v

The bug here is that we cannot compare u and v for equality, because their types may differ. The definition of convertVar has a similar type error.

Taking care with types

There’s a trick I’ve used in many libraries to handle this situation of wanting to compare for equality two values that may or may not have the same type. For equal values, don’t return simply True, but rather a proof that the types do indeed match. For unequal values, we simply fail to return an equality proof. Thus the comparison operation on V has the following type:

varTyEq ∷ V a → V b → Maybe (a :=: b)

where a :=: b is populated only by proofs that a and b are the same type.

The type of type equality proofs is defined in Data.Proof.EQ from the ty package:

data (:=:) ∷ * → * → * where Refl ∷ a :=: a

The Refl constructor is named to suggest the axiom of reflexivity, which says that anything is equal to itself. There are other utilities for commutativity, associativity, and lifting of equality to type constructors.

In fact, this pattern comes up often enough that there’s a type class in the Data.IsTy module of the ty package:

class IsTy f where
  tyEq ∷ f a → f b → Maybe (a :=: b)
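For the typed variables used above, a plausible instance compares the names and then defers to equality of the type representations. The following is only a sketch, assuming that Ty itself has an IsTy instance; the actual library definition may differ:

instance IsTy V where
  tyEq (V na ta) (V nb tb) | na == nb  = ta `tyEq` tb
                           | otherwise = Nothing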

With this trick, we can fix our type-incorrect code above. Instead of

convert (VarPat u) (Var v) | u ≡ v = Id

define

convert (VarPat u) (Var v) | Just Refl ← u `tyEq` v = Id

During type-checking, GHC uses the guard (“Just Refl ← u `tyEq` v”) to deduce an additional local constraint to use in type-checking the right-hand side (here Id). That constraint (a ~ b) suffices to make the definition type-correct.
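Here is the same trick in a tiny standalone form: a cast from a to b succeeds exactly when the two type representations match (again a sketch, assuming an IsTy instance for Ty; castVia is only an illustrative name):

castVia ∷ Ty a → Ty b → a → Maybe b
castVia ta tb x | Just Refl ← ta `tyEq` tb = Just x
                | otherwise                = Nothing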

In the same way, we can fix the more efficient implementation:

convertVar ∷ ∀ b a. V b → Pat a → Maybe (a ↣ b)
convertVar u = conv
 where
   conv ∷ Pat c → Maybe (c ↣ b)
   conv (VarPat v) | Just Refl ← v `tyEq` u = Just Id
                   | otherwise              = Nothing
   conv UnitPat  = Nothing
   conv (p :# q) = ((:∘ Exr) <$> conv q) `mplus` ((:∘ Exl) <$> conv p)

Example

To see how conversion works in practice, consider a simple swap function:

swap (a,b) = (b,a)

When reified (as explained in a future post), we get

λ ds → (λ (a,b) → (b,a)) ds

Lambda expressions can be optimized at construction, in which case an η-reduction would yield the simpler λ (a,b) → (b,a). However, to make the translation more interesting, I’ll leave the lambda term unoptimized.

With the conversion algorithm given above, the (unoptimized) lambda term gets translated into the following:

apply ∘ (curry (apply ∘ (apply ∘ (const (,) △ (id ∘ exr) ∘ exr) △ (id ∘ exl) ∘ exr)) △ id)

Reformatted with line breaks:

  apply
. ( curry (apply ∘ ( apply ∘ (const (,) △ (id ∘ exr) ∘ exr)
                   △ (id ∘ exl) ∘ exr) )
  △ id )

If you squint, you may be able to see how this CCC expression relates to the lambda expression. The “λ ds →” got stripped initially. The remaining application “(λ (a,b) → (b,a)) ds” became apply ∘ (⋯ △ ⋯), where the right “⋯” is id, which came from ds. The left “⋯” has a curry from the “λ (a,b) →” and two applys from the curried application of (,) to b and a. The variables b and a become (id ∘ exr) ∘ exr and (id ∘ exl) ∘ exr, which are paths to b and a in the constructed binding pattern (ds,(a,b)).

I hope this example gives you a feeling for how the lambda-to-CCC translation works in practice, and for the complexity of the result. Fortunately, we can simplify the CCC terms as they’re constructed. For this example, as we’ll see in the next post, we get a much simpler result:

exr △ exl

This combination is common enough that it pretty-prints as

swapP

when CCC desugaring is turned on. (The “P” suffix refers to “product”, to distinguish from coproduct swap.)

Coming up

I’ll close this blog post now to keep it digestible. Upcoming posts will address optimization of biCCC expressions, circuit generation and analysis as biCCCs, and the GHC plugin that handles conversion of Haskell code to biCCC form, among other topics.

5 Comments

  1. Conal Elliott » Blog Archive » Optimizing CCCs:

    […] the post Overloading lambda, I gave a translation from a typed lambda calculus into the vocabulary of cartesian closed […]

  2. Dan Krejsa:

    Looking at the expression

    apply ∘ (curry (apply ∘ (apply ∘ (const (,) △ (id ∘ exr) ∘ exr) △ (id ∘ exl) ∘ exr)) △ id)

    I was asking myself, what is the interpretation of ‘const’ and of ‘(,)’ in the CCC?

    The only answer I came up with that seemed to make sense was that const is the function

    const :: (1 ↣ b) → (a ↣ b)
    const e = e ∘ it

    where ‘it’ is the unique CCC morphism a ↣ 1, 1 being Unit; and ‘(,)’ means the CCC morphism

    (,) :: 1 ↣ (a ⇨ (b ⇨ a × b))
    (,) = curry (curry (exr ∘ exl △ exr))

    I was curious why your ‘↣’ type representing the CCC morphisms didn’t have a fundamental constructor

    It :: a ↣ Unit

    corresponding to ‘it’, given that a terminal object 1 (or Unit) is required? I guess you can use Const as a generalization of that, but it seems to go somewhat outside of the category language…

  3. Conal Elliott » Blog Archive » Circuits as a bicartesian closed category:

    […] previous few posts have been about cartesian closed categories (CCCs). In From Haskell to hardware via cartesian […]

  4. Paolo G. Giarrusso:

    Answering to:

    -- Primitives
      Prim    ∷ Prim (a → b) → (a ↣ b)
      Const   ∷ Prim       b  → (a ↣ b)

    I’m not happy with the similarity of Prim and Const. Perhaps there’s a simpler formulation.

    Wouldn’t you want to model Const b as a primitive p of type Unit → b, and then construct Prim p of type Unit ↣ b? In a category with an initial object, AFAIK it’s standard to encode constants as morphisms from the initial object.

    The polymorphic a in Const result type seems especially unfortunate, since you can inadvertently pass whatever argument (you will likely still get some error, but probably not the best one).

    I see you’re sharing Prim with Haskell code, so those constants should also be encoded without the Unit → part of the type in some places. Unfortunately I don’t see a perfect solution to that: it might be best to have a different type of primitives for CCCs, with a constructor for function primitives and one for constants which adds the Unit → part, because that at least keeps the CCC language precisely matching its spec.

    (Sorry for my bad quoting, I can’t find how to format comments properly, so I just tried out some Markdown dialect which kind-of-works).

  5. Paolo G. Giarrusso:

    In a category with an initial object, AFAIK it’s standard to encode constants as morphisms from the initial object.

    Sorry, of course I meant terminal object (1 or Unit).
