## Details for non-strict memoization, part 1

In Non-strict memoization, I sketched out a means of memoizing non-strict functions. I gave the essential insight but did not show the details of how a non-strict memoization library comes together. In this post, I give those details, which are a bit delicate, in terms of the implementation described in Elegant memoization with higher-order types.

Near the end, I run into some trouble with regular data types, which I don’t know how to resolve cleanly and efficiently.

Edits:

• 2010-09-10: Fixed minor typos.

### Hyper-strict memo tries

Strict (really hyper-strict) memoization is centered on a family of trie functors: for each domain type `k`, an associated functor `Trie k`.

``````
type k :→: v = Trie k v

class HasTrie k where
  type Trie k :: * → *
  trie   :: (k  →  v) → (k :→: v)
  untrie :: (k :→: v) → (k  →  v)
``````

The simplest instance is for the unit type:

``````
instance HasTrie () where
  type Trie ()  = Id
  trie   f      = Id (f ())
  untrie (Id v) = λ () → v
``````

For consistency with other types, I just made a small change from the previous version, which used `const v` instead of the stricter `λ () → v`.
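To see the difference, note that `const v` never inspects its argument, while `λ () → v` pattern-matches on it and so forces it. Here is a small self-contained check of that distinction (my own illustration, using `undefined` as an observable stand-in for ⊥; `constStyle` and `matchStyle` are hypothetical names, not from the library):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Two ways to rebuild a function from a stored value v:
constStyle :: v -> (() -> v)
constStyle v = const v        -- ignores its argument entirely

matchStyle :: v -> (() -> v)
matchStyle v = \ () -> v      -- pattern-matches, so forces its argument

main :: IO ()
main = do
  print (constStyle (3 :: Int) undefined)   -- const v is insensitive to ⊥
  r <- try (evaluate (matchStyle (3 :: Int) undefined))
         :: IO (Either SomeException Int)
  print (either (const "bottom") show r)    -- λ () → v maps ⊥ to ⊥
```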

Sums and products are a little more intricate:

``````
instance (HasTrie a, HasTrie b) ⇒ HasTrie (Either a b) where
  type Trie (Either a b) = Trie a :*: Trie b
  trie   f           = trie (f ∘ Left) :*: trie (f ∘ Right)
  untrie (ta :*: tb) = untrie ta `either` untrie tb

instance (HasTrie a, HasTrie b) ⇒ HasTrie (a , b) where
  type Trie (a , b) = Trie a :. Trie b
  trie   f      = O (trie (trie ∘ curry f))
  untrie (O tt) = uncurry (untrie ∘ untrie tt)
``````

These trie types are not just strict; they're hyper-strict. During trie search, arguments get thoroughly evaluated. (See Section 9 in the paper Denotational design with type class morphisms.) In other words, all of the points of possible undefinedness are lost.
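To make the hyper-strictness observable, here is a hand-rolled miniature (my own names, with a `Bool` trie standing in for the combinator-built tries): memoizing a function that ignores its second component still forces both components during trie search.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A hand-rolled hyper-strict trie for Bool:
data BoolTrie v = BoolTrie v v   -- values at False and at True

trieB :: (Bool -> v) -> BoolTrie v
trieB f = BoolTrie (f False) (f True)

untrieB :: BoolTrie v -> (Bool -> v)
untrieB (BoolTrie fv tv) b = if b then tv else fv   -- forces b

-- A pair trie is a trie of tries, as in the (a, b) instance:
trieBB :: ((Bool, Bool) -> v) -> BoolTrie (BoolTrie v)
trieBB f = trieB (\a -> trieB (\b -> f (a, b)))

untrieBB :: BoolTrie (BoolTrie v) -> ((Bool, Bool) -> v)
untrieBB tt (a, b) = untrieB (untrieB tt a) b       -- forces both components

f, f' :: (Bool, Bool) -> Bool
f (x, _) = x                 -- ignores its second component
f' = untrieBB (trieBB f)     -- memoized via the trie

main :: IO ()
main = do
  print (f (True, undefined))   -- True: f never looks at its second component
  r <- try (evaluate (f' (True, undefined)))
         :: IO (Either SomeException Bool)
  print (either (const "bottom") show r)   -- the memoized version forces it
```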

### Strict and non-strict memo tries

The formulation of strict tries will look very like the hyper-strict tries we’ve already seen, with new names for the associated trie type and the conversion methods:

``````
type k :→ v = STrie k v

class HasTrie k where
  type STrie k :: * → *
  sTrie   ::             (k  → v) → (k :→ v)
  sUntrie :: HasLub v ⇒ (k :→ v) → (k  → v)
``````

Besides renaming, I’ve also added a `HasLub` constraint for `sUntrie`, which we’ll need later.

For instance, the (almost) simplest strict trie is the one for the unit type, defined exactly as before (with new names):

``````
instance HasTrie () where
  type STrie ()  = Id
  sTrie   f      = Id (f ())
  sUntrie (Id v) = λ () → v
``````

For non-strict memoization, we’ll want to recover all of the points of possible undefinedness lost in hyper-strict memoization. At every level of a structured value, there is the possibility of ⊥ or of a non-⊥ value. Correspondingly, a non-strict trie consists of the value corresponding to the argument ⊥, together with a strict (but not hyper-strict) trie for the non-⊥ values:

``````
data Trie k v = Trie v (k :→ v)

type k :→: v = Trie k v
``````

The conversions between functions and non-strict tries are no longer methods, as they can be defined uniformly for all domain types. To form a non-strict trie, capture the function’s value at ⊥, and build a strict (but not hyper-strict) trie:

``````
trie   :: (HasTrie k          ) ⇒ (k  →  v) → (k :→: v)
trie f = Trie (f ⊥) (sTrie f)
``````

To convert back from a non-strict trie to a (now memoized) function, combine the information from two sources: the original function’s value at ⊥, and the function resulting from the strict (but not hyper-strict) trie:

``````
untrie :: (HasTrie k, HasLub v) ⇒ (k :→: v) → (k  →  v)
untrie (Trie b t) = const b ⊔ sUntrie t
``````

The least-upper-bound (⊔) here is well-defined because its arguments are information-compatible (consistent, non-contradictory). More strongly, `const b ⊑ sUntrie t`, i.e., the first argument is an information approximation to (contains no information absent from) the second argument. Now we see the need for `HasLub v` in the type of `sUntrie` above: functions are ⊔-able exactly when their result types are.
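As a concrete illustration (hand-rolled for domain `Bool`; names are my own): for a flat result type, and with ⊥ represented by an exception, `const b ⊑ sUntrie t` means `(⊔)` degenerates to "first argument if defined, otherwise second". The real `(⊔)` comes from the lub package and races its arguments concurrently; `lubFlat` below is only a sequential approximation good enough for this demo.

```haskell
import Control.Exception (SomeException, evaluate, try)
import System.IO.Unsafe (unsafePerformIO)

-- Strict and non-strict Bool tries, following Trie k v = Trie v (k :→ v):
data SBoolTrie v = SBoolTrie v v            -- values at False and at True
data NBoolTrie v = NTrie v (SBoolTrie v)    -- value at ⊥, plus a strict trie

sTrieB :: (Bool -> v) -> SBoolTrie v
sTrieB f = SBoolTrie (f False) (f True)

sUntrieB :: SBoolTrie v -> (Bool -> v)
sUntrieB (SBoolTrie fv tv) b = if b then tv else fv

trieB :: (Bool -> v) -> NBoolTrie v
trieB f = NTrie (f undefined) (sTrieB f)    -- also capture f ⊥

-- Sequential stand-in for (⊔), valid only for flat result types and
-- only when ⊥ shows up as an exception:
lubFlat :: v -> v -> v
lubFlat a b = unsafePerformIO $ do
  r <- try (evaluate a) :: IO (Either SomeException v)
  return (either (const b) (const a) r)

untrieB :: NBoolTrie v -> (Bool -> v)
untrieB (NTrie bot t) = \k -> lubFlat bot (sUntrieB t k)

main :: IO ()
main = do
  let lazyF   = untrieB (trieB (const (3 :: Int)))
      strictF = untrieB (trieB not)
  print (lazyF undefined)   -- 3: as non-strict as the original const 3
  print (strictF True)      -- False: strict functions still memoize correctly
```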

### Sums

Just as non-strict tries contain strict tries, so also strict tries contain non-strict tries. For instance, consider a sum type, `Either a b`. An element is either ⊥ or `Left x` or `Right y`, for `x :: a` and `y :: b`. The types `a` and `b` also contain a bottom element, so we’ll need non-strict memo tries for them:

``````
instance (HasTrie a, HasTrie b) ⇒ HasTrie (Either a b) where
  type STrie (Either a b) = Trie a :*: Trie b
  sTrie   f           = trie (f ∘ Left) :*: trie (f ∘ Right)
  sUntrie (ta :*: tb) = untrie ta `either` untrie tb
``````

Just as in the unit instance (above), the only visible change from hyper-strict to strict is that the left-hand sides use the strict trie type and operations. The right-hand sides are written exactly as before, though now they refer to non-strict tries and their operations.

### Products

With products, we run into some trouble. As a first attempt, change only the names on the left-hand side:

``````
instance (HasTrie a, HasTrie b) ⇒ HasTrie (a , b) where
  type STrie (a , b) = Trie a :. Trie b
  sTrie   f      = O (trie (trie ∘ curry f))
  sUntrie (O tt) = uncurry (untrie ∘ untrie tt)
``````

This `sUntrie` definition, however, leads to an error in type-checking:

``````
Could not deduce (HasLub (Trie b v)) from the context (HasLub v)
  arising from a use of `untrie'
``````

The troublesome `untrie` use is the one applied directly to `tt`. (Thank you for column numbers, GHC.)

So what’s going on here? In this definition, `sUntrie` takes an argument of type `(a,b) :→ v`, or equivalently `STrie (a,b) v`:

``````
O tt :: (a,b) :→ v
     :: STrie (a,b) v
     :: (Trie a :. Trie b) v
``````

The definition of type composition (from an earlier post) is

``````
newtype (g :. f) x = O (g (f x))
``````

So

``````
tt :: Trie a (Trie b v)
   :: a :→: b :→: v
``````

and

``````
untrie tt :: HasLub (b :→: v) ⇒ a → (b :→: v)
``````

The `HasLub` constraint comes from the type of `untrie` (above).

Continuing,

``````
untrie ∘ untrie tt ::
  (HasLub v, HasLub (b :→: v)) ⇒ a → (b → v)

uncurry (untrie ∘ untrie tt) ::
  (HasLub v, HasLub (b :→: v)) ⇒ (a , b) → v
``````

which is almost the required type but contains the extra requirement that `HasLub (b :→: v)`.

Hm.

Looking at the definition of `Trie` and the definitions of `STrie` for various domain types `b`, I think `HasLub (b :→: v)` holds whenever `HasLub v` does, exactly as needed. In principle, I could make this requirement explicit as a superclass constraint on `HasTrie`:

``````
class (forall v. HasLub v ⇒ HasLub (k :→: v)) ⇒ HasTrie k where ...
``````

However, Haskell’s type system isn’t quite expressive enough, even with GHC extensions (as far as I know).

#### A possible solution

We could instead define a functor-level variant of `HasLub`:

``````
class HasLubF f where
  lubF :: HasLub v ⇒ f v → f v → f v
``````

and then use `lubF` instead of `(⊔)` in `sUntrie`. The revised `HasTrie` class definition:

``````
class HasLubF (Trie k) ⇒ HasTrie k where
  type STrie k :: * → *
  sTrie   ::             (k  → v) → (k :→ v)
  sUntrie :: HasLub v ⇒ (k :→ v) → (k  → v)
``````

I would rather not replicate and modify the `HasLub` class and all of its instances, so I’m going to set this idea aside and look for another.

#### Another route

Let’s return to the problematic definition of `sUntrie` for pairs:

``````
sUntrie (O tt) = uncurry (untrie ∘ untrie tt)
``````

and recall that `tt :: a :→: b :→: v`. The strategy here was to first convert the outer trie (with domain `a`) and then the inner trie (with domain `b`).

Alternatively, we might reverse the order.

If we’re going to convert inside-out instead of outside-in, then we’ll need a way to transform each of the range elements of a trie, which is exactly what `fmap` is for. If only we had a `Functor` instance for `Trie a`, then we could redefine `sUntrie` on pair tries as follows:

``````
sUntrie (O tt) = uncurry (untrie (fmap untrie tt))
``````

As a sanity check, try compiling this definition. Sure enough, it’s okay except for a missing `Functor` instance:

``````
Could not deduce (Functor (Trie a))
  from the context (HasTrie (a, b), HasTrie a, HasTrie b)
  arising from a use of `fmap'
``````

Fixed easily enough:

``````
instance Functor (STrie k) ⇒ Functor (Trie k) where
  fmap f (Trie b t) = Trie (f b) (fmap f t)
``````

Or even, using the GHC language extensions `DeriveFunctor` and `StandaloneDeriving`, just

``````
deriving instance Functor (STrie k) ⇒ Functor (Trie k)
``````
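As an aside, here is a minimal, self-contained demonstration of these two extensions on an analogous type (`WithDefault` is a hypothetical stand-in for `Trie`, with the functor parameter `f` playing the role of `STrie k`):

```haskell
{-# LANGUAGE DeriveFunctor, StandaloneDeriving #-}

-- An analogue of Trie k v = Trie v (k :→ v): a default value plus
-- an f-structure of values.
data WithDefault f v = WithDefault v (f v)

-- Derive Functor (WithDefault f) from Functor f, just as
-- Functor (Trie k) is derived from Functor (STrie k):
deriving instance Functor f => Functor (WithDefault f)

main :: IO ()
main = case fmap (+ 1) (WithDefault 0 [1, 2, 3 :: Int]) of
  WithDefault d xs -> print (d, xs)   -- (1,[2,3,4])
```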

Now we get a slightly different error message: we’re missing a `Functor` instance for `STrie a` instead of `Trie a`:

``````
Could not deduce (Functor (STrie a))
  from the context (HasTrie (a, b), HasTrie a, HasTrie b)
  arising from a use of `fmap'
``````

By the way, we can also construct tries inside-out, if we want:

``````
sTrie f = O (fmap trie (trie (curry f)))
``````

So we’ll be in good shape if we can satisfy the `Functor` requirement on strict tries. Fortunately, all of the strict trie (higher-order) types appearing are indeed functors, since we built them up using functor combinators.

Still, we’ll have to help the type-checker see that all of the trie types involved are indeed functors. Again, a superclass constraint can capture this requirement:

``````
class Functor (STrie k) ⇒ HasTrie k where ...
``````

Unlike `HasLub`, this time the required constraint is already at the functor level, so we don’t have to define a new class. We don’t even have to define any new instances, as our functor combinators come with `Functor` instances, all of which can be derived automatically by GHC.

With this one change, all of the `HasTrie` instances go through!

### Isomorphisms

As pointed out in Memoizing higher-order functions, type isomorphism is the central, repeated theme of functional memoization. In addition to the isomorphism between functions and tries, the tries for many types are given via isomorphism with other types that have tries. In this way, we only have to define tries for our tiny set of functor combinators.

Isomorphism support is as in Elegant memoization with higher-order types, just using the new names:

``````
#define HasTrieIsomorph(Context,Type,IsoType,toIso,fromIso) \
instance Context ⇒ HasTrie (Type) where { \
  type STrie (Type) = STrie (IsoType); \
  sTrie f = sTrie (f ∘ (fromIso)); \
  sUntrie t = sUntrie t ∘ (toIso); \
}
``````

Note the use of strict tries even on the right-hand sides.

Aside: as mentioned in Composing memo tries, `trie`/`untrie` forms not just an isomorphism but a pair of type class morphisms (TCMs). (For motivation and examples of TCMs in software design, see Denotational design with type class morphisms.)

### Regular data types

Regular data types are isomorphic to fixed-points of functors. Elegant memoization with higher-order types gives a brief introduction to these notions and pointers to more information. That post also shows how to use the `Regular` type class and its instances (defined for other purposes as well) to provide hyper-strict memo tries for all regular data types.

Switching from hyper-strict to non-strict raises an awkward issue. The functor isomorphisms we used are only correct for fully defined values. When we allow full or partial undefinedness, as in a lazy language like Haskell, these isomorphisms break down.

Following A Lightweight Approach to Datatype-Generic Rewriting, here is the class I used, where “`PF`” stands for “pattern functor”:

``````
class Functor (PF t) ⇒ Regular t where
  type PF t :: * → *
  unwrap :: t → PF t t
  wrap   :: PF t t → t
``````

The `unwrap` method peels off a single layer from a regular type. For example, the top level of a list is either a unit (nil) or a pair (cons) of an element and a hole in which a list can be placed.

``````
instance Regular [a] where
  type PF [a] = Unit :+: Const a :*: Id   -- note Unit == Const ()

  unwrap []     = InL (Const ())
  unwrap (a:as) = InR (Const a :*: Id as)

  wrap (InL (Const ()))          = []
  wrap (InR (Const a :*: Id as)) = a:as
``````
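For reference, the instance above can be checked with a self-contained sketch of the functor combinators (minimal definitions of `Const`, `Id`, `:+:`, and `:*:`, omitting the `Functor` superclass on `Regular` for brevity, and with explicit parentheses in `PF [a]` since no fixities are declared):

```haskell
{-# LANGUAGE TypeFamilies, TypeOperators #-}

-- Minimal versions of the functor combinators:
newtype Const a b = Const a
newtype Id a      = Id a
data (f :+: g) a  = InL (f a) | InR (g a)
data (f :*: g) a  = f a :*: g a

type Unit = Const ()

-- The Regular class, minus the Functor superclass:
class Regular t where
  type PF t :: * -> *
  unwrap :: t -> PF t t
  wrap   :: PF t t -> t

instance Regular [a] where
  type PF [a] = Unit :+: (Const a :*: Id)

  unwrap []     = InL (Const ())
  unwrap (a:as) = InR (Const a :*: Id as)

  wrap (InL (Const ()))          = []
  wrap (InR (Const a :*: Id as)) = a:as

main :: IO ()
main = print (wrap (unwrap [3, 5 :: Int]))   -- wrap ∘ unwrap ≡ id here
```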

The catch here is that the `unwrap` and `wrap` methods do not really form an isomorphism. Instead, they satisfy a weaker connection: they form an embedding/projection pair. That is,

``````
wrap ∘ unwrap ≡ id
unwrap ∘ wrap ⊑ id
``````

To see the mismatch between `[a]` and `PF [a] [a]`, note that the latter has opportunities for partial undefinedness that have no corresponding opportunities in `[a]`. Specifically, ⊥ could occur at type `Const () [a]`, `()`, `(Const a :*: Id) [a]`, `Const a [a]`, or `Id [a]`. Any of these ⊥ values will result in `wrap` returning ⊥ altogether. For instance, if

``````
oops :: PF [Integer] [Integer]
oops = InR (⊥ :*: Id [3,5])
``````

then

``````
unwrap (wrap oops) ≡ unwrap ⊥ ≡ ⊥ ⊑ oops
``````

By examining various cases, we can prove that `unwrap (wrap p) ⊑ p` for all `p`, which is to say `unwrap ∘ wrap ⊑ id`, since information ordering on functions is defined point-wise. (See Merging partial values.)

Examining the definition of `unwrap` above shows that it does not give rise to the troublesome ⊥ points, and so a trivial equational proof shows that `wrap ∘ unwrap ≡ id`.

In the context of memoization, the additional undefined values are problematic. Consider the case of lists. The specification macro

``````
HasTrieRegular1([], ListSTrie)
``````

expands into a `newtype` and its `HasTrie` instance. Changing only the associated type and method names in the version for hyper-strict memoization:

``````
newtype ListSTrie a v = ListSTrie (PF [a] [a] :→: v)

instance HasTrie a ⇒ HasTrie [a] where
  type STrie [a] = ListSTrie a
  sTrie f = ListSTrie (sTrie (f ∘ wrap))
  sUntrie (ListSTrie t) = sUntrie t ∘ unwrap
``````

Note that the trie in a `ListSTrie` contains entries for many ⊥ sub-elements that do not correspond to any list values. The memoized function is `f ∘ wrap`, which will have many fewer ⊥ possibilities than the trie structure supports. At each of the superfluous ⊥ points, the sampled function is strict, so the `Trie` (rather than `STrie`) will contain a predictable ⊥. Considering the definition of `untrie`:

``````
untrie (Trie b t) = const b ⊔ sUntrie t
``````

we know `b ≡ ⊥`, and so `const b ⊔ sUntrie t ≡ sUntrie t`. Thus, at these points, the ⊥ value is never helpful, and we could use a strict (though not hyper-strict) trie instead of a non-strict trie.

Perhaps we could safely ignore this whole issue and lose only some efficiency, rather than correctness. Still, I’d rather build and traverse just the right trie for our regular types.

As this post is already longer than I intended, and my attention is wandering, I’ll publish it here and pick up later. Comments & suggestions please!

1. #### Felipe Lessa:

How do these lub’s affect performance in practice? IIRC, each of them creates two threads and uses some synchronizations between them. Are there cases where the non-strictness would outperform the performance penalty?

Cheers! =)

2. #### conal:

Hi Felipe,

> How do these lub’s affect performance in practice? IIRC, each of them creates two threads and uses some synchronizations between them.

The performance depends a lot on the implementation of `unamb` and `lub` and their susceptibility to compiler optimization. The most naive implementation I know of works as you mentioned: each `unamb` starts two threads, with an `MVar` to coordinate them. However, the threads can be avoided when (a) either argument is already evaluated to a non-⊥ form or (b) either argument is known to be ⊥ (e.g., typically an error of some sort, including evaluating `undefined` or a failed pattern-match). In other words,

```
a ⊔ ⊥ ≡ a
⊥ ⊔ b ≡ b

a `unamb` ⊥ ≡ a
⊥ `unamb` b ≡ b

a `unamb` b ≡ a  if  a ≠ ⊥
a `unamb` b ≡ b  if  b ≠ ⊥
```

All with the proviso that the preconditions of `unamb` or `lub` are satisfied.

I’m more interested in the soundness and usefulness of uses of `lub` than its performance right now. Once there are compelling demonstrations of both, then there will be enough motivation to make high-performance implementations. Great performance might require some hacking inside the language run-time system. This same evolution occurred for lazy evaluation.

> Are there cases where the non-strictness would outperform the performance penalty?

Yes, since non-strictness sometimes replaces non-termination with termination — a pretty impressive speed-up.

Seriously, though, like laziness, lub-based programming is about correctness, simple & rigorous reasoning, and modularity, as well as performance.