Deriving list scans

I’ve been playing with deriving efficient parallel, imperative implementations of "prefix sum" or more generally "left scan". Following posts will explore the parallel & imperative derivations, but as a warm-up, I’ll tackle the functional & sequential case here.

Folds

You’re probably familiar with the higher-order functions for left and right "fold". The current documentation says:

`foldl`, applied to a binary operator, a starting value (typically the left-identity of the operator), and a list, reduces the list using the binary operator, from left to right:

``foldl f z [x1, x2, ⋯, xn] ≡ (⋯((z `f` x1) `f` x2) `f`⋯) `f` xn``

The list must be finite.

`foldr`, applied to a binary operator, a starting value (typically the right-identity of the operator), and a list, reduces the list using the binary operator, from right to left:

``foldr f z [x1, x2, ⋯, xn] ≡ x1 `f` (x2 `f` ⋯ (xn `f` z)⋯)``

And here are typical definitions:

```
foldl ∷ (b → a → b) → b → [a] → b
foldl f z []     = z
foldl f z (x:xs) = foldl f (z `f` x) xs

foldr ∷ (a → b → b) → b → [a] → b
foldr f z []     = z
foldr f z (x:xs) = x `f` foldr f z xs
```

Notice that `foldl` builds up its result one step at a time and reveals it all at once, in the end. The whole result value is locked up until the entire input list has been traversed. In contrast, `foldr` starts revealing information right away, and so works well with infinite lists. Like `foldl`, `foldr` also yields only a final value.
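This difference is easy to observe with only standard Prelude functions. `foldr (:) []` is the identity on lists, and thanks to laziness it yields elements on demand, so it works even on an infinite input:

```haskell
-- foldr starts producing output immediately, so an infinite input is fine:
firstFive :: [Integer]
firstFive = take 5 (foldr (:) [] [1 ..])   -- [1,2,3,4,5]

-- The analogous left fold, foldl (flip (:)) [] [1 ..], would never return:
-- no part of its result is available until the whole list has been consumed.
```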

Sometimes it’s handy to also get to all of the intermediate steps. Doing so takes us beyond the land of folds to the kingdom of scans.

Scans

The `scanl` and `scanr` functions correspond to `foldl` and `foldr` but produce all intermediate accumulations, not just the final one.

```
scanl ∷ (b → a → b) → b → [a] → [b]
scanl f z [x1, x2, ⋯] ≡ [z, z `f` x1, (z `f` x1) `f` x2, ⋯]

scanr ∷ (a → b → b) → b → [a] → [b]
scanr f z [⋯, xn-1, xn] ≡ [⋯, xn-1 `f` (xn `f` z), xn `f` z, z]
```

As you might expect, the last value of a left scan is the complete left fold, and the first value of a right scan is the complete right fold:

```
last (scanl f z xs) ≡ foldl f z xs
head (scanr f z xs) ≡ foldr f z xs
```

which is to say

```
last ∘ scanl f z ≡ foldl f z
head ∘ scanr f z ≡ foldr f z
```
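For instance, with addition (prefix sums on one side, suffix sums on the other):

```haskell
prefixSums :: [Int]
prefixSums = scanl (+) 0 [1, 2, 3, 4]   -- [0,1,3,6,10]

suffixSums :: [Int]
suffixSums = scanr (+) 0 [1, 2, 3, 4]   -- [10,9,7,4,0]
```

Here `last prefixSums` and `head suffixSums` both equal 10, the complete fold.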

The standard scan definitions are trickier than the fold definitions:

```
scanl ∷ (b → a → b) → b → [a] → [b]
scanl f z ls = z : (case ls of
                      []   → []
                      x:xs → scanl f (z `f` x) xs)

scanr ∷ (a → b → b) → b → [a] → [b]
scanr _ z []     = [z]
scanr f z (x:xs) = (x `f` q) : qs
                   where qs@(q:_) = scanr f z xs
```

Every time I encounter these definitions, I have to walk through them again to see what’s going on. I finally sat down to figure out how these tricky definitions might emerge from simpler specifications — in other words, how to derive them systematically from simpler but less efficient definitions.
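One natural starting point for such a derivation (my guess at its shape, not necessarily the one in the follow-up posts) is the simple but quadratic specification: a left scan is the left fold of every prefix.

```haskell
import Data.List (inits)

-- Simple but inefficient specification of scanl: fold every prefix.
-- (scanlSpec is my name for it, not a standard function.)
scanlSpec :: (b -> a -> b) -> b -> [a] -> [b]
scanlSpec f z = map (foldl f z) . inits

-- scanlSpec (+) 0 [1,2,3]  ≡  [0,1,3,6]  ≡  scanl (+) 0 [1,2,3]
```

The specification redoes each prefix fold from scratch, taking quadratic work, whereas the tricky definition above shares each accumulation with the next, taking linear work.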

Most likely, these derivations have been done before, but I learned something from the effort, and I hope you do, too.

Continue reading ‘Deriving list scans’ »

From tries to trees

This post is the last of a series of six relating numbers, vectors, and trees, revolving around the themes of static size-typing and memo tries. We’ve seen that length-typed vectors form a trie for bounded numbers, and can handily represent numbers as well. We’ve also seen that n-dimensional vectors themselves have an elegant trie, which is the n-ary composition of the element type’s trie functor:

``type VTrie n a = Trie a :^ n ``

where for any functor `f` and natural number type `n`,

``f :^ n ≅ f ∘ ⋯ ∘ f  -- (n times)``
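One way to encode this n-fold composition is as a GADT indexed by a type-level Peano number. The following is a sketch; the constructor names are mine and details may differ from the series’ actual definition:

```haskell
{-# LANGUAGE GADTs, TypeOperators #-}

-- Type-level Peano numbers:
data Z
data S n

-- n-fold composition of a functor f:
-- (f :^ Z) a holds a bare a; (f :^ S n) a wraps one more layer of f.
data (f :^ n) a where
  ZeroC :: a               -> (f :^ Z)   a
  SuccC :: f ((f :^ n) a)  -> (f :^ S n) a

unZeroC :: (f :^ Z) a -> a
unZeroC (ZeroC a) = a

-- Example: SuccC (Just (ZeroC True)) :: (Maybe :^ S Z) Bool
```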

This final post in the series places this elegant mechanism of n-ary functor composition into a familiar & useful context, namely trees. Again, type-encoded Peano numbers are central. Just as `BNat` uses these number types to (statically) bound natural numbers (e.g., for a vector index or a numerical digit), and `Vec` uses number types to capture vector length, we’ll next use number types to capture tree depth.

Edits:

• 2011-02-02: Changes thanks to comments from Sebastian Fischer
• Added pointer to Memoizing polymorphic functions via unmemoization for derivation of `Tree d a ≅ [d] → a`.
• Fixed signatures for some `Branch` variants, bringing type parameter `a` into parens.
• Clarification about number of `VecTree` vs pairing constructors in remarks on left- vs right-folded trees.
• 2011-02-06: Fixed link to From Fast Exponentiation to Square Matrices.

Continue reading ‘From tries to trees’ »

A trie for length-typed vectors

As you might have noticed, I’ve been thinking and writing about memo tries lately. I don’t mean to; they just keep coming up.

Memoization is the conversion of functions to data structures. A simple, elegant, and purely functional form of memoization comes from applying three common type isomorphisms, which also correspond to three laws of exponents, familiar from high school math, as noted by Ralf Hinze in his paper Generalizing Generalized Tries.

In Haskell, one can neatly formulate memo tries via an associated functor, `Trie`, with a convenient synonym "`k ↛ v`" for `Trie k v`, as in Elegant memoization with higher-order types. (Note that I’ve changed my pretty-printing from "`k :→: v`" to "`k ↛ v`".) The key property is that the data structure encodes (is isomorphic to) a function, i.e.,

``k ↛ a ≅ k → a``
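In code, the interface has roughly the following shape (a sketch following the cited posts, with the ASCII `:->:` standing in for the pretty-printed `↛`), shown here with the simplest instance, where a trie over `Bool` is just a pair of values:

```haskell
{-# LANGUAGE TypeFamilies, TypeOperators #-}

-- Memo-trie interface: a data structure isomorphic to k -> a.
class HasTrie k where
  data (:->:) k :: * -> *
  trie   :: (k -> a) -> (k :->: a)
  untrie :: (k :->: a) -> (k -> a)

-- A trie over Bool stores the two possible results.
instance HasTrie Bool where
  data Bool :->: a = BoolTrie a a
  trie f = BoolTrie (f False) (f True)
  untrie (BoolTrie e t) b = if b then t else e

-- Memoization is then just the round trip:
memo :: HasTrie k => (k -> a) -> (k -> a)
memo = untrie . trie
```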

In most cases, we ignore non-strictness, though there is a delightful solution for memoizing non-strict functions correctly.

My previous four posts explored use of types to statically bound numbers and to determine lengths of vectors.

Just as (infinite-only) streams are the natural trie for unary natural numbers, we saw in Reverse-engineering length-typed vectors that length-typed vectors (one-dimensional arrays) are the natural trie for statically bounded natural numbers.

``BNat n ↛ a ≡ Vec n a``

and so

``BNat n → a ≅ Vec n a``

In retrospect, this relationship is completely unsurprising, since a vector of length n is a collection of values, indexed by 0, …, n − 1.

In that same post, I noted that vectors are not only a trie for bounded numbers, but when the elements are also bounded numbers, the vectors can also be thought of as numbers. Both the number of digits and the number base are captured statically, in types:

``type Digits n b = Vec n (BNat b)``

The type parameters `n` and `b` here are type-encodings of unary numbers, i.e., built up from zero and successor (`Z` and `S`). For instance, when `b ≡ S (S Z)`, we have n-bit binary numbers.
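At the value level, reading such a digit string as a number is itself a left fold. A small illustration (the function name and the most-significant-digit-first order are my assumptions, not fixed by the post):

```haskell
-- Read a base-b digit list, most significant digit first.
fromDigits :: Integer -> [Integer] -> Integer
fromDigits b = foldl (\acc d -> acc * b + d) 0

-- fromDigits 2 [1,0,1]  ≡  5
-- fromDigits 10 [4,2]   ≡  42
```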

In this new post, I look at another question of tries and vectors. Given that `Vec n` is the trie for `BNat n`, is there also a trie for `Vec n`?

Edits:

• 2011-01-31: Switched trie notation to "`k ↛ v`" to avoid missing character on iPad.

Continue reading ‘A trie for length-typed vectors’ »

Reverse-engineering length-typed vectors

The last few posts followed a winding path toward a formulation of a type for length-typed vectors. In Fixing lists, I mused how something like lists could be a trie type. The `Stream` functor (necessarily infinite lists) is the natural trie for Peano numbers. The standard list functor `[]` (possibly finite lists) doesn’t seem to be a trie, since it’s built from sums. However, the functor `Vec n` of vectors ("fixed lists") of length `n` is built from (isomorphic to) products only (for any given `n`), and so might well be a trie.

Of what type is `Vec n` the corresponding trie? In other words, for what type `q` is `Vec n a` isomorphic to `q → a` (for all `a`)?

Turning this question on its head, what simpler type gives rise to length-typed vectors in a standard fashion?

Edits:

• 2011-02-01: Define `Digits n b` as `BNat n ↛ BNat b`.

Continue reading ‘Reverse-engineering length-typed vectors’ »

Doing more with length-typed vectors

The post Fixing lists defined a (commonly used) type of vectors, whose lengths are determined statically, by type. In `Vec n a`, the length is `n`, and the elements have type `a`, where `n` is a type-encoded unary number, built up from zero and successor (`Z` and `S`).

```
infixr 5 :<

data Vec ∷ * → * → * where
  ZVec ∷               Vec Z     a
  (:<) ∷ a → Vec n a → Vec (S n) a
```

It was fairly easy to define `foldr` for a `Foldable` instance, `fmap` for `Functor`, and `(⊛)` for `Applicative`. Completing the `Applicative` instance is tricky, however. Unlike `foldr`, `fmap`, and `(⊛)`, `pure` doesn’t have a vector structure to crawl over. It must create just the right structure anyway. I left this challenge as a question to amuse readers. In this post, I give a few solutions, including my current favorite.
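One well-known shape of solution (a sketch only, and not necessarily the post’s favorite; the names `IsNat`, `pureV`, and `toListV` are mine) is to recover the length from the type by induction, via a class:

```haskell
{-# LANGUAGE GADTs, KindSignatures #-}

data Z
data S n

infixr 5 :<
data Vec :: * -> * -> * where
  ZVec :: Vec Z a
  (:<) :: a -> Vec n a -> Vec (S n) a

-- Build "just the right structure" by induction on the type-level length:
class IsNat n where
  pureV :: a -> Vec n a

instance IsNat Z where
  pureV _ = ZVec

instance IsNat n => IsNat (S n) where
  pureV a = a :< pureV a

-- For checking the result:
toListV :: Vec n a -> [a]
toListV ZVec      = []
toListV (x :< xs) = x : toListV xs
```

For example, `pureV 'x' :: Vec (S (S Z)) Char` is the two-element vector `'x' :< 'x' :< ZVec`.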

You can find the code for this post and the two previous ones in a code repository.

Continue reading ‘Doing more with length-typed vectors’ »

Fixing lists

In the post Memoizing polymorphic functions via unmemoization, I toyed with the idea of lists as tries. I don’t think `[a]` is a trie, simply because `[a]` is a sum type (being either nil or a cons), while tries are built out of the identity, product, and composition functors. In contrast, `Stream` is a trie, being built solely with the identity and product functors. Moreover, `Stream` is not just any old trie, it is the trie that corresponds to Peano (unary natural) numbers, i.e., `Stream a ≅ N → a`, where

```
data N = Zero | Succ N

data Stream a = Cons a (Stream a)
```

If we didn’t already know the `Stream` type, we would derive it systematically from `N`, using standard isomorphisms.
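That derivation uses the standard isomorphisms (the exponent laws); roughly:

```
Stream a ≅ N → a
         ≅ (1 + N) → a        -- N ≅ 1 + N  (Zero or Succ)
         ≅ (1 → a) × (N → a)  -- a^(u+v) ≅ a^u × a^v
         ≅ a × (N → a)        -- a^1 ≅ a
         ≅ a × Stream a       -- which is exactly the Cons shape
```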

`Stream` is a trie (over unary numbers), thanks to it having no choice points, i.e., no sums in its construction. However, streams are infinite-only, which is not always what we want. In contrast, lists can be finite, but are not a trie in any sense I understand. In this post, I look at how to fix lists so that they can be finite and yet still be a trie, thanks to having no choice points (sums).

You can find the code for this post and the previous one in a code repository.

Type-bounded numbers

I’ve been thinking a lot lately about how to derive low-level massively parallel programs from high-level specifications. One of the recurrent tools is folds (reductions) with an associative operator. Associativity allows a linear chain of computations to be restructured into a tree, exposing parallelism. I’ll write up some of my thoughts on deriving parallel programs, but first I’d like to share a few fun ideas I’ve encountered, relating natural numbers (represented in various bases), vectors (one-dimensional arrays), and trees. This material got rather long for a single blog post, so I’ve broken it up into six. A theme throughout will be using types to capture the sizes of the numbers, vectors, and trees.

In writing this series, I wanted to explore an idea for how binary numbers can emerge from simpler and/or more universal notions, and how trees can likewise emerge from binary numbers. The simplest starting point is unary (Peano) numbers:

``data Unary = Zero | Succ Unary``

You might notice a similarity with the list type, which could be written as follows:

``data List a = Nil | Cons a (List a)``

or with a bit of renaming:

``data [a] = [] | a : [a]``

Specializing `a` to `()`, we could just as well have defined `Unary` as a list of unit values:

``type Unary = [()]``
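With this representation, unary arithmetic is just reuse of list operations. A small sketch (the function names here are mine, for illustration):

```haskell
type Unary = [()]

toUnary :: Int -> Unary
toUnary n = replicate n ()

fromUnary :: Unary -> Int
fromUnary = length

-- Addition of unary numbers is list append:
addU :: Unary -> Unary -> Unary
addU = (++)
```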

Though only if we’re willing to ignore bottom elements (i.e., `⊥ ∷ ()`).

Suppose, however, that we don’t want to use unary. We could define and use a type for binary natural numbers. A binary number is either zero, or a zero bit followed by a binary number, or a one bit followed by a binary number:

``data Binary = Zero | ZeroAnd Binary | OneAnd Binary``

Alternatively, combine the latter two cases into one, making the bit type explicit:

``data Binary = Zero | NonZero Bit Binary``

Equivalently,

``type Binary = [Bit]``

We could define the `Bit` type as a synonym for `Bool` or as its own distinct, two-valued data type.
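For concreteness, here is a sketch with `Bit` as its own two-valued type, along with an interpretation of a bit string as a number (the least-significant-bit-first digit order is my assumption; the post doesn’t fix one):

```haskell
data Bit = ZeroB | OneB deriving (Eq, Show)

type Binary = [Bit]

-- Interpret a bit string, least significant bit first.
fromBinary :: Binary -> Integer
fromBinary []       = 0
fromBinary (b : bs) = bitVal b + 2 * fromBinary bs
  where
    bitVal ZeroB = 0
    bitVal OneB  = 1

-- fromBinary [OneB, ZeroB, OneB]  ≡  5
```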

Next, how about ternary numbers, decimal numbers, etc? Rather than defining an ad hoc collection of data types, how might we define a single general type of n-ary numbers?

You can find the code for this post in a code repository.

Edits:

• 2011-01-30: Example of finding the natural numbers greater than a given one
• 2011-01-30: Equality and comparison
• 2011-01-30: Mention of correspondence to commutative diagram
• 2011-01-30: Pointer to code repository.

Introduction

I’m starting to think about exact numeric computation. As a first step in getting into issues, I’ve been playing with addition on number representations, particularly carry look-ahead adders.

This post plays with adding numbers and explores a few variations, beginning with the standard algorithm I learned as a child, namely working from right to left (least to most significant), propagating carries. For fun & curiosity, I also try out a pseudo-parallel version using circular programming, as well as a state-monad formulation. Each of these variations has its own elegance.
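The grade-school, right-to-left version can be sketched in a few lines. This is my own formulation for illustration (least-significant-first decimal digit lists), not the post’s code:

```haskell
-- Add two numbers represented as least-significant-first decimal digit
-- lists, propagating carries from right to left (i.e., down the lists).
addDigits :: [Int] -> [Int] -> [Int]
addDigits = go 0
  where
    go 0 [] [] = []          -- done, no carry left over
    go c [] [] = [c]         -- final carry becomes a new high digit
    go c xs ys = s `mod` 10 : go (s `div` 10) xs' ys'
      where
        (x, xs') = uncons' xs
        (y, ys') = uncons' ys
        s        = x + y + c
    -- Treat a missing digit as 0, so inputs may have different lengths.
    uncons' []     = (0, [])
    uncons' (d:ds) = (d, ds)

-- addDigits [5,7] [8,4]  ≡  [3,2,1]   -- i.e., 75 + 48 = 123
```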

While familiar and simple, right-to-left algorithms have a fundamental limitation. Since they begin with the least significant digit, they cannot be applied to numbers that have infinitely many decreasingly significant digits. To add exact real numbers, we’ll need a different algorithm.

To exploit the right-to-left algorithms in exact real addition, I had to tweak the single-digit addition step to be a bit laxer (less strict). With this change, infinite-digit addition works just fine.

Memoizing polymorphic functions via unmemoization

Last year I wrote two posts on memoizing polymorphic functions. The first post gave a solution of sorts that uses a combination of Patricia trees (for integer-keyed maps), stable names, and type equality proofs. The second post showed how to transform some functions that do not quite fit the required form so that they do fit.

Dan Piponi wrote a follow-up post Memoizing Polymorphic Functions with High School Algebra and Quantifiers showing a different approach that was more in the spirit of type-directed functional memoization, as it follows purely from mathematical properties, free of the deeply operational magic of stable names. Recently, I finally studied and worked with Dan’s post enough to understand what he did. It’s very clever and beautiful!

This post re-presents Dan’s elegant insight as I understand it, via some examples that helped it come together for me.

Continue reading ‘Memoizing polymorphic functions via unmemoization’ »

Fixing broken isomorphisms — details for non-strict memoization, part 2

The post Details for non-strict memoization, part 1 works out a systematic way of doing non-strict memoization, i.e., correct memoization of non-strict (and more broadly, non-hyper-strict) functions. As I mentioned at the end, there was an awkward aspect, which is that the purported “isomorphisms” used for regular types are not quite isomorphisms.

For instance, functions from triples are memoized by converting to and from nested pairs:

``````untriple ∷ (a,b,c) → ((a,b),c)
untriple (a,b,c) = ((a,b),c)

triple ∷ ((a,b),c) → (a,b,c)
triple ((a,b),c) = (a,b,c)
``````

Then `untriple` and `triple` form an embedding/projection pair, i.e.,

``````triple ∘ untriple ≡ id
untriple ∘ triple ⊑ id
``````

The reason for the inequality is that the nested-pair form permits `(⊥,c)`, which does not correspond to any triple.

``````untriple (triple (⊥,c)) ≡ untriple ⊥ ≡ ⊥
``````

Can we patch this problem by simply using an irrefutable (lazy) pattern in the definition of `triple`, i.e., `triple (~(a,b),c) = (a,b,c)`? Let’s try:

``````untriple (triple (⊥,c)) ≡ untriple (⊥,⊥,c) ≡ ((⊥,⊥),c)
``````

So isomorphism fails and so does even the embedding/projection property.

Similarly, to deal with regular algebraic data types, I used a class that describes regular data types as repeated applications of a single, associated pattern functor (following A Lightweight Approach to Datatype-Generic Rewriting):

``````class Functor (PF t) ⇒ Regular t where
  type PF t ∷ * → *
  unwrap ∷ t → PF t t
  wrap   ∷ PF t t → t
``````

Here `unwrap` converts a value into its pattern functor form, and `wrap` converts back. For example, here is the `Regular` instance I had used for lists:

``````instance Regular [a] where
  type PF [a] = Const () :+: Const a :*: Id

  unwrap []     = InL (Const ())
  unwrap (a:as) = InR (Const a :*: Id as)

  wrap (InL (Const ()))          = []
  wrap (InR (Const a :*: Id as)) = a:as
``````

Again, we have an embedding/projection pair, rather than a genuine isomorphism:

``````wrap ∘ unwrap ≡ id
unwrap ∘ wrap ⊑ id
``````

The inequality comes from ⊥ values occurring in `PF [a] [a]` at type `Const () [a]`, `()`, `(Const a :*: Id) [a]`, `Const a [a]`, or `Id [a]`.

Continue reading ‘Fixing broken isomorphisms — details for non-strict memoization, part 2’ »