Parallel speculative addition via memoization

I’ve been thinking much more about parallel computation for the last couple of years, especially since starting to work at Tabula a year ago. Until getting into parallelism explicitly, I’d naïvely thought that my pure functional programming style was mostly free of sequential bias. After all, functional programming lacks the implicit accidental dependencies imposed by the imperative model. Now, however, I’m coming to see that designing parallel-friendly algorithms takes attention to minimizing the depth of the remaining, explicit data dependencies.

As an example, consider binary addition, carried out from least to most significant bit (as usual). We can immediately compute the first (least significant) bit of the result, but in order to compute the second bit, we’ll have to know whether or not a carry resulted from the first addition. More generally, the (n+1)th sum & carry require knowing the nth carry, so this algorithm does not allow parallel execution. Even if we have one processor per bit position, only one processor will be able to work at a time, due to the linear chain of dependencies.
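
To make the carry chain concrete, here is a minimal ripple-carry sketch in Haskell (my own illustration, not the post’s code; addBit and rippleAdd are hypothetical names):

-- Full adder for one bit position: sum bit and carry out.
addBit :: Bool -> Bool -> Bool -> (Bool, Bool)
addBit x y cin = ((x /= y) /= cin, (x && y) || (cin && (x /= y)))

-- Least-significant bit first; each step waits on the previous carry,
-- so the dependency chain is linear in the number of bits.
rippleAdd :: [Bool] -> [Bool] -> Bool -> [Bool]
rippleAdd []     []     c = [c]
rippleAdd (x:xs) (y:ys) c = s : rippleAdd xs ys c'
  where (s, c') = addBit x y c
rippleAdd _      _      _ = error "rippleAdd: unequal lengths"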

One general technique for improving parallelism is speculation—doing more work than might be needed so that we don’t have to wait to find out exactly what will be needed. In this post, we’ll see a progression of definitions for bitwise addition. We’ll start with a linear-depth chain of carry dependencies and end with logarithmic depth. Moreover, by making careful use of abstraction, these versions will simply be different type specializations of a single, extremely terse polymorphic definition.

Continue reading ‘Parallel speculative addition via memoization’ »

From tries to trees

This post is the last of a series of six relating numbers, vectors, and trees, revolving around the themes of static size-typing and memo tries. We’ve seen that length-typed vectors form a trie for bounded numbers, and can handily represent numbers as well. We’ve also seen that n-dimensional vectors themselves have an elegant trie, which is the n-ary composition of the element type’s trie functor:

type VTrie n a = Trie a :^ n 

where for any functor f and natural number type n,

f :^ n ≅ f ∘ ⋯ ∘ f  -- (n times)

This final post in the series places this elegant mechanism of n-ary functor composition into a familiar & useful context, namely trees. Again, type-encoded Peano numbers are central. Just as BNat uses these number types to (statically) bound natural numbers (e.g., for a vector index or a numerical digit), and Vec uses number types to capture vector length, we’ll next use number types to capture tree depth.
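
To give a flavor of using number types to capture tree depth, here is one plausible shape for depth-indexed perfect binary trees. This is a hedged sketch of my own, not necessarily the Tree type (or its Branch variants) that the post settles on:

{-# LANGUAGE GADTs, KindSignatures, EmptyDataDecls #-}

data Z        -- type-level zero
data S n      -- type-level successor

-- A perfect binary tree whose depth is fixed by the type index d.
data Tree :: * -> * -> * where
  Leaf   :: a -> Tree Z a
  Branch :: Tree d a -> Tree d a -> Tree (S d) a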

Edits:

  • 2011-02-02: Changes thanks to comments from Sebastian Fischer
    • Added note about number representations and leading zeros (without size-typing).
    • Added pointer to Memoizing polymorphic functions via unmemoization for derivation of Tree d a ≅ [d] → a.
    • Fixed signatures for some Branch variants, bringing type parameter a into parens.
    • Clarification about number of VecTree vs pairing constructors in remarks on left- vs right-folded trees.
  • 2011-02-06: Fixed link to From Fast Exponentiation to Square Matrices.

Continue reading ‘From tries to trees’ »

A trie for length-typed vectors

As you might have noticed, I’ve been thinking and writing about memo tries lately. I don’t mean to; they just keep coming up.

Memoization is the conversion of functions to data structures. A simple, elegant, and purely functional form of memoization comes from applying three common type isomorphisms, which also correspond to three laws of exponents, familiar from high school math, as noted by Ralf Hinze in his paper Generalizing Generalized Tries.
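
For reference, the three isomorphisms correspond to the exponent laws a^1 = a, a^(b + c) = a^b × a^c, and a^(b × c) = (a^c)^b. Stated roughly in type form (glossing over strictness):

() → a ≅ a
(b + c) → a ≅ (b → a) × (c → a)
(b × c) → a ≅ b → (c → a)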

In Haskell, one can neatly formulate memo tries via an associated functor, Trie, with a convenient synonym "k ↛ v" for Trie k v, as in Elegant memoization with higher-order types. (Note that I’ve changed my pretty-printing from "k :→: v" to "k ↛ v".) The key property is that the data structure encodes (is isomorphic to) a function, i.e.,

k ↛ a ≅ k → a

In most cases, we ignore non-strictness, though there is a delightful solution for memoizing non-strict functions correctly.

My previous four posts explored the use of types to statically bound numbers and to determine lengths of vectors.

Just as (infinite-only) streams are the natural trie for unary natural numbers, we saw in Reverse-engineering length-typed vectors that length-typed vectors (one-dimensional arrays) are the natural trie for statically bounded natural numbers:

BNat n ↛ a ≡ Vec n a

and so

BNat n → a ≅ Vec n a

In retrospect, this relationship is completely unsurprising, since a vector of length n is a collection of values, indexed by 0, …, n − 1.

In that same post, I noted that vectors are not only a trie for bounded numbers, but when the elements are also bounded numbers, the vectors can also be thought of as numbers. Both the number of digits and the number base are captured statically, in types:

type Digits n b = Vec n (BNat b)

The type parameters n and b here are type-encodings of unary numbers, i.e., built up from zero and successor (Z and S). For instance, when b ≡ S (S Z), we have n-bit binary numbers.
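
For instance, a hypothetical synonym (the name is mine, purely for illustration) for four-bit binary numbers:

type Nibble = Digits (S (S (S (S Z)))) (S (S Z))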

In this new post, I look at another question of tries and vectors. Given that Vec n is the trie for BNat n, is there also a trie for Vec n?

Edits:

  • 2011-01-31: Switched trie notation to "k ↛ v" to avoid missing character on iPad.

Continue reading ‘A trie for length-typed vectors’ »

Reverse-engineering length-typed vectors

The last few posts followed a winding path toward a formulation of a type for length-typed vectors. In Fixing lists, I mused how something like lists could be a trie type. The Stream functor (necessarily infinite lists) is the natural trie for Peano numbers. The standard list functor [] (possibly finite lists) doesn’t seem to be a trie, since it’s built from sums. However, the functor Vec n of vectors ("fixed lists") of length n is built from (isomorphic to) products only (for any given n), and so might well be a trie.

Of what type is Vec n the corresponding trie? In other words, for what type q is Vec n a isomorphic to q → a (for all a)?

Turning this question on its head, what simpler type gives rise to length-typed vectors in a standard fashion?

Edits:

  • 2011-02-01: Define Digits n b as BNat n ↛ BNat b.

Continue reading ‘Reverse-engineering length-typed vectors’ »

Doing more with length-typed vectors

The post Fixing lists defined a (commonly used) type of vectors, whose lengths are determined statically, by type. In Vec n a, the length is n, and the elements have type a, where n is a type-encoded unary number, built up from zero and successor (Z and S).

infixr 5 :<

data Vec ∷ * → * → * where
  ZVec ∷ Vec Z a
  (:<) ∷ a → Vec n a → Vec (S n) a

It was fairly easy to define foldr for a Foldable instance, fmap for Functor, and (⊛) for Applicative. Completing the Applicative instance is tricky, however. Unlike foldr, fmap, and (⊛), pure doesn’t have a vector structure to crawl over. It must create just the right structure anyway. I left this challenge as a question to amuse readers. In this post, I give a few solutions, including my current favorite.
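
Below is a hedged sketch of one standard approach (not necessarily the post’s favorite): supply pure via a class indexed by the length type, so that instance selection builds exactly the right amount of structure. The class and method names are mine:

{-# LANGUAGE GADTs, KindSignatures, EmptyDataDecls #-}

data Z
data S n

infixr 5 :<

data Vec :: * -> * -> * where
  ZVec :: Vec Z a
  (:<) :: a -> Vec n a -> Vec (S n) a

-- The length type directs how much structure to build.
class PureVec n where
  pureVec :: a -> Vec n a

instance PureVec Z where
  pureVec _ = ZVec

instance PureVec n => PureVec (S n) where
  pureVec a = a :< pureVec a

One cost is the extra class constraint on pure, which is part of what makes alternative solutions worth exploring.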

You can find the code for this post and the two previous ones in a code repository.

Continue reading ‘Doing more with length-typed vectors’ »

Fixing lists

In the post Memoizing polymorphic functions via unmemoization, I toyed with the idea of lists as tries. I don’t think [a] is a trie, simply because [a] is a sum type (being either nil or a cons), while tries are built out of the identity, product, and composition functors. In contrast, Stream is a trie, being built solely with the identity and product functors. Moreover, Stream is not just any old trie, it is the trie that corresponds to Peano (unary natural) numbers, i.e., Stream a ≅ N → a, where

data N = Zero | Succ N

data Stream a = Cons a (Stream a)

If we didn’t already know the Stream type, we would derive it systematically from N, using standard isomorphisms.
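
The derivation would go roughly like this (a sketch, glossing over strictness):

N → a
≅ (() + N) → a        -- unfold N ≅ () + N (Zero or Succ)
≅ (() → a) × (N → a)  -- exponent of a sum
≅ a × (N → a)         -- exponent of one
≅ Stream a            -- the same shape as Cons, repeated coinductively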

Stream is a trie (over unary numbers), thanks to having no choice points, i.e., no sums in its construction. However, streams are infinite-only, which is not always what we want. In contrast, lists can be finite, but are not a trie in any sense I understand. In this post, I look at how to fix lists so that they can be finite and yet be a trie, thanks to having no choice points (sums).

You can find the code for this post and the previous one in a code repository.

Continue reading ‘Fixing lists’ »

Type-bounded numbers

I’ve been thinking a lot lately about how to derive low-level massively parallel programs from high-level specifications. One of the recurrent tools is folds (reductions) with an associative operator. Associativity allows a linear chain of computations to be restructured into a tree, exposing parallelism. I’ll write up some of my thoughts on deriving parallel programs, but first I’d like to share a few fun ideas I’ve encountered, relating natural numbers (represented in various bases), vectors (one-dimensional arrays), and trees. This material got rather long for a single blog post, so I’ve broken it up into six. A theme throughout will be using types to capture the sizes of the numbers, vectors, and trees.
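
For instance, with an associative ⊕, a left-nested chain of depth three regroups into a balanced tree of depth two, and in general combining n values goes from depth n − 1 to depth ⌈log₂ n⌉:

((a ⊕ b) ⊕ c) ⊕ d ≡ (a ⊕ b) ⊕ (c ⊕ d)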

In writing this series, I wanted to explore an idea for how binary numbers can emerge from simpler and/or more universal notions. And how trees can likewise emerge from binary numbers.

Let’s start with unary (Peano) natural numbers:

data Unary = Zero | Succ Unary

You might notice a similarity with the list type, which could be written as follows:

data List a = Nil | Cons a (List a)

or with a bit of renaming:

data [a] = [] | a : [a]

Specializing a to (), we could just as well have defined Unary as a list of unit values:

type Unary = [()]

Though only if we’re willing to ignore bottom elements (i.e., ⊥ ∷ ()).

Suppose, however, that we don’t want to use unary. We could define and use a type for binary natural numbers. A binary number is either zero, or a zero bit followed by a binary number, or a one bit followed by a binary number:

data Binary = Zero | ZeroAnd Binary | OneAnd Binary

Alternatively, combine the latter two cases into one, making the bit type explicit:

data Binary = Zero | NonZero Bit Binary

Equivalently,

type Binary = [Bit]

We could define the Bit type as a synonym for Bool or as its own distinct, two-valued data type.

Next, how about ternary numbers, decimal numbers, etc.? Rather than defining an ad hoc collection of data types, how might we define a single general type of n-ary numbers?

You can find the code for this post in a code repository.

Edits:

  • 2011-01-30: Example of finding the natural numbers greater than a given one
  • 2011-01-30: Equality and comparison
  • 2011-01-30: More section headings
  • 2011-01-30: Mention of correspondence to commutative diagram
  • 2011-01-30: Pointer to code repository.

Continue reading ‘Type-bounded numbers’ »

Adding numbers

Introduction

I’m starting to think about exact numeric computation. As a first step in getting into issues, I’ve been playing with addition on number representations, particularly carry look-ahead adders.

This post plays with adding numbers and explores a few variations, beginning with the standard algorithm I learned as a child, namely working from right to left (least to most significant), propagating carries. For fun & curiosity, I also try out a pseudo-parallel version using circular programming, as well as a state-monad formulation. Each of these variations has its own elegance.
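
As a rough sketch of the state-monad idea (my own illustration using mtl’s State; the post’s formulation and digit types surely differ), the carry is threaded as state while corresponding digits are combined:

import Control.Monad (zipWithM)
import Control.Monad.State (State, evalState, get, put)

-- Add corresponding digits in the given base, threading the carry as state.
addDigit :: Int -> Int -> Int -> State Int Int
addDigit base x y = do
  carry <- get
  let total = x + y + carry
  put (total `div` base)
  return (total `mod` base)

-- Least-significant digit first; the final carry is dropped here for brevity.
addDigits :: Int -> [Int] -> [Int] -> [Int]
addDigits base xs ys = evalState (zipWithM (addDigit base) xs ys) 0

Replacing State with a backward state monad would thread the carry from the end of the list toward the front, which is what the left-to-right variant needs.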

While familiar and simple, right-to-left algorithms have a fundamental limitation. Since they begin with the least significant digit, they cannot be applied to numbers that have infinitely many decreasingly significant digits. To add exact real numbers, we’ll need a different algorithm.

Given clear formulations of right-to-left addition, and with exact real addition in mind, I was curious about left-to-right addition. The circular formulation adapts straightforwardly. Delightfully, the monadic version adapts even more easily, by replacing the usual state monad with the backward state monad.

To exploit the right-to-left algorithms in exact real addition, I had to tweak the single-digit addition step to be a bit laxer (less strict). With this change, infinite-digit addition works just fine.

Continue reading ‘Adding numbers’ »