Doing more with length-typed vectors

The post Fixing lists defined a (commonly used) type of vectors, whose lengths are determined statically, by type. In Vec n a, the length is n, and the elements have type a, where n is a type-encoded unary number, built up from zero and successor (Z and S).

infixr 5 :<

data Vec ∷ * → * → * where
  ZVec ∷ Vec Z a
  (:<) ∷ a → Vec n a → Vec (S n) a

It was fairly easy to define foldr for a Foldable instance, fmap for Functor, and (⊛) for Applicative. Completing the Applicative instance is tricky, however. Unlike foldr, fmap, and (⊛), pure doesn’t have a vector structure to crawl over. It must create just the right structure anyway. I left this challenge as a question to amuse readers. In this post, I give a few solutions, including my current favorite.
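
To give a hint of where a solution can come from (a sketch of one common approach, with class and method names of my own choosing, not necessarily any of the post’s solutions), the missing structure can be recovered from the type-level length via a class:

-- Reify the type-level length, so each instance knows how many copies to produce.
class PureVec n where
  pureV ∷ a → Vec n a

instance PureVec Z where
  pureV _ = ZVec

instance PureVec n ⇒ PureVec (S n) where
  pureV a = a :< pureV a

With such a constraint available, pure can be pureV, and the remaining Applicative obligations go through.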

You can find the code for this post and the two previous ones in a code repository.

Continue reading ‘Doing more with length-typed vectors’ »

Fixing lists

In the post Memoizing polymorphic functions via unmemoization, I toyed with the idea of lists as tries. I don’t think [a] is a trie, simply because [a] is a sum type (being either nil or a cons), while tries are built out of the identity, product, and composition functors. In contrast, Stream is a trie, being built solely with the identity and product functors. Moreover, Stream is not just any old trie, it is the trie that corresponds to Peano (unary natural) numbers, i.e., Stream a ≅ N → a, where

data N = Zero | Succ N

data Stream a = Cons a (Stream a)

If we didn’t already know the Stream type, we would derive it systematically from N, using standard isomorphisms.
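
To make the correspondence Stream a ≅ N → a concrete, here is a sketch of the two directions (function names are mine):

toStream ∷ (N → a) → Stream a            -- tabulate a function over unary naturals
toStream f = Cons (f Zero) (toStream (f ∘ Succ))

fromStream ∷ Stream a → (N → a)          -- index back into the trie
fromStream (Cons a _)  Zero     = a
fromStream (Cons _ as) (Succ n) = fromStream as n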

Stream is a trie (over unary numbers), thanks to having no choice points, i.e., no sums in its construction. However, streams are infinite-only, which is not always what we want. In contrast, lists can be finite, but are not a trie in any sense I understand. In this post, I look at how to fix lists so that they can be finite and yet be a trie, thanks to having no choice points (sums).

You can find the code for this post and the previous one in a code repository.

Continue reading ‘Fixing lists’ »

Type-bounded numbers

I’ve been thinking a lot lately about how to derive low-level massively parallel programs from high-level specifications. One of the recurrent tools is folds (reductions) with an associative operator. Associativity allows a linear chain of computations to be restructured into a tree, exposing parallelism. I’ll write up some of my thoughts on deriving parallel programs, but first I’d like to share a few fun ideas I’ve encountered, relating natural numbers (represented in various bases), vectors (one-dimensional arrays), and trees. This material got rather long for a single blog post, so I’ve broken it up into six. A theme throughout will be using types to capture the sizes of the numbers, vectors, and trees.

In writing this series, I wanted to explore an idea for how binary numbers can emerge from simpler and/or more universal notions. And how trees can likewise emerge from binary numbers.

Let’s start with unary (Peano) natural numbers:

data Unary = Zero | Succ Unary

You might notice a similarity with the list type, which could be written as follows:

data List a = Nil | Cons a (List a)

or with a bit of renaming:

data [a] = [] | a : [a]

Specializing a to (), we could just as well have defined Unary as a list of unit values:

type Unary = [()]

Though only if we’re willing to ignore bottom elements (i.e., ⊥ ∷ ()).
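
Spelling out the correspondence (conversion names are mine), relative to the data definition of Unary above:

toUnits ∷ Unary → [()]                   -- one () per Succ
toUnits Zero     = []
toUnits (Succ n) = () : toUnits n

fromUnits ∷ [()] → Unary
fromUnits []       = Zero
fromUnits (_ : us) = Succ (fromUnits us)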

Suppose, however, that we don’t want to use unary. We could define and use a type for binary natural numbers. A binary number is either zero, or a zero bit followed by a binary number, or a one bit followed by a binary number:

data Binary = Zero | ZeroAnd Binary | OneAnd Binary

Alternatively, combine the latter two cases into one, making the bit type explicit:

data Binary = Zero | NonZero Bit Binary

Equivalently,

type Binary = [Bit]

We could define the Bit type as a synonym for Bool or as its own distinct, two-valued data type.
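
For concreteness, one possible rendering (my own choices, including reading digit strings with the least significant bit first):

data Bit = O | I          -- a distinct two-valued type; alternatively, type Bit = Bool

six ∷ Binary              -- 6 is 110 in binary, written here least-significant-bit first
six = [O, I, I]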

Next, how about ternary numbers, decimal numbers, etc? Rather than defining an ad hoc collection of data types, how might we define a single general type of n-ary numbers?
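
One shape such a general type can take (a sketch with my own names, using GADT syntax as with Vec above; the post develops its own version) is to bound each digit by a type-level number serving as the base:

data Z                    -- type-level zero
data S n                  -- type-level successor

data BNat n where         -- naturals strictly less than n, so BNat n has exactly n values
  BZero ∷ BNat (S n)
  BSucc ∷ BNat n → BNat (S n)

type Digits n = [BNat n]  -- an n-ary number: a string of digits, each bounded by the base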

You can find the code for this post in a code repository.

Edits:

  • 2011-01-30: Example of finding the natural numbers greater than a given one
  • 2011-01-30: Equality and comparison
  • 2011-01-30: More section headings
  • 2011-01-30: Mention of correspondence to commutative diagram
  • 2011-01-30: Pointer to code repository.

Continue reading ‘Type-bounded numbers’ »

Adding numbers

Introduction

I’m starting to think about exact numeric computation. As a first step in getting into issues, I’ve been playing with addition on number representations, particularly carry look-ahead adders.

This post plays with adding numbers and explores a few variations, beginning with the standard algorithm I learned as a child, namely working from right to left (least to most significant), propagating carries. For fun & curiosity, I also try out a pseudo-parallel version using circular programming, as well as a state-monad formulation. Each of these variations has its own elegance.
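
As a point of reference, here is a minimal sketch of the schoolbook right-to-left algorithm on little-endian digit lists (my own formulation, not the post’s):

type Digit = Int                                      -- assumed to lie in the range 0..9

-- Add two digits and an incoming carry, yielding a sum digit and an outgoing carry.
addDigit ∷ Digit → (Digit, Digit) → (Digit, Digit)
addDigit cin (x, y) = let (c', d) = (cin + x + y) `divMod` 10 in (d, c')

-- Propagate carries from least to most significant position.
addRL ∷ [(Digit, Digit)] → [Digit]
addRL = go 0
 where
   go c []       = [c | c /= 0]
   go c (p : ps) = let (d, c') = addDigit c p in d : go c' ps

For example, addRL (zip [9,9] [1,0]) yields [0,0,1], i.e., 99 + 1 = 100 with digits written least significant first.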

While familiar and simple, right-to-left algorithms have a fundamental limitation. Since they begin with the least significant digit, they cannot be applied to numbers that have infinitely many decreasingly significant digits. To add exact real numbers, we’ll need a different algorithm.

Given clear formulations of right-to-left addition, and with exact real addition in mind, I was curious about left-to-right addition. The circular formulation adapts straightforwardly. Delightfully, the monadic version adapts even more easily, by replacing the usual state monad with the backward state monad.
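
For reference, here is one rendering of the backward (reverse) state monad (my own definition from memory, not necessarily the post’s formulation). The state threads right to left through (>>=), which is just what left-to-right digit processing needs when the carries flow the other way; laziness makes the circular let bindings work.

newtype RState s a = RState { runRState ∷ s → (a, s) }

instance Functor (RState s) where
  fmap f m = RState (\s → let (a, s') = runRState m s in (f a, s'))

instance Applicative (RState s) where
  pure a    = RState (\s → (a, s))
  mf <*> mx = RState (\s → let (f, s'') = runRState mf s'   -- mf sees the state produced by mx
                               (x, s')  = runRState mx s
                           in  (f x, s''))

instance Monad (RState s) where
  m >>= f = RState (\s → let (a, s'') = runRState m s'      -- m sees the state produced by f's computation
                             (b, s')  = runRState (f a) s
                         in  (b, s''))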

To exploit the left-to-right algorithms in exact real addition, I had to tweak the single-digit addition step to be a bit laxer (less strict). With this change, infinite-digit addition works just fine.

Continue reading ‘Adding numbers’ »

Memoizing polymorphic functions via unmemoization

Last year I wrote two posts on memoizing polymorphic functions. The first post gave a solution of sorts that uses a combination of patricia trees (for integer-keyed maps), stable names, and type equality proofs. The second post showed how to transform some functions that do not quite fit the required form so that they do fit.

Dan Piponi wrote a follow-up post Memoizing Polymorphic Functions with High School Algebra and Quantifiers showing a different approach that was more in the spirit of type-directed functional memoization, as it follows purely from mathematical properties, free of the deeply operational magic of stable names. Recently, I finally studied and worked with Dan’s post enough to understand what he did. It’s very clever and beautiful!

This post re-presents Dan’s elegant insight as I understand it, via some examples that helped it come together for me.

Continue reading ‘Memoizing polymorphic functions via unmemoization’ »

Fixing broken isomorphisms — details for non-strict memoization, part 2

The post Details for non-strict memoization, part 1 works out a systematic way of doing non-strict memoization, i.e., correct memoization of non-strict (and more broadly, non-hyper-strict) functions. As I mentioned at the end, there was an awkward aspect, which is that the purported “isomorphisms” used for regular types are not quite isomorphisms.

For instance, functions from triples are memoized by converting to and from nested pairs:

untriple ∷ (a,b,c) → ((a,b),c)
untriple (a,b,c) = ((a,b),c)

triple ∷ ((a,b),c) → (a,b,c)
triple ((a,b),c) = (a,b,c)

Then untriple and triple form an embedding/projection pair, i.e.,

triple ∘ untriple ≡ id
untriple ∘ triple ⊑ id

The reason for the inequality is that the nested-pair form permits (⊥,c), which does not correspond to any triple.

untriple (triple (⊥,c)) ≡ untriple ⊥ ≡ ⊥

Can we patch this problem by simply using an irrefutable (lazy) pattern in the definition of triple, i.e., triple (~(a,b),c) = (a,b,c)? Let’s try:

untriple (triple (⊥,c)) ≡ untriple (⊥,⊥,c) ≡ ((⊥,⊥),c)

So isomorphism fails and so does even the embedding/projection property.

Similarly, to deal with regular algebraic data types, I used a class that describes regular data types as repeated applications of a single, associated pattern functor (following A Lightweight Approach to Datatype-Generic Rewriting):

class Functor (PF t) ⇒ Regular t where
  type PF t ∷ * → *
  unwrap ∷ t → PF t t
  wrap   ∷ PF t t → t

Here unwrap converts a value into its pattern functor form, and wrap converts back. For example, here is the Regular instance I had used for lists:

instance Regular [a] where
  type PF [a] = Const () :+: Const a :*: Id
 
  unwrap []     = InL (Const ())
  unwrap (a:as) = InR (Const a :*: Id as)
 
  wrap (InL (Const ()))          = []
  wrap (InR (Const a :*: Id as)) = a:as
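
For reference, the functor building blocks used here (Id, Const, :+:, :*:) have roughly the standard definitions from the datatype-generic rewriting setting (Functor instances omitted):

newtype Id        a = Id a
newtype Const t   a = Const t
data    (f :+: g) a = InL (f a) | InR (g a)
data    (f :*: g) a = f a :*: g a

infixr 6 :+:
infixr 7 :*: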

Again, we have an embedding/projection pair, rather than a genuine isomorphism:

wrap ∘ unwrap ≡ id
unwrap ∘ wrap ⊑ id

The inequality comes from ⊥ values occurring in PF [a] [a] at type Const () [a], (), (Const a :*: Id) [a], Const a [a], or Id [a].

Continue reading ‘Fixing broken isomorphisms — details for non-strict memoization, part 2’ »

Lazier functional programming, part 2

In Lazier functional programming, part 1, I suggested that some of the standard tools of lazy functional programming are not as lazy as they might be, including even if-then-else. Generalizing from boolean conditionals, I posed the puzzle of how to define a lazier either function, which encapsulates case analysis for sum types. The standard definition:

data Either a b = Left a | Right b
 
either ∷ (a → c) → (b → c) → Either a b → c
either f g (Left  x) = f x
either f g (Right y) = g y

The comments to part 1 fairly quickly converged to something close to the laxer definition I had in mind:

eitherL f g = const (f ⊥ ⊓ g ⊥) ⊔ either f g

which is a simple generalization of Luke Palmer‘s laxer if-then-else (reconstructed from memory):

bool c a b = (a ⊓ b) ⊔ (if c then a else b)

Continue reading ‘Lazier functional programming, part 2’ »