A third view on trees

A few recent posts have played with trees from two perspectives. The more commonly used of the two I call "top-down", because the top-level structure is most immediately apparent. A top-down binary tree is either a leaf or a pair of such trees, and that pair can be accessed without wading through intervening structure. Much less commonly used are "bottom-up" trees. A bottom-up binary tree is either a leaf or a single such tree of pairs. In the non-leaf case, the pair structure of the tree elements is accessible by operations like mapping, folding, or scanning. The difference is between a pair of trees and a tree of pairs.

As an alternative to the top-down and bottom-up views on trees, I now want to examine a third view, which is a hybrid of the two. Instead of pairs of trees or trees of pairs, this hybrid view is of trees of trees, and more specifically of bottom-up trees of top-down trees. As we’ll see, these hybrid trees emerge naturally from the top-down and bottom-up views. A later post will show how this third view lends itself to an in-place (destructive) scan algorithm, suitable for execution on modern GPUs.
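To make the three shapes concrete, here is a minimal sketch in Haskell. The type names TB and TT match the edit note below; the constructor names and the hybrid synonym are illustrative assumptions, not quoted from the post.

data Pair a = a :# a                          -- uniform pairs

data TT a = LeafT a | BranchT (Pair (TT a))   -- top-down: a pair of trees
data TB a = LeafB a | BranchB (TB (Pair a))   -- bottom-up: a tree of pairs

type TH a = TB (TT a)                         -- hybrid: a bottom-up tree of top-down trees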

Edits:

  • 2011-06-04: "Suppose we have a bottom-up tree of top-down trees, i.e., t ∷ TB (TT a). Was backwards. (Thanks to Noah Easterly.)
  • 2011-06-04: Notation: "f ➶ n" and "f ➴ n".

Continue reading ‘A third view on trees’ »

Parallel tree scanning by composition

My last few blog posts have been on the theme of scans, and particularly on parallel scans. In Composable parallel scanning, I tackled parallel scanning in a very general setting. There are five simple building blocks out of which a vast assortment of data structures can be built, namely constant (no value), identity (one value), sum, product, and composition. The post defined parallel prefix and suffix scan for each of these five "functor combinators", in terms of the same scan operation on each of the component functors. Every functor built out of this basic set thus has a parallel scan. Functors defined more conventionally can be given scan implementations simply by converting to a composition of the basic set, scanning, and then converting back to the original functor. Moreover, I expect this implementation could be generated automatically, similarly to GHC’s DeriveFunctor extension.
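For reference, the five building blocks can be rendered in Haskell as follows (with TypeOperators enabled). The names Const, Id, :+:, and :*: match the code quoted later in these excerpts; the composition constructor name O is an assumption, in the style of the functor-combo library.

newtype Const b a   = Const b                  -- constant: no value
newtype Id a        = Id a                     -- identity: one value
data    (f :+: g) a = InL (f a) | InR (g a)    -- sum: one shape or the other
data    (f :*: g) a = f a :*: g a              -- product: both shapes
newtype (g :.: f) a = O (g (f a))              -- composition: nested shapes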

Now I’d like to show two examples of parallel scan composition in terms of binary trees, namely the top-down and bottom-up variants of perfect binary leaf trees used in previous posts. (In previous posts, I used the terms "right-folded" and "left-folded" instead of "top-down" and "bottom-up".) The resulting two algorithms are expressed nearly identically, but they differ significantly in the work performed. The top-down version does Θ(n log n) work, while the bottom-up version does only Θ(n), and thus the latter algorithm is work-efficient, while the former is not. Moreover, with a very simple optimization, the bottom-up tree algorithm corresponds closely to Guy Blelloch’s parallel prefix scan for arrays, given in Programming parallel algorithms. I’m delighted with this result, as I had been wondering how to think about Guy’s algorithm.

Edit:

  • 2011-05-31: Added Scan and Applicative instances for T2 and T4.

Continue reading ‘Parallel tree scanning by composition’ »

Composable parallel scanning

The post Deriving list scans gave a simple specification of the list-scanning functions scanl and scanr, and then transformed those specifications into the standard optimized implementations. Next, the post Deriving parallel tree scans adapted the specifications and derivations to a type of binary trees. The resulting implementations are parallel-friendly, but not work-efficient: they perform Θ(n log n) work, versus the linear work of the best-known sequential algorithm.

Besides the work-inefficiency, I don’t know how to extend the critical initTs and tailTs functions (analogs of inits and tails on lists) to depth-typed, perfectly balanced trees, of the sort I played with in A trie for length-typed vectors and From tries to trees. The difficulty I encounter is that the functions initTs and tailTs make unbalanced trees out of balanced ones, so I don’t know how to adapt the specifications when types prevent the existence of unbalanced trees.

This new post explores an approach to generalized scanning via type classes. After defining the classes and giving a simple example, I’ll give a simple & general framework based on composing functor combinators.
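To give a flavor of the framework, here is a hedged sketch of one plausible shape for the scan class and its composition instance (with TypeOperators enabled). The class in the post also covers suffix scans and may differ in detail, and the Applicative constraint below stands in for whatever zipping interface the post actually uses.

-- Exclusive prefix scan, returning the scanned structure and the total.
class Functor f ⇒ Scan f where
  prefixScan ∷ Monoid m ⇒ f m → (f m, m)

newtype (g :.: f) a = O (g (f a))   -- composition, repeated for self-containment

-- Scan inside each inner structure, scan the resulting totals across the
-- outer structure, then shift each inner result by its outer prefix.
instance (Scan g, Scan f, Applicative g) ⇒ Scan (g :.: f) where
  prefixScan (O gfm) = (O (adjust <$> prefixes <*> inners), total)
   where
    pairs             = fmap prefixScan gfm          -- g (f m, m)
    inners            = fmap fst pairs               -- scanned inner structures
    (prefixes, total) = prefixScan (fmap snd pairs)  -- scan the totals
    adjust p          = fmap (p <>)                  -- shift by the outer prefix

Note that the instance uses only prefixScan on g and on f separately, which is what makes the construction compositional.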

Edits:

  • 2011-03-02: Fixed typo: "constant functor is easiest" (instead of "identity functor"). Thanks, frguybob.
  • 2011-03-05: Removed final unfinished sentence.
  • 2011-07-28: Replaced "assocL" with "assocR" in the prefixScan derivation for g ∘ f.

Continue reading ‘Composable parallel scanning’ »

Fixing lists

In the post Memoizing polymorphic functions via unmemoization, I toyed with the idea of lists as tries. I don’t think [a] is a trie, simply because [a] is a sum type (being either nil or a cons), while tries are built out of the identity, product, and composition functors. In contrast, Stream is a trie, being built solely with the identity and product functors. Moreover, Stream is not just any old trie; it is the trie that corresponds to Peano (unary natural) numbers, i.e., Stream a ≅ N → a, where

data N = Zero | Succ N

data Stream a = Cons a (Stream a)

If we didn’t already know the Stream type, we would derive it systematically from N, using standard isomorphisms.
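Sketching that derivation with the standard isomorphisms (a reconstruction, not quoted from the earlier post):

N → a ≅ Either () N → a    -- unfold N ≅ Either () N (Zero vs Succ)
      ≅ (() → a, N → a)    -- a function from a sum is a pair of functions
      ≅ (a, N → a)         -- () → a ≅ a

So N → a satisfies the same defining equation as Stream a, namely a Cons cell: an a together with another stream. Unrolling forever yields exactly Stream.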

Stream is a trie (over unary numbers) thanks to its having no choice points, i.e., no sums in its construction. However, streams are infinite-only, which is not always what we want. In contrast, lists can be finite, but they are not a trie in any sense I understand. In this post, I look at how to fix lists so that they can be finite and yet still be a trie, thanks to having no choice points (sums).

You can find the code for this post and the previous one in a code repository.

Continue reading ‘Fixing lists’ »

Fixing broken isomorphisms — details for non-strict memoization, part 2

The post Details for non-strict memoization, part 1 works out a systematic way of doing non-strict memoization, i.e., correct memoization of non-strict (and more broadly, non-hyper-strict) functions. As I mentioned at the end of that post, one awkward aspect remained: the purported “isomorphisms” used for regular types are not quite isomorphisms.

For instance, functions from triples are memoized by converting to and from nested pairs:

untriple ∷ (a,b,c) → ((a,b),c)
untriple (a,b,c) = ((a,b),c)

triple ∷ ((a,b),c) → (a,b,c)
triple ((a,b),c) = (a,b,c)

Then untriple and triple form an embedding/projection pair, i.e.,

triple ∘ untriple ≡ id
untriple ∘ triple ⊑ id

The reason for the inequality is that the nested-pair form permits (⊥,c), which does not correspond to any triple.

untriple (triple (⊥,c)) ≡ untriple ⊥ ≡ ⊥

Can we patch this problem by simply using an irrefutable (lazy) pattern in the definition of triple, i.e., triple (~(a,b),c) = (a,b,c)? Let’s try:

untriple (triple (⊥,c)) ≡ untriple (⊥,⊥,c) ≡ ((⊥,⊥),c)

So the isomorphism still fails, and now even the embedding/projection property fails, since the result ((⊥,⊥),c) is strictly more defined than the argument (⊥,c).

Similarly, to deal with regular algebraic data types, I used a class that describes regular data types as repeated applications of a single, associated pattern functor (following A Lightweight Approach to Datatype-Generic Rewriting):

class Functor (PF t) ⇒ Regular t where
  type PF t ∷ * → *
  unwrap ∷ t → PF t t
  wrap   ∷ PF t t → t

Here unwrap converts a value into its pattern functor form, and wrap converts back. For example, here is the Regular instance I had used for lists:

instance Regular [a] where
  type PF [a] = Const () :+: Const a :*: Id

  unwrap []     = InL (Const ())
  unwrap (a:as) = InR (Const a :*: Id as)

  wrap (InL (Const ()))          = []
  wrap (InR (Const a :*: Id as)) = a:as

Again, we have an embedding/projection pair, rather than a genuine isomorphism:

wrap ∘ unwrap ≡ id
unwrap ∘ wrap ⊑ id

The inequality comes from ⊥ values occurring within PF [a] [a] at any of the types Const () [a], (), (Const a :*: Id) [a], Const a [a], or Id [a]. For instance, unwrap (wrap (InL (Const ⊥))) ≡ unwrap ⊥ ≡ ⊥, which is strictly less defined than InL (Const ⊥).

Continue reading ‘Fixing broken isomorphisms — details for non-strict memoization, part 2’ »

Another angle on zippers

The zipper is an efficient and elegant data structure for purely functional editing of tree-like data structures, first published by Gérard Huet. Zippers maintain a location of focus in a tree and support navigation operations (up, down, left, right) and editing (replace current focus).

The original zipper type and operations are customized for a single type, but it’s not hard to see how to adapt to other tree-like types, and hence to regular data types. There have been many follow-up papers to The Zipper, including a polytypic version in the paper Type-indexed data types.

All of the zipper adaptations and generalizations I’ve seen so far maintain the original navigation interface. In this post, I propose an alternative interface that appears to significantly simplify matters. There are only two navigation functions instead of four, and each of the two is specified and implemented via a fairly simple one-liner.

I haven’t used this new zipper formulation in an application yet, so I do not know whether some usefulness has been lost in simplifying the interface.

The code in this blog post is taken from the Haskell library functor-combo and completes the Holey type class introduced in Differentiation of higher-order types.
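To convey the flavor of such an interface, here is a hypothetical sketch (with TypeFamilies enabled); the names and details of the actual Holey class in functor-combo may differ. A location pairs an element with its one-hole context; fill plugs the focus back into its context (moving up), and extract pairs every element with its context (moving down).

type Loc f a = (Der f a, a)

class Functor f ⇒ Holey f where
  type Der f ∷ * → *            -- the one-hole contexts of f
  fill    ∷ Loc f a → f a       -- plug the focus back in (navigate up)
  extract ∷ f a → f (Loc f a)   -- pair every element with its context (navigate down)

-- Example instance: the context of one component of a pair is the other
-- component, tagged by which side the hole is on.
data Pair a = Pair a a

instance Functor Pair where
  fmap h (Pair a b) = Pair (h a) (h b)

data PairHole a = HoleL a | HoleR a

instance Holey Pair where
  type Der Pair = PairHole
  fill (HoleL b, a)  = Pair a b   -- hole on the left; b is the right element
  fill (HoleR a, b)  = Pair a b   -- hole on the right; a is the left element
  extract (Pair a b) = Pair (HoleL b, a) (HoleR a, b)

One law we would expect to relate the two functions: fmap fill (extract w) ≡ fmap (const w) w, i.e., refilling every extracted location reproduces the original structure at each position.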

Edits:

  • 2010-07-29: Removed some stray Just applications in up definitions. (Thanks, illissius.)
  • 2010-07-29: Augmented my complicated definition of tweak2 with a much simpler version from Sjoerd Visscher.
  • 2010-07-29: Replaced fmap (first (:ds')) with (fmap.first) (:ds') in down definitions. (Thanks, Sjoerd.)

Continue reading ‘Another angle on zippers’ »

Differentiation of higher-order types

A “one-hole context” is a data structure with one piece missing. Conor McBride pointed out that the derivative of a regular type is its type of one-hole contexts. When a data structure is assembled out of common functor combinators, a corresponding type of one-hole contexts can be derived mechanically by rules that mirror the standard derivative rules learned in beginning differential calculus.
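For orientation, here are those rules written as an open type family over the usual functor combinators (a sketch with TypeFamilies, TypeOperators, and EmptyDataDecls enabled; the functor-combo library’s actual formulation may differ in detail):

data Void   -- the empty type: no values

newtype Const b a   = Const b
newtype Id a        = Id a
data    (f :+: g) a = InL (f a) | InR (g a)
data    (f :*: g) a = f a :*: g a
newtype (g :.: f) a = O (g (f a))

type family Der (f ∷ * → *) ∷ * → *
type instance Der (Const b) = Const Void                       -- derivative of a constant is zero
type instance Der Id        = Const ()                         -- derivative of x is one
type instance Der (f :+: g) = Der f :+: Der g                  -- sum rule
type instance Der (f :*: g) = (Der f :*: g) :+: (f :*: Der g)  -- product rule
type instance Der (g :.: f) = (Der g :.: f) :*: Der f          -- chain rule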

I’ve been playing with functor combinators lately. I was delighted to find that the data-structure derivatives can be expressed directly using the standard functor combinators and type families.

The code in this blog post is taken from the Haskell library functor-combo.

See also the Haskell Wikibooks page on zippers, especially the section called “Differentiation of data types”.

I mean this post not as new research, but rather as a tidy, concrete presentation of some of Conor’s delightful insight.

Continue reading ‘Differentiation of higher-order types’ »

Details for non-strict memoization, part 1

In Non-strict memoization, I sketched out a means of memoizing non-strict functions. I gave the essential insight but did not show the details of how a non-strict memoization library comes together. In this new post, I give those details, which are a bit delicate, in terms of the implementation described in Elegant memoization with higher-order types.

Near the end, I run into some trouble with regular data types, which I don’t know how to resolve cleanly and efficiently.

Edits:

  • 2010-09-10: Fixed minor typos.

Continue reading ‘Details for non-strict memoization, part 1’ »

Elegant memoization with higher-order types

A while back, I got interested in functional memoization, especially after seeing some code from Spencer Janssen using the essential idea of Ralf Hinze’s paper Generalizing Generalized Tries. The blog post Elegant memoization with functional memo tries describes a library, MemoTrie, based on both of these sources, and using associated data types. I would have rather used associated type synonyms and standard types, but I couldn’t see how to get the details to work out. Recently, while playing with functor combinators, I realized that they might work for memoization, which they do quite nicely.

This blog post shows how functor combinators lead to an even more elegant formulation of functional memoization. The code is available as part of the functor-combo package.
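To show the shape this takes, here is a hedged sketch using an associated type synonym (with TypeFamilies enabled). The class mirrors MemoTrie’s HasTrie, but the details here are illustrative rather than quoted from the library.

class HasTrie a where
  type Trie a ∷ * → *
  trie   ∷ (a → b) → Trie a b   -- tabulate a function as a data structure
  untrie ∷ Trie a b → (a → b)   -- index back into the table

-- Memoization is just a round trip through the trie.
memo ∷ HasTrie a ⇒ (a → b) → (a → b)
memo = untrie . trie

-- Example: a function from Bool is tabulated by a pair of values.
data Pair x = Pair x x

instance HasTrie Bool where
  type Trie Bool = Pair
  trie f = Pair (f False) (f True)
  untrie (Pair e _) False = e
  untrie (Pair _ t) True  = t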

The techniques in this post are not so much new as they are ones that have recently been sinking in for me. See Generalizing Generalized Tries, as well as Generic programming with fixed points for mutually recursive datatypes.

Edits:

  • 2011-01-28: Fixed small typo: “b^^a^^” ⟼ “bᵃ”.
  • 2010-09-10: Corrected Const definition to use newtype instead of data.
  • 2010-09-10: Added missing Unit type definition (as Const ()).

Continue reading ‘Elegant memoization with higher-order types’ »

Paper: Beautiful differentiation

I have another paper draft for submission to ICFP 2009. This one is called Beautiful differentiation. The paper is a culmination of the several posts I’ve written on derivatives and automatic differentiation (AD). I’m happy with how the derivation keeps getting simpler. Now I’ve boiled extremely general higher-order AD down to a Functor and Applicative morphism.
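For readers new to the topic, here is a minimal sketch of the simplest starting point in this line of work: first-order forward AD via values paired with their derivatives. The deriv helper is illustrative, not from the paper, and the paper generalizes far beyond this.

data D a = D a a   -- a value paired with its derivative

instance Num a ⇒ Num (D a) where
  fromInteger n   = D (fromInteger n) 0
  D x x' + D y y' = D (x + y) (x' + y')
  D x x' * D y y' = D (x * y) (x' * y + x * y')   -- the product rule
  negate (D x x') = D (negate x) (negate x')
  abs    (D x x') = D (abs x)    (x' * signum x)
  signum (D x x') = D (signum x) 0

-- Sample the derivative of a polymorphic numeric function at a point.
deriv ∷ Num a ⇒ (D a → D a) → (a → a)
deriv f x = let D _ x' = f (D x 1) in x'

-- e.g., deriv (\ x -> x * x + 3 * x) 5 == 13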

I’d love to get some readings and feedback. I’m a bit over the page limit, so I’ll have to do some trimming before submitting.

The abstract:

Automatic differentiation (AD) is a precise, efficient, and convenient method for computing derivatives of functions. Its implementation can be quite simple even when extended to compute all of the higher-order derivatives as well. The higher-dimensional case has also been tackled, though with extra complexity. This paper develops an implementation of higher-dimensional, higher-order differentiation in the extremely general and elegant setting of calculus on manifolds and derives that implementation from a simple and precise specification.

In order to motivate and discover the implementation, the paper poses the question “What does AD mean, independently of implementation?” An answer arises in the form of naturality of sampling a function and its derivative. Automatic differentiation flows out of this naturality condition, together with the chain rule. Graduating from first-order to higher-order AD corresponds to sampling all derivatives instead of just one. Next, the notion of a derivative is generalized via the notions of vector space and linear maps. The specification of AD adapts to this elegant and very general setting, which even simplifies the development.

You can get the paper and see current errata here.

The submission deadline is March 2, so comments before then are most helpful to me.

Enjoy, and thanks!