Reimagining matrices

The function of the imagination is not
to make strange things settled, so much as
to make settled things strange.

- G.K. Chesterton

Why is matrix multiplication defined so very differently from matrix addition? If we didn’t know these procedures, could we derive them from first principles? What might those principles be?

This post gives a simple semantic model for matrices and then uses it to systematically derive the implementations that we call matrix addition and multiplication. The development illustrates what I call “denotational design”, particularly with type class morphisms. On the way, I give a somewhat unusual formulation of matrices and accompanying definition of matrix “multiplication”.
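
As a taste of where the derivation leads, here is a minimal sketch (not the linear-map-gadt representation; the names here are illustrative only) in which a matrix is modeled by its meaning as a linear map on vectors. Addition of matrices then denotes pointwise addition of linear maps, and multiplication denotes composition.

  -- A minimal sketch, using plain lists as vectors.
  type Vector    = [Double]
  type LinearMap = Vector -> Vector

  -- Matrix addition means pointwise addition of the linear maps:
  addLM :: LinearMap -> LinearMap -> LinearMap
  addLM f g v = zipWith (+) (f v) (g v)

  -- Matrix multiplication means composition of the linear maps:
  mulLM :: LinearMap -> LinearMap -> LinearMap
  mulLM f g = f . g

The familiar row-and-column procedures then fall out by asking what these operations look like on a concrete representation of linear maps.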

For more details, see the linear-map-gadt source code.

Edits:

  • 2012-12-17: Replaced lost B entries in description of matrix addition. Thanks to Travis Cardwell.
  • 2012-12-18: Added note about math/browser compatibility.

Note: I’m using MathML for the math below, which appears to work well in Firefox but not in Safari or Chrome. I use Pandoc to generate the HTML+MathML from markdown+lhs+LaTeX. There’s probably a workaround using different Pandoc settings and some tweaks to my WordPress installation. If anyone knows how (especially on the WordPress end), I’d appreciate some pointers.

Continue reading ‘Reimagining matrices’ »

Thoughts on semantics for 3D graphics

The central question for me in designing software is always

What does it mean?

With functional programming, this question is especially crisp. For each data type I define, I want to have a precise and simple mathematical model. (For instance, my model for behavior is function-of-time, and my model of images is function-of-2D-space.) Every operation on the type is also given a meaning in terms of that semantic model.
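
To make this concrete, here is a small sketch of what such semantic models can look like in Haskell. The names and types are illustrative only, not taken from any particular library.

  type Time  = Double
  type Point = (Double, Double)           -- 2D space

  -- A behavior denotes a function of time; an image denotes a function of 2D space.
  type Behavior a = Time  -> a
  type Image    a = Point -> a

  -- Each operation is specified against the model, e.g. pointwise lifting:
  lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
  lift2 op f g = \t -> op (f t) (g t)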

This specification process, which is denotational semantics applied to data types, provides a basis for

  • correctness of the implementation,
  • user documentation free of implementation detail,
  • generating and proving properties, which can then be used in automated testing, and
  • evaluating and comparing the elegance and expressive power of design decisions.

For an example (2D images), some motivation of this process, and discussion, see Luke Palmer’s post Semantic Design. See also my posts on the idea and use of type class morphisms, which provide additional structure to denotational design.

In spring of 2008, I started working on a functional 3D library, FieldTrip. I’ve designed functional 3D libraries before as part of TBAG, ActiveVRML, and Fran. This time I wanted a semantics-based design, for all of the reasons given above. As always, I want a model that is

  • simple,
  • elegant, and
  • general.

For 3D, I also want the model to be GPU-friendly, i.e., to execute well on (modern) GPUs and to give access to their abilities.

I hadn’t thought of or heard of a model that I was happy with, and so I didn’t have the sort of firm ground I like to stand on in working on FieldTrip. Last February, such a model occurred to me. I’ve had this blog post mostly written since then. Recently, I’ve been focused on functional 3D again for GPU-based rendering, and then Sean McDirmid posed a similar question, which got me thinking again.

Continue reading ‘Thoughts on semantics for 3D graphics’ »

Denotational design with type class morphisms

I’ve just finished a draft of a paper called Denotational design with type class morphisms, for submission to ICFP 2009. The paper is on a theme I’ve explored in several posts, which is semantics-based design, guided by type class morphisms.

I’d love to get some readings and feedback. Pointers to related work would be particularly appreciated, as well as what’s unclear and what could be cut. It’s an entire page over the limit, so I’ll have to do some trimming before submitting.

The abstract:

Type classes provide a mechanism for varied implementations of standard interfaces. Many of these interfaces are founded in mathematical tradition and so have regularity not only of types but also of properties (laws) that must hold. Types and properties give strong guidance to the library implementor, while leaving freedom as well. Some of the remaining freedom is in how the implementation works, and some is in what it accomplishes.

To give additional guidance to the what, without impinging on the how, this paper proposes a principle of type class morphisms (TCMs), which further refines the compositional style of denotational semantics. The TCM idea is simply that the instance’s meaning is the meaning’s instance. This principle determines the meaning of each type class instance, and hence defines correctness of implementation. In some cases, it also provides a systematic guide to implementation, and in some cases, valuable design feedback.

The paper is illustrated with several examples of types, meanings, and morphisms.

You can get the paper and see current errata here.

The submission deadline is March 2, so comments before then are most helpful to me.

Enjoy, and thanks!

What is automatic differentiation, and why does it work?

Bertrand Russell remarked that

Everything is vague to a degree you do not realize till you have tried to make it precise.

I’m mulling over automatic differentiation (AD) again, neatening up previous posts on derivatives and on linear maps, working them into a coherent whole for an ICFP submission. I understand the mechanics and some of the reasons for its correctness. After all, it’s “just the chain rule”.

As usual, in the process of writing, I bumped up against Russell’s principle. I felt a growing uneasiness and realized that I didn’t understand AD in the way I like to understand software, namely,

  • What does it mean, independently of implementation?
  • How do the implementation and its correctness flow gracefully from that meaning?
  • Where else might we go, guided by answers to the first two questions?

Ever since writing Simply efficient functional reactivity, the idea of type class morphisms keeps popping up for me as a framework in which to ask and answer these questions. To my delight, this framework gives me new and more satisfying insight into automatic differentiation.
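
For readers who want the mechanics in front of them, here is a standard, minimal forward-mode sketch using dual numbers. It is offered only as context; it is not the development in the paper.

  -- Dual numbers: a value paired with its derivative. The Num instance encodes
  -- the usual differentiation rules; composing these operations propagates
  -- derivatives, which is the chain rule at work.
  data D = D Double Double deriving Show

  instance Num D where
    D x x' + D y y' = D (x + y) (x' + y')
    D x x' * D y y' = D (x * y) (x' * y + x * y')
    negate (D x x') = D (negate x) (negate x')
    abs    (D x x') = D (abs x) (x' * signum x)
    signum (D x _)  = D (signum x) 0
    fromInteger n   = D (fromInteger n) 0

  -- Derivative of f at x, by evaluating f on the dual number (x, 1):
  deriv :: (D -> D) -> Double -> Double
  deriv f x = let D _ x' = f (D x 1) in x'

  -- e.g. deriv (\u -> u*u + 3*u) 2 == 7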

Continue reading ‘What is automatic differentiation, and why does it work?’ »

Another angle on functional future values

An earlier post introduced functional future values, which are values that cannot be known until the future, but can be manipulated in the present. That post presented a simple denotational semantics of future values as time/value pairs. With a little care in the definition of Time (using the Max monoid), the instances of Functor, Applicative, and Monad are all derived automatically.
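
To make the “derived automatically” remark concrete, here is a small sketch (illustrative names, not the library’s, and with a plain Double as a stand-in for Time rather than the subtler form discussed below), using the Writer monad from the mtl package over a max-of-times monoid:

  import Control.Monad.Writer (Writer, writer)

  -- Times under maximum, extended with negative infinity as the identity:
  newtype MaxTime = MaxTime Double deriving (Eq, Ord, Show)

  instance Semigroup MaxTime where
    MaxTime s <> MaxTime t = MaxTime (max s t)

  instance Monoid MaxTime where
    mempty = MaxTime (-1 / 0)    -- negative infinity: "known all along"

  -- A future value is a (value, time) writer computation, so Functor,
  -- Applicative, and Monad all come for free from Writer. Combining two
  -- futures (e.g. with <*>) yields a value known at the later of the two times.
  type Future a = Writer MaxTime a

  future :: Double -> a -> Future a
  future t a = writer (a, MaxTime t)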

A follow-up post gave an implementation of Future values via multithreading. Unfortunately, that implementation did not necessarily satisfy the semantics, because it allowed the nondeterminism of thread scheduling to leak through. Although the implementation is usually correct, I wasn’t satisfied.

After a while, I hit upon an idea that really tickled me. My original simple semantics could indeed serve as a correct and workable implementation if I used a subtler form of time that could reveal partial information. Implementing this subtler form of time turned out to be quite tricky, and was my original motivation for the unamb operator described in the paper Push-pull functional reactive programming and the post Functional concurrency with unambiguous choice.

It took me several days of doodling, pacing outside, and talking to myself before the idea for unamb broke through. Like many of my favorite ideas, it’s simple and obvious in retrospect: to remove the ambiguity of nondeterministic choice (as in the amb operator), restrict its use to values that are equal when non-bottom. Whenever we have two different methods of answering the same question (or possibly failing), we can use unamb to try them both. Failures (errors or non-termination) are no problem in this context. A more powerful variation on unamb is the least upper bound operator lub, as described in Merging partial values.
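
As a small usage sketch (assuming the unamb package’s Data.Unamb module, which exports unamb :: a -> a -> a), the classic example is a symmetric “parallel or”: the two racing computations agree whenever both are defined, yet either one alone can produce the answer.

  import Data.Unamb (unamb)

  -- Yields True as soon as either argument is True, even if the other
  -- diverges; the two racing computations are equal whenever both terminate.
  por :: Bool -> Bool -> Bool
  por a b = (a || b) `unamb` (b || a)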

I’ve been having trouble with the unamb implementation. When two (compatible) computations race, the loser gets killed so as to free up cycles that are no longer needed. My first few implementations, however, did not recursively terminate other threads spawned in service of abandoned computations (from nested use of unamb). I raised this problem in Smarter termination for thread racing, which suggested some better definitions. In the course of several helpful reader comments, some problems with my definitions were addressed, particularly in regard to blocking and unblocking exceptions. None of these definitions so far has done the trick reliably, and now it looks like there is a bug in the GHC run-time system. I hope the bug (if there is one) will be fixed soon, because I’m seeing more & more how unamb and lub can make functional programming even more modular (just as laziness does, as explained by John Hughes in Why Functional Programming Matters).

I started playing with future values and unambiguous choice as a way to implement Reactive, a library for functional reactive programming (FRP). (See Reactive values from the future and Push-pull functional reactive programming.) Over the last few days, I’ve given some thought to ways to implement future values without unambiguous choice. This post describes one such alternative.

Continue reading ‘Another angle on functional future values’ »

Sequences, segments, and signals

The post Sequences, streams, and segments offered an answer to the question of what’s missing in the following box:

             infinite   finite
discrete     Stream     Sequence
continuous   Function   ???

I presented a simple type of function segments, whose representation contains a length (duration) and a function. This type implements most of the usual classes: Monoid, Functor, Zip, and Applicative, as well as Comonad, but not Monad. It also implements a new type class, Segment, which generalizes the list functions length, take, and drop.
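
Here is a rough sketch of such a type, with illustrative definitions rather than the post’s exact ones:

  type Duration = Double

  -- A function segment: a duration paired with a function on [0, duration).
  data FunSeg a = FunSeg Duration (Duration -> a)

  instance Functor FunSeg where
    fmap h (FunSeg d f) = FunSeg d (h . f)

  -- Sequential composition: sample the first segment until it ends, then the second.
  instance Semigroup (FunSeg a) where
    FunSeg d f <> FunSeg e g = FunSeg (d + e) (\t -> if t < d then f t else g (t - d))

  instance Monoid (FunSeg a) where
    mempty = FunSeg 0 (const (error "empty segment sampled"))

  -- In the spirit of the post's Segment class (method names are illustrative):
  class Segment s where
    segLength        :: s a -> Duration
    segTake, segDrop :: Duration -> s a -> s a

  instance Segment FunSeg where
    segLength (FunSeg d _) = d
    segTake t (FunSeg d f) = FunSeg (min t d) f
    segDrop t (FunSeg d f) = FunSeg (max 0 (d - t)) (f . (+ t))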

The function type is simple and useful in itself. I believe it can also serve as a semantic foundation for functional reactive programming (FRP), as I’ll explain in another post. However, the type has a serious performance problem that makes it impractical for some purposes, including as an implementation of FRP.

Fortunately, we can solve the performance problem by adding a simple layer on top of function segments, to get what I’ll call “signals”. With this new layer, we have an efficient replacement for function segments that implements exactly the same interface with exactly the same semantics. Pleasantly, the class instances are defined fairly simply in terms of the corresponding instances on function segments.

You can download the code for this post.

Edits:

  • 2008-12-06: dup [] = [] near the end (was [mempty]).
  • 2008-12-09: Fixed take and drop default definitions (thanks to sclv) and added point-free variant.
  • 2008-12-18: Fixed appl, thanks to sclv.
  • 2011-08-18: Eliminated accidental emoticon in the definition of dup, thanks to anonymous.

Continue reading ‘Sequences, segments, and signals’ »

Enhancing a Zip

This post is part four of the zip folding series inspired by Max Rabkin’s Beautiful folding post. I meant to write one little post, but it turned into four. When I sense that something can be made simpler/clearer, I can’t leave it be.

To review:

  • Part One related Max’s data representation of left folds to type class morphisms, a pattern that’s been steering me lately toward natural (inevitable) design of libraries.
  • Part Two simplified that representation to help get to the essence of zipping, and in doing so lost the expressiveness necessary to define Functor and Applicative instances.
  • Part Three proved the suitability of the zipping definitions in Part Two.

This post shows how to restore Functor and Applicative instances (very useful composition tools) to folds, but does so in a way that leaves the zipping functionality untouched. This new layer is independent of folding and can be layered onto any zippable type.

You can get the code described below.

Edits:

  • 2009-02-15: Simplified WithCont, collapsing two type parameters into one. Added functor comment about cfoldlc'.

Continue reading ‘Enhancing a Zip’ »

Another lovely example of type class morphisms

I read Max Rabkin’s recent post Beautiful folding with great excitement. He shows how to combine multiple folds over the same list into a single pass, which can drastically reduce the memory requirements of a lazy functional program. Max’s trick is giving folds a data representation and a way to combine representations that corresponds to combining the folds.
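
For readers who haven’t seen Max’s post, here is a rough sketch of the idea, with my own illustrative names rather than his exact definitions: give left folds a first-class data representation, and combine two of them into a single fold that traverses the list only once.

  {-# LANGUAGE ExistentialQuantification #-}

  -- A left fold with hidden accumulator type: step function, initial value,
  -- and a final "summarize" function.
  data Fold b c = forall a. Fold (a -> b -> a) a (a -> c)

  runFold :: Fold b c -> [b] -> c
  runFold (Fold step z done) = done . foldl step z

  -- Fuse two folds into one pass over the list:
  both :: Fold b c -> Fold b c' -> Fold b (c, c')
  both (Fold f z d) (Fold g y e) =
    Fold (\(a, a') x -> (f a x, g a' x)) (z, y) (\(a, a') -> (d a, e a'))

  sumF, countF :: Fold Double Double
  sumF   = Fold (+) 0 id
  countF = Fold (\n _ -> n + 1) 0 id

  -- runFold (both sumF countF) xs computes the sum and length in one pass.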

Peeking out from behind Max’s definitions is a lovely pattern I’ve been noticing more and more over the last couple of years, namely type class morphisms.

Continue reading ‘Another lovely example of type class morphisms’ »

Composing memo tries

The post Elegant memoization with functional memo tries showed a simple type of search tries and their use for functional memoization of functions. This post provides some composition tools for memo tries, whose definitions are inevitable, in that they are determined by the principle presented in Simplifying semantics with type class morphisms.
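
As a reminder of the underlying idea from that earlier post, here is a small usage sketch, assuming the MemoTrie package’s Data.MemoTrie interface (memo :: HasTrie t => (t -> a) -> (t -> a)) and its HasTrie instance for Integer:

  import Data.MemoTrie (memo)

  -- Memoize a function by tabulating it in a lazy trie over its domain.
  fib :: Integer -> Integer
  fib = memo fib'
    where
      fib' 0 = 0
      fib' 1 = 1
      fib' n = fib (n - 1) + fib (n - 2)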

Continue reading ‘Composing memo tries’ »

Simplifying semantics with type class morphisms

When I first started playing with functional reactivity in Fran and its predecessors, I didn’t realize that much of the functionality of events and reactive behaviors could be packaged via standard type classes. Then Conor McBride & Ross Paterson introduced us to applicative functors, and I remembered using that pattern to reduce all of the lifting operators in Fran to just two, which correspond to pure and (<*>) in the Applicative class. So, in working on a new library for functional reactive programming (FRP), I thought I’d modernize the interface to use standard type classes as much as possible.

While spelling out a precise (denotational) semantics for the FRP instances of these classes, I noticed a lovely recurring pattern:

The meaning of each method corresponds to the same method for the meaning.

In this post, I’ll give some examples of this principle and muse a bit over its usefulness. For more details, see the paper Simply efficient functional reactivity. Another post will start exploring type class morphisms and type composition, and ask questions I’m wondering about.
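
As a small illustration of the principle (a sketch with a toy type, not the library’s actual definitions), take behaviors, whose meaning is a function of time given by the semantic function at; each class instance is then required to agree with the corresponding instance for functions.

  type Time = Double

  newtype Behavior a = B { at :: Time -> a }   -- 'at' is the semantic function

  instance Functor Behavior where
    fmap f (B g) = B (f . g)

  instance Applicative Behavior where
    pure a      = B (const a)
    B f <*> B x = B (\t -> f t (x t))

  -- The morphism properties, one per method, relating instance and meaning:
  --   at (fmap f b)  == fmap f (at b)      -- for functions: f . at b
  --   at (pure a)    == pure a             -- for functions: const a
  --   at (bf <*> bx) == at bf <*> at bx    -- for functions: \t -> at bf t (at bx t)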

Continue reading ‘Simplifying semantics with type class morphisms’ »