Vector space bases via type families

A basis B of a vector space V is a subset of V whose elements are linearly independent and span V. That is to say, every element (vector) of V is a linear combination of elements of B, and no element of B is a linear combination of the other elements of B. For example, in ℝ² the standard basis {(1,0), (0,1)} gives (3,4) = 3·(1,0) + 4·(0,1). Moreover, every basis determines a unique decomposition of each member of V into coordinates relative to B.

This post gives a simple Haskell implementation for a canonical basis of a vector space, and a means of decomposing vectors into coordinates. It uses [indexed type families] (associated types), and is quite general, despite its simplicity.

The Haskell module described here is part of the vector-space library (version 0.4 or later), which is available on Hackage and in a darcs repository. See the wiki page, interface documentation, and source code. The library version described below (0.5 or later) relies on ghc 6.10.

Edits:

• 2008-11-09: Tweaked comment above about version.
• 2008-02-09: just fiddling around

We’ll need a bit of preliminary before jumping into basis types.

class AdditiveGroup v where
  (^+^)   ∷ v → v → v
  zeroV   ∷ v
  negateV ∷ v → v

A vector space is an additive group with scalar multiplication, so it also has an associated type of scalars:

class AdditiveGroup v ⇒ VectorSpace v where
  type Scalar v ∷ *
  (*^) ∷ Scalar v → v → v

This associated type could instead be written as a functional dependency (fundep). In fact, the type family implementation in ghc 6.9 is not quite up to working with the code in this post, so the library contains a (6.9-friendly) fundep version (Data.VectorSpace) alongside the version given in this post (Data.AVectorSpace). Sometime after ghc-6.10 is released, I will retire the fundep version and rename AVectorSpace to VectorSpace. Similarly, the library temporarily contains Data.ABasis.

Basis types

Since Haskell doesn't have subtyping, we can't represent a basis type directly as a subset. Instead, for an arbitrary vector space v, represent the (canonical) basis as an associated type, Basis v, and a function that interprets a basis representation as a vector.

class VectorSpace v ⇒ HasBasis v where
  type Basis v ∷ *
  basisValue ∷ Basis v → v

Another method extracts coordinates (coefficients) for a vector. Each coordinate is associated with a basis element.

  decompose ∷ v → [(Basis v, Scalar v)]

We can also reverse the process, recomposing into a vector, by linearly combining the basis elements:

linearCombo ∷ VectorSpace v ⇒ [(v,Scalar v)] → v
linearCombo ps = sumV [s *^ v | (v,s) ← ps]

recompose ∷ HasBasis v ⇒ [(Basis v, Scalar v)] → v
recompose = linearCombo ∘ fmap (first basisValue)

The defining property is

recompose ∘ decompose ≡ id

Exercise: why might decompose ∘ recompose not be the identity? What if the decomposition were represented instead as a finite map?
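As a concrete illustration of the exercise, here is a tiny self-contained sketch, specialized to v = Double (whose basis representation is ()); the primed names are stand-ins for the class methods, not the library's own definitions.

```haskell
-- Stand-ins for decompose and recompose, specialized to Double.
decompose' :: Double -> [((), Double)]
decompose' s = [((), s)]

recompose' :: [((), Double)] -> Double
recompose' ps = sum [s | (_, s) <- ps]

-- recompose' . decompose' is the identity, but the other composition
-- merges duplicate basis entries:
--   recompose' [((), 1), ((), 2)]  =  3
--   decompose' 3                   =  [((), 3)]
```

A finite map from basis representation to coefficient would merge duplicates up front, making the representation canonical.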

Primitive bases

Any scalar field is also a vector space over itself. For instance,

instance AdditiveGroup Double where
  zeroV   = 0.0
  (^+^)   = (+)
  negateV = negate

instance VectorSpace Double where
  type Scalar Double = Double
  (*^) = (*)

The canonical basis of a one-dimensional space has only one element, namely 1, which can be represented with no information.

instance HasBasis Double where
  type Basis Double = ()
  basisValue () = 1
  decompose s   = [((),s)]

Composing bases

instance (AdditiveGroup u, AdditiveGroup v) ⇒ AdditiveGroup (u,v) where
  zeroV = (zeroV,zeroV)
  (u,v) ^+^ (u',v') = (u^+^u',v^+^v')
  negateV (u,v) = (negateV u,negateV v)

Pairs of vector spaces, over the same scalar field, form vector spaces:

instance (VectorSpace u,VectorSpace v, Scalar u ~ Scalar v)
      ⇒ VectorSpace (u,v) where
  type Scalar (u,v) = Scalar u
  s *^ (u,v) = (s*^u,s*^v)

Given vector spaces u and v, a basis representation for (u,v) will be one basis representation or the other, tagged with Left or Right:

instance (HasBasis u, HasBasis v, Scalar u ~ Scalar v)
      ⇒ HasBasis (u,v) where
  type Basis (u,v) = Basis u `Either` Basis v

The basis vectors themselves will be (ub,0) or (0,vb), where ub is a basis vector for u, and vb is a basis vector for v. As expected then, the dimensionality of the cross product is the sum of the dimensions.

basisValue (Left  a) = (basisValue a, zeroV)
basisValue (Right b) = (zeroV, basisValue b)

The decomposition of a vector (u,v) contains left-tagged decompositions of u and right-tagged decompositions of v.

decompose (u,v) = decomp2 Left u ++ decomp2 Right v

where

decomp2 ∷ HasBasis w ⇒ (Basis w → b) → w → [(b, Scalar w)]
decomp2 inject = fmap (first inject) ∘ decompose
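To see the pair scheme in action, here is a self-contained sketch specialized to (Double, Double), so the basis representation is Either () (); again the primed names are stand-ins, not the library's methods.

```haskell
type BasisPair = Either () ()

-- Basis vectors of the product space: (ub, 0) and (0, vb).
basisValue' :: BasisPair -> (Double, Double)
basisValue' (Left  ()) = (1, 0)
basisValue' (Right ()) = (0, 1)

-- Left-tagged coordinates of u followed by right-tagged coordinates of v.
decompose' :: (Double, Double) -> [(BasisPair, Double)]
decompose' (u, v) = [(Left (), u)] ++ [(Right (), v)]
```

For instance, decompose' (3, 4) yields [(Left (), 3), (Right (), 4)]: one coordinate per dimension, and 1 + 1 = 2 dimensions in all.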

Triples, etc., can be handled similarly. Instead, the library implementation reduces them to the pair case:

instance ( HasBasis u, s ~ Scalar u
         , HasBasis v, s ~ Scalar v
         , HasBasis w, s ~ Scalar w )
      ⇒ HasBasis (u,v,w) where
  type Basis (u,v,w) = Basis (u,(v,w))

Exercise: complete this instance definition (without peeking).

Bases in function spaces

What about other vector spaces, particularly infinite dimensional ones? The result type for decompose is not convenient:

decompose ∷ v → [(Basis v, Scalar v)]

Moreover, its definition for pair types would have to be changed, e.g., to use interleaving instead of append. On the other hand, this type can be thought of as an association list, representing Basis v → Scalar v. Instead, we might use the function representation directly:

decompose ∷ v → (Basis v → Scalar v)

In that case, the definition of decompose on pairs is

decompose (u,v) = decompose u `either` decompose v

which beautifully mirrors the basis type definition:

type Basis (u,v) = Basis u `Either` Basis v

I guess we'd have to somehow extend the definition of recompose as well, or avoid it.

One example of an infinite dimensional vector space is a function over an infinite domain. The additive group and vector space instances follow a standard form for applicative functors applied to an additive group or vector space. In this case, the applicative functor is ((→) a).

instance AdditiveGroup v ⇒ AdditiveGroup (a → v) where
  zeroV   = pure   zeroV
  (^+^)   = liftA2 (^+^)
  negateV = fmap   negateV

instance VectorSpace v ⇒ VectorSpace (a → v) where
  type Scalar (a → v) = Scalar v
  (*^) s = fmap ((*^) s)

As a basis for a function space a→u, let's use the subset of functions that map one domain value to some basis vector for u and map all other domain values to zero. By linearly combining these functions, we can construct any function in a→u that is nonzero for finitely many domain values. If we stretch the notion of linear combinations beyond finite combinations, perhaps we can go further and cover at least the countably infinite domain types.

To represent these functions, it suffices to record the choice of domain element and the representation of the corresponding basis vector for u:

instance (Eq a, HasBasis u) ⇒ HasBasis (a → u) where
  type Basis (a → u) = (a, Basis u)

basisValue (a,b) a' | a ≡ a'    = basisValue b
                    | otherwise = zeroV

The actual implementation is a bit more efficient, avoiding recomputation of basisValue b for each a'.

Decomposing a function into its coordinates is even simpler:

decompose g (a,b) = decompose (g a) b
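Here is how those two definitions behave for the function space Int → Double (so Basis Double = () and the basis representation is (Int, ())); a self-contained sketch with primed stand-in names:

```haskell
-- A basis function maps one chosen domain value to the basis
-- vector of Double (namely 1) and every other domain value to zero.
basisValue' :: (Int, ()) -> (Int -> Double)
basisValue' (a, ()) a' | a == a'   = 1
                       | otherwise = 0

-- The coordinate of g at (a, b) is the b-coordinate of g a;
-- for Double that is just g a itself.
decompose' :: (Int -> Double) -> ((Int, ()) -> Double)
decompose' g (a, ()) = g a
```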

Some isomorphisms

This instance rule for functions will be applied repeatedly for curried functions. For instance,

Basis (a → b → u) ≡ (a, (b, Basis u))

The isomorphic uncurried form has an isomorphic basis:

Basis ((a,b) → u) ≡ ((a,b), Basis u)

Pairing in the range instead of domain gives rise to another pair of isomorphisms:

Basis (a → (b,c)) ≡ (a, Basis b `Either` Basis c)

Basis (a → b, a → c) ≡ (a, Basis b) `Either` (a, Basis c)

The rules for basis types look like logarithms, especially if we add a basis for ():

type Basis () = Void

type Basis (u,v) = Basis u `Either` Basis v

type Basis (a → u) = (a, Basis u)

Compare with:

log 1 = 0

log (u × v) = log u + log v

log (uᵃ) = a × log u

The rules are also essentially the same as the ones used for memo tries, but phrased in terms of logarithms instead of (explicit) exponents.

1. Conal Elliott » Blog Archive » Simpler, more efficient, functional linear maps:

[…] About « Vector space bases via type families […]

2. Luke Palmer:

Nice article.

I have a feature request for vector-space. The definition of AdditiveGroup you gave is incomplete; it is missing essential documentation. Please help set the trend and document typeclass laws in plain sight.

3. Dominic Steinitz:

What do you mean by a canonical basis? A vector space doesn’t have one distinguished basis. Bases are related by invertible linear maps.

4. conal:

I agree. Thanks for the reminder, Luke. And not just as comments but as executable QuickCheck test generators.

5. conal:

What do you mean by a canonical basis? A vector space doesn’t have one distinguished basis.

The library distinguishes one basis out of the many possible. I mean a canonical basis in that sense, not universally.

6. traeger:

I just needed high-dimensional vectors, looked at your source and saw the ‘vector trick’; I wrote a little extension with lists instead of vectors. The vector-vector operators (^+^, <.>) use the zero element for component-wise operations if one list is shorter than the other.

e.g.: [1,1,1] ^+^ [1,1] = [2,2,1]

import Data.VectorSpace

instance AdditiveGroup v => AdditiveGroup [v] where
  zeroV         = []
  v ^+^ u       = zipWithDefault (^+^) zeroV zeroV v u
  negateV v     = map negateV v

instance VectorSpace v => VectorSpace [v] where
  type Scalar [v] = Scalar v
  s *^ v = map (s *^) v

instance (InnerSpace v, AdditiveGroup (Scalar v)) => InnerSpace [v] where
  v <.> u = foldl (^+^) zeroV $ zipWithDefault (<.>) zeroV zeroV v u

-- like zipWith(Prelude) but with default values which
-- will be used if one list is shorter than the other
zipWithDefault :: (a -> b -> c) -> a -> b -> [a] -> [b] -> [c]
zipWithDefault z ad bd (a:as) (b:bs) = z a b : zipWithDefault z ad bd as bs
zipWithDefault z ad _  []     bs     = map (z ad) bs
zipWithDefault z _  bd as     []     = map (`z` bd) as
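To check the [1,1,1] ^+^ [1,1] example concretely, here is a standalone version of the snippet (repeating zipWithDefault with its base cases so it runs on its own, and using (+) and 0 in place of (^+^) and zeroV):

```haskell
-- zipWith with defaults: when one list runs out, pair the rest
-- of the other list against the supplied default element.
zipWithDefault :: (a -> b -> c) -> a -> b -> [a] -> [b] -> [c]
zipWithDefault z ad bd (a:as) (b:bs) = z a b : zipWithDefault z ad bd as bs
zipWithDefault z ad _  []     bs     = map (z ad) bs
zipWithDefault z _  bd as     []     = map (`z` bd) as

-- The comment's example, with scalars standing in for vectors:
addLists :: [Double] -> [Double] -> [Double]
addLists = zipWithDefault (+) 0 0
-- addLists [1,1,1] [1,1] = [2,2,1]
```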