The basic idea is to represent a 3D object as a “distance field”, i.e. a continuous function from R^3 to R that returns the distance to the object. The value of that function is zero inside the object, and grows as you get further away from it. That technique is well suited to ray marching on the GPU, because when you’re marching along the ray, evaluating the distance field at the current point gives you the size of the next step, so you can make bigger steps when the ray passes far from the object. Simple objects like spheres have distance fields that are easy to define, and simple operations on objects often correspond to simple operations on distance fields. It’s also easy to approximate things like normal direction, ambient occlusion, and even penumbras (send a ray to the light source and note how closely it passed to the occluder).

In a perfect world, we could have a combinator library in Haskell for rendering distance fields that would compile to a single GPU shader program. It’s scary how well this idea fits together.
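As a hedged sketch of how such a combinator library might start (all names and types here are hypothetical, not from any existing package): a distance field is just a function, primitives and set operations are one-liners, and sphere tracing steps along the ray by the field’s value, exactly as described above.

```haskell
-- Hypothetical sketch of a distance-field combinator library.
type P3   = (Double, Double, Double)
type Dist = P3 -> Double           -- signed distance to the nearest surface

-- Primitive: a sphere of radius r centred at the origin.
sphere :: Double -> Dist
sphere r (x, y, z) = sqrt (x*x + y*y + z*z) - r

-- Combinators: simple operations on objects are simple operations on fields.
translate :: P3 -> Dist -> Dist
translate (dx, dy, dz) d (x, y, z) = d (x - dx, y - dy, z - dz)

union, intersection :: Dist -> Dist -> Dist
union        d1 d2 p = min (d1 p) (d2 p)
intersection d1 d2 p = max (d1 p) (d2 p)

-- Sphere tracing: the field's value bounds how far we can safely step.
march :: Dist -> P3 -> P3 -> Maybe Double
march d (ox, oy, oz) (dx, dy, dz) = go 0 (0 :: Int)
  where
    eps = 1e-4
    go t n
      | n > 128    = Nothing      -- too many steps: give up
      | t > 100    = Nothing      -- marched out of the scene
      | step < eps = Just t       -- close enough: report a hit at distance t
      | otherwise  = go (t + step) (n + 1)
      where step = d (ox + t*dx, oy + t*dy, oz + t*dz)
```

Note how `union` and `intersection` fall out as `min` and `max` on fields — this is the kind of correspondence between object operations and field operations mentioned above.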

> So do you have any intuition about what kind of shapes this essence you’re looking for might take?

Yes, I do. It will be simple, familiar, and general to math/types/FP folks. Sum/product/function — that sort of thing. It will explain known domain-specific operations as special cases of more general/simple operations. There will be plenty of TCMs (type class morphisms) in the API and not much else. Like Reactive’s FRP model, built simply from product (Future) and function (Behavior).
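For reference, the semantic core of that product-and-function model can be written down in a few lines (a toy sketch, not Reactive’s actual types or implementation):

```haskell
-- Toy semantic sketch of the product/function FRP model.
type Time = Double

newtype Future   a = Future (Time, a)      -- product: a value paired with its arrival time
newtype Behavior a = Behavior (Time -> a)  -- function: a value varying over time

-- Behave as the first behavior until the future arrives, then as its payload.
switcher :: Behavior a -> Future (Behavior a) -> Behavior a
switcher (Behavior f) (Future (t0, Behavior g)) =
  Behavior (\t -> if t < t0 then f t else g t)

-- Sample a behavior at a time.
at :: Behavior a -> Time -> a
at (Behavior f) = f
```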

As for extrapolation to yet unknown problems, my feeling is that it will practically come for free if the unification is successful.

Oh — then I guess we’re on the same track.

> So do you have any intuition about what kind of shapes this essence you’re looking for might take? Are you looking for the ‘SK calculus of 3D geometry’, or a higher-level view?

> The notion of “real problems” is a slippery one. So often our difficulties are in the questions we’re used to asking and the old paradigms we’re attached to.

Of course it is slippery, since there are several ways to look at the same thing, which I also alluded to. But I was basically trying to say here that the ultimate ‘grand unified theory’ should be able to capture all these facets, so at some point you’ll have to consider these connections.

> Exposing the limitations of a model also gives me a clearer picture of it, which makes it simpler for me to reason about, although it loses a superficial appearance of simplicity in the process. Perhaps a good analogy would be my taste for Arch Linux over Ubuntu. They share the same basic architecture, but Ubuntu hides its warts behind a simple interface, which has the advantage of not initially confusing the user, but the disadvantage of masking the true cause of problems that do arise. Arch intentionally exposes the inherent complexity of its architecture, perhaps making the learning curve steeper, but also making the system easier to reason about thanks to its full disclosure. While Ubuntu has an abstraction that Arch lacks, it is not a good abstraction, because it leaks: the user must learn not only the abstraction but also how it works underneath. Masking, or failing to make clear, the inherent flaws of a model is, I think, kind of like a leaky abstraction.

> Namely, can you lift your discussion easily into one concerning smooth maps between manifolds of arbitrary dimension (or better yet, definable maps for some o-minimal structure, so you can handle corners in a principled fashion)?

Of course, to supply some hedged answers (though without a doubt you’re more aware of how to formalize this in Haskell than I), it seems that there are two major obstructions:

1. Abstracting over dimension seems awkward to write. Lists, for instance, don’t have compile-time length guarantees, and tuples get awkward when you must say (R,R,R,…R). Then again, I really appreciate your vector space library!

2. Bookkeeping manifold charts is bound to be kind of troublesome. On one hand, you might require charts to look the same, with specified ways to combine them (simplicial complexes are a really rigid way to manage this; CW complexes are more flexible, but still restrictive; and of course neither fits the usual definitions). The essential problem is one of defining subsets of R^n in a nice way; perhaps o-minimal structures help here too.
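On the first obstruction, one standard Haskell trick (a sketch with illustrative names, using GHC’s DataKinds and GADTs extensions) is a length-indexed vector, which does give compile-time length guarantees while still letting operations abstract over the dimension:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level naturals index the vector's length.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  (:>) :: a -> Vec n a -> Vec ('S n) a
infixr 5 :>

-- R^3 as a fixed-length vector; no (R,R,R,...) tuples needed.
type R3 = Vec ('S ('S ('S 'Z))) Double

-- Abstracts over the dimension n, checked at compile time:
-- combining vectors of different lengths is a type error.
dot :: Num a => Vec n a -> Vec n a -> a
dot Nil       Nil       = 0
dot (x :> xs) (y :> ys) = x*y + dot xs ys
```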

In any event, this is idle speculation; but it would be a helpful semantics for computing on manifolds in Haskell!

> Hm. It feels good to come clean. No wonder Catholics like going to confession.

> I’m unclear on the question. The surface model *is* the image model (functions of 2D).

Or maybe you mean why I didn’t use 3D images, i.e., functions of R^3 (an infinite, continuous voxel map). If so, then yes, mainly for GPU-friendliness.
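For concreteness, the two models being contrasted can be written down directly (a sketch with illustrative names, in the spirit of the functions-of-2D image model):

```haskell
-- The image model: a continuous, infinite function of 2D space.
type Image a = (Double, Double) -> a

-- The alternative asked about: a continuous, infinite "voxel map",
-- i.e. a function of R^3.
type Image3 a = (Double, Double, Double) -> a

-- Example image: an infinite checkerboard.
checker :: Image Bool
checker (x, y) = even (floor x + floor y :: Integer)
```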