The basic idea is to represent a 3D object as a “distance field”, i.e. a continuous function from R^3 to R that returns the distance to the object. The value of that function is zero inside the object, and grows as you get further away from it. That technique is well suited to ray marching on the GPU, because when you’re marching along the ray, evaluating the distance field at the current point gives you a safe step size (the ray cannot hit the surface within that distance), so you can take bigger steps when the ray passes far from the object. Simple objects like spheres have distance fields that are easy to define, and simple operations on objects often correspond to simple operations on distance fields. It’s also easy to approximate things like normal direction, ambient occlusion, and even penumbras (send a ray to the light source and note how closely it passed to the occluder).
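For concreteness, here is a minimal sketch of that stepping scheme in Haskell (names are mine, not from any library; real renderers do this per-pixel in a shader). It uses the signed variant of the field, which is negative inside, zero on the surface, and positive outside:

```haskell
-- A minimal sketch of sphere tracing (all names hypothetical).
-- The field here is a *signed* distance: negative inside, zero on
-- the surface, positive outside.
type Point = (Double, Double, Double)
type DistField = Point -> Double

sphere :: Double -> DistField
sphere r (x, y, z) = sqrt (x*x + y*y + z*z) - r

-- March from origin o along unit direction d.  Each step is exactly
-- the field value: the largest step guaranteed not to overshoot.
march :: DistField -> Point -> Point -> Maybe Double
march field o d = go 0 (200 :: Int)
  where
    go _ 0 = Nothing                    -- step budget exhausted
    go t n
      | dist < 1e-6 = Just t            -- close enough: hit
      | t > 1e3     = Nothing           -- ray escaped
      | otherwise   = go (t + dist) (n - 1)
      where
        (ox, oy, oz) = o
        (dx, dy, dz) = d
        dist = field (ox + t*dx, oy + t*dy, oz + t*dz)
```

A ray fired at a unit sphere from (0,0,-5) along +z hits at t = 4 after just two field evaluations, which is the appeal: step count depends on the geometry, not on a fixed sampling rate.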

In a perfect world, we could have a combinator library in Haskell for rendering distance fields that would compile to a single GPU shader program. It’s scary how well this idea fits together.
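To give the flavour of such a combinator library (a toy sketch under my own names, not an actual library): the CSG operations on objects fall out as pointwise operations on their distance fields.

```haskell
-- Toy distance-field combinators (hypothetical names).
-- Union is pointwise min, intersection is pointwise max, and
-- rigid translation acts on the argument point.
type P3 = (Double, Double, Double)
type Field = P3 -> Double

sphereAt :: P3 -> Double -> Field
sphereAt (cx, cy, cz) r (x, y, z) =
  sqrt ((x-cx)^2 + (y-cy)^2 + (z-cz)^2) - r

union, intersect :: Field -> Field -> Field
union     f g p = min (f p) (g p)
intersect f g p = max (f p) (g p)

translate :: P3 -> Field -> Field
translate (tx, ty, tz) f (x, y, z) = f (x - tx, y - ty, z - tz)
```

Each combinator is a one-liner, and every expression built from them is itself just a `Field`, i.e. a single function a shader could evaluate.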

So do you have any intuition about what kind of shapes this essence you’re looking for might take?

Yes, I do. It will be simple, familiar, and general to math/types/FP folks. Sum/product/function — that sort of thing. It will explain known domain-specific operations as special cases of more general/simple operations. There will be plenty of type class morphisms (TCMs) in the API and not much else. Like Reactive’s FRP model, built simply from product (Future) and function (Behavior).
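As a hedged illustration of that last point (a toy in the spirit of Reactive, not its actual definitions): a behavior is essentially a function of time, a future is essentially a time paired with a value, and switching between behaviors needs nothing more than those two building blocks.

```haskell
-- Toy model in the spirit of Reactive (not its actual types).
type Time = Double

newtype Behavior a = Behavior (Time -> a)   -- function
newtype Future   a = Future   (Time, a)     -- product

at :: Behavior a -> Time -> a
at (Behavior f) = f

-- Act like b until the future arrives, then like the new behavior.
switcher :: Behavior a -> Future (Behavior a) -> Behavior a
switcher b (Future (t0, b')) =
  Behavior (\t -> if t < t0 then b `at` t else b' `at` t)
```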

As for extrapolation to yet unknown problems, my feeling is that it will practically come for free if the unification is successful.

Oh — then I guess we’re on the same track.

So do you have any intuition about what kind of shapes this essence you’re looking for might take? Are you looking for the ‘SK calculus of 3D geometry’, or a higher-level view?

The notion of “real problems” is a slippery one. So often our difficulties are in the questions we’re used to asking and the old paradigms we’re attached to.

Of course it is slippery, since there are several ways to look at the same thing, which I also alluded to. But I was basically trying to say here that the ultimate ‘grand unified theory’ should be able to capture all these facets, so there will be a point where you’ll have to consider these connections. As for extrapolation to yet unknown problems, my feeling is that it will practically come for free if the unification is successful.

Exposing the limitations of a model also gives me a clearer picture of it, which makes it simpler for me to reason about, although it loses a superficial appearance of simplicity in the process. Perhaps a good analogy would be my taste for Arch Linux over Ubuntu. They both share the same basic architecture, but Ubuntu hides its warts behind a simple interface, having the advantage of not initially confusing a user, but the disadvantage of masking the true cause of problems that do arise, while Arch intentionally exposes the inherent complexity of its architecture, perhaps making the learning curve steeper, but also making it easier to reason about due to its full disclosure. While Ubuntu has an abstraction which Arch lacks, it is not a good abstraction, because it leaks; the user must learn not only the abstraction but also how it works. Masking or failing to make clear the inherent flaws of a model, I think, is kind of like a leaky abstraction.

Namely, can you lift your discussion easily into one concerning smooth maps between manifolds of arbitrary dimension (or better yet, definable maps for some o-minimal structure, so you can handle corners in a principled fashion)?

Of course, to supply some hedged answers (though without a doubt you’re more aware of how to formalize this in Haskell than I), it seems that there are two major obstructions:

1. Abstracting over dimension seems tricky to write. Lists, for instance, don’t have compile-time length guarantees, and tuples get awkward when you must say (R,R,R,…,R). Then again, I really appreciate your vector space library!

2. Bookkeeping manifold charts is bound to be kind of troublesome. On one hand, you might require charts to look the same, with specified ways to combine them (simplicial complexes are a really rigid way to manage this; CW complexes are more flexible, but still restrictive; and of course neither fits the usual definitions). The essential problem is one of defining subsets of R^n in a nice way; perhaps o-minimal structures help here too.
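On obstruction 1, here is a hedged sketch of what abstracting over dimension can look like in Haskell, via the standard GADT encoding of length-indexed vectors (names are mine): lengths live in the types, so taking the head of an empty vector, or zipping vectors of different dimension, simply doesn’t type-check.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Length-indexed vectors: a standard sketch of dimension-safe points.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  (:>) :: a -> Vec n a -> Vec ('S n) a
infixr 5 :>

-- Total head: only defined for non-empty vectors, by construction.
vhead :: Vec ('S n) a -> a
vhead (x :> _) = x

-- Zip two vectors of the *same* dimension; no runtime length check.
vzip :: (a -> b -> c) -> Vec n a -> Vec n b -> Vec n c
vzip _ Nil       Nil       = Nil
vzip f (x :> xs) (y :> ys) = f x y :> vzip f xs ys
```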

In any event, this is idle speculation; but it would be a helpful semantics for computing on manifolds in Haskell!

Hm. It feels good to come clean. No wonder Catholics like going to confession.

I’m unclear on the question. The surface model *is* the image model (functions of 2D).

Or maybe you mean why didn’t I use 3D images, i.e., functions of R^3 (infinite, continuous voxel map). If so, then yes, mainly for GPU-friendliness.

It seems to me that the biggest challenge is not to come up with a semantics that’s consistent and elegant in isolation, but to provide a structure that gives elegant solutions to real problems.

I look at these challenges as in harmony rather than in tension. My faith (stemming from personal intuition and experience) is that when I get to the heart of the matter in elegant isolation, I see pragmatic strengths and defects clearly. I don’t expect this perspective to be a popular one, and I don’t mind unpopularity. It’s a personal path that’s true to myself, which is enough for me.

So, reworking your words to speak for myself: It seems to me that the biggest challenge is to come up with a semantics that’s consistent and elegant in isolation and therefore provides a structure that gives elegant solutions to real problems, *including ones we haven’t yet imagined*.

The notion of “real problems” is a slippery one. So often our difficulties are in the questions we’re used to asking and the old paradigms we’re attached to. 3D artists might love using an elegant denotational design, or they might have to have their brains rewired, or just wait for a younger generation.

To sum up, I’ll repeat two principles that Luke Palmer shared in a post from last June:

If extreme idealism isn’t working out for you, maybe it is because you are not wholeheartedly following your own ideals.

You must be the change you wish to see in the world (– Mahatma Gandhi). As applied to software: design software as if it were the beautiful paradise you want it to be, then build pieces of the scaffolding back to the status quo.

In a dependently typed setting, the two domains you discuss (pure types and characteristic functions) are the same, barring computability concerns.
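A hedged illustration of that correspondence in Lean 4 (names are mine): a subset given as a characteristic function `A → Prop` determines a “pure type” via the subtype construction, and an inhabitant of the subtype is exactly an element together with a membership proof.

```lean
-- A subset of A, two ways (sketch; names are hypothetical):
-- as a characteristic function, and as a subtype.
def CharFn (A : Type) : Type := A → Prop

def AsType {A : Type} (p : A → Prop) : Type := { x : A // p x }

-- Passing between the views: an inhabitant of the subtype is an
-- element of A together with a proof that p holds of it.
example {A : Type} (p : A → Prop) (x : A) (h : p x) : AsType p := ⟨x, h⟩
example {A : Type} (p : A → Prop) (s : AsType p) : p s.val := s.property
```

The “barring computability concerns” caveat is exactly where `Prop` (arbitrary propositions) and `Bool` (decidable ones) come apart.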

You seem to have forgotten about derivatives by the end of this article. I get the impression that continuity and differentiability are pretty important to 3D geometry. Continuity has a general interpretation in the realm of computable functions: i.e. a function from [0,1] to Bool can be continuous as long as it includes a _|_ in between the intervals where it is True and those where it is False. I have been quietly nagged for a while about the corresponding notion for derivatives. What would a differentiable function from Either [0,1] [0,1] look like?
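The Bool case can at least be written down (a hedged Haskell sketch, standing in `Double` for [0,1] and `error` for _|_): the function is “continuous” in the computable sense precisely because it diverges at the boundary point, where no finite approximation of the input could decide the answer.

```haskell
-- A computably continuous function from [0,1] to Bool must be
-- undefined (_|_) at the point separating the False and True regions.
step :: Double -> Bool
step x
  | x < 0.5   = False
  | x > 0.5   = True
  | otherwise = error "_|_ at the boundary x = 1/2"
```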

But I feel like if you can capture derivatives, the polymorphic domain approach is pretty good. Mathematicians (in differential geometry) work with parameterized surfaces using whatever domain is the most convenient… and they have thought about this stuff a lot.
