Why program with continuous time?

For me, functional reactive programming (FRP) has been mainly about two principles.

One is denotational design (i.e., design based on simple, precise denotational semantics), which has been integral to FRP since its first incarnation, ActiveVRML. I’ve written several things recently about denotational design.

My second core principle is continuous time, which has been present since TBAG, the predecessor of Fran and ActiveVRML.

Today I read a confusion about continuous time that I’ve heard many times before, which I’d like to bring up, in part because I love continuous time, and in part because there’s a much broader underlying principle of software design.

[…] I don’t see why the lack of continuous streams leaves a “gap”. In the end all streams are discrete.

“In the end”, yes. Just as in the end, numbers are displayed as ASCII numerals. However, programming is more about the middle than the end, i.e., more about composition than about output. For that reason, we don’t generally use strings in place of numbers. If we did, composition operations (arithmetic) would be very awkward. Similarly, continuity in space and in time is better for composition/modularity, leaving discreteness to the output step.
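
As a tiny illustration (in Haskell, with made-up values): numbers compose directly, while a string representation would force a parse/print round-trip at every step.

    -- Numbers compose directly:
    good :: Double
    good = (1.5 + 2.5) * 3

    -- The same composition over strings (awkward on purpose):
    bad :: String
    bad = show ((read "1.5" + read "2.5" :: Double) * 3)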

Another name for “continuous” is “resolution-independent”: continuous values can be transformed in time and space with ease, without propagating and amplifying sampling artifacts.
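
Here is a minimal sketch of what that buys, using a Fran-style behavior type as the semantic model (the helper names are illustrative, not any actual API):

    type Time = Double
    type Behavior a = Time -> a       -- continuous-time semantic model

    -- Time transformation composes exactly: no resampling, no artifacts.
    timeTransform :: Behavior a -> (Time -> Time) -> Behavior a
    timeTransform b f = b . f

    slowDown :: Behavior a -> Behavior a
    slowDown b = timeTransform b (/ 2)   -- play at half speed, exactly

    -- Discreteness enters only at the output step:
    sampleAt :: [Time] -> Behavior a -> [a]
    sampleAt ts b = map b ts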

As another example, consider the data types in a 3D graphics API. In the end, all graphics is pixels, isn’t it? So what gap is left in a pixel-oriented API that doesn’t address higher-level continuous notions like triangles or curved surfaces? (Hint: it’s not just speed.)

One could go further than strings and pixels, and say that “in the end” my data types will end up as phosphors or electrical impulses, so programming at those levels is perfectly adequate. Again, composability would suffer.

Another example is functional vs imperative programming. It’s all side-effects in the end. Functional programming excels in composability, as explained and illustrated by John Hughes in Why Functional Programming Matters. And I’m not just talking about pure functional programming, but about the availability of compound expressions, as introduced (controversially) in Fortran, even though, in the end, machines just execute sequences of atomic, side-effecting statements.

Another example, and really the heart of John’s paper, is finite vs infinite data structures. We only access a finite amount of data in the end. However, allowing infinite data structures in the middle makes for a much more composable programming style.
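
The standard Haskell illustration: an infinite structure is defined compositionally in the middle, and finiteness is imposed only at extraction.

    -- An infinite list of primes (the classic, simple sieve):
    primes :: [Integer]
    primes = sieve [2 ..]
      where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    -- Only the consumer decides how much is needed "in the end":
    firstTen :: [Integer]
    firstTen = take 10 primes   -- [2,3,5,7,11,13,17,19,23,29]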

Some unsolicited advice to us all: Next time you see someone doing something and you don’t understand their underlying motivation, ask them. Many issues are not immediately obvious, so don’t be shy! Reading papers can help as well.

For more on continuous time:

Edits:

  • 2010-01-03: Trimmed & tweaked the unsolicited advice. Hopefully less peevish/patronizing now. Thanks for the feedback.
  • 2010-01-07: Trimmed quote.

16 Comments

  1. Mattias Bengtsson:

    Nice post. Perhaps there is some pattern here that makes the kind of bad argument he’s using easy to spot in other contexts.

  2. conal:

    Thanks, Mattias. I’m with you. Let’s spell out patterns of broken reasoning that lead to poor conclusions, and for just the reason you mention — to make it easy to spot in other contexts.

    The pattern I see in its general form is “In the end, …”, as a would-be argument for programming library or language design. The big flaw I see in this general pattern is that it assumes “the end” (output or execution substrate) to be useful in driving the design. As I said above, “However, programming is more about the middle than the end, i.e., more about composition than about output.”

  3. Roly Perera:

    And I guess the point is that you never really need to reach “the end”. (It really is all about composition.) Every software system is part of some other system, right? (Well, the universe might be a genuinely closed system, but most real-world applications are properly contained by the universe.) What looks like imperative output can be just what you observe at the boundary between two subsystems.

    I think this is related to Peter Wegner’s claim that non-interactivity isn’t preserved by the operation of taking subsystems. Decomposition exposes interaction boundaries, which look imperative up close.

  4. conal:

    Roly wrote:

    … you never really need to reach “the end”. (It really is all about composition.)

    I wasn’t planning to go this far in this particular post, because I didn’t want to require readers to take such a holistic/big-picture perspective, and so lose readers attached to a “pragmatic”/narrower perspective.

    Now that you’ve spilled the beans, however: Yes!!

    What looks like imperative output can be just what you observe at the boundary between two subsystems.

    Yes again!! See upcoming post. Edit: Now at Can functional programming be liberated from the von Neumann paradigm?

  5. Dan P:

    Automatic differentiation is itself a great example of why it can be good to use things that you don’t need “in the end”. You can always approximate derivatives with finite differences. But it turns out, for many problems, to be much easier to go to the limit. Curiously, that example is also about composability as well. You’ve hit on an interesting general principle.
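
    To make that concrete, here is a minimal dual-number sketch of forward-mode AD (illustrative code, not any particular library), next to the finite-difference approximation it replaces:

        -- Forward-mode AD with dual numbers: exact derivatives, compositionally.
        data Dual = Dual Double Double   -- value paired with derivative

        instance Num Dual where
          Dual x x' + Dual y y' = Dual (x + y) (x' + y')
          Dual x x' * Dual y y' = Dual (x * y) (x' * y + x * y')
          negate (Dual x x')    = Dual (negate x) (negate x')
          abs    (Dual x x')    = Dual (abs x) (x' * signum x)
          signum (Dual x _)     = Dual (signum x) 0
          fromInteger n         = Dual (fromInteger n) 0

        -- Exact: deriv (\x -> x * x) 3 == 6
        deriv :: (Dual -> Dual) -> (Double -> Double)
        deriv f x = d  where  Dual _ d = f (Dual x 1)

        -- Approximate: the finite difference that AD takes to the limit
        derivApprox :: Double -> (Double -> Double) -> (Double -> Double)
        derivApprox h f x = (f (x + h) - f x) / h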

  6. Conal Elliott » Blog Archive » Communication, curiosity, and personal style:

    […] without inquiry/curiosity, and hence some peevishness in some remarks in my previous post, Why program with continuous time?. See Fostering creativity by relinquishing the obvious for related […]

  7. conal:

    Update: I trimmed & tweaked the unsolicited advice near the end of the post. Hopefully less peevish/patronizing now. My thanks for the feedback on the reddit thread for this post. I was reacting out of a particular sensitivity (and matching personal value) of mine. See Communication, curiosity, and personal style.

  8. Conal Elliott » Blog Archive » Can functional programming be liberated from the von Neumann paradigm?:

    […] my recent post, Why program with continuous time?, I had suggested that the abstraction of continuous time is useful even if “in the end all […]

  9. Josef Svenningsson:

    Conal. I’ve been meaning to ask you this question for some time.

    You advocate a “semantic” or “denotational” way of designing things, which establishes the nature of the problem we are trying to solve. One can also say that it acts as a specification for the problem. But you go further than using the semantics as a specification: you have demonstrated several times how to derive an actual computable implementation from the semantics.

    I wonder if you could elaborate on the case when deriving an implementation from the semantics inherently requires some form of approximation. It seems that in all your examples you haven’t had to deal with that question. But I have a background in program analysis. In that field it is also a good idea to establish the underlying semantic property one wishes to establish about programs before embarking on designing the actual program analysis. But one can never hope to derive a computable analysis from the semantic property. There is an inherent need for introducing some form of approximation in order to achieve that.

    One answer could be Abstract Interpretation. But it doesn’t give us a particular approximation, it only tells us what an approximation should/could look like.

    Anyway, I’d like to hear your thoughts on the matter.

  10. Josef Svenningsson:

    On second thought I want to revise some of the things I said above.

    You do indeed deal with approximations in some of your semantic designs. For instance, sampling continuous time to make it discrete. I suppose that is one example of what I was looking for.

    Still, given a denotational semantics, it might not be obvious how and where to introduce the required approximations to derive an effectively computable implementation.

  11. conal:

    Hi Josef,

    I’ve been thinking about approximation and denotational design a fair bit lately, exploring two different paths. One is to work with exact values, and then allow any finite amount of information to be extracted. We play this game as a matter of course in lazy functional programming, e.g., with infinite lists & trees. Additional examples are images and surfaces, defined over infinite & continuous space, and behaviors, defined over infinite & continuous time. Similarly, I recently realized a simple way to perform exact numeric integration.
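
    A hedged sketch of this first path (illustrative names, not a worked-out design): represent an exact value as an infinite stream of shrinking rational intervals, from which any finite precision can be extracted.

        import Data.Ratio ((%))

        type Interval = (Rational, Rational)
        type Exact    = [Interval]   -- nested intervals, widths shrinking to 0

        -- Example: 1/3 as ever-tighter decimal brackets
        third :: Exact
        third = [ ((10 ^ k `div` 3) % 10 ^ k, (10 ^ k `div` 3 + 1) % 10 ^ k)
                | k <- [(1 :: Integer) ..] ]

        -- Extract a finite approximation, good to within eps:
        toPrecision :: Rational -> Exact -> Rational
        toPrecision eps xs = head [ (a + b) / 2 | (a, b) <- xs, b - a < eps ]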

    Another path is to compute only inexactly and use the ideal denotational semantics in order to quantify precisely the distance between ideal and implementation. With this path, we can still speak precisely about the accuracy of our implementations, and about choices that improve or degrade accuracy (no hand-waving needed).
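
    And a similarly hedged sketch of the second path: carry an approximation together with an explicit bound on its distance from the ideal value, so accuracy claims stay precise (floating-point rounding ignored here for brevity):

        -- An approximate value with a bound on its distance from the ideal:
        data Approx = Approx { value :: Double, errBound :: Double }

        addA :: Approx -> Approx -> Approx
        addA (Approx x ex) (Approx y ey) = Approx (x + y) (ex + ey)

        scaleA :: Double -> Approx -> Approx
        scaleA c (Approx x ex) = Approx (c * x) (abs c * ex)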

  12. James McCartney:

    The biggest problem in my mind with continuous time is that the conversion of a signal from continuous time to discrete time is non-local. The sinc function has infinite extent. Simply sampling a non-bandlimited continuous-time signal will cause aliasing.

  13. conal:

    Hi James,

    Thanks for raising this issue about sinc. Perhaps the infinite extent is not necessarily a problem. Suppose we want to extract only finite information/precision in the end (e.g., 44K samples per second and 32 bits per channel per sample), while (in theory) computing the exactly correct signal in the middle (during composition). What I really mean is that the extracted approximation must agree with the exact value to the precision of the approximation. Now the trick is to implement the composition steps finitely (and better yet, efficiently) and correctly.

    Although sinc has infinite extent, I’d guess that its influence on the finite information eventually extracted is (provably) finite. So, with some cleverness, an implementation could probably get away with carrying forward only finite information.
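
    To make that guess concrete, here is a hedged sketch: reconstruct with a truncated sinc, keeping only the samples nearest the output time, so each extracted value depends on finitely many inputs; the truncation error shrinks as the window grows (parameters are illustrative):

        sinc :: Double -> Double
        sinc 0 = 1
        sinc x = sin (pi * x) / (pi * x)

        -- Reconstruct a bandlimited signal at time t from samples s
        -- (sample period dt), using only the 2*n+1 taps nearest t:
        reconstruct :: Int -> Double -> (Int -> Double) -> Double -> Double
        reconstruct n dt s t =
          sum [ s k * sinc (t / dt - fromIntegral k)
              | k <- [k0 - n .. k0 + n] ]
          where k0 = round (t / dt)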

    What do you think?

  14. Peter Hancock:

    I’m inclined to agree with Conal that continuous time may not be, of itself, a problem. What strikes me as intensely problematic are non-continuous functions of time, such as step functions f(t) = if t < 0 then 0 else 1, “delta” functions, and so on. There are good reasons to think that no such function can be defined in a computable (i.e., constructive) way.

    Actually, “continuous time” seems like unfortunate language. The issue is whether time is the continuum, i.e., the real line. “Continuous” should be kept for functions between topological spaces. I could be talking out of the wrong orifice, but it seems to me that the only functions from the reals to a discrete space like {0,1} that can be defined constructively are precisely the constant functions. (But if we took time to be Baire space, which is homeomorphic to the irrationals, there are lots of non-constant continuous functions that can be defined constructively.)

    Oh, on the topic of grep, I think in one respect it is not a (total) function on infinite streams. After all, “grep a input” does not have such a value when the argument does not contain infinitely many ‘a’s.

  15. Peter T.:

    “For that reason, we don’t generally use strings in place of numbers. If we did, composition operations (arithmetic) would be very awkward.”

    As someone who has plumbed the depths of bigint optimization across multiple ISAs for elliptic curve cryptography, yes, it is indeed very awkward. :)

    But in the context of your analysis, wouldn’t decoupling arithmetic entirely from machine word size be the purest ADT?

  16. Conal:

    Peter T.: Yes. Why “But”?
