You start out by saying that all function futures should be created from simple futures, but of course caching futures violate that principle – at least if they themselves are not created from simple futures. And, at some point, you want to hook up actual external-world actions to get some of that reactivity the “R” in FRP is about, and caching futures are an obvious entry point.

Here’s the problem: caching futures themselves are well-behaved, but caching futures combined with mappend are not. Consider two caching futures `f1` and `f2`, where only `f1` gets triggered, at time `t0`. Mappend them to get future `f`. If you ask the (try-future version of) `f` at time `t' > t0`, you get `Some` answer. If you ask at `maxBound` (which is greater than `t'`), you get no answer, because the system waits for `f2`. So monotonicity is violated. That, of course, is also the issue with my version of `fToS`.
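The violation can be made concrete in a tiny executable model. Here `sampleAt` is a hypothetical encoding of what the combined try-future answers at each sample time, with `Nothing` standing for “no answer yet (blocked waiting for `f2`)”; the name, the `Int` time model, and the use of `Maybe` are illustrative assumptions, not from the post:

```haskell
-- Hypothetical model of sampling the mappend of two caching futures,
-- where f1 fired at t0 = 5 and f2 never fires.
type Time = Int

t0 :: Time
t0 = 5  -- when f1 fired

sampleAt :: Time -> Maybe (Time, String)
sampleAt t
  | t == maxBound = Nothing          -- must wait for f2, which never fires
  | t > t0        = Just (t0, "f1")  -- f2 known not to have fired before t
  | otherwise     = Nothing          -- neither occurrence is known yet

-- Monotonicity would require: once sampleAt t is Just, sampleAt t'
-- stays Just for every t' >= t.  It fails here:
main :: IO ()
main = do
  print (sampleAt 10)        -- Just (5,"f1")
  print (sampleAt maxBound)  -- Nothing, although maxBound > 10
```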

The deeper problem, I think, is the fact that the event occurrences the system talks about really refer to the programmatic event (i.e. the `SampleVar` being filled), rather than *knowledge* about an external event, which only becomes available after it has actually happened. (How do you mappend two futures knowing that they’ll become known only after the timestamps that will be attached to them?)

All solutions I’ve come up with for this require going back to some notion of partial time. It can be pushed into a smaller corner than before, but it’s still there.

]]>
    fToS f = p (unFuture f maxBound)
      where p Nothing   = (maxBound, undefined)
            p (Some sf) = sf

?

Sure, you might have to wait, but that’s true in general of `F.FutureG`, no?

]]>Please note the second paragraph in this post. What I’m going for here is a simple & precise semantics and a faithful implementation. If I understand your suggestion, it would be a simple implementation but I doubt it would be faithful to a simple & precise semantics. Maybe it could be faithful under certain assumptions about single- vs multi-threading, what it means for events to fire (in the imperative substrate), and how the resulting assignments interact with the mechanics of functional evaluation in the Haskell run-time system. I imagine following this path would require some very complicated reasoning and would yield fairly restrictive results. If anyone does try to apply rigorous reasoning to your suggestion, I’d be very interested in reading about the results.

If you ignore the “caching futures” aspect of this post, I hope you can see the answer to your question of sampling into the far future.

In the caching section, I’m reaching for general principles for extending the laziness machinery that we already know & love, while keeping its semantic purity. What justifies claiming purely functional semantics in the presence of these side-effects? In other words, what makes some assignments (semantically) benign and others malignant?

In brief, assignment is (semantically) *benign* when the new value is semantically equal to the old value.
A benign assignment is *beneficial* when it improves some sort of efficiency.
As an example, to implement laziness, the run-time system performs thunk overwriting.
The old thunk and the new value (whnf) are semantically equal.
The benefit is improved speed of access, perhaps at the cost of space.
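As a user-level analogue of that run-time mechanism, here is a minimal sketch of a benign, beneficial assignment: a cell overwritten at most once, with a value semantically equal to what it already denoted. The name `memoCell` and the `IORef` encoding are illustrative assumptions, not part of the post:

```haskell
import Data.IORef

-- `memoCell compute` yields an action whose first run evaluates
-- `compute` and overwrites the cell with the result; later runs
-- reuse it.  Benign: old and new contents denote the same value.
-- Beneficial: later reads skip the recomputation.
memoCell :: IO Integer -> IO (IO Integer)
memoCell compute = do
  cache <- newIORef Nothing
  pure $ do
    cached <- readIORef cache
    case cached of
      Just v  -> pure v                 -- already overwritten
      Nothing -> do
        v <- compute                    -- evaluate once
        writeIORef cache (Just v)       -- the benign assignment
        pure v

main :: IO ()
main = do
  get <- memoCell (pure (sum [1 .. 1000000]))
  a <- get   -- computes and assigns
  b <- get   -- reuses the assigned value
  print (a == b && a == 500000500000)  -- True
```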

I would like to explore the space of beneficial, benign assignments much more thoroughly than this one special case of thunk overwriting.

]]>What exactly does your code do when you are trying to sample the far future? I don’t understand where the magic future function comes from that can handle this case, but I assume this is what the (imperative?) framework should provide. And again, if we never want to sample with a future time, how is this different from an `IORef` after all, and why is it better?

Gergely

]]>Thanks for the great suggestion! It also feeds very nicely into relative time:

    order :: Future a -> Future a -> Future (a, Future a)
    order (t,a) (t',b)
      | t <= t'   = (t , (a, (t' - t, b)))
      | otherwise = (t', (b, (t - t', a)))

Then use this `order` function to define `mappend` on events (temporal interleaving).

    order :: Future a -> Future a -> Future (a, Future a)
    order (t,a) (t',b)
      | t <= t'   = (t , (a, (t', b)))
      | otherwise = (t', (b, (t , a)))

So that the rest of the reactive code can be independent of your representation.
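To see the relative-time `order` in action, here is a throwaway concrete representation: `Future` as a time/value pair with `Int` times, an assumption for illustration only:

```haskell
type Time = Int
type Future a = (Time, a)  -- illustrative concrete representation

-- Relative-time order: the later future's time is re-expressed
-- relative to the earlier occurrence.
order :: Future a -> Future a -> Future (a, Future a)
order (t, a) (t', b)
  | t <= t'   = (t , (a, (t' - t, b)))
  | otherwise = (t', (b, (t  - t', a)))

main :: IO ()
main = print (order (3, "x") (7, "y"))
-- (3,("x",(4,"y"))): "y" occurs 4 steps after "x"
```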

Oh, and this looks like a nice lead, by the way.

]]>I think what makes these futures absolute-time is the use of the `Max` monoid. Switching to the `Sum` monoid gives relative time. Moreover, the definition of `until` needs some tweaking.

Absolute version:

    (b `until` Future (t',b')) `at` t = b'' `at` t
      where b'' = if t <= t' then b else b'

The relative version has only a small change:

    (b `until` Future (t',b')) `at` t = b'' `at` t
      where b'' = if t <= t' then b else pad t' b'

where the `pad` function prepends a segment of ⊥ to a behavior (or any `Segment` type). (For lists, `pad n == (replicate n undefined ++)`.)

Equivalently,

Using the ideas in *Sequences, streams, and segments* and in *Sequences, segments, and signals*, we get a much more elegant definition:
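The code for that definition did not survive in this copy of the comment. Judging from the later remark about a `take`/`mappend` version, it presumably expresses `until` as truncation followed by concatenation; here is a plausible list-model sketch, where the `Behavior`-as-list representation and the name `untilL` are assumptions:

```haskell
-- List model: a behavior is a list of samples, one per time step.
-- With relative time, `until` is just: keep t' samples of b, then
-- continue with b' (list mappend is concatenation).
type Time = Int
type Behavior a = [a]

untilL :: Behavior a -> (Time, Behavior a) -> Behavior a
untilL b (t', b') = take t' b `mappend` b'

main :: IO ()
main = print (untilL (repeat 0) (3, [100, 101, 102]))
-- [0,0,0,100,101,102]
```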

This formulation is what inspired those two blog posts and my renewed interest in relative time FRP.

With function futures or caching futures, `until` would look like the following for absolute time:

    (b `until` u) `at` t = b'' `at` t
      where b'' = case tryFuture u t of
                    Nothing              -> b
                    Just (Future (_,b')) -> b'

For relative time,

    (b `until` u) `at` t = b'' `at` t
      where b'' = case tryFuture u t of
                    Nothing               -> b
                    Just (Future (t',b')) -> pad t' b'
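Under a list model of behaviors, the relative-time clause can be run end to end. Here `tryFuture` is a pure stand-in that reveals the occurrence once the sample time reaches it; the representation and the names `untilB` and `at` are assumptions for illustration:

```haskell
type Time = Int
type Behavior a = [a]  -- one sample per time step

at :: Behavior a -> Time -> a
at = (!!)

-- Prepend t' bottom samples, so b' is addressed in relative time.
pad :: Time -> Behavior a -> Behavior a
pad t' b' = replicate t' undefined ++ b'

-- Pure stand-in: the occurrence (t', b') is "known" once t >= t'.
tryFuture :: (Time, Behavior a) -> Time -> Maybe (Time, Behavior a)
tryFuture (t', b') t = if t >= t' then Just (t', b') else Nothing

untilB :: Behavior a -> (Time, Behavior a) -> Time -> a
untilB b u t = b'' `at` t
  where b'' = case tryFuture u t of
                Nothing       -> b
                Just (t', b') -> pad t' b'

main :: IO ()
main = print [ untilB (repeat 0) (3, [100, 101, 102]) t | t <- [0 .. 5] ]
-- [0,0,0,100,101,102]
```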

I don’t know how to extend the more elegant `take`/`mappend` version to use `tryFuture`.