I wonder whether it is possible (or even makes sense) to apply the Banach fixed-point theorem to the continuous-time model.

In the time domain, differentiation and integration are difficult to compute, and composing systems involves the usual gymnastics. In the s-domain, however, a differentiator is just D(s) = s, an integrator is I(s) = 1/s, and composition is just multiplication.

The point of all this is that the s-domain somehow encapsulates all derivatives and integrals into one complex number, allowing any causal system to be defined as a simple Complex -> Complex function. This seems to be what you are looking for, specifically the “behaviors are some sort of function with all of its derivatives” part.
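For concreteness, here is a minimal Haskell sketch of the s-domain picture described above (the type and function names are my own, purely illustrative):

```haskell
import Data.Complex

-- A transfer function: one Complex -> Complex value per system.
type Transfer = Complex Double -> Complex Double

differentiator, integrator :: Transfer
differentiator s = s      -- D(s) = s
integrator     s = 1 / s  -- I(s) = 1/s

-- Composing systems in series is just pointwise multiplication.
series :: Transfer -> Transfer -> Transfer
series g h s = g s * h s
```

Differentiating and then integrating cancels out: `series differentiator integrator` is the identity transfer function (1 at every nonzero s).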

I have been trying to apply these control-theory ideas to FRP ever since I discovered FRP. The kinks are in a) connecting s-domain transfer functions to time-domain data streams (I think MATLAB/Simulink does it somehow), b) extending the concept to arbitrary data (not just vectors of complex numbers), and c) somehow coping with dynamically determined transfer-function networks. If you could work out these issues, I think you would have the right model for FRP.

The Laplace transform seems to deal with discontinuities fairly well; the Laplace transform of the unit step delayed by a is just Del(s) = exp(-a*s)/s, though again, how to connect this to the outside world is unknown to me.
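That transform can be sanity-checked numerically for real s with a crude Riemann sum (a rough sketch; `laplace` and `step` are my own names, and the truncation at t = 50 and the step size are arbitrary):

```haskell
-- Crude numerical Laplace transform, integrating f(t) * exp(-s*t)
-- over [0, 50] with a fixed step size.
laplace :: (Double -> Double) -> Double -> Double
laplace f s = sum [ f t * exp (-s * t) * dt | t <- [0, dt .. 50] ]
  where dt = 0.001

-- The unit step delayed by a: u(t - a).
step :: Double -> Double -> Double
step a t = if t >= a then 1 else 0
```

For example, `laplace (step 1) 2` agrees with `exp (-2) / 2` to within the discretization error.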

I think it would be a good idea to look into control theory for some ideas, even if the Laplace transform and the s-domain can't be used directly. The concept of using arbitrary access to past and future (at least in an abstract denotational sense) and then encapsulating it into some parameter might be useful for FRP.

I hope this is helpful. Good Luck.

```haskell
data Reaction t a b = Reaction [(t,b)] ((t,a) -> Reaction t a b)
```

t is the type of time. A Reaction consists of [(t,b)], the events that would come out if no events went in (where each t is the time since the previous event, or since this reaction began in the case of the first event), and ((t,a) -> Reaction t a b), how it responds to an event arriving after a given amount of time.
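One way to read this operationally is as a small interpreter (a sketch of my own interpretation, not from the post: `run` and its rule of discarding pending outputs once an input arrives are assumptions):

```haskell
data Reaction t a b = Reaction [(t, b)] ((t, a) -> Reaction t a b)

-- Feed timed inputs to a Reaction. Outputs scheduled to fire before
-- the next input arrives actually fire; the rest are discarded when
-- the continuation takes over. All times are deltas since the
-- previous event.
run :: (Ord t, Num t) => Reaction t a b -> [(t, a)] -> [(t, b)]
run (Reaction outs _) []              = outs
run (Reaction outs k) ((dt, a) : es) = fired ++ run (k (dt, a)) es
  where
    fired = takeUntil dt outs
    takeUntil _ [] = []
    takeUntil budget ((d, b) : rest)
      | d <= budget = (d, b) : takeUntil (budget - d) rest
      | otherwise   = []
```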

I’m tentatively considering continuous time as events of behaviors, with behaviors represented as (t -> b) functions. When an event happens, a continuous behavior begins; when the next event happens, a new continuous behavior begins. This does allow one to use the future and the past, but only as though no events happened or will happen. The way I see it, this is similar to Conal’s idea of nature: from a tower of derivatives you can recover the behavior function to high precision at any time, but not through discontinuities. If a stone is thrown, we know where it will land, unless a bird grabs it.
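The stone example, in the (t -> b) representation (a sketch; the names and the particular numbers are mine):

```haskell
-- A behavior is just a function of time.
type Behavior t b = t -> b

-- A thrown stone's height, starting at 100 with gravity 9.8. Between
-- events we can query this one behavior's past and future freely; a
-- discontinuity (the bird) would arrive as a new event carrying a new
-- behavior.
height :: Behavior Double Double
height t = 100 - 0.5 * 9.8 * t * t
```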

You write:

> Instead, the abstraction is a signal transformer, SF a b, whose semantics is (T->a) -> (T->b). See Genuinely Functional User Interfaces and Functional Reactive Programming, Continued.

Note that the Yampa papers always insisted this was just a conceptual definition to convey the basic intuitions, a first approximation: Yampa’s signal functions were always *causal* by construction, which the FRP Continued paper does state explicitly, and the reason was precisely to rule out the “junk”, i.e. the signal functions we cannot hope to implement in a reactive setting where the input is only revealed as time progresses. The approximate nature of this intuitive definition of signal functions was made even more explicit in later papers by using the “approximately equal” symbol in the definition, and still later papers by Neil Sculthorpe and myself have elaborated further on the point of causality (and other useful temporal properties).
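Against the conceptual semantics, the “junk” is easy to exhibit (a sketch; these particular signal functions are my own examples, not from the papers):

```haskell
type Time = Double
type SF a b = (Time -> a) -> (Time -> b)  -- conceptual semantics only

-- Causal: the output at time t looks only at the input at times <= t,
-- so a reactive implementation is possible.
delayBy :: Time -> a -> SF a a
delayBy d x0 f t = if t < d then x0 else f (t - d)

-- Non-causal "junk": the output at time t peeks at the input's
-- future, which no reactive implementation could realize.
peekAhead :: Time -> SF a a
peekAhead d f t = f (t + d)
```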

```
(val1, val2, True) :- (val2, val1, False) `x`

(textBox, textBox, button).
```

where you unify forwards in time, and the program makes some attempt to use the latest state possible for any given unification.

I suppose it’s not really FRP if you do it that way, though…
