I wonder if it is possible (or makes sense even) to apply the Banach fixed-point theorem to the continuous time model.

---

In the time domain, differentiation and integration are difficult to compute, and composing systems involves the usual gymnastics. In the s-domain, however, a differentiator is just D(s) = s, an integrator is I(s) = 1/s, and composition is just multiplication.

The point of all this is that the s-domain somehow encapsulates all derivatives and integrals into one complex number, allowing any causal system to be defined as a simple Complex -> Complex function. This seems to be what you are looking for, specifically the “behaviors are some sort of function with all of its derivatives” part.
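To make the "composition is just multiplication" point concrete, here is a minimal sketch (my own illustration, not from the comment above): model a transfer function as a Complex -> Complex map, so that composing systems in series is just pointwise multiplication.

```
import Data.Complex

-- A transfer function in the s-domain.
type TransferFn = Complex Double -> Complex Double

differentiator :: TransferFn
differentiator s = s          -- D(s) = s

integrator :: TransferFn
integrator s = 1 / s          -- I(s) = 1/s

-- Series composition: multiplication in the s-domain.
series :: TransferFn -> TransferFn -> TransferFn
series f g = \s -> f s * g s
```

As expected, `series differentiator integrator` is the identity wherever it is defined (s /= 0), mirroring the fact that differentiating an integral gives back the original signal.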

I have been trying to apply these control-theory ideas to FRP ever since I discovered FRP. The kinks are in: a) connecting s-domain transfer functions to time-domain data streams (I think MATLAB’s Simulink does it somehow); b) extending the concept to arbitrary data (not just vectors of complex numbers); and c) somehow coping with dynamically determined transfer-function networks. If you could work out these issues, I think you would have the right model for FRP.

The Laplace transform seems to deal with discontinuities fairly well; the Laplace transform of the unit step delayed by a is just Del(s) = exp(-as)/s, though again, how to connect this to the outside world is unknown to me.

I think it would be a good idea to look into control theory for some ideas, even if the Laplace transform and the s-domain can’t be used directly. The concept of using arbitrary access to past and future (at least in an abstract denotational sense) and then encapsulating it into some parameter might be useful for FRP.

I hope this is helpful. Good luck.

---

```
data Reaction t a b = Reaction [(t,b)] ((t,a) -> Reaction t a b)
```

`t` is the type of time. A `Reaction` contains `[(t,b)]`, the events that would come out if no events went in (where each `t` is the time that has passed since the last event, or since this reaction began in the case of the first event), and `((t,a) -> Reaction t a b)`, how it responds to an event after a certain amount of time.
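As a concrete inhabitant of this type (my own sketch, not from the comment): an echo machine that never emits spontaneously and, on each input event, immediately re-emits the value it received.

```
-- The Reaction type from above.
data Reaction t a b = Reaction [(t, b)] ((t, a) -> Reaction t a b)

-- echo: no spontaneous output events; each input (dt, x) is
-- immediately echoed back as an output at relative time 0.
echo :: Num t => Reaction t a a
echo = Reaction [] step
  where
    step (_dt, x) = Reaction [(0, x)] step
```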

I’m tentatively considering continuous time as events of behaviors, with behaviors represented as (t -> b) functions. When an event happens, a continuous behavior begins; when the next event happens, a new continuous behavior begins. This does allow one to use the future and the past, but only as though no events happened or will happen. The way I see it, this is similar to Conal’s idea of nature, since from a tower of derivatives you can get the behavior function to high precision at any time, but not through discontinuities. If a stone is thrown, we know where it will land, unless a bird grabs it.
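One way to make "events of behaviors" concrete (a sketch under names of my own choosing, not the author's code): a piecewise signal given by an initial behavior plus a list of switching occurrences, with each behavior sampled in its own local time.

```
type Behavior t b = t -> b

-- A piecewise signal: an initial behavior plus (switch time, new
-- behavior) occurrences, in increasing order of absolute time.
data Piecewise t b = Piecewise (Behavior t b) [(t, Behavior t b)]

-- Sample at absolute time t, giving each behavior its local time
-- (time elapsed since that behavior began).
at :: (Ord t, Num t) => Piecewise t b -> t -> b
at (Piecewise f0 switches) t = go 0 f0 switches
  where
    go t0 f [] = f (t - t0)
    go t0 f ((t1, g) : rest)
      | t < t1    = f (t - t0)
      | otherwise = go t1 g rest
```

For instance, a thrown stone could follow a parabola until a "bird" event at time 2 replaces its behavior with a constant.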

---

You write:

> Instead, the abstraction is a signal transformer, `SF a b`, whose semantics is `(T->a) -> (T->b)`. See *Genuinely Functional User Interfaces* and *Functional Reactive Programming, Continued*.

Note that the Yampa papers always insisted this was just a conceptual definition to convey the basic intuitions, a first approximation: Yampa’s signal functions were always *causal* by construction, which the FRP Continued paper does state explicitly, and the reason was precisely to rule out the “junk”, i.e. the signal functions we cannot hope to implement in a reactive setting where the input is only revealed as time progresses. The approximate nature of this intuitive definition of signal functions was made even more explicit in later papers by using the “approximately equal” symbol in the definition, and still later papers by Neil Sculthorpe and myself have elaborated further on the point of causality (and other useful temporal properties).
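The distinction is easy to see in the conceptual model itself (my own illustration, using the `(T->a) -> (T->b)` semantics quoted above): a pure delay is causal, while a lookahead inhabits the same type but could never be run reactively; that lookahead is exactly the kind of junk causality rules out.

```
type T = Double
type SF a b = (T -> a) -> (T -> b)

-- Causal: the output at time t depends only on input at times <= t.
delayBy :: T -> a -> SF a a
delayBy d x0 f t = if t < d then x0 else f (t - d)

-- "Junk": a perfectly good inhabitant of SF, but it reads the
-- future of its input, so no reactive implementation can exist.
lookahead :: SF a a
lookahead f t = f (t + 1)
```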

```
(val1, val2, True) :- (val2, val1, False) `x` (textBox, textBox, button).
```

where you unify forwards in time, and the program makes some attempt to use the latest state possible for any given unification.

I suppose it’s not really FRP if you do it that way, though…

---

Great to have people exploring this area.

No, I haven’t tried your game yet.

---

> Why `[b]` instead of `b`?

I’m using an alternative to arrows where merging machines yields a sum-type result rather than a product-type one. A list of a sum-type allows propagating an update to only some values, while with product-types it’s hard to tell which component has changed.
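A small sketch of the point (hypothetical names, mine): with a product you must re-emit the whole tuple on every step, while a list of a sum-type can carry just the side that changed.

```
-- Each element of the update list touches only one side of the
-- state; the untouched component is left alone.
applyUpdates :: (a, b) -> [Either a b] -> (a, b)
applyUpdates = foldl step
  where
    step (_, y) (Left x')  = (x', y)
    step (x, _) (Right y') = (x, y')
```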

> I’m not seeing continuous time here.

Yeah, you are right. Peaker was bugging me about it too. I should re-read your post about why continuous time matters, because you guys tend to be right about these things.

> Is the `Maybe` there so you know you’re in a final state?

Yes.

> If so, and if you don’t really need to know, you could simplify away the `Maybe` and have all inputs lead back to the same interactive behavior.

I do it because I support merging two machines by running them in sequence, and for that I need to know when the machine ends.

> Have you thought about how to combine machines that have different input types?

I combine these machines by having an input sum-type. I also have a counterpart to `Program` called `Backend`, which is merge-able too. My example game and font editor have inputs from and outputs to several backends (GLUT, file, network, etc.).
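A sketch of what such a sum-typed merge might look like (my guess at one possible formulation, not the actual peakachu code), using the `Program` type from above: route `Left` inputs to one machine and `Right` inputs to the other.

```
data Program a b = Program
  { progVals :: [b]
  , progMore :: Maybe (a -> Program a b)
  }

-- Run two Programs side by side on a sum-typed input; the merged
-- machine ends only when both components have ended.
both :: Program a c -> Program b c -> Program (Either a b) c
both p q = Program (progVals p ++ progVals q) more
  where
    p' = p { progVals = [] }  -- outputs already emitted above
    q' = q { progVals = [] }
    more = case (progMore p, progMore q) of
      (Nothing, Nothing) -> Nothing
      _                  -> Just step
    step (Left a)  = maybe (both p' q') (\k -> both (k a) q') (progMore p)
    step (Right b) = maybe (both p' q') (\k -> both p' (k b)) (progMore q)
```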

btw, did you try running the game?

---

> I think I may have a nice model that doesn’t allow future access and junk.

> ```
> data Program a b = Program
>   { progVals :: [b]
>   , progMore :: Maybe (a -> Program a b)
>   }
> ```

> It’s kinda a signal transformer though.

Looks like a Moore machine type but with multiple outputs instead of one. Why `[b]` instead of `b`? For a Mealy counterpart, see the `MutantBot` type at *Functional reactive partner dancing*. (More variations in other bot articles.)

I’m not seeing continuous time here. Maybe `Program` could be composed with a simple type of continuous, non-reactive time functions, as in *Push-pull functional reactive programming*.

Is the `Maybe` there so you know you’re in a final state? If so, and if you don’t really need to know, you could simplify away the `Maybe` and have all inputs lead back to the same interactive behavior.

Since your `Program` is not syntactic, maybe there’s a more fitting name to be found.

Have you thought about how to combine machines that have different input types?

---

I think I may have a nice model that doesn’t allow future access and junk.

```
data Program a b = Program
{ progVals :: [b]
, progMore :: Maybe (a -> Program a b)
}
```

It’s kinda a signal transformer though.

It’s available in the Hackage package “peakachu”.

You can also `cabal install DefendTheKing` for an example RTS game implemented with it.

---

Junk is exactly the stuff that the model (= semantics) has for which there is no interface (= syntax). So “junk-freeness” conflicts with selective exposure pretty much by definition.

Here is a trivial example of a semantics with junk. Say we have a syntax for addition expressions, Exp ::= NATURAL | Exp + Exp. If our denotation function has type Exp -> Integer and has the “standard” definition, then all the negative integers are junk, because our denotation function is not surjective. There are two main ways to fix this: 1) cut down the semantic domain, e.g. using the naturals instead of the integers, or 2) expand the syntax so that the extra elements, the junk, become expressible, e.g. by adding negation or subtraction. The latter route isn’t always possible; for example, if our semantic domain had been the reals, then there is no way to avoid junk without allowing infinite syntax.
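The example rendered as Haskell (a direct transcription, using `Natural` from `Numeric.Natural` for the NATURAL tokens):

```
import Numeric.Natural (Natural)

-- Exp ::= NATURAL | Exp + Exp
data Exp = Lit Natural | Add Exp Exp

-- The "standard" denotation into Integer. It is not surjective:
-- every negative integer is junk, unreachable from the syntax.
eval :: Exp -> Integer
eval (Lit n)   = toInteger n
eval (Add a b) = eval a + eval b

-- Fix 1 would shrink the semantic domain to Natural; fix 2 would
-- grow the syntax, e.g. with a Neg or Sub constructor, so the
-- negative integers become expressible.
```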

Junk is inversely proportional to precision; the more junk you have, the less precision your semantics has.
