<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Conal Elliott &#187; functional reactive programming</title>
	<atom:link href="http://conal.net/blog/tag/functional-reactive-programming/feed" rel="self" type="application/rss+xml" />
	<link>http://conal.net/blog</link>
	<description>Inspirations &#38; experiments, mainly about denotative/functional programming in Haskell</description>
	<lastBuildDate>Thu, 25 Jul 2019 18:15:11 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.17</generator>
	<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2F&amp;language=en_US&amp;category=text&amp;title=Conal+Elliott&amp;description=Inspirations+%26amp%3B+experiments%2C+mainly+about+denotative%2Ffunctional+programming+in+Haskell&amp;tags=blog" type="text/html" />
	<item>
		<title>Garbage collecting the semantics of FRP</title>
		<link>http://conal.net/blog/posts/garbage-collecting-the-semantics-of-frp</link>
		<comments>http://conal.net/blog/posts/garbage-collecting-the-semantics-of-frp#comments</comments>
		<pubDate>Mon, 04 Jan 2010 21:55:30 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[derivative]]></category>
		<category><![CDATA[design]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[semantics]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=96</guid>
		<description><![CDATA[Ever since ActiveVRML, the model we&#8217;ve been using in functional reactive programming (FRP) for interactive behaviors is (T-&#62;a) -&#62; (T-&#62;b), for dynamic (time-varying) input of type a and dynamic output of type b (where T is time). In &#8220;Classic FRP&#8221; formulations (including ActiveVRML, Fran &#38; Reactive), there is a &#8220;behavior&#8221; abstraction whose denotation is a [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: Garbage collecting the semantics of FRP

Tags: FRP, functional reactive programming, semantics, design, derivative

URL: http://conal.net/blog/posts/garbage-collecting-the-semantics-of-frp/

-->

<!-- references -->

<!-- teaser -->

<p>Ever since <a href="http://conal.net/papers/ActiveVRML/" title="Tech report: &quot;A Brief Introduction to ActiveVRML&quot;">ActiveVRML</a>, the model we&#8217;ve been using in functional reactive programming (FRP) for interactive behaviors is <code>(T-&gt;a) -&gt; (T-&gt;b)</code>, for dynamic (time-varying) input of type <code>a</code> and dynamic output of type <code>b</code> (where <code>T</code> is time).
In &#8220;Classic FRP&#8221; formulations (including <a href="http://conal.net/papers/ActiveVRML/" title="Tech report: &quot;A Brief Introduction to ActiveVRML&quot;">ActiveVRML</a>, <a href="http://conal.net/papers/icfp97/" title="paper">Fran</a> &amp; <a href="http://conal.net/papers/push-pull-frp/" title="Paper by Conal Elliott and Paul Hudak">Reactive</a>), there is a &#8220;behavior&#8221; abstraction whose denotation is a function of time.
Interactive behaviors are then modeled as host language (e.g., Haskell) functions between behaviors.
Problems with this formulation are described in <em><a href="http://conal.net/blog/posts/why-classic-FRP-does-not-fit-interactive-behavior/" title="blog post">Why classic FRP does not fit interactive behavior</a></em>.
These same problems motivated &#8220;Arrowized FRP&#8221;.
In Arrowized FRP, behaviors (renamed &#8220;signals&#8221;) are purely conceptual.
They are part of the semantic model but do not have any realization in the programming interface.
Instead, the abstraction is a <em>signal transformer</em>, <code>SF a b</code>, whose semantics is <code>(T-&gt;a) -&gt; (T-&gt;b)</code>.
See <em><a href="http://conal.net/papers/genuinely-functional-guis.pdf" title="Paper by Antony Courtney and Conal Elliott">Genuinely Functional User Interfaces</a></em> and <em><a href="http://www.haskell.org/yale/papers/haskellworkshop02/" title="Paper by Henrik Nilsson, Antony Courtney, and John Peterson">Functional Reactive Programming, Continued</a></em>.</p>
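<p>As a rough sketch, the two formulations can be written out as Haskell types (the names <code>Behavior</code>, <code>Interactive</code>, <code>at</code>, and <code>double</code> here are illustrative only, not taken from any particular library):</p>

```haskell
-- A sketch of the two semantic models, with illustrative (non-library) names.
type T = Double  -- time

-- Classic FRP: a behavior denotes a function of time, and interactive
-- behaviors are host-language functions between behaviors.
newtype Behavior a = Behavior { at :: T -> a }
type Interactive a b = Behavior a -> Behavior b

-- Arrowized FRP: the signal-transformer abstraction, whose semantics
-- is (T -> a) -> (T -> b).
newtype SF a b = SF { runSF :: (T -> a) -> (T -> b) }

-- Example: a transformer that doubles its numeric input signal.
double :: Num a => SF a a
double = SF (fmap (2 *))
```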

<p>Whether in its classic or arrowized embodiment, I&#8217;ve been growing uncomfortable with this semantic model of functions between time functions.
A few weeks ago, I realized that one source of discomfort is that this model is <em>mostly junk</em>.</p>

<p>This post contains some partially formed thoughts about how to eliminate the junk (&#8220;garbage collect the semantics&#8221;), and what might remain.</p>

<!--
**Edits**:

* 2009-02-09: just fiddling around
-->

<!-- without a comment or something here, the last item above becomes a paragraph -->

<p><span id="more-96"></span></p>

<p>There are two generally desirable properties for a denotational semantics: <em>full abstraction</em> and <em>junk-freeness</em>.
Roughly, &#8220;full abstraction&#8221; means we must not distinguish between what is (operationally) indistinguishable, while &#8220;junk-freeness&#8221; means that every semantic value must be denotable.</p>

<p>FRP&#8217;s semantic model, <code>(T-&gt;a) -&gt; (T-&gt;b)</code>, allows not only arbitrary (computable) transformation of input values, but also of time.
The output at some time can depend on the input at any time at all, or even on the input at arbitrarily many different times.
Consequently, this model allows responding to <em>future</em> input, violating a principle sometimes called &#8220;causality&#8221;, which says that outputs may depend on the past or present but not on the future.</p>
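<p>To make the junk concrete, here is a well-typed inhabitant of the model that violates causality (a made-up illustration, not from any FRP library):</p>

```haskell
-- A non-causal inhabitant of the model (T -> a) -> (T -> b):
-- the output at time t peeks at the input dt seconds into the future.
type T = Double

lookAhead :: T -> (T -> a) -> (T -> a)
lookAhead dt input = \t -> input (t + dt)
```

No implementation running in real time can realize <code>lookAhead</code>, yet the semantic model happily contains it.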

<p>In a causal system, the present can reach backward to the past but not forward to the future.
I&#8217;m uneasy about this ability as well.
Arbitrary access to the past may be much more powerful than necessary.
As evidence, consult the system we call (physical) Reality.
As far as I can tell, Reality operates without arbitrary access to the past or to the future, and it does a pretty good job at expressiveness.</p>

<p>Moreover, arbitrary past access is also problematic to implement in its semantically simple generality.</p>

<p>There is a thing we informally call &#8220;memory&#8221;, and although at first blush it may look like access to the past, it isn&#8217;t really.
Rather, memory is access to a <em>present</em> input, which has come into being through a process of filtering, gradual accumulation, and discarding (forgetting).
I&#8217;m talking about &#8220;memory&#8221; here in the sense of what our brains do, but also what all the rest of physical reality does.
For instance, weathering marks on a rock are part of the rock&#8217;s (present) memory of past weather.</p>

<p>A very simple memory-less semantic model of interactive behavior is just <code>a -&gt; b</code>.
This model is too restrictive, however, as it cannot support <em>any</em> influence of the past on the present.</p>
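<p>Concretely: lifting a pure function pointwise never lets earlier input affect present output, whereas integration does. A crude sketch, using left-endpoint Euler summation purely for illustration:</p>

```haskell
-- Pointwise lifting: the output at t depends only on the input at t.
pointwise :: (a -> b) -> (Double -> a) -> (Double -> b)
pointwise f input = f . input

-- Integration lets the past influence the present: the output at t
-- accumulates input over [0,t]. Crude Euler approximation, step h.
integral :: Double -> (Double -> Double) -> (Double -> Double)
integral h f t = h * sum [f s | s <- [0, h .. t - h]]
```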

<p>Which leaves a question: what is a simple and adequate formal model of interactive behavior that reaches neither into the past nor into the future, and yet still allows the past to influence the present?
Inspired in part by a design principle I call &#8220;what would reality do?&#8221; (WWRD), I&#8217;m happy to have some kind of infinitesimal access to the past, but nothing further.</p>

<p>My current intuition is that differentiation/integration plays a crucial role:
information is carried forward moment by moment in time as &#8220;momentum&#8221; in some sense.</p>

<blockquote>
  <p><em>I call intuition cosmic fishing. You feel a nibble, then you&#8217;ve got to hook the fish.</em> &#8211; Buckminster Fuller</p>
</blockquote>

<p>Where to go with these intuitions?</p>

<p>Perhaps interactive behaviors are some sort of function with all of its derivatives.
See <em><a href="http://conal.net/blog/posts/beautiful-differentiation/" title="blog post">Beautiful differentiation</a></em> for a specification and derived implementation of numeric operations, and more generally of <code>Functor</code> and <code>Applicative</code>, on which much of FRP is based.</p>
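<p>The &#8220;function with all of its derivatives&#8221; idea can be sketched as a derivative tower, roughly in the spirit of that post (a simplified fragment, not its full development):</p>

```haskell
-- A value together with all of its derivatives (simplified sketch).
data D a = D a (D a)

constD :: Num a => a -> D a
constD x = D x zero where zero = D 0 zero

-- The derivative tower of the identity function, sampled at x.
idD :: Num a => a -> D a
idD x = D x (constD 1)

instance Num a => Num (D a) where
  fromInteger             = constD . fromInteger
  D x x' + D y y'         = D (x + y) (x' + y')
  u@(D x x') * v@(D y y') = D (x * y) (x' * v + u * y')  -- product rule
  negate (D x x')         = D (negate x) (negate x')
  abs    = error "abs: omitted in this sketch"
  signum = error "signum: omitted in this sketch"
```

For example, sampling <code>x * x</code> at 3 yields value 9 and first derivative 6.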

<p>I suspect the whole event model can be replaced by integration.
Integration is the main remaining piece.</p>

<p>How weak a semantic model can let us define integration?</p>

<h3>Thanks</h3>

<p>My thanks to Luke Palmer and to Noam Lewis for some clarifying chats about these half-baked ideas.
And to the folks on #haskell IRC for <a href="http://tunes.org/~nef/logs/haskell/10.01.04">brainstorming titles for this post</a>.
My favorite suggestions were</p>

<ul>
<li>luqui: instance HasJunk FRP where</li>
<li>luqui: Functional reactive programming&#8217;s semantic baggage</li>
<li>sinelaw: FRP, please take out the trash!</li>
<li>cale: Garbage collecting the semantics of FRP</li>
<li>BMeph: Take out the FRP-ing Trash</li>
</ul>

<p>all of which I preferred over my original &#8220;FRP is mostly junk&#8221;.</p>
]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/garbage-collecting-the-semantics-of-frp/feed</wfw:commentRss>
		<slash:comments>34</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2Fgarbage-collecting-the-semantics-of-frp&amp;language=en_GB&amp;category=text&amp;title=Garbage+collecting+the+semantics+of+FRP&amp;description=Ever+since+ActiveVRML%2C+the+model+we%26%238217%3Bve+been+using+in+functional+reactive+programming+%28FRP%29+for+interactive+behaviors+is+%28T-%26gt%3Ba%29+-%26gt%3B+%28T-%26gt%3Bb%29%2C+for+dynamic+%28time-varying%29+input+of+type+a+and+dynamic+output...&amp;tags=derivative%2Cdesign%2CFRP%2Cfunctional+reactive+programming%2Csemantics%2Cblog" type="text/html" />
	</item>
		<item>
		<title>3D rendering as functional reactive programming</title>
		<link>http://conal.net/blog/posts/3d-rendering-as-functional-reactive-programming</link>
		<comments>http://conal.net/blog/posts/3d-rendering-as-functional-reactive-programming#comments</comments>
		<pubDate>Mon, 12 Jan 2009 05:38:58 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[3D]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[monoid]]></category>
		<category><![CDATA[semantics]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=75</guid>
		<description><![CDATA[I&#8217;ve been playing with a simple/general semantics for 3D. In the process, I was surprised to see that a key part of the semantics looks exactly like a key part of the semantics of functional reactivity as embodied in the library Reactive. A closer look revealed a closer connection still, as described in this post. [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: 3D rendering as functional reactive programming

Tags: 3D, semantics, FRP, functional reactive programming, monoid

URL: http://conal.net/blog/posts/3d-rendering-as-functional-reactive-programming/

-->

<!-- references -->

<!-- teaser -->

<p>I&#8217;ve been playing with a simple/general semantics for 3D.
In the process, I was surprised to see that a key part of the semantics looks exactly like a key part of the semantics of functional reactivity as embodied in the library <em><a href="http://haskell.org/haskellwiki/Reactive" title="Wiki page for the Reactive library">Reactive</a></em>.
A closer look revealed a closer connection still, as described in this post.</p>

<!--
**Edits**:

* 2008-02-09: just fiddling around
-->

<!-- without a comment or something here, the last item above becomes a paragraph -->

<p><span id="more-75"></span></p>

<h3>What is 3D rendering?</h3>

<p>Most programmers think of 3D rendering as executing a sequence of side-effects on a frame buffer or some other mutable array of pixels.
This way of thinking (sequences of side-effects) comes to us from the design of early sequential computers.
Although computer hardware architecture has evolved a great deal, most programming languages, and hence most programming thinking, are still shaped by this first sequential model.
(See John Backus&#8217;s Turing Award lecture <em><a href="http://www.stanford.edu/class/cs242/readings/backus.pdf" title="Turing Award lecture by John Backus">Can Programming Be Liberated from the von Neumann Style? A functional style and its algebra of programs</a></em>.)
The invention of monadic <em><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.2504" title="paper by Simon Peyton Jones and Philip Wadler">Imperative functional programming</a></em> allows Haskellers to think and program within the imperative paradigm as well.</p>

<p>What&#8217;s a <em>functional</em> alternative?
Rendering is a function from something to something else.
Let&#8217;s call these somethings (3D) &#8220;Geometry&#8221; and (2D) &#8220;Image&#8221;, where <code>Geometry</code> and <code>Image</code> are types of functional (immutable) values.</p>

<pre><code>type Rendering = Image Color

render :: Geometry -&gt; Rendering
</code></pre>

<p>To simplify, I&#8217;m assuming a fixed view.
What remains is to define what these two types <em>mean</em> and, secondarily, how to represent and implement them.</p>

<p>An upcoming post will suggest an answer for the meaning of <code>Geometry</code>.
For now, think of it as a collection of curved and polygonal surfaces, i.e., the <em>outsides</em> (boundaries) of solid shapes.
Each point on these surfaces has a location, a normal (perpendicular direction), and material properties (determining how light is reflected by and transmitted through the surface at the point).
The geometry will contain light sources.</p>

<p>Next, what is the meaning of <code>Image</code>?
A popular answer is that an image is a rectangular array of finite-precision encodings of color (e.g., with eight bits for each of red, blue, green and possibly opacity).
This answer leads to poor compositionality and to complex meanings for operations like scaling and rotation, so I prefer another model.
As in <a href="http://conal.net/Pan" title="project web page">Pan</a>, an image (the meaning of the type <code>Image Color</code>) is a function from infinite continuous 2D space to colors, where the <code>Color</code> type includes partial opacity.
For motivation of this model and examples of its use, see <em><a href="http://conal.net/papers/functional-images/" title="book chapter">Functional images</a></em> and the corresponding <a href="http://conal.net/Pan/Gallery" title="gallery of functional images">Pan gallery</a> of functional images.
<em>Composition</em> occurs on infinite &amp; continuous images.</p>
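<p>In types, the Pan-style model looks roughly like this (the <code>disk</code> example is mine, for illustration):</p>

```haskell
-- Pan-style images: functions over infinite, continuous 2D space.
type Point   = (Double, Double)
type Image a = Point -> a

-- Illustration: the unit disk as a Boolean image (a "region").
disk :: Image Bool
disk (x, y) = x * x + y * y <= 1
```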

<p>After all composition is done, the resulting image can be sampled into a finite, rectangular array of finite precision color encodings.
I&#8217;m talking about a conceptual/semantic pipeline.
The implementation computes the finite sampling without having to compute values for the entire infinite image.</p>

<p>Rendering has several components.
I&#8217;ll just address one and show how it relates to functional reactive programming (FRP).</p>

<h3>Visual occlusion</h3>

<p>One aspect of 3D rendering is <a href="http://en.wikipedia.org/wiki/Hidden_surface_determination">hidden surface determination</a>.
Relative to the viewer&#8217;s position and orientation, some 3D objects may be fully or partially occluded by nearer objects.</p>

<p>An image is a function of (infinite and continuous) 2D space, so specifying that function is determining its value at every sample point.
Each point can correspond to a number of geometric objects, some closer and some further.
If we assume for now that our colors are fully opaque, then we&#8217;ll need to know the color (after transformation and lighting) of the <em>nearest</em> surface point that is projected onto the sample point.
(We&#8217;ll remove this opacity assumption later.)</p>

<p>Let&#8217;s consider how we&#8217;ll combine two <code>Geometry</code> values into one:</p>

<pre><code>union :: Geometry -&gt; Geometry -&gt; Geometry
</code></pre>

<p>Because of occlusion, the <code>render</code> function cannot be compositional with respect to <code>union</code>.
If it were, then there would exist a function <code>unionR</code> such that</p>

<pre><code>forall ga gb. render (ga `union` gb) == render ga `unionR` render gb
</code></pre>

<p>In other words, to render a union of two geometries, we can render each and combine the results.</p>

<p>The reason we can&#8217;t find such a <code>unionR</code> is that <code>render</code> doesn&#8217;t let <code>unionR</code> know how close each colored point is.
A solution then is simple: add in the missing depth information:</p>

<pre><code>type RenderingD = Image (Depth, Color)  -- first try

renderD :: Geometry -&gt; RenderingD
</code></pre>

<p>Now we have enough information for compositional rendering, i.e., we can define <code>unionR</code> such that</p>

<pre><code>forall ga gb. renderD (ga `union` gb) == renderD ga `unionR` renderD gb
</code></pre>

<p>where</p>

<pre><code>unionR :: RenderingD -&gt; RenderingD -&gt; RenderingD

unionR im im' p = if d &lt;= d' then (d,c) else (d',c')
 where
   (d ,c ) = im  p
   (d',c') = im' p
</code></pre>

<p>When we&#8217;re done composing, we can discard the depths:</p>

<pre><code>render g = snd . renderD g
</code></pre>

<p>or, with <em><a href="http://conal.net/blog/posts/semantic-editor-combinators/" title="blog post">Semantic editor combinators</a></em>:</p>

<pre><code>render = (result.result) snd renderD
</code></pre>

<h3>Simpler, prettier</h3>

<p>The <code>unionR</code> is not very complicated, but still, I like to tease out common structure and reuse definitions wherever I can.
The first thing I notice about <code>unionR</code> is that it works pointwise.
That is, the value at a point is a function of the values of two other images at the same point.
The pattern is captured by <code>liftA2</code> on functions, thanks to the <code>Applicative</code> instance for functions.</p>

<pre><code>liftA2 :: (b -&gt; c -&gt; d) -&gt; (a -&gt; b) -&gt; (a -&gt; c) -&gt; (a -&gt; d)
</code></pre>

<p>So that</p>

<pre><code>unionR = liftA2 closer

closer (d,c) (d',c') = if d &lt;= d' then (d,c) else (d',c')
</code></pre>

<p>Or</p>

<pre><code>closer dc@(d,_) dc'@(d',_) = if d &lt;= d' then dc else dc'
</code></pre>

<p>Or even</p>

<pre><code>closer = minBy fst
</code></pre>

<p>where</p>

<pre><code>minBy f u v = if f u &lt;= f v then u else v
</code></pre>

<p>This definition of <code>unionR</code> is not only simpler, it&#8217;s quite a bit more general, as type inference reveals:</p>

<pre><code>unionR :: (Ord a, Applicative f) =&gt; f (a,b) -&gt; f (a,b) -&gt; f (a,b)

closer :: Ord a =&gt; (a,b) -&gt; (a,b) -&gt; (a,b)
</code></pre>

<p>Once again, simplicity and generality go hand-in-hand.</p>
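<p>For instance, the generalized <code>unionR</code> now works at other <code>Applicative</code>s with no new code (the <code>Maybe</code> example is mine):</p>

```haskell
import Control.Applicative (liftA2)

minBy :: Ord b => (a -> b) -> a -> a -> a
minBy f u v = if f u <= f v then u else v

-- The generalized union: at functions it is the pointwise depth test;
-- at Maybe it combines two optional depth/color samples.
unionR :: (Ord a, Applicative f) => f (a, b) -> f (a, b) -> f (a, b)
unionR = liftA2 (minBy fst)
```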

<h3>Another type class morphism</h3>

<p>Let&#8217;s see if we can make <code>union</code> rendering simpler and more inevitable.
Rendering is <em>nearly</em> a homomorphism.
That is, <code>render</code> nearly distributes over <code>union</code>, but we have to replace <code>union</code> by <code>unionR</code>.
I&#8217;d rather eliminate this discrepancy, ending up with</p>

<pre><code>forall ga gb. renderD (ga `op` gb) == renderD ga `op` renderD gb
</code></pre>

<p>for some <code>op</code> that is equal to <code>union</code> on the left and <code>unionR</code> on the right.
Since <code>union</code> and <code>unionR</code> have different types (with neither being a polymorphic instance of the other), <code>op</code> will have to be a method of some type class.</p>

<p>My favorite binary method is <code>mappend</code>, from <code>Monoid</code>, so let&#8217;s give it a try.
<code>Monoid</code> requires there also to be an identity element <code>mempty</code> and that <code>mappend</code> be associative.
For <code>Geometry</code>, we can define</p>

<pre><code>instance Monoid Geometry where
  mempty  = emptyGeometry
  mappend = union
</code></pre>

<p>Images with depth are a little trickier.
Image already has a <code>Monoid</code> instance, whose semantics is determined by the principle of <a href="http://conal.net/blog/tag/type-class-morphism/" title="Posts on type class morphisms">type class morphisms</a>, namely</p>

<blockquote>
  <p><em>The meaning of an instance is the instance of the meaning</em></p>
</blockquote>

<p>The meaning of an image is a function, and functions have a <code>Monoid</code> instance:</p>

<pre><code>instance Monoid b =&gt; Monoid (a -&gt; b) where
  mempty = const mempty
  f `mappend` g = \ a -&gt; f a `mappend` g a
</code></pre>

<p>which simplifies nicely to a standard form, by using the <code>Applicative</code> instance for functions.</p>

<pre><code>instance Applicative ((-&gt;) a) where
  pure      = const
  hf &lt;*&gt; xf = \ a -&gt; (hf a) (xf a)

instance Monoid b =&gt; Monoid (a -&gt; b) where
  mempty  = pure   mempty
  mappend = liftA2 mappend
</code></pre>

<p>We&#8217;re in luck.
Since we&#8217;ve defined <code>unionR</code> as <code>liftA2 closer</code>, we just need it to turn out that <code>closer == mappend</code> and that <code>closer</code> is associative and has an identity element.</p>

<p>However, <code>closer</code> is defined on pairs, and the standard <code>Monoid</code> instance on pairs doesn&#8217;t fit.</p>

<pre><code>instance (Monoid a, Monoid b) =&gt; Monoid (a,b) where
  mempty = (mempty,mempty)
  (a,b) `mappend` (a',b') = (a `mappend` a', b `mappend` b')
</code></pre>

<p>To avoid this conflict, define a new data type to be used in place of pairs.</p>

<pre><code>data DepthG d a = Depth d a  -- first try
</code></pre>

<p>Alternatively,</p>

<pre><code>newtype DepthG d a = Depth { unDepth :: (d,a) }
</code></pre>

<p>I&#8217;ll go with this latter version, as it turns out to be more convenient.</p>

<p>Then we can define our monoid:</p>

<pre><code>instance (Ord d, Bounded d) =&gt; Monoid (DepthG d a) where
  mempty  = Depth (maxBound,undefined)
  Depth p `mappend` Depth p' = Depth (p `closer` p')
</code></pre>

<p>The second method definition can be simplified nicely</p>

<pre><code>  mappend = inDepth2 closer
</code></pre>

<p>where</p>

<pre><code>  inDepth2 = unDepth ~&gt; unDepth ~&gt; Depth
</code></pre>

<p>using the ideas from <em><a href="http://conal.net/blog/posts/prettier-functions-for-wrapping-and-wrapping/" title="blog post">Prettier functions for wrapping and wrapping</a></em> and the notational improvement from Matt Hellige&#8217;s <em><a href="http://matt.immute.net/content/pointless-fun" title="blog post by Matt Hellige">Pointless fun</a></em>.</p>

<h3>FRP &#8212; Future values</h3>

<p>The <code>Monoid</code> instance for <code>Depth</code> may look familiar to you if you&#8217;ve been following along with my <a href="http://conal.net/blog/tag/future-value/" title="Posts on futures values">future value</a>s or have read the paper <em><a href="http://conal.net/papers/simply-reactive" title="Paper: &quot;Simply efficient functional reactivity&quot;">Simply efficient functional reactivity</a></em>.
A <em>future value</em> has a time and a value.
Usually, the value cannot be known until its time arrives.</p>

<pre><code>newtype FutureG t a = Future (Time t, a)

instance (Ord t, Bounded t) =&gt; Monoid (FutureG t a) where
  mempty = Future (maxBound, undefined)
  Future (s,a) `mappend` Future (t,b) =
    Future (s `min` t, if s &lt;= t then a else b)
</code></pre>

<p>When we&#8217;re using a non-lazy (flat) representation of time, this <code>mappend</code> definition can be written more simply:</p>

<pre><code>  mappend = minBy futTime

  futTime (Future (t,_)) = t
</code></pre>

<p>Equivalently,</p>

<pre><code>  mappend = inFuture2 (minBy fst)
</code></pre>

<p>There is really nothing specific to time in the <code>Time</code> type.
It is just a synonym for the <a href="http://hackage.haskell.org/packages/archive/reactive/latest/doc/html/Data-Max.html" title="module documentation"><code>Max</code> monoid</a>, as needed for the <code>Applicative</code> and <code>Monad</code> instances.</p>

<p>This connection with future values means we can discard more code.</p>

<pre><code>type RenderingD d = Image (FutureG d Color)
renderD :: (Ord d, Bounded d) =&gt; Geometry -&gt; RenderingD d
</code></pre>

<p>Now we have our monoid (homo)morphism properties:</p>

<pre><code>renderD mempty == mempty

renderD (ga `mappend` gb) == renderD ga `mappend` renderD gb
</code></pre>

<p>And we&#8217;ve eliminated the custom <code>DepthG</code> code by reusing an existing type (future values).</p>

<h3>Future values?</h3>

<p>What does it mean to think about depth/color pairs as being &#8220;future&#8221; colors?
If we were to probe outward along a ray, say at the speed of light, we would bump into some number of 3D objects.
The one we hit earliest is the nearest, so in this sense, <code>mappend</code> on futures (choosing the earlier one) is the right tool for the job.</p>

<p>I once read that a popular belief in the past was that vision (light) reaches outward to strike objects, as I&#8217;ve just described.
I&#8217;ve forgotten where I read about that belief, though I think in a book about perspective, and I&#8217;d appreciate a pointer from someone else who might have a reference.</p>

<p>We moderns believe that light travels to us from the objects we see.
What we see of nearby objects comes from the very recent past, while of further objects we see the more remote past.
From this modern perspective, therefore, the connection I&#8217;ve made with future values is exactly backward.
Now that I think about it in this way, of course it&#8217;s backward, because we see (slightly) into the past rather than the future.</p>

<p>Fixing this conceptual flaw is simple: define a type of &#8220;past values&#8221;.
Give them exactly the same representation as future values, and derive their class instances entirely.</p>

<pre><code>{-# LANGUAGE GeneralizedNewtypeDeriving #-}

newtype PastG t a = Past (FutureG t a)
  deriving (Monoid, Functor, Applicative, Monad)
</code></pre>

<p>Alternatively, choose a temporally neutral replacement for the name &#8220;future values&#8221;.</p>

<h3>The bug in Z-buffering</h3>

<p>The <code>renderD</code> function implements continuous, infinite Z-buffering, with <code>mappend</code> performing the z-compare and conditional overwrite.
Z-buffering is the dominant algorithm used in real-time 3D graphics and is supported in hardware even on low-end graphics cards (though not in its full continuous and infinite glory).</p>

<p>However, Z-buffering also has a serious bug: it is only correct for fully opaque colors.
Consider a geometry <code>g</code> and a point <code>p</code> in the domain of the result image.
There may be many different points in <code>g</code> that project to <code>p</code>.
If <code>g</code> has only fully opaque colors, then at most one place on <code>g</code> contributes to the rendered image at <code>p</code>, and specifically, the nearest such point.
If <code>g</code> is the <code>union</code> (<code>mappend</code>) of two other geometries, <code>g == ga `union` gb</code>, then the nearest contribution of <code>g</code> (for <code>p</code>) will be the nearer (<code>mappend</code>) of the nearest contributions of <code>ga</code> and of <code>gb</code>.</p>

<p>When colors may be <em>partially</em> opaque, the color of the rendering at a point <code>p</code> can depend on <em>all</em> of the points in the geometry that get projected to <code>p</code>.
Correct rendering in the presence of partial opacity requires a <code>fold</code> that combines all of the colors that project onto a point, <em>in order of distance</em>, where the color-combining function (alpha-blending) is <em>not</em> commutative.
Consider again <code>g == ga `union` gb</code>.
The contributions of <code>ga</code> to <code>p</code> might be entirely closer than the contributions of <code>gb</code>, or entirely further, or interleaved.
If interleaved, then the colors generated from each cannot be combined into a single color for further combination.
To handle the general case, replace the single distance/color pair with an ordered <em>collection</em> of them:</p>

<pre><code>type RenderingD d = Image [FutureG d Color]  -- multiple projections, first try
</code></pre>

<p>Rendering a <code>union</code> (<code>mappend</code>) requires a merging of two lists of futures (distance/color pairs) into a single one.</p>
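<p>The required combination keeps both lists&#8217; elements in distance order. A sketch, assuming each list is already sorted by distance (the name <code>mergeBy</code> is mine):</p>

```haskell
-- Order-preserving merge of two distance-sorted lists (sketch).
mergeBy :: (a -> a -> Ordering) -> [a] -> [a] -> [a]
mergeBy _   xs     []     = xs
mergeBy _   []     ys     = ys
mergeBy cmp (x:xs) (y:ys)
  | cmp x y /= GT = x : mergeBy cmp xs     (y:ys)
  | otherwise     = y : mergeBy cmp (x:xs) ys
```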

<h3>More FRP &#8212; Events</h3>

<p>Sadly, we&#8217;ve now lost our monoid morphism, because list <code>mappend</code> is <code>(++)</code>, not the required merging.
However, we can fix this problem as we did before, by introducing a new type.</p>

<p>Or, we can look for an existing type that matches our required semantics.
There is just such a thing in the <em><a href="http://haskell.org/haskellwiki/Reactive" title="Wiki page for the Reactive library">Reactive</a></em> formulation of FRP, namely an <em>event</em>.
We can simply use the FRP <code>Event</code> type:</p>

<pre><code>type RenderingD d = Image (EventG d Color)

renderD :: (Ord d, Bounded d) =&gt; Geometry -&gt; RenderingD d
</code></pre>

<h3>Spatial transformation</h3>

<p>Introducing depths allowed rendering to be defined compositionally with respect to geometric union.
Is the depth model, enhanced with lists (events), sufficient for compositionality of rendering with respect to other <code>Geometry</code> operations as well?
Let&#8217;s look at spatial transformation.</p>

<pre><code>(*%)  :: Transform3 -&gt; Geometry -&gt; Geometry
</code></pre>

<p>Compositionality of rendering would mean that we can render <code>xf *% g</code> by rendering <code>g</code> and then using <code>xf</code> in some way to transform that rendering.
In other words, there would have to exist a function <code>(*%%)</code> such that</p>

<pre><code>forall xf g. renderD (xf *% g) == xf *%% renderD g
</code></pre>

<p>I don&#8217;t know if the required <code>(*%%)</code> function exists, or what restrictions on <code>Geometry</code> or <code>Transform3</code> it implies, or whether such a function could be useful in practice.
Instead, let&#8217;s change the type of renderings again, so that rendering can accumulate transformations and apply them to surfaces.</p>

<pre><code>type RenderingDX d = Transform3 -&gt; RenderingD d

renderDX :: (Ord d, Bounded d) =&gt; Geometry -&gt; RenderingDX d
</code></pre>

<p>with or without correct treatment of partial opacity (i.e., using futures or events).</p>

<p>This new function has a simple specification:</p>

<pre><code>renderDX g xf == renderD (xf *% g)
</code></pre>

<p>from which it follows that</p>

<pre><code>renderD g == renderDX g identityX
</code></pre>

<p>Rendering a transformed geometry then is a simple accumulation, justified as follows:</p>

<pre><code>renderDX (xfi *% g)

  == {- specification of renderDX -}

 \xfo -&gt; renderD (xfo *% (xfi *% g))

  == {- property of transformation -}

 \xfo -&gt; renderD ((xfo `composeX` xfi) *% g)

  == {- specification of renderDX  -}

 \xfo -&gt; renderDX g (xfo `composeX` xfi)
</code></pre>
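<p>The derived accumulation law can be sanity-checked on a toy model. In the following sketch, all the types are hypothetical stand-ins for the post&#8217;s real ones: transforms are 1-D translations composed by addition, geometries are point lists, and a rendering is just the transformed points.</p>

```haskell
-- Toy stand-ins for Transform3, Geometry, and RenderingDX (all hypothetical):
type Transform  = Double          -- a 1-D translation
type Geometry   = [Double]        -- a bag of points
type RenderingX = Transform -> [Double]

(*%) :: Transform -> Geometry -> Geometry
xf *% g = map (+ xf) g

composeX :: Transform -> Transform -> Transform
composeX = (+)                    -- composing translations adds offsets

renderD :: Geometry -> [Double]
renderD = id                      -- "rendering" is trivial in this model

-- Specification: renderDX g xf == renderD (xf *% g)
renderDX :: Geometry -> RenderingX
renderDX g xf = renderD (xf *% g)

-- Derived law: renderDX (xfi *% g) == \xfo -> renderDX g (xfo `composeX` xfi)
```

<p>For instance, <code>renderDX (2 *% [1,2]) 3</code> and <code>renderDX [1,2] (3 `composeX` 2)</code> both yield <code>[6.0,7.0]</code>.</p>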

<p>Render an empty geometry:</p>

<pre><code>renderDX mempty

  == {- specification of renderDX -}

 \xf -&gt; renderD (xf *% mempty)

  == {- property of (*%) and mempty -}

 \xf -&gt; renderD mempty

  == {- renderD is a monoid morphism -}

 \xf -&gt; mempty

  == {- definition of pure on functions -}

pure mempty

  == {- definition of mempty on functions -}

mempty
</code></pre>

<p>Render a geometric union:</p>

<pre><code>renderDX (ga `mappend` gb)

  == {- specification of renderDX -}

 \xf -&gt; renderD (xf *% (ga `mappend` gb))

  == {- property of transformation and union -}

 \xf -&gt; renderD ((xf *% ga) `mappend` (xf *% gb))

  == {- renderD is a monoid morphism -}

 \xf -&gt; renderD (xf *% ga) `mappend` renderD (xf *% gb)

  == {- specification of renderDX  -}

 \xf -&gt; renderDX ga xf `mappend` renderDX gb xf

  == {- definition of liftA2/(&lt;*&gt;) on functions -}

liftA2 mappend (renderDX ga) (renderDX gb)

  == {- definition of mappend on functions -}

renderDX ga `mappend` renderDX gb
</code></pre>

<p>Hurray!
<code>renderDX</code> is still a monoid morphism.</p>

<p>The two properties of transformation and union used above say together that <code>(xf *%)</code> is a monoid morphism for all transforms <code>xf</code>.</p>
]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/3d-rendering-as-functional-reactive-programming/feed</wfw:commentRss>
		<slash:comments>11</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2F3d-rendering-as-functional-reactive-programming&amp;language=en_GB&amp;category=text&amp;title=3D+rendering+as+functional+reactive+programming&amp;description=I%26%238217%3Bve+been+playing+with+a+simple%2Fgeneral+semantics+for+3D.+In+the+process%2C+I+was+surprised+to+see+that+a+key+part+of+the+semantics+looks+exactly+like+a+key+part...&amp;tags=3D%2CFRP%2Cfunctional+reactive+programming%2Cmonoid%2Csemantics%2Cblog" type="text/html" />
	</item>
		<item>
		<title>Another angle on functional future values</title>
		<link>http://conal.net/blog/posts/another-angle-on-functional-future-values</link>
		<comments>http://conal.net/blog/posts/another-angle-on-functional-future-values#comments</comments>
		<pubDate>Mon, 05 Jan 2009 04:01:05 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[caching]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[future value]]></category>
		<category><![CDATA[referential transparency]]></category>
		<category><![CDATA[type class morphism]]></category>
		<category><![CDATA[type composition]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=73</guid>
		<description><![CDATA[An earlier post introduced functional future values, which are values that cannot be known until the future, but can be manipulated in the present. That post presented a simple denotational semantics of future values as time/value pairs. With a little care in the definition of Time (using the Max monoid), the instances of Functor, Applicative, [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: Another angle on functional future values

Tags: future value, type class morphism, type composition, caching, referential transparency, FRP, functional reactive programming

URL: http://conal.net/blog/posts/another-angle-on-functional-future-values/

-->

<!-- references -->

<!-- teaser -->

<p>An earlier post introduced functional <em><a href="http://conal.net/blog/posts/future-values/" title="blog post">future values</a></em>, which are values that cannot be known until the future, but can be manipulated in the present.
That post presented a simple denotational semantics of future values as time/value pairs.
With a little care in the definition of <code>Time</code> (using the <a href="http://hackage.haskell.org/packages/archive/reactive/latest/doc/html/Data-Max.html" title="module documentation"><code>Max</code> monoid</a>), the instances of <code>Functor</code>, <code>Applicative</code>, <code>Monad</code> are all derived automatically.</p>

<p>A follow-up post gave an implementation of <em><a href="http://conal.net/blog/posts/future-values-via-multi-threading/" title="blog post">Future values via multi threading</a></em>.
Unfortunately, that implementation did not necessarily satisfy the semantics, because it allowed the nondeterminism of thread scheduling to leak through.
Although the implementation is usually correct, I wasn&#8217;t satisfied.</p>

<p>After a while, I hit upon an idea that really tickled me.
My original simple semantics could indeed serve as a correct and workable implementation if I used a subtler form of time that could reveal partial information.
Implementing this subtler form of time turned out to be quite tricky, and was my original motivation for the <code>unamb</code> operator described in the paper <em><a href="http://conal.net/papers/push-pull-frp/" title="Paper">Push-pull functional reactive programming</a></em> and the post <em><a href="http://conal.net/blog/posts/functional-concurrency-with-unambiguous-choice/" title="blog post">Functional concurrency with unambiguous choice</a></em>.</p>

<p>It took me several days of doodling, pacing outside, and talking to myself before the idea for <code>unamb</code> broke through.
Like many of my favorite ideas, it&#8217;s simple and obvious in retrospect: to remove the ambiguity of nondeterministic choice (as in the <code>amb</code> operator), restrict its use to values that are equal when non-bottom.
Whenever we have two different methods of answering the same question (or possibly failing), we can use <code>unamb</code> to try them both.
Failures (errors or non-termination) are no problem in this context.
A more powerful variation on <code>unamb</code> is the least upper bound operator <code>lub</code>, as described in <em><a href="http://conal.net/blog/posts/merging-partial-values/" title="blog post: &quot;Merging partial values&quot;">Merging partial values</a></em>.</p>

<p>I&#8217;ve been having trouble with the <code>unamb</code> implementation.
When two (compatible) computations race, the loser gets killed so as to free up cycles that are no longer needed.
My first few implementations, however, did not recursively terminate <em>other</em> threads spawned in service of abandoned computations (from nested use of <code>unamb</code>).
I raised this problem in <em><a href="http://conal.net/blog/posts/smarter-termination-for-thread-racing/" title="blog post">Smarter termination for thread racing</a></em>, which suggested some better definitions.
In the course of several helpful reader comments, some problems with my definitions were addressed, particularly in regard to blocking and unblocking exceptions.
None of these definitions so far has done the trick reliably, and now it looks like there is a bug in the GHC run-time system.
I hope the bug (if there is one) will be fixed soon, because I&#8217;m seeing more &amp; more how <code>unamb</code> and <code>lub</code> can make functional programming even more modular (just as laziness does, as explained by John Hughes in <em><a href="http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html" title="Paper by John Hughes">Why Functional Programming Matters</a></em>).</p>

<p>I started playing with future values and unambiguous choice as a way to implement <a href="http://haskell.org/haskellwiki/Reactive" title="Wiki page for the Reactive library">Reactive</a>, a library for functional reactive programming (FRP).
(See <em><a href="http://conal.net/blog/posts/reactive-values-from-the-future/" title="blog post">Reactive values from the future</a></em> and <em><a href="http://conal.net/papers/push-pull-frp/" title="Paper">Push-pull functional reactive programming</a></em>.)
Over the last few days, I&#8217;ve given some thought to ways to implement future values without unambiguous choice.
This post describes one such alternative.</p>

<p><strong>Edits</strong>:</p>

<ul>
<li>2010-08-25: Replaced references to <em><a href="http://conal.net/papers/simply-reactive" title="Paper: &quot;Simply efficient functional reactivity (superceded)&quot;">Simply efficient functional reactivity</a></em> with <em><a href="http://conal.net/papers/push-pull-frp/" title="Paper">Push-pull functional reactive programming</a></em>.
The latter paper supercedes the former.</li>
<li>2010-08-25: Fixed the <code>unFuture</code> field of FutureG to be <code>TryFuture</code>.</li>
</ul>

<!-- without a comment or something here, the last item above becomes a paragraph -->

<p><span id="more-73"></span></p>

<h3>Futures, presently</h3>

<p>The current <code>Future</code> type is just a time and a value, wrapped in a <code>newtype</code>:</p>

<pre><code>newtype FutureG t a = Future (Time t, a)
  deriving (Functor, Applicative, Monad)
</code></pre>

<p>Here the <code>Time</code> type is defined via the <a href="http://hackage.haskell.org/packages/archive/reactive/latest/doc/html/Data-Max.html" title="module documentation"><code>Max</code> monoid</a>.
The derived instances have exactly the intended meaning for futures, as explained in the post <em><a href="http://conal.net/blog/posts/future-values/" title="blog post">Future values</a></em> and the paper <em><a href="http://conal.net/papers/push-pull-frp/" title="Paper">Push-pull functional reactive programming</a></em>.
The &#8220;G&#8221; in the name <code>FutureG</code> refers to its being generalized over time types.</p>
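<p>Concretely, here is what the derived instances amount to, in a sketch with a plain pair and <code>max</code> standing in for the <code>Max</code> monoid (the names here are illustrative, not the library&#8217;s):</p>

```haskell
-- A simplified FutureG, with times combined by max (sketch only)
newtype Future t a = Future (t, a)

-- fmap keeps the time and maps the value
fmapF :: (a -> b) -> Future t a -> Future t b
fmapF f (Future (t, a)) = Future (t, f a)

-- An application is known only once both inputs are known,
-- hence the max of the two times
apF :: Ord t => Future t (a -> b) -> Future t a -> Future t b
Future (t, f) `apF` Future (t', a) = Future (t `max` t', f a)
```

<p>So <code>apF (Future (3, (+1))) (Future (5, 10))</code> is a future at time <code>5</code> with value <code>11</code>.</p>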

<p>Note that <code>Future</code> is parameterized over both time and value.
Originally, I intended this definition as a denotational semantics of future values, but I realized that it could be a workable implementation with a lazy enough <code>t</code>.
In particular, the times have to reveal lower bounds and allow comparisons before they&#8217;re fully known.</p>

<p>Warren Burton explored an applicable notion in the 1980s, which he called &#8220;improving values&#8221;, having a concurrent implementation but deterministic functional semantics.
(See the paper <em><a href="http://journals.cambridge.org/action/displayAbstract?aid=1287720" title="paper by Warren Burton">Encapsulating nondeterminacy in an abstract data type with deterministic semantics</a></em> or the paper <em><a href="http://portal.acm.org/citation.cfm?id=99402" title="paper by Warren Burton">Indeterminate behavior with determinate semantics in parallel programs</a></em>.
I haven&#8217;t found a freely-available online copy of either.)
I adapted Warren&#8217;s idea and gave it an implementation via <code>unamb</code>.</p>

<p>Another operation finds the earlier of two futures.
This operation has an identity and is associative, so I wrapped it up as a <code>Monoid</code> instance:</p>

<pre><code>instance (Ord t, Bounded t) =&gt; Monoid (FutureG t a) where
  mempty = Future (maxBound, undefined)
  Future (s,a) `mappend` Future (t,b) =
    Future (s `min` t, if s &lt;= t then a else b)
</code></pre>

<p>This <code>mappend</code> definition could be written more simply:</p>

<pre><code>  u@(Future (t,_)) `mappend` u'@(Future (t',_)) =
    if t &lt;= t' then u else u'
</code></pre>

<p>However, the less simple version has more potential for laziness.
The time type might allow yielding partial information about a minimum before both of its arguments are fully known, which is the case with improving values.</p>

<h3>Futures as functions</h3>

<p>The <a href="http://haskell.org/haskellwiki/Reactive" title="Wiki page for the Reactive library">Reactive</a> library uses futures to define and implement reactivity, i.e., behaviors specified piecewise.
Simplifying away the notion of <em>events</em> for now,</p>

<pre><code>until :: BehaviorG t a -&gt; FutureG t (BehaviorG t a) -&gt; BehaviorG t a
</code></pre>

<p>The semantics (but not implementation) of <code>BehaviorG</code> is given by</p>

<pre><code>at :: BehaviorG t a -&gt; (t -&gt; a)
</code></pre>

<p>The semantics of <code>until</code>:</p>

<pre><code>(b `until` Future (t',b')) `at` t = b'' `at` t
 where
   b'' = if t &lt;= t' then b else b'
</code></pre>
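<p>A toy model makes this semantics concrete. Treating behaviors directly as functions of time (the semantic model, not the library&#8217;s representation), <code>until</code> becomes:</p>

```haskell
-- Behaviors as their semantic model: functions of time (sketch only)
type Behavior t a = t -> a
data Future t a = Future t a

-- `until` per the semantics above: the first behavior before (and at)
-- the future's time, the future's behavior afterwards
untilB :: Ord t => Behavior t a -> Future t (Behavior t a) -> Behavior t a
untilB b (Future t' b') = \t -> if t <= t' then b t else b' t
```

<p>Sampling <code>untilB (const "sleep") (Future 5 (const "prowl"))</code> at time 3 gives <code>"sleep"</code>, and at time 7 gives <code>"prowl"</code>.</p>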

<p>FRP (multi-occurrence) events are then built on top of future values, and reactivity on top of <code>until</code>.</p>

<p>The semantics of <code>until</code> shows what information we need from futures: given a time <code>t</code>, we need to know whether <code>t</code> is later than the future&#8217;s time and, <em>if so</em>, what the future&#8217;s value is.
For other purposes, we&#8217;ll also want to know the future&#8217;s time, but again, only once we&#8217;re past that time.
We might, therefore, represent futures as a function that gives exactly this information.
I&#8217;ll call this function representation &#8220;function futures&#8221; and use the prefix &#8220;S&#8221; to distinguish the original &#8220;simple&#8221; futures from these function futures.</p>

<pre><code>type TryFuture t a = Time t -&gt; Maybe (S.FutureG t a)

tryFuture :: F.FutureG t a -&gt; TryFuture t a
</code></pre>

<p>Given a probe time, <code>tryFuture</code> gives <code>Nothing</code> if the time is before or at the future&#8217;s time, or <code>Just u</code> otherwise, where <code>u</code> is the simple future.</p>

<p>We could represent <code>F.FutureG</code> simply as <code>TryFuture</code>:</p>

<pre><code>type F.FutureG = TryFuture  -- first try
</code></pre>

<p>But then we&#8217;d be stuck with the <code>Functor</code> and <code>Applicative</code> instances for functions instead of futures.
Adding a <code>newtype</code> fixes that problem:</p>

<pre><code>newtype FutureG t a = Future { unFuture :: TryFuture t a } -- second try
</code></pre>

<p>With this representation we can easily construct and try out function futures:</p>

<pre><code>future :: TryFuture t a -&gt; FutureG t a
future = Future

tryFuture :: FutureG t a -&gt; TryFuture t a
tryFuture = unFuture
</code></pre>

<p>I like to define helpers for working inside representations:</p>

<pre><code>inFuture  :: (TryFuture t a -&gt; TryFuture t' a')
          -&gt; (FutureG   t a -&gt; FutureG   t' a')

inFuture2 :: (TryFuture t a -&gt; TryFuture t' a' -&gt; TryFuture t'' a'')
          -&gt; (FutureG   t a -&gt; FutureG   t' a' -&gt; FutureG   t'' a'')
</code></pre>

<p>The definitions of these helpers are very simple with the ideas from <em><a href="http://conal.net/blog/posts/prettier-functions-for-wrapping-and-wrapping/" title="blog post">Prettier functions for wrapping and wrapping</a></em> and a lovely notation from Matt Hellige&#8217;s <em><a href="http://matt.immute.net/content/pointless-fun" title="blog post by Matt Hellige">Pointless fun</a></em>.</p>

<pre><code>inFuture  = unFuture ~&gt; Future

inFuture2 = unFuture ~&gt; inFuture 

(~&gt;) :: (a' -&gt; a) -&gt; (b -&gt; b') -&gt; ((a -&gt; b) -&gt; (a' -&gt; b'))
g ~&gt; h = result h . argument g
</code></pre>
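<p>As a quick illustration of <code>(~&gt;)</code> on its own (my example, not from the original post): it pre-composes a function on the argument side and post-composes another on the result side.</p>

```haskell
-- (~>) as defined above, written without the result/argument combinators
(~>) :: (a' -> a) -> (b -> b') -> ((a -> b) -> (a' -> b'))
(g ~> h) f = h . f . g

-- Hypothetical usage: adapt `length` to take an Int (via show)
-- and double its result
doubledDigits :: Int -> Int
doubledDigits = (show ~> (* 2)) length
```

<p>For example, <code>doubledDigits 12345</code> is <code>10</code>: five digits, doubled.</p>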

<p>These helpers make for some easy definitions in the style of <em><a href="http://conal.net/blog/posts/semantic-editor-combinators/" title="blog post">Semantic editor combinators</a></em>:</p>

<pre><code>instance Functor (FutureG t) where
  fmap = inFuture.fmap.fmap.fmap

instance (Bounded t, Ord t) =&gt; Applicative (FutureG t) where
  pure  = Future . pure.pure.pure
  (&lt;*&gt;) = (inFuture2.liftA2.liftA2) (&lt;*&gt;)
</code></pre>

<h4>Type composition</h4>

<p>These <code>Functor</code> and <code>Applicative</code> instances (for <code>FutureG t</code>) may look mysterious, but they have a common and inevitable form.
Every type whose representation is the (semantic and representational) composition of three functors has this style of <code>Functor</code> instance, and similarly for <code>Applicative</code>.</p>

<p>Instead of repeating this common pattern, let&#8217;s make the type composition explicit, using <a href="http://hackage.haskell.org/packages/archive/TypeCompose/0.6.3/doc/html/Control-Compose.html#2" title="module documentation"><code>Control.Compose</code></a> from the <a href="http://haskell.org/haskellwiki/TypeCompose" title="Wiki page for the TypeCompose library">TypeCompose</a> library:</p>

<pre><code>type FutureG t = (-&gt;) (Time t) :. Maybe :. S.FutureG t  -- actual definition
</code></pre>

<p>Now we can throw out <code>inFuture</code>, <code>inFuture2</code>, <code>(~&gt;)</code>, and the <code>Functor</code> and <code>Applicative</code> instances.
These instances follow from the general instances for type composition.</p>

<h4>Monoid</h4>

<p>The <code>Monoid</code> instance could also come automatically from type composition:</p>

<pre><code>instance Monoid (g (f a)) =&gt; Monoid ((g :. f) a) where
  { mempty = O mempty; mappend = inO2 mappend }
</code></pre>

<p>The <code>O</code> here is just the <code>newtype</code> constructor for <code>(:.)</code>, and the <code>inO2</code> function is similar to <code>inFuture2</code> above.</p>

<p>However, there is another often-useful <code>Monoid</code> instance:</p>

<pre><code>-- standard Monoid instance for Applicative applied to Monoid
instance (Applicative (g :. f), Monoid a) =&gt; Monoid ((g :. f) a) where
  { mempty = pure mempty; mappend = liftA2 mappend }
</code></pre>

<p>Because these two instances &#8220;overlap&#8221; and are both useful, neither one is declared in the general case.
Instead, specialized instances are declared where needed, e.g.,</p>

<pre><code>instance (Ord t, Bounded t) =&gt; Monoid (FutureG t a) where
  mempty  = (  O .  O ) mempty   -- or future mempty
  mappend = (inO2.inO2) mappend
</code></pre>

<p>How does the <code>Monoid</code> instance work?  Start with <code>mempty</code>.  Expanding:</p>

<pre><code>mempty

  == {- definition -} 

O (O mempty)

  == {- mempty on functions -}

O (O (const mempty))

  == {- mempty on Maybe -}

O (O (const Nothing))
</code></pre>

<p>So, given any probe time, the empty (never-occurring) future says that it does not occur before the probe time.</p>

<p>Next, <code>mappend</code>:</p>

<pre><code>O (O f) `mappend` O (O f')

  == {- mappend on FutureG -}

O (O (f `mappend` f'))

  == {- mappend on functions -}

O (O (\t -&gt; f t `mappend` f' t))

  == {- mappend on Maybe -}

O (O (\t -&gt; f t `mappendMb` f' t))
  where
    Nothing `mappendMb` mb'    = mb'
    mb `mappendMb` Nothing     = mb
    Just u `mappendMb` Just u' = Just (u `mappend` u')
</code></pre>

<p>The <code>mappend</code> in this last line is on simple futures, as defined above, examining the (now known) times and choosing the earlier future.
Previously, I took special care in that <code>mappend</code> definition to enable <code>min</code> to produce information before knowing whether <code>t &lt;= t'</code>.
However, with this new approach to futures, I expect to use simple (flat) times, so it could instead be</p>

<pre><code>u@(Future (s,_)) `mappend` u'@(Future (s',_)) = if s &lt;= s' then u else u'
</code></pre>

<p>or</p>

<pre><code>u `mappend` u' = if futTime u &lt;= futTime u' then u else u'

futTime (Future (t,_)) = t
</code></pre>

<p>or just</p>

<pre><code>mappend = minBy futTime
</code></pre>
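<p><code>minBy</code> isn&#8217;t a standard-library function; its intended definition is presumably the following (my reconstruction, preferring the first argument on ties, which matches the earlier <code>mappend</code>):</p>

```haskell
-- Presumed definition of minBy: choose whichever argument has the
-- smaller projection, preferring the first on a tie
minBy :: Ord b => (a -> b) -> a -> a -> a
minBy f u u' = if f u <= f u' then u else u'
```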

<p>How does <code>mappend</code> work on function futures?
Given a test time <code>t</code>, if both future times are at least <code>t</code>, then the combined future&#8217;s time is at least <code>t</code> (yielding <code>Nothing</code>).
If either future is before <code>t</code> and the other isn&#8217;t, then the combined future is the same as the one before <code>t</code>.
If both futures are before <code>t</code>, then the combined future is the earlier one.
Exactly the desired semantics!</p>

<h4>Relating function futures and simple futures</h4>

<p>The function-based representation of futures relates closely to the simple representation.
Let&#8217;s make this relationship explicit by defining mappings between them:</p>

<pre><code>sToF :: Ord t =&gt; S.FutureG t a -&gt; F.FutureG t a

fToS :: Ord t =&gt; F.FutureG t a -&gt; S.FutureG t a
</code></pre>

<p>The first one is easy:</p>

<pre><code>sToF u@(S.Future (t, _)) =
  future (\t' -&gt; if t' &lt;= t then Nothing else Just u)
</code></pre>
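<p>Probing such a function future behaves as described. Here is a sketch with plain <code>Int</code> times, omitting the <code>Max</code> wrapper and the library&#8217;s <code>newtype</code>s:</p>

```haskell
-- Simple futures and function futures, sketched with bare types
newtype SFuture t a = SFuture (t, a) deriving (Eq, Show)
type TryFuture t a = t -> Maybe (SFuture t a)

-- sToF as above: Nothing at or before the future's time, Just after it
sToF :: Ord t => SFuture t a -> TryFuture t a
sToF u@(SFuture (t, _)) = \t' -> if t' <= t then Nothing else Just u
```

<p>Probing <code>sToF (SFuture (5, "hi"))</code> at time 5 gives <code>Nothing</code>, while probing at time 6 gives back the simple future.</p>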

<p>The reverse mapping, <code>fToS</code>, is trickier and is only defined on the image (range) of <code>sToF</code>.
I think it can be defined mathematically but not computationally.
There are two cases: either the function always returns <code>Nothing</code>, or there is at least one <code>t</code> for which it returns a <code>Just</code>.
If the former, then the simple future is <code>mempty</code>, which is <code>S.Future (maxBound, undefined)</code>.
If the latter, then there is only one such <code>Just</code>, and the simple future is the one in that <code>Just</code>.
Together, <code>(sToF, fToS)</code> form a projection-embedding pair.</p>

<p>We won&#8217;t really have to implement or invoke these functions.
Instead, they serve to further specify the type <code>F.FutureG</code> and the correctness of operations on it.
The representation of <code>F.FutureG</code> as given allows many values that do not correspond to futures.
To eliminate these representations, require an invariant that a function future must be the result of applying <code>sToF</code> to some simple future.</p>

<p>We&#8217;ll require that each operation preserves this invariant.
However, let&#8217;s prove something stronger, namely that operations on <code>F.FutureG</code> correspond precisely to the same operations on <code>S.FutureG</code>, via <code>sToF</code>.
In other words, <code>sToF</code> preserves the shape of the operations on futures.
For type classes, these correspondences are the type class morphisms.
For instance, <code>sToF</code> is a <code>Monoid</code> morphism:</p>

<pre><code>sToF mempty == mempty

sToF (u `mappend` u') == sToF u `mappend` sToF u'
</code></pre>

<h3>Caching futures</h3>

<p>This function representation eliminates the need for tricky times (using improving values and <code>unamb</code>), but it loses the caching benefit that lazy functional programming affords to non-function representations.
Now let&#8217;s reclaim that benefit.
The trick is to exploit the restriction that every function future must be 
(semantically) the image of a simple future under <code>sToF</code>.</p>

<p>Examining the definition of <code>sToF</code>, we can deduce the following monotonicity properties of (legitimate) function futures:</p>

<ul>
<li>If the probe function yields <code>Nothing</code> for some <code>t'</code>, then it yields <code>Nothing</code> for earlier times.</li>
<li>If the probe function yields <code>Just u</code> for some <code>t'</code>, then it yields <code>Just u</code> for all later times.</li>
</ul>

<p>We can exploit these monotonicity properties by caching information as we learn it.
Caching of this sort is what distinguishes call-by-need from call-by-name and allows lazy evaluation to work efficiently for data representations.</p>

<p>Specifically, let&#8217;s save a best-known lower bound for the future time and the simple future when known.
Since the lower bound may get modified a few times, I&#8217;ll use a <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent-SampleVar.html#v:SampleVar"><code>SampleVar</code></a> (thread-safe rewritable variable).
The simple future will be discovered only once, so I&#8217;ll use an <a href="http://hackage.haskell.org/packages/archive/reactive/latest/doc/html/FRP-Reactive-Internal-IVar.html"><code>IVar</code></a>.
I&#8217;ll keep the function-future for probing when the cached information is not sufficient to answer a query.</p>

<p>Prefix this caching version with a &#8220;C&#8221;, to distinguish it from function futures (&#8220;F&#8221;) and the simple futures (&#8220;S&#8221;):</p>

<pre><code>data C.FutureG t a =
  Future (SampleVar t) (IVar (S.FutureG t a)) (F.FutureG t a)
</code></pre>

<p>Either the simple future or the function future will be useful, so we could replace the last two fields with a single one:</p>

<pre><code>data C.FutureG t a =
  Future (SampleVar t) (MVar (Either (S.FutureG t a) (F.FutureG t a)))
</code></pre>

<p>We&#8217;ll have to be careful about multiple independent discoveries of the same simple future, which would correspond to multiple writes to the <code>IVar</code> with the same value.
(I imagine there are related mechanics in the GHC RTS for two threads evaluating the same thunk that would be helpful to understand.)
I guess I could use a <code>SampleVar</code> and just not worry about multiple writes, since they&#8217;d be equivalent.
For now, use the <code>IVar</code> version.</p>

<p>The caching representation relates to the function representation by means of two functions:</p>

<pre><code>dToF :: Ord     t =&gt; C.FutureG t a -&gt; F.FutureG t a

fToD :: Bounded t =&gt; F.FutureG t a -&gt; C.FutureG t a
</code></pre>

<p>The implementations:</p>

<pre><code>dToF (C.Future tv uv uf) =
  F.Future $ \t' -&gt; unsafePerformIO $
    do mb &lt;- tryReadIVar uv
       case mb of
         j@(Just (S.Future (Max t,_))) -&gt;
           return (if t' &lt;= t then Nothing else j)
         Nothing        -&gt;
           do tlo &lt;- readSampleVar tv
              if t' &lt;= tlo then
                 return Nothing
               else
                 do let mb' = F.unFuture uf t'
                    writeIVarMaybe uv mb'
                    return mb'

-- Perhaps write to an IVar
writeIVarMaybe :: IVar a -&gt; Maybe a -&gt; IO ()
writeIVarMaybe v = maybe (return ()) (writeIVar v)

fToD uf = unsafePerformIO $
          do tv &lt;- newSampleVar t0
             uv &lt;- newIVar
             writeIVarMaybe uv (F.unFuture uf t0)
             return (Future tv uv uf)
 where
   t0 = minBound
</code></pre>

<p>It&#8217;ll be handy to delegate operations to <code>F.Future</code>:</p>

<pre><code>inF :: (Ord t, Bounded t') =&gt;
       (F.FutureG t a -&gt; F.FutureG t' a')
    -&gt; (  FutureG t a -&gt;   FutureG t' a')
inF = dToF ~&gt; fToD

inF2 :: (Ord t, Bounded t', Ord t', Bounded t'') =&gt;
        (F.FutureG t a -&gt; F.FutureG t' a' -&gt; F.FutureG t'' a'')
     -&gt; (  FutureG t a -&gt;   FutureG t' a' -&gt;   FutureG t'' a'')
inF2 = dToF ~&gt; inF
</code></pre>

<p>Then</p>

<pre><code>instance (Ord t, Bounded t) =&gt; Monoid (FutureG t a) where
  mempty  = fToD mempty
  mappend = inF2 mappend

instance (Ord t, Bounded t) =&gt; Functor     (FutureG t) where
  fmap = inF . fmap

instance (Ord t, Bounded t) =&gt; Applicative (FutureG t) where
  pure  = fToD . pure
  (&lt;*&gt;) = inF2 (&lt;*&gt;)
</code></pre>

<h3>Wrap-up</h3>

<p>Well, that&#8217;s the idea.
I&#8217;ve gotten as far as type-checking the code in this post, but I haven&#8217;t yet tried running it.</p>

<p>What interests me most above is the use of <code>unsafePerformIO</code> here while preserving referential transparency, thanks to the invariant on <code>F.FutureG</code> (and the consequent monotonicity property).
The heart of lazy evaluation of <em>pure</em> functional programs is just such an update, replacing a thunk with its weak head normal form (whnf).
What general principles can we construct that allow us to use efficient, destructive updating and still have referential transparency?
The important thing above seems to be the careful definition of an abstract interface such that the effect of state updates is semantically invisible through the interface.</p>
]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/another-angle-on-functional-future-values/feed</wfw:commentRss>
		<slash:comments>11</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2Fanother-angle-on-functional-future-values&amp;language=en_GB&amp;category=text&amp;title=Another+angle+on+functional+future+values&amp;description=An+earlier+post+introduced+functional+future+values%2C+which+are+values+that+cannot+be+known+until+the+future%2C+but+can+be+manipulated+in+the+present.+That+post+presented+a+simple+denotational...&amp;tags=caching%2CFRP%2Cfunctional+reactive+programming%2Cfuture+value%2Creferential+transparency%2Ctype+class+morphism%2Ctype+composition%2Cblog" type="text/html" />
	</item>
		<item>
		<title>Functional interactive behavior</title>
		<link>http://conal.net/blog/posts/functional-interactive-behavior</link>
		<comments>http://conal.net/blog/posts/functional-interactive-behavior#comments</comments>
		<pubDate>Wed, 10 Dec 2008 09:26:50 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[comonad]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[interaction]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=70</guid>
		<description><![CDATA[In a previous post, I presented a fundamental reason why classic FRP does not fit interactive behavior, which is that the semantic model captures only the influence of time and not other input. I also gave a simple alternative, with a simple and general model for temporal and spatial transformation, in which input behavior is [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: Functional interactive behavior

Tags: FRP, functional reactive programming, interaction

URL: http://conal.net/blog/posts/functional-interactive-behavior/

-->

<!-- references -->

<!-- teaser -->

<p>In a previous post, I presented a fundamental reason <em><a href="http://conal.net/blog/posts/why-classic-FRP-does-not-fit-interactive-behavior/" title="blog post">why classic FRP does not fit interactive behavior</a></em>, which is that the semantic model captures only the influence of time and not other input.
I also gave a simple alternative, with a simple and general model for temporal and spatial transformation, in which input behavior is transformed inversely to the transformation of output behavior.</p>

<p>The semantic model I suggested is the same as used in &#8220;Arrow FRP&#8221;, from <a href="http://conal.net/papers/genuinely-functional-guis.pdf" title="Paper: &quot;Genuinely Functional User Interfaces&quot;">Fruit</a> and <a href="http://haskell.org/haskellwiki/Yampa" title="Wiki page">Yampa</a>.
I want, however, a more convenient and efficient way to package up that model, which is the subject of the post you are reading now.</p>

<p>Next, we took a close look at one awkward aspect of classic FRP for interactive behavior, namely the need to <a href="http://conal.net/blog/posts/trimming-inputs-in-functional-reactive-programming/" title="blog post">trim inputs</a>, and how trimming relates to <a href="http://conal.net/blog/tag/comonad/">comonadic</a> FRP.
The <code>trim</code> function allows us to define multi-phase interactive behaviors correctly and efficiently, but its use is tedious and easy to get wrong.
It thus fails to achieve what I want from functional programming in general and FRP in particular, which is to enable writing simple, natural descriptions, free of mechanical details.</p>

<p>The current post hides and automates the mechanics of trimming, so that the intent of an interactive behavior can be expressed directly and executed correctly and efficiently.</p>

<!--
**Edits**:

* 2008-02-09: just fiddling around
-->

<!-- without a comment or something here, the last item above becomes a paragraph -->

<p><span id="more-70"></span></p>

<p>As before, I&#8217;ll adopt two abbreviations, for succinctness:</p>

<pre><code>type B = Behavior
type E = Event
</code></pre>

<h3>Safe and easy trimming</h3>

<p><a href="http://conal.net/blog/posts/trimming-inputs-in-functional-reactive-programming/" title="blog post">Previously</a>, I defined an interactive cat behavior as follows:</p>

<pre><code>cat3 :: B World -&gt; B Cat
cat3 world = sleep world `switcher`
               ((uncurry prowl &lt;$&gt; trimf wake   world) `mappend`
                (uncurry eat   &lt;$&gt; trimf hunger world))
</code></pre>

<p>I&#8217;d really like to write the following</p>

<pre><code>-- ideal:
cat4 = sleep `switcher` ((prowl &lt;$&gt; wake) `mappend` (eat &lt;$&gt; hunger)) 
</code></pre>

<p>Let&#8217;s see how close we can get.</p>

<p>I can see right off I&#8217;ll have to replace or generalize <code>switcher</code>.
For now, I&#8217;ll replace it:</p>

<pre><code>switcherf :: (B i -&gt; B o)
          -&gt; (B i -&gt; E (B i -&gt; B o))
          -&gt; (B i -&gt; B o)
</code></pre>

<p>This function will have to manage trimming:</p>

<pre><code>bf `switcherf` ef =  i -&gt;
  bf i `switcher` (uncurry ($) &lt;$&gt; trimf ef i)
</code></pre>

<p>I won&#8217;t have to replace <code>mappend</code>, since it&#8217;s a method and so can have a variety of types.
In this case, <code>mappend</code> applies to functions from behaviors to events.
Fortunately, the function monoid is exactly what we need:</p>

<pre><code>instance Monoid b =&gt; Monoid (a -&gt; b) where
  mempty        = const mempty
  f `mappend` g = \ a -&gt; f a `mappend` g a
</code></pre>

<p>or the more lovely standard form for applicative functors:</p>

<pre><code>instance Monoid b =&gt; Monoid (a -&gt; b) where
  mempty  = pure   mempty
  mappend = liftA2 mappend
</code></pre>
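<p>A quick check that the function monoid behaves as described (this instance ships with GHC&#8217;s base library; lists serve as the target monoid here):</p>

```haskell
import Control.Applicative (liftA2)

-- Two "query" functions, each producing a list (a Monoid):
evens, odds :: Int -> [Int]
evens n = filter even [1 .. n]
odds  n = filter odd  [1 .. n]

-- The function monoid combines results pointwise:
both, both' :: Int -> [Int]
both  = evens `mappend` odds
both' = liftA2 mappend evens odds  -- the applicative form, same result
```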

<p>The use of <code>(&lt;$&gt;)</code> (i.e., <code>fmap</code>) in <code>cat4</code> above won&#8217;t work.
We want instead to <code>fmap</code> inside the <em>result</em> of a function from behaviors to events.
Using the style of <a href="http://conal.net/blog/posts/semantic-editor-combinators/" title="blog post">semantic editor combinators</a>, we get the following definition, which is fairly close to our ideal:</p>

<pre><code>cat4 = sleep `switcherf`
         ((result.fmap) prowl wake `mappend` (result.fmap) eat hunger)
</code></pre>

<h3>Generalized switching</h3>

<p>To generalize <code>switcher</code>, introduce a new type class:</p>

<pre><code>class Switchable b e where switcher :: b -&gt; e b -&gt; b
</code></pre>

<p>The original Reactive <code>switcher</code> is a special case:</p>

<pre><code>instance Switchable (B a) E where switcher = R.switcher
</code></pre>

<p>We can switch among tuples and among other containers of switchables.
For instance,</p>

<pre><code>instance (Functor e, Switchable b e, Switchable b' e)
      =&gt; Switchable (b,b') e where
  (b,b') `switcher` e = ( b  `switcher` (fst &lt;$&gt; e)
                        , b' `switcher` (snd &lt;$&gt; e) )
</code></pre>
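<p>A small, self-contained sketch of this switching in a hypothetical discrete-time model (behaviors as streams of per-tick samples, events as a possible occurrence per tick); the real Reactive types are richer, but the tuple instance is the same:</p>

```haskell
{-# LANGUAGE DeriveFunctor, MultiParamTypeClasses, FlexibleInstances #-}

-- Hypothetical discrete-time model, not the Reactive library's types:
type B a = [a]                                          -- sample per tick
newtype E a = E [Maybe a] deriving (Eq, Show, Functor)  -- occurrence per tick

class Switchable b e where
  switcher :: b -> e b -> b

-- Occurrences carry already-trimmed behaviors to switch to.
-- (Assumes behaviors have at least as many samples as remaining ticks.)
instance Switchable (B a) E where
  switcher _ (E [])       = []
  switcher b (E (o : os)) = let b' = maybe b id o
                            in head b' : switcher (drop 1 b') (E os)

instance (Functor e, Switchable b e, Switchable b' e)
      => Switchable (b, b') e where
  (b, b') `switcher` e = ( b  `switcher` (fst <$> e)
                         , b' `switcher` (snd <$> e) )
```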

<h3>Temporal values</h3>

<p>Looking through the examples above, all we really had to do with the input behavior was to compute all remainders.
I used</p>

<pre><code>duplicate :: B a -&gt; B (B a)
</code></pre>

<p>More generally,</p>

<pre><code>class Temporal a where remainders :: a -&gt; B a
</code></pre>

<p>Behaviors and events are included as a special case,</p>

<pre><code>instance Temporal (B a) where remainders = duplicate

instance Temporal (E a) where remainders = ...
</code></pre>

<p>Temporal values combine without losing their individuality, which allows efficient change-driven evaluation as in <em><a href="http://conal.net/blog/posts/simply-efficient-functional-reactivity/" title="blog post">Simply efficient functional reactivity</a></em>:</p>

<pre><code>instance (Temporal a, Temporal b) =&gt; Temporal (a,b) where
  remainders (a,b) = remainders a `zip` remainders b
</code></pre>

<p>Similarly for triples and other data structures.</p>

<p>Sometimes it&#8217;s handy to carry static information along with dynamic information.
Static types can be made trivially temporal:</p>

<pre><code>instance Temporal Bool where remainders = pure
instance Temporal Int  where remainders = pure
-- etc
</code></pre>
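<p>A hypothetical discrete-time sketch (behaviors as streams of per-tick samples) makes <code>remainders</code> concrete, with the pair instance preserving each component&#8217;s individuality:</p>

```haskell
import Data.List (tails)

-- Hypothetical discrete-time model: a behavior is its stream of samples.
type B a = [a]

class Temporal a where
  remainders :: a -> B a

-- A behavior's remainders are its nonempty suffixes (i.e., duplicate):
instance Temporal [a] where
  remainders = init . tails

-- Pairs stay pairs of temporal values; nothing is bundled together:
instance (Temporal a, Temporal b) => Temporal (a, b) where
  remainders (a, b) = remainders a `zip` remainders b

-- A static value is its own remainder at every tick:
instance Temporal Bool where
  remainders = repeat
```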

<p>With this <code>Temporal</code> class, the trimming definitions above have more general types.</p>

<pre><code>trim  :: Temporal i =&gt; E o -&gt; i -&gt; E (o, i)

trimf :: Temporal i =&gt; (i -&gt; E o) -&gt; (i -&gt; E (o, i))
</code></pre>

<p>As does function switching:</p>

<pre><code>switcherf :: (Temporal i, Switchable o E) =&gt;
             (i -&gt; o) -&gt; (i -&gt; E (i -&gt; o)) -&gt; i -&gt; o
</code></pre>

<h3>Types for functional interactive behavior</h3>

<p>We&#8217;ve gotten almost to my ideal cat definition.
We cannot, however, use <code>switcher</code> or <code>(&lt;$&gt;)</code> here with functions from behaviors to behaviors, because the types don&#8217;t fit.</p>

<p>To cross the last gap, let&#8217;s define new types corresponding to the idioms we&#8217;ve seen repeatedly above.</p>

<pre><code>-- First try
type i :~&gt; o = BI (B i -&gt; B o)
type i :-&gt; o = EI (B i -&gt; E o)
</code></pre>

<p>Or, using <a href="http://haskell.org/haskellwiki/TypeCompose" title="Wiki page">type composition</a>:</p>

<pre><code>-- Second try
type (:~&gt;) i = (-&gt;) (B i) :. B
type (:-&gt;) i = (-&gt;) (B i) :. E
</code></pre>

<p>The advantage of type composition is that we get some useful definitions for free, including <code>Functor</code> and <code>Applicative</code> instances.</p>
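<p>To see why those instances come for free, here is a minimal stand-in for the composition type (the real TypeCompose package differs in detail):</p>

```haskell
{-# LANGUAGE TypeOperators #-}
import Control.Applicative (liftA2)

-- Minimal stand-in for TypeCompose's composition type:
newtype (:.) f g a = O { unO :: f (g a) }

-- Composition of two functors is a functor:
instance (Functor f, Functor g) => Functor (f :. g) where
  fmap h (O fga) = O ((fmap . fmap) h fga)

-- Composition of two applicatives is applicative:
instance (Applicative f, Applicative g) => Applicative (f :. g) where
  pure        = O . pure . pure
  O h <*> O x = O (liftA2 (<*>) h x)
```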

<p>However, there&#8217;s a problem with both versions.
They limit us to a single behavior as input.
A realistic interactive environment has many inputs, including a mixture of behaviors and events.</p>

<p>In Yampa, that mixture is combined into a single behavior, leading to two difficulties:</p>

<ul>
<li>The distinction between behaviors and events gets lost, as well as (I think) accurate and minimal-latency event detection and response.</li>
<li>The bundled input environment changes whenever any component changes, leading to everything getting recomputed and redisplayed when anything changes.</li>
</ul>

<p>To avoid these problems, I&#8217;ll take a different approach.
Generalize inputs from behaviors to arbitrary <code>Temporal</code> values, which include behaviors, events, and tuples and other structures of temporal values.</p>

<p>The types for interactive behaviors and interactive events are</p>

<pre><code>type (:~&gt;) i = (-&gt;) i :. B
type (:-&gt;) i = (-&gt;) i :. E
</code></pre>

<p>So <code>i :~&gt; o</code> is like <code>i -&gt; B o</code>, and <code>i :-&gt; o</code> is like <code>i -&gt; E o</code>.</p>

<p>Switching for interactive behaviors wraps the <code>switcherf</code> function from above:</p>

<pre><code>instance Temporal i =&gt; Switchable (i :~&gt; o) ((:-&gt;) i) where
  switcher = inO2 $ \ bf ef -&gt; bf `switcherf` (result.fmap) unO ef
</code></pre>

<p>The <code>inO2</code> and <code>unO</code> functions from <a href="http://haskell.org/haskellwiki/TypeCompose" title="Wiki page">TypeCompose</a> just manipulate <code>newtype</code> wrappers.
See <em><a href="http://conal.net/blog/posts/prettier-functions-for-wrapping-and-wrapping/" title="blog post">Prettier functions for wrapping and wrapping</a></em>.</p>

<p>This definition is actually more general than the type given here.
For instance, it can be used to switch between interactive <em>events</em> as well as interactive <em>behaviors</em>.
To see the generalization, first abstract out the commonality between <code>(:~&gt;)</code> and <code>(:-&gt;)</code>:</p>

<pre><code>type i :-&gt;. f = (-&gt;) i :. f

type (:~&gt;) i = i :-&gt;. B
type (:-&gt;) i = i :-&gt;. E
</code></pre>

<p>The same instance code but with a more general type:</p>

<pre><code>instance (Temporal i, Switchable (f o) E)
      =&gt; Switchable ((i :-&gt;. f) o) ((:-&gt;) i) where
  switcher = inO2 $ \ bf ef -&gt; bf `switcherf` (result.fmap) unO ef
</code></pre>

<p>We can also switch between interactive <em>collections</em> of behaviors and events, though not with the <code>(:-&gt;.)</code> wrapping.</p>

<h3>Where are we?</h3>

<p>Almost all of the pieces are in place now.
Another post will relate input trimming to the time transformation of interactive behaviors, as discussed in 
<em><a href="http://conal.net/blog/posts/why-classic-FRP-does-not-fit-interactive-behavior/" title="blog post">Why classic FRP does not fit interactive behavior</a></em>.
Also, how interactive FRP relates to <em><a href="http://conal.net/blog/posts/sequences-segments-and-signals/" title="blog post">Sequences, segments, and signals</a></em>.</p>
]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/functional-interactive-behavior/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2Ffunctional-interactive-behavior&amp;language=en_GB&amp;category=text&amp;title=Functional+interactive+behavior&amp;description=In+a+previous+post%2C+I+presented+a+fundamental+reason+why+classic+FRP+does+not+fit+interactive+behavior%2C+which+is+that+the+semantic+model+captures+only+the+influence+of+time+and...&amp;tags=comonad%2CFRP%2Cfunctional+reactive+programming%2Cinteraction%2Cblog" type="text/html" />
	</item>
		<item>
		<title>Trimming inputs in functional reactive programming</title>
		<link>http://conal.net/blog/posts/trimming-inputs-in-functional-reactive-programming</link>
		<comments>http://conal.net/blog/posts/trimming-inputs-in-functional-reactive-programming#comments</comments>
		<pubDate>Wed, 10 Dec 2008 09:00:33 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[comonad]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[interaction]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=71</guid>
		<description><![CDATA[This post takes a close look at one awkward aspect of classic (non-arrow) FRP for interactive behavior, namely the need to trim (or &#8220;age&#8221;) old input. Failing to trim results in behavior that is incorrect and grossly inefficient. Behavior trimming connects directly into the comonad interface mentioned in a few recent posts, and is what [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: Trimming inputs in functional reactive programming

Tags: FRP, functional reactive programming, interaction, comonad

URL: http://conal.net/blog/posts/trimming-inputs-in-functional-reactive-programming/

-->

<!-- references -->

<!-- teaser -->

<p>This post takes a close look at one awkward aspect of <em>classic</em> (non-arrow) FRP for interactive behavior, namely the need to <em>trim</em> (or &#8220;age&#8221;) old input.
Failing to trim results in behavior that is incorrect and grossly inefficient.</p>

<p>Behavior trimming connects directly into the <a href="http://conal.net/blog/tag/comonad/">comonad</a> interface mentioned in a few recent posts, and is what got me interested in comonads recently.</p>

<p>In absolute-time FRP, trimming has a purely operational significance.
Switching to relative time, trimming is recast as a semantically familiar operation, namely the generalized <code>drop</code> function used in two <a href="http://conal.net/blog/tag/segment/">recent posts</a>.</p>

<!--
**Edits**:

* 2008-02-09: just fiddling around
-->

<!-- without a comment or something here, the last item above becomes a paragraph -->

<p><span id="more-71"></span></p>

<p>In the rest of this post, I&#8217;ll adopt two abbreviations, for succinctness:</p>

<pre><code>type B = Behavior
type E = Event
</code></pre>

<h3>Trimming inputs</h3>

<p>An awkward aspect of classic FRP has to do with switching phases of behavior.
Each phase is a function of some static (momentary) input and some dynamic (time-varying) input, e.g.,</p>

<pre><code>sleep :: B World -&gt; B Cat
eat   :: Appetite   -&gt; B World -&gt; B Cat
prowl :: Friskiness -&gt; B World -&gt; B Cat

wake   :: B World -&gt; E Friskiness
hunger :: B World -&gt; E Appetite
</code></pre>

<p>As a first try, our cat prowls upon waking and eats when hungry, taking into account its surrounding world:</p>

<pre><code>cat1 :: B World -&gt; B Cat
cat1 world = sleep world `switcher`
               ((flip prowl world &lt;$&gt; wake   world) `mappend`
                (flip eat   world &lt;$&gt; hunger world))
</code></pre>

<p>The <code>(&lt;$&gt;)</code> here, from <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Applicative.html" title="Haskell module documentation">Control.Applicative</a>, is a synonym for <code>fmap</code>.
In this context,</p>

<pre><code>(&lt;$&gt;) :: (a -&gt; B Cat) -&gt; E a -&gt; E (B Cat)
</code></pre>

<p>The FRP <code>switcher</code> function switches to new behavior phases as they&#8217;re generated by an event, beginning with a given initial behavior:</p>

<pre><code>switcher :: B a -&gt; E (B a) -&gt; B a
</code></pre>

<p>And the <code>mappend</code> here merges two events into one, combining their occurrences.</p>
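<p>In a hypothetical discrete-time model (an event as a possible occurrence per tick), merging can be sketched as follows; note that this toy version is left-biased on simultaneous occurrences, whereas a real merge would keep both:</p>

```haskell
-- Hypothetical discrete-time model of events:
type E a = [Maybe a]

-- Combine occurrences tick by tick (left-biased when simultaneous):
mergeE :: E a -> E a -> E a
mergeE = zipWith pick
  where
    pick (Just a) _ = Just a
    pick Nothing  o = o
```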

<p>When switching phases, we generally want the new phase to start responding to input exactly where the old phase left off.
If we&#8217;re not careful, however, the new phase will begin with an old input.
I&#8217;ve made exactly this mistake in defining <code>cat1</code> above.
Consequently, each new phase will begin by responding to all of the old input and then carry on.
This meaning is both unintended and very expensive (the dreaded &#8220;space-time&#8221; leak).</p>

<p>This difficulty is not unique to FRP.
In functional programming, we have to be careful how we hold onto our inputs, so that they can get accessed and freed incrementally.
I don&#8217;t think the difficulty arises much in imperative programming, because input (like output) is destructively altered, and programs have access only to the current state.</p>

<p>I&#8217;ve done it wrong above, in defining <code>cat1</code>.
How can I do it right?
The solution/hack I came up for Fran was to add a function that trims (&#8220;ages&#8221;) dynamic input while waiting for event occurrences.</p>

<pre><code>trim :: B b -&gt; E a -&gt; E (a, B b)
</code></pre>

<p><code>trim b e</code> follows <code>b</code> and <code>e</code> in parallel.
At each occurrence of <code>e</code>, the remainder of <code>b</code> is paired up with the event data from <code>e</code>.</p>
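<p>A sketch of <code>trim</code> in a hypothetical discrete-time model, with a behavior as its stream of per-tick samples and an event as a possible occurrence per tick:</p>

```haskell
-- Hypothetical discrete-time model:
type B a = [a]
type E a = [Maybe a]

-- At each occurrence, pair the event data with the remainder of the
-- behavior from that tick onward (traversing both in parallel):
trim :: B b -> E a -> E (a, B b)
trim _  []       = []
trim bs (o : os) = fmap (\a -> (a, bs)) o : trim (drop 1 bs) os
```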

<p>Now I can define the interactive, multi-phase behavior I intend:</p>

<pre><code>cat2 :: B World -&gt; B Cat
cat2 world = sleep world `switcher`
               ((uncurry prowl &lt;$&gt; trim world (wake   world)) `mappend`
                (uncurry eat   &lt;$&gt; trim world (hunger world)))
</code></pre>

<p>The event <code>trim world (wake world)</code> occurs whenever <code>wake world</code> does, and has as event data the cat&#8217;s friskiness on waking, plus the remainder of the cat&#8217;s world at the occurrence time.
The &#8220;<code>uncurry prowl &lt;$&gt;</code>&#8221; applies <code>prowl</code> to each friskiness and remainder world on waking.
Similarly for the other phase.</p>

<p>I think this version defines the behavior I want and that it can run efficiently, assuming that <code>trim b e</code> traverses <code>b</code> and <code>e</code> in parallel (so that laziness doesn&#8217;t cause a space-time leak).
However, this definition is much trickier than what I&#8217;m looking for.</p>

<p>One small improvement is to abstract a trimming pattern:</p>

<pre><code>trimf :: (B i -&gt; E o) -&gt; (B i -&gt; E (o, B i))
trimf ef i = trim i (ef i)

cat3 :: B World -&gt; B Cat
cat3 world = sleep world `switcher`
               ((uncurry prowl &lt;$&gt; trimf wake   world) `mappend`
                (uncurry eat   &lt;$&gt; trimf hunger world))
</code></pre>

<h3>A comonad comes out of hiding</h3>

<p>The <code>trim</code> functions above look a lot like snapshotting of behaviors:</p>

<pre><code>snapshot  :: B b -&gt; E a -&gt; E (a,b)

snapshot_ :: B b -&gt; E a -&gt; E b
</code></pre>

<p>Indeed, the meanings of trimming and snapshotting are very alike.
They both involve following an event and a behavior in parallel.
At each event occurrence, <code>snapshot</code> takes the <em>value</em> of the behavior at the occurrence time, while <code>trim</code> takes the entire remainder from that time on.</p>

<p>Given this similarity, can one be defined in terms of the other?
If we had a function to &#8220;extract&#8221; the first defined value of a behavior, we could define <code>snapshot</code> via <code>trim</code>.</p>

<pre><code>b `snapshot` e = fmap (second extract) (b `trim` e)

extract :: B a -&gt; a
</code></pre>

<p>We can also define <code>trim</code> via <code>snapshot</code>, if we have a way to get all trimmed versions of a behavior &#8212; to &#8220;duplicate&#8221; a one-level behavior into a two-level behavior:</p>

<pre><code>b `trim` e = duplicate b `snapshot` e

duplicate :: B a -&gt; B (B a)
</code></pre>

<p>If you&#8217;ve run into comonads, you may recognize <code>extract</code> and <code>duplicate</code> as the operations of <code>Comonad</code>, dual to <code>Monad</code>&#8216;s <code>return</code> and <code>join</code>.
It was this definition of <code>trim</code> that got me interested in comonads recently.</p>
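<p>Both interdefinitions can be checked in a hypothetical discrete-time model, where <code>extract</code> is <code>head</code> and <code>duplicate</code> yields all nonempty suffixes:</p>

```haskell
import Control.Arrow (second)
import Data.List (tails)

-- Hypothetical discrete-time model:
type B a = [a]        -- sample per tick
type E a = [Maybe a]  -- possible occurrence per tick

extract :: B a -> a
extract = head

duplicate :: B a -> B (B a)
duplicate = init . tails

-- Direct snapshot: pair each occurrence with the behavior's current value:
snapshot :: B b -> E a -> E (a, b)
snapshot = zipWith (\b o -> fmap (\a -> (a, b)) o)

-- trim via snapshot and duplicate, as in the post:
trim :: B b -> E a -> E (a, B b)
trim b e = duplicate b `snapshot` e

-- ... and snapshot back via trim and extract:
snapshot' :: B b -> E a -> E (a, b)
snapshot' b e = (fmap . fmap) (second extract) (b `trim` e)
```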

<p>In the style of <em><a href="http://conal.net/blog/posts/semantic-editor-combinators/" title="blog post">Semantic editor combinators</a></em>,</p>

<pre><code>snapshot = (result.result.fmap.second) extract trim
</code></pre>

<p>or</p>

<pre><code>trim = argument remainders R.snapshot
</code></pre>

<p>The <code>extract</code> function is problematic for classic FRP, which uses absolute (global) time.
We don&#8217;t know with which time to sample the behavior.
With <em>relative-time FRP</em>, we&#8217;ll only ever sample at (local) time 0.</p>

<h3>Relative time</h3>

<p>So far, the necessary trimming has strong operational significance: it prevents obsolete reactions and the consequent space-time leaks.</p>

<p>If we switch from absolute time to relative time, then trimming becomes something with familiar semantics, namely <code>drop</code>, as generalized and used in two of my previous posts, <em><a href="http://conal.net/blog/posts/sequences-functions-and-segments/" title="blog post">Sequences, streams, and segments</a></em> and <em><a href="http://conal.net/blog/posts/sequences-segments-and-signals/" title="blog post">Sequences, segments, and signals</a></em>.</p>

<p>The semantic difference: trimming (absolute time) erases early content in an input; while dropping (relative time) shifts input backward in time, losing the early content in the same way as <code>drop</code> on sequences.</p>

<h3>What&#8217;s next?</h3>

<p>While input trimming can be managed systematically, doing so explicitly is tedious and error prone.
A follow-up post will automatically apply the techniques from this post.
Hiding and automating the mechanics of trimming allows interactive behavior to be expressed correctly and without distraction.</p>

<p>Another post will relate input trimming to the time transformation of interactive behaviors, as discussed in 
<em><a href="http://conal.net/blog/posts/why-classic-FRP-does-not-fit-interactive-behavior/" title="blog post">Why classic FRP does not fit interactive behavior</a></em>.</p>
]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/trimming-inputs-in-functional-reactive-programming/feed</wfw:commentRss>
		<slash:comments>3</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2Ftrimming-inputs-in-functional-reactive-programming&amp;language=en_GB&amp;category=text&amp;title=Trimming+inputs+in+functional+reactive+programming&amp;description=This+post+takes+a+close+look+at+one+awkward+aspect+of+classic+%28non-arrow%29+FRP+for+interactive+behavior%2C+namely+the+need+to+trim+%28or+%26%238220%3Bage%26%238221%3B%29+old+input.+Failing+to+trim+results...&amp;tags=comonad%2CFRP%2Cfunctional+reactive+programming%2Cinteraction%2Cblog" type="text/html" />
	</item>
		<item>
		<title>Why classic FRP does not fit interactive behavior</title>
		<link>http://conal.net/blog/posts/why-classic-frp-does-not-fit-interactive-behavior</link>
		<comments>http://conal.net/blog/posts/why-classic-frp-does-not-fit-interactive-behavior#comments</comments>
		<pubDate>Wed, 10 Dec 2008 06:45:48 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[interaction]]></category>
		<category><![CDATA[perspective]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=69</guid>
		<description><![CDATA[In functional reactive programming (FRP), the type we call &#8220;behaviors&#8221; model non-interactive behavior. To see why, just look at the semantic model: t -&#62; a, for some notion t of time. One can argue as follows that this model applies to interactive behavior as well. Behaviors interacting with inputs are functions of time and of [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: Why classic FRP does not fit interactive behavior

Tags: FRP, functional reactive programming, interaction, perspective

URL: http://conal.net/blog/posts/why-classic-FRP-does-not-fit-interactive-behavior/

-->

<!-- references -->

<!-- teaser -->

<p>In functional reactive programming (FRP), the type we call &#8220;behaviors&#8221; model <em>non-interactive</em> behavior.
To see why, just look at the semantic model: <code>t -&gt; a</code>, for some notion <code>t</code> of time.</p>

<p>One can argue as follows that this model applies to interactive behavior as well.
Behaviors interacting with inputs are functions of time and of inputs.
Those inputs are also functions of time, so behaviors are just functions of time.
I held this perspective at first, but came to see a lack of composability.</p>

<p>My original FRP formulations (<a href="http://conal.net/Fran" title="Functional reactive animation">Fran</a> and its predecessors <a href="http://conal.net/tbag/" title="Project web page">TBAG</a> and <a href="http://conal.net/papers/ActiveVRML/" title="Tech report: &quot;A Brief Introduction to ActiveVRML&quot;">ActiveVRML</a>), as well as the much more recent library <a href="http://haskell.org/haskellwiki/Reactive" title="Wiki page">Reactive</a>, can be and are used to describe interactive behavior.
For simple sorts of things, this use works out okay.
When applications get a bit richer, the interface and semantics strain.
If you&#8217;ve delved a bit, you&#8217;ll have run into the signs of strain, with coping mechanisms like <em>start times</em>, <em>user arguments</em> and <em>explicit aging</em> of inputs, as you avoid the dreaded <em>space-time leaks</em>.</p>

<!--
**Edits**:

* 2008-02-09: just fiddling around
-->

<!-- without a comment or something here, the last item above becomes a paragraph -->

<p><span id="more-69"></span></p>

<h3>Behaviors</h3>

<p>Suppose I define an object that pays attention to an input that&#8217;s interesting near a certain time.
For instance, a cat chasing a bird in the back yard one morning last week.
Now shift the behavior a week forward to this morning.
The result is just a week-delayed form of the earlier behavior, displaying reactions to week-old input.
The original interactive behavior (the cat) was fed input (bird and yard) from a week ago, the result was recorded, and that recording is replayed a week later.</p>

<p>Written functionally,</p>

<pre><code>laterObserveCat :: Behavior World -&gt; Behavior Cat
laterObserveCat = later week . cat
</code></pre>

<p>where</p>

<pre><code>world :: Behavior World
cat   :: Behavior World -&gt; Behavior Cat

later :: Duration -&gt; Behavior a -&gt; Behavior a
</code></pre>

<p>We can also sit in the living room, five meters north of the back yard, and watch the cat via a camera.</p>

<pre><code>livingRoomObserveCat :: Behavior World -&gt; Behavior Cat
livingRoomObserveCat = north (5 * meter) . cat
</code></pre>

<p>Typically recordings are moved <em>both</em> in space and in time.
For example, record the cat in the back yard one week and watch the recording in the living room a week later:</p>

<pre><code>replayLRL :: Behavior World -&gt; Behavior Cat
replayLRL = later week . north (5 * meter) . cat
</code></pre>

<h3>Interactive behavior</h3>

<p>I&#8217;ve just described transforming a <em>non-interactive</em> behavior (recording) in time and space.
That non-interactive behavior resulted from applying an interactive behavior (the cat) to some input (bird and yard).
An <em>interactive</em> behavior is simply a function from non-interactive behavior (time-varying input) to non-interactive behavior (time-varying output).</p>

<p>Consider what it means to transform an <em>interactive</em> behavior in space.
I pick up the cat and carry her five meters north, into the living room.
The cat has a different perspective, which is that her environment moves five meters south.
For instance, the mouse that was one meter south of her is now six meters south.</p>

<p>If I lift the cat four feet above the ground, she sees the ground now four feet <em>below</em> her.
If I spin her clockwise, her environment spins counter-clockwise.
If I point my shrinking ray at her, she sees her world as growing larger, and she flees from the giant mice.</p>

<p>We can play the perspective game in time as well as space.
I point my speed-up ray at the cat, and she perceives her environment slow down.</p>

<p>When I say I move the cat north and she sees her environment move south, I am not suggesting that there are two alternative perspectives and we might take either one.
Rather, a complete picture combines both movements.
The behavior my cat exhibits when I pick her up is composed of three transformations:</p>

<ul>
<li>The world moves south;</li>
<li>The cat perceives this more southerly world and responds to it; and</li>
<li>The resulting cat behavior is moved north.</li>
</ul>

<p>Written functionally,</p>

<pre><code>livingRoomCat :: Behavior World -&gt; Behavior Cat
livingRoomCat = north (5 * meter) . cat . south (5 * meter)
</code></pre>

<p>Similarly with other spatial transformations, each paired with its inverse.
And similarly in time, e.g.,</p>

<pre><code>laterCat :: Behavior World -&gt; Behavior Cat
laterCat = later week . cat . earlier week
</code></pre>
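<p>This conjugation pattern can be sketched directly in the classic model <code>Behavior a = Time -&gt; a</code> (the <code>f</code> below is a made-up pointwise behavior function, standing in for <code>cat</code>):</p>

```haskell
-- Classic semantic model:
type Time = Double
type Behavior a = Time -> a

later, earlier :: Time -> Behavior a -> Behavior a
later   dt b = \t -> b (t - dt)  -- output appears dt later
earlier dt b = \t -> b (t + dt)  -- output appears dt earlier

week :: Time
week = 7

-- Transform an interactive behavior: shift input inversely to output.
laterI :: Time -> (Behavior i -> Behavior o) -> (Behavior i -> Behavior o)
laterI dt f = later dt . f . earlier dt
```

<p>For a memoryless <code>f</code>, the two shifts cancel, so the shifted behavior responds to current input.</p>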

<p>So there&#8217;s a semantically consistent model of transforming interactive behaviors in time or space, and this model explains the difference between transformation of non-interactive behavior and of interactive behavior.</p>

<h3>Aside</h3>

<p>Comparing these two definitions with the previous two suggests an alternative implementation.
Refactoring,</p>

<pre><code>laterCat :: Behavior World -&gt; Behavior Cat
laterCat = laterObserveCat . earlier week
</code></pre>

<p>Instead of carrying the interactive cat a week ahead in time, this definition suggests that we move the world (other than the cat) <em>back</em> one week, record the cat interacting with it, and hold onto that recording to watch a week later.
My back-of-the-envelope calculations suggest that this second implementation is less resource-efficient than the first one, so I&#8217;ll not consider it further in this post.</p>

<h3>Arrow FRP</h3>

<p>When we use tools suited to non-interactive behaviors in an interactive setting, we&#8217;re begging for space leaks (filling up recording media) and time leaks (fast-forwarding through old recorded matter to catch up with what we want to see).
These problems are what people experience with what I might call &#8220;classic FRP&#8221;, i.e., programming explicitly with (non-interactive) behaviors.
Out of this experience was born &#8220;Arrow FRP&#8221;, as in <a href="http://conal.net/papers/genuinely-functional-guis.pdf" title="Paper: &quot;Genuinely Functional User Interfaces&quot;">Fruit</a> and <a href="http://haskell.org/haskellwiki/Yampa" title="Wiki page">Yampa</a>.
The idea of Arrow FRP was to program with a type of &#8220;signal functions&#8221;, which can be thought of as mappings between behaviors:</p>

<pre><code>type SF a b = Behavior a -&gt; Behavior b
</code></pre>

<p>(Behaviors were also renamed to &#8220;signals&#8221;, but I&#8217;ll stick with &#8220;behaviors&#8221; for now.)
We can just look at the semantic model for signal transformers and see that they&#8217;re about <em>interactive</em> behavior.</p>
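<p>A sketch of this reading of <code>SF</code> in the classic model (assumed here: <code>Behavior a = Time -&gt; a</code>; <code>arrSF</code> and <code>delaySF</code> are made-up names, not Yampa&#8217;s actual API, which keeps <code>SF</code> abstract):</p>

```haskell
-- Classic semantic model:
type Time = Double
type Behavior a = Time -> a

-- A signal function maps whole input behaviors to output behaviors:
type SF a b = Behavior a -> Behavior b

-- Lift a pointwise function to a signal function:
arrSF :: (a -> b) -> SF a b
arrSF f b = f . b

-- A signal function with memory of the past: a pure delay.
delaySF :: Time -> SF a a
delaySF dt b = \t -> b (t - dt)
```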

<p>A lot of work was done with this new model, mostly growing out of Paul Hudak&#8217;s group at Yale.
It addressed the inherent awkwardness of the explicit-behavior style.</p>

<p>Despite this fundamental advantage of signal transformers, why am I still fooling around with the &#8220;classic&#8221; (pre-arrow) style?</p>

<ul>
<li>The <code>Arrow</code> style required multiple behaviors to be bunched up into one.  That combination appears to thwart efficient, change-driven evaluation, which was the problem I tackled in <em><a href="http://conal.net/blog/posts/simply-efficient-functional-reactivity/" title="Blog post">Simply efficient functional reactivity</a></em>.</li>
<li>There was no room for the distinction between behaviors and events.  I&#8217;d love to see how to combine them without losing expressiveness or semantic accuracy, and I haven&#8217;t yet.</li>
<li>Arrow programming is very awkward without the special arrow notation and has more of a sequential style than I like with the arrow notation.  I missed the functional feel of classic FRP.</li>
</ul>

<h3>Coming up</h3>

<p>Given the fundamental semantic fit between signal transformers and the problem domain of interactive behavior, I&#8217;d like to come up with convenient and efficient packaging.
The next posts will describe the packaging I&#8217;m trying out now.</p>
<p><a href="http://conal.net/blog/?flattrss_redirect&amp;id=69&amp;md5=1c05654dd716cacaea14eb65a6d8945b"><img src="http://conal.net/blog/wp-content/plugins/flattr/img/flattr-badge-white.png" srcset="http://conal.net/blog/wp-content/plugins/flattr/img/flattr-badge-white.png, http://conal.net/blog/wp-content/plugins/flattr/img/flattr-badge-white@2x.png 2xhttp://conal.net/blog/wp-content/plugins/flattr/img/flattr-badge-white.png, http://conal.net/blog/wp-content/plugins/flattr/img/flattr-badge-white@3x.png 3x" alt="Flattr this!"/></a></p>]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/why-classic-frp-does-not-fit-interactive-behavior/feed</wfw:commentRss>
		<slash:comments>6</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2Fwhy-classic-frp-does-not-fit-interactive-behavior&amp;language=en_GB&amp;category=text&amp;title=Why+classic+FRP+does+not+fit+interactive+behavior&amp;description=In+functional+reactive+programming+%28FRP%29%2C+the+type+we+call+%26%238220%3Bbehaviors%26%238221%3B+model+non-interactive+behavior.+To+see+why%2C+just+look+at+the+semantic+model%3A+t+-%26gt%3B+a%2C+for+some+notion+t+of...&amp;tags=FRP%2Cfunctional+reactive+programming%2Cinteraction%2Cperspective%2Cblog" type="text/html" />
	</item>
		<item>
		<title>Sequences, segments, and signals</title>
		<link>http://conal.net/blog/posts/sequences-segments-and-signals</link>
		<comments>http://conal.net/blog/posts/sequences-segments-and-signals#comments</comments>
		<pubDate>Fri, 05 Dec 2008 08:14:33 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[applicative functor]]></category>
		<category><![CDATA[comonad]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[function]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[functor]]></category>
		<category><![CDATA[monoid]]></category>
		<category><![CDATA[segment]]></category>
		<category><![CDATA[sequence]]></category>
		<category><![CDATA[type class morphism]]></category>
		<category><![CDATA[zipper]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=67</guid>
		<description><![CDATA[The post Sequences, streams, and segments offered an answer to the the question of what&#8217;s missing in the following box: infinitefinite discreteStream Sequence continuousFunction ??? I presented a simple type of function segments, whose representation contains a length (duration) and a function. This type implements most of the usual classes: Monoid, Functor, Zip, and Applicative, [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: Sequences, segments, and signals

Tags: function, sequence, monoid, functor, applicative functor, comonad, FRP, functional reactive programming, segment, type class morphism, zipper

URL: http://conal.net/blog/posts/sequences-segments-and-signals/

-->

<!-- references -->

<!-- teaser -->

<p>The post <em><a href="http://conal.net/blog/posts/sequences-streams-and-segments/" title="blog post">Sequences, streams, and segments</a></em> offered an answer to the question of what&#8217;s missing in the following box:</p>

<div align=center style="margin-bottom:10px">
  <table border="2">
    <tr><td></td><td style="text-align:center;padding:5px"><strong>infinite</strong></td><td style="text-align:center;padding:5px"><strong>finite</strong></td></tr>
    <tr><td style="text-align:center;padding:5px"><strong>discrete</strong></td><td style="text-align:center;padding:7px">Stream</td> <td style="text-align:center;padding:7px">Sequence</td></tr>
    <tr><td style="text-align:center;padding:5px"><strong>continuous</strong></td><td style="text-align:center;padding:7px">Function</td> <td style="text-align:center;padding:7px"><em>???</em></td></tr>
  </table>
</div>

<p>I presented a simple type of <em>function segments</em>, whose representation contains a length (duration) and a function.
This type implements most of the usual classes: <code>Monoid</code>, <code>Functor</code>, <code>Zip</code>, and <code>Applicative</code>, as well as <code>Comonad</code>, but not <code>Monad</code>.
It also implements a new type class, <code>Segment</code>, which generalizes the list functions <code>length</code>, <code>take</code>, and <code>drop</code>.</p>

<p>The function type is simple and useful in itself.
I believe it can also serve as a semantic foundation for functional reactive programming (FRP), as I&#8217;ll explain in another post.
However, the type has a serious performance problem that makes it impractical for some purposes, including as implementation of FRP.</p>

<p>Fortunately, we can solve the performance problem by adding a simple layer on top of function segments, to get what I&#8217;ll call &#8220;signals&#8221;.
With this new layer, we have an efficient replacement for function segments that implements exactly the same interface with exactly the same semantics.
Pleasantly, the class instances are defined fairly simply in terms of the corresponding instances on function segments.</p>

<p>You can download the <a href="http://conal.net/blog/code/Signal.hs">code for this post</a>.</p>

<p><strong>Edits</strong>:</p>

<ul>
<li>2008-12-06: <code>dup [] = []</code> near the end (was <code>[mempty]</code>).</li>
<li>2008-12-09: Fixed <code>take</code> and <code>drop</code> default definitions (thanks to sclv) and added point-free variant.</li>
<li>2008-12-18: Fixed <code>appl</code>, thanks to sclv.</li>
<li>2011-08-18: Eliminated accidental emoticon in the definition of <code>dup</code>, thanks to anonymous.</li>
</ul>

<!-- without a comment or something here, the last item above becomes a paragraph -->

<p><span id="more-67"></span></p>

<h3>The problem with function segments</h3>

<p>The type of function segments is defined as follows:</p>

<pre><code>data t :-&gt;# a = FS t (t -&gt; a)
</code></pre>

<p>The domain of the function segment is from zero up to but not including the given length.</p>

<p>An efficiency problem becomes apparent when we look at the <code>Monoid</code> instance:</p>

<pre><code>instance (Ord t, Num t) =&gt; Monoid (t :-&gt;# a) where
    mempty = FS 0 (error "sampling empty 't :-&gt;# a'")
    FS c f `mappend` FS d g =
      FS (c + d) (\t -&gt; if t &lt; c then f t else g (t - c))
</code></pre>

<p>Concatenation (<code>mappend</code>) creates a new segment that chooses, for every domain value <code>t</code>, whether to use one function or another.
If the second, then <code>t</code> must be shifted backward, since the function is being shifted forward.</p>
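<p>To see the cost concretely, here is a hedged, standalone sketch of concatenation and sampling (the <code>catFS</code>, <code>sampleFS</code>, and <code>chain</code> names are mine, for illustration):</p>

```haskell
-- Function segments: a length (duration) and a function on [0, length).
data FS t a = FS t (t -> a)

sampleFS :: FS t a -> t -> a
sampleFS (FS _ f) = f

-- Concatenation: choose a side at every sample, shifting the
-- domain value when sampling the second segment.
catFS :: (Ord t, Num t) => FS t a -> FS t a -> FS t a
catFS (FS c f) (FS d g) =
  FS (c + d) (\t -> if t < c then f t else g (t - c))

-- A right-nested chain of n unit-length constant segments.
chain :: Int -> FS Double Int
chain n = foldr1 catFS [ FS 1 (const i) | i <- [1 .. n] ]
```

Sampling <code>chain n</code> near time <code>n</code> walks all <code>n</code> links, one comparison and one subtraction per link: the time leak described below.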

<p>This implementation would be fine if we use <code>mappend</code> just on simple segments.
Once we get started, however, we&#8217;ll want to concatenate lots &amp; lots of segments.
In FRP, time-varying values go through many phases (segments) as time progresses.
Each quantity is described by a single &#8220;behavior&#8221; (sometimes called a &#8220;signal&#8221;) with many, and often infinitely many, phases.
Imagine an infinite tree of concatenations, which is typical for FRP behaviors.
At every moment, one phase is active.
Every sampling must recursively discover the active phase and the accumulated domain translation (from successive subtractions) to apply when sampling that phase.
Quite commonly, concatenation trees get progressively deeper on the right (larger <code>t</code> values).
In that case, sampling will get slower and slower with time.</p>

<p>I like to refer to these progressive slow-downs as &#8220;time leaks&#8221;.
There is also a serious space leak, since all of the durations and functions that go into a composed segment will be retained.</p>

<h3>Sequences of segments</h3>

<p>The problem above can be solved with a simple representation change.
Instead of combining functions into functions, just keep a list of simple function segments.</p>

<pre><code>-- | Signal indexed by t with values of type a.
newtype t :-&gt; a = S { unS :: [t :-&gt;# a] }
</code></pre>

<p>I&#8217;ll restrict the function segments to be <em>non-empty</em>, to keep the rest of the implementation simple and efficient.</p>

<p>This new representation allows for efficient <em>monotonic</em> sampling of signals.
As old segments are passed up, they can be dropped.</p>

<h4>What does it mean?</h4>

<p>There&#8217;s one central question for me in defining any data type: <em>What does it mean?</em></p>

<p>The meaning I&#8217;ll take for signals is function segments.
This interpretation is made precise in a function that maps from the type to the model (meaning).
In this case, simply concatenate all of the function segments:</p>

<pre><code>meaning :: (Ord t, Num t) =&gt; (t :-&gt; a) -&gt; (t :-&gt;# a)
meaning = mconcat . unS
</code></pre>

<p>Specifying the meaning of a type gives users a working model, and it defines correctness of implementation.
It also tells me what class instances to implement and tells users what instances to expect.
If a type&#8217;s meaning implements a class then I want the type to as well.
Moreover, the type&#8217;s instances have to agree with the model&#8217;s instances.
I&#8217;ve described this latter principle in <em><a href="http://conal.net/blog/posts/simplifying-semantics-with-type-class-morphisms" title="blog post">Simplifying semantics with type class morphisms</a></em> and <a href="http://conal.net/blog/tag/type-class-morphism/" title="Posts on type class morphisms">some other posts</a>.</p>

<h4>Higher-order wrappers</h4>

<p>To keep the code below short and clear, I&#8217;ll use some functions for adding and removing the newtype wrappers.
These higher-order functions apply functions inside of <code>(:-&gt;)</code> representations:</p>

<pre><code>inS  :: ([s :-&gt;# a] -&gt; [t :-&gt;# b])
     -&gt; ((s :-&gt;  a) -&gt; (t :-&gt;  b))

inS2 :: ([s :-&gt;# a] -&gt; [t :-&gt;# b] -&gt; [u :-&gt;# c])
     -&gt; ((s :-&gt;  a) -&gt; (t :-&gt;  b) -&gt; (u :-&gt;  c))
</code></pre>

<p>Using the trick described in <em><a href="http://conal.net/blog/posts/prettier-functions-for-wrapping-and-wrapping/" title="blog post">Prettier functions for wrapping and wrapping</a></em>, the definitions are simpler than the types:</p>

<pre><code>inS  = result   S . argument unS
inS2 = result inS . argument unS
</code></pre>
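<p>For reference, the two combinators can be defined directly (a sketch matching the types used in that post): <code>result</code> edits a function&#8217;s codomain, and <code>argument</code> edits its domain.</p>

```haskell
-- Edit the result of a function (post-compose).
result :: (b -> b') -> ((a -> b) -> (a -> b'))
result = (.)

-- Edit the argument of a function (pre-compose).
argument :: (a' -> a) -> ((a -> b) -> (a' -> b))
argument = flip (.)
```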

<h4><code>Functor</code></h4>

<p>The <code>Functor</code> instance applies a given function inside the function segments inside the lists:</p>

<pre><code>instance Functor ((:-&gt;) t) where
    fmap h (S ss) = S (fmap (fmap h) ss)
</code></pre>

<p>Or, in the style of <em><a href="http://conal.net/blog/posts/semantic-editor-combinators/" title="blog post">Semantic editor combinators</a></em>,</p>

<pre><code>instance Functor ((:-&gt;) t) where
    fmap = inS . fmap . fmap
</code></pre>

<p>Why this definition?
Because it is correct with respect to the semantic model, i.e., the meaning of <code>fmap</code> is <code>fmap</code> of the meaning, i.e.,</p>

<pre><code> meaning . fmap h == fmap h . meaning
</code></pre>

<p>which is to say that the following diagram commutes:</p>

<div align=center><img src="http://conal.net/blog/pictures/signal-meaning-fmap-morphism.png"/></div>

<p>Proof:</p>

<pre><code>meaning . fmap h 

  == {- fmap definition -}

meaning . S . fmap (fmap h) . unS

  == {- meaning definition -}

mconcat . unS . S . fmap (fmap h) . unS

  == {- unS and S are inverses  -}

mconcat . fmap (fmap h) . unS

  == {- fmap h distributes over mappend -}

fmap h . mconcat . unS

  == {- meaning definition -}

fmap h . meaning
</code></pre>

<h4><code>Applicative</code> and <code>Zip</code></h4>

<p>Again, the <code>meaning</code> function tells us what the <code>Applicative</code> instance has to mean.
We only get to choose how to implement that meaning correctly.
The <code>Applicative</code> <a href="http://conal.net/blog/tag/type-class-morphism/" title="Posts on type class morphisms">morphism properties</a>:</p>

<pre><code>meaning (pure a)    == pure a
meaning (bf &lt;*&gt; bx) == meaning bf &lt;*&gt; meaning bx
</code></pre>

<p>Our <code>Applicative</code> instance has a definition similar in simplicity and style to the <code>Functor</code> instance, assuming a worker function <code>appl</code> for <code>(&lt;*&gt;)</code>:</p>

<pre><code>instance (Ord t, Num t, Bounded t) =&gt; Applicative ((:-&gt;) t) where
    pure  = S . pure . pure
    (&lt;*&gt;) = inS2 appl

appl :: (Ord t, Num t, Bounded t) =&gt;
        [t :-&gt;# (a -&gt; b)] -&gt; [t :-&gt;# a] -&gt; [t :-&gt;# b]
</code></pre>

<p>This worker function is somewhat intricate.
At least my implementation of it is, and perhaps there&#8217;s a simpler one.</p>

<p>First, if either segment list runs out, the combination runs out (because the same is true for the <em>meaning</em> of signals).</p>

<pre><code>[] `appl` _  = []

_  `appl` [] = []
</code></pre>

<p>If neither segment list is empty, open up the first segment.
Split the longer segment into a prefix that matches the shorter segment, and combine the two segments with <code>(&lt;*&gt;)</code> (on function segments).
Toss the left-over piece back in its list, and continue.</p>

<pre><code>(fs:fss') `appl` (xs:xss')
   | fd == xd  = (fs  &lt;*&gt; xs ) : (fss' `appl`       xss' )
   | fd &lt;  xd  = (fs  &lt;*&gt; xs') : (fss' `appl` (xs'':xss'))
   | otherwise = (fs' &lt;*&gt; xs ) : ((fs'':fss') `appl` xss')
 where
   fd         = length fs
   xd         = length xs
   (fs',fs'') = splitAt xd fs
   (xs',xs'') = splitAt fd xs
</code></pre>

<p>A <code>Zip</code> instance is easy as always with applicative functors:</p>

<pre><code>instance (Ord t, Num t, Bounded t) =&gt; Zip ((:-&gt;) t) where zip = liftA2 (,)
</code></pre>

<h4><code>Monoid</code></h4>

<p>The <code>Monoid</code> instance:</p>

<pre><code>instance Monoid (t :-&gt; a) where
  mempty = S []
  S xss `mappend` S yss = S (xss ++ yss)
</code></pre>

<p>Correctness follows from properties of <code>mconcat</code> (as used in the <code>meaning</code> function).</p>

<p>We&#8217;re really just using the <code>Monoid</code> instance for the underlying representation, i.e.,</p>

<pre><code>instance Monoid (t :-&gt; a) where
    mempty  = S mempty
    mappend = inS2 mappend
</code></pre>

<h4><code>Segment</code></h4>

<p>The <code>Segment</code> class has <code>length</code>, <code>take</code> and <code>drop</code>.
It&#8217;s handy to include also <code>null</code> and <code>splitAt</code>, both modeled after their counterparts on lists.</p>

<p>The new &amp; improved <code>Segment</code> class:</p>

<pre><code>class Segment seg dur | seg -&gt; dur where
    null    :: seg -&gt; Bool
    length  :: seg -&gt; dur
    take    :: dur -&gt; seg -&gt; seg
    drop    :: dur -&gt; seg -&gt; seg
    splitAt :: dur -&gt; seg -&gt; (seg,seg)
    -- Defaults:
    splitAt d s = (take d s, drop d s)
    take    d s = fst (splitAt d s)
    drop    d s = snd (splitAt d s)
</code></pre>

<p>If we wanted to require <code>dur</code> to be numeric, we could add a default for <code>null</code> as well.
This default could be quite expensive in some cases.
(In the style of <em><a href="http://conal.net/blog/posts/semantic-editor-combinators/" title="blog post">Semantic editor combinators</a></em>, <code>take = (result.result) fst splitAt</code>, and similarly for <code>drop</code>.)</p>
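<p>To see the defaults at work, here is a hedged sketch (not from the post) of the prototypical list instance, where only <code>null</code>, <code>length</code>, and <code>splitAt</code> need defining:</p>

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
import Prelude hiding (null, length, take, drop, splitAt)
import qualified Prelude as P

class Segment seg dur | seg -> dur where
  null    :: seg -> Bool
  length  :: seg -> dur
  take    :: dur -> seg -> seg
  drop    :: dur -> seg -> seg
  splitAt :: dur -> seg -> (seg, seg)
  -- Defaults:
  splitAt d s = (take d s, drop d s)
  take    d s = fst (splitAt d s)
  drop    d s = snd (splitAt d s)

-- Lists with Int durations; take and drop come from the defaults.
instance Segment [a] Int where
  null    = P.null
  length  = P.length
  splitAt = P.splitAt
```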

<p>The <code>null</code> and <code>length</code> definitions are simple, following from properties of <code>mconcat</code>.</p>

<pre><code>instance (Ord t, Num t) =&gt; Segment (t :-&gt; a) t where
  null   (S xss) = null xss
  length (S xss) = sum (length &lt;$&gt; xss)
  ...
</code></pre>

<p>The null case says that the signal is empty exactly when there are no function segments.
This simple definition relies on our restriction to non-empty function segments.
If we drop that restriction, we&#8217;d have to check that every segment is empty:</p>

<pre><code>  -- Alternative definition
  null (S xss) = all null xss
</code></pre>

<p>The <code>length</code> is just the sum of the lengths.</p>

<p>The tricky piece is <code>splitAt</code> (used to define both <code>take</code> and <code>drop</code>), which must assemble segments to satisfy the requested prefix length.
The last segment used might have to get split into two, with one part going into the prefix and one to the suffix.</p>

<pre><code>  splitAt _ (S [])  = (mempty,mempty)
  splitAt d b | d &lt;= 0 = (mempty, b)
  splitAt d (S (xs:xss')) =
    case (d `compare` xd) of
      EQ -&gt; (S [xs], S xss')
      LT -&gt; let (xs',xs'') = splitAt d xs in
              (S [xs'], S (xs'':xss'))
      GT -&gt; let (S uss, suffix) = splitAt (d-xd) (S xss') in
              (S (xs:uss), suffix)
   where
     xd = length xs
</code></pre>

<h4><code>Copointed</code> and <code>Comonad</code></h4>

<p>To extract an element from a signal, extract an element from its first function segment.
Awkwardly, extraction will fail (produce &perp;/error) when the signal is empty.</p>

<pre><code>instance Num t =&gt; Copointed ((:-&gt;) t) where
    extract (S [])     = error "extract: empty S"
    extract (S (xs:_)) = extract xs
</code></pre>

<p>I&#8217;ve exploited our restriction to non-empty function segments.
Otherwise, <code>extract</code> would have to skip past the empty ones:</p>

<pre><code>-- Alternative definition
instance Num t =&gt; Copointed ((:-&gt;) t) where
    extract (S []) = error "extract: empty S"
    extract (S (xs:xss'))
      | null xs     = extract (S xss')
      | otherwise   = extract xs
</code></pre>

<p>The error/&perp; in this definition is dicey, as is the one for function segments.
If we allow the same abuse in order to define a list <code>Copointed</code>, we can get an alternative to the first definition that is prettier but gives a less helpful error message:</p>

<pre><code>instance Num t =&gt; Copointed ((:-&gt;) t) where
    extract = extract . extract . unS
</code></pre>

<p>See the closing remarks for more about this diciness.</p>

<p>Finally, we have <code>Comonad</code>, with its <code>duplicate</code> method.</p>

<pre><code>duplicate :: (t :-&gt; a) -&gt; (t :-&gt; (t :-&gt; a))
</code></pre>

<p>I get confused with wrapping and unwrapping, so let&#8217;s separate the definition into a packaging part and a content part.</p>

<pre><code>instance (Ord t, Num t) =&gt; Comonad ((:-&gt;) t) where
    duplicate = fmap S . inS dup
</code></pre>

<p>with content part:</p>

<pre><code>dup :: (Ord t, Num t) =&gt; [t :-&gt;# a] -&gt; [t :-&gt;# [t :-&gt;# a]]
</code></pre>

<p><!-- 
I think of `duplicate` as forming all of the *tails* of a segment (as it does with lists via the standard `tails` function). 
-->
The helper function, <code>dup</code>, takes each function segment and prepends each of its tails onto the remaining list of segments.</p>

<p>If the segment list is empty, the result is empty as well.</p>

<pre><code>dup []        = []

dup (xs:xss') = ((: xss') &lt;$&gt; duplicate xs) : dup xss'
</code></pre>

<h3>Closing remarks</h3>

<ul>
<li><p>The definitions above use the function segment type only through its type class interfaces, and so all of them can be generalized.
Several definitions rely on the <code>Segment</code> instance, but otherwise, each method for the composite type relies on the corresponding method for the underlying segment type.
(For instance, <code>fmap</code> uses <code>fmap</code>, <code>(&lt;*&gt;)</code> uses <code>(&lt;*&gt;)</code>, <code>Segment</code> uses <code>Segment</code>, etc.)
This generality lets us replace function segments with more efficient representations, e.g., doing constant propagation, as in <a href="http://haskell.org/haskellwiki/Reactive" title="Wiki page for the Reactive library">Reactive</a>.
We can also generalize from lists in the definitions above.</p></li>
<li><p>Even without concatenation, function segments can become expensive when <code>drop</code> is repeatedly applied, because function shifts accumulate (e.g., <code>f . (+ 0.01) . (+ 0.01) ....</code>).
A more <code>drop</code>-friendly representation for function segments would be a function and an offset.
Successive drops would add offsets, and <code>extract</code> would always apply the function to its offset.
This representation is similar to the <code>FunArg</code> comonad, mentioned in <a href="http://cs.ioc.ee/~tarmo/papers/essence.pdf" title="Paper by Tarmo Uustalu and Varmo Vene">The Essence of Dataflow Programming</a> (Section 5.2).</p></li>
<li><p>The list-of-segments representation enables efficient monotonic sampling, simply by dropping after each sample.
A variation is to use a list zipper instead of a list.
Then non-monotonic sampling will be efficient as long as successive samplings are for nearby domain values.
Switching to multi-directional representation would lose the space efficiency of unidirectional representations.
The latter work well with lazy evaluation, often running in constant space, because old values are recycled while new values are getting evaluated.</p></li>
<li><p>Still another variation is to use a binary tree instead of a list, to avoid the list append in the <code>Monoid</code> instance for <code>(t :-&gt; a)</code>.
A tree zipper would allow non-monotonic sampling.
Sufficient care in data structure design could perhaps yield efficient random access.</p></li>
<li><p>There&#8217;s a tension between the <code>Copointed</code> and <code>Monoid</code> interfaces.
<code>Copointed</code> has <code>extract</code>, and <code>Monoid</code> has <code>mempty</code>, so what is the value of <code>extract mempty</code>?
Given that <code>Copointed</code> is parameterized over <em>arbitrary</em> types, the only possible answer seems to be &perp; (as in the use of <code>error</code> above).
I don&#8217;t know if the comonad laws can all hold for possibly-empty function segments or for possibly-empty signals.
I&#8217;m grateful to <a href="http://comonad.com/reader/">Edward Kmett</a> for helping me understand this conflict.
He suggested using two coordinated types, one possibly-empty (a monoid) and the other non-empty (a comonad).
I&#8217;m curious to see whether that idea works out.</p></li>
</ul>
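<p>The <code>drop</code>-friendly representation suggested above can be sketched as follows (hedged; the names are mine): pair the function with an accumulated offset, so repeated drops add offsets rather than composing shifts.</p>

```haskell
-- Offset, remaining length, and the underlying function.
data OffFS t a = OffFS t t (t -> a)

-- Dropping adds to the offset instead of shifting the function.
dropOff :: Num t => t -> OffFS t a -> OffFS t a
dropOff d (OffFS o len f) = OffFS (o + d) (len - d) f

-- Extraction applies the function at the accumulated offset.
extractOff :: OffFS t a -> a
extractOff (OffFS o _ f) = f o
```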
<p><a href="http://conal.net/blog/?flattrss_redirect&amp;id=67&amp;md5=e15e8c9dfd9e8f9c0aadd8dc805515dc"><img src="http://conal.net/blog/wp-content/plugins/flattr/img/flattr-badge-white.png" srcset="http://conal.net/blog/wp-content/plugins/flattr/img/flattr-badge-white.png, http://conal.net/blog/wp-content/plugins/flattr/img/flattr-badge-white@2x.png 2xhttp://conal.net/blog/wp-content/plugins/flattr/img/flattr-badge-white.png, http://conal.net/blog/wp-content/plugins/flattr/img/flattr-badge-white@3x.png 3x" alt="Flattr this!"/></a></p>]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/sequences-segments-and-signals/feed</wfw:commentRss>
		<slash:comments>9</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2Fsequences-segments-and-signals&amp;language=en_GB&amp;category=text&amp;title=Sequences%2C+segments%2C+and+signals&amp;description=The+post+Sequences%2C+streams%2C+and+segments+offered+an+answer+to+the+the+question+of+what%26%238217%3Bs+missing+in+the+following+box%3A+infinitefinite+discreteStream+Sequence+continuousFunction+%3F%3F%3F+I+presented+a+simple+type...&amp;tags=applicative+functor%2Ccomonad%2CFRP%2Cfunction%2Cfunctional+reactive+programming%2Cfunctor%2Cmonoid%2Csegment%2Csequence%2Ctype+class+morphism%2Czipper%2Cblog" type="text/html" />
	</item>
		<item>
		<title>Sequences, streams, and segments</title>
		<link>http://conal.net/blog/posts/sequences-streams-and-segments</link>
		<comments>http://conal.net/blog/posts/sequences-streams-and-segments#comments</comments>
		<pubDate>Mon, 01 Dec 2008 07:29:27 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[applicative functor]]></category>
		<category><![CDATA[comonad]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[function]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[functor]]></category>
		<category><![CDATA[monoid]]></category>
		<category><![CDATA[sequence]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=65</guid>
		<description><![CDATA[What kind of thing is a movie? Or a song? Or a trajectory from point A to point B? If you&#8217;re a computer programmer/programmee, you might say that such things are sequences of values (frames, audio samples, or spatial locations). I&#8217;d suggest that these discrete sequences are representations of something more essential, namely a flow [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: Sequences, streams, and segments

Tags: function, sequence, monoid, functor, applicative functor, comonad, FRP, functional reactive programming, segment

URL: http://conal.net/blog/posts/sequences-streams-and-segments/

-->

<!-- references -->

<!-- teaser -->

<p>What kind of thing is a movie?
Or a song?
Or a trajectory from point A to point B?
If you&#8217;re a computer programmer/programmee, you might say that such things are sequences of values (frames, audio samples, or spatial locations).
I&#8217;d suggest that these discrete sequences are representations of something more essential, namely a <em>flow</em> of continuously time-varying values.
Continuous models, whether in time or space, are often more compact, precise, adaptive, and composable than their discrete counterparts.</p>

<p>Functional programming offers great support for sequences of variable length.
<em>Lazy</em> functional programming adds <em>infinite</em> sequences, often called <em>streams</em>, which allows for more elegant and modular programming.</p>

<p>Functional programming also has functions as first class values, and when the function&#8217;s domain is (conceptually) continuous, we get a continuous counterpart to infinite streams.</p>

<p>Streams, sequences, and functions are three corners of a square.
Streams are discrete and infinite, sequences are discrete and finite, and functions-on-reals are continuous and infinite.
The missing corner is continuous and finite, and that corner is the topic of this post.</p>

<div align=center>
  <table border="2">
    <tr><td></td><td style="text-align:center;padding:5px"><strong>infinite</strong></td><td style="text-align:center;padding:5px"><strong>finite</strong></td></tr>
    <tr><td style="text-align:center;padding:5px"><strong>discrete</strong></td><td style="text-align:center;padding:15px">Stream</td> <td style="text-align:center;padding:15px">Sequence</td></tr>
    <tr><td style="text-align:center;padding:5px"><strong>continuous</strong></td><td style="text-align:center;padding:15px">Function</td> <td style="text-align:center;padding:15px"><em>???</em></td></tr>
  </table>
</div>

<p>You can download the <a href="http://conal.net/blog/code/Segment.hs">code for this post</a>.</p>

<p><strong>Edits</strong>:</p>

<ul>
<li>2008-12-01: Added <a href="http://conal.net/blog/code/Segment.hs">Segment.hs</a> link.</li>
<li>2008-12-01: Added <code>Monoid</code> instance for function segments.</li>
<li>2008-12-01: Renamed constructor &#8220;<code>DF</code>&#8221; to &#8220;<code>FS</code>&#8221; (for &#8220;function segment&#8221;)</li>
<li>2008-12-05: Tweaked the inequality in <code>mappend</code> on <code>(t :-&gt;# a)</code>. </li>
</ul>

<!-- without a comment or something here, the last item above becomes a paragraph -->

<p><span id="more-65"></span></p>

<!--
This play has a cast of characters you may have met before.
`Monoid` is the star of the show, with `Functor` and `Applicative` playing supporting roles.
`Monad` appears only briefly.
Making her debut appearance on this blog is `Comonad`.
We'll be seeing more of her in future episodes.
-->

<h3>Streams</h3>

<p>I&#8217;ll be using Wouter Swierstra&#8217;s <a href="http://hackage.haskell.org/cgi-bin/hackage-scripts/package/Stream" title="Haskell library on Hackage">Stream library</a>.
A stream is an infinite sequence of values:</p>

<pre><code>data Stream a = Cons a (Stream a)
</code></pre>

<!--, but renaming his `Cons` constructor to `(:<)`, as used in *[The Essence of Dataflow Programming][]*. -->

<p><code>Stream</code> is a functor and an applicative functor.</p>

<pre><code>instance Functor Stream where
    fmap f (Cons x xs) = Cons (f x) (fmap f xs)

instance Applicative Stream where
    pure  = repeat
    (&lt;*&gt;) = zipWith ($)

repeat :: a -&gt; Stream a
repeat x = Cons x (repeat x)
</code></pre>

<h3>Comonads</h3>

<p>Recently I&#8217;ve gotten enamored with comonads, which are dual to monads.
In other words, comonads are like monads but wearing their category arrows backwards.
I&#8217;ll be using the comonad definitions from the <a href="http://hackage.haskell.org/cgi-bin/hackage-scripts/package/category-extras" title="Haskell library on Hackage">category-extras library</a>.</p>

<p>The most helpful intuitive description I&#8217;ve found is that comonads describe <em>values in context</em>.</p>

<p>The <code>return</code> method injects a pure value into a monadic value (having no effect).</p>

<pre><code>return  :: Monad m     =&gt; a -&gt; m a
</code></pre>

<p>The dual to monadic <code>return</code> is <code>extract</code> (sometimes called &#8220;<code>counit</code>&#8221; or &#8220;<code>coreturn</code>&#8220;), which extracts a value out of a comonadic value (discarding the value&#8217;s context).
The <a href="http://hackage.haskell.org/cgi-bin/hackage-scripts/package/category-extras" title="Haskell library on Hackage">category-extras library</a> splits this method out from <code>Comonad</code> into the <code>Copointed</code> class:</p>

<pre><code>extract :: Copointed w =&gt; w a -&gt; a
</code></pre>

<p>Monadic values are typically <em>produced</em> in effectful computations:</p>

<pre><code>a -&gt; m b
</code></pre>

<p>Comonadic values are typically <em>consumed</em> in context-sensitive computations:</p>

<pre><code>w a -&gt; b
</code></pre>

<p>(Kleisli arrows wrap the producer pattern, while CoKleisli arrows wrap the consumer pattern.)</p>

<p>Monads have a way to extend a monadic producer into one that consumes an entire monadic value:</p>

<pre><code>(=&lt;&lt;) :: (Monad m) =&gt; (a -&gt; m b) -&gt; (m a -&gt; m b)
</code></pre>

<p>We more often see this operation in its flipped form (obscuring the conceptual distinction between Haskell arrows and arbitrary category arrows):</p>

<pre><code>(&gt;&gt;=) :: (Monad m) =&gt; m a -&gt; (a -&gt; m b) -&gt; m b
</code></pre>

<p>Dually, comonads have a way to extend a comonadic consumer into one that produces an entire comonadic value:</p>

<pre><code>extend :: (Comonad w) =&gt; (w a -&gt; b) -&gt; (w a -&gt; w b)
</code></pre>

<p>which also has a flipped version:</p>

<pre><code>(=&gt;&gt;) :: (Comonad w) =&gt; w a -&gt; (w a -&gt; b) -&gt; w b
</code></pre>

<p>Another view on monads is as having a way to <code>join</code> two monadic levels into one.</p>

<pre><code>join      :: (Monad   m) =&gt; m (m a) -&gt; m a
</code></pre>

<p>Dually, comonads have a way to <code>duplicate</code> one level into two:</p>

<pre><code>duplicate :: (Comonad w) =&gt; w a -&gt; w (w a)
</code></pre>

<p>For a monad, any of <code>join</code>, <code>(=&lt;&lt;)</code>, and <code>(&gt;&gt;=)</code> can be used to define the others.
For a comonad, any of <code>duplicate</code>, <code>extend</code>, and <code>(=&gt;&gt;)</code> can be used to define the others.</p>
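<p>For instance, here is one standard set of interdefinitions (a sketch, using only the <code>Functor</code> superclass methods):</p>

<pre><code>join m    = m &gt;&gt;= id
m &gt;&gt;= f   = join (fmap f m)

duplicate = extend id
extend f  = fmap f . duplicate
</code></pre>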

<h3>The Stream comonad</h3>

<p>What might the stream comonad be?</p>

<p>The Stream library already has functions of the necessary types for <code>extract</code> and <code>duplicate</code>, corresponding to familiar list functions:</p>

<pre><code>head :: Stream a -&gt; a
head (Cons x _ ) = x

tails :: Stream a -&gt; Stream (Stream a)
tails xs = Cons xs (tails (tail xs))
</code></pre>

<p>where</p>

<pre><code>tail :: Stream a -&gt; Stream a
tail (Cons _ xs) = xs
</code></pre>

<p>Indeed, <code>head</code> and <code>tails</code> are just what we&#8217;re looking for.</p>

<pre><code>instance Copointed Stream where extract   = head
instance Comonad   Stream where duplicate = tails
</code></pre>

<p>There is also a <code>Monad</code> instance for <code>Stream</code>, in which <code>return</code> is <code>repeat</code> (matching <code>pure</code> as expected) and <code>join</code> is diagonalization, producing a stream whose <em>n<sup>th</sup></em> element is the <em>n<sup>th</sup></em> element of the <em>n<sup>th</sup></em> element of a given stream of streams.</p>
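<p>A sketch of what such an instance might look like, with <code>joinS</code> as a hypothetical helper name (the Stream library&#8217;s <code>repeat</code> builds a constant stream):</p>

<pre><code>instance Monad Stream where
    return   = repeat
    xs &gt;&gt;= f = joinS (fmap f xs)

-- Diagonalize: keep the head of the first stream, then recurse on
-- the tails of the remaining streams.
joinS :: Stream (Stream a) -&gt; Stream a
joinS (Cons xs xss) = Cons (head xs) (joinS (fmap tail xss))
</code></pre>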

<div class="exercise">

<p><strong>Exercise</strong>: The indexing function <code>(!!)</code> is a sort of semantic function for <code>Stream</code>.
Show that <code>(!!)</code> is a morphism for <code>Functor</code>, <code>Applicative</code>, <code>Monad</code>, and <code>Comonad</code>.
In other words, the meaning of the functor is the functor of the meanings, and similarly for the other type classes.
The <code>Comonad</code> case has a little wrinkle.
See the posts on <a href="http://conal.net/blog/tag/type-class-morphism/" title="Posts on morphisms">type class morphisms</a>.</p>

</div>

<h3>Adding finiteness</h3>

<p>Lists and other possibly-finite sequence types add an interesting new aspect over streams, which is concatenation, usually wrapped in a <code>Monoid</code> instance.</p>

<pre><code>class Monoid o where
    mempty  :: o
    mappend :: o -&gt; o -&gt; o
</code></pre>
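<p>Lists, for example, form a <code>Monoid</code> with the empty list and concatenation:</p>

<pre><code>instance Monoid [a] where
    mempty  = []
    mappend = (++)
</code></pre>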

<p>Lists also have <code>take</code> and <code>drop</code> operations, which can undo the effect of concatenation, as well as a notion of <code>length</code> (duration).
Let&#8217;s generalize these three to be methods of a new type class, <code>Segment</code>, so that we can define <em>continuous</em> versions.</p>

<pre><code>class Segment seg len where
    length :: seg -&gt; len
    drop   :: len -&gt; seg -&gt; seg
    take   :: len -&gt; seg -&gt; seg
</code></pre>

<p>For lists, we can use the prelude functions</p>

<pre><code>instance Segment [a] Int where
    length = Prelude.length
    drop   = Prelude.drop
    take   = Prelude.take
</code></pre>

<p>Or the more <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Data-List.html#26">generic versions</a>:</p>

<pre><code>instance Integral i =&gt; Segment [a] i where
    length = genericLength
    drop   = genericDrop
    take   = genericTake
</code></pre>

<p>These three functions relate to <code>mappend</code>, to give us the following &#8220;Segment laws&#8221;:</p>

<pre><code>drop (length as) (as `mappend` bs) == bs
take (length as) (as `mappend` bs) == as

t &lt;= length as ==&gt; length (take t as) == t
t &lt;= length as ==&gt; length (drop t as) == length as - t
</code></pre>
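<p>For instance, instantiating the first two laws at lists (where <code>mappend</code> is concatenation):</p>

<pre><code>drop (length "ab") ("ab" `mappend` "cdef") == "cdef"
take (length "ab") ("ab" `mappend` "cdef") == "ab"
</code></pre>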

<h3 id="adding-continuity">Adding continuity</h3>

<p>Streams and lists are <em>discrete</em>, containing countably many or finitely many elements.
They both have <em>continuous</em> counterparts.</p>

<p>When we think of a stream as a function from natural numbers, then John Reynolds&#8217;s alternative arises: functions over <em>real numbers</em>, i.e., a continuum of values.
If we want uni-directional streams, then stick with non-negative reals.</p>

<p>Many stream and list operations are meaningful and useful not only for discrete sequences but also for their continuous counterparts.</p>

<p>The infinite (stream-like) case is already handled by the class instances for functions found in the GHC base libraries (<a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Functor-Instances.html" title="Haskell module documentation">Control.Functor.Instances</a> and <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Applicative.html" title="Haskell module documentation">Control.Applicative</a>).</p>

<pre><code>instance Functor ((-&gt;) t) where
    fmap = (.)

instance Applicative ((-&gt;) t) where
    pure = const
    (f &lt;*&gt; g) x = (f x) (g x)

instance Monad ((-&gt;) t) where
    return = const
    f &gt;&gt;= k = \ t -&gt; k (f t) t
</code></pre>

<p>As a consequence,</p>

<pre><code>  join f == f &gt;&gt;= id ==  t -&gt; f t t
</code></pre>

<p>Assume a type wrapper, <code>NonNeg</code>, for non-negative values.
For discrete streams, <code>r == NonNeg Integer</code>, while for continuous streams, <code>r == NonNeg R</code>, for some type <code>R</code> representing reals.</p>

<p>The <a href="http://hackage.haskell.org/packages/archive/category-extras/latest/doc/html/Control-Comonad.html">co-monadic instances</a> from the <a href="http://hackage.haskell.org/cgi-bin/hackage-scripts/package/category-extras" title="Haskell library on Hackage">category-extras library</a>:</p>

<pre><code>instance Monoid o =&gt; Copointed ((-&gt;) o) where
    extract f = f mempty

instance Monoid o =&gt; Comonad ((-&gt;) o) where
    duplicate f x = \ y -&gt; f (x `mappend` y)
</code></pre>

<h3>Finite and continuous</h3>

<p>Functions provide a setting for generalized streams.
How do we add finiteness?
A very simple answer is to combine a length (duration) with a function, to form a &#8220;function segment&#8221;:</p>

<pre><code>data t :-&gt;# a = FS t (t -&gt; a)
</code></pre>

<p>The domain of this function is from zero to just short of the given length.</p>

<p>Now let&#8217;s define class instances.</p>

<p><strong>Exercise</strong>: Show that all of the instances below are semantically consistent with the <code>Stream</code> and <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Applicative.html#t%3AZipList"><code>ZipList</code></a> instances.</p>

<h4><code>Monoid</code></h4>

<p>Empty function segments have zero duration.
Concatenation adds durations and samples either function, right-shifting the second one.</p>

<pre><code>instance (Ord t, Num t) =&gt; Monoid (t :-&gt;# a) where
    mempty = FS 0 (error "sampling empty 't :-&gt;# a'")
    FS c f `mappend` FS d g =
      FS (c + d) (\ t -&gt; if t &lt; c then f t else g (t - c))
</code></pre>

<h4><code>Segment</code></h4>

<p>The <code>Segment</code> operations are easy to define:</p>

<pre><code>instance Num t =&gt; Segment (t :-&gt;# a) t where
    length (FS d _) = d
    drop t (FS d f) = FS (d - t) (\ t' -&gt; f (t + t'))
    take t (FS _ f) = FS t f
</code></pre>

<p>Notice what&#8217;s going on with <code>drop</code>.
The length gets shortened by <code>t</code> (the amount dropped), and the function gets shifted (to the &#8220;left&#8221;) by <code>t</code>.</p>

<p>There&#8217;s also a tantalizing resemblance between this <code>drop</code> definition and <code>duplicate</code> for the function comonad.
We&#8217;ll return to this resemblance in another post.</p>

<p>I&#8217;ve allowed dropping or taking more than is present, though these cases could instead be handled with an error or by taking or dropping fewer elements (as with the list <code>drop</code> and <code>take</code> functions).</p>

<h4><code>Functor</code>, <code>Zip</code> and <code>Applicative</code></h4>

<p><code>fmap</code> applies a given function to each of the function values, leaving the length unchanged.</p>

<pre><code>instance Functor ((:-&gt;#) t) where
    fmap h (FS d f) = FS d (h . f)
</code></pre>

<p><code>zip</code> pairs corresponding segment values and runs out with the shorter segment.
(See <em><a href="http://conal.net/blog/posts/more-beautiful-fold-zipping" title="blog post">More beautiful fold zipping</a></em> for the <code>Zip</code> class.)</p>

<pre><code>instance Ord t =&gt; Zip ((:-&gt;#) t) where
    FS xd xf `zip` FS yd yf = FS (xd `min` yd) (xf `zip` yf)
</code></pre>

<p><code>pure</code> produces a constant value going forever.
<code>(&lt;*&gt;)</code> applies functions to corresponding arguments, running out with the shorter.</p>

<pre><code>instance (Ord t, Bounded t) =&gt; Applicative ((:-&gt;#) t) where
    pure a = FS maxBound (const a)
    (&lt;*&gt;)  = zipWith ($)
</code></pre>

<h4><code>Copointed</code> and <code>Comonad</code></h4>

<p><code>extract</code> pulls out the initial value (like <code>head</code>).</p>

<pre><code>instance Num t =&gt; Copointed ((:-&gt;#) t) where
    extract (FS _ f) = f 0
</code></pre>

<p><code>duplicate</code> acts like <code>tails</code>.
The generated segments are progressively <code>drop</code>ped versions of the original segment.</p>

<pre><code>instance Num t =&gt; Comonad ((:-&gt;#) t) where
    duplicate s = FS (length s) (flip drop s)
</code></pre>

<h4><code>Monad</code></h4>

<p>I don&#8217;t know if there is a monad instance for <code>((:-&gt;#) t)</code>.
Simple diagonalization doesn&#8217;t work for <code>join</code>, since the <em>n<sup>th</sup></em> segment might be shorter than <em>n</em>.</p>

<h3>What&#8217;s ahead?</h3>

<p>The instances above remind me strongly of type class instances for several common types.
Another post will tease out some patterns and reconstruct <code>(t :-&gt;# a)</code> out of standard components, so that most of the code above can disappear.</p>

<p>Another post incorporates <code>(t :-&gt;# a)</code> into a new model and implementation of relative-time, comonadic FRP.</p>
]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/sequences-streams-and-segments/feed</wfw:commentRss>
		<slash:comments>13</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2Fsequences-streams-and-segments&amp;language=en_GB&amp;category=text&amp;title=Sequences%2C+streams%2C+and+segments&amp;description=What+kind+of+thing+is+a+movie%3F+Or+a+song%3F+Or+a+trajectory+from+point+A+to+point+B%3F+If+you%26%238217%3Bre+a+computer+programmer%2Fprogrammee%2C+you+might+say+that+such+things...&amp;tags=applicative+functor%2Ccomonad%2CFRP%2Cfunction%2Cfunctional+reactive+programming%2Cfunctor%2Cmonoid%2Csequence%2Cblog" type="text/html" />
	</item>
		<item>
		<title>Early inspirations and new directions in functional reactive programming</title>
		<link>http://conal.net/blog/posts/early-inspirations-and-new-directions-in-functional-reactive-programming</link>
		<comments>http://conal.net/blog/posts/early-inspirations-and-new-directions-in-functional-reactive-programming#comments</comments>
		<pubDate>Mon, 01 Dec 2008 07:24:09 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[history]]></category>
		<category><![CDATA[inspiration]]></category>
		<category><![CDATA[relative time]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=66</guid>
		<description><![CDATA[In 1989, I was a grad student nearing completion at Carnegie Mellon. Kavi Arya gave a talk on &#8220;functional animation&#8221;, using lazy lists. I was awe-struck with the elegance and power of that simple idea, and I&#8217;ve been hooked on functional animation ever since. At the end of Kavi&#8217;s talk, John Reynolds offered a remark, [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: Early inspirations and new directions in functional reactive programming

Tags: history, inspiration, relative time, FRP, functional reactive programming

URL: http://conal.net/blog/posts/early-inspirations-and-new-directions-in-functional-reactive-programming/

-->

<!-- references -->

<!-- teaser -->

<p>In 1989, I was a grad student nearing completion at Carnegie Mellon.
Kavi Arya gave a talk on &#8220;functional animation&#8221;, using lazy lists.
I was awe-struck with the elegance and power of that simple idea, and I&#8217;ve been hooked on functional animation ever since.</p>

<p>At the end of Kavi&#8217;s talk, <a href="http://www.cs.cmu.edu/~jcr/">John Reynolds</a> offered a remark, roughly as follows:</p>

<blockquote>
  <p>You can think of sequences as functions from the natural numbers.
  Have you thought about functions from the reals instead?
  Doing so might help with the awkwardness with interpolation.</p>
</blockquote>

<p>I knew at once I&#8217;d heard a wonderful idea, so I went back to my office, wrote it down, and promised myself that I wouldn&#8217;t think about Kavi&#8217;s work and John&#8217;s insight until my dissertation was done.
Otherwise, I might never have finished.
A year or so later, at Sun Microsystems, I started working on functional animation, which then grew into functional reactive programming (FRP).</p>

<p>In the dozens of variations on FRP I&#8217;ve played with over the last 15 years, John&#8217;s refinement of Kavi&#8217;s idea has always been the heart of the matter for me.</p>

<p>Lately, I&#8217;ve been rethinking FRP yet again, and I&#8217;m very excited about where it&#8217;s leading me.</p>

<p>The semantic model of FRP has been based on behaviors of infinite duration and, mostly, on absolute time.
Recently I realized that some problems of non-modular interaction could be elegantly addressed by switching to finite duration and relative time, and by adopting a <em>co-monadic</em> approach.</p>

<p>An upcoming post will explore these ideas.</p>

<!--
**Edits**:

* 2008-02-09: just fiddling around
-->

<!-- without a comment or something here, the last item above becomes a paragraph -->
]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/early-inspirations-and-new-directions-in-functional-reactive-programming/feed</wfw:commentRss>
		<slash:comments>14</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2Fearly-inspirations-and-new-directions-in-functional-reactive-programming&amp;language=en_GB&amp;category=text&amp;title=Early+inspirations+and+new+directions+in+functional+reactive+programming&amp;description=In+1989%2C+I+was+a+grad+student+nearing+completion+at+Carnegie+Mellon.+Kavi+Arya+gave+a+talk+on+%26%238220%3Bfunctional+animation%26%238221%3B%2C+using+lazy+lists.+I+was+awe-struck+with+the+elegance+and...&amp;tags=FRP%2Cfunctional+reactive+programming%2Chistory%2Cinspiration%2Crelative+time%2Cblog" type="text/html" />
	</item>
		<item>
		<title>Simplifying semantics with type class morphisms</title>
		<link>http://conal.net/blog/posts/simplifying-semantics-with-type-class-morphisms</link>
		<comments>http://conal.net/blog/posts/simplifying-semantics-with-type-class-morphisms#comments</comments>
		<pubDate>Wed, 09 Apr 2008 04:22:35 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[applicative functor]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[functor]]></category>
		<category><![CDATA[monad]]></category>
		<category><![CDATA[monoid]]></category>
		<category><![CDATA[semantics]]></category>
		<category><![CDATA[type class]]></category>
		<category><![CDATA[type class morphism]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=23</guid>
		<description><![CDATA[When I first started playing with functional reactivity in Fran and its predecessors, I didn&#8217;t realize that much of the functionality of events and reactive behaviors could be packaged via standard type classes. Then Conor McBride &#38; Ross Paterson introduced us to applicative functors, and I remembered using that pattern to reduce all of the [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: Simplifying semantics with type class morphisms

Tags: type class, functor, applicative functor, monad, monoid, type class morphism, semantics, FRP, functional reactive programming

URL: http://conal.net/blog/posts/simplifying-semantics-with-type-class-morphisms

-->

<!-- references -->

<!-- teaser -->

<p>When I first started playing with functional reactivity in Fran and its predecessors, I didn&#8217;t realize that much of the functionality of events and reactive behaviors could be packaged via standard type classes.
Then Conor McBride &amp; Ross Paterson introduced us to <em><a href="http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Applicative.html" title="Documentation for Control.Applicative: applicative functors">applicative functors</a></em>, and I remembered using that pattern to reduce all of the lifting operators in Fran to just two, which correspond to <code>pure</code> and <code>(&lt;*&gt;)</code> in the <code>Applicative</code> class.
So, in working on a new library for functional reactive programming (FRP), I thought I&#8217;d modernize the interface to use standard type classes as much as possible.</p>

<p>While spelling out a precise (denotational) semantics for the FRP instances of these classes, I noticed a lovely recurring pattern:</p>

<blockquote>
  <p>The meaning of each method corresponds to the same method for the meaning.</p>
</blockquote>

<p>In this post, I&#8217;ll give some examples of this principle and muse a bit over its usefulness.
For more details, see the paper <em><a href="http://conal.net/blog/posts/simply-efficient-functional-reactivity/" title="Blog post: &quot;Simply efficient functional reactivity&quot;">Simply efficient functional reactivity</a></em>.
Another post will start exploring type class morphisms and type composition, and ask questions I&#8217;m wondering about.</p>

<!--
**Edits**:

* 2008-02-09: just fiddling around
-->

<!-- without a comment or something here, the last item above becomes a paragraph -->

<p><span id="more-23"></span></p>

<h3>Behaviors</h3>

<p>The meaning of a (reactive) behavior is a function from time:</p>

<pre><code>type B a = Time -&gt; a

at :: Behavior a -&gt; B a
</code></pre>

<p>So the semantic function, <code>at</code>, maps from the <code>Behavior</code> type (for use in FRP programs) to the <code>B</code> type (for understanding FRP programs).</p>

<p>As a simple example, the meaning of the behavior <code>time</code> is the identity function:</p>

<pre><code>at time == id
</code></pre>

<h4>Functor</h4>

<p>Given <code>b :: Behavior a</code> and a function <code>f :: a -&gt; b</code>, we can apply <code>f</code> to the value of <code>b</code> at every moment in (infinite and continuous) time.
This operation corresponds to the <code>Functor</code> method <code>fmap</code>, so</p>

<pre><code>instance Functor Behavior where ...
</code></pre>

<p>The informal description of <code>fmap</code> on behaviors translates to a formal definition of its semantics:</p>

<pre><code>  fmap f b `at` t == f (b `at` t)
</code></pre>

<p>Equivalently,</p>

<pre><code>  at (fmap f b) ==  t -&gt; f (b `at` t)
                == f . ( t -&gt; b `at` t)
                == f . at b
</code></pre>

<p>Now here&#8217;s the fun part.
While <code>Behavior</code> is a functor, <em>so is its meaning</em>:</p>

<pre><code>instance Functor ((-&gt;) t) where fmap = (.)
</code></pre>

<p>So, replacing <code>f . at b</code> with <code>fmap f (at b)</code> above,</p>

<pre><code>  at (fmap f b) == fmap f (at b)
</code></pre>

<p>which can also be written</p>

<pre><code>  at . fmap f == fmap f . at
</code></pre>

<p>Keep in mind that the <code>fmap</code> on the left is on behaviors, while the one on the right is on functions (of time).</p>

<p>This last equation can also be written as a simple square commutative diagram and is sometimes expressed by saying that <code>at</code> is a &#8220;natural transformation&#8221; or &#8220;morphism on functors&#8221; [<a href="http://books.google.com/books?id=eBvhyc4z8HQC" title="Book: &quot;Categories for the Working Mathematician&quot; by Saunders Mac Lane">Categories for the Working Mathematician</a>].
For consistency with similar properties on other type classes, I suggest &#8220;functor morphism&#8221; as a synonym for natural transformation.</p>

<p>The <a href="http://www.haskell.org/haskellwiki/Category_theory/Natural_transformation" title="Haskell wiki page: &quot;Haskell wiki page on natural transformations&quot;">Haskell wiki page on natural transformations</a> shows the commutative diagram and gives <code>maybeToList</code> as another example.</p>

<h4>Applicative functor</h4>

<p>The <code>fmap</code> method applies a static (not time-varying) function to a dynamic (time-varying) argument.
A more general operation applies a dynamic function to a dynamic argument.
Also useful is promoting a static value to a dynamic one.
These two operations correspond to <code>(&lt;*&gt;)</code> and <code>pure</code> for <a href="http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Applicative.html" title="Documentation for Control.Applicative: applicative functors">applicative functors</a>:</p>

<pre><code>infixl 4 &lt;*&gt;
class Functor f =&gt; Applicative f where
  pure  :: a -&gt; f a
  (&lt;*&gt;) :: f (a-&gt;b) -&gt; f a -&gt; f b
</code></pre>

<p>where, e.g., <code>f == Behavior</code>.</p>

<p>From these two methods, all of the n-ary lifting functions follow.
For instance,</p>

<pre><code>liftA3 :: Applicative f =&gt;
          (  a -&gt;   b -&gt;   c -&gt;   d)
       -&gt;  f a -&gt; f b -&gt; f c -&gt; f d
liftA3 h fa fb fc = pure h &lt;*&gt; fa &lt;*&gt; fb &lt;*&gt; fc
</code></pre>

<p>Or use <code>fmap h fa</code> in place of <code>pure h &lt;*&gt; fa</code>.
For prettier code, <code>(&lt;$&gt;)</code> (left infix) is synonymous with <code>fmap</code>.</p>

<p>Now, what about semantics?
Applying a dynamic function <code>fb</code> to a dynamic argument <code>xb</code> gives a dynamic result, whose value at time <code>t</code> is the value of <code>fb</code> at <code>t</code>, applied to the value of <code>xb</code> at <code>t</code>.</p>

<pre><code>at (fb &lt;*&gt; xb) ==  t -&gt; (fb `at` t) (xb `at` t)
</code></pre>

<p>The <code>(&lt;*&gt;)</code> operator is the heart of FRP&#8217;s concurrency model, which is determinate, synchronous, and continuous.</p>

<p>Promoting a static value yields a constant behavior:</p>

<pre><code>at (pure a) ==  t -&gt; a
            == const a
</code></pre>

<p>As with <code>Functor</code>, let&#8217;s look at the <code>Applicative</code> instance of functions (the meaning of behaviors):</p>

<pre><code>instance Applicative ((-&gt;) t) where
  pure a    = const a
  hf &lt;*&gt; xf = \ t -&gt; (hf t) (xf t)
</code></pre>

<p>Wow &#8212; these two definitions look a lot like the meanings given above for <code>pure</code> and <code>(&lt;*&gt;)</code> on behaviors.
And sure enough, we can use the function instance to simplify these semantic definitions:</p>

<pre><code>at (pure a)    == pure a
at (fb &lt;*&gt; xb) == at fb &lt;*&gt; at xb
</code></pre>

<p>Thus the semantic function distributes over the <code>Applicative</code> methods.
In other words, the meaning of each method is the method on the meaning.
I don&#8217;t know of any standard term (like &#8220;natural transformation&#8221;) for this relationship between <code>at</code> and <code>pure</code>/<code>(&lt;*&gt;)</code>.
I suggest calling <code>at</code> an &#8220;applicative functor morphism&#8221;.</p>

<h4>Monad</h4>

<p>Monad morphisms are a bit trickier, due to the types.
There are two equivalent forms of the definition of a monad morphism, depending on whether you use <code>join</code> or <code>(&gt;&gt;=)</code>.
In the <code>join</code> form (e.g., in <a href="http://citeseer.ist.psu.edu/wadler92comprehending.html" title="Paper: &quot;Comprehending Monads&quot;">Comprehending Monads</a>, section 6), for monads <code>m</code> and <code>n</code>, the function <code>nu :: forall a. m a -&gt; n a</code> is a monad morphism if</p>

<pre><code>nu . join == join . nu . fmap nu
</code></pre>

<p>where</p>

<pre><code>join :: Monad m =&gt; m (m a) -&gt; m a
</code></pre>

<p>For behavior semantics, <code>m == Behavior</code>, <code>n == B == (-&gt;) Time</code>, and <code>nu == at</code>.</p>

<p>Then <code>at</code> is also a monad morphism if</p>

<pre><code>at (return a) == return a
at (join bb)  == join (at (fmap at bb))
</code></pre>

<p>And, since for functions <code>f</code>,</p>

<pre><code>fmap h f == h . f
join f   == \ t -&gt; f t t
</code></pre>

<p>the second condition is</p>

<pre><code>at (join bb) == join (at (fmap at bb))
             == \ t -&gt; at (fmap at bb) t t
             == \ t -&gt; at (at bb t) t
             == \ t -&gt; (bb `at` t) `at` t
</code></pre>

<p>So sampling <code>join bb</code> at <code>t</code> means sampling <code>bb</code> at <code>t</code> to get a behavior <code>b</code>, which is also sampled at <code>t</code>.
That&#8217;s exactly what I&#8217;d guess <code>join</code> to mean on behaviors.</p>

<p><em>Note:</em> the FRP implementation described in <em><a href="http://conal.net/blog/posts/simply-efficient-functional-reactivity/" title="Blog post: &quot;Simply efficient functional reactivity&quot;">Simply efficient functional reactivity</a></em> <em>does not</em> include a <code>Monad</code> instance for <code>Behavior</code>, because I don&#8217;t see how to implement one with the hybrid data-/demand-driven <code>Behavior</code> implementation.
However, the closely related but less expressive type, <code>Reactive</code>, has the same semantic model as <code>Behavior</code>. <code>Reactive</code> does have a <code>Monad</code> instance, and its semantic function (<code>rats</code>) <em>is</em> a monad morphism.</p>

<h4>Other examples</h4>

<p><a href="http://conal.net/blog/posts/simply-efficient-functional-reactivity/" title="Blog post: &quot;Simply efficient functional reactivity&quot;">The <em>Simply</em> paper</a> contains several more examples of type class morphisms:</p>

<ul>
<li><a href="http://conal.net/blog/posts/reactive-values-from-the-future/" title="Blog post: &quot;Reactive values from the future&quot;">Reactive values</a>, time functions, and <a href="http://conal.net/blog/posts/future-values/" title="Blog post: &quot;Future values&quot;">future values</a> are also morphisms on <code>Functor</code>, <code>Applicative</code>, and <code>Monad</code>.</li>
<li><em>Improving values</em> are morphisms on <code>Ord</code>.</li>
</ul>

<p>The paper also includes a significant <em>non-example</em>, namely events.
The semantics I gave for <code>Event a</code> is a time-ordered list of time/value pairs.  However, the semantic function (<code>occs</code>) <em>is not</em> a <code>Monoid</code> morphism, because</p>

<pre><code>occs (e `mappend` e') == occs e `merge` occs e'
</code></pre>

<p>and <code>merge</code> is not <code>(++)</code>, which is <code>mappend</code> on lists.</p>
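<p>Such a <code>merge</code> interleaves occurrences by time rather than appending. A minimal sketch, assuming occurrences are time-ascending <code>(Time, a)</code> pairs:</p>

<pre><code>merge :: Ord t =&gt; [(t, a)] -&gt; [(t, a)] -&gt; [(t, a)]
merge [] qs = qs
merge ps [] = ps
merge ps@(p@(s, _) : ps') qs@(q@(t, _) : qs')
  | s &lt;= t    = p : merge ps' qs
  | otherwise = q : merge ps qs'
</code></pre>

<p>Unlike <code>(++)</code>, this draws from whichever list has the earlier next occurrence, so the result stays time-ordered; laziness lets it work on infinite occurrence lists as well.</p>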

<h4>Why care about type class morphisms?</h4>

<p>I want my library&#8217;s users to think of behaviors and future values as being their semantic models (functions of time and time/value pairs).
Why?
Because these denotational models are simple and precise and have simple and useful formal properties.
Those properties allow library users to program with confidence, and allow library providers to make radical changes in representation and implementation (even from demand-driven to data-driven) without breaking client programs.</p>

<p>When I think of a behavior as a function of time, I&#8217;d like it to act like a function of time, hence <code>Functor</code>, <code>Applicative</code>, and <code>Monad</code>.
And if it does implement any classes in common with functions, then it had better agree with the function instances of those classes.
Otherwise, user expectations will be mistaken, and the illusion is broken.</p>

<p>I&#8217;d love to hear about other examples of type class morphisms, particularly for <code>Applicative</code> and <code>Monad</code>, as well as thoughts on their usefulness.</p>
]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/simplifying-semantics-with-type-class-morphisms/feed</wfw:commentRss>
		<slash:comments>18</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2Fsimplifying-semantics-with-type-class-morphisms&amp;language=en_GB&amp;category=text&amp;title=Simplifying+semantics+with+type+class+morphisms&amp;description=When+I+first+started+playing+with+functional+reactivity+in+Fran+and+its+predecessors%2C+I+didn%26%238217%3Bt+realize+that+much+of+the+functionality+of+events+and+reactive+behaviors+could+be+packaged+via...&amp;tags=applicative+functor%2CFRP%2Cfunctional+reactive+programming%2Cfunctor%2Cmonad%2Cmonoid%2Csemantics%2Ctype+class%2Ctype+class+morphism%2Cblog" type="text/html" />
	</item>
	</channel>
</rss>
