<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Conal Elliott &#187; caching</title>
	<atom:link href="http://conal.net/blog/tag/caching/feed" rel="self" type="application/rss+xml" />
	<link>http://conal.net/blog</link>
	<description>Inspirations &#38; experiments, mainly about denotative/functional programming in Haskell</description>
	<lastBuildDate>Thu, 25 Jul 2019 18:15:11 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.17</generator>
	<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2F&amp;language=en_US&amp;category=text&amp;title=Conal+Elliott&amp;description=Inspirations+%26amp%3B+experiments%2C+mainly+about+denotative%2Ffunctional+programming+in+Haskell&amp;tags=blog" type="text/html" />
	<item>
		<title>Another angle on functional future values</title>
		<link>http://conal.net/blog/posts/another-angle-on-functional-future-values</link>
		<comments>http://conal.net/blog/posts/another-angle-on-functional-future-values#comments</comments>
		<pubDate>Mon, 05 Jan 2009 04:01:05 +0000</pubDate>
		<dc:creator><![CDATA[Conal]]></dc:creator>
				<category><![CDATA[Functional programming]]></category>
		<category><![CDATA[caching]]></category>
		<category><![CDATA[FRP]]></category>
		<category><![CDATA[functional reactive programming]]></category>
		<category><![CDATA[future value]]></category>
		<category><![CDATA[referential transparency]]></category>
		<category><![CDATA[type class morphism]]></category>
		<category><![CDATA[type composition]]></category>

		<guid isPermaLink="false">http://conal.net/blog/?p=73</guid>
		<description><![CDATA[An earlier post introduced functional future values, which are values that cannot be known until the future, but can be manipulated in the present. That post presented a simple denotational semantics of future values as time/value pairs. With a little care in the definition of Time (using the Max monoid), the instances of Functor, Applicative, [&#8230;]]]></description>
				<content:encoded><![CDATA[<!-- 

Title: Another angle on functional future values

Tags: future value, type class morphism, type composition, caching, referential transparency, FRP, functional reactive programming

URL: http://conal.net/blog/posts/another-angle-on-functional-future-values/

-->

<!-- references -->

<!-- teaser -->

<p>An earlier post introduced functional <em><a href="http://conal.net/blog/posts/future-values/" title="blog post">future values</a></em>, which are values that cannot be known until the future, but can be manipulated in the present.
That post presented a simple denotational semantics of future values as time/value pairs.
With a little care in the definition of <code>Time</code> (using the <a href="http://hackage.haskell.org/packages/archive/reactive/latest/doc/html/Data-Max.html" title="module documentation"><code>Max</code> monoid</a>), the instances of <code>Functor</code>, <code>Applicative</code>, and <code>Monad</code> are all derived automatically.</p>
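<p>To make the derivation concrete: with time as a <code>Max</code>-style monoid, the standard writer-style instances for pairs give exactly the intended semantics, with a combined future occurring at the later of its component times. A minimal sketch (using <code>Max Int</code> directly, without the post&#8217;s <code>Time</code> wrapper):</p>

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Data.Semigroup (Max (..))

-- A future is a (time, value) pair; the instances come for free from
-- the standard instances for pairs over a monoid (here Max t).
newtype FutureG t a = Future (Max t, a)
  deriving (Eq, Show, Functor, Applicative, Monad)

-- Combining two futures with <*> occurs at the later (max) time:
exF :: FutureG Int Int
exF = (+) <$> Future (Max 3, 1) <*> Future (Max 7, 2)
```

<p>Here <code>exF</code> equals <code>Future (Max 7, 3)</code>: the sum is known only once both inputs are, i.e., at time 7.</p>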

<p>A follow-up post gave an implementation of <em><a href="http://conal.net/blog/posts/future-values-via-multi-threading/" title="blog post">Future values via multi threading</a></em>.
Unfortunately, that implementation did not necessarily satisfy the semantics, because it allowed the nondeterminism of thread scheduling to leak through.
Although the implementation is usually correct, I wasn&#8217;t satisfied.</p>

<p>After a while, I hit upon an idea that really tickled me.
My original simple semantics could indeed serve as a correct and workable implementation if I used a subtler form of time that could reveal partial information.
Implementing this subtler form of time turned out to be quite tricky, and was my original motivation for the <code>unamb</code> operator described in the paper <em><a href="http://conal.net/papers/push-pull-frp/" title="Paper">Push-pull functional reactive programming</a></em> and the post <em><a href="http://conal.net/blog/posts/functional-concurrency-with-unambiguous-choice/" title="blog post">Functional concurrency with unambiguous choice</a></em>.</p>

<p>It took me several days of doodling, pacing outside, and talking to myself before the idea for <code>unamb</code> broke through.
Like many of my favorite ideas, it&#8217;s simple and obvious in retrospect: to remove the ambiguity of nondeterministic choice (as in the <code>amb</code> operator), restrict its use to values that are equal when non-bottom.
Whenever we have two different methods of answering the same question (or possibly failing), we can use <code>unamb</code> to try them both.
Failures (errors or non-termination) are no problem in this context.
A more powerful variation on <code>unamb</code> is the least upper bound operator <code>lub</code>, as described in <em><a href="http://conal.net/blog/posts/merging-partial-values/" title="blog post: &quot;Merging partial values&quot;">Merging partial values</a></em>.</p>
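<p>To give a flavor of the idea, here is a deliberately naive sketch of <code>unamb</code>. It is not the real implementation; in particular it has exactly the cleanup problem discussed below, since it does not recursively terminate threads spawned by nested races.</p>

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Control.Concurrent (forkIO, killThread)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Exception (SomeException, evaluate, try)
import System.IO.Unsafe (unsafePerformIO)

-- Naive unambiguous choice: race the two arguments to weak head
-- normal form and take whichever finishes first.  This is only
-- correct under the promise that the arguments are equal whenever
-- both are non-bottom.
unamb :: a -> a -> a
unamb x y = unsafePerformIO $ do
  v <- newEmptyMVar
  let eval a = do
        r <- try (evaluate a)
        case r of
          Right a'                  -> putMVar v a'
          Left (_ :: SomeException) -> pure ()  -- bottom: let the other side win
  t1 <- forkIO (eval x)
  t2 <- forkIO (eval y)
  r <- takeMVar v
  killThread t1
  killThread t2
  pure r
```

<p>With this sketch, <code>undefined `unamb` 42</code> yields <code>42</code>: the erroring computation loses the race and is simply ignored.</p>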

<p>I&#8217;ve been having trouble with the <code>unamb</code> implementation.
When two (compatible) computations race, the loser gets killed so as to free up cycles that are no longer needed.
My first few implementations, however, did not recursively terminate <em>other</em> threads spawned in service of abandoned computations (from nested use of <code>unamb</code>).
I raised this problem in <em><a href="http://conal.net/blog/posts/smarter-termination-for-thread-racing/" title="blog post">Smarter termination for thread racing</a></em>, which suggested some better definitions.
In the course of several helpful reader comments, some problems with my definitions were addressed, particularly in regard to blocking and unblocking exceptions.
None of these definitions so far has done the trick reliably, and now it looks like there is a bug in the GHC run-time system.
I hope the bug (if there is one) will be fixed soon, because I&#8217;m seeing more &amp; more how <code>unamb</code> and <code>lub</code> can make functional programming even more modular (just as laziness does, as explained by John Hughes in <em><a href="http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html" title="Paper by John Hughes">Why Functional Programming Matters</a></em>).</p>

<p>I started playing with future values and unambiguous choice as a way to implement <a href="http://haskell.org/haskellwiki/Reactive" title="Wiki page for the Reactive library">Reactive</a>, a library for functional reactive programming (FRP).
(See <em><a href="http://conal.net/blog/posts/reactive-values-from-the-future/" title="blog post">Reactive values from the future</a></em> and <em><a href="http://conal.net/papers/push-pull-frp/" title="Paper">Push-pull functional reactive programming</a></em>.)
Over the last few days, I&#8217;ve given some thought to ways to implement future values without unambiguous choice.
This post describes one such alternative.</p>

<p><strong>Edits</strong>:</p>

<ul>
<li>2010-08-25: Replaced references to <em><a href="http://conal.net/papers/simply-reactive" title="Paper: &quot;Simply efficient functional reactivity (superceded)&quot;">Simply efficient functional reactivity</a></em> with <em><a href="http://conal.net/papers/push-pull-frp/" title="Paper">Push-pull functional reactive programming</a></em>.
The latter paper supersedes the former.</li>
<li>2010-08-25: Fixed the <code>unFuture</code> field of FutureG to be <code>TryFuture</code>.</li>
</ul>

<!-- without a comment or something here, the last item above becomes a paragraph -->

<p><span id="more-73"></span></p>

<h3>Futures, presently</h3>

<p>The current <code>Future</code> type is just a time and a value, wrapped in a <code>newtype</code>:</p>

<pre><code>newtype FutureG t a = Future (Time t, a)
  deriving (Functor, Applicative, Monad)
</code></pre>

<p>The <code>Time</code> type is defined via the <a href="http://hackage.haskell.org/packages/archive/reactive/latest/doc/html/Data-Max.html" title="module documentation"><code>Max</code> monoid</a>.
The derived instances have exactly the intended meaning for futures, as explained in the post <em><a href="http://conal.net/blog/posts/future-values/" title="blog post">Future values</a></em> and the paper <em><a href="http://conal.net/papers/push-pull-frp/" title="Paper">Push-pull functional reactive programming</a></em>.
The &#8220;G&#8221; in the name <code>FutureG</code> indicates that the type is generalized over the choice of time type.</p>

<p>Note that <code>Future</code> is parameterized over both time and value.
Originally, I intended this definition as a denotational semantics of future values, but I realized that it could be a workable implementation with a lazy enough <code>t</code>.
In particular, the times have to reveal lower bounds and allow comparisons before they&#8217;re fully known.</p>

<p>Warren Burton explored an applicable notion in the 1980s, which he called &#8220;improving values&#8221;: values with a concurrent implementation but a deterministic functional semantics.
(See the paper <em><a href="http://journals.cambridge.org/action/displayAbstract?aid=1287720" title="paper by Warren Burton">Encapsulating nondeterminacy in an abstract data type with deterministic semantics</a></em> or the paper <em><a href="http://portal.acm.org/citation.cfm?id=99402" title="paper by Warren Burton">Indeterminate behavior with determinate semantics in parallel programs</a></em>.
I haven&#8217;t found a freely-available online copy of either.)
I adapted Warren&#8217;s idea and gave it an implementation via <code>unamb</code>.</p>

<p>Another operation finds the earlier of two futures.
This operation has an identity and is associative, so I wrapped it up as a <code>Monoid</code> instance:</p>

<pre><code>instance (Ord t, Bounded t) =&gt; Monoid (FutureG t a) where
  mempty = Future (maxBound, undefined)
  Future (s,a) `mappend` Future (t,b) =
    Future (s `min` t, if s &lt;= t then a else b)
</code></pre>

<p>This <code>mappend</code> definition could be written more simply:</p>

<pre><code>  u@(Future (t,_)) `mappend` u'@(Future (t',_)) =
    if t &lt;= t' then u else u'
</code></pre>

<p>However, the less simple version has more potential for laziness.
The time type might allow yielding partial information about a minimum before both of its arguments are fully known, which is the case with improving values.</p>

<h3>Futures as functions</h3>

<p>The <a href="http://haskell.org/haskellwiki/Reactive" title="Wiki page for the Reactive library">Reactive</a> library uses futures to define and implement reactivity, i.e., behaviors specified piecewise.
Simplifying away the notion of <em>events</em> for now,</p>

<pre><code>until :: BehaviorG t a -&gt; FutureG t (BehaviorG t a) -&gt; BehaviorG t a
</code></pre>

<p>The semantics (but not implementation) of <code>BehaviorG</code> is given by</p>

<pre><code>at :: BehaviorG t a -&gt; (t -&gt; a)
</code></pre>

<p>The semantics of <code>until</code>:</p>

<pre><code>(b `until` Future (t',b')) `at` t = b'' `at` t
 where
   b'' = if t &lt;= t' then b else b'
</code></pre>
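<p>Spelled out as a runnable sketch of the semantic model (with behaviors as bare functions of time and the <code>Max</code>/<code>newtype</code> wrappers elided):</p>

```haskell
-- Semantic model only: a behavior is a function of time, and a
-- future is a bare (time, value) pair.
type Behavior t a = t -> a

data Future t a = Future t a

-- b `until'` f follows b up to (and including) the future's time,
-- and follows the future's behavior afterwards.
until' :: Ord t => Behavior t a -> Future t (Behavior t a) -> Behavior t a
until' b (Future t' b') = \t -> if t <= t' then b t else b' t
```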

<p>FRP (multi-occurrence) events are then built on top of future values, and reactivity on top of <code>until</code>.</p>

<p>The semantics of <code>until</code> shows what information we need from futures: given a time <code>t</code>, we need to know whether <code>t</code> is later than the future&#8217;s time and, <em>if so</em>, what the future&#8217;s value is.
For other purposes, we&#8217;ll also want to know the future&#8217;s time, but again, only once we&#8217;re past that time.
We might, therefore, represent futures as a function that gives exactly this information.
I&#8217;ll call this function representation &#8220;function futures&#8221; and use the prefix &#8220;S&#8221; to distinguish the original &#8220;simple&#8221; futures from these function futures.</p>

<pre><code>type TryFuture t a = Time t -&gt; Maybe (S.FutureG t a)

tryFuture :: F.FutureG t a -&gt; TryFuture t a
</code></pre>

<p>Given a probe time, <code>tryFuture</code> gives <code>Nothing</code> if the time is before or at the future&#8217;s time, or <code>Just u</code> otherwise, where <code>u</code> is the simple future.</p>

<p>We could represent <code>F.FutureG</code> simply as <code>TryFuture</code>:</p>

<pre><code>type F.FutureG = TryFuture  -- first try
</code></pre>

<p>But then we&#8217;d be stuck with the <code>Functor</code> and <code>Applicative</code> instances for functions instead of futures.
Adding a <code>newtype</code> fixes that problem:</p>

<pre><code>newtype FutureG t a = Future { unFuture :: TryFuture t a } -- second try
</code></pre>

<p>With this representation we can easily construct and try out function futures:</p>

<pre><code>future :: TryFuture t a -&gt; FutureG t a
future = Future

tryFuture :: FutureG t a -&gt; TryFuture t a
tryFuture = unFuture
</code></pre>

<p>I like to define helpers for working inside representations:</p>

<pre><code>inFuture  :: (TryFuture t a -&gt; TryFuture t' a')
          -&gt; (FutureG   t a -&gt; FutureG   t' a')

inFuture2 :: (TryFuture t a -&gt; TryFuture t' a' -&gt; TryFuture t'' a'')
          -&gt; (FutureG   t a -&gt; FutureG   t' a' -&gt; FutureG   t'' a'')
</code></pre>

<p>The definitions of these helpers are very simple with the ideas from <em><a href="http://conal.net/blog/posts/prettier-functions-for-wrapping-and-wrapping/" title="blog post">Prettier functions for wrapping and wrapping</a></em> and a lovely notation from Matt Hellige&#8217;s <em><a href="http://matt.immute.net/content/pointless-fun" title="blog post by Matt Hellige">Pointless fun</a></em>.</p>

<pre><code>inFuture  = unFuture ~&gt; Future

inFuture2 = unFuture ~&gt; inFuture 

(~&gt;) :: (a' -&gt; a) -&gt; (b -&gt; b') -&gt; ((a -&gt; b) -&gt; (a' -&gt; b'))
g ~&gt; h = result h . argument g
</code></pre>
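<p>For reference, <code>result</code> and <code>argument</code> (from the semantic-editor-combinators vocabulary) are just composition on the output and input sides of a function, so <code>g ~&gt; h</code> pre-processes the argument with <code>g</code> and post-processes the result with <code>h</code>:</p>

```haskell
-- result and argument are composition on the two sides of a function:
result :: (b -> b') -> ((a -> b) -> (a -> b'))
result = (.)

argument :: (a' -> a) -> ((a -> b) -> (a' -> b))
argument = flip (.)

-- g ~> h transforms a function by pre-applying g and post-applying h.
(~>) :: (a' -> a) -> (b -> b') -> ((a -> b) -> (a' -> b'))
g ~> h = result h . argument g
```

<p>For instance, <code>(read ~&gt; show) (+1)</code> maps the string <code>"41"</code> to <code>"42"</code>.</p>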

<p>These helpers make for some easy definitions in the style of <em><a href="http://conal.net/blog/posts/semantic-editor-combinators/" title="blog post">Semantic editor combinators</a></em>:</p>

<pre><code>instance Functor (FutureG t) where
  fmap = inFuture.fmap.fmap.fmap

instance (Bounded t, Ord t) =&gt; Applicative (FutureG t) where
  pure  = Future . pure.pure.pure
  (&lt;*&gt;) = (inFuture2.liftA2.liftA2) (&lt;*&gt;)
</code></pre>
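<p>To see why three <code>fmap</code>s: a function future has three functor layers, namely <code>(-&gt;) (Time t)</code>, then <code>Maybe</code>, then the simple future. A small stand-in, using <code>(,) Int</code> for the innermost layer:</p>

```haskell
-- Three nested functors: (->) Int (the probe time), Maybe, and
-- (,) Int (standing in for the simple-future pair).
probe :: Int -> Maybe (Int, Int)
probe t = if t > 3 then Just (3, 10) else Nothing

-- fmap.fmap.fmap reaches through all three layers to the value.
probe' :: Int -> Maybe (Int, Int)
probe' = (fmap . fmap . fmap) (+ 1) probe
```

<p>So <code>probe' 5</code> yields <code>Just (3, 11)</code>, while <code>probe' 2</code> is still <code>Nothing</code>.</p>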

<h4>Type composition</h4>

<p>These <code>Functor</code> and <code>Applicative</code> instances (for <code>FutureG t</code>) may look mysterious, but they have a common and inevitable form.
Every type whose representation is the (semantic and representational) composition of three functors has this style of <code>Functor</code> instance, and similarly for <code>Applicative</code>.</p>

<p>Instead of repeating this common pattern, let&#8217;s make the type composition explicit, using <a href="http://hackage.haskell.org/packages/archive/TypeCompose/0.6.3/doc/html/Control-Compose.html#2" title="module documentation"><code>Control.Compose</code></a> from the <a href="http://haskell.org/haskellwiki/TypeCompose" title="Wiki page for the TypeCompose library">TypeCompose</a> library:</p>

<pre><code>type FutureG t = (-&gt;) (Time t) :. Maybe :. S.FutureG t  -- actual definition
</code></pre>

<p>Now we can throw out <code>inFuture</code>, <code>inFuture2</code>, <code>(~&gt;)</code>, and the <code>Functor</code> and <code>Applicative</code> instances.
These instances follow from the general instances for type composition.</p>

<h4>Monoid</h4>

<p>The <code>Monoid</code> instance could also come automatically from type composition:</p>

<pre><code>instance Monoid (g (f a)) =&gt; Monoid ((g :. f) a) where
  { mempty = O mempty; mappend = inO2 mappend }
</code></pre>

<p>The <code>O</code> here is just the <code>newtype</code> constructor for <code>(:.)</code>, and the <code>inO2</code> function is similar to <code>inFuture2</code> above.</p>

<p>However, there is another often-useful <code>Monoid</code> instance:</p>

<pre><code>-- standard Monoid instance for Applicative applied to Monoid
instance (Applicative (g :. f), Monoid a) =&gt; Monoid ((g :. f) a) where
  { mempty = pure mempty; mappend = liftA2 mappend }
</code></pre>

<p>Because these two instances &#8220;overlap&#8221; and are both useful, neither one is declared in the general case.
Instead, specialized instances are declared where needed, e.g.,</p>

<pre><code>instance (Ord t, Bounded t) =&gt; Monoid (FutureG t a) where
  mempty  = (  O .  O ) mempty   -- or future mempty
  mappend = (inO2.inO2) mappend
</code></pre>

<p>How does the <code>Monoid</code> instance work?  Start with <code>mempty</code>.  Expanding:</p>

<pre><code>mempty

  == {- definition -} 

O (O mempty)

  == {- mempty on functions -}

O (O (const mempty))

  == {- mempty on Maybe -}

O (O (const Nothing))
</code></pre>

<p>So, given any probe time, the empty (never-occurring) future says that it does not occur before the probe time.</p>

<p>Next, <code>mappend</code>:</p>

<pre><code>O (O f) `mappend` O (O f')

  == {- mappend on FutureG -}

O (O (f `mappend` f'))

  == {- mappend on functions -}

O (O (\ t -&gt; f t `mappend` f' t))

  == {- mappend on Maybe -}

O (O (\ t -&gt; f t `mappendMb` f' t))
  where
    Nothing `mappendMb` mb'    = mb'
    mb `mappendMb` Nothing     = mb
    Just u `mappendMb` Just u' = Just (u `mappend` u')
</code></pre>

<p>The <code>mappend</code> in this last line is on simple futures, as defined above, examining the (now known) times and choosing the earlier future.
Previously, I took special care in that <code>mappend</code> definition to enable <code>min</code> to produce information before knowing whether <code>t &lt;= t'</code>.
However, with this new approach to futures, I expect to use simple (flat) times, so it could instead be</p>

<pre><code>u@(Future (s,_)) `mappend` u'@(Future (s',_)) = if s &lt;= s' then u else u'
</code></pre>

<p>or</p>

<pre><code>u `mappend` u' = if futTime u &lt;= futTime u' then u else u'

futTime (Future (t,_)) = t
</code></pre>

<p>or just</p>

<pre><code>mappend = minBy futTime
</code></pre>
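<p>Here <code>minBy</code> is an assumed helper (not a standard library function), presumably along the lines of:</p>

```haskell
-- Hypothetical helper: pick whichever argument has the smaller key.
minBy :: Ord b => (a -> b) -> a -> a -> a
minBy f u u' = if f u <= f u' then u else u'
```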

<p>How does <code>mappend</code> work on function futures?
Given a test time <code>t</code>, if both future times are at least <code>t</code>, then the combined future&#8217;s time is at least <code>t</code> (yielding <code>Nothing</code>).
If either future is before <code>t</code> and the other isn&#8217;t, then the combined future is the same as the one before <code>t</code>.
If both futures are before <code>t</code>, then the combined future is the earlier one.
Exactly the desired semantics!</p>
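<p>Note that <code>mappendMb</code> above is exactly the standard <code>Monoid</code> for <code>Maybe</code> over a semigroup: <code>Nothing</code> is the identity, and two <code>Just</code>s combine with the underlying <code>&lt;&gt;</code>. With <code>Min</code> standing in for &#8220;pick the earlier simple future&#8221;:</p>

```haskell
import Data.Semigroup (Min (..))

-- The Maybe monoid: Nothing is the identity, and Justs combine with
-- the underlying semigroup (here Min, i.e. the earlier time wins).
earliest :: Maybe (Min Int) -> Maybe (Min Int) -> Maybe (Min Int)
earliest = (<>)
```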

<h4>Relating function futures and simple futures</h4>

<p>The function-based representation of futures relates closely to the simple representation.
Let&#8217;s make this relationship explicit by defining mappings between them:</p>

<pre><code>sToF :: Ord t =&gt; S.FutureG t a -&gt; F.FutureG t a

fToS :: Ord t =&gt; F.FutureG t a -&gt; S.FutureG t a
</code></pre>

<p>The first one is easy:</p>

<pre><code>sToF u@(S.Future (t, _)) =
  future (\ t' -&gt; if t' &lt;= t then Nothing else Just u)
</code></pre>
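<p>As a self-contained sketch (with the <code>Max</code> wrapper elided):</p>

```haskell
-- Pared-down simple future: a bare (time, value) pair.
newtype SFuture t a = SFuture (t, a) deriving (Eq, Show)

type TryFuture t a = t -> Maybe (SFuture t a)

-- Probing at t' reveals the future only once t' is past its time.
sToF :: Ord t => SFuture t a -> TryFuture t a
sToF u@(SFuture (t, _)) t' = if t' <= t then Nothing else Just u
```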

<p>The reverse mapping, <code>fToS</code>, is trickier and is only defined on the image (range) of <code>sToF</code>.
I think it can be defined mathematically but not computationally.
There are two cases: either the function always returns <code>Nothing</code>, or there is at least one <code>t</code> for which it returns a <code>Just</code>.
If the former, then the simple future is <code>mempty</code>, which is <code>S.Future (maxBound, undefined)</code>.
If the latter, then there is only one such <code>Just</code>, and the simple future is the one in that <code>Just</code>.
Together, <code>(sToF, fToS)</code> form a projection-embedding pair.</p>

<p>We won&#8217;t really have to implement or invoke these functions.
Instead, they serve to further specify the type <code>F.FutureG</code> and the correctness of operations on it.
The representation of <code>F.FutureG</code> as given allows many values that do not correspond to futures.
To eliminate these representations, require an invariant that a function future must be the result of applying <code>sToF</code> to some simple future.</p>

<p>We&#8217;ll require that each operation preserves this invariant.
However, let&#8217;s prove something stronger, namely that operations on <code>F.FutureG</code> correspond precisely to the same operations on <code>S.FutureG</code>, via <code>sToF</code>.
In other words, <code>sToF</code> preserves the shape of the operations on futures.
For type classes, these correspondences are the type class morphisms.
For instance, <code>sToF</code> is a <code>Monoid</code> morphism:</p>

<pre><code>sToF mempty == mempty

sToF (u `mappend` u') == sToF u `mappend` sToF u'
</code></pre>

<h3>Caching futures</h3>

<p>This function representation eliminates the need for tricky times (using improving values and <code>unamb</code>), but it loses the caching benefit that lazy functional programming affords to non-function representations.
Now let&#8217;s reclaim that benefit.
The trick is to exploit the restriction that every function future must be 
(semantically) the image of a simple future under <code>sToF</code>.</p>

<p>Examining the definition of <code>sToF</code>, we can deduce the following monotonicity properties of (legitimate) function futures:</p>

<ul>
<li>If the probe function yields <code>Nothing</code> for some <code>t'</code>, then it yields <code>Nothing</code> for earlier times.</li>
<li>If the probe function yields <code>Just u</code> for some <code>t'</code>, then it yields <code>Just u</code> for all later times.</li>
</ul>

<p>We can exploit these monotonicity properties by caching information as we learn it.
Caching of this sort is what distinguishes call-by-need from call-by-name and allows lazy evaluation to work efficiently for data representations.</p>

<p>Specifically, let&#8217;s save a best-known lower bound for the future time and the simple future when known.
Since the lower bound may get modified a few times, I&#8217;ll use a <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent-SampleVar.html#v:SampleVar"><code>SampleVar</code></a> (thread-safe rewritable variable).
The simple future will be discovered only once, so I&#8217;ll use an <a href="http://hackage.haskell.org/packages/archive/reactive/latest/doc/html/FRP-Reactive-Internal-IVar.html"><code>IVar</code></a>.
I&#8217;ll keep the function-future for probing when the cached information is not sufficient to answer a query.</p>

<p>Prefix this caching version with a &#8220;C&#8221;, to distinguish it from function futures (&#8220;F&#8221;) and the simple futures (&#8220;S&#8221;):</p>

<pre><code>data C.FutureG t a =
  Future (SampleVar t) (IVar (S.FutureG t a)) (F.FutureG t a)
</code></pre>

<p>Only one of the simple future and the function future will be needed at a time, so we could replace the last two fields with a single one:</p>

<pre><code>data C.FutureG t a =
  Future (SampleVar t) (MVar (Either (F.FutureG t a) (S.FutureG t a)))
</code></pre>

<p>We&#8217;ll have to be careful about multiple independent discoveries of the same simple future, which would correspond to multiple writes to the <code>IVar</code> with the same value.
(I imagine there are related mechanics in the GHC RTS for two threads evaluating the same thunk that would be helpful to understand.)
I guess I could use a <code>SampleVar</code> and just not worry about multiple writes, since they&#8217;d be equivalent.
For now, use the <code>IVar</code> version.</p>

<p>The caching representation relates to the function representation by means of two functions:</p>

<pre><code>dToF :: Ord     t =&gt; C.FutureG t a -&gt; F.FutureG t a

fToD :: Bounded t =&gt; F.FutureG t a -&gt; C.FutureG t a
</code></pre>

<p>The implementation:</p>

<pre><code>dToF (C.Future tv uv uf) =
  F.Future $ \ t' -&gt; unsafePerformIO $
    do mb &lt;- tryReadIVar uv
       case mb of
         j@(Just (S.Future (Max t,_))) -&gt;
           return (if t' &lt;= t then Nothing else j)
         Nothing        -&gt;
           do tlo &lt;- readSampleVar tv
              if t' &lt;= tlo then
                 return Nothing
               else
                 do let mb' = F.unFuture uf t'
                    writeIVarMaybe uv mb'
                    return mb'

-- Perhaps write to an IVar
writeIVarMaybe :: IVar a -&gt; Maybe a -&gt; IO ()
writeIVarMaybe v = maybe (return ()) (writeIVar v)

fToD uf = unsafePerformIO $
          do tv &lt;- newSampleVar t0
             uv &lt;- newIVar
             writeIVarMaybe uv (F.unFuture uf t0)
             return (Future tv uv uf)
 where
   t0 = minBound
</code></pre>

<p>It&#8217;ll be handy to delegate operations to <code>F.Future</code>:</p>

<pre><code>inF :: (Ord t, Bounded t') =&gt;
       (F.FutureG t a -&gt; F.FutureG t' a')
    -&gt; (  FutureG t a -&gt;   FutureG t' a')
inF = dToF ~&gt; fToD

inF2 :: (Ord t, Bounded t', Ord t', Bounded t'') =&gt;
        (F.FutureG t a -&gt; F.FutureG t' a' -&gt; F.FutureG t'' a'')
     -&gt; (  FutureG t a -&gt;   FutureG t' a' -&gt;   FutureG t'' a'')
inF2 = dToF ~&gt; inF
</code></pre>

<p>Then</p>

<pre><code>instance (Ord t, Bounded t) =&gt; Monoid (FutureG t a) where
  mempty  = fToD mempty
  mappend = inF2 mappend

instance (Ord t, Bounded t) =&gt; Functor     (FutureG t) where
  fmap = inF . fmap

instance (Ord t, Bounded t) =&gt; Applicative (FutureG t) where
  pure  = fToD . pure
  (&lt;*&gt;) = inF2 (&lt;*&gt;)
</code></pre>

<h3>Wrap-up</h3>

<p>Well, that&#8217;s the idea.
I&#8217;ve gotten as far as type-checking the code in this post, but I haven&#8217;t yet tried running it.</p>

<p>What interests me most above is the use of <code>unsafePerformIO</code> here while preserving referential transparency, thanks to the invariant on <code>F.FutureG</code> (and the consequent monotonicity property).
The heart of lazy evaluation of <em>pure</em> functional programs is just such an update, replacing a thunk with its weak head normal form (whnf).
What general principles can we construct that allow us to use efficient, destructive updating and still have referential transparency?
The important thing above seems to be the careful definition of an abstract interface such that the effect of state updates is semantically invisible through the interface.</p>
]]></content:encoded>
			<wfw:commentRss>http://conal.net/blog/posts/another-angle-on-functional-future-values/feed</wfw:commentRss>
		<slash:comments>11</slash:comments>
		<atom:link rel="payment" title="Flattr this!" href="https://flattr.com/submit/auto?user_id=conal&amp;popout=1&amp;url=http%3A%2F%2Fconal.net%2Fblog%2Fposts%2Fanother-angle-on-functional-future-values&amp;language=en_GB&amp;category=text&amp;title=Another+angle+on+functional+future+values&amp;description=An+earlier+post+introduced+functional+future+values%2C+which+are+values+that+cannot+be+known+until+the+future%2C+but+can+be+manipulated+in+the+present.+That+post+presented+a+simple+denotational...&amp;tags=caching%2CFRP%2Cfunctional+reactive+programming%2Cfuture+value%2Creferential+transparency%2Ctype+class+morphism%2Ctype+composition%2Cblog" type="text/html" />
	</item>
	</channel>
</rss>
