<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Future values via multi-threading</title>
	<atom:link href="http://conal.net/blog/posts/future-values-via-multi-threading/feed" rel="self" type="application/rss+xml" />
	<link>http://conal.net/blog/posts/future-values-via-multi-threading</link>
	<description>Inspirations &#38; experiments, mainly about denotative/functional programming in Haskell</description>
	<lastBuildDate>Sat, 26 Sep 2020 21:06:12 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.17</generator>
	<item>
		<title>By: Conal Elliott &#187; Blog Archive &#187; Another angle on functional future values</title>
		<link>http://conal.net/blog/posts/future-values-via-multi-threading#comment-27</link>
		<dc:creator><![CDATA[Conal Elliott &#187; Blog Archive &#187; Another angle on functional future values]]></dc:creator>
		<pubDate>Mon, 05 Jan 2009 04:02:16 +0000</pubDate>
		<guid isPermaLink="false">http://conal.net/blog/posts/future-values-part-two-a-multi-threaded-implementation/#comment-27</guid>
		<description><![CDATA[&lt;p&gt;[...] follow-up post gave an implementation of Future values via multi threading. Unfortunately, that implementation did not necessarily satisfy the semantics, because it allowed [...]&lt;/p&gt;
]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] follow-up post gave an implementation of Future values via multi threading. Unfortunately, that implementation did not necessarily satisfy the semantics, because it allowed [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jeffrey Yasskin</title>
		<link>http://conal.net/blog/posts/future-values-via-multi-threading#comment-26</link>
		<dc:creator><![CDATA[Jeffrey Yasskin]]></dc:creator>
		<pubDate>Sun, 06 Apr 2008 21:54:56 +0000</pubDate>
		<guid isPermaLink="false">http://conal.net/blog/posts/future-values-part-two-a-multi-threaded-implementation/#comment-26</guid>
		<description><![CDATA[&lt;p&gt;I didn&#039;t completely understand, and I was also too close to C++. It looks like MVars do guarantee sequential consistency, so you don&#039;t have to worry about only having a partial order on time. Then I missed that fmap and mappend exactly propagate their input times, despite taking a finite amount of real time to execute. So it&#039;s possible for:&lt;/p&gt;

&lt;pre&gt;do a &lt;- future (return 0)
   let b = (+1) `fmap` a
       c = (+2) `fmap` a
   val_bc &lt;- force (b `mappend` c)
   val_cb &lt;- force (c `mappend` b)
   guard (val_bc == 1 &amp;&amp; val_cb == 2)
&lt;/pre&gt;

&lt;p&gt;to fail even though the denotational semantics say it should succeed. Even if fmap introduced a new time, it would take some tricky programming to completely avoid the possibility that (val_bc == 2 &amp;&amp; val_cb==1), which would prove that they happened in an inconsistent order. But Improving already implements most of that tricky programming, so great!&lt;/p&gt;

&lt;p&gt;After skimming your paper, I didn&#039;t see a way to introduce Futures or (Improving Time)s, just the compositions. To get time to work, I think you need a global monotonic MVar counter. Then when the future gets its value, you do something like:&lt;/p&gt;

&lt;pre&gt;do c_val &lt;- takeMVar counter
   -- Important that this happens while counter has no value:
   putMVar an_mvar_inside_the_improving c_val
   putMVar counter (c_val + 1)
&lt;/pre&gt;

&lt;p&gt;and to write exact and compare_I for a particular Future, you use:&lt;/p&gt;

&lt;pre&gt;exact = unsafePerformIO $ readMVar an_mvar_inside_the_improving  -- bound from the creation of the future
compare_I other = unsafePerformIO $ do
    -- make sure that the global counter has passed other&#039;s value:
    evaluate other
    my_val &lt;- tryTakeMVar an_mvar_inside_the_improving
    case my_val of
        -- If an_mvar... is not set any time after other was set, it&#039;s guaranteed to get a higher value.
        Nothing -&gt; return GT
        Just val -&gt; do putMVar an_mvar_inside_the_improving val
                       return $ compare val other
&lt;/pre&gt;

&lt;p&gt;I haven&#039;t run this, so it&#039;s very possible it doesn&#039;t actually work. It&#039;s also fairly expensive to maintain that global counter on a machine with lots of processors because the cache line has to move between processors a lot, but that may be masked under other Haskell overhead.&lt;/p&gt;

&lt;p&gt;Or have you already implemented all of that somewhere that I missed?&lt;/p&gt;

&lt;p&gt;Also, according to http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html#v%3AkillThread, &quot;if there are two threads that can kill each other, it is guaranteed that only one of the threads will get to kill the other.&quot;, so you don&#039;t actually need the lock in &lt;code&gt;race&lt;/code&gt;.&lt;/p&gt;
]]></description>
		<content:encoded><![CDATA[<p>I didn&#8217;t completely understand, and I was also too close to C++. It looks like MVars do guarantee sequential consistency, so you don&#8217;t have to worry about only having a partial order on time. Then I missed that fmap and mappend exactly propagate their input times, despite taking a finite amount of real time to execute. So it&#8217;s possible for:</p>

<pre>do a &lt;- future (return 0)
   let b = (+1) `fmap` a
       c = (+2) `fmap` a
   val_bc &lt;- force (b `mappend` c)
   val_cb &lt;- force (c `mappend` b)
   guard (val_bc == 1 &amp;&amp; val_cb == 2)
</pre>

<p>to fail even though the denotational semantics say it should succeed. Even if fmap introduced a new time, it would take some tricky programming to completely avoid the possibility that (val_bc == 2 &amp;&amp; val_cb==1), which would prove that they happened in an inconsistent order. But Improving already implements most of that tricky programming, so great!</p>

<p>After skimming your paper, I didn&#8217;t see a way to introduce Futures or (Improving Time)s, just the compositions. To get time to work, I think you need a global monotonic MVar counter. Then when the future gets its value, you do something like:</p>

<pre>do c_val &lt;- takeMVar counter
   -- Important that this happens while counter has no value:
   putMVar an_mvar_inside_the_improving c_val
   putMVar counter (c_val + 1)
</pre>

<p>and to write exact and compare_I for a particular Future, you use:</p>

<pre>exact = unsafePerformIO $ readMVar an_mvar_inside_the_improving  -- bound from the creation of the future
compare_I other = unsafePerformIO $ do
    -- make sure that the global counter has passed other's value:
    evaluate other
    my_val &lt;- tryTakeMVar an_mvar_inside_the_improving
    case my_val of
        -- If an_mvar... is not set any time after other was set, it's guaranteed to get a higher value.
        Nothing -&gt; return GT
        Just val -&gt; do putMVar an_mvar_inside_the_improving val
                       return $ compare val other
</pre>

<p>I haven&#8217;t run this, so it&#8217;s very possible it doesn&#8217;t actually work. It&#8217;s also fairly expensive to maintain that global counter on a machine with lots of processors because the cache line has to move between processors a lot, but that may be masked under other Haskell overhead.</p>

<p>Or have you already implemented all of that somewhere that I missed?</p>

<p>Also, according to <a href="http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html#v%3AkillThread" rel="nofollow">http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html#v%3AkillThread</a>, &#8220;if there are two threads that can kill each other, it is guaranteed that only one of the threads will get to kill the other.&#8221;, so you don&#8217;t actually need the lock in <code>race</code>.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: conal</title>
		<link>http://conal.net/blog/posts/future-values-via-multi-threading#comment-25</link>
		<dc:creator><![CDATA[conal]]></dc:creator>
		<pubDate>Sun, 06 Apr 2008 05:13:52 +0000</pubDate>
		<guid isPermaLink="false">http://conal.net/blog/posts/future-values-part-two-a-multi-threaded-implementation/#comment-25</guid>
		<description><![CDATA[&lt;p&gt;Thanks for the comment, Jeffrey.&lt;/p&gt;

&lt;p&gt;I wonder if you understood what I&#039;m after in an implementation, which is to realize (implement) the simple, determinate, denotational semantics from the earlier &lt;em&gt;&lt;a href=&quot;http://conal.net/blog/posts/future-values/&quot; rel=&quot;nofollow&quot;&gt;Future values&lt;/a&gt;&lt;/em&gt; post.  That semantics includes and addresses simultaneity, and implementing it faithfully doesn&#039;t require hardware simultaneity.&lt;/p&gt;

&lt;p&gt;My original demand-driven FRP implementations implemented this semantics (including determinacy and simultaneity) correctly (whether on uniprocessor or multiprocessor), as does the new data-driven implementation described in &lt;em&gt;&lt;a href=&quot;http://conal.net/blog/posts/simply-efficient-functional-reactivity/&quot; rel=&quot;nofollow&quot;&gt;Simply efficient functional reactivity&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;
]]></description>
		<content:encoded><![CDATA[<p>Thanks for the comment, Jeffrey.</p>

<p>I wonder if you understood what I&#8217;m after in an implementation, which is to realize (implement) the simple, determinate, denotational semantics from the earlier <em><a href="http://conal.net/blog/posts/future-values/" rel="nofollow">Future values</a></em> post.  That semantics includes and addresses simultaneity, and implementing it faithfully doesn&#8217;t require hardware simultaneity.</p>

<p>My original demand-driven FRP implementations implemented this semantics (including determinacy and simultaneity) correctly (whether on uniprocessor or multiprocessor), as does the new data-driven implementation described in <em><a href="http://conal.net/blog/posts/simply-efficient-functional-reactivity/" rel="nofollow">Simply efficient functional reactivity</a></em>.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jeffrey Yasskin</title>
		<link>http://conal.net/blog/posts/future-values-via-multi-threading#comment-24</link>
		<dc:creator><![CDATA[Jeffrey Yasskin]]></dc:creator>
		<pubDate>Sat, 05 Apr 2008 19:55:32 +0000</pubDate>
		<guid isPermaLink="false">http://conal.net/blog/posts/future-values-part-two-a-multi-threaded-implementation/#comment-24</guid>
		<description><![CDATA[&lt;p&gt;I don&#039;t think you should worry so much about left-biasing mappend. Assuming that you intend this to be useful on multi-core machines, you need to understand that there is no such thing as simultaneity. Take the following example (known as Independent Reads of Independent Writes, IRIW), with a==b==0 initially:&lt;/p&gt;

&lt;pre&gt;Thread 1    Thread 2    Thread 3    Thread 4
r1 = a      r3 = b      a = 1       b = 1
r2 = b      r4 = a
&lt;/pre&gt;

&lt;p&gt;On many 4-processor systems, it is quite possible that r1 == r3 == 1 and r2 == r4 == 0, proving that a was written before b to thread 1, and b was written before a to thread 2. Not only didn&#039;t they happen at the same time, they happened in inconsistent orders in different places. (It&#039;s very relativistic.)  When processors do allow you to eliminate this possibility, they instead guarantee that memory actions happen in a global total order. Again, no simultaneity.&lt;/p&gt;

&lt;p&gt;So you have two choices. Either your Time parameter models the behavior of your Futures, in which case you have to arrange to use the fairly expensive sequentially-consistent instructions for reads and writes to shared variables (which guarantees that no two events will be simultaneous). Or a totally-ordered Time is inappropriate to model the semantics of Futures, and you instead have to fall back on something like the happens-before partial order that defines the Java and C++0x memory models. There may be another good semantics for this besides happens-before, but it&#039;s unlikely to present you with the problem of events that are provably simultaneous, so even if mappend picks &quot;wrong&quot;, your users will never be able to tell.&lt;/p&gt;

&lt;p&gt;P.S. I&#039;m working on a C++ Futures library and had realized that futures were monadic, but these articles are a great summary of the realization. Thanks!&lt;/p&gt;
]]></description>
		<content:encoded><![CDATA[<p>I don&#8217;t think you should worry so much about left-biasing mappend. Assuming that you intend this to be useful on multi-core machines, you need to understand that there is no such thing as simultaneity. Take the following example (known as Independent Reads of Independent Writes, IRIW), with a==b==0 initially:</p>

<pre>Thread 1    Thread 2    Thread 3    Thread 4
r1 = a      r3 = b      a = 1       b = 1
r2 = b      r4 = a
</pre>

<p>On many 4-processor systems, it is quite possible that r1 == r3 == 1 and r2 == r4 == 0, proving that a was written before b to thread 1, and b was written before a to thread 2. Not only didn&#8217;t they happen at the same time, they happened in inconsistent orders in different places. (It&#8217;s very relativistic.)  When processors do allow you to eliminate this possibility, they instead guarantee that memory actions happen in a global total order. Again, no simultaneity.</p>
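<p>For concreteness, the IRIW shape above can be written down as runnable Haskell (this sketch and all names in it are the editor's, not part of the original comment). GHC's plain <code>IORef</code>s are coherent per reference but carry no cross-reference ordering guarantee, so the outcome r1 == r3 == 1 with r2 == r4 == 0 is permitted in principle, even though it is hard to observe in practice:</p>

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Data.IORef (newIORef, readIORef, writeIORef)

-- IRIW harness: two writer threads, two reader threads, as in the
-- table above.  Returns ((r1, r2), (r3, r4)).
iriw :: IO ((Int, Int), (Int, Int))
iriw = do
  a  <- newIORef 0
  b  <- newIORef 0
  m1 <- newEmptyMVar
  m2 <- newEmptyMVar
  _  <- forkIO (writeIORef a 1)            -- Thread 3
  _  <- forkIO (writeIORef b 1)            -- Thread 4
  _  <- forkIO $ do r1 <- readIORef a      -- Thread 1
                    r2 <- readIORef b
                    putMVar m1 (r1, r2)
  _  <- forkIO $ do r3 <- readIORef b      -- Thread 2
                    r4 <- readIORef a
                    putMVar m2 (r3, r4)
  rs1 <- takeMVar m1
  rs2 <- takeMVar m2
  return (rs1, rs2)
```

<p>Each individual read can only ever see 0 or 1; the interesting question is whether the two reader threads can disagree about the order of the two writes.</p>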

<p>So you have two choices. Either your Time parameter models the behavior of your Futures, in which case you have to arrange to use the fairly expensive sequentially-consistent instructions for reads and writes to shared variables (which guarantees that no two events will be simultaneous). Or a totally-ordered Time is inappropriate to model the semantics of Futures, and you instead have to fall back on something like the happens-before partial order that defines the Java and C++0x memory models. There may be another good semantics for this besides happens-before, but it&#8217;s unlikely to present you with the problem of events that are provably simultaneous, so even if mappend picks &#8220;wrong&#8221;, your users will never be able to tell.</p>

<p>P.S. I&#8217;m working on a C++ Futures library and had realized that futures were monadic, but these articles are a great summary of the realization. Thanks!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: conal</title>
		<link>http://conal.net/blog/posts/future-values-via-multi-threading#comment-23</link>
		<dc:creator><![CDATA[conal]]></dc:creator>
		<pubDate>Sun, 10 Feb 2008 05:57:18 +0000</pubDate>
		<guid isPermaLink="false">http://conal.net/blog/posts/future-values-part-two-a-multi-threaded-implementation/#comment-23</guid>
		<description><![CDATA[&lt;p&gt;sjanssen wrote:&lt;/p&gt;

&lt;div&gt;
&lt;pre class=&quot;haskell&quot;&gt;It seems to me that this is &lt;a href=&quot;http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:not&quot; rel=&quot;nofollow&quot;&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;not&lt;/span&gt;&lt;/a&gt; referentially transparent: ...&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;To me also.  I think the uses of &lt;code&gt;future&lt;/code&gt; in &lt;code&gt;Future.hs&lt;/code&gt; are referentially transparent.  I could simply not export &lt;code&gt;future&lt;/code&gt; or make a clear explanation of how I intend it be used.&lt;/p&gt;
]]></description>
		<content:encoded><![CDATA[<p>sjanssen wrote:</p>

<div>
<pre class="haskell">It seems to me that this is <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:not" rel="nofollow"><span style="font-weight: bold;">not</span></a> referentially transparent: ...</pre>
</div>

<p>To me also.  I think the uses of <code>future</code> in <code>Future.hs</code> are referentially transparent.  I could simply not export <code>future</code> or make a clear explanation of how I intend it be used.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: sjanssen</title>
		<link>http://conal.net/blog/posts/future-values-via-multi-threading#comment-22</link>
		<dc:creator><![CDATA[sjanssen]]></dc:creator>
		<pubDate>Thu, 07 Feb 2008 07:12:39 +0000</pubDate>
		<guid isPermaLink="false">http://conal.net/blog/posts/future-values-part-two-a-multi-threaded-implementation/#comment-22</guid>
		<description><![CDATA[&lt;p&gt;It seems to me that this is not referentially transparent:&lt;/p&gt;

&lt;p&gt;
let x = future &lt;a href=&quot;http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:getChar&quot; rel=&quot;nofollow&quot;&gt;getChar&lt;/a&gt; in &lt;a href=&quot;http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Applicative.html#v:liftA2&quot; rel=&quot;nofollow&quot;&gt;liftA2&lt;/a&gt; &#040;,&#041; x x
&lt;/p&gt;

&lt;p&gt;vs.&lt;/p&gt;

&lt;p&gt;
&lt;a href=&quot;http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Applicative.html#v:liftA2&quot; rel=&quot;nofollow&quot;&gt;liftA2&lt;/a&gt; &#040;,&#041; &#040;future &lt;a href=&quot;http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:getChar&quot; rel=&quot;nofollow&quot;&gt;getChar&lt;/a&gt;&#041; &#040;future &lt;a href=&quot;http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:getChar&quot; rel=&quot;nofollow&quot;&gt;getChar&lt;/a&gt;&#041;
&lt;/p&gt;
]]></description>
		<content:encoded><![CDATA[<p>It seems to me that this is not referentially transparent:</p>

<p>
let x = future <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:getChar" rel="nofollow">getChar</a> in <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Applicative.html#v:liftA2" rel="nofollow">liftA2</a> &#40;,&#41; x x
</p>

<p>vs.</p>

<p>
<a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Applicative.html#v:liftA2" rel="nofollow">liftA2</a> &#40;,&#41; &#40;future <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:getChar" rel="nofollow">getChar</a>&#41; &#40;future <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:getChar" rel="nofollow">getChar</a>&#41;
</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Conal Elliott &#187; Blog Archive &#187; Invasion of the composable Mutant-Bots</title>
		<link>http://conal.net/blog/posts/future-values-via-multi-threading#comment-21</link>
		<dc:creator><![CDATA[Conal Elliott &#187; Blog Archive &#187; Invasion of the composable Mutant-Bots]]></dc:creator>
		<pubDate>Wed, 06 Feb 2008 17:16:23 +0000</pubDate>
		<guid isPermaLink="false">http://conal.net/blog/posts/future-values-part-two-a-multi-threaded-implementation/#comment-21</guid>
		<description><![CDATA[&lt;p&gt;[...] One thing I like a lot about the implementations in this post, compared with Reactive, is that they do not need any concurrency, and so it&#8217;s easy to achieve deterministic semantics. I don&#8217;t quite know how to do that for Reactive, as mentioned in &#8220;Future values via multi-threading.&#8221; [...]&lt;/p&gt;
]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] One thing I like a lot about the implementations in this post, compared with Reactive, is that they do not need any concurrency, and so it&#8217;s easy to achieve deterministic semantics. I don&#8217;t quite know how to do that for Reactive, as mentioned in &#8220;Future values via multi-threading.&#8221; [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Russell</title>
		<link>http://conal.net/blog/posts/future-values-via-multi-threading#comment-20</link>
		<dc:creator><![CDATA[Russell]]></dc:creator>
		<pubDate>Sat, 02 Feb 2008 13:52:27 +0000</pubDate>
		<guid isPermaLink="false">http://conal.net/blog/posts/future-values-part-two-a-multi-threaded-implementation/#comment-20</guid>
		<description><![CDATA[&lt;p&gt;Instead of&lt;/p&gt;

&lt;p&gt;
a `&lt;a href=&quot;http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:seq&quot; rel=&quot;nofollow&quot;&gt;seq&lt;/a&gt;` &lt;a href=&quot;http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:return&quot; rel=&quot;nofollow&quot;&gt;return&lt;/a&gt; a
&lt;/p&gt;

&lt;p&gt;I recommend the slightly less evil&lt;/p&gt;

&lt;p&gt;
Control.Exception.evaluate a
&lt;/p&gt;
]]></description>
		<content:encoded><![CDATA[<p>Instead of</p>

<p>
a `<a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:seq" rel="nofollow">seq</a>` <a href="http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:return" rel="nofollow">return</a> a
</p>

<p>I recommend the slightly less evil</p>

<p>
Control.Exception.evaluate a
</p>
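
<p>A minimal sketch of the difference (the editor's illustration, not part of the original comment): <code>a `seq` return a</code> ties forcing <code>a</code> to evaluation of the action itself, whereas <code>evaluate a</code> forces it while the action <em>runs</em>, so errors hidden in the value surface as ordinary IO exceptions that <code>try</code> can catch:</p>

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Force a value at execution time; errors inside the value become
-- catchable IO exceptions rather than escaping as pure bottoms.
forceOrCatch :: Int -> IO (Either SomeException Int)
forceOrCatch x = try (evaluate x)
```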
]]></content:encoded>
	</item>
	<item>
		<title>By: conal</title>
		<link>http://conal.net/blog/posts/future-values-via-multi-threading#comment-19</link>
		<dc:creator><![CDATA[conal]]></dc:creator>
		<pubDate>Sat, 26 Jan 2008 06:03:45 +0000</pubDate>
		<guid isPermaLink="false">http://conal.net/blog/posts/future-values-part-two-a-multi-threaded-implementation/#comment-19</guid>
		<description><![CDATA[&lt;blockquote&gt;
  &lt;p&gt;In the race function, what happens in the unlikely case that the two threads both perform the &#039;killThread&#039; action before either gets to &#039;sink x&#039;?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I think that could happen, and the result wouldn&#039;t be pleasant, as there would be no winner instead of one or two.  Perhaps a safer alternative would be to swap the &lt;code&gt;killThread&lt;/code&gt; and &lt;code&gt;sink&lt;/code&gt; lines.  Then there&#039;d be a possibility of two sinks getting through.  Both sink calls would &lt;code&gt;putMVar&lt;/code&gt; to the same MVar, which means the second call would block until its thread is killed later.  But there&#039;s still a gotcha: the &lt;code&gt;readMVar&lt;/code&gt; operation is not atomic.  It does a &lt;code&gt;takeMVar&lt;/code&gt; followed by a &lt;code&gt;putMVar&lt;/code&gt;.  Between those two calls, the blocked &lt;code&gt;putMVar&lt;/code&gt; thread could easily be scheduled.  The result is that the same future could report two different values.  I think the &quot;simple implementation&quot; avoids that problem by hiding the action that contains the &lt;code&gt;readMVar&lt;/code&gt;, allowing it to get executed at most once.&lt;/p&gt;

&lt;p&gt;I don&#039;t expect any of these remarks to be reassuring.  At least, they&#039;re not to me.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Also, why is it a problem that in a mappend b we may get b when a and b are available simultaneously?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because I want a clear &amp; deterministic (denotational) semantics.  Why?  For tractability of reasoning.  The semantics has to specify one bias or the other in the case of simultaneity.&lt;/p&gt;
]]></description>
		<content:encoded><![CDATA[<blockquote>
  <p>In the race function, what happens in the unlikely case that the two threads both perform the &#8216;killThread&#8217; action before either gets to &#8216;sink x&#8217;?</p>
</blockquote>

<p>I think that could happen, and the result wouldn&#8217;t be pleasant, as there would be no winner instead of one or two.  Perhaps a safer alternative would be to swap the <code>killThread</code> and <code>sink</code> lines.  Then there&#8217;d be a possibility of two sinks getting through.  Both sink calls would <code>putMVar</code> to the same MVar, which means the second call would block until its thread is killed later.  But there&#8217;s still a gotcha: the <code>readMVar</code> operation is not atomic.  It does a <code>takeMVar</code> followed by a <code>putMVar</code>.  Between those two calls, the blocked <code>putMVar</code> thread could easily be scheduled.  The result is that the same future could report two different values.  I think the &#8220;simple implementation&#8221; avoids that problem by hiding the action that contains the <code>readMVar</code>, allowing it to get executed at most once.</p>
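
<p>To make the swap concrete, here is a small self-contained sketch of a race combinator with the sink before the kill (the editor's reconstruction under stated assumptions; the post's actual <code>race</code> differs in detail). The loser's <code>putMVar</code> blocks on the already-filled MVar until it is killed, exactly as described above; using a one-shot <code>takeMVar</code> for the final read sidesteps the non-atomic <code>readMVar</code> gotcha for a single consumer:</p>

```haskell
import Control.Concurrent
  (forkIO, killThread, newEmptyMVar, putMVar, readMVar, takeMVar)

-- Race two actions; the first to finish sinks its result, then kills
-- the other thread.  Sinking first guarantees there is always a winner.
raceSinkFirst :: IO a -> IO a -> IO a
raceSinkFirst ioA ioB = do
  out  <- newEmptyMVar                            -- winner's value
  tidB <- newEmptyMVar                            -- lets thread A find thread B
  ta <- forkIO $ do x <- ioA
                    putMVar out x                 -- sink first ...
                    killThread =<< readMVar tidB  -- ... then kill
  tb <- forkIO $ do x <- ioB
                    putMVar out x                 -- blocks if A already sank
                    killThread ta
  putMVar tidB tb
  takeMVar out                                    -- one-shot read of the winner
```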

<p>I don&#8217;t expect any of these remarks to be reassuring.  At least, they&#8217;re not to me.</p>

<blockquote>
  <p>Also, why is it a problem that in a mappend b we may get b when a and b are available simultaneously?</p>
</blockquote>

<p>Because I want a clear &amp; deterministic (denotational) semantics.  Why?  For tractability of reasoning.  The semantics has to specify one bias or the other in the case of simultaneity.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Ivan</title>
		<link>http://conal.net/blog/posts/future-values-via-multi-threading#comment-18</link>
		<dc:creator><![CDATA[Ivan]]></dc:creator>
		<pubDate>Sat, 26 Jan 2008 05:35:23 +0000</pubDate>
		<guid isPermaLink="false">http://conal.net/blog/posts/future-values-part-two-a-multi-threaded-implementation/#comment-18</guid>
		<description><![CDATA[&lt;p&gt;A very interesting article :)&lt;/p&gt;

&lt;p&gt;In the race function, what happens in the unlikely case that the two threads both perform the &#039;killThread&#039; action before either gets to &#039;sink x&#039;?&lt;/p&gt;

&lt;p&gt;Also, why is it a problem that in &lt;code&gt;a `mappend` b&lt;/code&gt; we may get b when a and b are available simultaneously?&lt;/p&gt;

&lt;p&gt;Ivan&lt;/p&gt;
]]></description>
		<content:encoded><![CDATA[<p>A very interesting article <img src="http://conal.net/blog/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /></p>

<p>In the race function, what happens in the unlikely case that the two threads both perform the &#8216;killThread&#8217; action before either gets to &#8216;sink x&#8217;?</p>

<p>Also, why is it a problem that in <code>a `mappend` b</code> we may get b when a and b are available simultaneously?</p>

<p>Ivan</p>
]]></content:encoded>
	</item>
</channel>
</rss>
