main :: MMap k v -> MMap kk vv

Dictionary lookups on a key block until that key arrives in the dictionary.

lookup :: MMap k v -> k -> v

Sometimes you want to wait on all keys at the same time, and produce a result. This form accomplishes this:

mapp :: MMap k v -> (k -> v -> MMap kk vv) -> MMap kk vv

The only caveat is that if `mapp` ever detects that the results of the function for two key-value pairs disagree at a key, it must signal an error.

All destruction actions are pushed to the top level of the model. The option to delete input records can be provided. If a record is deleted, all computations that depended on its existence can be automatically recomputed.
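As a purely illustrative approximation of `mapp`'s merge-and-conflict-check behavior (ignoring the blocking semantics of `lookup`, and modelling an `MMap` simply as a finite `Data.Map` of the pairs known so far — both simplifications are my own, not part of the design above):

```haskell
import qualified Data.Map as M

-- Finite approximation: just the key/value pairs known so far.
type MMap k v = M.Map k v

-- Run f on every known pair and merge the resulting maps,
-- signalling an error if two results disagree at a key.
mapp :: (Ord kk, Eq vv) => MMap k v -> (k -> v -> MMap kk vv) -> MMap kk vv
mapp m f = M.foldrWithKey merge M.empty m
  where
    merge k v acc = M.unionWith conflict (f k v) acc
    conflict v1 v2
      | v1 == v2  = v1
      | otherwise = error "mapp: results disagree at a key"
```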

I admit I was hoping you'd covered this base already in your work, or knew someone who did.

I’m not sure I have a good enough sense of what you mean by a “precise denotation” to come up with one.

I can think of some of my own approaches to the problem, and I may try some experiments to see what works out in practice, but I’m not sure if they are going to be as “good” as the procedural version.

For example, if the input is given as a list of lines of text and the output is given as a list of lines, then one could have a function from input lines to output lines. Like:

```
prog []          = ["What is your name?"]
prog [name]      = (prog []) ++ ["Hello, " ++ name ++ "! How old are you?"]
prog [name, age] = (prog [name]) ++ ["Well, you look much younger..."]
```

Presumably I can devise a function from time to a list of input lines, and pass that into this function to get the output lines.
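That time-indexed reading might be sketched like this (the `Time` type, the `inputsAt` function, and its sample schedule are all my own illustrative guesses, not part of the proposal):

```haskell
type Time = Double

-- Hypothetical: which input lines have arrived by time t.
inputsAt :: Time -> [String]
inputsAt t
  | t < 1     = []
  | t < 2     = ["Alice"]
  | otherwise = ["Alice", "35"]

-- prog as above: inputs seen so far |-> outputs so far.
prog :: [String] -> [String]
prog []           = ["What is your name?"]
prog [name]       = prog [] ++ ["Hello, " ++ name ++ "! How old are you?"]
prog [name, _age] = prog [name] ++ ["Well, you look much younger..."]

-- Output lines visible at time t.
outputAt :: Time -> [String]
outputAt = prog . inputsAt
```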

I know this approach has major flaws, but is this an example of what you mean by casting something in terms of denotation?

`IO` is not. One way to investigate is to cast something like your `IO` example in terms of a (precise) denotation, and then see whether/how to capture that denotation in terms of FRP's denotation, which is functions of (continuous) time. I expect that doing so would flush out some hidden assumptions and reveal some alternatives.

```
printLine "What is your name?"
name ← readLine
printLine ("Hello, " ++ name ++ "! How old are you?")
age ← readLine
printLine "Well, you look much younger..."
```

Is FRP unsuitable for this kind of application?

Did you somehow get the impression that I have a bias against monads? If so, how?

Many of my best friends are monads (e.g. (partially applied) pairings, functions, Maybe, Future, backtracking, …). These friends are all denotative. They have precise & tractable meanings I can work with. In contrast, there are non-denotative types, like IO. And while IO happens to be a Monad, it’s also a Functor, Applicative, Monoid (when applied to a Monoid), and even a Num (when applied to a Num)! These classes don’t hinder or help denotative-ness.

My goal here is to remind people of the quest of denotative programming and hopefully inspire some curiosity and creative exploration of new denotative alternatives to Haskell’s IO.

Btw, it was mildly ironic that in your talk you had to resort to textual notation to describe the constructs behind Eros. That is somewhat like asking for the visual equivalent of the understanding that lists, Maybe, and IO are in some sense “the same”.

The download link at http://www.haskell.org/haskellwiki/Eros is broken.

I see `ls | grep txt` as hinting in the direction I’m talking about but not quite going there. For instance, while `grep` is a pure function, it takes its two inputs in very different ways, one of which is non-compositional (except through inconsistent hacks). Also, `ls` takes its input in a very non-functional way. It accesses various bits of mutable state (the current working directory and the current file system contents), as well as having an implicit reference to a particular file system. And then there’s the lack of static typing in Unix chains. Please see my “modern marriage” talk for discussion of these points. (Near the start, in discussing the Unix dream and how its choice of text I/O prevented it from achieving that dream.)

Uniqueness types are very nice, and their cousin linear types are of fundamental importance to proof theory and semantics. However, they don’t directly supply an answer to Conal’s question: namely, does this library have good equational reasoning principles?

A good example of this arises when you try to program randomized algorithms in a functional style. The RNG will need a seed, and so the typical thing to do is to thread the seed through the program, and use either a monadic API or explicit state-passing (with uniqueness types) to manage this work, so that whenever we make a call to the RNG, we have the state available.

However, this is a lousy API. The reason is that when we’re writing our program, we don’t care about the order in which we make calls to the RNG, since the whole point is that it gives us a source of unpredictable values. However, the fact that we’re passing a state around means that we cannot reorder operations without changing the visible behavior of the program. So there’s a gap between how we want to think about the program, and what we can actually do.
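The seed-threading style just described might look like this (the toy `next` step and its constants are my own choices; any real RNG would do):

```haskell
type Seed = Int

-- A toy linear-congruential step (constants are illustrative only).
next :: Seed -> (Int, Seed)
next s = (s' `mod` 100, s')
  where s' = 6364136223846793005 * s + 1442695040888963407

-- Two draws: the seed is threaded through explicitly, so the order
-- of the two calls is part of the program's observable behavior.
twoDraws :: Seed -> ((Int, Int), Seed)
twoDraws s0 = ((x, y), s2)
  where (x, s1) = next s0
        (y, s2) = next s1
```

Swapping the two `next` calls swaps the values drawn, so neither the programmer nor an optimizer can reorder them freely — exactly the gap between intent and behavior described above.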

A better approach is a monadic API, where the type constructor P(A) is interpreted as a probability distribution over values of A. Then we give no operation to “call the RNG”, and instead supply an operation to create distributions (for example, “choose p a b” might mean a occurs with probability p and b occurs with probability 1-p). Now, the return of the monad “return v” is the point distribution that is v with probability 1, and the bind operation corresponds to conditionalization.

Now, this gives very good equational reasoning principles — in fact, we can apply probability theory very directly to reasoning about our programs. The implementation is actually still the same as the usual state passing implementation, which shows that the API exposed is what’s important to determining the reasoning principles we get. (If you want the details of this idea, there’s a nice paper by Avi Pfeffer and Norman Ramsey “Stochastic Lambda Calculus and Monads of Probability Distributions”, at http://www.cs.tufts.edu/~nr/pubs/pmonad.pdf.)
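A minimal weighted-list rendering of this API, along the lines of the Ramsey–Pfeffer paper cited above (the `P` newtype and the `prob` helper are my own illustrative names):

```haskell
-- A distribution as a weighted list of outcomes.
newtype P a = P { runP :: [(a, Double)] }

instance Functor P where
  fmap f (P xs) = P [ (f a, p) | (a, p) <- xs ]

instance Applicative P where
  pure a = P [(a, 1)]
  P fs <*> P xs = P [ (f a, p * q) | (f, p) <- fs, (a, q) <- xs ]

instance Monad P where
  return = pure
  -- Bind: weight each continuation's outcomes by the probability
  -- of the value that produced them.
  P xs >>= f = P [ (b, p * q) | (a, p) <- xs, (b, q) <- runP (f a) ]

-- choose p a b: a with probability p, b with probability 1 - p.
choose :: Double -> a -> a -> P a
choose p a b = P [(a, p), (b, 1 - p)]

-- Probability that a predicate holds of a draw.
prob :: (a -> Bool) -> P a -> Double
prob ok (P xs) = sum [ p | (a, p) <- xs, ok a ]
```

With this interface there is no observable "call order": queries like `prob even (choose 0.25 2 3)` are answered directly by probability theory.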

Please read the DDC thesis, which explains how the ASCII notation for uniqueness typing quickly gets out of hand in real-world programs, if I recall correctly.

(forgive me if the question doesn’t make too much sense as I am new to both)

If the technically (but awkwardly) functional IO had its imperative essence removed at a sufficiently low level (say, the kernel), could we break out of the paradigm? Or would it always be a clever hack like monads? Certain things, like determining the current time with no inputs, are not computable. A machine needs input: either from someone initially setting a time and having a piece of quartz determine the next moment, or from a camera recording the positions of the heavenly bodies. Both of those things are non-deterministic inputs for performing something as trivial as getting the current time.

Do our realities interact with a Turing-complete computer analogously to a stack augmenting an FSA (to yield a more powerful pushdown automaton)? Do computers really express time/space as continuously sequential as we experience it? Or is that analogous to an FSA not expressing arbitrary state the way a pushdown automaton can?

Maybe we need denotative/functional models for things like clocks/time, networking I/O, random values, and device I/O. These things can provide values that can be used in Turing-complete computations, but they need not themselves be made up of Turing-complete systems.

If we are to bust out of the von Neumann paradigm, we need to denote systems beyond Turing completeness.
