Did you somehow get the impression that I have a bias against monads? If so, how?

Many of my best friends are monads (e.g. (partially applied) pairings, functions, Maybe, Future, backtracking, …). These friends are all denotative. They have precise & tractable meanings I can work with. In contrast, there are non-denotative types, like IO. And while IO happens to be a Monad, it’s also a Functor, Applicative, Monoid (when applied to a Monoid), and even a Num (when applied to a Num)! These classes don’t hinder or help denotative-ness.
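To make that last point concrete, here is a small Haskell sketch. The `Monoid` instance for `IO` ships with GHC’s base library; the `Num` instance does not, so the orphan instance below is defined purely for illustration. Note that having these instances neither helps nor hinders IO’s denotative-ness:

```haskell
import Control.Applicative (liftA2)

-- Provided by base (for any Monoid result): combining two actions
-- runs both and combines their results.
greet :: IO ()
greet = putStrLn "hello" <> putStrLn "world"

-- Not in base: a pointwise Num instance, lifted through Applicative.
-- An orphan instance, shown here only to illustrate the point.
instance Num a => Num (IO a) where
  (+)         = liftA2 (+)
  (*)         = liftA2 (*)
  (-)         = liftA2 (-)
  abs         = fmap abs
  signum      = fmap signum
  fromInteger = pure . fromInteger

-- Now (pure 2 + 3 :: IO Int) is an action whose result is 5.
```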

My goal here is to remind people of the quest of denotative programming and hopefully inspire some curiosity and creative exploration of new denotative alternatives to Haskell’s IO.

“Btw, it was mildly ironic that in your talk you had to resort to textual notation to describe the constructs behind Eros.”

That is somewhat like asking for the visual equivalent of the understanding that lists, Maybe, and IO are in some sense “the same”.

The download link at http://www.haskell.org/haskellwiki/Eros is broken.

`ls | grep txt` as hinting in the direction I’m talking about but not quite going there. For instance, while `grep` is a pure function, it takes its two inputs in very different ways, one of which is non-compositional (except through inconsistent hacks). Also, `ls` takes its input in a very non-functional way: it accesses various bits of mutable state (the current working directory and the current file system contents), as well as having an implicit reference to a particular file system. And then there’s the lack of static typing in Unix chains. Please see my “modern marriage” talk for discussion of these points (near the start, in discussing the Unix dream and how its choice of text I/O prevented it from achieving that dream).

Uniqueness types are very nice, and their cousin linear types are of fundamental importance to proof theory and semantics. However, they don’t directly supply an answer to Conal’s question: namely, does this library have good equational reasoning principles?

A good example of this arises when you try to program randomized algorithms in a functional style. The RNG will need a seed, and so the typical thing to do is to thread the seed through the program, and use either a monadic API or explicit state-passing (with uniqueness types) to manage this work, so that whenever we make a call to the RNG, we have the state available.

However, this is a lousy API. The reason is that when we’re writing our program, we don’t care about the order in which we make calls to the RNG, since the whole point is that it gives us a source of unpredictable values. However, the fact that we’re passing a state around means that we cannot reorder operations without changing the visible behavior of the program. So there’s a gap between how we want to think about the program, and what we can actually do.
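That gap is visible even in a tiny Haskell sketch of explicit seed threading. The generator below is a toy linear congruential step; `next`, `twoDraws`, and the constants are illustrative names only, not any real RNG API:

```haskell
type Seed = Int

-- Toy linear congruential step: returns a draw and the next seed.
next :: Seed -> (Int, Seed)
next s = let s' = (1103515245 * s + 12345) `mod` 2147483648
         in (s', s')

-- Two draws, with the seed threaded by hand.
twoDraws :: Seed -> (Int, Int)
twoDraws s0 = let (a, s1) = next s0
                  (b, _ ) = next s1
              in (a, b)

-- The "same" two draws, reordered. Because seed threading fixes an
-- evaluation order, swapping the calls visibly changes the result:
-- twoDraws s /= twoDrawsReordered s in general.
twoDrawsReordered :: Seed -> (Int, Int)
twoDrawsReordered s0 = let (b, s1) = next s0
                           (a, _ ) = next s1
                       in (a, b)
```

Even though we think of the two draws as independent, the state-passing types force one particular sequential story on them.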

A better API is a monadic one, where the type constructor P(A) is interpreted as a probability distribution over values of A. Then we give no operation to “call the RNG”, and instead supply an operation to create distributions (for example, “choose p a b” might mean a occurs with probability p and b with probability 1-p). Now the return of the monad, “return v”, is the point distribution that is v with probability 1, and the bind operation corresponds to conditionalization.

Now, this gives very good equational reasoning principles — in fact, we can apply probability theory very directly to reasoning about our programs. The implementation is actually still the same as the usual state passing implementation, which shows that the API exposed is what’s important to determining the reasoning principles we get. (If you want the details of this idea, there’s a nice paper by Avi Pfeffer and Norman Ramsey “Stochastic Lambda Calculus and Monads of Probability Distributions”, at http://www.cs.tufts.edu/~nr/pubs/pmonad.pdf.)
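A minimal finite version of that idea can be sketched in Haskell, representing a distribution as a list of value/weight pairs. The names here (`P`, `runP`, `choose`, `probOf`) are made up for illustration and are not taken from the paper:

```haskell
-- A finite probability distribution: values paired with weights.
newtype P a = P { runP :: [(a, Double)] }

instance Functor P where
  fmap f (P xs) = P [ (f x, w) | (x, w) <- xs ]

instance Applicative P where
  pure x        = P [(x, 1)]   -- the point distribution
  P fs <*> P xs = P [ (f x, v * w) | (f, v) <- fs, (x, w) <- xs ]

instance Monad P where
  P xs >>= k = P [ (y, v * w) | (x, v) <- xs, (y, w) <- runP (k x) ]

-- "choose p a b": a with probability p, b with probability 1 - p.
choose :: Double -> a -> a -> P a
choose p a b = P [(a, p), (b, 1 - p)]

-- Probability of an event, by summing the matching weights.
probOf :: (a -> Bool) -> P a -> Double
probOf e (P xs) = sum [ w | (x, w) <- xs, e x ]
```

With this representation, reordering two independent `choose`s leaves every `probOf` answer unchanged, which is exactly the equational freedom the state-passing API forfeits.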

Please read the DDC thesis, which explains how the uniqueness-typing ASCII notation quickly gets out of hand in real-world programs, if I recall correctly.

(Forgive me if the question doesn’t make too much sense, as I am new to both.)

If the technically (but awkwardly) functional IO had its imperative essence removed at a sufficiently low level (say, the kernel), could we break out of the paradigm? Or would it always be a clever hack like monads? Certain things, like determining the current time with no inputs, are not computable. A machine needs input: either someone initially setting a time and a piece of quartz determining each next moment, or a camera recording the positions of the heavenly bodies. Both of those are non-deterministic inputs for performing something as trivial as getting the current time.

Do our realities interact with a Turing-complete computer analogously to a stack augmenting an FSA (to yield a more powerful pushdown automaton)? Do computers really express time/space as continuously sequential as we experience it, analogous to an FSA not expressing arbitrary state the way a pushdown automaton can?

Maybe we need denotative/functional models for things like clocks/time, networking I/O, random values, and device I/O. These things can provide values that can be used in Turing-complete computations, but they need not themselves be made up of Turing-complete systems.

If we are to bust out of the von Neumann paradigm, we need to denote systems beyond Turing completeness.

So in theory programs are somewhat composable, in that they’re computations that result in a value (other than ()), but in practice I don’t think this gives you much, nor have I ever seen this used in any actual Haskell program.

Hi Tom. Thanks for the correction about the type of `main` (though the `forall` version is probably not quite what you mean).

The main hindrance I see in composing Haskell programs is exactly the same as with the original Unix model, which is that the programs inseparably combine functionality and interface (I/O). I give an alternative in *Tangible Functional Programming: a modern marriage of usability and composability*. The trick is to keep functionality & interface together for usability and separable for composability.

“Second, can we imagine a world where any functional programmer can pretend to live in a stateless world? No we can’t. [...]”

Please see my comments above about self-fulfilling prophecies, hand-waving arguments, and self-deceit.

As I said, I don’t care whether you’re playing “Yes, we can” or “No, we can’t”, so long as you’re rigorous. Rigorous proofs of possibility are usually (not always) constructive: demonstrate an example. That demonstration is what I’m working toward and inviting others along, as in this post and most of my other work. Rigorous/correct impossibility proofs are usually much more profound (e.g., uncountability of the reals). The required rigor will reveal your assumptions and maybe point you to the possibility you set out to disprove. And if it turns out that you’re correct about impossibility, your rigorous demonstration will be helpful to others. Much more so than the popular practice I’ve been calling “proof by lack of imagination”.

Imagine starting up GHCi and always having available data structures representing natively all the source (parse trees?) of GHC and libraries, many images, sounds… wonderful. The ability to concentrate on manipulating information in the form most natural for the programming model (in this case objects), before having to deal with “the real world”, makes it amazingly easy to play with ideas. Addictive.

That’s the second part. Probably half the reason Smalltalk is less popular than its successors is that it (community, libraries) places less focus on connecting to the rest of the world. This is changing with the web, but…

For denotational programming, in Haskell or otherwise, I’d love to see what it would do to my computer if it transformed it in its own image; but be careful what you wish for on the way there.
