Notions of purity in Haskell

Lately I’ve been learning that some programming principles I treasure are not widely shared among my Haskell comrades. Or at least not widely among those I’ve been hearing from. I was feeling bummed, so I decided to write this post, in order to help me process the news and to see who resonates with what I’m looking for.

One of the principles I’m talking about is that the value of a closed expression (one not containing free variables) depends solely on the expression itself — not influenced by the dynamic conditions under which it is executed. I relate to this principle as the soul of functional programming and of referential transparency in particular.

Edits:

  • 2009-10-26: Minor typo fix

Recently I encountered two facts about standard Haskell libraries that I have trouble reconciling with this principle.

  • The meaning of Int operations in overflow situations is machine-dependent. Typically they use 32 bits when running on 32-bit machines and 64 bits when running on 64-bit machines. Implementations are free to use as few as 30 bits (the Haskell 98 report requires only that Int cover the range [-2^29, 2^29-1]). Thus the value of the expression “2^32 == (0 :: Int)” may be either False or True, depending on the dynamic conditions under which it is evaluated.
  • The expression “System.Info.os” has type String, although its value as a sequence of characters depends on the circumstances of its execution. (Similarly for the other exports from System.Info. Hm. I just noticed that the module is labeled as “portable”. Typo? Joke?)
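
To make the Int point concrete without depending on the build machine, one can use the explicitly sized types from Data.Int, where the wrap-around is determined by the type alone. A small sketch (GHC's wrapping arithmetic assumed):

```haskell
import Data.Int (Int32, Int64)

-- In a 32-bit Int type, 2^32 wraps around to 0:
wrapsAt32 :: Bool
wrapsAt32 = (2 :: Int32) ^ (32 :: Int) == 0

-- In a 64-bit Int type, 2^32 fits comfortably:
fitsIn64 :: Bool
fitsIn64 = (2 :: Int64) ^ (32 :: Int) == 4294967296
```

With the unsized Int, the same textual expression gets one of these two meanings depending on the platform.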

Although I’ve been programming primarily in Haskell since around 1995, I didn’t realize that these implementation-dependent meanings were there. As in many romantic relationships, I suppose I’ve been seeing Haskell not as she is, but as I idealized her to be.

There’s another principle that is closely related to the one above and even more fundamental to me: every type has a precise, specific, and preferably simple denotation. If an expression e has type T, then the meaning (value) of e is a member of the collection denoted by T. For instance, I think of the meaning of the type String, i.e., of [Char], as being sequences of characters. Well, not quite that simple, because it also contains some partially defined sequences and has a partial information ordering (non-flat in this case). Given this second principle, if os :: String, then the meaning of os is some sequence of characters. Assuming the sequence is finite and non-partial, it can be written down as a literal string, and that literal can be substituted for every occurrence of “os” in a program, without changing the program’s meaning. However, os evaluates to “linux” on my machine and evaluates to “darwin” on my friend Bob’s machine, so substituting any literal string for “os” would change the meaning, as observable on at least one of these machines.
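
As a toy illustration of the substitution property described here (the names are made up for the example):

```haskell
import Data.Char (toUpper)

-- shout is a closed expression of type String, so it denotes one specific
-- (finite, non-partial) sequence of characters...
shout :: String
shout = map toUpper "portable"

-- ...and the literal it denotes can be substituted for it anywhere in a
-- program without changing the program's meaning:
substitutable :: Bool
substitutable = shout == "PORTABLE"
```

No such literal exists for System.Info.os, which is exactly the complaint.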

Now I realize I’m really talking about standard Haskell libraries, not Haskell itself. When I discussed my confusion & dismay in the #haskell chat room, someone suggested explaining these semantic differences in terms of different libraries and hence different programs (if one takes programs to include the libraries they use). One would not expect different programs (due to different libraries) to have the same meaning.

I understand this different-library perspective — in a literal way. And yet I’m not really satisfied. What I get is that standard libraries are “standard” in signature (form), not in meaning (substance). With no promises about semantic commonality, I don’t know how standard libraries can be useful.

Another perspective that came up on #haskell was that the kind of semantic consistency I’m looking for is impossible, because of possibilities of failure. For instance, evaluating an expression might one time fail due to memory exhaustion, while succeeding (perhaps just barely) on another attempt. After mulling over that point, I’d like to weaken my principle a little. Instead of asking that all evaluations of an expression yield the same value, I ask that all evaluations of an expression yield consistent answers. By “consistent” I mean in the sense of information content. Answers don’t have to agree, but they must not disagree. Failures like exhausted memory are modeled as ⊥, which is called “bottom” because it is the bottom of the information partial ordering. It contains no information and so is consistent with every value, disagreeing with no value. More precisely, values are consistent when they have a shared upper (information) bound, and inconsistent when they don’t. The value ⊥ means i-don’t-know, and the value (1,⊥,3) means (1, i-don’t-know, 3). The consistent-value principle accepts possible failures due to finite resources and hardware failure, while rejecting “linux” vs “darwin” for System.Info.os or False vs True for “2^32 == (0 :: Int)”. It also accepts System.Info.os :: IO String, which is the type I would have expected, because the semantics of IO String is big enough to accommodate dependence on dynamic conditions.
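
The information ordering can be poked at directly in Haskell, with undefined playing the role of ⊥. In this small sketch the two partial pairs are consistent, since (3,4) is an upper bound of both, and laziness lets us observe each defined component without touching the ⊥ ones:

```haskell
-- Two partial values: each is ⊥ in one component.
p, q :: (Int, Int)
p = (3, undefined)
q = (undefined, 4)

-- p and q are consistent: (3,4) is a shared upper bound in the
-- information ordering. We can assemble it from the defined parts,
-- never forcing either undefined component:
commonUpperBound :: (Int, Int)
commonUpperBound = (fst p, snd q)
```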

If you also cherish the principles I mention above, I’d love to hear from you.

55 Comments

  1. BMeph:

    It sounds as if, among its many issues, the Haskell community should be proselytizing about comonads as much as everyone is hearing the buzz about monads.

    Also, I’d like to rephrase: "Answers don’t have to agree, but they must not [contradict]." I know, I know, it’s rather dumb for a guy as wrong as I am to try to be a pedant, but still, I think it’s a small difference, that helps distinguish not just what you’re saying, but what you mean.

    Okay, time for every one to take a deep breath (one per person, please don’t try to take someone else’s breath, that defeats the purpose…), and have a group hug. Well, except for you “touching-real-people-freaks-me-out” types, go hug a teddy bear, or something.

  2. Tracy Harms:

    Semantic consistency is dear to me. I am continually attracted to applicative and function-level programming because it allows me to keep my attention on abstractions. What you draw attention to here seems to be right at the core of this.

    You’ve proposed that we can improve things by assuring consistency, since various things keep us from relying entirely on equality. I think this will be both tractable and effective. It is advice that can be applied at several levels, e.g. language, library, and application.

    Ignorance, inaccuracy, and the limitations of approximation are unavoidable. These aspects of knowledge cannot be eliminated, overcome, or erased. They can, however, be managed to some degree. What you’re encouraging here is, I think, a way to apply appreciation of purity so that the “edges” of actual purity are available for us to reason with. The alternative may appear to be more pure, as there are no edges in sight, but since it actually involves hiding facts that affect results the purity is lost.

    Accommodating things in this way does mean incorporating implementation. For example, we thereby admit to working with representations of a finite subset of integers, rather than integers per se. Ultimately, though, something has to give. I’d rather it be the illusion of unimplemented computation that we abandon, and consistency of meaning be what we retain.

  3. Daniel Yokomizo:

    I agree with you. One of the worst features in Haskell is the lack of a proper module system. If it had something like ML’s module system we could model System.Info as a parameter of our module, allowing different runs of our program to have different results for “os”. It would be better than having “os :: IO String” which permits two calls for “os” on the same run to give different results.

    Even the module implementing Int could vary and give different results for different runs, or be fixed and consistent; we could choose whichever we prefer.

  4. BMeph:

    Also, since I don’t think I said so, “I cherish the principles”. I also believe that there’s a Huge heap of stuff in Haskell that betrays the principles for the sake of expediency, and while I’m qualifiedly okay with the betrayal, I would not mind a statement somewhere that says that the betrayal happened, mentions why it did, and why a solution to it isn’t forthcoming any time soon. Also, that if anyone has a solution to the expedient problem, that input would be welcome. I really don’t like that someone who may not realize that such betrayal is happening may be deliberately misled to believe that there is no issue, and that anyone bringing up the point is just picking a fight over nothing.

    Be expedient, sure. But be honest above all.

  5. conal:

    BMeph — Thanks for the comments!

    I’m fine with “contradict” as a more specific form of “disagree” that captures my meaning more plainly. I started with that word and then succumbed to a temptingly pretty, though more subtle (and I suppose more ambiguous), phrasing.

    I like very much what you said about honesty in the face of expedience. I’m with you — let’s honestly and explicitly violate (“betray”) our principles while we’re still in the dark about how to live them. Keep “TODO” comments and other reminders in our documentation so that we will remember and other folks will recognize the difference between what we want to say and what we know how to say. Otherwise, as you point out, some of those folks will zealously defend our mistakes when we’d rather they help us correct them.

    And yeah — group hug (including teddy bears as needed). :)

  6. Jake McArthur:

    You already know I agree with you on this.

    I think _|_ is a nearly perfect approximation for error. It just means that the computation will not converge to a useful value, which is pretty much what an error is. We even already have error in Haskell, which is just _|_ with a fancy name, so the notion is already familiar.

    I don’t think “consistent” is a really good word to describe what you want, though. The restriction might be best worded as you word the semantics for unamb: “Answers must agree unless _|_.”

  7. solrize:

    I’ve felt for a long while that Int is just plain evil and that Haskell should relegate it to a special library where it won’t be accessed by accident. Int overflows, just like buffer overflows, cause real, nasty bugs in deployed applications. And in Haskell, these bugs can enter working code rather subtly because of type inference. For example, I recently wrote some Haskell code that did some integer calculations that worked just fine (Haskell’s default numeric type is Integer). Then I changed the code slightly and it appeared to keep working just fine, but was in fact giving wrong answers. What happened is my change used (xs !! n) in an expression, which meant n had to be Int instead of Integer, and that made lots of other expressions Int as well through type inference, and that broke remote parts of the code that really needed high-precision Integers. Of course writing type annotations on more functions would have caught that quicker, and I’m trying to get in the habit of writing them, but really, automatic inference is supposed to be one of the reasons Haskell beats Java.
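
    A hedged sketch of the failure mode solrize describes (the function names here are invented): the Int index demanded by (!!) can spread through inference into arithmetic that needed exact Integers, whereas Data.List.genericIndex accepts any Integral index and lets the computation stay exact.

```haskell
import Data.List (genericIndex)

-- (!!) forces its index to Int, and inference then spreads Int to the
-- surrounding arithmetic, so n^40 silently wraps for moderately large n:
leaky :: [Int] -> Int -> Int
leaky xs n = xs !! n + n ^ (40 :: Int)

-- genericIndex accepts any Integral index, so everything stays Integer:
exact :: [Integer] -> Integer -> Integer
exact xs n = genericIndex xs n + n ^ (40 :: Int)
```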

    Other languages like Scheme, Common Lisp, and Python use bigints by default and I think Haskell’s successor should catch up with them.

  8. conal:

    Daniel — I agree that IO may be overkill. (Although IO is probably overkill in every use, considering its role as catch-all (“sin bin”).)

    On the other hand, I question whether it is really a safe assumption that the OS cannot change during execution. Part of what disturbs me about the broad acceptance of machine-dependent semantics is that I suspect a harmful, and possibly unconscious, underlying assumption, which is that a program execution takes place on a single machine, with a single architecture and OS (including bit-width). I think many programmers are already conscious of the importance of allowing a single execution to involve more than one thread, or more than one CPU core. And we Haskellers are proud that the functional programming paradigm has sufficient beauty to accommodate multiple threads and multiple cores. Now I’m suggesting that we also anticipate a single execution of a functional program running on multiple CPUs with multiple architectures & operating systems. Of course the imperative paradigm will choke on such a scenario, but the functional paradigm can sail through it gracefully — if we don’t create unnecessary stumbling blocks for ourselves.

  9. Patai Gergely:

    I have no problem thinking of Int as some kind of UnsafeInteger. After all, even unsafePerformIO has its legitimate uses. Isn’t it all about contracts from this point?

    As for the System.Info bunch, I can’t think of any good reason for them not to be wrapped in IO. The only sensible run-time use I can think of is displaying them in some way, so we need to be in IO land anyway.

  10. BMeph:

    @conal: Thinking of this, it reminds me that, since Int’s representation – and thus its op-semantics – changes on 64-bit machines, does that mean that Integer’s representation does as well?

  11. Matt Hellige:

    I cherish the principles. I understand why the principles were compromised in some cases. But just as many of Haskell’s nice innovations were motivated (read: necessitated) by a commitment to laziness (and therefore to purity), another set of innovations might be discovered if we stick a bit more strictly to semantic purity. Necessity, as they say, is the mother… But maybe we can only expect so much from one language.

    Anyway I would be deeply suspicious of someone who did not regard your examples as evils, even if necessary ones.

  12. conal:

    Another way to express my last comment: distributed execution of declarative programs is almost as easy to see coming as Y2K was. So I’d like it if functional programmers are not among those with programs built on unfortunate (and avoidable) assumptions.

  13. Raoul Duke:

    i believe i’m with you and your line of thought on this matter, for whatever it is worth, and thanks for bringing it to light.

  14. John Dorsey:

    There was some discussion about this in the cafe a few months ago. You could certainly banish IEEE types[1], Int, System.Info and similar offenders to the IO monad. IMO it carries a pretty high practicality cost when you can no longer do fast, native, “pure” operations with floats and machine-sized Ints on common platforms.

    The shameful, dirty, compromising alternative — which I prefer with hesitation — is admitting to more complicated semantics in the pure world.

    Specifically, my view is that the guarantees of purity and referential transparency apply within a limited scope, and the limitations arise from just this sort of architectural difference. System.Info.os will always have the value “darwin” — on that architecture. Under this view there’s nothing wrong with calling System.Info “portable”.

    By the way, I think you understate your case. Isn’t Int overflow a core language matter?

    Anyway, this is a nice example of the natural tension between Haskell-the-research-language-avoid-success-at-all-costs, and Haskell-the-real-world-engineering-tool.

    [1] Floats probably still break the reduced guarantees. Floats sort of carry an implicit “unsafe” warning.

  15. Robert Harper:

    Hear, hear!

  16. Daniel Yokomizo:

    @Conal: The question of what is the semantics of “os” is a very interesting one. A library can’t be entirely future-proof; we must always assume something in our representations. I would prefer to have “osName :: OSInfo -> String” and “os :: IO OSInfo”; with parametrized modules we can have “osi :: OSInfo” as a module parameter (i.e. we want the information frozen as the module was instantiated) or use “os :: IO OSInfo” in appropriate places in our program. Similar issues (e.g. number of cores) come to mind, but if we have a commitment to purity we can’t go wrong: either we have “os :: IO String” or there must be some binding able to define “os :: String” from the result of an IO call.
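
    Daniel's suggestion can be roughly sketched with a plain record standing in for an ML-style module parameter (the OSInfo type and its field are hypothetical names):

```haskell
import qualified System.Info

-- A record playing the role of the module parameter in Daniel's sketch:
data OSInfo = OSInfo { osName :: String }

-- The platform query lives honestly in IO; here it just freezes the
-- (machine-dependent) System.Info.os value into an OSInfo:
getOSInfo :: IO OSInfo
getOSInfo = pure (OSInfo System.Info.os)

-- Pure code then receives an OSInfo value explicitly, rather than
-- reading a disguised constant out of the environment:
banner :: OSInfo -> String
banner info = "running on " ++ osName info
```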

  17. conal:

    BMeph wrote

    @conal: Thinking of this, it reminds me that, since Int’s representation – and thus its op-semantics – changes on 64-bit machines, does that mean that Integer’s representation does as well?

    I’d hope that Integer would be represented as efficiently as it (correctly) can be — so yes, I’d anticipate the Integer representation being built up from Int32 on 32-bit machines and Int64 on 64-bit machines. The crucial thing here is that the semantics of Integer would be identical. Thus programs would run consistently (with each other and with the integer semantics) and efficiently on all platforms, even when distributed across different platforms.

    So this example illustrates what I mean by semantically standard libraries. Different implementation, same semantics. In contrast, the current Int situation is different implementation, different semantics. Standard in form (interface) but not substance (meaning).

  18. conal:

    Specifically, my view is that the guarantees of purity and referential transparency apply within a limited scope, and the limitations arise from just this sort of architectural difference.

    Perhaps more a choice than a view. In other words maybe a self-fulfilling prophecy, in that if you take this view, you’re likely to build semantically messy systems. Do you see a way in which diversity of execution platforms necessitates loss of referential transparency? So far I only see people’s willingness to sacrifice purity (and its pragmatic benefits) and then to justify doing so as if the sacrifice were necessary.

    System.Info.os will always have the value “darwin” — on that architecture. Under this view there’s nothing wrong with calling System.Info “portable”.

    If a program consistently acts inconsistently between execution platforms, then it’s “portable”? I’m really puzzled here. I use “portable” to mean “behaves consistently across platforms”. How do you use “portable”?

    By the way, I think you understate your case. Isn’t Int overflow a core language matter?

    I think Int comes from a library.

  19. Luke Palmer:

    I have been taking these principles very seriously recently, so here are a few points of interest.

    I’ve found zen in various combinator calculi recently, because every expression is closed. I still can’t program fluently with combinators, and I’m not sure I want to, but lambdas introduce a context-sensitivity that makes reasoning more difficult. My core calculi for Dana are combinator based for this reason.

    A program cannot intelligently manage its resources (memory + CPU), for some definition of intelligent, while guaranteeing semantic consistency everywhere, because any failure handler will not be monotone in the information content. For example, say you are implementing an interactive interpreter, and you allow the user to evaluate arbitrary expressions, but you want to put a memory cap on it because it’s a multiuser environment or something. You have to interpret the code so that you can insert appropriate checks. This seems like a lot of overhead. (For those out of the loop, IO is not an answer) Compositional management of such resources is of great interest to me, but I haven’t come to any resolution.

    Finally, a semantics for every type/closed value gets tricky when you get down and dirty with it. I think of semantics platonically; e.g. an expression of type Integer denotes a “real” integer. But platonism is a helpful myth, and all there really is is formalism. Semantics must be defined relative to some other system. In fact, a definition of semantics in another system is a proof of relative consistency. So in some way, expressions have many semantics, one for each consistent system powerful enough to encode it. Thanks to Gödel and friends, there’s no ultimate system in which you can define your semantics though…

    Sorry for being kind of all over the place. These are ideas that are in the distillation process, but it was too exciting not to share them in their half-baked form.

  20. newsham:

    What of pragmatic concerns? There are Haskell packages which are available only for some platforms. If I want to implement a compatibility library that hides such features I will need to know which system I am using to patch out the differences appropriately. Would you suggest using IO String to fetch this information? And then what? That gives a snapshot of a single execution environment at that point in time, but any future execution I perform based on that information will happen at a different point in time. If the execution environment isn’t guaranteed to remain the same, then I have serious problems. In other words, there’s a deeper problem here than making the OS type a String. Taking a step back, what you’re really asking for is a platform for distributed heterogeneous computing. This is a platform in its own right, unique from darwin, linux or anything else. The components of this platform will have to hide the effects of moving code between different systems. I would argue that this is the layer where the OS name should be handled. Once a platform is fixed, it fixes parameters such as the OS name, and those can truly be constant for that invocation of your program (but aren’t constants for all invocations of your program, which is what you seem to be after).

  21. claus:

    I question whether it is really a safe assumption that the OS cannot change during execution.

    Oh, how long I’ve been waiting for Haskell to become mobile. But despite promising starts, it hasn’t happened yet (as usual, Clean is somewhat ahead in this..). You might also be interested in the resources and mailing list at

    Research in Theory for Mobile Processes http://www.it.uu.se/research/group/mobility .

    However, with this focus, your quest becomes more interesting: you’re no longer worrying about [| |] :: Syntax -> Semantics really being [| |] :: Syntax -> Machine -> Semantics behind the scenes, on individual machines, but about the prevalent port :: Platform -> Syntax -> Semantics (with ports resulting in machine-specific instances of traditional [| |] :: Syntax -> Semantics) not being useable, in a consistent form, if a single program covers multiple platforms.

    But then, it no longer matters much whether os is a String or another enumeration. We could structure our code as Platform -> Maybe Program, but that would only capture whether or not our code runs at all on a given platform, not the differences in meaning you started out with. For that, it seems one can either make the programs aware of the platform boundaries and put the burden of translating between platform-x-view-of-the-world and platform-y-view-of-the-world whenever data or code is exchanged between x and y on the programmers’ shoulders; or one can try to ensure a common view of the world for all x, y, ..

    Which brings us to

    Now I’m suggesting that we also anticipate a single execution of a functional program running on multiple CPUs with multiple architectures & operating systems. Of course the imperative paradigm will choke on such a scenario, but the functional paradigm can sail through it gracefully — if we don’t create unnecessary stumbling blocks for ourselves.

    You might want to check out Squeak, http://www.squeak.org/About/ , which provides

    “A fast virtual machine written in a subset of Squeak, making it easy to ensure the same behavior on supported platforms”.

    This looks like the kind of basis you are looking for. Which, in turn, enabled Croquet, http://www.opencroquet.org/index.php/Overview ,

    “Croquet is a powerful new open source software development environment for creating and deploying deeply collaborative multi-user online applications on multiple operating systems and devices. Derived from Squeak, it features a peer-based network architecture that supports communication, collaboration, resource sharing, and synchronous computation between multiple users on multiple devices. Using Croquet, software developers can create and link powerful and highly collaborative cross-platform multi-user 2D and 3D applications and simulations – making possible the distributed deployment of very large scale, richly featured and interlinked virtual environments.”

    Which looks like the sort of distributed programming environment you suggested.

    I’m afraid we functional programmers have some catching up to do..

  22. Jason Dusek:

    @conal:

    Not too long ago, I actually wrote an email to the list about that very value, System.Info.os, as I also found the matter a little baffling. The same expression should yield the same value, yes? However, it is not really the same expression. It is the same text string. On one computer, it names one expression; on another, another. In short, my Haskell compiler doesn’t know what those other Haskell compilers are doing and remains in a state of blessed purity due to that fact.

    This is more about a trusted platform than it is about language semantics. The language semantics are (at present) enforced by the individual compiler on the individual computer; only by feeding all the (identical) compilers the same data can we be sure they are compiling the same way. This boils down to a gentleman’s agreement or some kind of certification scheme. I can not actually place any trust in either of these arrangements.

    In a comment, you say “Now I’m suggesting that we also anticipate a single execution of a functional program running on multiple CPUs with multiple architectures & operating systems. Of course the imperative paradigm will choke on such a scenario, but the functional paradigm can sail through it gracefully — if we don’t create unnecessary stumbling blocks for ourselves.” Before I can imagine this scenario, I need to imagine a situation where I trust all those computers. As for the imperative paradigm “choking” on it, well — it’s actually this kind of system that message passing systems were designed to handle. It is shared state, not imperative programming per se, that trips us up in the distributed setting.

  23. Simon Marlow:

    I’m not sure which side of the fence I fall on here, but I’d like to note that the “problem” is perhaps more widespread than is immediately obvious. Here are some more examples I found, apart from the aforementioned Int and System.Info.os:

    • The Float and Double types, and hence floatRadix, floatDigits etc.
    • The whole System.FilePath library: it is full of non-IO functions that behave differently on Windows and Unix.
    • Foreign.sizeOf, Foreign.alignment, and everything in Foreign.C.Types

    I’m out of time, but that’s what I found in a quick scan of the libraries that come with GHC.
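
    For the System.FilePath case, the filepath package that ships with GHC does offer a way back to fixed denotations: both platform variants exist as separate pure modules, and only the unqualified System.FilePath re-export varies with the build platform. A small sketch:

```haskell
import qualified System.FilePath.Posix as P
import qualified System.FilePath.Windows as W

-- Each qualified variant is a fixed, platform-independent pure function:
separators :: (Char, Char)
separators = (P.pathSeparator, W.pathSeparator)   -- ('/', '\\')

-- The same path string means different things under the two conventions;
-- making the convention explicit restores a single denotation:
absCheck :: (Bool, Bool)
absCheck = (P.isAbsolute "/tmp/x", W.isAbsolute "/tmp/x")
```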

  24. conal:

    @Jake,

    I don’t think “consistent” is a really good word to describe what you want, though. The restriction might be best worded as you word the semantics for unamb: “Answers must agree unless _|_.”

    I want the semantics of lub (a more general variant of unamb, equivalent for flat domains), which I think is exactly “consistent”, i.e., having a common upper information bound. For instance, (3,⊥) and (⊥,4) are consistent.
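
    A toy stand-in for lub on pairs, for flavor only: it takes each component from whichever argument is assumed to define it. Unlike the real lub (which needs unamb-style concurrency to make the choice safely at run time), this version fixes the choice statically, so it is only meaningful when the arguments really are consistent and defined where expected.

```haskell
-- Hand-rolled "merge" that mimics lub on a pair of consistent values,
-- taking the first component from the left argument and the second from
-- the right. Valid only under the consistency assumption stated above.
mergePair :: (a, b) -> (a, b) -> (a, b)
mergePair (a, _) (_, b) = (a, b)

-- (3,⊥) and (⊥,4) are consistent; their upper bound (3,4) is observable:
example :: (Int, Int)
example = mergePair (3, undefined) (undefined, 4)
```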

  25. John Dorsey:

    @conal: Regarding choice/view. Of course I’m willing to accept messy semantics. I’m a software engineer! What should I do rather than prefer the best descriptive semantics for the best tools I have? I anxiously await the usable system which exhibits better semantics. Is there a critical flaw in my suggestion of “scoped purity”?

    Regarding portability, I am sympathetic to the tendency to define it in absolute terms, allowing for no variation in behavior/semantics/what-have-you. And I really do (and did) understand where you’re coming from. But it doesn’t reflect any common understanding of portability.

    Portable code is code that can be easily and usefully moved between host environments. Your definition of “portable” looks like your definition of “pure”. Do you consider IO code unportable? It certainly can’t be said to act consistently between different platforms. Incidentally, it wouldn’t bother me to see System.Info pushed into IO. I am bothered by the idea of pushing Int into IO. A lot of code that is useful, and pure under my model, wouldn’t be pure under yours. That’s my practical justification.

    Regarding Int, the overflow behavior is defined in the Haskell Report Section 6.4 (Predefined Types and Classes), which is in Part I (Language). So in that sense it’s in the core language.

  26. conal:

    @Jason:

    The same expression should yield the same value, yes? However, it is not really the same expression. It is the same text string. On one computer, it names one expression; on another, another.

    I guess I don’t know what an “expression” is to you. To me parsing is a function, so if I start with the same string, I’m going to get the same expression. Unless you mean you have different languages, e.g., Haskell and Standard ML.

    The language semantics are (at present) enforced by the individual compiler on the individual computer; only by feeding all the (identical) compilers the same data can we be sure they are compiling the same way.

    I guess that’s a fair description of what I’m suggesting we fix. Because we have not defined Haskell’s meaning precisely and implementation-independently, implementations can and do disagree. If, like Standard ML, we actually defined what the language means (i.e., a “standard” in substance as well as in form), and if we do the same for our “standard” libraries, then when implementation inconsistencies are found, we submit bug reports and implementations get fixed.

  27. conal:

    @John:

    Of course I’m willing to accept messy semantics. I’m a software engineer! What should I do rather than prefer the best descriptive semantics for the best tools I have? I anxiously await the usable system which exhibits better semantics.

    I see: we’re playing different roles at the moment. You’re adapting to the tools you’re given. I’m suggesting changing them to bring our practice more in line with our theory.

    Is there a critical flaw in my suggestion of “scoped purity”?

    For your current role, no. For mine, yes.

    Portable code is code that can be easily and usefully moved between host environments. Your definition of “portable” looks like your definition of “pure”. Do you consider IO code unportable? It certainly can’t be said to act consistently between different platforms.

    IO can be consistent in my definition, because I’m talking about denotation. The denotation of IO itself already includes access to “the dynamic conditions under which it is executed”. In contrast, I’d prefer that types like String and Bool have a much simpler semantics. Do you see the difference?

    I believe that the denotational simplicity of “pure” types is the very heart of why functional programming has useful mathematical properties and composes well (is “good for equational reasoning”); and the relative denotational complexity of imperative programming (e.g., IO in Haskell) is exactly why imperative programming lacks these properties, as Backus discussed in his Turing Award lecture.

    And I really do (and did) understand where you’re coming from. But it doesn’t reflect any common understanding of portability.

    I hope that when Haskellers stamp their modules as “portable”, they mean more than that their “code that can be easily and usefully moved between host environments.” I just did a quick survey on #haskell and learned that there are a variety of interpretations. Some folks mean semantic consistency, and some just mean whether a library operates at all in various contexts. Some mean across compilers and some mean across execution platforms. Perhaps we’ll hear some more interpretations and intentions around the “portable” label for Haskell modules.

  28. Duncan:

    I think the explanation of modules as functors is a good solution to the semantic problem. It explains the meaning of System.Info.os :: String. That doesn’t mean it is satisfactory; we still have the issue that there is an environment that maps module names to module meanings.

    In GHC this environment is kept in the ghc-pkg database. End users can change that mapping (by installing different packages) and thus change the meaning of your code by composing it with different modules as imports. The mapping is also system dependent by default. On Windows we typically have a different module meaning for the System.Info module name than we do on Linux etc.

    I think this module/functor explanation is much better than trying to explain that the denotation of Bool is actually a MachineInfo -> Bool. It also means your closed expressions have a fixed denotation. The fact is, however, that you very rarely have any closed expressions. All the imported functions and types in your program are really free variables that are substituted on the end user’s machine when their package manager composes a (closed) program out of packages/modules. That does make it hard to describe the denotation of some expression in isolation.

    As a practical matter we should try to avoid system-dependent module implementations living under the same module name. It’s hard to banish completely however. We’re not going to stop people writing new package versions that export the same module names. That example is of course less bad than the Prelude being different between different systems. Both examples are part of the same continuum though.

  29. Carlton Mills:

    I began programming on the vacuum-tube IBM 650, but I am a total newbie in Haskell. Therefore I can speak with the absolute certainty that is only available to the ignorant.

    Does “avoid success at all costs” mean “avoid usefulness”? That is the question. Anyone who writes “2^32 == (0 ::Int)” without checking the implementation should not be allowed to program anything useful; however, one should be able to presume that “2^30 == (0 ::Int)” is OK (sometimes computer science types play funny games with the high-order bits). Ideally one would have library interfaces that behave the same on all platforms for all time. However, the authors of the library will need to use platform-specific primitives and flow of control. Do we want to be able to write those libraries in Haskell? If we do, then the Haskell code will need to know platform specifics.

    Software engineering has been a long process of creating consensus understandings of what computable objects are and how to manipulate them. This process is ongoing. There is a consensus on what variables of type Int32 and Int64 are (ultimately, whatever Intel says they are), and that a value of type Int covers at least the range [-2^29, 2^29 - 1]. There used to be a consensus that a String is [Char], but no more. With all the international alphabets and conventions, string manipulation is to be approached with great care; only the most trivial string functions will work as written a decade from now. The type String is an amorphous concept from never-never land. This is just current reality.

    The Haskell type system includes types that will behave the same on all platforms for all time; and types that correspond to our current consensus, Int for instance. As a software engineer, rather than a category type theorist, it looks to me like Haskell is going to become more and more important and useful in the future. I can create types that will behave the same for all time (a sort of algorithmic purity) and types that incorporate platform specifics; and I can keep them separate. Haskell is practical purity.

  30. sclv:

    @Conal:

    I totally agree on the Int issue — obviously machine specific representations should be around when efficiency is at a premium, but for the other 99% of the time, and in the core libraries, prelude, etc. it would be better to use a guaranteed sound representation. Even if it isn’t an issue to most folks now, it’ll necessarily be a source of bugs at some point.

    Floats and Doubles are way off in magicland anyway, so I’d almost give them a pass, although a crippled, no IEEE, uniform floating type wouldn’t be awful.

    As for the FilePath library, I think it’s better if we treated filepaths as abstract data types uniformly anyway — the existence of string functions for them at all only encourages bad code.

  31. sclv:

    Oh I should also note though that, perhaps wrongly, I feel like in all real programming languages, “referentially transparent” is more like “tasty” than “on fire” — i.e. it is not a binary proposition, but a quality.

  32. conal:

    @sclv:

    I totally agree on the Int issue — obviously machine specific representations should be around when efficiency is at a premium, but for the other 99% of the time, and in the core libraries, prelude, etc. it would be better to use a guaranteed sound representation. Even if it isn’t an issue to most folks now, it’ll necessarily be a source of bugs at some point.

    Thanks. I’m suggesting even more: go ahead and use a variety of representations to make fast implementations, as long as the observable semantics is fully consistent. For instance, build the Integer representation out of Int32 for 32-bit platforms and out of Int64 for 64-bit platforms, and ensure that these two Integer implementations are denotationally identical.
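
    In concrete terms (a small sketch of the contrast; the Int result is precisely the machine dependence at issue):

    ```haskell
    -- Machine-dependent: True with a 32-bit Int, False with a 64-bit Int.
    intOverflow :: Bool
    intOverflow = 2 ^ 32 == (0 :: Int)

    -- Consistent on every platform: Integer is arbitrary-precision,
    -- regardless of which fixed-width digits it is built from underneath.
    integerConsistent :: Bool
    integerConsistent = 2 ^ 32 == (4294967296 :: Integer)
    ```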

    Floats and Doubles are way off in magicland anyway, so I’d almost give them a pass, although a crippled, no IEEE, uniform floating type wouldn’t be awful.

    As for the FilePath library, I think it’s better if we treated filepaths as abstract data types uniformly anyway — the existence of string functions for them at all only encourages bad code.

    I’m with you there. As I understand it, the point of differing FilePath implementations is exactly to reach semantic commonality. Not that the underlying string representations should be identical, but rather that they denote semantically consistent values in terms of file system operations. Or, as you said more succinctly, use data abstraction.
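
    A minimal sketch of that data-abstraction idea (the names here are hypothetical illustrations, not the actual System.FilePath API):

    ```haskell
    -- A path as an abstract value: a sequence of segments. In a real
    -- library the constructor would be hidden behind the export list,
    -- so clients could never observe an OS-specific string form.
    newtype Path = Path [String]
      deriving (Eq, Show)

    fromSegments :: [String] -> Path
    fromSegments = Path

    combine :: Path -> Path -> Path
    combine (Path xs) (Path ys) = Path (xs ++ ys)

    -- Only at the IO boundary would a platform-specific renderer
    -- (joining with '/' or '\\') turn a Path into a concrete string.
    ```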

  33. conal:

    @Duncan: Thanks for the comments. I’d like to offer a distinction between parameterizing over differing implementations of a module, vs over different semantics. With the principle of semantic consistency, we can see an expression like “(2 + 3) :: Integer” as (semantically) closed, rather than as one parameterized by arbitrary possible meanings of 2, 3, fromInteger (from desugaring of literals), (+), and Integer. This way, module implementations are given not just a signature to follow but also a meaning for the signature. Those implementations are correct exactly if they satisfy the signature and that semantics. Substance, as well as form.

  34. Duncan:

    Certainly we can have different module implementations that provide the same semantics and when we do that it’s usually for very sensible reasons and makes life easier etc.

    My point, I suppose, was that the explanation of modules accounts both for module names mapping to different module semantics and a module name always mapping to the same module semantics. There is then a continuum of cases where module names mapping to the same or different semantics in different environments is more or less important. Clearly it is helpful if core modules like the name Prelude always map to the same semantics — it lets us treat “(2 + 3) :: Integer” as if it were closed. That is where the objection about Int comes in. In other cases it is extremely useful to change the meaning associated with a module name (it lets us fix bugs in libraries without having to change the programs that use them).

    There are obviously lots of details about fixed or flexible meanings associated with particular module names where reasonable people will differ but my main point is that there isn’t a big semantic hole as we first worried.

    When it comes to the details I would in general agree with you. I would prefer semantic consistency. I’m not sure that FilePath being different between platforms is the ideal solution. Although, yes, it is trying to achieve some approximation of consistency (when considered as an ADT), there are inevitably differences because the semantics of FilePaths are actually different between operating systems (even if we never look at the string representation). It would be nicer to genuinely parametrise file system handling code by the FilePath module, as one can do in ML. We could select the module to use based on a function in IO that tells us what OS we are talking to now.
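
    Lacking ML-style functors, one rough approximation in Haskell is a record of operations standing in for the module, with the implementation selected from dynamic information in IO (a hypothetical sketch, not an existing library):

    ```haskell
    import Data.List (intercalate)
    import System.Info (os)

    -- A "module signature" for path handling, as a record of operations.
    data PathOps = PathOps
      { separator :: Char
      , render    :: [String] -> String
      }

    posixOps, windowsOps :: PathOps
    posixOps   = PathOps '/'  (intercalate "/")
    windowsOps = PathOps '\\' (intercalate "\\")

    -- Selecting the "module implementation" happens in IO, keeping the
    -- platform dependence where the denotation already allows for it.
    currentPathOps :: IO PathOps
    currentPathOps = pure (if os == "mingw32" then windowsOps else posixOps)
    ```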

    As for Int, sadly it is the case that some relatively low level pure code runs an order of magnitude faster using Int than Integer and there are no obvious ways of improving that. If it were not the case then we could certainly banish Int. While it remains the case it seems essential to provide Int or an equivalent. Whether Int is overused generally is a different matter. There is certainly a reasonable argument for moving Int out of the Prelude and into a module that you use if you need the performance and are prepared to deal with the more tricky semantics.

  35. Jason Dusek:

    @conal

    To me parsing is a function, so if I start with the same string, I’m going to get the same expression.

    In practice, it is a function from compiler and string to a value. Were we to adopt a multi-versioning policy — for example, files should have a version pragma in the header — then we could reasonably expect to evolve the language definition gradually while not altering the former meaning of modules “silently”.

  36. conal:

    Conal wrote

    To me parsing is a function, so if I start with the same string, I’m going to get the same expression. Unless you mean you have different languages, e.g., Haskell and Standard ML.

    Jason replied

    In practice, it is a function from compiler and string to a value. Were we to adopt a multi-versioning policy — for example, files should have a version pragma in the header — then we could reasonably expect to evolve the language definition gradually while not altering the former meaning of modules “silently”.

    When you describe parsing as compiler-dependent, do you really mean language(-version)-dependent? If so, I think we’re in agreement. If not, then you & I may be operating in different conceptual frameworks, and thus talking about different worlds, rather than disagreeing about the same world.

    Just to be extra-clear, in the framework in which I’m working, languages are distinct from implementations of languages. A language has concrete/surface syntax (as strings), abstract syntax (trees), and (denotational) semantics (mathematical values). An implementation of a language is correct when it agrees with (or at least is consistent with) the (implementation-independent) semantics. There can be a family of similar languages, which one might call “versions”. Still, each member of the family would be distinct from implementations of that member (language version).

  37. wren ng thornton:

    I believe in these values. I took them seriously when I came to Haskell, and that seriousness has been monotonically increasing ever since (and exponentially so of late).

    When I discussed my confusion & dismay in the #haskell chat room, someone suggested explaining these semantic differences in terms of different libraries and hence different programs

    I think the idea that these semantic differences need any explanation is begging the question. Semantics should, as much as possible, be proof-irrelevant. Proof-irrelevance means we’ve defined things sufficiently apart from actual implementations that we can say the semantics are well and truly denotational.

    I also believe that there’s a Huge heap of stuff in Haskell that betrays the principles for the sake of expediency, and while I’m qualifiedly okay with the betrayal, I would not mind a statement somewhere that says that the betrayal happened, mentions why it did, and why a solution to it isn’t forthcoming any time soon.

    +1.

    I bemoan how lax we seem about living up to the claim to purity, but —to be brutally honest— that laxity is some part of what I like about Haskell. It has a certain whipuptitude that is lacking in the offerings of other “correct” languages. What I’d like most is to get rid of this caveat. Providing maps to where the bodies are buried (and why) is a crucial first step.

    One idea I’ve been thinking a lot about recently, which I’ve had difficulty selling to others, is the idea of moving from defining one language to defining a family of sub-languages where code can be distinguished by what infelicities it has permitted. That is, the semantic purity of code should always be recognizable (and enforceable via compiler flags), but impurities might not be expressly forbidden in the largest sub-languages. (This is orthogonal to monadic purity of values, or families of language via language extensions.) A total functional language would be at the core, with the introduction of ⊥ being the first extension. Other infelicities like IEEE-754 and the existence of non-⊥ exceptional values would be much further up the hierarchy. The sin-bin monad would only exist at the uppermost level (thus more sane parts of IO like seq and memory-regions could be separated and used in code that doesn’t completely abandon the notion of semantic consistency). This way we can tighten the straps on the straitjacket as much as we dare, without strangling clients of our code.

    Giving up on whipuptitude is a high price to pay and for some applications the burden of correctness may be too much; but for libraries and especially for core libraries, compilers, and operating systems it is a burden that we must bear. While the hackers and the v&v folks are both right in their own ways, they’re also both wrong: it’s not a binary choice. What we need is a hybrid system that can give strong guarantees about parts of a program, without demanding perfection of the whole (unless asked to).

  38. Jason Dusek:

    Conal wrote:

    When you describe parsing as compiler-dependent, do you really mean language(-version)-dependent?

    Yes, that is what I mean.

    …in the framework in which I’m working, languages are distinct from and implementations of languages.

    In practice, though, compiler developers tend to evolve the language. The approach taken by GHC at present — to mark such evolution with pragmas — could reasonably be extended to library collections as well. (For example, variants of the upcoming “Haskell platform” could be so marked.)

    As you point out in your post, for all intents and purposes, standard libraries form much of the semantics of the language. A disciplined and thorough approach to identifying language subsets would lay the groundwork for standardization of the scope that you envision without leaving Haskell stuck in the lamentable state of, say, C++ or FORTRAN.

    I do not think it is reasonable to hope that the string Data.ByteString.ByteString should mean the same thing for all time. I think it is quite reasonable, however, to have a way to disambiguate different usages that is not quite as fine and narrow as specifying library versions. An “algebra of platforms” is called for.

  39. Improved Means For Achieving Deteriorated Ends / All Over Everywhere:

    […] of Haskell, Conal Elliott’s blog hosted a really good discussion on what portability means in terms of semantics, GHC 6.10.2 was released and the Haskell-Platform Mailing List put out a call for volunteers to […]

  40. Jason Dusek:

    wren ng thornton wrote:

    One idea I’ve been thinking a lot about recently, which I’ve had difficulty selling to others, is the idea of moving from defining one language to defining a family of sub-languages where code can be distinguished by what infelicities it has permitted.

    +1

  41. Paul Chiusano:

    Conal, what would your preferred way be of accessing information like the OS name from within a Haskell program? Changing the return type to IO String doesn’t seem to fix anything – it is still not referentially transparent according to your definition… or is it? (If you claim it is, then explain…)

  42. conal:

    Hi Paul. The answers to your questions follow from the second principle in my post. For each candidate type, ask yourself whether its meaning (denotation) has room for what you want to put into it. Pick the semantically simplest type for which you can answer “yes”. Remember that the meaning of a type has nothing to do with machines & implementations. It’s math.

    As for referential transparency in particular, in a poll on #haskell, I discovered that different people have different notions, and some had no specific notion in mind, even though they’re sure that Haskell is referentially transparent. Which meaning do you have in mind?

  43. nolrai:

    So the values depend on where it’s executed, not what it’s compiled for? (If it is the second, then it seems fine to me, am I wrong about that feeling?)

    Conal, your response to Paul is not particularly enlightening to me.

    What type should the OS name have?

  44. conal:

    Conal, your response to Paul is not particularly enlightening to me.

    Hi Nolrai,

    In responding to Paul, my intention was not to enlighten, but to point Paul and you and others down a path on which you can find enlightenment. If you follow the process and come to the answer yourself, I expect you’ll understand what I’m saying here more deeply (and thus transferably) than if I were to spell it out. (It’s not enough to just read the homework. One must actually do the homework.)

    So the values depend on where it’s executed, not what it’s compiled for? (If it is the second, then it seems fine to me, am I wrong about that feeling?) […] What type should the OS name have?

    Again, the answers follow from the second principle in my post. The most important step will be getting to a precise question (defining/replacing “fine” and “should” in this context). Then I think you’ll be close to finding a clear (and thus opinion-free) answer.

    If you don’t understand something that I’ve said here, feel free to ask for clarification about it.

  45. Paul Chiusano:

    Conal,

    You said of RT: “the value of a closed expression (one not containing free variables) depends solely on the expression itself — not influenced by the dynamic conditions under which it is executed.”

    By this definition, even getStr :: IO String is not RT, since its value clearly depends on the dynamic conditions under which it is executed. Likewise if you change System.Info.os to return IO String.

    One thought I had is that it may be better to think of a Haskell program as not being fully closed – that is, there are certain free variables that are bound only when executing a Haskell program on a particular machine. For instance, the functions specifying the details of Int arithmetic could be considered free variables in a Haskell program. When we decide to run a Haskell program on some specific system, what we are really doing is supplying values for these free variables and running the resulting closed expression.

  46. conal:

    Hi Paul,

    You said of RT: “the value of a closed expression (one not containing free variables) depends solely on the expression itself — not influenced by the dynamic conditions under which it is executed.”

    By this definition, even getStr :: IO String is not RT, since its value clearly depends on the dynamic conditions under which it is executed. Likewise if you change System.Info.os to return IO String.

    Does the value of getStr depend on dynamic conditions? How can you tell? What do you mean when you say that one action (IO value) is the same as or is different from another action?

    One thought I had is that it may be better to think of a Haskell program as not being fully closed – that is, there are certain free variables that are bound only when executing a Haskell program on a particular machine. For instance, the functions specifying the details of Int arithmetic could be considered free variables in a Haskell program. When we decide to run a Haskell program on some specific system, what we are really doing is supplying values for these free variables and running the resulting closed expression.

    I get a queasy feeling about this suggestion. I worry about arbitrariness sneaking into our semantic model. Why bind variables when assigning Haskell code to a particular machine? Why not bind whenever processing changes threads or cores? Or hours? The meaning of my code could change every hour, on the hour. Or moment-to-moment, as in consistently non-RT languages. Moreover, consider distributed execution, as mentioned previously in comments. I really do believe that intelligently transient distributed execution will become the norm. And functional languages could be ready, unless we make some semantically incautious choices and fail to fix the ones we’ve already made.

    Besides, functional languages already have an elegant and semantically sound way to express values that depend on dynamic information: functions. If we want code that depends on choice of OS, machine bit width, or time of day, I’d be a lot more comfortable defining functions over arguments of those types.
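
    For instance (a sketch; the OS type and the configDir function are hypothetical illustrations):

    ```haskell
    -- A semantically simple, explicit domain of operating systems.
    data OS = Linux | Darwin | Windows
      deriving (Eq, Show)

    -- OS-dependent behaviour as an honest function of the OS, rather
    -- than a String whose meaning varies with the machine it runs on.
    configDir :: OS -> String
    configDir Linux   = "~/.config"
    configDir Darwin  = "~/Library/Application Support"
    configDir Windows = "%APPDATA%"
    ```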

  47. Paul Chiusano:

    Conal,

    I think it would be more useful if you would just give your argument as to why getStr is still RT according to your definition rather than trying to use the socratic method over this comment thread. :)

    Let me give my argument as to why getStr (and other functions that have side effects) is RT. I think I have a more limited definition of RT: to me, an RT function is one with the property that any call to it can be replaced with its result without affecting the semantics of the program. I think of this as dictating that the function must have all its behavior reflected in its return value – there is no other way for it to propagate information to the rest of the program. It means that you can reason about program behavior by simple substitution – just replace each function call with the value that call refers to.

    Since Haskell forces you to thread a single IO value through all functions that perform IO, there is no way for the program to detect that values of type IO actually do have side effects. I imagine functions that perform IO as returning a “World” object, which is then passed as an argument to the next function that performs IO, which returns a new World object, etc. We can pretend that the IO values that are chosen dynamically were in fact chosen ahead of time, before we even ran our program.
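
    That world-passing picture can be written down as a toy model (only an illustration of the mental model being described here; as discussed later in the thread, it is not a faithful account of Haskell’s actual IO):

    ```haskell
    -- A toy world-passing model: an "action" is a pure function from a
    -- world state to a result paired with a successor world state.
    data World = World { pendingInput :: [String], producedOutput :: [String] }

    newtype Action a = Action { runAction :: World -> (a, World) }

    getLineSim :: Action String
    getLineSim = Action $ \w -> case pendingInput w of
      (l : ls) -> (l, w { pendingInput = ls })
      []       -> ("", w)

    putLineSim :: String -> Action ()
    putLineSim s = Action $ \w -> ((), w { producedOutput = producedOutput w ++ [s] })

    -- Sequencing threads the world through explicitly.
    andThen :: Action a -> (a -> Action b) -> Action b
    andThen (Action m) k = Action $ \w ->
      let (a, w') = m w in runAction (k a) w'
    ```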

  48. conal:

    Paul Chiusano wrote:

    I think it would be more useful if you would just give your argument as to why getStr is still RT according to your definition rather than trying to use the socratic method over this comment thread. :)

    First, I’m picking up something of a bait-and-switch going on here (unintentional, I presume). I didn’t say that getStr is RT (referentially transparent), nor that I was giving a definition of RT.

    It was you, not I, who claimed that getStr is RT. I wouldn’t make a claim either way, because I don’t know what RT can mean for IO.

    Usually referential transparency is defined as meaning that a subexpression can be replaced by its value without change in meaning. This definition is problematic in general, since expressions and values are different sorts of things. Some types have a sort of normal form, which could be considered “values”. For instance, for numeric types, we have numerals/literals. For IO in particular, I don’t know what plays the role of “literals”.

    Another definition of referential transparency is that any subexpression can be replaced by another expression that has the same value/semantics, without changing the meaning/value of the containing expression. Again there’s some subtlety. When we say that two (sub)expressions do or do not “have the same value”, what do we mean by “same”? For the statement to be meaningful, we have to define semantic equality. I don’t know of such a definition for IO. Perhaps a compelling definition could exist, but perhaps not.

    IO carries the collective sins of our tribe, as the scapegoat did among the ancient Hebrews. Or, as Simon Peyton Jones expressed it, “The IO monad has become Haskell’s sin-bin. Whenever we don’t understand something, we toss it in the IO monad.” (From Wearing the hair shirt – A retrospective on Haskell.) Is it likely that we can then come along later and give a compelling and mathematically well-behaved notion of equality to our toxic waste pile? Or will it insist on behaving anti-socially, as our own home-grown Toxic Avenger?

    Paul continues:

    Let me give my argument as to why getStr (and other functions that have side effects) is RT. I think I have a more limited definition of RT: to me, an RT function is one with the property that any call to it can be replaced with its result without affecting the semantics of the program. I think of this as dictating that the function must have all its behavior reflected in its return value – there is no other way for it to propagate information to the rest of the program. It means that you can reason about program behavior by simple substitution – just replace each function call with the value that call refers to.

    First, I’d simplify/generalize away the role of functions here, and talk about expressions (including function applications) rather than functions as RT. Something of a nit-pick I realize, but I don’t know how else to get to clarity (and beyond opinions) on issues like this one. As David R. MacIver said in “A problem of language”,

    Of course, once you start defining the term people will start arguing about the definitions. This is pretty tedious, I know. But as tedious as arguing about definitions is, it can’t hold a candle to arguing without definitions.

    Since “getStr” is not a function but an expression, we’ll probably want a definition that applies to expressions.

    Next, there are the troubling (to me at least) questions I raised above: What is the value of an IO sub-expression? How could such a value be substituted into a program? What does it mean to say whether or not the meaning of new expression is equal to the meaning of the old expression?

    Continuing,

    Since Haskell forces you to thread a single IO value through all functions that perform IO, there is no way for the program to detect that values of type IO actually do have side effects. I imagine functions that perform IO as returning a “World” object, which is then passed as an argument to the next function that performs IO, which returns a new World object, etc. We can pretend that the IO values that are chosen dynamically were in fact chosen ahead of time, before we even ran our program.

    I’ve read and re-read this paragraph, trying to sort it out, particularly how you’d believe that “Haskell forces you to thread a single IO value through all functions that perform IO” or that “values of type IO actually do have side effects”.

    My best guess is that you’re confusing “IO values” with the hypothetical World values that GHC uses in its implementation of IO, and confusing “functions that perform IO” with IO values themselves. I don’t know whether the confusion is in your language or your thinking, or whether I’m just somehow missing what you’re saying here.

    By the way, GHC’s World-passing representation of IO is just an implementation hack. It’s not a sound denotational model of Haskell IO, because it cannot account for concurrency. It somehow became popular to think that the World type was something more than an implementation hack.

    There is a popular, and much simpler, argument about the referential transparency of IO in Haskell, which I think is very different from what you’re saying. In this perspective, the reason that “there is no way for the program to detect that values of type IO actually do have side effects” is simply that they honestly do not have side-effects. No lie necessary. IO values don’t do anything; they simply are. It is only a particular interpretation of those values that gives rise to side-effects. Sometimes people get confused about this distinction or think it’s some kind of strange magic or con game. I liken it to the situation with simple types like Boolean and Integer. The values denoted by “True” and “3+4” have no side-effects, and yet the interpretation called “print” will indeed lead to a side-effect.
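
    That distinction is easy to demonstrate: we can construct and combine IO values all day without a single effect occurring (a small sketch):

    ```haskell
    -- Constructing an IO value causes no effect; it simply is a value.
    greet :: IO ()
    greet = putStrLn "hello"

    -- Pure manipulation of IO values: still no effects anywhere.
    greetings :: [IO ()]
    greetings = replicate 3 greet

    howMany :: Int
    howMany = length greetings  -- 3, and nothing has been printed

    -- Effects arise only when an interpretation (being bound to main,
    -- ultimately run by the runtime system) is applied to an IO value.
    ```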

    Thus, many claim, IO in Haskell is a well-behaved, purely functional type, and Haskell’s chaste status as purely functional is preserved. I’m not a big fan of this line of thought. As I explained elsewhere, similar reasoning shows that The C language is purely functional.

    While this viewpoint avoids disproving referential transparency of IO in Haskell, it doesn’t demonstrate referential transparency, because our definitions of referential transparency depend on equality, and we don’t have a definition of equality for IO. Maybe we can come up with a definition that is consistent with concurrency and with everything tossed into the sin-bin so far, and yet is somehow sound and compelling.

  49. Luke Palmer:

    I always interpreted RT as the property that one may replace a symbol by its definition without affecting semantics. That applies to the pure fragment of Haskell, including pure functions manipulating the IO type and using its combinators. However, there are quite a few IO “primitives”, such as getChar and newIORef. RT does not apply to these primitives, because they have no definition.

    The pure lambda calculus — just abstraction and application — is completely RT. Every symbol must have a definition (it is equivalent to a program where every “symbol” is bound by a lambda). The very second you add something new — numbers/addition, booleans/if-then-else — it’s likely you have introduced symbols with no definition. They don’t necessarily break RT, that term just doesn’t mean anything for those symbols.
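
    In concrete terms (a minimal sketch of that reading of RT):

    ```haskell
    -- A symbol with a definition...
    twice :: (a -> a) -> a -> a
    twice f x = f (f x)

    -- ...can be replaced by that definition without changing meaning:
    viaSymbol, viaDefinition :: Int
    viaSymbol     = twice (+ 1) 0
    viaDefinition = (\f x -> f (f x)) (+ 1) 0

    -- Both denote 2. A primitive like getChar has no definition to
    -- substitute, so this notion of RT simply does not speak to it.
    ```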

    Hmm. “… without affecting semantics” was an interesting phrase in my definition. Expressions have many types of semantics, all acting at once. So when we talk about RT, we must do so with respect to some given semantics. E.g., we might be able to say that Haskell (sans IO, FFI, and seq) is RT wrt. its domain-theoretic semantics. But certainly it is not wrt. its (say, GHC’s) operational semantics, because definitions introduce sharing.

  50. conal:

    Thanks for the remarks, Luke. There are a few different definitions of RT floating around. I see what you mean that your particular definition doesn’t apply in the presence of primitives. So I guess I prefer the one about replacing subexpressions with other expressions having the same meaning. It applies to types that have precise semantics, though not yet to IO (that I know of).

  51. Paul Chiusano:

    Conal,

    First, I’m picking up something of a bait-and-switch going on here (unintentional, I presume). I didn’t say that getStr is RT (referentially transparent), nor that I was giving a definition of RT.

    Ok, maybe I am confused here. From your original post, you seemed to be saying: “I don’t like System.Info.os :: String” since that violates (the soul of) RT. Then I suggested that making its type IO String doesn’t seem to fix anything, also that according to (what I thought was) your definition of RT, getStr was not RT either. From your response I got the (erroneous?) impression that you thought that yes, if we make Info.os :: IO String then we’re RT. Likewise for getStr.

    Since I think we might be talking past each other, at this point I think we might be better hashing this out over IRC sometime. :) Btw, I’m enjoying this discussion and I think it’s all very interesting!

  52. conal:

    Hi Paul,

    > From your original post, you seemed to be saying: “I don’t like System.Info.os :: String” since that violates (the soul of) RT. […]

    I think I see part of the communication gap here: you thought my post was talking about referential transparency. Instead, I was shifting the conversation away from a focus on RT and onto two other principles. With the first one (“the value of a closed expression … depends solely on the expression itself”), I made a passing nod to RT, accidentally opening the door to confusion. The second one, which is “even more fundamental to me”, is that types have meanings, and the meaning of an expression belongs to the meaning of the type of that expression.

    One reason I’d like to reframe these discussions in terms other than RT is that I repeatedly see RT discussions get terribly muddled. And for me, the definitions of RT are technical and without heart. I get a lot more clarity and oomph out of the principles I’ve stated in this blog post.

    Here’s a more accurate reading of what I was trying to convey: “I don’t like System.Info.os :: String since that typing violates these two principles that are at the heart of what I love about functional programming. And, by the way, those principles are at the heart of RT for me as well.” Because there’s so much confusion about RT, and it’s such a technical/lifeless notion, I’m offering a hopefully-more-fruitful alternative to RT as a framework for discussing notions of purity.

    > Since I think we might be talking past each other, at this point I think we might be better hashing this out over IRC sometime. :) Btw, I’m enjoying this discussion and I think it’s all very interesting!

    I’m delighted & relieved to hear you’re enjoying the discussion. It’s so hard to tell without the usual non-verbal cues, so thanks for letting me know. I’ll be happy to chat more on IRC.

  53. Conal Elliott » Blog Archive » Lazier functional programming, part 1:

    […] semantics, amenable to various implementations. (Haskell falls somewhat short, as explained in Notions of purity in Haskell.) Sadly, I don’t know a pleasant comparative form to mean “less strict”, so […]

  54. Tommy Thorn:

    Thank you for expressing more eloquently what I can’t mention without risking a nerd-rage.

    I was pointed to your article from haskell-cafe, where a real-world instance of the horrible Int semantics resulted in the same program mysteriously behaving differently on two different platforms.

    My only “contribution” here is to question the premise that led The Committee to give Int the silent overflowing semantics. With modern compiler technology and modern hardware, trapping overflows doesn’t necessarily come with a large performance overhead. A conditional branch on the overflow from an add is nearly zero cost, as it predicts perfectly and can be issued in parallel with all other instructions. Some architectures, e.g. SPARC, even have overflow-checking add/sub variants with zero overhead. Not to mention that for many instances it’s trivial to see that overflow can’t happen.
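
    The machine dependence in question is easy to observe directly. A hedged sketch (using finiteBitSize from today’s Data.Bits, and assuming GHC’s wrap-around Int overflow):

```haskell
import Data.Bits (finiteBitSize)

main :: IO ()
main = do
  let width = finiteBitSize (0 :: Int)  -- 64 on most machines today, 32 on others
  print width
  -- With wrap-around semantics, 2^width overflows to 0 on this machine:
  print ((2 :: Int) ^ width == 0)
  -- Machine-dependent: False with a 64-bit Int, True with a 32-bit Int:
  print (2 ^ 32 == (0 :: Int))
```

    The last line is exactly the expression from the original post; its value depends on where it is evaluated, which is the whole complaint.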

  55. Irene Knapp:

    I do cherish these principles; they are good ones. Just for the record!
