Examples:

In Haskell, constructors and field names can be applied like functions (in most contexts they effectively are functions, though constructors can also appear in contexts where ordinary functions can’t, such as patterns).
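To illustrate (a minimal sketch; the `Person` type and its fields are invented for the example):

```haskell
-- A record type: the constructor Person and the field name `age`
-- can both be used wherever a function is expected.
data Person = Person { name :: String, age :: Int } deriving (Eq, Show)

-- Person :: String -> Int -> Person, used here as a plain function:
people :: [Person]
people = zipWith Person ["Ada", "Alan"] [36, 41]

-- age :: Person -> Int, also used as a plain function:
ages :: [Int]
ages = map age people

-- But a constructor can also appear in a pattern,
-- where an ordinary function could not:
greeting :: Person -> String
greeting (Person n _) = "Hello, " ++ n
```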

In other languages, operators are built into the language and are distinct from functions. In Haskell, operators are just functions with a special syntax. (They’re actually variables, but they usually contain function values.)
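For instance (a sketch; `(+++)` is an operator invented for the example):

```haskell
-- Defining an operator is just defining a variable with a symbolic name:
(+++) :: [a] -> [a] -> [a]
xs +++ ys = xs ++ ys ++ xs

-- Parentheses turn an operator into an ordinary prefix function,
-- so it can be passed to higher-order functions:
total :: Int
total = foldr (+) 0 [1, 2, 3]

-- Backticks go the other way, using a named function infix:
halved :: Int
halved = 10 `div` 2
```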

In JavaScript (for example), we can have:

`x = 5`
`function f() { return 5 }`

We can get 5 from either of these by saying `x` or `f()` respectively. The equivalent in Haskell is:

`x = 5`
`f = 5`

And we can get 5 from either by saying `x` or `f` respectively. In the former case, a function that takes no arguments is distinct from a variable with a value. In the latter case, it is not.

I think this thought process comes from a number of misunderstandings about the language:

– They see a definition without an argument as a zero-arity function (as opposed to what’s actually happening: every definition binds a variable to a value, and a definition with an argument binds a variable whose value is a function value).
– Similarly, they see a definition with multiple arguments as a multi-arity function, as opposed to a 1-arity function that returns another function.
– They see type syntax as a list of function argument types and a return type separated by arrows, so they figure if there are no arrows it’s a function that just returns something and takes no arguments.
– They’re probably coming from languages with no lambdas, so they don’t really understand the concept of “a variable with a function value”, and instead think of functions as a special kind of toplevel definition with a special syntax. So, when they see toplevel definitions of functions in Haskell, they conclude that all toplevel definitions are functions (since they aren’t syntactically distinct).

I think this is what you mean when you talk about “definition vs function”. In my mind, this can actually be a useful (or at least non-harmful) way to think about it; a constant variable can be thought of as a zero-arity function, just as a multi-argument definition can be thought of as a multi-arity function, and just as a value with parentheses around it can be thought of as a 1-tuple. None of these things are actually true, but the language is designed in such a way as to allow you to program as if they were true, for the most part.
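In Haskell terms, all of the following are just variable bindings; the last three bind the same function value (a sketch):

```haskell
-- A variable bound to a plain value:
x :: Int
x = 5

-- Three equivalent bindings of a variable to a function value:
f1 :: Int -> Int -> Int
f1 a b = a + b

f2 :: Int -> Int -> Int
f2 a = \b -> a + b

f3 :: Int -> Int -> Int
f3 = \a -> \b -> a + b
```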

I’m sure there are other examples that escape me at the moment. The point is, I expect people come into Haskell and read about functions, and then they read about constructors and think “oh those are functions”, and then they read about operators and think “oh those are functions too”, and then they read about variable definitions and think “those definitely also look like functions … man, it really seems like everything in Haskell is a function!”

Of course, in most “OO” languages, everything is *not* an object (e.g. methods and primitives in some languages), and in Haskell everything is not a value (modules and type class instances). But, close enough

Since `f x y = 5` is a 2-argument function, and `f x = 5` is a 1-argument function, I thought `f = 5` must be a 0-argument function. The syntax makes it hard to tell that all functions take exactly one argument.
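One consequence, sketched below: since `Int -> Int -> Int` parses as `Int -> (Int -> Int)`, applying a “two-argument” function to a single argument is perfectly well-typed.

```haskell
add :: Int -> Int -> Int   -- really Int -> (Int -> Int)
add a b = a + b

-- Partial application: add applied to one argument yields a function.
addFive :: Int -> Int
addFive = add 5
```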
However, the viewpoint “everything can be seen as a function (in addition to whatever else it is)” seems quite reasonable. As others have pointed out, n-ary (curried) functions are a very helpful concept, where formally an n-ary function can be defined as a term of type “A_1 -> … -> A_n -> B”, for some types A_1 … A_n. So “map” can be seen both as a unary function from “A -> B” to “[A] -> [B]”, and also as a binary function from “A -> B” and “[A]” to “[B]”.

But then by the same token, the integer 5 is a nullary function to “Int”; it’s also still an “Int”, but that doesn’t stop it being a nullary function as well.

So “everything is an n-ary function, for some n” is surely acceptable? In the context of Haskell, I don’t know how useful it is; but in some mathematical contexts, it’s very fruitful indeed (e.g. the insight that constants could be seen as 0-ary functions was essential for universal algebra, and its cousin the theory of operads).

Let X and Y be sets and let the cartesian product of X and Y, denoted X x Y, be the set {(x,y) | x belongs to X, y belongs to Y}. A function from X to Y, written as f : X -> Y, is a subset of X x Y with the property that for all (x1,y1), (x2,y2) in f, x1 = x2 implies that y1 = y2.

This is the mathematical definition of a function and is the one used by Haskell and probably by the other functional programming languages as well. The defining property of the function as stated above is usually referred to as ‘referential transparency’. The C programming language started the confusion, and other such languages have perpetuated it, by using the term function to mean something quite different. In C, a function is not necessarily an association at all – it may not take an input and it may not produce an output – it may be pure side effect, and even if it does define an association there is no guarantee of referential transparency.

Adopting this definition, how do you make sense of the phrase ‘a function with 0 arguments’?
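One possible answer: under the set-theoretic definition, a 0-argument function can be modeled as a function from a one-element set, which in Haskell corresponds to the unit type `()`. A sketch:

```haskell
-- A "0-argument function" as a function from the one-element set:
five :: () -> Int
five () = 5

-- It is determined entirely by its single result, so it carries
-- exactly the same information as a plain value:
five' :: Int
five' = five ()
```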

The `undefined` example is tricky. In general, we might say that “`foo` is a function” exactly if `foo` has function type, i.e., `foo :: a -> b` for some types `a` and `b`. In that case, `undefined` passes the test, since `undefined :: a -> b` holds not only for some types but for *all* types `a` and `b`. Of course, by this same line of thought, `undefined` “is a” number, list, pair, boolean, etc.
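Concretely (a sketch): `undefined` can be given any type at all, and laziness means it only blows up if it is actually forced.

```haskell
-- undefined inhabits every type, including function types:
broken :: Int -> Int
broken = undefined

-- Under lazy evaluation, an undefined component that is never
-- forced does no harm:
safeFst :: Int
safeFst = fst (1, undefined)
```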
However, I’m guessing that the section you quoted about `error` and `undefined` was simply careless writing.
I see what you mean about “nullary constructor” reinforcing the idea of “everything is a function in Haskell”.

As for a single-word expression, we do have “identifier”.
But it wouldn’t work very well in your example: “The Prelude provides two identifiers to directly cause such errors: …”.
Identifiers themselves cannot cause errors.
It’s really the values *bound* to these two identifiers.
Here are some other attempts and criticisms:

- “The Prelude defines two identifiers to directly cause such errors: …”. Vague relationship between identifiers and causality.
- “The Prelude defines two values to directly cause such errors: …”. Values cannot be defined. Only names.
- “The Prelude provides two definitions for directly causing such errors: …”. Again, definitions do not cause anything.
- “The Prelude names two values that directly cause such errors: …”. Most accurate, to my mind.

I hadn’t realized that the Haskell report itself was encouraging the sort of confusion I keep noticing. Thanks for pointing out these statements. Now I wonder whether we’ve even developed careful enough language to say what we mean here.

As Bertrand Russell said, “Everything is vague to a degree you do not realize till you have tried to make it precise.”
