Communication, curiosity, and personal style

When people talk, especially via the Internet, I expect misunderstandings, since human language is so fraught with ambiguity. And though I work very hard at clarity, I know my skill is limited and I sometimes fail.

So I see communication as an iterative process: I say something that you don’t quite get, and you say “Huh?” about the what or why or how of some particular part, and I refine my message. Likewise, I listen to you, and check out the places where I’m puzzled.

I’m very sensitive when I think I’m seeing criticism or dismissal without inquiry or curiosity, and hence the peevishness in some remarks (now edited) in my previous post, Why program with continuous time?. See Fostering creativity by relinquishing the obvious for related remarks.

In some venues, like reddit, aggressive non-inquiry can become the dominant mode of discussion. Because I prefer high-quality, open-minded inquiry, I mostly choose moderated blogging instead. If I don’t pass a comment through, I’ll usually offer the writer some suggestions, perhaps pointing out a possible misreading of whatever was being responded to, and invite re-submission. If I’m particularly annoyed with the writer, I’ll usually take time to get over my annoyance before responding, so there can be a delay.

In this way, I try to keep a high signal-to-noise ratio, where noise includes assumptions, reactions to misreadings, and often compounded public attempts by people to get each other to listen more carefully.

I’m starting to discover that people I don’t get along with sometimes have a very different style from mine. I like to invite many possibilities into a space and explore them. My mother shared with me a quote from Henry Nelson Wieman, which she now uses as her email signature:

To get the viewpoint of the other person appreciatively and profoundly and reconcile it with his own so far as possible is the supreme achievement of man and his highest vocation.

While I’m agnostic about the “supreme/highest” part (and open to it), I very much like Wieman’s description of individual and collective learning as progressing most powerfully by integrating different viewpoints, founded on working to understand each other “appreciatively and profoundly”.

I’m learning that some other folks have an oppositional style of learning and discovering.

When Thomas (Bob) Davie and I worked together in Antwerp, he told me that his style is to fiercely resist (battle) any new idea proposed to him. Whatever breaks through his defenses is worth his learning. I was flabbergasted at the distance between his style and mine. And greatly relieved also, because I had to work with him, and I had previously interpreted his behavior as non-curious and even anti-creative. Although I wasn’t willing to collaborate in the battle mode at the time, fortunately he was willing to try shifting his style. And the recognition of our differing styles toward similar ends helped greatly in relieving the building tension between us. Now I enjoy him very much.

Since this surprising discovery, I’ve wondered how often the friction I have with other people coincides with this particular difference in personal styles, and whether there are additional styles that I haven’t been aware of. So when friction arises, I now try to find out, via a private chat or email.

Edits:

  • 2010-01-04: Filled in Bob’s name (with permission).

Is program proving viable and useful?

Is program proving a viable and useful endeavor? Does it create more problems than it solves?

Today I participated in a reddit discussion thread. The discussion brought up some issues I care about, and I think others will care also.

Although I’m quoting from another person whose viewpoint may contrast with mine, I mean no insult or disrespect to him. I think his views are fairly common, and I appreciate his putting them into writing and stimulating me to put some of mine into writing as well.

[...] you could specify the behavior it should have precisely. However in this case (and in most practical cases that aren’t tricky algorithms actually) the mathematical spec is not much shorter than the implementation. So meeting the spec does not help, the spec is just as likely to be wrong/incomplete as the implementation. With wrong/incomplete I mean that even if you meet the spec you don’t attain your goal.

I like the observation that precise specifications can be as complex and difficult to get correct as precise programs. And indeed specification complexity is problematic. Rather than give up on precision, my own interest is in finding simpler and clearer ways to think about things so that specifications can be quite simple. Thus my focus on simple and precise semantic models. One example is given in Push-pull functional reactive programming. I’m proud of the simplicity of this specification and of how clearly it shows what parts of the model could use improving. (The Event type doesn’t follow the type class morphism principle, and indeed causes difficulty in practice.) And since such semantics-based software design methods are still in their infancy, I agree that it’s usually more practical to abandon precision in software development.

For more examples of formally simple specifications, see the papers Denotational design with type class morphisms and Beautiful differentiation.
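To give a flavor of such a semantics-based specification, here is a minimal sketch in the spirit of Push-pull functional reactive programming. The names `Time`, `Behavior`, and `at` follow the paper; fixing `Time` to `Double` is my simplifying assumption (the paper keeps time abstract), and `clock`/`squared` are illustrative examples, not part of the paper’s API.

```haskell
-- Hypothetical sketch: a behavior denotes simply a function of time.
type Time = Double  -- simplifying assumption; the paper keeps Time abstract

newtype Behavior a = Behavior { at :: Time -> a }

-- The type-class-morphism principle *specifies* the instances: the
-- meaning function `at` must preserve class structure, e.g. for Functor,
--     at (fmap f b) == fmap f (at b) == f . at b
instance Functor Behavior where
  fmap f (Behavior g) = Behavior (f . g)

-- Illustrative examples: a clock behavior and a derived behavior.
clock :: Behavior Time
clock = Behavior id

squared :: Behavior Time
squared = fmap (^ 2) clock
```

Under this reading, the specification of `fmap` is one line of semantics rather than a description of an implementation; for instance, `at squared 3 == 9` follows directly from the model.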

I suspect the crux of the issue here is that combining precision and simplicity is very difficult — whether for specifications or programs (or for many other human endeavors).

Any intelligent fool can make things bigger and more complex … it takes a touch of genius — and a lot of courage — to move in the opposite direction. – Albert Einstein

We had been writing informal/imprecise “programs” for a long time before computers were invented, as various kinds of instructions/recipes. With the mechanization of program handling in the 1940s came the requirement of precision. Since simple precision is difficult (according to my hypothesis), and precision is required, simplicity usually loses out. So our history of executably-precise programs is populated mostly by complexity.

I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. – C.A.R. Hoare, The Emperor’s Old Clothes, Turing Award lecture, 1980

To the extent that one assumes the future is bound to perpetuate the past, one might believe that we will never escape complexity. In my own creative process, I like to distinguish between the necessarily true and the merely historically true. I understand progress itself to be about this distinction — the overturning of what is historically true but turns out (often surprisingly) not to be necessarily true.

What about specification? Are precise specifications necessarily complex, or merely historically complex? (I.e., must we sit by while the future repeats the complexity of the past?) I see this question as even less explored than the same question for programs, because we have not been required to work with formal specifications nearly as much as with formal programs. Even so, some people have chosen to work on precise specifications, and so some progress has been made in making them simpler, and I’m optimistic about the future. Optimistic enough to devote some of my life energy to the search.

Simplicity is the most difficult thing to secure in this world; it is the last limit of experience and the last effort of genius. – George Sand

Everything is vague to a degree you do not realize till you have tried to make it precise. – Bertrand Russell

I expect that future software developers will look back on our era as barbaric, the way we look at some of the medical practices of the past. Of course, we’re doing the best we know how, just as past medical practitioners were.

I know people are apt to hear criticism/insult whether present or not, and I hope that these words of mine are not heard as such. I appreciate both the current practitioners, doing their best with the tools at hand, and the pioneers, groping toward the methods of the future. Personally, I play both roles, and I’m guessing you do as well, perhaps differing only in balance. I urge us all to appreciate and respect each other more as we muddle on.

Fostering creativity by relinquishing the obvious

“It’s obvious”

A recent thread on haskell-cafe included claims and counter-claims of what is “obvious”.

First,

O[b]viously, bottom [⊥] is not ()

Then a second writer,

Why is this obvious – I would argue that it’s “obvious” that bottom is () – the data type definition says there’s only one value in the type. [...]

And a third,

It’s obvious because () is a defined value, while bottom is not – per definitionem.

Personally, I have long suffered unaware under the affliction of “obviousness” that ⊥ is not (), and I suspect many others have as well. My thanks to Bob for jostling me out of my preconceptions. After weighing some alternatives, I might make one choice or another. Still, I like being reminded that other possibilities are available.

I don’t mean to pick on these writers. They just happened to provide at-hand examples of a communication (anti-)pattern I often hear. Also, if you want to address the issue of whether necessarily () /= ⊥, I refer you to the haskell-cafe thread “Re: Laws and partial values”.
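For readers who want to see the distinction concretely, here is a minimal sketch of how today’s GHC distinguishes ⊥ from () operationally. (This doesn’t settle the denotational question debated in the thread — whether () *ought* to equal ⊥ is a design question about the semantics, not something an experiment can decide.) The helper names below are my own, purely illustrative.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- The unit type has one *defined* value, (), but like every lifted
-- Haskell type it is also inhabited by the undefined value, ⊥:
bottomUnit :: ()
bottomUnit = undefined  -- one way to write ⊥ at type ()

-- Forcing to weak head normal form succeeds on () but fails on ⊥.
-- Using `undefined` makes the divergence a catchable exception.
forcible :: () -> IO Bool
forcible u = do
  r <- try (evaluate u) :: IO (Either SomeException ())
  return (either (const False) (const True) r)

main :: IO ()
main = do
  okUnit   <- forcible ()          -- succeeds: () is defined
  okBottom <- forcible bottomUnit  -- fails: forcing ⊥ raises
  print (okUnit, okBottom)
```

So in the implemented language the two are observably different via `seq`/`evaluate`; the haskell-cafe thread is about whether a cleaner semantics would identify them.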

A fallacy

I understand discussions of what is “obvious” as being founded on a fallacy, namely believing that obviousness is a property of a thing itself, rather than of an individual’s or community’s mental habits (ruts). (See 2-Place and 1-Place Words.)

Creativity

Still, I wondered why I get so annoyed about the uses of “obvious” in this discussion and in others. (It’s never merely because “Someone is wrong on the Internet”.) Whenever I get bugged and I take the time to look under the surface of my emotional reaction, I find there’s something important & beautiful to me.

My reaction to “obvious” comes from my seeing it as injurious to creativity, which is something I treasure in myself and others. I understand “it’s obvious” to mean “I’m not curious”. Worse yet, in a debate, I hear it as a tactic for discouraging curiosity in others.

For me, creativity begins with and is sustained by curiosity. Creativity is what’s important & beautiful to me here. It’s a value instilled thoroughly & joyfully in me from a very young age by my mother, as curiosity was by my father.

I wonder if “It’s obvious that” is one of those verbal devices that are most appealing when untrue. For instance, “I’m sure that”, as in “I’m sure that your [sick] cat will be okay”. Perhaps “It’s obvious that” most often means “I don’t want to imagine alternatives” or, when used in argument, “I don’t want you to imagine alternatives”. Perhaps “I’m sure that” means “I’d like to be sure that”, or “I’d like you to be sure that”.

In praise of curiosity

Here is a quote from Albert Einstein:

The important thing is not to stop questioning. Curiosity has its own reason for existing. One cannot help but be in awe when he contemplates the mysteries of eternity, of life, of the marvelous structure of reality. It is enough if one tries merely to comprehend a little of this mystery every day. Never lose a holy curiosity.

And another:

I have no special talents. I am only passionately curious.

Today I stumbled across a document that speaks to curiosity and more in a way that I resonate with: Twelve Virtues of Rationality. I warmly recommend it.

How we pose a question

At the end of FDPE, someone brought up Scratch and asked what a functional version might be. He described the problem being solved as “moving icons around on the screen” and suggested that Scratch’s imperative style may therefore be fitting.

To me, animation is no more about moving things around on the screen than arithmetic is about putting numerals on the screen. Sure, in the end, we’ll probably want to present (display) animations and numbers, but thinking of animations or numbers as being about presentation thwarts simplicity and composability.
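As a hypothetical sketch of this distinction (all names below are illustrative, not from any particular library): define animations as time-varying values, with no mention of a screen, and they compose like ordinary functions. Presentation becomes a separate, final step.

```haskell
-- An animation is a time-varying value; nothing here mentions a screen.
type Time = Double
type Animation a = Time -> a

-- Composable transformations, independent of presentation:
faster :: Time -> Animation a -> Animation a
faster k anim = anim . (k *)

shifted :: Time -> Animation a -> Animation a
shifted dt anim = anim . subtract dt

-- Only at the very end might we *present* an animation, e.g. by sampling:
sampleAt :: [Time] -> Animation a -> [a]
sampleAt ts anim = map anim ts
```

For instance, `sampleAt [0, 1, 2] (faster 2 id)` yields `[0.0, 2.0, 4.0]` — the doubling-speed transformation works without ever saying how (or whether) the animation is displayed.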

How we pose a question is perhaps the single strongest influence on the beauty and power of our answers.

Designing for the future

I think I finally found the words for why I don’t design for “90%” of the use cases. It’s because doing so is designing for the past, while my creations will be used in the future, which I see as unboundedly creative. Moreover, the claim that “users won’t want to X” turns into a self-fulfilling prophecy.