## Tuesday, September 14, 2010

### 'List' leads off with the letter 'Lambda' (λ)

I was revisiting the Project Euler problems recently, and one of the problems addressed lexicographic permutations.

Well, problem solved if you're using C++ or another language that ships `next_permutation` (or the like) with its standard library. Some languages do and some don't, but that discussion is not à propos to this article, nor is an explanation of what lexicographic permutations are or how to go about generating them.1

But something common enough did come up in the solving of that problem, which is the question of appending to the end of lists effectively.

Now, I have a prejudice: I love lists, and by that I mean that in more than a few functional and logical programming languages, the List Data Structure (uttered with reverence, and with not even (barely) a hint of mocking) provides a facility for grouping and then operating on sets of objects that I find unparalleled in other programming languages and paradigms. I can, for example, in Haskell create a list simply by typing `[0..9]` or `["Mary", "Sue", "Bob"]`, and if I'm feeling old-fashioned and gutsy, I can iterate through those lists inductively with a:

```haskell
doSomething []    = []
doSomething (h:t) = f h : doSomething t
```

Or if I wish to embrace the modern era of programming (like, oh, since the 1970ish-es),2 I can use `map` or `fold` to go anywhere I want to with a list. I mean, come on! `fold` is, after all, the 'program transformer function.' If I don't like my list as it is now (or if I don't even like it being a list at all), I just `fold` over it until it's exactly how I want it to be.

Like the above function, it's really just a specialization of `map`, isn't it?

```haskell
map _ []    = []
map f (h:t) = f h : map f t
```

And `map` is just a specialization of `fold`, right?

`foldr ((:) . f) [] ≡ map f`
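The equivalence is easy to check; here is a minimal, runnable sketch (the name `mapViaFoldr` is mine, purely for illustration):

```haskell
-- map, written as a foldr per the equivalence above
mapViaFoldr :: (a -> b) -> [a] -> [b]
mapViaFoldr f = foldr ((:) . f) []

-- mapViaFoldr (* 2) [1, 2, 3] ⇒ [2, 4, 6]
```

Each element `h` is transformed by `f` and then cons'ed onto the folded remainder, which is exactly what `map` does.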

So, yeah, lists rock my world.

But, you complain, I know language X [yes, the great language (flame-)war, with endless debates in the imperial senate, continues] and language X has lists and `map`, so what's the big deal?

The big deal is this: have you ever tried to construct a list in language X? Is it easy as the example I provided above?

No, but ...

Yeah, no, it's not as easy. In fact, I've done quite a bit of programming in language X, and I put forward that constructing and then destructuring lists is a painfully long and tedious process:

```java
BidirectionalLinkedList list = new BidirectionalLinkedList();
list.add("Mary");
list.add("Sue");
list.add("Bob");
```

Blech!

BUT THEN it gets better when you do destructuring:

```java
BidirectionalLinkedList newlist = new BidirectionalLinkedList();
for (String elem : list) { newlist.add(doit(elem)); }
```

Um, and that's only after the "stunning" advances that the STL brought, with functionals3 grafted onto the language. Have you seen the contortions you have to go through to create a functional object to map with? And don't you dare get a template parameter wrong, because the compiler errors ...? shudder!

Enough of that.

So lists are structures or collections, and structures can be viewed as objects (phenomena), and that is a very useful and powerful way to view them and to define them:

`data List t = [] | (t : List t)`

No problem with that and everything is right with the world.

... until it isn't.

This definitely works, and works well, for lists in general, and it also works great most of the time for working with the elements of the list. After all, in practice, we are most concerned with the element we have worked on most recently, so, in most cases, the element we just put in is the element we'd most likely be retrieving ... AND (generally) our interest diminishes the further (in time) we are from an element, and that translates directly into where elements are found in a list.

Generally.

So, in general, lists work just fine; great, in fact.

There are specific cases where what we care about is not the most recent element, but another ordering is important. Two cases spring immediately to mind: first, queuing, and secondly (and commonly and specifically), document assembly.

In these two cases we wish to push onto the end of the list new elements, and the above definition doesn't give us access to the last element or position of the list. And to get that access, we must reverse the list so the last element becomes the first, or prepend to a reversed list. Either of these options has at least a cost in linear time when `reverse` is (eventually) called.

Or, we must call `append` or a function like it to append elements to the end of the working list; this also has linear costs. Either way, we pay the full price.
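To see where that linear (and, accumulated, quadratic) cost comes from, here is a sketch of the standard append and of building a list by repeated appends (the names `append` and `buildByAppend` are mine, for illustration):

```haskell
-- the standard (++) must walk the entire left-hand list
-- to reach the point of attachment:
append :: [a] -> [a] -> [a]
append []     ys = ys
append (x:xs) ys = x : append xs ys

-- appending n elements one at a time re-walks the growing
-- list each time: 0 + 1 + ... + (n - 1) steps, O(n²) overall
buildByAppend :: [a] -> [a]
buildByAppend = foldl (\doc x -> doc `append` [x]) []
```

Each single append is linear in the length of the accumulated document, and the document grows with every append; summing those walks gives the quadratic total.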

Now there are objects that do give us immediate access to that last element and position: deques and queues, for example, but when we go there, we give up the ease of composition and decomposition that we have with lists. We pay a penalty in expressivity with these types or in performance when using lists against their nature.

Or do we?

Well, one way of looking at lists is as objects, and above we gave a definition of lists as objects, but this is not the only way to view lists. I propose another way of looking at lists: lists can be viewed as functions.4

`(:) :: t → [t] → [t]`

The above definition says that `(:)` (pronounced 'cons') takes an element and a list and gives a list.

Yeah, so? you ask, for this is already a well-known function in the list lexicon.

I propose we look at this function in an entirely different way:

`(:) :: t → ([t] → [t])`

In this case, `(:)` constructs a list function from a seed element. This allows us to use this function in a novel but perfectly acceptable way:

`x |> list = (x:) . list`

What we are saying here is that `(|>)` (pronounced 'put to front') is an operator that takes an `x` and puts that value on the front of `list`, just like `(:)` does.

The difference here (heh: 'difference') (sorry, mathematician humor) is what `list` is. For `(:)`, `list` is of type `[t]` (or, directly and tautologically: `list` is a list), but for `(|>)`, `list` is of type `[t] → [t]`. Or, translating: `list` is a function.

So? You've just reinvented the wheel with lists as functions now. Big deal.

Well, actually, what I've done is to reinvent the wheel as a meta-wheel, and so the big deal is this:

`list <| x = list . (x:)`

What we have here with `(<|)` (pronounced 'put to back') is a function that adds an element to the end of a list in (wait for it!) constant time. For 'regular' lists the best you can do for that operation is linear time, and so my work of constructing a document by appending to the end of a list, which was occurring in O(n²) time, has just become a linear-time operation. Not a big deal for a small document, but we found that once a document became more than a page or two, the operation went from 'a blink of an eye' to 'keeping your eyes closed until the cows came home ... that had been eaten by wolves.' This is not the case with lists as functions: document construction became a reasonable endeavor (that is, it occurred so quickly for us mere humans, living in non-nanosecond time, that we didn't notice the elapsed time).

So, I've been a sly thing in one respect. I still haven't told you what a (functional) list is. I've defined the object view of lists, and I've declared what the function view of lists is, but I haven't defined it.

Here. Allow me to define them now:

`empty = id`

There you go. There are the 'definitions.' I say 'definitions' because with that sole definition, everything works, for to ground a functional list, we simply pass it an empty list:

`empty [] ⇒ []`

Or even whatever list we are working with:

`empty [1,2,3] ⇒ [1,2,3]`

When you append something to an empty list, you get that something back.

AND `empty` works as the seed of the 'put to' operators:

`(5 |> 6 |> 2 |> empty) [1,2,4] ⇒ [5,6,2,1,2,4]`

and:

`(empty <| 1 <| 2 <| 3) [4,5,6] ⇒ [1,2,3,4,5,6]`

It all works!
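Collected into one self-contained sketch, here is the whole apparatus (the type synonym `FList` and the fixity declarations are my additions; the fixities are needed so the chained examples above parse as written):

```haskell
-- lists as functions: a 'list' is a function from lists to lists
type FList a = [a] -> [a]

empty :: FList a
empty = id

infixr 5 |>
-- 'put to front': prepend an element, in constant time
(|>) :: a -> FList a -> FList a
x |> list = (x :) . list

infixl 5 <|
-- 'put to back': append an element, also in constant time
(<|) :: FList a -> a -> FList a
list <| x = list . (x :)

-- grounding: apply the functional list to a 'real' list
-- (5 |> 6 |> 2 |> empty) [1,2,4]  ⇒ [5,6,2,1,2,4]
-- (empty <| 1 <| 2 <| 3) [4,5,6] ⇒ [1,2,3,4,5,6]
```

Both operators do nothing but compose one more `(x :)` onto a function, which is why each costs constant time; the linear work happens exactly once, at grounding.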

Summary
In this article we've demonstrated that there's more than one way to look at, specifically, lists. The standard way to view them is as objects, and in most cases that works, and works well: this view provides a syntax that makes list processing simple and intuitive.

We've also shown that that is not the sole way to view things, and that this view can be constraining, particularly when lists are played against type as a queue or a document assembler. In these cases it becomes simpler, declaratively, to view lists as functions, and not only does that provide a syntax for simple list construction at either the beginning or the end of the list, but it also provides constant-time construction at either end. Furthermore, defining this view is as simple as viewing the empty list as `id` and then using the partial application of `(:)` ('cons'). That's the generative view. Then, to extract the ("real") list from that view, it's as simple as sending a list (for example, `[]`) to that functional list to ground the value for disposition.

In these special cases of working at the back end of a list, which aren't all that rare, the functional view of list processing gives the programmer expressivity and efficiency, eliminating the chore of appending to the end or reversing the target list. I quite enjoy list processing this way: it gives me a fresh perspective on an old friend and makes list processing in these cases easy and fun.

Endnotes

1 There is an article at http://wordaligned.org/articles/next-permutation that provides a simple, clear explanation of the lexicographic permutation algorithm, with a very nice demonstrative example, if you are so inclined to investigate.

2 Laurent Siklóssy, Let's Talk Lisp, talks about MAPCAR in chapter 6. And, if I may: this book is one of the rare ones. It's one of those books, like Smullyan's To Mock a Mockingbird or van der Linden's Deep C Secrets (with a fish on the orange book cover, no less!) or, let us not forget the ultimate, Hofstadter's Gödel, Escher, Bach: an Eternal Golden Braid (I mean, even the title is a pun; Doug doesn't waste time in having fun, does he? He gets right to it!), that make you a better person without your noticing it, but everybody else does, because you are constantly busting out laughing or grinning from ear to ear at the cleverness of the prose. Why do books about esoterica have to be so heavy? These esoteric books show that they don't have to be, in the lightness with which they deal with their subjects. And who is that bright young man who left such a feisty review of that book back in (*le gasp!*) 2001? I can see a promising future for that boy, just so long as he doesn't get too big for his britches. À propos de rien, what are britches? *ahem* Yes, so those books are three of my favorites ... which ones are yours?

3 Bjarne Stroustrup, The C++ Programming Language, 3rd ed., §18.4.

4 Such a view of lists has a name: these functional list types are called difference lists, from the Prolog community. The standard Haskell library does not have Data.DList [a sad loss, I say!], but this type is provided by Don Stewart of Galois in the hackageDB under the dlist package. Data.DList works great, but I've implemented my own system that is more than twice as fast, as I provide several specialized implementations of difference-list operators that take a more direct approach (`foldr` is elegant but can be a costly operation). If you'd like to use my Data.DiffList package, please contact me, and I'll be delighted to share this library.

## Wednesday, May 26, 2010

### Math is hard

So, here's an interesting, everyday conundrum, sent to me by a reader:

Hello my mathematical genius friend. :) [should I edit that out? *blush*]

I have been sent the following mathematical joke of sorts. The person who sent it to me claims there are no flaws in it. But obviously there has to be a flaw, because the conclusion is incorrect. The problem is that I don't know how to explain the flaw---but I suspect it happens in that third line where it attempts to equate squared cents with squared dollars. Is there any way that you could explain the flaw in such a way that a seventeen-year-old Norwegian would understand? Don't worry, you don't have to say it in Norwegian, he speaks English.

If you don't mind having a look at this and explaining, I would be ever so obliged. And no pressure, but...pants may be on the line in this little bet I've entered into.

1. \$1 = 100¢ (so \$0.1 = 10¢)
2. And, 100¢ = 10¢²
3. Then, 10¢² = \$0.1²
4. \$0.1² = \$0.01
5. \$0.01 = 1¢

The implied conclusion is
∴ 'a dime squared equals one penny'

Then we say 'Q.E.D.'.

Hm, if pants are on the line for my dear reader, I wonder what's on the line for moi-self (that is faux-French) (and 'faux' is French)? A review? Or two? Or three? Of my stories?

Let's leave my preening aside.

So, who sees the fallacies above that lead to the absurd conclusion?

If you do not see it, please think on this awhile before looking at the answer.

From basically the get-go this problem statement is erroneous and imprecise, but this comes from a fundamental laxity in understanding of what operators are and what operators do. Certainly, the first premise is correct: One hundred pennies does indeed equate to one dollar, for
1. 100¢ = \$1

is a statement of fact about the conversion from one set of units (pennies or ¢) to another set (dollars or \$). But already the trickster plays fast and loose, for indeed:
1. ... (so \$0.1 = 10¢)

is still correct but the (implied) conclusion makes a statement about the square of dimes, not about the square of tenths of dollars.

Do you see the fallacy now?

No? Let's continue.

So 1. is true, insofar as I can throw it, and days where my back gives out (ah! me poor bones!) that's not very far, but it's far enough for this problem statement, so long as it goes no farther than that.

But it does. *sigh*

So now let's get into the lies:
2. 100¢ = (10¢)² [parentheses implied and erroneous]

This is a lie. It's a lie, lie, lie, lie, lie, lie, lie!

Huh?

The lie is this: a square of a thing is not the thing itself, and even if you know nothing about mathematics, you can prove this to yourself. Socrates did it with an unlettered and untutored slave boy, and you are further along than what Socrates had to work with.

So let's prove 2. false with an analogue.

Take a foot rule (sorry, my readers who do not follow the British Imperial system, which, oddly enough, includes Brits these days, too) and a large piece of butcher paper. Draw a line on the butcher paper measuring one foot.

a. _ = 1 foot.

Now, 'square' that line, by drawing three more lines to make a foot square on the butcher paper.

b. ❏ = 1 foot square.

So, is

c? 1 foot = 1 foot square

Obviously not! for that would be to say:

c? _ = ❏

Or, put another way, 'one thing of one thing is equivalent to one thing of entirely a different thing'. One gulp of water does not equal one gulp of bleach. One I wish to have with my breakfast, the other, I do not, as my father very unfortunately discovered the hard way one not-so-fine morning.

"But, but, but ..." you stutter angrily, "but isn't '10² = 100' a statement of fact?"

Yes, indeed, it is, but please remember '10 ≠ 10 things' is also a statement of fact. Number (with a capital 'N') is a class of classes, as, e.g., Russell's Introduction to Mathematical Philosophy so clearly and succinctly explains.

So to state:
2. 100¢ = (10¢)² [parentheses implied and erroneous]

is to state a falsehood.

How do we correct it? Well, by replacing the (implied) error with an explicit (corrected) ordering:
2. 100¢ = 10²¢

Do you see the correction? In the former case, we erroneously squared the units along with the number; in the latter case we do not square the units, we square the number solely.
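The correction can even be made machine-checkable: if we tag quantities with their units at the type level, then 'cents squared' is a different type from 'cents', and the fallacious step 2 simply will not compile. A sketch (all the names here are mine, purely for illustration):

```haskell
-- a quantity tagged with a phantom unit type
newtype Qty unit = Qty Double deriving (Eq, Show)

data Cent    -- the unit ¢
data CentSq  -- the unit ¢², an entirely different unit

-- squaring a quantity squares the number AND the unit:
square :: Qty Cent -> Qty CentSq
square (Qty n) = Qty (n * n)

-- square (Qty 10) is (Qty 100 :: Qty CentSq); comparing it with
-- (Qty 100 :: Qty Cent) is a type error: exactly the fallacy in step 2.
```

The type checker here plays the role of the careful mathematician: it keeps the number and the unit separate, so '(10¢)² = 100¢' cannot even be stated.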

Do I need to go any further? Or, do the fallacies fall out obviously in the rest of the assertions?

For completeness' sake, I will review each step.
3. (10¢)² = (\$0.1)² [parentheses implied and erroneous]

No. (10 pennies) squared does not equal (1 dime) squared.
But, a statement of fact of conversion is that (10 pennies) = (1 dime), but that's as far as it goes, and no farther.

If we square pennies we have a new unit of measure called, I don't know: (¢²), and (¢²) does not equal dimes. Not in this world.
4. (\$0.1)² = \$0.01 [parentheses implied and erroneous]

Again, no.
Again: \$(0.1²) = \$0.01, but again, that is as far as you can go with that statement.
5. \$0.01 = 1¢

is just a reformulation of the first statement and is true, yes, but redundant.

Remember from Frege's predicate calculus:
q |- p [read: 'q implied by p' or 'if p then q']

[Shoot! why don't they have an 'implied by' HTML character?]
But if:
¬p [read: 'not p' or 'p is not true (or provable)']

Then you can say anything you like for q, or, more correctly, you cannot say anything at all about q, because q does not depend on ¬p, it depends on p.

So, as it were: 'If I had done the laundry, we wouldn't have had this argument' and 'I didn't do the laundry' means I don't have a leg to stand on about why we had this argument, honey.

(Oops, sorry, ... but it's not like I have had this experience at all ...)

And, to the point of this article: 'If a (10 one-hundredths of a dollar) squared were to equal (10 pennies) squared, then ...'

Well, then anything, because '(10 one-hundredths of a dollar) squared = (10 pennies) squared' is false. So say away, because anything coming from a false premise is an absurd conclusion: whirled (black eyed) peas, butterflies flapping in the Amazon, and the Number 23. They may be true enough in their own right, but you can say nothing about them from the false premise.

So, let's take the absurd conclusion:
∴ 'a dime squared equals one penny'

and reformulate it to be a true statement.

Well, the first thing we have to do is to get rid of the '∴', so let's do that, and then state the plain facts:
'a dime squared equals one penny' is an absurdity.

Q.E.D.

What can we take away from this?

Let's examine another absurdity:
'Math is hard.'

No.

No, my dear ladies and gentlemen, 'Math' isn't 'hard.' Math is simple. Math can even be easy, for we learned from the Greeks, after all, that Math is one of the humanities. Math is simply a language. A language that can describe things exactly as they are and exactly as they are not. And precisely at that. It can even describe imprecision precisely. The 'hard'ness of mathematics comes from us, when we don't wish to be precise in what we are talking or thinking about.

Being precise ... well, that can be hard, I suppose, so then perhaps it's more precise not to say 'Math is hard' but to say 'Life is hard.'

Yes. That's true. 'Life is hard' as we choose to make it.

Oh, well. I never promised you a Rose Garden.