Friday, February 28, 2014

'Arrow' is spelt 'fizz-buzz'


A little literate Haskell program:

> module Fizzbuzz where

So, fizz-buzz through a functional lens

> import Control.Arrow

Our predicate is: for some number x, we print 'fizz' if it's evenly
divisible by 3, or we print 'buzz' if it's evenly divisible by 5.
N.b.: these predicates are not mutually exclusive.

So our fizz-buzz predicate follows ('pred'icate 'f'izz-'b'uzz)

> predfb :: String -> Int -> Int -> Either String Int
> predfb str modulo x | x `mod` modulo == 0 = Left str
>                     | otherwise = Right x

so:

> fizz = predfb "fizz" 3
> buzz = predfb "buzz" 5

... that's really all there is, so we just split the input number
into the two predicates and then remerge the results following
this rule:

Left str1 (+) Left str2 = str1 ++ str2
Left str (+) _ = str
_ (+) Left str = str
Right x (+) _ = show x

which transliterates quite nicely (it's nice when the requirement specification is the implementation code, as happens when your programming language is declarative):

> fbprinter :: (Either String Int, Either String Int) -> String
> fbprinter (Left x, Left y) = x ++ y
> fbprinter (Left x, _) = x
> fbprinter (_, Left y) = y
> fbprinter (Right num, _) = show num

Now the fizz-buzz game: count from 1 to 100, replacing multiples of 3 with
'fizz' and multiples of 5 with 'buzz' ... off you go:

> fizzbuzz = [1..100] >>= return . (fizz &&& buzz >>> fbprinter)

There it is. fizzbuzz in, lessee, 8 lines of implementation code. Any questions?

Nope? Q.E.D.

Afterthought:

Well, there is one improvement. If we look at the Either type as a (disjoint)
sum type (which it is), then the print rule looks rather like a restatement
of the monoidal append operation, for, after all

m0 (+) (anything) = (anything) (order of arguments superfluous); and,
m+ (+) m+ = defined by the semigroup-implementation

so, the monoidal addition of lists is

[] (+) lst = lst; and,  (... even if lst == [])
lst1 (+) lst2 = lst1 ++ lst2

Can't we just convert our Either String Int type to be a monoid and have
the special base case of 'show num' for the (Right num (+) Right num) case?

Hm. Yes. I leave this now as an exercise for the reader...

... which is another way of saying that I see a simple solution of

mzero == Right num

and

mplus == Left str

in my head, but how to implement that in Haskell is currently puzzling me.

Intrepid readers, show me the light!
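(In the meantime, here is one hedged sketch, not necessarily the light sought:
wrap the Either in a newtype and give it the append rule above. This assumes
a modern GHC, where Semigroup is in the Prelude and functions into a Semigroup
are themselves a Semigroup; FB, fbshow, and fizzbuzz' are names I'm making up
for illustration.)

> newtype FB = FB (Either String Int)

> instance Semigroup FB where
>   FB (Left s1) <> FB (Left s2) = FB (Left (s1 ++ s2))
>   FB (Left s)  <> FB (Right _) = FB (Left s)
>   FB (Right _) <> FB (Left s)  = FB (Left s)
>   FB (Right n) <> FB (Right _) = FB (Right n)

> fbshow :: FB -> String
> fbshow (FB e) = either id show e

> fizzbuzz' :: [String]
> fizzbuzz' = map (fbshow . ((FB . fizz) <> (FB . buzz))) [1..100]

Note that this only gets us a Semigroup: the Monoid identity is exactly the
sticking point, since no single Right num is an identity for every other
number. (One standard dodge: wrap the whole thing in Maybe, whose Nothing
gives any Semigroup a free identity.)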

... at any rate, 'running' fizzbuzz gets you all fizzy-buzzy, and you can feel good that you've used a little predicate logic and functional programming (programming with arrows, no less), and you didn't have any of the redundant boolean logic that you see in other implementations of fizz-buzz: Either took care of guarding our conditioned results.

Sweet!

p.s. The payoff! The payoff! How could I forget the payoff for those of you who don't have Haskell running natively on your 'iron'?

(Now, why you don't have Haskell running on your platform, I don't want to even think about. Not having Haskell on hand to feed yourself your functional-programming (daily/hourly/secondly) fix? geophf shudders)

*Fizzbuzz> :l Fizzbuzz.lhs 
[1 of 1] Compiling Fizzbuzz         ( Fizzbuzz.lhs, interpreted )
Ok, modules loaded: Fizzbuzz.
*Fizzbuzz> fizzbuzz 
["1","2","fizz","4","buzz","fizz","7","8","fizz","buzz","11","fizz","13","14","fizzbuzz","16","17","fizz","19","buzz","fizz","22","23","fizz","buzz","26","fizz","28","29","fizzbuzz","31","32","fizz","34","buzz","fizz","37","38","fizz","buzz","41","fizz","43","44","fizzbuzz","46","47","fizz","49","buzz","fizz","52","53","fizz","buzz","56","fizz","58","59","fizzbuzz","61","62","fizz","64","buzz","fizz","67","68","fizz","buzz","71","fizz","73","74","fizzbuzz","76","77","fizz","79","buzz","fizz","82","83","fizz","buzz","86","fizz","88","89","fizzbuzz","91","92","fizz","94","buzz","fizz","97","98","fizz","buzz"]

There ya go!

(this program contains its own solution ... meta-quine, anyone?)

Friday, February 21, 2014

Quotes-map-reduce-close-quotes


"So, geophf," I was asked during a recent interview for a software quality-assurance tester job, "have you heard of the product map-reduce?"

"Well," sez I, "no, but I'm familiar with the concept from functional programming."

That was a good enough answer for me.

The test manager had heard of functional programming and (from me) category theory. I was talking with Ph.D.s who were test managers.

A refreshing conversation, actually.

"Yes," he said, "but what is all this stuff good for?"

I gave him a "you've got to be kidding me" look.

What is category theory good for? Coming on the heels of "have I heard of map-reduce?"

Like where did the concept of map-reduce come from?

So, here's what it's good for.


The Joke


An infinite number of mathematicians walk into a bar. The first math dude sez, "Bartender, I'll have half-a-beer." The next one sez, "I'll have half what he's having." Then next: "I'll have half of hers." And the next one goes to speak.

And they all want to get in on this specification, because that's how mathematicians roll, but the bartender holds up his hands.

"Ladies and gentlemen," he says, "know your limits, please!"

And pours out one beer which the infinite number of mathematicians enjoy.

The proof in the pudding

let nat = [1..]
let halflings = map (\x -> 1 / 2 ** x) nat
let onebeer = sum halflings

onebeer

Now, that may take some time to return (geophf cackles evilly at his phrasing of 'some time'), but we've just used map-reduce. How? map we used directly above in halflings, as you see, and reduce (a.k.a. the fold function) is embedded in the definition of sum:

sum :: Num a => [a] -> a
sum = foldl (+) 0

So, if we look at the above, we've just used map-reduce to solve the beering-mathematicians problem (as opposed to solving the dining philosophers problem, because, anyway, those philosophers really should go on a diet. Just sayin').

Piece of cake, right?

Well, not really. Our solution, to use a technical term, sucks.

Let's look at it. What does our set of equations reduce to? They reduce to this one formula:

let onebeer = sum (map (\x -> 1 / 2 ** x) [1..])

What's wrong with this?

Well, for any n, where n is the length of the prefix of the natural numbers we consume, we have at least double-linear complexity to solve this problem: one pass to map, another to sum.

But let's look at the sequence we create by unwinding the problem.

First we have [1,2,3,4, ...]
Then we have [1/2, 1/4, 1/8, 1/16, ...]
Then we have [1/2 (+ 0), 1/4 + 1/2, 1/8 + 3/4, 1/16 + 7/8, ...]
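
(That third line is just a scan over the second, if you want to see it in
Haskell; a quick sketch of mine:)

let partials = scanl1 (+) halflings
-- take 4 partials == [0.5, 0.75, 0.875, 0.9375]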

Huh. What's interesting about this pattern?

Story time

Once upon a time, there was a bad little boy in maths class, named Gauss, and he was held after school. The teacher told him he could go home after he summed the first 100 'numbers' (positive integers, I believe the teacher meant).

Gauss left for home after writing the solution down in less than one second.

The teacher was furious, believing that Gauss had cheated.

He didn't. He just noticed a pattern.

1 = 1
1 + 2 = 3
1 + 2 + 3 = 6
1 + 2 + 3 + 4 = 10
sum [1..n] = ... what?

Well, what does it equal?

When n = 1, sum [1..1] = n
when n = 2, sum [1..2] = n + 1
when n = 3, sum [1..3] = n * 2
when n = 4, sum [1..4] = ... hm, well, it equals ... what?

It equals n (n + 1) / 2, doesn't it.

n = 100, sum [1..100] = 100 * 101 / 2 = 5,050

... and Gauss was done. He went home, serving the shortest detention ever, and put egg on his prof's face, too.

Good times. Good times.
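
(The closed form, as code, for the skeptical; a sketch of mine, not the post's:)

let gauss n = n * (n + 1) `div` 2
-- gauss 100 == sum [1..100]  -- True: both are 5050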

Well, what about this problem?

We can map and sum all we want, but can't we do what Gauss did and reduce the double-linear (at least) problem complexity down to some constant on the number n?

Yes, we can.

The pattern is this, after we've done our reduction to the simplest forms:

[1/2, 3/4, 7/8, 15/16, ... blah-blah-blah, boring-boring-boring]

Who needs map-reduce when we just simply return

1 - 1/2^n

for any n, even as n approaches infinity (where the limit is exactly 1)?

Who, indeed.
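
(Gauss-style, in code; my sketch: the partial pour after n mathematicians,
constant-time in n, no map, no reduce:)

let onebeer' n = 1 - 1 / 2 ^^ n
-- onebeer' 10 == 0.9990234375; the limit, of course, is one beer.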

The take-away? Yes, I could have used map and reduce and exponentiation and all that to get to the answer, but sometimes we just have to take a step back from trying to solve a problem and just observe what we're solving.

"Hey," algorist exclaims, "I'm just solving to get this simple fraction! I don't have to map-sum anything!"

It's amazing what meta-problem-solving saves us as we go about solving problems, isn't it?

Epilogue

Oh, map is reduce, did you know that?

Specifically, for every function f, there is a reduce function that is isomorphic to map f.

What is it?
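
(One standard answer, as a sketch of mine; whether it's the one intended here,
I leave open:)

mapAsFold :: (a -> b) -> [a] -> [b]
mapAsFold f = foldr (\x acc -> f x : acc) []

-- mapAsFold (* 2) [1, 2, 3] == [2, 4, 6] == map (* 2) [1, 2, 3]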

A little κ-calculus conversation

So, "wan plas wan is doo" in the κ-calculus. Because why? Because if you can prove addition in some calculus, you can prove anything.

Well, prove any primitive-recursive thing in that calculus, and that's good enough for most anything (except Ackermann and some others), which is close enough to a latte at Caribou café for me.

SO! without further ado: 1 + 1 = 2, κ-style.

First of all, some basics. What is the κ-calculus?


κ-calc: application is composition:
   if f : A → B has a free variable x : 1 → C, then κ x . f : (C x A) → B
   and (κ x . f) o lift c ~> f[c/x]
and the lifting ('identity') function:
   for c : 1 → C, lift c : A → (C x A)

Then, for addition we need the category Nat and the following arrows:


0 : 1 → Nat; and,       (so, 0 ~> 0)
succ : Nat → Nat         (so, succ o succ o 0 ~> "2")

but we also need the iterator function for natural numbers:

(a) it(a, f) o 0 ~> a; and
(b) it(a, f) o succ o n ~> f o it(a, f) o n

from this we can declare the function add in the κ-calculus:

add : Nat x Nat → Nat ≡ (κ x . it(x, succ))

Note how we don't care what the second (snd) argument of the tuple is: we just iterate succ over it. So, with add defined above, we can now prove 1 + 1 = 2:

add o (succ o 0, succ o 0) = (κ x . it(x, succ)) o lift (succ o 0) o succ o 0 (lift distribution)
= it(x, succ)[succ o 0] o succ o 0 (composition)
= it(x [succ o 0/x], succ [succ o 0/x]) o succ o 0 (substitution)
= it(succ o 0, succ) o succ o 0 (substitution)
= succ o (it(succ o 0, succ) o 0) (iteration (b))
= succ o (succ o 0) = 2 (iteration (a))

Q.E.D.

Piece of cake, right? Using categories (for types), we reduce 200 pages of the Principia Mathematica down to a couple of axioms, a couple of types and one function (it), then a few lines of proof and we're done.

Awesomesauce!
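
(If it helps to see the machinery run, here is a hedged Haskell
transliteration of mine — Nat, the iterator, and add — not the κ-calculus
itself, just its shadow:)

data Nat = Z | S Nat deriving Show

it :: a -> (a -> a) -> Nat -> a
it a _ Z     = a                -- it(a, f) o 0 ~> a
it a f (S n) = f (it a f n)     -- it(a, f) o succ o n ~> f o it(a, f) o n

add :: (Nat, Nat) -> Nat
add (x, n) = it x S n           -- add = κ x . it(x, succ)

-- add (S Z, S Z) ~> S (S Z), i.e., 1 + 1 = 2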

But then, my daughters became curious about what the Papa was doing, so this conversation ensued with my 10-year-old.

"But, Papa," Li'l Iz asked with big-big eyes, "what does snd have to do with 2?"

So, I explained.

A tuple is of the form: (a, b)

So for example we have: (3, 4)

So the tupling functions are:

fst (3, 4) ~> 3
snd (3, 4) ~> 4

And therefore for the tuple (1, 1):
fst (1, 1) ~> 1
snd (1, 1) ~> 1

So, to prove 1 + 1 = 2 we thence have:

add o (1, 1)

Therefore:

fst (1, 1) o add o snd (1, 1) = 2 

and that's why snd (1, 1) has everything to do with 2. :p

wa, wa, waaaaaaaa (this my daughter wrote)