Tuesday, April 15, 2014

'M' is for burrito. No, really! (Monads, a burrito-esque introduction)


'M' is for Burrito. No, really.
(A little post about monads)

This one's easy.

A monad is a burrito: a nice, soft-shell wrapper hiding all that yummy, but complicated stuff inside (green peppers and onions? who was the first one to think of that?)

A monad is a burrito, wrapping all that complicated stuff like an integer:

Just 3

under its soft-shell wrapper, or wrapping a whole list of values:

[1,2,3]

I just showed you two monads: the first one is the 'Maybe' monad, representing semideterminism (we're certain, in this case, that the underlying unit is 'Just' 3, but we could, in a different situation, be equally certain that the underlying unit was 'Nothing.' Those two cases (two = semi) give us our certainty (determinism) one way or the other); the second is the 'List' monad, representing nondeterminism (you could have the empty list [] (no answers, no = non), or, as in the above case, a list with a set of different answers (one or several)).

So, monad = burrito. Simple, right?

Class dismissed.

"Excuse me, Professor geophf," the wormy student in the back of the class wearing b-c glasses and who always has to ask annoying questions in his whiny voice at the very end of class, making everybody late for their next class and me late for my latte, and you don't want to make me late for my latte! He continued, oblivious: "What about monads that don't contain anything at all? What is that? An empty burrito?"

He giggled at his own joke, thinking himself rather smart and drôle.

"What?" I roared, "An empty burrito? Such sacrilege cannot be! Besides, what's the point of an empty monad? That's just ridiculous!"

I was rather peeved that this young upstart would trespass on the beauty of my burrito-analogue. It was simple and easy to understand. Why mess with it?

"But," he continued in his whiny voice, "what about the empty list and 'Nothing' in the Maybe monad. They carry no value but are essential to each of those monads, how do you explain these monadic values as burritos? Are they churros?"

I sputtered in fury. He had me there.

So I did what every tyrant-teacher does: I relied on my position of authority.

"No, monads are burritos! Get that through your head!" And then: "Class dismissed!"

Everybody fled. I do cut quite the imposing figure when I have that angry glint in my eye and my flinty face set in stone.

Okay, so monads aren't burritos. I admit it (but not to Mr. Wormwood. Never, never!). There is nothing that says a monad has to carry a value; it's just a convenience to think of monads like that: a wrapper around an 'ordinary' object. But monads are objects themselves, and, in fact, more 'ordinary,' that is: regular, than plain-old objects like integers.

The problem of plain-old objects, the problem that monads address, each in their own way (and there are many species of monads, not just maybes and lists, but eithers, and states, and ... stuff! Lots of wild and weird different stuff!), is that plain-old objects admit the uninhabited instance, ⊥ ('bottom'), in their type. So the type of natural numbers is 0, 1, 2, 3, ... but it's also ⊥. Programming languages that realize this (Haskell, for example) deal with it gracefully. Programming languages that realize this, but then do nothing about it, have a systemic problem: the program goes into an unstable state when it encounters an uninhabited instance.
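In Haskell, for instance, ⊥ even has a name you can write down; the type-checker is perfectly happy with it until you try to evaluate it. A tiny sketch:

boom :: Int
boom = undefined   -- ⊥ at type Int: compiles fine; evaluating it halts with an error (not a core dump)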

Something that causes Algol-child language programmers to shake in their booties: the 'null-pointer exception.'

Question: how can Java throw a 'NullPointerException,' when it doesn't have the pointer construct in its language, at all? Am I the first to call this one out?

So, the functions in these languages that have ⊥ but do not handle it properly must always admit that you're not working with functions at all, for:

x + y

is x plus y, but if either is uninhabited, the answer is KABOOM!

A program crash. You just destroyed your entire system. And you didn't even do it: the language designers did.

Monads were an answer to this problem. A little dude with a long, bushy beard by the name of Kleisli developed the work around what came to be known as the Kleisli categories, which had monads as their objects and (what came to be known as) Kleisli arrows as their morphisms, or functions.

The guarantee of the Kleisli category is that, indeed, every object is a monad, so there's no uninhabited type. This means Kleisli functions always get you back into a Kleisli category (yes: once you enter the monadic domain, you never leave it. That's the guarantee of a Kleisli category: you have to be a monad (or a monadic function, or a monad category ...) to get in, and no monads exist outside the Kleisli categories. The monadic domain is the Hotel California of categories).

Okay, you're stuck. And the guarantee of the monad is that you don't care what's under the soft-shell: there's no 'getting a value out' of a monad, because, as my annoying student, Mr. Wormwood, pointed out, there are monads with no underlying values. So what do you do with them?

Monads have a special function associated with them, called the 'join' operator.

Monads, you see, are a special type in that a join of a monad of a monad is just a monad, so, for example,

join (Just (Just 3))

gives you 

Just 3
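Or, as a runnable Haskell sketch (join lives in Control.Monad):

import Control.Monad (join)

flattenedMaybe :: Maybe Int
flattenedMaybe = join (Just (Just 3))   -- Just 3

flattenedList :: [Int]
flattenedList = join [[1,2],[3]]        -- [1,2,3]: one layer of wrapper, collapsed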

What does that do for us?

Well, the structure of a monadic function is this:

f :: Monad m => a -> m b

That is, the function takes an ordinary type a and 'lifts' it into the monadic category, giving the (monadic) answer of type m b (m being the monadic type and b the type carried by the monad).
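For instance, a hypothetical monadic function into the Maybe monad (my example, not anybody's canon): division that refuses to explode on zero:

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing            -- no answer; but no KABOOM, either
safeDiv x y = Just (x `div` y)   -- the ordinary answer, lifted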

Okay, but monadic functions can't dig out the underlying type in a monad of its category, so how do the functions even work at all?

Just as you can lift a unit value into a monad:

return 3 = Just 3

You can also lift an ordinary function into the monadic domain:

liftM succ :: m Int -> m Int (for some monadic type m)
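As a runnable sketch (liftM lives in Control.Monad):

import Control.Monad (liftM)

liftedSucc :: Monad m => m Int -> m Int
liftedSucc = liftM succ

-- liftedSucc (Just 3) == Just 4
-- liftedSucc [1,2,3]  == [2,3,4]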

Okay, so, but where does that even get us? We're still stuck, right?

Well, the thing is, you can even lift monadic functions into the monadic-monadic domain:

liftM f (of type Monad m => a -> m b) = f' :: Monad m => m a -> m (m b)

What does that get us?

Well, combining join with a lifted monadic function gets us a monad from monadic input:

join (liftM f (Just 3)) = some monadic answer dependent on the resultant type of f.

This whole lifting-into-the-monadic-domain-and-then-joining-the-result is so common when working in monad categories that a special operator was invented, à la Haskell, named '>>=' (pronounced: 'bind').
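A sketch of that reading of bind, in Haskell (the real >>= is primitive to each monad, but for lawful monads this definition behaves the same):

import Control.Monad (join, liftM)

bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = join (liftM f m)   -- lift f over m, then collapse the doubled wrapper

-- bind (Just 3) (\x -> Just (x + 1)) == Just 4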

Now, with bind, the true power of monads comes out:

m >>= f >>= g >>= h >>= ... etc.

What you see here is a monad being bound, in turn, through the monadic functions f, g, h, etc.

So, okay, you've got a short-hand for monadically binding functions together. Great. Good on you, and have a latte.

The thing here is this.

In 'plain-old' programming, you have this ever-present fear that a NullPointerException is going to ruin your life, so you can't write:

obj.getA().getB().getC().thenDoSomething()

getting an A from obj, then getting a B from that A, then getting a C from that B, then you do something with that C.

I mean, you can write that code, but if any of those objects is null: obj, A, B, C, your whole statement explodes.

No, worse than that: your entire system blows up, and you have to deal with irate customers wondering why they got a 503-Service Unavailable error when they submitted their credit card information but forgot to enter their zip code, or some such.

So you have to wrap that statement in a try-block, and try it and see if it works, and if it doesn't, you have this whole envelope, this whole burrito shell of code you have to write to handle the uninhabited case.

Now let's look at the monadic function binding again:

m >>= f >>= g >>= h >>= ... etc.

What happens if m is null, or the result of f m is ...

No. Wait. Monads don't have nulls. Monads are just monads, so the above binding ...

... okay, wait for it, ...

just.

works.

In fact, in the Kleisli category, it is guaranteed to work.

It. just. works.
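Here's that getA/getB/getC chain as a Haskell sketch in the Maybe monad (the types and accessors are hypothetical stand-ins, mine, not from any real system):

data Obj = Obj
data A = A
data B = B
data C = C deriving Show

getA :: Obj -> Maybe A
getA _ = Just A

getB :: A -> Maybe B
getB _ = Nothing    -- simulate the missing middle object

getC :: B -> Maybe C
getC _ = Just C

pipeline :: Obj -> Maybe C
pipeline obj = getA obj >>= getB >>= getC   -- Nothing. Quietly. No exception, no try-block.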

If you're not a Java(script) programmer, you're like: "Well, duh, yeah, that's how it's supposed to be! 1 + 1 = 2, not 'error' or whatever."

You can say that, and expect that, in a pure mathematical domain (with monads implicit in the mathematics for you. You're welcome.). But a Java programmer, with any experience, any hard-won experience, should be (rightly) awed.

"Wait. You mean, it just works? How is that possible?"

Yeah. How is that possible? That Java code just works?

It doesn't. But when I translate the Kleisli category down into a Java implementation, and then lift every Java object into that category, it's not a Plain-old Java Object (POJO) any more; now it's a monad in the Kleisli category, and I operate on it as such from then on.

Well, then, it just works.

Some people look at my monadic harness and say I'm unnecessarily complicating things.

I beg to differ.

"But, geophf, I just want the start date; all this stuff just complicates things!"

But what do you want the start date ... for? If you just want the start date, then

pojo.getStartDate()

will get you there, and you're done.

But if you want to base any logic off it? Is your start date the start of a work-flow? Then if the start date is null, your system, which you unit-tested with start-date values, now crashes in production, and you have no idea why, because you unit-tested it, and the code looks good.

But that one null wrecks everything.

I don't have any nulls. So I lift my start date up into a monad, and start my work flow, and my work flow works; and if the start date is null, then the start date isn't lifted, the functions fall through, because they're not working with a monad, and I just go on to the next thing.

And I coded zero lines of code around that.
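A sketch of what I mean, with hypothetical names (your real workflow would differ):

startWorkflow :: Maybe String -> Maybe String
startWorkflow mDate = mDate >>= validate >>= schedule
  where
    validate d = if null d then Nothing else Just d
    schedule d = Just ("workflow scheduled for " ++ d)

-- startWorkflow (Just "2014-04-15") == Just "workflow scheduled for 2014-04-15"
-- startWorkflow Nothing             == Nothing   -- falls through; nothing crashes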

Your try-catch blocks looking for nulls everywhere ...

Well, what's more complicated now? What code works in production, regardless of dirty or bad data?

Monads are a guarantee, and, as burritos go, they are quite tasty. Give them a try!

p.s.:

Now there's the whole conversation around monads that carry no value intentionally throughout the entire monad. So, for example, if the monad type is 'Quark' then the monadic inhabited types are up, down, strange, charmed, top and bottom, and their values are ... nothing at all: their instance is the value; there's no underlying value carried (unless you want to get into the guts of spin and charge ...). It's the particular monad instance, or the particular quark, we care about, not what's inside at all, something like monads as enumerated values, but ... eh. Monads-as-burritos is a limited view; it will trip you up, but for the most part it works. When you do get to the point of treating monads-as-monads, and not caring, anymore, about the underlying value, you can tap yourself on the shoulder with monadic Nirvana. It's a really good feeling, and gives an entirely new understanding of monads and their utility. "Good to know," as it were, but monad-as-burrito works for the most part, and is tasty, too, to boot.

Monday, April 14, 2014

'L' is for the Lambda-calculus

'L' is for the λ-calculus

So, nearly halfway through the alphabet. How are we all doing, are we going to be able to go from 'A' to 'Z' in one month? Tune in tomorrow to see the next compelling and exciting installment of the #atozchallenge!

So, this is the big one: a way of looking at mathematics that has been seething under the surface, but that, with the λ-calculus (pronounced 'lambda calculus'), has now bubbled up to the surface.

So, we've seen that the formulas, or the functions, are the things that concern mathematics, and not so much the objects operated on, and that turned world-thought on its head. Well, the λ-calculus came to be to address a very particular concern that mathematicians worry about (mathematicians are such worriers. Except me, I'm carefree as a hummingbird, flitting among rose bushes and lemon trees, gathering nectar to keep my heart rate at three-thousand beats per minute.) (no, ... wait.), and that concern is this: in the function

f x y z = x + 2y - 3z

How do we know x is x, y is y and z is z?

That is, what if I juxtapose y and z, mixing the two up, as it were? Is there something, anything, that enforces the ordering of the arguments to a function?

In set theory, you see, the set {1, 2, 3} is equivalent to the set {3, 1, 2}, or any permutation: two sets are the same if they have the same members, so there's no ordering imposed.

Now, you could enforce an ordering, by adding another layer of sets:

{{1, 1}, {2, 2}, {3, 3}}

And this ... 'means' (I say 'means' quite loosely, you see) that "the first number is 1, the second number is 2, and the third number is 3."

But it doesn't at all, because, as sets are order-independent, how can we guarantee the ordering argument is the first one in the set? We can't. So we add another layer of sets:

{{{1}, 27}, {42, {3}}, {{2}, 21}}

Which says: "The first argument is 27, the second argument is 21, and the third argument is 42."

You see what I did there? I mixed up the ordering of the sets and even their ordering arguments, but since the order is a subset of the subset, you're still able to get that information to order the sets as you read through them.

This gave mathematicians headaches, because mathematicians are wont to get headaches about things, because the above set is not the set {27, 21, 42} (or any of its permutations), but something quite different, and the function

f x y z

is not the function

f {{z, {3}}, {{1}, x}, {y, {2}}}

(or any of its permutations)

because the first one, f x y z, has three arguments, but f {...whatever...} has one argument, which is a set of ordered arguments.

Ugh. Try working with that!

Some people have enough problems working with f x y z, already!

So, how to solve this?

Two of the most popular ways that mathematicians solved this problem are:

1. get rid of variables altogether. And numbers, too. And all symbols, except some very primitive functions which we'll predefine for you, so instead of the above function

f x y z = x + 2y - 3z

we have something looking like this:

BBBVKI xxxx

Okay, you can reason about that, it comes as its own Gödel numeral, so you can have fun with it that way, but ...

But that's hard for a lot of people to swallow or to know what you're talking about, or to have a conversation about the veracity of the statement you've constructed.

2. The other approach is this: partial functions.

The function f x y z can actually be represented as three functions

Some function f that takes a variable x given the result of the partial functions of y and z, or, in 'math words':

f x = something + x

where 'something' is

g y = 2y + something else

and 'something else' is

h z = - 3 z.

See?

So when we string these functions together to get our total answer (instead of three partial answers), we have

f x . g y . h z = x + 2y - 3z

Which gets us our original equation back, and guarantees that our f x is getting x, because it's not getting anything else; our g y and h z are getting their y and z, respectively, too, for the same reason. Once you reduce a function down to one of only one argument, then you know the argument you're getting is the right one for that function.
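This is, by the way, exactly what Haskell does under the covers: every multi-argument function is really a chain of one-argument functions. A sketch:

f :: Int -> Int -> Int -> Int
f = \x -> \y -> \z -> x + 2*y - 3*z   -- three nested one-argument functions

-- f 1 2 3 == 1 + 4 - 9 == -4
-- and f 1, f 1 2 are perfectly good (partial) functions in their own right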

Problem solved.

Class...

is...

dismissed!

(And all the students stand up and applaud at the end of the lecture.)

That solves that problem, neatly, too.

But then it magically solved a bunch of other problems, too.

Because now that you're going through your math-space in your spaceship of imagination, just like Carl Sagan through his Cosmos, then it becomes totally natural to start composing these very simple functions together to get more complex ones, and you build what you're really after from these very simple building blocks.

And it was Church, the inventor of the lambda calculus, or maybe his English buddy, Turing, who invented the U-combinator, aka the 'universal combinator,' also called the Turing-combinator in his honor, who showed that all you needed was the lambda (λ), the variables, and 'application' (the gluing of them together), and you got out of that ... everything.

And by everything, I mean, everything: numbers, counting, addition, and all math operations after that, and then, from number (again), knowledge-representation.

λ succ . λ z -> z  (zero)
λ succ . λ z -> succ z (one) (and counting, too)

You just named the things, like 'z' and 'succ,' but you could name them anything, and they would ... 'mean' the same thing.

This naming to make numbers came to earn its own name: they were called 'Church numerals,' after their inventor.

And the Church numerals actually mean the numbers they name, because

λ succ . λ z -> succ (succ (succ (succ z)))

Is the Church numeral 'four,' but if you put in a chicken for 'succ' you have:

λ succ[succ/chicken] . λ z -> succ (succ (succ (succ z))), or

chicken (chicken (chicken (chicken z)))

And there you go: your four chickens. Throw those puppies on the grille and you've got yourself a nice tailgate party for you and your friends to go with 'z,' the keg.

(What else would 'z' stand for, I ask you.)

(And the keg would definitely be 'zeroed' out, I tell you what.)
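Here's the chicken-math as a runnable Haskell sketch of Church numerals:

type Church a = (a -> a) -> a -> a

zero :: Church a
zero _ z = z

four :: Church a
four s z = s (s (s (s z)))

-- four (+1) 0    == 4        -- feed in the real successor and a real zero
-- four ('c':) "" == "cccc"   -- feed in a chicken instead; four chickens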

The really neat thing?

There's a one-to-one correspondence between combinatory logic and the λ-calculus.

For if you have the λ-term K such that

K = λ x . λ y -> x (dropping the 'y' term)

And the λ-term S such that

S = λ x . λ y . λ z -> xz(yz) (distributing application over the forms)

Then combining the λ-terms K and S in the following way:

SKK

gets you λ x -> x

Which is the identity function, or the I-combinator, just like combining the combinators S and K in the exact same way gets you the same result.

Surprising. Shocking. Really neat!
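You can watch SKK behave as the identity in a Haskell sketch (simply-typed, so this is just the typeable fragment of the story):

s :: (a -> b -> c) -> (a -> b) -> a -> c
s x y z = x z (y z)

k :: a -> b -> a
k x _ = x

-- s k k 42 == k 42 (k 42) == 42   -- SKK is I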

These two things, the λ-calculus and combinatory logic, were developed around the same time: the lambda calculus to provide proper ordering of variables, and then to demonstrate that all you needed were the variables, and nothing else, not numbers, nor addition, nor anything else; just λ-abstraction and function application got you to numbers and everything else.

Well, combinatory logic argued that all you needed were the functions. It had no variables whatsoever.

Two opposite mathematical languages, but they ended up meaning the exact same thing!

Fancy that!

So, if you proved one thing in one of the languages, because it was easier to express the forms in that way, then the exact same proof held in the corresponding (and opposite) mathematical language.

The other really neat thing is that once you've captured formulas in these building blocks, you can even express logic in these formula, so not only do you have:

λ x . λ y . λ z -> x + 2y - 3z

in the λ-calculus, but you also begin to build logic statements of truth, like: 

For every x in the above formula, x is a number.

It was later found out that these statements of truth are simple propositions that type the λ-terms.

Ooh! Typed-λ calculus, and on top of that, using the λ-calculus, itself, to type ... well, itself!

AND THEN!

And then came this dude, Per Martin-Löf (Swedish professors call each other dude, but collegially. That's a well-known fact.), and he said, 'Yo! Homies!' and then lifted the type-system to predicate calculus, where a type depended (or was predicated on) other types or the instances of the type itself.

So, he would write about the chickens (that came after the eggs, as you recall), that:

For every succ in the below formula, given that every succ is a chicken, then for every z below, then the z must be a keg.

λ succ . λ z -> succ (succ (succ (succ z)))

The very type of z depends upon the instances ('inhabitants') of the type of succ, knowing one gives you the other, dependent on the foreknowledge.

Cool!

So 'cool,' in fact, that you could actually program with just the types and not have any instances or spelled-out formulas at all, and, guess what? People have actually done that: created programs that only had type descriptors, and the program somehow magically figured out what it was supposed to do from those type descriptions.

How did it figure this out? The types themselves only allowed one way for the program to flow from its start to the end.

Magic! A program with no code actually doing something!

(Cue spooky Twilight Zone thematic music.)

And that brings us right back to 'meaning' ... (and my quotes around 'meaning').

(Question: when you're indicating that you're quoting a word, such as the word 'meaning' ... is that quoted word quoted (again) because you're quoting that you're quoting that word?)

(Just wondering.)

What does 'succ' and 'z' and 'x' and 'y' and the other 'z' and ... everything else ... mean?

I could save that for my entry 'M' is for Meaning, but I'm not going to write that entry, because 'M,' of course, is for 'Burrito' (a.k.a. 'Monad,' but we'll get to that later).

So I'll tell you now:

Nothing.

None of those symbols mean anything at all in mathematics. Succ is an abbreviation for 'successor function,' which means 'add one to a number,' but, for example:

λ succ . λ z -> succ (succ (succ (succ z)))

could mean, just as well, 'apply the function named "succ" cumulatively to some function named "z"'

Because that's what is actually happening there, in the lambda calculus. The lambda calculus doesn't 'care' what 'succ' or 'z' 'mean.' They are just functional variables to be applied to each other; the above function could just as easily have been written with the symbols 'x' and 'y':

λ x . λ y -> x (x (x (x y)))

Same functional application, same effect. No meaning.

Just like 'succ' doesn't 'mean' successor function, nor does it mean 'chicken.'

In mathematics, it is vitally important that the symbols mean nothing, and by nothing, I mean absolutely nothing. All that matters is the mechanics: there are symbols and how you move them around in your system, and that's it.

Why is that so vitally important?

Because if the symbols mean nothing, then they can represent anything, in which case math truly becomes the universal language: you encode your knowledge base as symbols ('O' is for ontology), you manipulate those symbols following the rules of manipulation you established a priori, and you come up with a result. Many times, it's a surprising result, so surprising that you get people telling you, to your face, 'that can't be done.'

Then you translate those symbols back to what they were used to represent, or you apply your knowledge to those symbols a posteriori to the symbolic manipulations, and you do those things that you were told are impossible for you, or anyone else, to do.

But you just did it, despite the fact that 'it can't be done.'

The non-mathematician then has the opportunity to do something they've never done in their lives, not at home, not at school, not in college, and especially not in their jobs:

They have the opportunity to learn something.

About the world, yes, but the world is as how they see it, so they have the opportunity to do something even better. They have the opportunity to learn something about themselves.

Math is hard.

Why?

Because, to 'get' math, you have to 'get' yourself, or, as is most often the case: you have to 'get' over yourself and your preconceived notions of what the world is and how the world works. Math will fail you as hard as you're failing, and, when you transcend yourself, it will succeed in places you never could before and let you see things you've never seen, ... if you are brave enough to look.

The λ-calculus, by the way, is the basis for every electronic device that exists today, and for every programming language used today. A programming language is judged as 'Turing-complete,' and it must be that to solve anything more than trivial problems (which, unfortunately, most programming tasks aren't even, anymore: they're just simply data storage and retrieval and 'ooh! look at the pretty colors on this web-page!' 'Can I get it in corn-flower blue?' 'Why, yes! Yes, you can!' 'Good, good!'). Turing based his 'machine,' which was a conceptual model of a universal computational machine, on an equivalent to Church's λ-calculus (Turing and Church were two dudes who were buds).

Church developed this very simple calculus to address a very small concern. It's simple math.

And it changed the world into the one we now live in.

'L' is for this world we now live in. Thanks to a little bit of mathematics.

Oh, and p.s. ... HA! I finally wrote a complete post. Yay!

Saturday, April 12, 2014

'K' is for Kappa-calculus


'K' is for κ-calculus

This is an epic story of epic epicness!

No, really.

So, way back when, this guy named Church (Alonzo was his Christian name, and he liked going to churches)

(Actually, that's just conjecture on my part.)

Okay, everybody, stop everything and go to this wikipedia link on the Frege-Church ontology, right now.

Have you read it?

Well, what are you waiting for, read the post, for goodness sake!

I'M WAITING!

...

Okay.

So, I read the post, and I was all like skyr, you know? No whey!

Like, totally! Like. I mean, like, wikipedia, wrote that?

So, I quizzed my girls:

"Girls!" I shouted excitedly, "What's the farthest planet from the Sun?"

They were like, "Whaaaaa?"

Then they asked, like, "Papa!" with arched eyebrows, "why do you want to know what's the farthest plant from the Sun?"

Hm. I, like, have to work on my, like, diction.

(We'll get to the reason behind the paroxysms of 'like's momentarily.)

So, I clarified, "Planet, planet!" I shouted, "What's the farthest planet from the Sun!"

EM, the older, and therefore smarter one (she has to be, you see), said, "Pluto. No, wait. Pluto's not a planet. Neptune, then!"

"HAAAAAA!" I screamed.

Elena Marie looked at me like her Papa had finally gone off the deep end.

Well, finally off the deep end, but this time, in a never-to-return, gone-off-the-deep-end kind-of-way.

But before I could explain my victorious berserker scream ...

(Berserker, n.: etymology: Icelandic, "bear-skinned")

(Question: what is the etymology of the word 'etymology'?)

(Did you know Icelandic is an insular Nordic language? They can read the poetic and prose eddas straight off, just like that. Could you read a 1,000-year-old document? No, you can't, unless you're an Icelander. So there.)

(But I digress.)

(As always.)

(The neat thing here is that this is a digression from a digression! And it's not even leading us back to the topic, but further away) (just as this digressionary turn discussing my digressionary habits).

(Digressions within digressions within digressions.) (Move over, Frank Herbert, you dead old guy, you're not the only one who can write 'Wheels within wheels within wheels' and have a really bad movie with a Sting cameo made out of your book!)

BUT BEFORE I COULD EXPLAIN my berserker-scream, Li'l Iz piped right up.

"How can you know what's the farthest planet of the Sun, huh, Papa? There're other planets in this galaxy, and what about other galaxies, huh?"

She had me there. But never confuse the point you're trying to make with facts.

WHEN I REREAD what I wrote to my girls, they sighed, appreciatively, EM said, "How ironic can that be?" and walked off to her room to read another book before her ballet recital in forty-five minutes.

I asked Li'l Iz, "Was that, like, ironic?"

She rolled her eyes and deigned not to answer, returning to her own book.

SO, the POINT of all this is:

Pluto is not a planet!

Whew!

I'm so glad that the ravings of a madman in Borderlands 2 would actually have significance to mathematical logic theory. The things one learns by reading blogposts by the geophfmeister.

(That is moi-self) (That is French) (No, it's not.)

Ooh! Digression!

When I was working with some Ph.D.'s and cognitive scientists, I worked with this flaming-red-haired dude (cognitive scientist) named Bill. He would always say 'Gratzi' to people answering his questions, I would always respond: "That is French."

He always gave me weird looks, but he never got the nerve to correct me.

Until one day, he said 'Gratzi' to one of our analysts, and I said, 'That is French.'

She looked me dead in the eye and snarled a disdainful, "Who cares?" before stalking off.

Bill was on the floor, suffering a paroxysm of laughter.

I think that was the best day of his life.

ANYWAY!

Pluto, not planet ... It's not even a plant.

Besides which, Eris is more massive than Pluto, anyway. So there.


SO!

The Lambda calculus ('L' is for Lambda calculus) addressed a certain set of concerns, which I'll get to in Monday's post, but it, like mathematics in general, has its own set of problems; that is, you can write a function that never terminates, and so it is undecidable.

So, along came this little kid by the name of Masahito Hasegawa (dunno, maybe he's Italian?), and he noticed something, and that was this: the problem of the lambda calculus is that it's actually two calculi intertwined. And when you disentangle them, you have two distinct calculi, neither of which is undecidable.

The first of the calculi, the one we focus on here, is the κ-calculus, or contextual calculus.

The κ-calculus is this.

The types are pairs of unit types:

τ ::= 1 | τ × τ | ...

So, you see from the types that it's either nothing (the 1, or unit type) or it's a (crossed, or cartesian) product of types.

The functions only take these types, NOT functional types, so we're safe in that we're only working with unit types.

The primary operation is the lifting function, which takes an instance and lifts it into the type of concern.

lift_τ x -> (a, x), where a is of type τ

Then there's the κ-abstraction, which, when acting with an instance, carries it along:

κ x . (some function (τ1, τ2) -> τ3) : τ3

Tying lift and the κ-abstraction together gives us our function 'application,' where actually no application happens; it's all just composing (in this case) tuples.
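A loose Haskell sketch of that flavor (mine, and only the flavor: Hasegawa's actual κ-calculus is a typed, first-order calculus, not Haskell):

lift :: a -> (x -> (a, x))
lift a = \x -> (a, x)       -- pair the chosen value onto whatever context arrives

addPair :: (Int, Int) -> Int
addPair (a, b) = a + b      -- a first-order function between tuple types

-- addPair (lift 3 4) == addPair (3, 4) == 7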

Friday, April 11, 2014

'J' is for the noble basis

'J' is for the noble combinator basis, JI

;)

... and I've got to stop falling asleep mid-stream-of-consciousness/brain-dump (not 'bran'-dump! as that's something else entirely: 'brain'-dump!)

I'm going to entitle this article JILL after a lovely lady I knew, Ernimagilda Mucciaroni. 'Ernimagilda,' it turns out, is a nickname for Maria Theresa.

Santa Theresa, pray for us.

So, all combinators, and, by extension, a set of functional logics can be expressed by two combinators, the S:

S = \ x y z -> xz(yz)

and the K

K = \ x y -> x

combinators. Given those two combinators, and only those two combinators, you can rewrite any of the other combinators, like, for example, the I combinator:

I = \ x -> x

is SKK

Or the B combinator:

B = \ x y z -> x(yz)

is S(KS)K (check it: S(KS)Kxyz = S(Kx)yz = Kxz(yz) = x(yz) = Bxyz).

Great, we've got it all.

Problem. The K combinator, the constifier, drops a combinator from the discussion. So, if we're proving something, and we come across an inconvenient truth, we simply apply the K combinator, and just drop it.

What if that instance of truth, that we just dropped, was the only instance in our universe of discourse?

Well, you could apply the continuation function and, going back in time, get it back. Or, without the continuation function, you could simply invent another universe, or reconstruct the one you were working with, with that truth back in there, and you're good to go; you can carry on from there.

Some philosophers have a problem with this, understandably: the 'oh, you can just go back and redo it if you get it wrong the first time or if you find you needed it later'-excuse smacks of sloppiness, and in mathematics you have to be rigorous; you can't be sloppy. Imagine, for example, if they had been blasé in constructing the Hilbert spaces. How much buy-in would there be if the disclaimer were added: "Oh, if something goes awry, just start over from the beginning and hope for the best. That should work"?

Yeah. No.

So, some cite the dropping operator as the foundational problem: if there exists a truth in your space, why would you want to drop it? You'll only have to reconstruct it later on, anyway.

So, from that concern there arose the discussion around the 'noble' basis, or: how can we do what we need to do without discarding things simply because they appear inconvenient or seem expedient to drop? After all, the Hilbert space is ... pretty big. There is room in this mathematics to carry around truths, regardless of their current utility. So, what combinators are necessary but also not destructive? Can this even be done?

It could. Four combinators showed themselves to be of high utility in that they could construct the other combinators, except K, and they were S, M, B, and I, or S, M, C, and I, depending on the preference of the proponent. Either set worked tolerably well.

But four combinators? Couldn't there be a reduced set that allowed one to use the noble set? After all, the SK-basis was originally the SKI-basis until someone brilliant realized that simply using S's and K's the I could be independently reproduced and therefore was unnecessary to form that basis.

It turns out that in the noble SCMI-basis (or also in the SBMI-basis), there was: the J. The J combinator is defined as follows:

J = \ x y z w -> xy(xwz)

The J in combination with another combinator would be able to reproduce the S (obviously), the C or the B (both, actually) and the M. Do you see which combinator?

Well, the T combinator:

T = \ x y -> yx

is JII:

JII = \ z w -> II(Iwz) -I-> wz = T -- simplest realization
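You can check the typeable part of this in a Haskell sketch (only the early steps: the later self-applicative derivations, like M = \ x -> xx, don't type-check in plain Haskell):

j :: (a -> b -> b) -> a -> b -> a -> b
j x y z w = x y (x w z)

i :: a -> a
i x = x

-- (j i i) z w reduces to w z, which is T:
-- j i i 3 succ == succ 3 == 4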

The R combinator:

R = \ x y z -> yzx

is JT or J(JII):

JT = \ y z w -subst-> Ty(Twz) -T-> (Twz)y -T-> (zw)y = R

So, C from J, T, and R:

C = \ x y z -> xzy

(J(JII)(J(JII))(J(JII)))

or

RRR ... HA!

because RRRabc = RaRbc = Rbac = acb !

I = I -- simplest, simplest realization

M = \ x -> xx

But to get M, B would really help, so B is:

B = \ x y z -> x(yz)

B = C(JIC)(JI)xyz = (JIC)x(JI)yz = JICx(JI)yz = IC(I(JI)x)yz = C((JI)x)yz = JIxzy = Ix(Iyz) = B

So, M from J, I, B, C, R, T:

J(Ra)III = (Ra)I((Ra)II) = I((Ra)II)a = IIaa = aa = Ma, which, in combinators, is:

RI(RI(RI(BJR))) ... RI moves the desired system down the chain ... for the 'J' anyway.

which is RI(RI(BJR))aI = RI(BJR)aII = BJRaIII = J(Ra)III

So, now S:

S = \ x y z -> xz(yz)

So, the strategy here is to move the 'w' to before the x and then 'I' it out, or:

JabcI ... no, because that gets us:

ab(aIc), and that is NOT what we want. We want: