Wednesday, April 30, 2014

'Z' is for the ζ-calculus

'Z' is for ζ-calculus  (pronounced 'zeta-calculus')

So, 'Z' is the last letter of the alphabet, unless you're Greek, then ζ is the sixth letter. The last letter of the Greek alphabet is Ω.

But we already did Ωm-Ωm-Ωm. Ram-Ram-Sita-Ram-Sita-Ram-Ram, so we're good there.

As α denotes the beginning of things, Ω (which on a Mac, you get by selecting option-Z. Fitting, that) denotes the end of things. That's why everybody back then got it when Jesus said: "I am the α and the Ω."

He was the package deal.

But look at you all, doing your A-to-Z challenge faithfully. Did you survive? Are you shell-shocked?

To put this into perspective, my mom was a journalist for local news for our local newspaper. She did the A-to-Z challenge, every day (except Sundays), every month, year-after-year.

So you've completed your A-to-Z challenge. Congratulations! Now, lather, rinse, repeat, every month, for the rest of your life, right?

And you thought one month was hard?

Okay, last entry for my A-to-Z guide book for surveyors, explorers and tourists of mathematics.

Okay, so the lambda calculus is the commingling of the κ-calculus (the contextual calculus) and the ζ-calculus (the control calculus), which we talk about in this post (disclaimer/citation: I get most of my information on the ζ-calculus from Hasegawa's seminal 1995 paper on the κζ-calculi). A good deal of attention, when it has been given at all, has been focused principally on the κ-calculus...

'And why not!' say most. In the κ-calculus you get things done. There's no application, but through composition of (first-order) functions and first-order tuple-types, numbers, counting, addition, and all the rest come out of it.

With the control calculus you make functions that take functions as arguments.

τ ::= 1 | (τ ↠ τ)

So reduction (to your answer) takes the continuation-passing form: a function in the ζ-calculus, f : a → b, is rendered as the ζ-term:

ζ f : a → (c ↠ b)

Which is to say, the zeta-term f takes a functionally-typed argument, a, and returns the continuation (c ↠ b).

Now, we got the κ-calculus to work by lifting function arguments to tuple-typed terms, and we saw that with the implementation of addition (here). Now, instead of lifting unit types to tuples, in the ζ-calculus we pass a function on x to the zeta-term by using functional composition, i.e.:

pass(c) o (ζ x . f) ~> f[c/x]

To be able to construct a function in the ζ-calculus we have the function-constructor, code:

given f : c → d and x is the 'constifying function' : 1 → c
we have f o x : 1 → d

from that we have code:

code(f) = ζ x . (f o x) : 1 → (c ↠ d)

Boom! code, then, is a function that creates a function from nothing (the '1' type).
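Here's a hedged little sketch of pass and code in Haskell, using my own toy encoding rather than Hasegawa's actual syntax: a 'constant' 1 → c becomes () -> c, and a ζ-term becomes a function waiting for one of those constants.

```haskell
-- My own toy encoding (an assumption, not Hasegawa's syntax):
-- a 'constant' 1 -> c is modeled as () -> c,
-- and a zeta-abstraction as a function awaiting such a constant.
type Const c = () -> c
type Zeta c d = Const c -> Const d

-- pass(c) o (zeta x . f) ~> f[c/x] becomes plain application:
pass :: Const c -> Zeta c d -> Const d
pass c zf = zf c

-- code(f) = zeta x . (f o x), lifting an ordinary function to a zeta-term:
code :: (c -> d) -> Zeta c d
code f x = f . x
```

With that encoding, pass (const 5) (code (+1)) reduces to the constant function returning 6, which is the f[c/x] substitution rule in action.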

But, okay, what can you do with the ζ-calculus?

Well, since we don't have unit cartesian types, like we have in the κ-calculus, then the answer to that is, well, ... nothing, really.

I mean, it's possible to have functions represent unit types and then start composing them to build an algebra, but this approach is rather unwieldy. The ζ-calculus exists to show that the λ-calculus is perfectly decomposable into the κ-calculus and the ζ-calculus, and so control is usually mapped out in the ζ-calculus (although some directed flow is possible in the κ-calculus, itself alone, as we've seen) (link).

For example, creating the numbers zero and one is possible in the ζ-calculus, but when we try to represent two (that is, functionally, double application), we get this:

x : 1 → c
z : 1 → (c ↠ c)  pass(x) : (c ↠ c) → c
  pass(x) o z : 1 → c  -- so we've got our constant continuation (step 1)
pass(pass(x) o z) : (c ↠ c) → c -- now we've got a continuation-consumer
pass(pass(x) o z) o z : 1 → c
ζ x . (pass (pass(x) o z) o z) : 1 → (c ↠ c) -- continuation-creator for the right-hand side
dupl = ζ z . ζ x . (pass (pass(x) o z) o z) : 1 → ((c ↠ c) ↠ (c ↠ c))

And there we have it, the dupl(icate) function, or the number two, in the ζ-calculus.
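For contrast, in the plain λ-calculus (well, in Haskell), 'two as double application' is a one-liner; the dupl term above is doing this same job, just with continuations:

```haskell
-- Church-style 'two': double application of a function,
-- which is what dupl encodes with continuations in the zeta-calculus
two :: (c -> c) -> (c -> c)
two f = f . f
```

two (+1) applied to 0 gives 2, as a Church numeral should.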


Well, you did say you wanted to see it. I did, too. It's a morbid fascination of mine, watching this train wreck transpire.

The Takeaway here

But that's a reveal. Math isn't necessarily easy (shocker!) nor is it necessarily neat. Here we see in the ζ-calculus that it's difficult to express things (like, for example, the number two, or any unit types).

So, if you've got a correspondence (a provable one), all you have to do is to change categories to one where you can express what you need to simply and cleanly.

The application works here, and now, too.

I'm doing things slow and stupid, because I'm doing them the way I've always done them, the way I see how they're done.

Change categories. Somebody is doing what I'm doing, but doing it well. Somebody isn't doing what I'm doing, because they've moved onto better things — more sublime problems — ... I can do the same thing. All I have to do is change my perspective, how I see things, and then, seeing them anew, I can do what I need to do, what I want to do, simply and cleanly, then translate back down to the way I always do things, ...

Or hey, just move on, after having moved up.

It's called a 'lifting function.' It lifts an object from one category to another one, and in being lifted, the object is changed, and what it can do (its morphisms) is changed. And you can change it back 'down' (the co-adjoint function) or you can stay in the newly-lifted category, or you can lift again.

Math is neat that way, once you see that as a possibility.

Life can be like that, too.

Life is neat that way, once you see that as a possibility.

'Z' is for the ζ-calculus.

And with 'Z' we're all done doing our A-to-Z petit survey of mathematics and logic. I hope you had fun, for I had a blast writing these posts.

Thank you for reading them.

Tuesday, April 29, 2014

'Y' is for Y-Combinator

So, the Y-combinator is from the lambda calculus. It is of the form:

Y = λ f → f (Y f)

Wow. The thing about the Y-combinator, also called the fixpoint function, is that it's a big mess, type-wise, because sometimes you'll get no value out of it (as it does not terminate for many cases of functions) or sometimes you'll get multiple values out of it.

But what is it supposed to do? It's supposed to find the 'fixed' point of a function, the point of the function where it is 'fixed,' or comes to rest.

This is usually the case of a function with multiple cases, a case 'at rest' and an inductive case, like our good old friend, factorial:

factorial x = if (x = 0) then 1 else x * factorial (x - 1)

for any natural number x.

The fixed point of this factorial function is one:

factorial x (where x = 1) = x

If you put this into the Y-combinator, you get:

Y factorial = factorial (Y factorial)

and when you apply the natural number 1 you get

Y factorial 1 = factorial (Y factorial) 1

which comes to rest at 1 = 1

for any other number, this doesn't come to rest.

So, the Y-combinator is a very simple thing to define (as you saw above) and there's nothing in that definition about recursion (explicitly), but in computer science the Y-combinator is often used to model and to study recursion because of both its restful and (infinitely) inductive properties.
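In Haskell, this idea goes by the name fix, and you can watch the recursion fall out of a definition that never mentions recursion explicitly (a sketch; Data.Function already ships a fix, this one is hand-rolled):

```haskell
-- the Y-combinator idea, Haskell-style: the fixed point of a functional
fix :: (a -> a) -> a
fix f = f (fix f)

-- factorial without explicit recursion: fix ties the knot for us
factorial :: Integer -> Integer
factorial = fix (\fac x -> if x == 0 then 1 else x * fac (x - 1))
```

factorial 5 is 120, and the 'restful' case, factorial 0, is 1.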

Okay, that's Y-combinator, the combinator from combinatory logic.

So, there's another Y-Combinator, and that's the name of a startup company headed up by Paul Graham (the Paul Graham, who wrote On Lisp).  I had the pleasure of meeting Paul at the Little Languages conferences at MIT back in 2008/2009 time-frame. He had just started Y-Combinator after selling Yahoo!-store to Yahoo! for 40 million dollars.


Nice, personable guy.

At that same conference I met Joe Armstrong, the Joe Armstrong, who invented Erlang (another name from mathematics, cf. Markov chains and queuing theory as used in telephony) after having sold his Erlang company for 40 million dollars.

Nice, soft-spoken guy. Some pretty 'weird' ideas about Concurrent-oriented programming ('weird' is a direct quote from wikipedia when describing the properties of the Y-combinator).

What do these guys have in common ... besides working really hard on something everybody else told them didn't work and would never work? That, and 40 million dollars, each.

I mean, Paul Graham wrote web services (before 'web services' existed as a term) in Lisp, of all things. He was afraid to tell others about using Lisp, for if they found out, they'd start to copy him and program in Lisp, too.

Nobody programs in Lisp, by the way. Just stupid students going to comp.lang.lisp asking "How do I reverse a list in Lisp? Please answer right away; homework due tomorrow morning."

Paul was worried somebody would find out, until his partner, Robert Morris, laughed at him: "Nobody cares! Everybody's programming in Java and has an Oracle back-end. They're following 'best practices;' they'd never use Lisp. That's why they're going to fail, and why we're going to succeed!"

Paul chilled after that pep-talk.

And Joe Armstrong? Ericsson told him: write us a concurrent language.

So he did.

It didn't look like Java, and it was untyped (just like Paul Graham's Lisp), so Ericsson fired him (actually, they didn't fire him; they just mandated that Erlang was not to be used at all in Ericsson anymore, so he left. Yeah. They fired him). He asked if he could keep Erlang. They said, "Erlang? Whatever. Sure. Take it and go."

He took it and went.

Erlang supports 60,000 simultaneous connections. Joe showed it by connecting to his Erlang web server.

Netscape webserver crashed after 1,000 connections. It's a Unix thing. So, you know, Java ...? Same problems.

Joe showed his stuff around. Somebody bit. Cell phones? You need to support a lot of concurrency. A lot. He made a lot of money until Ericsson bought him back. Seems they needed his technology, after all.

What's weird about these guys?


Paul Graham's Y-combinator has started up scribd, Dropbox, and reddit.

His startups, in total, are valued at over $13.7 billion. The startups had their own original ideas and did all the work; all Paul Graham and his family and partners did was open the door, give them a tiny little bit of money, and listen to them.

Joe. All he did was go against the conventions of 'everything has to be typed' and 'shared memory' and what threads are, and he kept at it until he got something working, and he made it work with stunning results. And still he was rejected.

These are two men who believed in what they were doing, regardless of what everybody else was telling them was so, and they kept at it through success and disappointment.

And they are both generous with their time and are soft-spoken. Yes, they believe what they believe, and stand up for it, but they don't start confrontations, they don't even finish them. They just walk away and get to work, and make it work.

I see it all the time. That is: very rarely. This one guy, Brian Kernighan, at work: he wasn't given resources and was told the acquisition process would take forever, or at least until long after the due delivery date for the project.

So he bought dedicated hardware, set up a web server, and ran the application from his house.

This was for a big, big, really big cable company. Really big.

He was a water-walker. Why? Because he did what he needed to do, regardless of what everybody told him what couldn't be done.

You. Me. Every day we're told 'no' or 'later' or 'that can't be done' or 'you can't do it.'

You. Me. Everybody. We can choose to listen to what we're told, and to believe it, or we can go ahead and do what we see needs to be done, but nobody's doing it, because everybody says it can't be done.

The Y-combinator is a mathematical function that does what math functions aren't supposed to do: sometimes it gives no answer, or it can give multiple answers. A function returns just one answer, that's all. The Y-combinator doesn't. And it's modeled within mathematics using straight-up math.

It can't be done. But it's doing it.

You. You're told every day, 'it can't be done.' You can listen, and be like everybody else.

Or you can do it. And be what nobody else can be: you.

'Y' is for Y-combinator. 'Y' is for you.

I've been 'X'-marks-the-spot'ed! Oh, noes!

So, 'X' is for 'X' marks the Spot. It could have been for X-multiplication for division by fractions ...

1/2 ÷ 1/3 = 3/2, or 1 1/2 (cross-multiplying gives you (1 × 3) / (2 × 1), as Mr. Milardo, my math teacher in high school, would chant: "Criss-cross; Applesauce!" as we did fractional division)
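As a two-liner (fractions as numerator/denominator pairs; my own throwaway representation, nothing canonical):

```haskell
-- criss-cross, applesauce: (a/b) ÷ (c/d) = (a*d) / (b*c)
divFrac :: (Integer, Integer) -> (Integer, Integer) -> (Integer, Integer)
divFrac (a, b) (c, d) = (a * d, b * c)
```

divFrac (1, 2) (1, 3) gives (3, 2), Mr. Milardo's 3/2.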

But that would be that post and there it is and there you have it.

Where's the adventure and drama in that? Math is exciting and original, so why would I retread boring, rote stuff like that, which everybody knows math isn't!

(Uh, wait, geophf: most people think math is boring and rote)


But that's not what this post is about. This post is about what happened at 5 pm at work in a meeting yesterday ...

... and why do these things always have to happen at 5 pm in a meeting at work, the one you could've just skipped out on, saying you had to catch your bus? But did you do that? No. And see what it gets you? Trouble, and more Trouble, that's what!

Trouble with a capital-'T'

('T' is for big Trouble)

('Hey, big Trouble.' 'Hey, little trouble.' Ah, the story of Siegfried and Brünnhilde, as retold by Quentin Tarantino)

So, yesterday at work, at 5 pm, we were having an innocuous little meeting about the layout of a chooser-window, allowing our users to select which columns to include, which not to, and which order they want to see them in.

With 600 columns, this isn't a trivial concern. Or it is trivial, but it's also a major time-sink for our users, having to deal with stuff that isn't a business concern.

Anyway, innocuous little meeting. I was bored out of my mind.


Our manager asked: "But what are we doing for learning our users' preferences in the application?"

Me: "Nothing."

Everybody else, all excited: "Well, we could do this and that and this and that!"

The Boss: "But what are we doing now for this?"

Me: "Nothing."

Then the boss goes on this five-minute tirade about how everywhere on the internet, it just knows what you want and how you want it, and how? And we're building applications straight out of 1993 and our users have to deal with this (euphemistic word for excrement), and what are we going to do about this? Who is going to add intelligence to our application?

And he glared right at me.

Somebody read my resume. Nobody reads resumes. Nobody reads emails. Nobody reviews code. I was totally and completely safe here being a 'senior Java developer,' slinging code and recommending 'best practices.'

Somebody read my resume.

I sighed. "I'll take this on."

Boss: "Good."

He got up and left. Meeting adjourned.

I've been played. My strengths have been played to.

My company is 'utilizing my full potential,' as it were.

I checked my shirt and my back. I didn't see a big-ole 'X' targeting me.

But it was there.

Okay, new day, but now I get to think about Markov chains and Google Analytics and neural classifiers ... at work ... and they're paying me to do this research.

Yeah. New day.

Tomorrow we'll look at the letter Y, the combinator and the company.

Sunday, April 27, 2014

'W' is for It's My BIRTHDAY today! ... and Comonads

'W' is for 'Whaaaaat...?' (Comonads) ... and 'W' is for 'It's my birthday today! YAY!'

That's actually a 'Y' in 'YAY,' but when you string two Y's together: YY, you get something that looks like a W ... or two martini glasses next to each other, husband and wife, take your pick.

And 'Wife' starts with the letter 'W,' so there you go!

Wife, n: ORIGIN Old English wīf [woman,] of Germanic origin; related to Dutch wijf and German Weib.

All very ... Vikingesque.

... In a good way.

So, 'W' is for 'It's my party and I'll cry if I want to, cry if I want to, cry if I want to! You would cry, too, if this happened to you. Da-da-da-da-da-dat!' (youtube vid)

But I don't feel like crying at all! I (ain't) happy, I'm feeling glad, I've got sunshine, in a bag, I'm useless, but not for long, my future is coming on! [Gorillaz cartoon clip]

So, yeah, that.

Okay, so to the (mathematical) topic. Why do I say 'W' is for comonad? And what is a comonad, anyway, and what is the meaning of life, the universe, and everything?

That last one is a piece of cake:


I'm so glad Deep Thought covered that ground for me. Otherwise I would have looked like I didn't have the answer to something, and would've had to have made something up to assuage the masses!

Like ketchup!

But anyway, why are we even having a blog-post day for the letter 'W,' when, after all, it's as plain as day that there's no such thing as the letter 'W'! I mean, 'double-u' is two 'v's together, and even 'v' is a constructed thing from 'u' (in the Greek there is no 'v,' they use 'β,' so it's Blad, the impaler), so 'v' and 'w,' today and yesterday, are silly posts for silly days for letters that don't even exist!

AND it's my birthday today, and did I get cake? Huh? Did I? Huh? Or am I here, all alone, writing this blog entry about the non-extant letter 'W' and comonads? Huh? Huh? I ask you!

(Cue entry of my daughter, EM, bearing gifts of gold, frankincense and cake.)

Ooh! Cake! What flavor is that?

EM: Dark chocolate mousse.

Ooh! My favorite!

Excuse me a moment whilst I indulge after much being-sung-Happy-Birthday-to.


Okay, I'm back.




Okay, geophf, how in the world does 'W' have anything to do with Comonads?

Okay, so comonads are the dual to monads, and, in the literature, monadic types are represented by 'm' and monadic functions are prefixed with a little-m or suffixed with a capital-M, depending on their mode of use:

msum :: MonadPlus m => [m a] -> m a
liftM :: Monad m => (a -> r) -> m a -> m r
mapM_ :: Monad m => (a -> m b) -> [a] -> m ()

You see how that works? m-functions work with monads, simplifying them, whereas M-functions work with non-monadic forms, putting them into the monadic domain, complexifying simpler objects.

So that explains the 'M' for monads ...

... although there is the whole question lingering as to why they are even called 'monads' (one thing) as they are the triple:

(M, μ, η) where:

M is the monadic type,
μ is the join-function such that join :: Monad m => m (m a) -> m a; and,
η is the unit-function such that unit :: Monad m => a -> m a

(the unit-function, in Haskell, is known as the return-function)
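To make μ and η concrete, here they are written out by hand for the Maybe monad (a sketch; Haskell's Prelude spells them join and return):

```haskell
-- mu, the join-function, for Maybe: flatten one layer of structure
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just m) = m
joinMaybe Nothing  = Nothing

-- eta, the unit-function, for Maybe: inject a plain value
unitMaybe :: a -> Maybe a
unitMaybe = Just
```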

Monads from mathematics were originally called 'triples,' because that's what they are; monads from philosophy mean something else entirely, so occasionally we get a logician in the mathematics forum asking what the hell we're all talking about!

So, 'monads' (misnamed) are represented by 'm' for their type-families.

So, comonads (mis-co-named) are monads' duals, so their type-families are represented by the inverted symbol of monads: comonads have the type-variable w.

Get it? Got it? Good!

Now, I could go all 'w' is 'ω' and say that it represents Ohm, and Ohm's laws, ...

but then this post would be all chanting:

Ohm, Ohm, Ohm, Ohm, Ohm, Ohm, Ohm, ... (youtube link)

And there's something in that for some, I suppose, but not here nor now.

Here and now we're talking about the comonad.

The Comonad is the dual of the monad, so it is a co-triple of the form:

(W, δ, ε) where:

W is the Comonadic type
δ is the duplicate-function such that duplicate :: Comonad w => w a -> w (w a); and,
ε is the extract-function such that extract :: Comonad w => w a -> a

For monads, you can only push 'plain-old' objects up into the monadic domain, and once there, you never leave it. You can't get an object out of a monad.

For comonads, you can only extract 'plain-old' objects from the comonadic context, and once extracted, it's free of the comonad. You can't make comonads from plain-old objects.

For monads, they have join and monadic-bind, stringing together operations in the monadic domain:

(>>=) :: Monad m => m a -> (a -> m b) -> m b
m >>= f = join (fmap f m)

(given that the monad is a functor, as well)

For comonads, they have duplicate and comonadic-extention, extending the comonadic context over the computation:

(=>>) :: Comonad w => w a -> (w a -> b) -> w b
w =>> f = fmap f (duplicate w)

(again, when the comonad is a functor as well)
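Here's a minimal, self-contained sketch of the co-triple, with one concrete instance: the 'nonempty tails' comonad on lists (my own toy encoding, and partial on the empty list, as list comonads are):

```haskell
import Data.List (tails)

class Functor w => Comonad w where
  extract   :: w a -> a        -- epsilon
  duplicate :: w a -> w (w a)  -- delta

-- comonadic extension, as above
(=>>) :: Comonad w => w a -> (w a -> b) -> w b
w =>> f = fmap f (duplicate w)

-- a (nonempty) list viewed as 'the current row and everything after it'
newtype Tails a = Tails [a] deriving (Eq, Show)

instance Functor Tails where
  fmap f (Tails xs) = Tails (map f xs)

instance Comonad Tails where
  extract (Tails (x : _)) = x                                 -- the current row
  duplicate (Tails xs)    = Tails (map Tails (init (tails xs)))  -- every suffix
```

Tails [1,2,3] =>> extract gives back Tails [1,2,3], which is the extract-after-duplicate law in action.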

So, like, Monads and comonads are, so, like, totes the opposite.

Dude. (The Dude Abides)

For me, personally, I didn't 'get' comonads for a long, long time. What was the point of them? I wondered, but now I see them everywhere.

The object-model of object orientation is not so well-understood as the categorical representation of functional programming, but one approach is the ς-calculus (Abadi and Cardelli's object calculus).

That's one way of seeing OOP. I see it, perhaps in the same way, but, I don't think so. I see object-orientation with inheritance from the Art of the MetaObject Protocol, and from that perspective, objects and inheritance are totally comonadic. You have the computational context of the parent class and you extend that to your child class (or your child class extends the computational context).

Totally comonadic.

For-loops, modeled by folds, yes, but these are also an entirely comonadic thing. You're extending the context of your loop over the life of your computation.

Comonads are everywhere!

... except not maybe so much in if-then-else ... monads fit that so much more naturally, or a transformation on the Either type.

Either way.

Comonads. Don't worry about it if you heard about them but you just don't get them. I was there for years. Just keep doing what you're doing, 'normal' functional programming, and you're way ahead of the game. Monads and applicative functors? Go to town! Arrows? You rock.

But, for me, one day it came to the point that I was monad-ing myself to death, and I wondered, 'is there a better way?'

And then that's when I looked at comonads, again.

So, where do I use them now? and for what? and how?

The 'how' is so trivial it's really pointless to go over them. Most people, for this very reason, look at comonads, and say, 'yeah, but ... that's the map-function, right?' and, eh, move onto other things.

Kind of like how monads should be used.

"Eh, but that's a function, right?" and eh-move onto your next task.

But monads are so novel for most people, that we're still in the honeymoon phase, and we will be there, for, oh, another fifty years or so (it will take that long for the very idea to be introduced and then absorbed into the mainstream programming culture).

But until that time, there'll be this love-affair with monads, with the dual-neglect of comonads (note that on the wikipedia page, comonads are a footnote to monads. This dismissive treatment of comonads is not uncommon). Sigh.

I use comonads to look at things in other ways.

Say I have this list-of-lists:

Date,Open,High,Low,Close,Volume,Adj Close
[2011-03-24,62.52,62.99,59.75,62.63,5398500,62.44], ... etc.

And instead of that raw data, I wish to observe some trends in it:

Date,Open,High,Low,Close,Volume,Adj Close,sma 15,ema 12, ema 26, accum_over_distr,adx 20
[2011-03-24,$62.52,$62.99,$59.75,$62.63,5398500,$62.44,62.63,61.73466415405274,61.73466415405274,4198833.333333336,100.0], ... etc

Comonads allow me to do this quite simply by using a little, simple technique called regression:

> regression :: ([Row] -> a) -> [Row] -> [a] -- your classic comonad
> regression f rows = rows =>> f 

That takes a formula, in this case an indicator, such as the SMA (simple moving average) or the EMA (exponential, or weighted, moving average), scans the entire screen, computes the result for each row, and returns the entire screen to you again, now enhanced by the comonadic function applied throughout.
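Rendered on plain lists (with tails playing the role of duplicate), that regression sketch comes out like this; sma here is my own stand-in indicator, not the actual one from the screen above:

```haskell
import Data.List (tails)

-- comonadic extension on lists: apply f to each suffix of the rows
regression :: ([row] -> a) -> [row] -> [a]
regression f rows = map f (init (tails rows))

-- a simple moving average over (up to) the next n rows
sma :: Int -> [Double] -> Double
sma n xs = let w = take n xs in sum w / fromIntegral (length w)
```

regression (sma 3) [1,2,3,4,5] yields [2.0, 3.0, 4.0, 4.5, 5.0]: one smoothed value per row, with each row seeing the rest of the screen.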

That's what a comonad is. You have this whole big thing, like a stock-screen of GMCR ('Green Mountain Coffee Roasters'), and the comonad takes the entire thing to operate on, and gives the result a unit (or a row) at a time.

Comonads are perfectly-fitted to the uses for which I intend them, now that I see them, finally, and know what they are and how to use them. I have this big old thing. I want to see it in a different way, and I know what result I want piece-by-piece, but, unlike monads, which deal with one piece at a time, the comonad gives you the entire scope of the computational context, so if you need the previous row, for example, you have it, and every other row in the history.

'W' is for Comonad. Don't worry if you don't 'get' them. They are there. They'll wait for you.

It's just that when you do start using them, you'll see their usefulness in a lot more places that you had heretofore neglected to use them, and you'll find, going forward, your work is cleaner and simpler because of them.

Friday, April 25, 2014

'V' is for Variable

'V' is for ... I don't know what, because 'V' is for 'Variable'

HA! Saved by the variable.

I cannot believe I didn't know, at first, what 'v' was for, but there it was, all along, so much present I didn't even see it: the variable.

That happens to you? Something is so in front of you all the time, you don't even notice it until it's gone? It happened to Emma, Darwin's wife (yes, the Darwin): she had a 10-year-old girl whom she loved, and then, one day, she was gone. Alive and sweet and playful one day, and then in a casket the next, never to breathe or to eat or to sing or to cry again, just to be dead, then buried six feet under the ground, and then forgotten, just like Emma is now, just like Charles the Darwin is now not. But he will be. Given enough time and enough distance, everything shrinks to insignificance, and then to invisibility, just like the 'next' number after π.

Okay, side note, this font? Verdana? I love it. I freekin love it, except for one small nagging detail. Why in the world does it make the Greek letter pi, π, look like a frikken 'n,' for goodness sake?

Okay, switching fonts now.

Let's try Geneva. π. Better, much better representation of π. Okay, moving on with this new font.

Okay, so anyway, there, always there, but then gone and forgotten.

Except in Eternity. That's why we have it. In eternity, there always is, and always will be. There is being and that's all there is in Eternity.

Math is eternal. A statement of truth in mathematics is true before (we just didn't know it yet), it's true now, and it will be true forever (in that particular domain). In mathematics you'll always have Fermat's thingie, first known as a conjecture, then as his Last Theorem, and now known as 'solved.'


In mathematics, you're granted eternity. Euclid, anyone? He's been going strong these last two-thousand, three-hundred years. Eratosthenes? He was just some dude at the Library of Alexandria, but he noticed something about how to find prime numbers, and his technique is by no means the best or most efficient now, but which one is the go-to algorithm for primes at school or nearly anywhere? And who knows his name and the association? Today?

Who will know your name and what you did, twenty-six hundred years from now?

Okay, wait. Geneva font doesn't have italics?



Okay, trying Helvetica now (π). Okay, this is working so far ... (Oh, and don't you dare say, 'oh, geophf, why not just use Times New Roman like everybody else does?' Don't you dare say that! Do I look like everybody else? No? Then don't ask the question why don't I do X like everybody else does! News flash: I'm not like everybody else! THANK you!)

(But I digress.)

(As always.)

So you have this thing, called a variable, that is prevalent in mathematics and in computer programming languages. So prevalent, in fact, that it's taken for granted and not even seen any more, just used, and abused, and that's it.

But, what, ladies and gentlemen, is a variable?

We can revert to the epigram: "Constants aren't; Variables don't" from antiquity.

Or we can say: "Variables? Just say no."

I'm being fatuous (as always), but, as always, there's a kernel of truth I'm trying to convey here.

You all know that the 'a variable is a cup' that teachers teach you in lesson 2 of the high school class that they unknowingly mockingly name 'Computer Science I.'

Computer 'science'? Really? Where is science being done in computer science classes these days in high schools? If there were experimentation and discovery going on, then, yes, they could rightly name these classes (and these, okay, really? 'Labs'?) computer science. But if it's all 'Okay, class, turn to page 204 of your ... and do the exercises there' rote drilling, then it's not computer science, it's more like 'big brother grinding out mindless automatons so you can work for the Man until you die I.'

I mean, come on, let's be honest, please.

Anyway. But I digress.

As always.

What I'm (trying to) say(ing) here is that there are different representations of what variables can be, including the 'pure' approach and the 'logical' approach, and the approach you probably learned in your comp.sci class, which told you you can assign your variable, change its value, and do all sorts of weird and wonderful things to variables that have no correspondence to actual mathematics.

For example

int x = 5;
x = x + 1;

What is x now?

'Well, 6, geophf, obviously,' you say.

Oh, really? Let's look at this again, mathematically:

x = 5

Okay, fine, x is 5. No problems with that.

Then this:

x = x + 1

Okay, what in the world is that? For:

 x = x + 1    (theorem)
-x = -x         (reflexive)
 0 = 0 + 1    (addition)
 0 = 1          (addition)

And now you're left with a statement that 0 = 1.

So either that is true, meaning that x = x + 1 is true, or that is false, meaning that x = x + 1 is senseless, or absurd.

Take your pick: either zero equals one, or you can't, logically, 'increment' a number to 'become' another number.

So why do you think you can get away with that with variables?

Because in your comp.sci. I class you were told a variable is a 'cup' and you can increment it, and you told that stupid girl in the back of the class to shut the hell up after the third time she said, all confused, 'but, I don't get it.'

That actually happened in an intro to comp.sci class.

That 'stupid' girl in the back actually isn't all that stupid. She didn't 'get it' because there was nothing, logical, to get. It's an absurdity taught in comp.sci. classes (that aren't comp.sci. at all, I'll reiterate) that you all just buy because you were told this, like everything else you were told, and you're gonna get that 'A' from teach for the class, arencha?

But that's how most programming languages model variables: something that can change, extralogically, too, at that. So why am I making a big stink about this, and about other extralogical features, such as (shudder) assert?

Because when you introduce things outside your logical domain, then you have a hole in your proof, and a hole, no matter how small, means that you can no longer say anything with certainty about the correctness of your system.

So, how should variables behave then, if they were logical?

'Constants aren't [that's your problem, bud]; variables don't.'

Variables in computer programs shouldn't vary. In the context of their computation, once bound, they should remain bound to the binding value while in that computational context.

(Please, let's not talk about lexical scope versus dynamic scoping of variables. That conversation died with the invention of Scheme.)


x = 5

But what if you need that variable to change? You ask.

Why? I ask back. Let's think this through. Why would you need to have something change during the course of a computation? Because one value didn't work and you want to try another value?

That's easily solved, simply restart the computation in that case, then you rebind the variable with the new trial case.

For example,
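A sketch of that restart-and-rebind idea (my own toy framing of it: hunting for the first x whose factorial exceeds some limit):

```haskell
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)

-- x is bound afresh on each trial; within any one call it never changes
firstFactorialOver :: Integer -> Integer -> Integer
firstFactorialOver limit x
  | factorial x > limit = x
  | otherwise           = firstFactorialOver limit (x + 1)
```

firstFactorialOver 100 1 rebinds x to 1, 2, 3, 4, then 5 (where 5! = 120 > 100), each binding immutable for the life of its own call.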


Is x 1? Obviously not, so just keep calling that function with a new x until you get your greater-than-100-factorial value from your trial number. But x, inside the computation is that value, and not some other value.

By the way, I had this as an interview question: write a function that returns a factorial of a number greater than X.

... as a fault-tolerant web-service.

The questions of 'Okay, why?' and 'Who would want this, anyway?' were not part of the conversation.

This is all very comonadic ('W' is for comonad ... fer realz, yo! That'll be my next post), because here you control the value in the computational context as well as the computational context itself.

'Variables are comonadic' as the title of one of my '... are comonadic' articles? Why not?

The same thing, in a very different way, happens in Prolog. Once a logical variable is bound (that is, once it's no longer 'free') it remains bound in the logical context, and you either complete the proof, exiting the context, or you fail back up through the proof and in so doing restart somewhere upstream from the variable binding, which allows you to bind a new value to the logic variable.

A logic variable is variable insofar that it is either in a bound state (has a value) or in a free state (and is waiting to be bound to a value), and this happens through some kind of unification ('U' is for Unification).

Variable-binding in Prolog is purely logical.
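Haskell's list monad gives a rough sketch of that fail-and-rebind discipline (an analogy to Prolog's backtracking, not Prolog itself; the names are mine):

```haskell
import Control.Monad (guard)

-- Each branch binds x and y exactly once; a failing guard abandons the
-- branch, and the next branch gets a fresh binding, never a mutation.
solutions :: [(Int, Int)]
solutions = do
  x <- [1 .. 5]       -- bind x
  y <- [1 .. 5]       -- bind y
  guard (x + y == 6)  -- fail back up the 'proof' if this doesn't hold
  return (x, y)
```

Within any one branch, x is x, immutably; the 'new value' only ever arrives by failing back and binding afresh upstream.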

'Variables that don't change, geophf? What use are they? That totally doesn't model the real world!'

Uh, it does model the real world, and much better than your imperative-stateful variable-state changes. I mean, once a thing is a thing, it remains that thing forever more. You ever try to 'change the state' of an electron to a proton? You ever change a hydrogen atom into helium?

Actually a 19-year-old kid built a fusion reactor in the toolshed in his back yard. People kept telling him that he 'couldn't' do that. He loved that when he was told that over and over again.

And then submitted his working fusion reactor to the google science fair. 

Do people get attention for listening to nay-sayers? No, they get beaten down to become a nay-sayer.

Did this young laddie get attention at the google fair?

... and beyond?

Yes. Oh, yes, he did.

Where were we? Oh, yes: variables. How did we get here? Oh, yes: changing something to be something else.

You 'can't' do that. So don't bother, sayeth the nay-sayer.

And, if you absolutely, positively must change a variable to become something else, well, I got something for ya! It's called a State variable.

newtype State s a = State { runState :: s -> (a, s) }

What's going on here?

State isn't a (mutable) variable, actually. It's a function.

Functions rock, by the way.

This function takes a state-of-the-world, whatever you deem that to be, and returns a value from that state as well as the new state-of-the-world that it sees.
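As a tiny, hand-rolled instance of that shape (a sketch; tick is my name for it), take a counter whose state-of-the-world is an Int:

```haskell
-- A state transformer in the raw s -> (a, s) shape: return the current
-- value as the result, and an advanced counter as the new world.
tick :: Int -> (Int, Int)
tick n = (n, n + 1)

-- Threading the world through by hand, binding each step exactly once:
threaded :: (Int, Int)
threaded = let (a1, s1) = tick 0   -- a1 = 0, s1 = 1
               (a2, s2) = tick s1  -- a2 = 1, s2 = 2
           in (a2, s2)
```

Each intermediate world s1, s2 is a fresh binding; nothing is ever overwritten.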


So, what you can do is string this state-'variable' from one state to the next and get yourself an approximation of a mutable variable, just like the ones you hold so dearly in your 'ordinary' programming languages, so your x = 5 example now works:

do put 5
   x <- get
   return $ x + 1

Where 'put' is your assignment and 'get' retrieves the value of x.

And those two statements are strung together, computationally, even though not explicitly so in the above example.

Here, I'll make it explicit:

put 5 >> get >>= \x -> return $ x + 1

The monadic-bind operation (>>=) strings the computation together, giving you a logical trace of the computational state-domain from its initial state (put 5), to its retrieval (get), to its use (return $ x + 1). There is not one place where a variable actually changes its value, and the entire computation is logically sound from beginning to end.
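Here's a runnable version of that chain, using the State monad from the standard mtl library, where runState threads the state function through the whole computation:

```haskell
import Control.Monad.State

-- put 5 binds the state; get retrieves it; nothing is ever overwritten.
bumped :: State Int Int
bumped = put 5 >> get >>= \x -> return (x + 1)

-- runState bumped 0 yields (6, 5): the result 6, and the final state 5.
```

The initial state (0 here) is irrelevant, because put 5 rebinds the world before anything reads it: a trace you can verify, end to end.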

The above is a statement of truth and is provable and fully verifiable.

The below statements, however:

x = 5
x = x + 1

are, indeed, statements, but I've shown one of them is false, so the whole system crumbles into absurdity from that one falsehood.

And people wonder why computer programs are so hard to write and to get right.

It's only because of something so simple, and so pervasive, as mutable variables.

See you tomorrow.