Wednesday, April 23, 2014

'T' is for Theorem-proving


So, you've been theorem-proving all your life.

Let's just get that one out there.

When you're given a math problem that says:

"Solve x + 1 = 2 (for x) (of course)"

And you say, "Why, x is 1, of course," quick as thinking, you've just proved a theorem.

Here, let me show you:

x + 1 = 2 (basis)
     -1 = -1 (reflexive)
x + 0 = 1 (addition)
x       = 1 (identity)

Q.E.D. ('cause you rock)

So, yeah. Theorem proving is this, correction: theorem proving is simply this: going from step 1, to step 2 to step 3 until you've got to where you want to be.

How do you go from step 1 to step 2, etc? The same way you do everything! You follow the rules you're given.

Let's prove a theorem, right now.

So, in classical logic, we have a theorem that says

p → p

That is, if you've got a p, that implies (that you've got a) p.

Toughie!

But proving that? How do we go about doing that?

Well, in classical logic, you're given three basic axioms (thanks to sigfpe for his article "Adventures in Classic Land"):

1. p → (q → p)
2. (p → (q → r)) → ((p → q) → (p → r))
3. ¬¬p → p

So, p → p. How do we get there? Let's start with axiom 1 above.

1. p → ((p → p) → p)
axiom 1 (where q = (p → p))
2. (p → ((p → p) → p)) → ((p → (p → p)) → (p → p))
axiom 2 (where q = (p → p) and r = p)
3. (p → (p → p)) → (p → p)
modus ponens (from steps 1 and 2)
4. p → (p → p)
axiom 1 (where q = p)
5. p → p
modus ponens (from steps 3 and 4)

Q.E.D. (a.k.a. whew!)
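By the Curry–Howard reading, that little proof is runnable code: axiom 1 is the K combinator, axiom 2 is S, and modus ponens is plain function application. A sketch in Haskell (the names `axiom1`, `axiom2`, `identity` are mine):

```haskell
-- Axiom 1 (the K combinator): from p, derive q -> p.
axiom1 :: p -> (q -> p)
axiom1 p _ = p

-- Axiom 2 (the S combinator): distribute an implication.
axiom2 :: (p -> (q -> r)) -> ((p -> q) -> (p -> r))
axiom2 f g x = f x (g x)

-- The five-step proof above, collapsed: identity = S K K.
-- Modus ponens is just applying one proof to another.
identity :: p -> p
identity = axiom2 axiom1 axiom1
```

The type checker replays the proof for us: `identity` really does have type `p -> p`, with no way to build it other than the axioms we used.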

I used something called modus ponens in the above proof. It says, basically, that if you've proved something that you're depending on, you can drop it. Or, logically:

p → q, p
       q

Or: we have "p implies q," and we've proved p, so now we can just use q.

Now, there's been a lot of thought put into theory, coming up with theorems and proving them, and this has been over a long time, from Aristotle up to now. The big question for a long time was that ...

Well, theorem proving is a lot of hard work. Is there any way to automate it?

So there's been this quest to make theorem proving mechanical.

But to make theorem-proving mechanical, you have to mechanize everything, the axioms, the rules, and the process. Up until recent history, theory has been a lot of great minds spouting truths, and 'oh, here's a new way to look at the world!'

And that has been great (it has), but it hasn't helped mechanize things.

Then along came Frege. What Frege did was to give us a set of symbols that represented logical connectives and then symbols that represented things.

And there you had it: when you have objects and their relations, you have an ontology, a knowledge-base. Frege provided the tools to represent knowledge as discrete things that could be manipulated by following the rules of the (symbolized) relations.

He gave abstraction to knowledge and a uniform way of manipulating those abstractions, so, regardless of the underlying knowledge, be it about money or chickens or knowledge itself, it could be represented and then manipulated.

That way you could go from step 1 to step 2 to step 3, etc, until you arrived at your conclusion, or, just as importantly, arrived at a conclusion (including one that might possibly say what you were trying to prove was inadmissible).

Since Frege, there has been (a lot of) refinement to his system, but we've been using his system ever since because it works so well. He was the one who came up with the concept of types ('T' is for Types) and from that we've improved logic to be able to deliver this blog post to you on a laptop that is, at base, a pile of sand that is constantly proving theorems in a descendant of Frege's logic.

Let's take a look at one such mechanized system. It's a logical framework, and it's called Twelf. The first example proof from the Princeton team that uses this system is 'implies true,' and it goes like this:

imp_true: pf B -> pf (A imp B) = ...

That's the declaration. It's saying: the proof of B, or pf B, implies the proof of A implies B, or pf (A imp B).

How can you claim such a thing?

Well, you prove it.

imp_true: pf B -> pf (A imp B) =
[p1: pf B]            % you have the proof of B
% ... but now what? for we don't have the proof that A implies B
.

So the 'now what' is our theorem-proving adventure. Mechanized.

We don't have the proof that A implies B, but we have a logical loop-hole (called a hole): for any proposition C, we can simply posit its proof:

hole: {C} pf C.

Which says, mechanized, what I just said in words. So, with that:

imp_true: pf B -> pf (A imp B) =
[p1: pf B]
(hole (A imp B)).

And there you go.

... that is, if you're fine with a big, gaping loophole in your proof of such a thing.

If you are satisfied, then, well, here, sit on this rusty nail until you get unsatisfied.

So there.

Okay, so we have an implication, but we need to prove that.

So we introduce the implication into the proof, which is defined as:

imp_i: (pf A -> pf B) -> pf (A imp B)

SO! we have that, and can use it:

imp_true: pf B -> pf (A imp B) =
[p1 : pf B]
(imp_i ([p2: pf A] (hole B))).

So, we have the proof of A and that leads to B. But wait! We already have B, don't we? It's p1 (that is [p1 : pf B])

BAM!

imp_true: pf B -> pf (A imp B) =
[p1 : pf B]
(imp_i ([p2 : pf A] p1)).

DONE!
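Read through Curry–Howard, the finished Twelf proof is an ordinary Haskell function: a proof of `pf B -> pf (A imp B)` is a function `b -> (a -> b)`, `imp_i` becomes a lambda, and `p1` is the bound proof of B. A sketch (the name `impTrue` is mine):

```haskell
-- imp_true in Haskell clothing: given the proof p1 of B, build the
-- proof of A-implies-B by taking (and ignoring) a proof p2 of A.
impTrue :: b -> (a -> b)
impTrue p1 = \_p2 -> p1
```

It's the same shape as axiom 1 from the 'p → p' proof earlier, which is no accident: 'implies true' *is* axiom 1.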

This is what theorem-proving is: you start with what you want, e.g.:

imp_true: pf B -> pf (A imp B)

which is "implies-true is that if you have B proved, then you have the proof that (A implies B) is true, too."

Then you take what you've got:

pf B

And give it a value:

[p1 : pf B]

Then you apply your rules (in this case, implication introduction, or 'imp_i') to fill the holes in your proof, to get:

[p1: pf B] (imp_i [p2: pf A] p1)

Which is the demonstration of your proof, and that's why we say 'Q.E.D.,' or 'quod erat demonstrandum': 'that which was to be demonstrated.'

Ta-dah! Now you have all the tools you need to prove theorems.

Now go prove Fermat's Last Theorem.


(I am so mean!)

Okay, okay, so maybe Fermat's Last Theorem is a bit much, but if you want to try your hand at proving some theorems, there's a list of some simple theorems to prove.

Tuesday, April 22, 2014

'S' is for Simply Summing


'S' is for ... simply summing.

Okay, this post was going to be about symmetries and super-symmetries, then I was going to transition to sieves, like the Sieve of Eratosthenes, and how that relates to fizz-buzz, ...

But then I thought: 'eh.'

Yeah, that's what I thought: 'eh.'

I'm a moody brute, filled with deep, indiscernible thoughts, 'cause that's how I roll.

So, yeah: eh. This one will be a simple post today.

Sums.

Sum the number 1.

Yeah, I just said that; humor me.

Okay, the sum of 1 is 1. Toughie.

Next, sum 1 + 2.

3, right? Still with me?

Okay, 1 + 2 + 3.

Got it? 6. No probs.

Now: sum 1 to 10. No laptop help, just sum it yourself.

A little more fun, right? So, that's 55.

Okay, no computers: sum 1 to 100.

Hm, hm, hm. Done yet?

La-di-dah. You're not even trying, are you? You're just reading this post.

Okay, the sum from 1 to 100, inclusive, no computers is ... 5,050.

1 to 1000: 500,500.

Pattern? You bet.

1 to ... oh, I don't know: 123:

That's a mite harder, but, no computers:

6,200 + 1,240 + 186 = 7,626.

Okay, let's verify, by asking my Haskell interpreter: 

Prelude> sum [1 .. 123]
7626

Yeppers, I was right. I was also as fast as a person typing a program into their computer to solve it, if they were using a functional programming language, and much, much faster than a person using a computer if they weren't and called up Excel, for example (loading ... loading ... loading ... blue screen of death) (You really need to virus-clean your computer, because I see you don't have a Mac, do you, tsk, tsk!) or if they pulled out their HP calculator from their high school days because they are that old, boys and girls!

You see this thing in my hand, children? This is not a smart phone, it's a calculator.

Children: Ooh! Papa, can I borrow it for a sec ...

Children, borrowing it, the puzzling over it: Um, Dad, ... where's the Plants vs. Zombies 2 app?

Yeah, a calculator, it calculates. And that's it. No google, no nothing, just numbers and numeric operations.

Children, looking uncomprehendingly at the thing in their hands, then tossing the cootied thing away and running to their rooms, screaming, remembering with horror the time Dear Old Dad got out an LP and told them that music came out of it, but ... it didn't have playlists!

Yeah. Calculator. But would your calculator get you that sum that fast? I mean, after you go to the CVS to buy a new battery. Those things are long-lasting, but thirty years later? And you expect your calculator to still just work?

Pfft! Heh.

Pick a number (natural number), any number, I'll sum it for you, from the origin.

Sum 1 to 31415.

Okay

31,415 * 15,708 = 314,150,000 + 157,075,000 + 21,990,500 + 0 + 251,320 = 493,466,820

Okay, that took a bit longer, let's see what Haskell says:

Prelude> sum [1..31415]
493466820

Okay. Damn! ... Sometimes I impress even myself.

How do I do this? How do I add all the numbers from 1 to 31,415 and in under a minute? And perfectly, perfectly, correctly?

Well, I'll tell you.

But first, I'll tell you a story.

Once upon a time, there was a little boy named Gauss, and he was a naughty, little, precocious lad, always getting into trouble for dunking Susie Derkins' pigtails in the inkwell at his desk, and that would be fine if she had jet black hair, but you know the Viking types, right? All blond-haired, blue-eyed things, and getting your honey-blond long-flowing locks dunked into ink was not on the agenda for poor, little Susie, who cried, and had to be sent home to console herself with an appointment at the beautician's who thought the Goth-look actually fit Susie well, so she came back the next day as Suze-in-black-leather-and-studs which was disruptive for the class in other ways, so Gauss found himself at detention again.

But this time the math teacher had had enough of little Gauss' pronouncements and recitations of π to sixty-seven places, and decided to teach the little scamp but good.

Not that I'm channeling, or anything. It's not 'Mary Sue'-writing; it's 'walking in your character's moccasins a mile' ... even though Gauss probably didn't wear moccasins all that often, anyway.

Still with me?

So, mean-old Herr Schopenhauer told little Gauss: "Okay, twerp, this is gonna be a light one on you. You can go home after you sum the numbers from one to one hundred."

Huh? One to one hundred? But that will take all night!

"That's right," said mean old Herr Bergermeister Meisterburger, "so you'd better get crackin'!" And he cackled evilly with Schadenfreude.

The Head-Master had several names, don't you know, ... and a predilection for Schadenfreude with his afternoon tea and muffin.

So, what could little Gauss do? He got cracking, the poor lamb.

1 = 1
1 + 2 = 3
1 + 2 + 3 = 6
1 + 2 + 3 + 4 = 10 ... obviously, you just add the last number to the previous sum
1 + 2 + 3 + 4 + 5 = 15 ... now, wait a minute ...
1 + 2 + 3 + 4 + 5 + 6 = 21 ... there seems to be another pattern here ...
1 + 2 + 3 + 4 + 5 + 6 + 7 = 28 ... Gauss looked at the chalkboard, then ...

Got it! Little Gauss thought.

Then he wrote:

1 + 2 + 3 + ... + 98 + 99 + 100 = 5,050.

"G'nite, Teach!" Gauss crowed, and off he skipped home with Suze, where they shared a chocolate milk and a strawberry poptart ... before all that sugar glaze had been added to the crust (shudder).

And Herr Herren HerringSchönheit was left stuttering a "Vat? Vat? Git back here, you rapscallion!"

Okay, what's a rap-scallion? Is it a thug onion with 'tude, a glock, and a triple-platinum pedigree?

But all Herr whatz-his-name could do was sputter, because little Gauss got it.

The sum from one (zero, actually) to any number reduces to a simple formula:

sum [1..n] = n * (n + 1) / 2

Either n or n + 1 is even, so one of them is divisible by two, so it works.

Try it out:

1 = 1 * (1 + 1) / 2 = 1 * 2 / 2 = 1
1 + 2 = 2 * (2 + 1) / 2 = 2 * 3 / 2 = 3
1 + 2 + 3 = 3 * (3 + 1) / 2 = 3 * 4 / 2 = 3 * 2 = 6
... so obviously:
1 + 2 + 3 + ... + 31,415 = 31,415 * (31,415 + 1) / 2 = 31,415 * 15,708 = that big number I computed earlier. Yeah, almost five-hundred million!
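In the blog's own Haskell, little Gauss's formula is a one-liner (the name `gaussSum` is mine):

```haskell
-- Gauss's closed form for 1 + 2 + ... + n.
gaussSum :: Integer -> Integer
gaussSum n = n * (n + 1) `div` 2

-- gaussSum 100   == 5050
-- gaussSum 123   == 7626
-- gaussSum 31415 == 493466820
```

No loop, no summing: constant time, however big n gets.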

Put that into your "ceci n'est pas une pipe" and smoke it. I'd buy that for a dollar.

Now, there's something you can impress your friends at Happy Hour with.

Epilogue

Okay, so you can sum 1 to ... whatevs, but what if a friend asks you, very kindly, and very politely, 'Okay, Ms. Smartypants, how about summing 1,000 to 2,000? What's that, huh? Betcha can't do that! Huh? Amirite? Amirite? Lookacha sweating bullets now, arencha? So, well, what is it? Huh?"

Yeah, really politely. Like that.

Now you could say, "But-but-but, Gauss' summer[-function, not -time] doesn't work like that."

But then you'd have feet of clay and egg on your face.

And with this hot summer coming, you don't want egg on your face. Trust me on that one.

So. What to do?

Think about it, that's what.

You simply make your 1,000 your zero by scaling each number back by 1,000, then get your sum, then scale each number forward again by 1,000: for each time you applied the scale, you apply the scale to the result.

So, the sum of 1,000 to 2,000 inclusive is:

1,000 + 1,001 + 1,002 + ... 1,998 + 1,999 + 2,000 =

scaled back is

0 + 1 + 2 + ... + 998 + 999 + 1000 = 1000 * (1000 + 1) / 2 = 500,500

Now scale each number forward again, by adding 1,000 back to each number. You did that 1,001 times, right? (Don't forget to count the zero.) That's a rescalation (that's a word now) of 1,001,000. Add that to your sum to get 1,501,500.

There's your answer.
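The scaling argument above boils down to one line of Haskell: the sum from a to b is Gauss's sum up to b minus Gauss's sum up to a - 1 (the name `rangeSum` is mine):

```haskell
-- Sum of the integers from a to b, inclusive, via Gauss's formula.
rangeSum :: Integer -> Integer -> Integer
rangeSum a b = gauss b - gauss (a - 1)
  where gauss n = n * (n + 1) `div` 2

-- rangeSum 1000 2000 == 1501500
```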

Let's check:

Prelude> sum [1000..2000]
1501500

Bingo! Tickets! This must be the Front Row!

And you, very politely, just told your friend where they could put their superior 'tude, too.

Win-win! Good times; good times!

See y'all tomorrow. By the way, what's the number of moves needed to solve the Towers of Hanoi? Not that 'Towers of Hanoi' is going to be my topic for 'T.'

Not at all.

Monday, April 21, 2014

'R' is fer Realz, yo!


'R' is for Let's Get Real (the Real Number line)

... because this reality you're living in?

It isn't. It's entirely manufactured for you and by you, and you have no clue that you're living in this unreality that you think isn't.

It's as simple, and as pervasive, as this:

The 'Real Numbers'?

They aren't.

Let's take a step back.

First, as you know, there was counting, and that started, naturally, from the number one.

But even that statement is so obviously flawed and ridiculous on the face of it to a modern-day mathematician. Why are you starting with the number one? Why aren't you starting with the number zero?

This isn't an argument over semantics, by the way; this has real (heh!), fundamental impact, for the number one, in counting, is not the identity. You cannot add one to a number and get that number back, and if you can't do that, you don't have an identity element (and so no 'Ring'), and if you don't have that, you have nothing, because your number system has no basis, no foundation, and anything goes because nothing is sure.

But getting to the number zero, admitting that it exists, even though it represents the zero quantity, so technically it refers to things that don't exist (and therein lay a fundamental problem with the number zero for ancient societies) ... well, there was a philosopher who posited that the number zero existed.

He was summarily executed by Plato and his 'platonic' buddies because he had spouted heresy.

If number zero exists, then you had to be able to divide by it, and when you did, you got infinity. And, obviously, infinity cannot be allowed to exist, so no number zero for you.

We went on for a long, long time without the number zero. Even unto today. You study the German rule, then you learn your multiplication tables starting from which number? Not zero: one.

"One times one is one. One times two is two. One times three is three."

Where is the recitation for the "Zero times ..."?

And I mean 'we' as Western society, as shaped by Western philosophy. The Easterners, led by the Indians, had no problem admitting the number zero, they even had a symbol for it, 0, and then playing in the infinite playground it opened up, and therefore Eastern philosophy thrived, flourished, while Western philosophy, and society, stagnated, ...

... for one thousand years, ...

Just because ... now get this, ... just because people refused to open their eyes to a new way of seeing the world, ...

... through the number zero.

BOOM!

That's what happened when finally West met East, through exchange of ideas through trade (spices) and the Crusades (coffee served with croissants), and philosophers started talking, and the number zero was raised as a possibility.

BOOM!

Mathematics, mathematical ideas, and ideas, themselves, exploded onto the world and into thought. Now that there was zero, there was infinity; now that there was infinity, and it was tenable, people had the freedom to explore spaces that didn't exist before. People could go to the New World now, both figuratively and literally.

For growth in Mathematics comes from opening up your mind to the possibilities you wouldn't ('couldn't') consider before, and growth in Mathematics leads to opening the mind further.

Take, for example, the expansion of the counting numbers, from not admitting zero to, now, admitting it, yes, but then the fractional numbers. You could count fractionally now.

From zero to infinity there were an infinite number of numbers, but, with fractions, we now found that from zero to one there was an equally infinite number of fractions.

The really neat discovery was that if you put all the fractions in one set, and you put all the counting numbers into another, there was a one-to-one correspondence between the two.

An infinity of counting numbers was the same size as an infinity of counting-by-fraction numbers.
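That one-to-one correspondence can even be written down: walk the diagonals of the numerator/denominator grid, skipping fractions that aren't in lowest terms. A Haskell sketch (my own enumeration, one of many possible):

```haskell
-- Enumerate the positive rationals without repeats: for each diagonal
-- (numerator + denominator = s), list the fractions in lowest terms.
rationals :: [(Integer, Integer)]
rationals =
  [ (n, d) | s <- [2 ..], n <- [1 .. s - 1], let d = s - n, gcd n d == 1 ]
```

`take 6 rationals` gives (1,1), (1,2), (2,1), (1,3), (3,1), (1,4): every positive fraction gets a definite position in the list, and that numbered position is exactly the correspondence with the counting numbers.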

Wow.

So, infinity was the biggest number, fer realz, then, eh?

No, not really.

Because then came what we call the 'Real Numbers' (which aren't, not by a long shot), and then we found an infinite number of numbers between one-half and one-third.

But the thing with these numbers?

There were rationals (fractionals) in there, to be sure, but there were also irrationals.

There were numbers like π and τ and e and other weird and wonderful numbers, and the problem with these numbers was that there was no correspondence between them and the rational numbers. There was no way you could combine rational numbers in any form and point directly to τ, for example. These numbers were transcendental.

What's more: they were more. There were infinitely more transcendental numbers, irrational numbers, than there were rationals.

And not even countably infinitely more: they were uncountably more.

There was an infinite that was bigger than infinity, and this we call the Continuum.

Why? Because between zero and one and then between one and two there's this measured, discrete, gap, and this we use for counting. There's a measured, even, step between the counting numbers, and even between the fractional numbers: you can count by them, because between them there is this discreteness.

Between the Reals there's no measurable gap. You can't count by them, and you can't add 'just this much' (every time) to go from π to τ ...

(Heh, actually, you can: τ = π + π, but then what? What's the 'next' irrational number? There's no such thing as the 'next' irrational number, because no matter what 'next' number you choose, there will always be an uncountably infinite number of numbers between that 'next' number and the number you started from, so your 'next' number isn't the next number at all, and never will be.)

So, wow, the Reals. Lots of them. They cover everything, then, right?

Not even close.

There are numbers that are not numbers.

For example, what is the number of all the functions that yield the number zero?

There are, in fact, an infinite number of those.

How about all the functions that give you ('eventually') π?

... Ooh! There are several different ones to find π, aren't there?

Yes. Yes, there are. In fact, there are an uncountably infinite number of functions that yield π.

Now, wait. You're saying, geophf, that there are uncountably infinite number of functions to find each and every Real number and that the Real numbers are uncountable as well, so that means...

Yeah, the Continuum reigned supreme for just a while as the biggest-big number, but it was soon toppled by this Powerset infinity (my term for it, it's actually called something else).

Now, I don't know the relation between the functions that yield numbers, and the functions that construct functions that do that.

But do you see where we're going with this?

As big as you can stretch yourself, there's new vistas to see in mathematics (and meta-mathematics, let's not neglect that, now, shall we?).

But we still haven't scratched the surface.

Is the world quantized, like the rational numbers? Or is it a continuum like the Reals?

Or is something else, even something more?

Electricity.

Direct current comes to you in a straight, steady line.

The thing about DC ('direct current')? It sucks.

(Just ask Marvel.)

If you want clean, pure, ... powerful, ... well: power, over any kind of distance, you have to ask ... not our friend Tommy (Edison), but our wild-child Tesla.

He proposed to Edison that we should use AC ('alternating current') to provide electricity, and Edison threw him out of his lab, that idiot, telling him never to show his face there again.

Guess how your electricity is delivered to your home today?

The thing about alternating current? It's a wave-form, and not only that, it's a triple wave-form. How do real numbers model electricity? Well, with DC, you've got one number: "That there is 5 volts or 5 amps or 1.21 gigawatts."

Boy, that last one came out of the blue: like a bolt of lightning!

Heh.

But if it's alternating current, then you need the sine and cosine functions to describe your power. Functions? Wouldn't it be nice if it were just a number?

Yes, it would be nice, and there is a number to describe wave-form functions, like your alternating current.


They are called 'imaginary numbers,' because, if you look hard enough on the number line, with good enough eyes, eventually you'll see the number π or τ or e or 1, or 2, or 3, or even zero.

But no matter how hard you strain your eyes, you will never see a number with an imaginary component, because why? Because most imaginary numbers, being on the curve of the wave-form, are either above or below the number line. They 'aren't' numbers, then, because they're not on the number line.

They're imaginary.

I mean, come on! The square-root of negative one? Why would anybody do this? Unless they were daft or a bit batty.

The thing is, without imaginary numbers, we wouldn't have the forms to get our heads around alternating current.
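Here's the payoff in miniature, using Haskell's Data.Complex (the voltage and phase values are made up for illustration):

```haskell
import Data.Complex

-- A wave-form as one number: amplitude 120, phase 30 degrees (pi/6).
v :: Complex Double
v = mkPolar 120 (pi / 6)

-- Add the equal-and-opposite wave-form: plain complex addition,
-- no sines or cosines in sight. The two waves cancel.
cancelled :: Complex Double
cancelled = v + mkPolar 120 (pi / 6 + pi)
```

`magnitude v` and `phase v` recover the amplitude and phase whenever you want them back; in the meantime, wave-forms add, scale, and multiply like ordinary numbers.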

Most of the world, except those very lucky few who lived within a mile or two of a power plant, would be in darkness.

And the computer? Pfft! Don't get me started. Hie thee to the nunnery, because we are now back in the Dark Ages.

Or at most in the Age of 'Enlightenment' where you had to run for cover when the landlady bellowed "'ware slops!" ... unless you wanted raw sewage mixed with rotted fruit on your head.

But now, here we are, because we have both Real and imaginary numbers, together giving us the complex number set (which, it turns out, is not bigger than the Reals, as there is a one-to-one correspondence between each real number and each complex number. Fancy that! An infinity 'more' of complex numbers above and below the Real number line gives the same number of complex numbers as Reals).

We're good?

Not even close.

Last I checked, I don't live in Flatland, and, last I checked, nor do you.

Complex numbers go 'above' and 'below' the Real number line, but ... what about the third dimension? Are there numbers to model us in three dimensions?

What would such numbers look like?

And here's a stunner. If I were on Mars, or the Moon, and you were here, reading this blog post, how would I know where to look to see you?

The Moon, and Mars, too, have their own three-dimensional frames of reference, and the Earth has its own, too (it's called geocentric). So, to draw a line from a satellite (such as the Moon, known as 'Earth's satellite') so that it can look down at a spot on the Earth, you actually have to use a four-dimensional number to connect the two three-dimensional frames of reference so that one can look at the other. This four-dimensional number is called the Quaternion.
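To make that concrete, here's a minimal quaternion sketch in Haskell: the Hamilton product and the standard rotate-a-vector recipe q v q*. This is bare arithmetic, not any particular satellite convention; all the names are mine:

```haskell
-- A quaternion: w + xi + yj + zk.
data Quat = Quat Double Double Double Double deriving Show

-- The Hamilton product (not commutative!).
qmul :: Quat -> Quat -> Quat
qmul (Quat w1 x1 y1 z1) (Quat w2 x2 y2 z2) =
  Quat (w1*w2 - x1*x2 - y1*y2 - z1*z2)
       (w1*x2 + x1*w2 + y1*z2 - z1*y2)
       (w1*y2 - x1*z2 + y1*w2 + z1*x2)
       (w1*z2 + x1*y2 - y1*x2 + z1*w2)

conj :: Quat -> Quat
conj (Quat w x y z) = Quat w (-x) (-y) (-z)

-- Rotate a 3-vector by a unit quaternion: embed v as (0, x, y, z),
-- then compute q * v * conj q and read the vector part back out.
rotate :: Quat -> (Double, Double, Double) -> (Double, Double, Double)
rotate q (x, y, z) =
  let Quat _ x' y' z' = q `qmul` Quat 0 x y z `qmul` conj q
  in (x', y', z')
```

Rotating (1, 0, 0) by 90 degrees about the z-axis (the quaternion is built from the half-angle, 45 degrees) lands on (0, 1, 0), up to floating-point dust.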

It's simple, really, it's just rocket science.

And it's really ... mind-bending, at first, wrapping your head around the math and drawing pictures, or using both your hands, three fingers out indicating both sets of axes, and you use your nose to draw the line connecting the two, and then you scream, 'but how do I measure the angles?'

Not that I've worked on satellite projects or anything. cough-EarthWatch-cough.

But nowadays, you can't get into making a realistic game without having quaternions under your belt. Monster sees you, monster charges you, monster bonks you on the head. Game over, thank you for playing, please insert $1.50 to die ... that is to say, to try again.

The thing is: how does the monster 'see' you? The monster has its own frame of reference, just as you do. The monster exists in its own three-dimensional coordinate system, just as you do. If you were standing on a little hillock, would you expect the monster not to see you because you're slightly elevated?

Of course not! The monster sees you, the monster bonks you. All of this happens through transformation of disparate coordinate systems via quaternions.

Now that's something to impress people with at cocktail parties.

"Yeah, I spent all day translating coordinate systems using quaternions. Busy day, busy day."

Just don't say that to a mathematician, because he (in general, 'he') will pause, scratch his head, then ask: "So you were checking out babes checking you out?"

Then you'll have to admit that, no, not that, you were instead avoiding your boss trying to catch your eye so he could hand you a whole stack of TPS reports to work on over the weekend.

Like I ... didn't. Ooh. Ouch! Guess who was working through Easter?

Fun, fun!

Okay, though. Four dimensions. We've got it all, now, right?

Not if you're a bee.

Okay, where did that come from?

Bees see the world differently from you and me.

Please reflect on the syntax of that sentence, writers.

Bees see the world differently (not: 'different') from you and me (not: 'from you and I').

Gosh! Where is the (American) English language going?

(But I digress.)

(As always.)

If you look at how they communicate through their dance, you see an orientation, but you also see some other activity: they 'waggle' (so it's called the 'waggle-dance'), and the vigor with which they do their waggle communicates a more interesting cache of nectar. There are other factors, too.

The fact of the matter: three dimensions are not enough for the bee's dance to communicate what it needs to say to the other bees about the location, distance, and quantity of nectar to be found.

So, it has its waggle-dance to communicate this information. Everybody knows this.

Until, one little girl, working on her Ph.D. in mathematics, stopped by her daddy's apiary, and, at his invitation, watched what he was doing.

Eureka.

"Daddy," she said, "those bees are dancing in six dimensions."

Guess who changed the topic of her Ph.D., right then and there?

Combining distance, angle from the sun, quantity of interest, ... all the factors, the bees came up with a dance.

They had only three dimensions to communicate six things.

The thing is, nobody told the bees they had only three dimensions to work with. So they do their dance in six dimensions.

If you map what they are doing up to the sixth dimension, it gives a simple (six-dimensional) vector, which conveys all the information in one number.

Bees live in six dimensions, and they live pretty darn well in it, too.

Or, put this way: 80% of the world's food supply would disappear if bees didn't do what they did.

You are living in six dimensions, or, more correctly, you are alive now, thanks to a little six-dimensional dance.

Six dimensions.

Okay, but what if you're Buckaroo Banzai?

Pfft! Eight dimensions? Light weight.

In String Theory, we need at least ten dimensions for super strings, and twenty-six dimensions for some types of strings.

So, 'R' is for real numbers.

The neat thing about numbers, is ... they can get as big as you can think them.

And they're good for a Ph.D. thesis ... or two.

Saturday, April 19, 2014

'Q' is for Quine

So, yesterday ('P' is for Predicate) I forgot to mention to whom we owe our gratitude, and that is Gottlob Frege. After Wittgenstein, Frege is considered to have made the most important contributions to philosophy through his formalization of predicate logic.

I mention this, in fronting (as opposed to 'in passing') because this post is about 'quining.' And the word 'Quine' comes from the American philosopher Willard 'Van' Quine.

Quining is a neat thing. A quine is a program that outputs itself.

That sounds simple enough, but there's a problem here. If you were using BASIC, for example, then you might start off with:

10 PRINT "10 PRINT \"10 PRINT \" ...

but then, whoops! We've just run into a problem, haven't we?

So how do we solve that? With a GOTO statement? Even though the "GOTO Statement Considered Harmful"?

Crusty BASIC programmer: HECK! I've used GOTO zillions of times, and it never smacked me once!

Yeah. Okay. Whatevs.

But, okay, let's give it a try:

10 PRINT "10 PRINT \""
20 GOTO 10

The problem here is that it prints:

10 PRINT "
10 PRINT "
10 PRINT "
... blah, blah, blah ...

but it never gets to the

20 GOTO 10

That solved a whole ton of nothing.

Writing quines in non-functional languages is a big-ole pile-o-pain! But totally doable, in tons of languages (and since languages have no weight, that's saying a lot)!

Here's the ones for BASIC (the first one is totally cheating, by the way) (or totally clever) (I'm not sure which).

Now, in functional programming languages, writing a quine is trivial.

Here's one in combinatory logic:

MM

because, as you recall,

M = λx → xx

So, MM is M applied to M: substitute M for x in λx → xx and you get MM right back, so MM evaluates to MM.

Piece of cake!
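And in Haskell itself, the classic quine is one line: the string s holds the program text up to its own quotation, and show s supplies the quoted copy (ignore the comment line when comparing output to source):

```haskell
-- The string s is the program text up to the open quote; show s re-quotes it.
main = putStrLn (s ++ show s) where s = "main = putStrLn (s ++ show s) where s = "
```

No evaluator tricks here: running the compiled program prints its own source line, because the text before the quote and the text inside the quote are the same string, used twice.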

Now, since, usually, functional languages are in an evaluator, that gives instant feedback to what you just entered, for example:

3 + 4

returns

7

right away.

Well, then just simply evaluating a function that returns itself is easy enough:

()

gives:

()

(the 'unit' value)

And

[]

gives:

[]

(the empty list)

And even:

7

gives:

7

... and we can skip all this 3 + 4 nonsense.

So, there's that.

Same thing for any programming language that has an evaluator, like Prolog, (SWI, for example, or interpreted on the web) (ooh, look at the cute little 'witch' example on the online Prolog interpreter!):

? X = 3

X = 3

So, there are those that might consider this cheating, too, like that BASIC program that uses a BASIC command to print itself as its own list. I say P-SHAW!

But also, as the point of the quine is to get you thinking about recursion and bootstrapping, maybe those blow-hards saying you can't use the evaluator to write a quine may be onto something.

Quine was a man who exercised his brain, and he led some serious reexamination of what it is to be true: that there may be things we consider 'obviously' true that should be examined again under the light of our experiential knowledge.

A little quine program just may do the same thing for me and my ideas.

p.s. Happy Easter!

Friday, April 18, 2014

'P' is for Predicate

Hey, all, 'P' day.

'P' should be for π, but that's redundant, right? since π (pronounced 'pee' in math circles) is just the letter 'p' in Greek.

And it's 3.14159265358979323846264338327950288419716939937510582097494459230

Yeah, ... I memorized that in high school. For real. People didn't believe me. Every time. So I'd recite it for them, after I wrote it down on a piece of paper so they could verify that I wasn't just spouting digits.

Yeah.

That.

So, but, this post is not about π.

No. Wait. Stop using 22/7 to represent π. That's, like, way off. Way. Use 355/113 instead. And thank Chuck Moore whenever you do.
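Don't take my word for it; the errors are easy to check (a quick sketch in Python):

```python
import math

# 22/7 is off in the third decimal place; 355/113 is off in
# the seventh -- roughly 4,700 times closer to pi.
print(abs(22 / 7 - math.pi))     # about 1.26e-3
print(abs(355 / 113 - math.pi))  # about 2.67e-7
```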

Okay, done with that aside. Onto this post. This post is about the predicate.

So, what's a predicate?

A predicate is a statement of truth that you arrive at from other predicates.

Yeah, math and logic are like that: a thing is a thing from other things just like it. Take 'number' for example, any number, x, is just all the previous numbers before x with some starter seed (usually zero or some other representation of null).

Well, that's what a predicate is.

First of all, a predicate is more than a proposition. A proposition is something you've been seeing already in the alphabet-soup I've been writing; for example, the proposition for the type of the continuation function is:

(p → r) → r

So, this says, 'given a function that takes a p and gives an r, I can get you an r,' right? Remember that?

Well, a predicate is of this form:

q |- p

(premises on the left of the turnstile, conclusion on the right). Or, if you're feeling your Prolog, which writes the conclusion first:

p :- q

A p is true, depending on whether q is true.

(Here we go again with dependent types).

So, what's the difference? I mean, if you just reverse the arrows you could say you have:

q → p

And you could say that, but there is a difference, in that the propositional logic of

p → r

Is all very deterministic whereas the predicate logic of

p :- q

is all very non-deterministic. Predicate logic allows you to base the truth of your consequence on none, one, or several conditions.

So, you could have:

p :- q, r, s.

Which means that p is true if q and r and s are all true, as well.

These are universals, but you can get specific with it, too:

p(x) :- q(x, y), r(y, z).

Which says that there is some x in p, giving a truth, dependent upon the same x in q (associated with some y) being true, along with the same y in r being true with some z.

And, to prove that statement you have to find the (x, y, z) values that satisfy the conditions of that statement. And you keep looking until you find it.
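What that search looks like, mechanically, is nested enumeration with the shared variables tying the clauses together. A hedged sketch in Python (the fact tables are made up for illustration):

```python
# p(X) :- q(X, Y), r(Y, Z) as an exhaustive search over facts.
q_facts = [(1, 2), (3, 4)]
r_facts = [(2, 5), (4, 6)]

def solve_p():
    """Yield every (x, y, z) satisfying q(x, y) and r(y, z)."""
    for (x, y) in q_facts:
        for (y2, z) in r_facts:
            if y == y2:  # the shared Y must match in both clauses
                yield (x, y, z)

print(list(solve_p()))  # → [(1, 2, 5), (3, 4, 6)]
```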

What does that sound like? ... We'll get back to that in a moment.

The neater thing about predicate logic is that the clauses of a predicate can, themselves, be consequences of other predicates:

q(x, y) :- s(y), t(x)
r(a, b) :- t(a), w(b)

And predicates can have multiple statements to arrive at that truth:

q(1, 2).
q(2, 4).
q(3, 6).

The above is saying: "q(1, 2) is true, regardless of anything." And you can take that truth (that fact) and plug it in anywhere. The fact propagates throughout the system.

So, those three facts, along with the relation, q(x, y) :- s(y), t(x), form a predicate. A statement of truth throughout the system, covering all possibilities.
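And querying those facts is just matching against them. A toy sketch in Python of the q facts above (this little matcher is hypothetical, not how a real Prolog engine works):

```python
# The three facts: q(1, 2). q(2, 4). q(3, 6).
q_facts = [(1, 2), (2, 4), (3, 6)]

def q(x=None, y=None):
    """Answer a query; None plays an unbound variable, Prolog-style."""
    for (fx, fy) in q_facts:
        if (x is None or x == fx) and (y is None or y == fy):
            yield (fx, fy)

print(list(q(x=2)))       # ?- q(2, Y).  → [(2, 4)]
print(list(q()))          # ?- q(X, Y).  → all three facts
print(list(q(x=1, y=3)))  # ?- q(1, 3).  → [] (the query fails)
```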

A predicate is a program.

Yeah. Wow.

In Java or Haskell or Ruby a program is thousands, tens of thousands of lines, but in a logic programming language, like Prolog, a program could be one line or several ... more than a few lines, actually, and you're going off the deep end. And how you program in Prolog is that you string these predicates, these programs, together to create a Universe of related truths, an ontology, in which you can ... play. By which I mean: you can query the system, get an answer (if there is one) or get back several answers, and explore this Universe you've created, or that someone has created for you, as an inquiry into truth.

*Sigh* I miss coding in Prolog. When done neatly, it was so neat to use.

But I don't miss coding in Prolog. It was all dynamically typed, so you could put in anything for your inquiry, and it would try it out, looking for the string "Mama meetzaballs!" even though this particular predicate is about arithmoquines. Statements of fact were easy to formulate, but the functional constructs ... ugh, they were terrible.

I need me a programming language that does logic typefully and handles functional logic beautifully, like Haskell does. Maybe ... Idris? ... but Idris doesn't work on my decade-old laptop. Birthday present for moi, then?

Hm.

Thursday, April 17, 2014

'O' is for ontology.


'O' is for Ontology.

What is an ontology? A knowledge-base? Sure, if that's simpler to grasp, but only insofar as 'knowledge-base' doesn't mean 'collection of facts' or 'data set.'

An ontology is more than that. But what, precisely, is an ontology?

Well, actually, there is a precise meaning to 'ontology.' And 'meaning,' itself, is central to ontology. Because what these data mean is what an ontology is about. It's not just a listing of facts; the relationships among the facts in an ontology make it what it is. The data, the facts, of an ontology have meaning, not only intrinsically, but also explicitly; that is, the meaning itself can be processed as information, and used in the handling of the underlying information.

So:

Milk
Eggs
Bread
(Ho-hos)
Oranges
Lettuce

An ontology? Absolutely! You hand that to your husband, and he knows exactly what it is and he knows exactly how to use it. He even, helpfully, penciled in the missing item (ho-hos, just as a 'fer instance') onto your shopping list for you.

See?

Now, is it an ontology per se, standing on its own without your husband supplying the meaning? Not so much. But if you explicitly titled it "Shopping List," now you're talking!

Format it as XML or JSON or OWL and then your computer will do your shopping for you, just as well as your husband would.
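As a sketch (the field names here are made up for illustration, not any official OWL or JSON-LD vocabulary), making the meaning explicit might look like:

```python
import json

# The same list, but now the *meaning* travels with the data:
# any program reading this knows it's a list of things to buy.
doc = json.dumps({
    "type": "ShoppingList",
    "items": ["Milk", "Eggs", "Bread", "Oranges", "Lettuce"],
})

parsed = json.loads(doc)
if parsed["type"] == "ShoppingList":
    print("things to buy:", len(parsed["items"]))  # → things to buy: 5
```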

Even better, as it won't add those pesky ho-hos your husband always 'helpfully' adds to your list for you.

Your computer would need arms and legs and artificial intelligence for that, but I'm sure Cyberdyne Systems will be happy to help you there ... and also hand your terminator-that-was-a-computer a fully automatic plasma rifle.

Whoopsie! Better give your husband the list, and live with the ho-hos ... Ho-hos are, relatively speaking, better than global thermonuclear war and the extinction of all mankind.

But I digress.

As always.