'A' is for ℵ0

Read: "'A' is for Aleph-null"

ℵ0 is a symbol for infinity. The first one, that is, as there are many infinities in mathematics, and many *kinds* of infinities, depending on which number system you choose to use (there are ... more than several kinds of numbers, but that may be the entry for 'N,' as in: "'N' is for 'Number'"). But I digress, ... as usual.

This is the introductory post to my A-to-Z blog-writing challenge (or my Α-to-Ω blog-writing challenge, but that may be all Greek to you).

An appropriate entry, because mathematics, itself, is as big as you want to make it, or as small as you want to focus in on it. For example, one of the infinities, ℵ0, is *countably* infinite. You take the first number (which happens to be 0 (zero)), and add one to it, and again, and you get:
0, 1, 2, 3, ...

... and you keep going until you (don't) reach ℵ0.
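That counting is easy to sketch in a few lines of Python; a generator that enumerates the naturals never terminates, but every individual number shows up after finitely many steps, which is exactly what "countably infinite" means:

```python
from itertools import count, islice

# "Counting" the natural numbers: start at 0 and keep adding one.
# You never arrive at aleph-null itself, but every individual
# natural number is reached after finitely many steps.
naturals = count(0)                    # 0, 1, 2, 3, ...
first_ten = list(islice(naturals, 10)) # peek at the first ten
print(first_ten)  # prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```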

The thing is, you *can* count this infinity. You just did.
But there are other infinities, including one you can't count at all: a 'bigger' infinity (it actually is bigger), C, the continuum, which contains numbers such as:

π, τ, e, ...

Numbers whose decimal digits never end and never repeat, and that aren't the root of any polynomial equation with whole-number coefficients. They are the transcendental numbers, and they are 'irrational.'
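The reason the continuum can't be counted is Cantor's diagonal argument, and its heart fits in a few lines. A minimal sketch (using short digit strings as stand-ins for infinite decimal expansions): given any listing, build a new expansion that differs from the n-th entry in its n-th digit, so it can't appear anywhere in the listing.

```python
def diagonal_escape(listing):
    """Cantor's diagonal trick: produce a digit string that differs
    from listing[n] at position n, for every n, so it cannot equal
    any entry in the listing."""
    new_digits = []
    for n, expansion in enumerate(listing):
        d = int(expansion[n])
        new_digits.append(str(5 if d != 5 else 6))  # anything != d
    return "".join(new_digits)

# Six made-up "expansions" standing in for an enumeration of reals
listing = ["141592", "718281", "414213", "302585", "577215", "693147"]
escapee = diagonal_escape(listing)
assert all(escapee[n] != listing[n][n] for n in range(len(listing)))
assert escapee not in listing  # the listing missed at least one number
```

However you try to enumerate the reals, this construction hands you one you missed, which is why C is genuinely bigger than ℵ0.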

There is no way to rationalize the number π, for example; you just have to live with it, with all its quirkiness and all its irrationality.

So, how do you go along the numberline that includes irrational numbers? You can't. Why? Well, what's the 'next number' after π? There isn't one. It's not that we don't know the next number after π; there simply is no such thing, no 'very next number adjacent to π.'
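You can watch any candidate for "next number" fail, even without leaving the rationals. A small sketch using exact fractions (a rational stand-in near π, since π itself has no exact machine representation): whatever "next number" you propose, the midpoint sits strictly between it and the starting point, so the proposal is already beaten.

```python
from fractions import Fraction

def between(a, b):
    """The midpoint of a and b is strictly between them when a < b."""
    return (a + b) / 2

a = Fraction(314159, 100000)               # a rational stand-in near pi
candidate_next = a + Fraction(1, 10**12)   # a guess at the "next number"
m = between(a, candidate_next)
assert a < m < candidate_next  # the guess wasn't "next" after all
```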

Unless you invent a number system that counts along the continuum.

Good luck with that.

But, on the other hand, two thousand years after Euclid, with his celebrated (or infamous) fifth postulate, said something along the lines of 'if two lines are extended into infinity on a plane and they never cross, then they kinda hafta be parallel,' people were still nodding their heads to it.

(The Ancient Greeks said 'kinda hafta' when they were dead serious about stuff like that.)

Then along came the non-Euclidean geometries, and, lo and behold, on those curved 'planes,' it is possible, easily so, for two non-parallel lines, extended into infinity, never to cross.

And those new geometries opened up whole new vistas of mathematics for us to explore.

So, one could say mathematics is a confining view; limiting. But the constraints are artificial: they are there because we put them there, and we put them there for some set of reasons, even if we don't know it, or even if we forgot those reasons. If a particular set of mathematics doesn't do what we want it to do, or does it in an unforgivably cumbersome way, then ... simply invent a new mathematics, check that it does do what we need it to do, and, importantly, that it doesn't do what we don't want it to do, and then we're good, right?

It's just that ... sometimes mathematics, being not particularly tiny (unlike our tiny, little, rigid brains), does things we don't expect and never look for, and so we get into trouble if we aren't rigorous. That happened to people who thought they had invented complete *and* consistent systems. Frege built a mathematics on pure (predicate) logic, except it allowed a paradox: the 'set of all sets that do not contain themselves.' Russell showed him this error.
So Russell (with Whitehead) invented the mathematics described in the *Principia Mathematica,* which most of us think of when we think of mathematics, and was rigorous about it, too. It took several hundred pages of axioms and theorems to prove that

1 + 1 = 2

And that holds, meaning that '1' is actually 1, '2' is actually 2, and '+' and '=' are what we'd like to think they are.
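You can get a feel for what it means to *define* '1,' '2,' and '+' before proving anything about them with a toy Peano-style sketch (nothing like the rigor of the Principia, of course): numbers are built from zero and a successor function, and '+' is defined by recursion on those.

```python
# A toy Peano-arithmetic sketch: zero is a primitive object, and
# every other number is "successor of" something.
zero = ()

def succ(n):
    """The successor of n: the number after n."""
    return (n,)

def add(m, n):
    # The Peano definitions: m + 0 = m, and m + succ(n) = succ(m + n)
    if n == zero:
        return m
    return succ(add(m, n[0]))

one = succ(zero)
two = succ(one)
assert add(one, one) == two  # 1 + 1 = 2, where '1', '2', '+', and '='
                             # mean exactly what we defined them to mean
```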

When Russell did that, he chortled with glee.

*"Ha, we're good!"* he shouted. *"We have the big TOE!* [theory of everything] *Nothing that is inconsistent exists in this system."*
Then, more than several years later, a little mathematician named Gödel did something amazing.

He said, "Really? Are you sure? Because ..."

And then he proved the system incomplete.

How?

Well, it involves a short paper he published in 1931. Essentially what he did was, working entirely within Russell's system, model mathematical formulae as numbers. He showed his modelling was faithful: each Gödel-number mapped to a formula, each formula mapped back to its number, and the numbers behaved the way you expected the formulae to behave.
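The encoding trick itself is simple enough to sketch. A minimal, hypothetical version of Gödel numbering (the symbol codes here are made up; Gödel's actual scheme differs in detail): write a formula as a sequence of symbols, assign each symbol a small code, and pack the sequence into one number using prime factorization, so the encoding can be reversed exactly.

```python
# Hypothetical symbol codes for a tiny formal language.
SYMBOLS = {"0": 1, "s": 2, "=": 3, "+": 4, "(": 5, ")": 6}
DECODE = {v: k for k, v in SYMBOLS.items()}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for a demo)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def goedel_number(formula):
    """Encode a symbol string as 2^c1 * 3^c2 * 5^c3 * ..."""
    g = 1
    for p, symbol in zip(primes(), formula):
        g *= p ** SYMBOLS[symbol]
    return g

def decode(g):
    """Recover the symbol string from the exponents of g's prime factors."""
    out = []
    for p in primes():
        if g == 1:
            return "".join(out)
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(DECODE[e])

formula = "s0+s0=ss0"       # i.e. 1 + 1 = 2 in successor notation
g = goedel_number(formula)
assert decode(g) == formula  # the number faithfully stands in for the formula
```

Because unique prime factorization makes the decoding unambiguous, statements *about* formulae become statements *about* numbers, which the system itself can talk about. That is the door Gödel walked through.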

Then he wrote down a number that said: 'This formula cannot be proved in Russell's system.'

And he showed that number existed in Russell's system, a system Russell believed was consistent.

Russell's system, using Russell's own axioms and theorems, contained a true statement it could not prove: it was provably incomplete.

And Russell never saw that one coming.

Nor did most anyone else.

But Gödel had shown that any such system (any one strong enough to do arithmetic) is *either* incomplete *or* inconsistent; or: a system *cannot* be both complete and consistent.
And that proof, way back when, opened the door to the mathematics we are wrestling with today: it fed directly into computability theory and Turing's work, allowing us to do neat things, like write blog entries on this little thing we call a laptop.

So, that's the caveat to us mathematicians: we're working with a model that we created. We can stand up and say:

*'Ha! This proves everything!'*

And it may be good for what we wanted, but all somebody has to do is remove the limits we've set to explode our neat, little, rigorous system.

The good news is the explosion *may* be a good thing.
'A' is for ℵ0, but that (infinity) is just the start ...
