# Typed Logic

Incorporates strong typing over predicate logic programming, and, conversely, incorporates predicate logic programming into strongly typed functional languages. The style of predicate logic is from Prolog; the strongly typed functional language is Haskell.

## Saturday, July 12, 2014

### 1HaskellADay: Up, up, and away!

I've taken it upon myself to submit problems, then show the solutions, for @1HaskellADay. I started this work on July 1st, 2014. The next set of entries collects what I've done, both problems and solutions; and if a particular day posed a particularly interesting problem, solved or unsolved, I'll capture that here, too.

## Monday, June 23, 2014

### matchingSub is Comonadic (obviously!)

So!

Today’s 1HaskellADay problem was an interesting NP-hard problem ... until I reread the problem statement carefully, then it became trivial. ‘Consecutive’ is the keyword here that unlocked the puzzle for me, eventually.

The heart of the algorithm here is simple enough. It says, in words, ‘are these next n numbers in the list the sum specified?’ If they are, return them as a solution, if they are greater, return nothing, if they are less than the sum, keep trying.

Those are the words, here is the implementation:

```
inquiry :: Int -> Int -> DList Int -> [Int] -> [Int]
inquiry _ _ _ [] = []
inquiry goal sub accum (next : rest) =
  let tote   = sub + next
      naccum = accum << next
  in  case (compare tote goal) of
        LT -> inquiry goal tote naccum rest
        EQ -> dlToList naccum
        GT -> []
```

Simple enough, and I use a difference list as an accumulator, just because that’s how I roll (and the fact, too, that difference lists append an element in constant time is *sweet!*).

So, now all we need to do is move an index over the list to test each possible scenario.

**Enter the (Comonad) Dragon**

Of course, iterating over a list, keeping the context of the list itself active can be done functionally in many ways, but what suggested itself to me right away was the Comonad.

The Comonad of the list is the list itself and all of its tails, and this is exactly the paradigm we need to solve this problem simply, so, writing matchingSub became simply a comonadic extension:

```
matchingSub' :: Int -> [Int] -> [[Int]]
matchingSub' goal domain = domain =>> inquiry goal 0 emptyDL
```

Now, this doesn’t quite give the requested solution sets for the given examples, as it returns the empty list as ‘no solution,’ not ‘nothing at all’ as requested, but filtering out the empty list is easy enough: we just need a predicate to test if the list is not empty and then return only the lists of answers:

```
isCons :: [a] -> Bool
isCons [] = False
isCons _  = True

matchingSub :: Int -> [Int] -> [[Int]]
matchingSub goal = filter isCons . matchingSub' goal
```

And there we have it!
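The whole thing can also be sketched in plain Haskell without any Comonad machinery, since `Data.List.tails` hands us exactly 'the list and all of its tails'; here an ordinary list accumulator stands in for the DList (a sketch, not the post's exact code):

```haskell
import Data.List (tails)

-- 'Are the next consecutive numbers in this list the sum specified?'
inquiry :: Int -> Int -> [Int] -> [Int] -> [Int]
inquiry _    _   _   []            = []
inquiry goal sub acc (next : rest) =
  let tote = sub + next
  in  case compare tote goal of
        LT -> inquiry goal tote (acc ++ [next]) rest
        EQ -> acc ++ [next]
        GT -> []

-- tails plays the role of the comonadic 'duplicate': the list and all its tails
matchingSub :: Int -> [Int] -> [[Int]]
matchingSub goal = filter (not . null) . map (inquiry goal 0 []) . init . tails

main :: IO ()
main = do
  print (matchingSub 10 [1..5])          -- [[1,2,3,4]]
  print (matchingSub 2 (replicate 3 1))  -- [[1,1],[1,1]]
```

The `acc ++ [next]` append is the quadratic cost the difference list avoids.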

I like comonads. They’re *sweet!*

**Okay, prove it, buster!**

(Who is this 'buster'-person, by the way, and why is he always having to prove things?)

All the above code is in Haskell, but it is also mutually-translatable to and from Idris: nearly the same syntax (intentionally so), and nearly the same semantics (even though Idris's evaluation is eager where Haskell's is lazy, the code looks and tastes very much the same).

In Haskell, we'd hand-verify the above with the provided samples and we're done. We'd run it through QuickCheck to be done-done.

In Idris, we can prove that what we specified is actually (heh: *'actually'*) correct in its implementation.

So, let's prove it.

```
prvSample : (expected : List (List Int))
         -> (actual : List (List Int))
         -> (so (expected == actual)) -> ()
prvSample expected actual pred = ()
```

Using the 'so' assertion, we're (almost) done.

Let's run our sample-set through our prover, such that it is:

```
sample1 : ()
sample1 = (prvSample [[1..4]] (matchingSub 10 [1..5])) oh

sample2 : ()
sample2 = (prvSample [[1,1], [1,1]] (matchingSub 2 $ replicate 3 1)) oh

sample3 : ()
sample3 = (prvSample [[1,1], [1,1]] (take 2 $ matchingSub 2 $ repeat 1)) oh
```

Code compiles? Yes? We're done: we've delivered functionality as specified.

*Sweet!*

Labels:
1HaskellADay,
comonad,
difference lists,
Idris,
problem-solving,
proof

## Saturday, June 21, 2014

### keepEquals with Difference Lists

Yesterday's 1HaskellADay exercise was keepEqual, but it could just as easily have been ‘keepSimple’ or ‘write this in your sleep’ or ... something like that.

It was a good exercise, because there’s the obvious solution but then you look to improve the efficiencies.

And then, of course, there’re the wrong solutions, but that’s what we have proofs for, to prove ourselves into the correct implementations.

So the first stab I did was wrong:

```
keepEqual1 : Eq a => List a -> List a -> List a
keepEqual1 list1 list2 =
  list1 >>= \h1 =>
  list2 >>= \h2 =>
  (if h1 == h2 then return h1 else [])
```

The problem here is that this algorithm, albeit clever*ish*, is, well: wrong. It iterates through each element of list1, fine, but it compares the currently-selected element of that list to *every* element of list2. There’s also a tiny problem if either list1 or list2 is infinite, as the binding operation goes through each element of the list (inner, then outer) before returning, which could be a problem if you ever want a return in this eternity.

Minor problem there.

The other problem is that this is a quadratic algorithm: for each element of list1, it possibly iterates through the entire length of list2 to find a match.

Ouch.

So. Clever*ish* approach was a fail on my part. Shall we try the traditional approach?

```
keepEqual2 : Eq a => List a -> List a -> List a
keepEqual2 [] _ = []
keepEqual2 _ [] = []
keepEqual2 (h1 :: t1) (h2 :: t2) =
  (if h1 == h2 then [h1] else []) ++ (keepEqual2 t1 t2)
```

So, that’s traditional. That works (and we can choose to verify that it does work), and it terminates at the end of the first list, thereby neatly dodging the non-termination-with-infinite-list-arguments issue.

The problem here is that we are representing choice with a list-comprehension-*esque* algorithm here, so we continuously concatenate to the end of a single-element list, or, in the case of a non-match, the empty list.

That algorithm, concatenation-no-matter-what, just screams: *“Improve me! Improve me, **please**!”*

So, okay, one improvement is we can turn our choice-point from the above to construction or return:

```
(if h1 == h2
 then (h1 :: (keepEqual2 t1 t2))
 else keepEqual2 t1 t2)
```

Okay, is that yummy-looking?

No. No, it is not.

I mean, for efficiency’s sake we are eliminating the whole ‘concatenate after the empty list’ operation for the not-equals case, and keepEqual2 is being called only once in this branch, but ...

But it’s still written twice, and that’s ugly. Why do we have to express something twice for the concept of ‘okay, add the head only if it’s the same and then continue on with the rest of the two lists.’ I mentioned the continuation (ooh! continuations!) just once here in my denotation, why do I have to mention it twice in my implementation?

Well, I don’t have to, actually. We’re working in a functional domain, so let’s get functional!

```
(if h1 == h2 then ((::) h1) else id) (keepEqual2 t1 t2)
```

Boom! ... actually, not so ‘boom,’ because the overloaded (::) confuses Idris (Vect, Stream, and List all use (::) as a constructor), so let’s clarify that:

```
cons : a -> List a -> List a
cons a list = a :: list
```

So, *now* we can write:

```
(if h1 == h2 then (cons h1) else id) (keepEqual2 t1 t2)
```

I actually had this little partial-application revelation just now, so my actual implementation involved me creating the difference list data type, which allowed constant-time prepend and append operations. Which is better? Partial application or the difference list?

Well, let’s take a look at my difference list implementation so we can judge their respective merits.

```
data DList a = DL (List a -> List a)
```

A difference list is the difference between two lists, and we represent this difference by capturing it in a function.

```
unDL : DList a -> (List a -> List a)
unDL (DL fn) = fn
```

And we can, of course, extract our function from the difference list type.

```
(<<) : DList a -> a -> DList a
list << a = DL ((unDL list) . (cons a))
```

Our append operator (<<) appends an element to the tail end of a DList in constant time (if it were needed, the corresponding (>>) operator prepends an element to the head of a DList, also in constant time).

So, we have our DList structure, but we need to start from something, and then, eventually convert back to a plain-old List structure. The below functions provide those functionalities for us:

```
emptyDL : DList a
emptyDL = DL id

dlToList : DList a -> List a
dlToList dlist = with List (unDL dlist [])
```

Now, with the difference list type, we rewrite our keepEqual in the natural style.

```
keepEqual  : Eq a => List a -> List a -> List a
keepEqual' : Eq a => DList a -> List a -> List a -> List a

keepEqual = keepEqual' emptyDL

keepEqual' dl [] _ = dlToList dl
keepEqual' dl _ [] = dlToList dl
keepEqual' dl (h1 :: t1) (h2 :: t2) =
  keepEqual' (if (h1 == h2) then dl << h1 else dl) t1 t2
```

So, what do you think? For this example the DList doesn’t really shine all that much, but if we had something where appending to the end (in constant time) made a whole lot of sense, then we would have a clear win for it.
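For the Haskell-side reader, here's a minimal sketch of the same difference-list accumulator (names mirror the Idris above; the Prelude doesn't define (<<), so the operator is free for the taking):

```haskell
newtype DList a = DL ([a] -> [a])

emptyDL :: DList a
emptyDL = DL id

unDL :: DList a -> ([a] -> [a])
unDL (DL f) = f

-- append an element at the tail end in constant time
(<<) :: DList a -> a -> DList a
list << x = DL (unDL list . (x :))

dlToList :: DList a -> [a]
dlToList dl = unDL dl []

keepEqual :: Eq a => [a] -> [a] -> [a]
keepEqual = keepEqual' emptyDL
  where
    keepEqual' dl [] _ = dlToList dl
    keepEqual' dl _ [] = dlToList dl
    keepEqual' dl (h1:t1) (h2:t2) =
      keepEqual' (if h1 == h2 then dl << h1 else dl) t1 t2

main :: IO ()
main = do
  print (keepEqual [1,2,3,4] [1,0,3,5])  -- [1,3]
  print (keepEqual "hello" "help!")      -- "hel"
```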

I like DList.

Labels:
1HaskellADay,
difference lists,
efficiency

## Wednesday, June 18, 2014

### oh : so True

So, ...

So, I'm going to say 'so' a lot in this post. You'll find out why this is, soonest.

soon, adv: composite word of the words 'so' and 'on.'

Amirite? Or amirite!

SO!

Today's 1HaskellADay challenge wasn't really a challenge, *per se,* so much as it was an observation, and that observation, keen as it might be...

keen, adj.: composite word of 'ki' (棋) and 'in' (院), transliteration from the Japanese: School of weiqi.

... was that the mapping function from one domain to a co-domain, and the corresponding co-mapping function from the co-domain to the domain gives you, when composed, the identity function in the domain or co-domain.

No, really! That was today's challenge.

To translate that into English:

```
f : a -> b
g : b -> a

(g . f) : a -> a
(f . g) : b -> b
```

Wowsers!

So, we're not going to dwell on this, as Eilenberg and Mac Lane have dwelt on this so much better than I, but ...

Okay, so how do we prove this for our CatA (being Either a a) and CatB (being (Bool, a))?

With Haskell, it's easy enough, run your functions through quickcheck and you've got a PGP, that is: a pretty good (kinda) proof of what you want.
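Concretely, such a check might look like this in Haskell. The post never shows f and g themselves, so these definitions are my guess at the intended pairing between (Bool, a) and Either a a; QuickCheck would quantify over the sample values:

```haskell
-- Hypothetical f and g (my assumption, not the author's code):
f :: Either a a -> (Bool, a)
f (Left  x) = (True,  x)
f (Right x) = (False, x)

g :: (Bool, a) -> Either a a
g (True,  x) = Left  x
g (False, x) = Right x

-- Round-trips should be identities in each category.
leftIsomorphism :: Eq a => (Bool, a) -> Bool
leftIsomorphism pair = (f . g) pair == pair

rightIsomorphism :: Eq a => Either a a -> Bool
rightIsomorphism eith = (g . f) eith == eith

main :: IO ()
main = do
  print (leftIsomorphism (True, 'a'))         -- True
  print (leftIsomorphism (False, 'b'))        -- True
  print (rightIsomorphism (Left (3 :: Int)))  -- True
  print (rightIsomorphism (Right "hey"))      -- True
```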

Or, you could actually prove the thing.

'You' meaning 'I,' in this particular case, and, while proving the thing, actually learn me some more Idris for greater good.

Let's do that.

So, my last post looked at a bit of proof-carrying code, and my methodology was to create a predicate that said whether some supposition I held was true (returning a Bool, and, I hoped, True, at that!), unifying that with True, and then using the reflexive property of equality to prove that my supposition held.

Well, super neat, but today's problem was much harder for me, because I wrote something like this for one of my proofs:

```
leftIsomorphism : Eq a => (Bool, a) -> Bool
leftIsomorphism pair = (f . g) pair == pair
```

That, as far as it goes, is just what we want, but then we try to prove it with this function:

```
isEqLeftTrue : Eq a => (val : a)
            -> leftIsomorphism (True, val) = True
isEqLeftTrue val = refl
```

That doesn't work because of a type-mismatch, so replacing the predicate through substitution eventually led to this type-error:

```
When elaborating right hand side of isEqLeftTrue:
Can't unify
        x = x
with
        (val : a) -> val == val = True
```

See, the problem (or, more correctly: *my* problem) is that val == val *is* true, but that's a property of boolean decidability, perhaps? It's easy enough to solve if I refine the term val == val down to True, *then* apply the reflexive property to obtain my proof, but the problem (again: *my* problem) *is that I'm not all that savvy with refinement in proofs to get the answer out and blogged today.* *The show must go on!*

But the *co-*problem, or, getting a little less esoteric, the *solution* is that the Idris language already provides something that addresses this: so

```
Idris*> :t so
Prelude.Bool.so : Bool -> Type
```

With so we say that something is true, typefully, (that's a word, now) for us to proceed. So, our isEqLeftTrue, etc, functions become easy to write:

```
isEqLeftTrue : Eq a => (val : a)
            -> so (leftIsomorphism (True, val)) -> ()
isEqLeftTrue val pred = ()
```

and then verification is simply the compilation unit, but you can query the type or test out any value with it as well:

```
Idris*> isEqLeftTrue "hey"
isEqLeftTrue "hey" : (so True) -> ()
```

And there you go, we've verified, for the left-adjoin anyway, that a (,) True type has the identity function. Let's verify the other three cases as well:

```
isEqLeftFalse : Eq a => (val : a)
             -> so (leftIsomorphism (False, val)) -> ()
isEqLeftFalse val p = ()
```

... and for our right-adjoin proofs:

```
rightIsomorphism : Eq a => Either a a -> Bool
rightIsomorphism eith = (g . f) eith == eith

isEqRightLeft : Eq a => (val : a)
             -> so (rightIsomorphism (Left val)) -> ()
isEqRightLeft val p = ()

isEqRightRight : Eq a => (val : a)
              -> so (rightIsomorphism (Right val)) -> ()
isEqRightRight val p = ()
```

... and then the compilation proves the point, but we can exercise them, as well:

```
Idris*> isEqLeftFalse 1
isEqLeftFalse 1 : (so True) -> ()

Idris*> isEqRightLeft 'a'
isEqRightLeft 'a' : (so True) -> ()

Idris*> isEqRightRight "hey"
isEqRightRight "hey" : (so True) -> ()
```

... and there you have it.
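Haskell can play a similar game at the type level, for the curious: a sketch of an `so`-alike as a GADT indexed by a type-level Bool (my construction for illustration, not anything from the Idris Prelude):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

import Data.Kind (Type)

-- Inhabited only when the Bool index is 'True, mirroring Idris's `so`:
-- a value of type So 'True is a proof; So 'False has no values at all.
data So :: Bool -> Type where
  Oh :: So 'True

itIsSo :: So 'True
itIsSo = Oh   -- compiles; `Oh :: So 'False` would be a type error

main :: IO ()
main = case itIsSo of Oh -> putStrLn "oh : so True"
```

As in Idris, the verification is the compilation unit: if the index can't be made 'True, there is no Oh to write.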

*Pensée*

Huh. Looking at the above four proof-functions, they have the same type signatures (pretty much) and the same definitions. Hm. Maybe that's an indicator for me to start looking at Idris' syntactic extensions? Something for me to *pensée* about.

Ciao for now.

Labels:
1HaskellADay,
category theory,
Dependent Types,
proof,
unification

## Tuesday, June 17, 2014

### No Roam at the Inn(sbruck)

Okay, so, for yesterday's 1HaskellADay problem (about 'collect'ing 'stuff' from 'room's), we’re looking at a pathing-kind of problem, and that is:

```
{- You have a list of rooms; each room has two doors and contains one element.
   To enter a room, the first door must be open. To leave a room, the second
   door must be open. It means that if you want to move from one room to the
   next one, both the second door of the first room and the first door of the
   second room must be open.
-}

data Room a = Room (Bool, Bool) a

{- | Given a list of rooms, we want to go as far as we can and collect the
     elements in the rooms. This is the purpose of the function `collect`.

Examples:

>>> collect [Room (True, True) 1, Room (True, False) 2, Room (True, False) 3]
[1,2]

>>> collect [Room (False, True) 1, Room (True, False) 2, Room (True, False) 3]
[]

>>> collect [Room (True, True) 'a', Room (False, True) 'b', Room (True, True) 'c']
['a']
-}

collect :: [Room Int] -> [Int]
collect = undefined
```

Okay, so right off the bat, you see the type is just ... wrong. Look at the type signature. It does not allow the third example at all, so, really collect’s type is:

```
collect :: [Room a] -> [a]
```

Ouch.

So, anyway (you know, running the examples before publishing them ... *‘might’* be prudent).
This particular, problem, albeit trivial, brings to mind two analogues: one is proof search, because we’re going from ‘room’ to ‘room’ (truth to truth) collecting ‘stuff’ (lemmas), and we stop when we hit a closed door (the proof search stops at a falsehood). So if we look at it this way, then all we need to do is build us some generic proof search algorithm, and make the Room type hypotheses, and we’re done.

But we don’t *need* to do this in Idris, as Idris already has a proof search algorithm.

It’s called, conveniently enough: ‘proof search.’

UGH! TOO EASY!

The other thing this little problem recalled to me was the optimization algorithms I was exploring, including GAs (genetic algorithms) and ACO (ant colony optimization), but the pathing is *way* too easy for these kinds of hammers.

Now if we had room defined something like:

```
data Room a = Room String [Room a] a
```

*THEN* we’re cooking with gas, because if you have a house, your rooms sometimes have one door, sometimes have several, so:

(bleh! draw a picture of our house here!)

And in this example the rooms are identified (as they aren’t in today’s problem), and a room can have multiple egress points. Direction isn’t defined either. Another thing: this graphic is *not* acyclic. So how do we deal with that?

Maybe *now* it’s time to call in the ants, eh?
At any rate, my solution to today’s problem was a rather straightforward Idris exercise:

```
module corn -- as in maze (pronounced mah-ees)

data Roam a = Room (Bool, Bool) a -- note: Roam, Room

collect : List (Roam a) -> List a -- note: NOT! Int, but _a_ duh!
collect [] = []
collect ((Room (False, _) _) :: _) = []
collect ((Room (True, x) y) :: rest) =
  y :: (if x then (collect rest) else [])
```

```
soln-2014-06-16*> :total collect
corn.collect is Total
```
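And back in Haskell, the corrected collect, with the [Room a] -> [a] type, is a direct transliteration (a sketch):

```haskell
data Room a = Room (Bool, Bool) a

-- Enter through an open first door, take the element, and continue
-- only if the second door is open too.
collect :: [Room a] -> [a]
collect []                          = []
collect (Room (False, _) _ : _)     = []
collect (Room (True, out) x : rest) = x : if out then collect rest else []

main :: IO ()
main = do
  print (collect [Room (True, True) 1, Room (True, False) 2, Room (True, False) 3])   -- [1,2]
  print (collect [Room (False, True) 1, Room (True, False) 2, Room (True, False) 3])  -- []
  print (collect [Room (True, True) 'a', Room (False, True) 'b', Room (True, True) 'c'])  -- "a"
```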

Note that Room defined by itself in Idris is a circular-type error. Since every value is typeful and every type is a value (‘valueful’?), then even instances of a (disjoint- or tagged-) type are a type, themselves, so

```
data Room a = Room (Bool, Bool) a
```

doesn’t work in Idris ... at least, it didn’t work for me. So that’s why I ‘Roam’ed about my ‘Room’s for my solution.

But now we look at the problem of ‘shouldbe’ because

```
sample1 `shouldbe` [1,2]
sample2 `shouldbe` []
sample3 `shouldbe` ['a']
```

... or type-failure, because the function type is [Room Int] -> [Int], *not* [Room a] -> [a], amirite?!?!

I’m *SO* the right.
So here I tried to make the above provable properties. If I could prove the above statements, then I knew my implementation did what I said what it was supposed to do (which, I hope, is what was asked of me, too).

So, my first stab at it was to show decidability. I had gotten wrapped around this axle (of decidability) because from the nGrams problem from before I could show that it was decidable that a String equals a String, but I couldn’t prove a String equaled a String (because there are obvious counterexamples, e.g. “Joe” == “Smart” is false. (sorry, Joe. It’s not personal; it’s just the truth. You *are not* smart, even if you do (or you don’t) *have* smarts (as the case may be))).
So I went for decidability, because that worked in the existential case for me before when I was verifying:

```
isDecEqStr : String -> Bool
isDecEqStr str = case (decEq (concat $ nGrams 1 str) str) of
  Yes (_) => True
  No (_)  => False
```

For any (existing and non-empty) String str, we can show that it is decidable that

```
concat $ nGrams 1 str = str
```

... it’s like a join, right?

Anyway.

So, I thought we could show the decidability of collect over some List (Roam a).

So, we start with

```
isDecEqRoam : Eq a => List (Roam a) -> List a -> Bool
isDecEqRoam rooms stuff = (collect rooms) == stuff
```

and that works fine. But to prove that? Hm. Lessee.

```
sample1Is12 : { dec : (isDecEqRoam sample1 [1,2]) } -> dec = True
sample1Is12 = refl
```

Nope. We get the following error when we try to compile our module:

```
When elaborating type of corn.sample1is12:
Can't unify Bool with Type
```

(with the same ‘specifically’)

So, maybe we try ...

Yeah, I iterated and iterated over decidability until I realized that I could just unify the resulting type of isDecEqRoam with True and I’d be done with it. I didn’t need decidability at all, because I wasn’t deciding that a list of things could equal a list of other (the same) things, no: I was unifying booleans, and that *is* provable. Woot!

```
sample1Is12 : (isDecEqRoam sample1 [1,2]) = True
sample1Is12 = refl
```

```
soln-2014-06-16*> sample1Is12
refl : True = True
```

and so then:

```
sample2IsEmpty : (isDecEqRoam sample2 []) = True
sample2IsEmpty = refl
```

```
soln-2014-06-16*> sample2IsEmpty
refl : True = True
```

```
sample3IsA : (isDecEqRoam sample3 ['a']) = True
sample3IsA = refl
```

```
soln-2014-06-16*> sample3IsA
refl : True = True
```

YAY! ‘Unit’ ‘testing’ by proofs!

(Granted these are very simple (tautological, in fact) proofs, but I have a warm-fuzzy knowing that I proved my samples are true from my simple collect function. YAY!)

But what does that get us, besides proving that what I wrote actually *is* what I said it would be?

The thing is ... this:

```
badSample : (isDecEqRoam sample3 ['b']) = True
badSample = refl
```

If I insert the above function (the above proof), the *program won’t compile!*
By putting these assertions, I’m not (just) ‘unit testing,’ nor am I catching unsafe things at runtime (throwing an ‘AssertionException,’ or whatever, aborting the program instead of allowing the program to proceed under false assumptions). No, I’m doing neither of these things, or, more precisely, I’m doing more than either of those things: much more.

What I am doing is this: I am *proving* my program is correct, and the proof is that I get a compiled object. The above ‘sampleXisY’ functions are proof-carrying code, and I’ve just certified my program as safe and correct for the given samples at compile-time.
Cool story, bro!

Labels:
1HaskellADay,
problem-solving,
proof

## Monday, June 2, 2014

### That's totes my Bag!

So, does that mean I like tote-bags?

So, today's question on @1HaskellADay was this:

write a function

`countOccurences :: [Stirng] -> Map Char Int`

(typos faithfully reproduced)

such that

`lookup 'l' $ countOccurences "Hello"` ~> `Just 2`

`lookup 'q' $ countOccurences "Hello"` ~> `Nothing`

Okay, that can be done easily enough, I suppose, by torquing `Map` into something that it isn't, so one gets wrapped around the axle of creating a mapping from characters to occurrences.

But why?

First of all, `countOccurences` maps a `String` (not a `List` of `String`s) to a `Map`, and that map is a very specialized kind of map that has existed in the literature for quite a while: it is known as the `Bag` data type, and is also, nowadays, called the `MultiSet` by people too embarrassed to say the word 'bag' in a sentence, because of their prior drug convictions.

("I got two months for selling a dime bag.")

So they now twist the word 'Set' (a 'collection of unique objects') to mean something that's not a set at all, the 'Multi'Set, which is a 'collection of unique objects, but you can have multiples of these unique objects, so they're not unique at all, so it isn't a set at all, but we need to say the word 'set' because we can't say the word 'bag' because saying the word 'bag' would make us sound plebeian for some reason.'

Yeah, that. 'MultiSet.'

What.

*Ev.* Er.

But I digress.

As always.

So I *COULD* write the `countOccurences` as a `String -> Map Char Int` function, but then: why bother? You can either write tons of algorithmic code that obscures the intent, or just simply use the appropriate data type.

I went for the latter.

Now, I *wuz gonna* do a dependently-typed pair to represent an occurrence...

... notice how `countOccurences` is so badly misspelled, by the way? *SOME*body didn't QA-check their problem for the day today, I'm thinking.

... but then I said: *'eh!'*

I mean:

*WHY* is `lookup 'q' $ countOccurences "Hello"` ~> `Nothing`? *WHY* can't `count 'q'` for a `Bag Char` representation of "Hello" be 0? 0 is a valid answer, and it keeps everything nice and monoidal without having to lift everything unnecessarily into the monadic domain.

So, yeah. Let's do that, instead.

So, here we go, and in Idris, because that's how I'm rolling these days. The advantages of dependent types have been enumerated elsewhere, so we'll just go with that they're better as an assumption and move on, using them, instead of extolling them, in this post.

**Wrong!**

So, my first attempt at Bag crashed and burned, because I did this:

```
data Bag : (x : Type) -> Type where
  add : Bag x -> x -> Bag x
  emptyBag : Bag x
```

and the compiler was fine with that. Hey, I can declare any type I'd like, so long as the types just stay as types, but as soon as I tried to define these things:

```
emptyList : List x
emptyList = []

emptyBag = Bag emptyList

add (Bag []) x = Bag [(x, 1)]
add (Bag ((x, y) :: rest)) x = Bag ((x, y + 1) :: rest)
add (Bag ((z, y) :: rest)) x = Bag ((z, y) :: (add rest x))
```

The compiler looked at me and asked: *'geophf, what in tarnation are you-ah tryin' to do?'*

And about the only intelligent answer I could muster was: *'Ummmm... idk.'*

I had gotten too clever for myself by half, trying to reshape a data type you learn in Comp.Sci. 101 as a purely functional type.

**Back to Basics** ... (but not BASIC)

So, let's just declare `Bag` to be what it is and KISS: 'keep it simple, stupid!'

Yes, let's.

```
data Bag x = Air | Stuffed (x, Nat) (Bag x)
```

Now, I so totally could've gone with the balanced binary-tree representation instead of the simple and standard linked list, but, you know: 'live and learn!'

With this declaration, the empty bag becomes so trivial as to be unnecessary, and `add` becomes simplicity itself, too; although `add` is simple either way, so that's not saying much.

add : Eq x => Bag x -> x -> Bag x

add Air x = Stuffed (x, 1) Air

add (Stuffed (z, y) rest) x =

case x == z of

True => Stuffed (x, y + 1) rest

False => Stuffed (z, y) (add rest x)

Now, you see me relying on the `case`-statement here. Unhappily.

I'd like my dependent types to say, 'unify x with x (reflexive) for the isomorphic case, and *don't* unify x with z for the other case.' But we're not there yet, or my coding isn't on par with being there yet, so I forced total coverage, bifurcating the result-set into isomorphic and not with a hard case-statement. Ick. I hate explicit case-statements! Where is really, really, really smart pattern-matching when I need it?

But with `add`, constructing a `Bag` becomes easy, and then counting elements of that bag is easy, too (again, with another `case`-statement, sigh!):

count : Eq x => x -> Bag x -> Nat

count _ Air = 0

count x (Stuffed (z, y) rest) =

case x == z of

True => y

False => count x rest
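In Haskell, the same pair of functions sketches out like so (a transliteration of the Idris above, with guards standing in for the case-statements; not new machinery), and the `'q'`-query from earlier now answers 0 where `lookup` answered `Nothing`:

```haskell
-- List-backed Bag, transliterated from the Idris above.
data Bag x = Air | Stuffed (x, Int) (Bag x)

add :: Eq x => Bag x -> x -> Bag x
add Air x = Stuffed (x, 1) Air
add (Stuffed (z, y) rest) x
  | x == z    = Stuffed (x, y + 1) rest  -- bump the existing tally
  | otherwise = Stuffed (z, y) (add rest x)

-- count answers 0 for absent elements: monoidal, no Maybe required
count :: Eq x => x -> Bag x -> Int
count _ Air = 0
count x (Stuffed (z, y) rest)
  | x == z    = y
  | otherwise = count x rest

main :: IO ()
main = do
  let hello = foldl add Air "Hello"
  print (count 'l' hello)  -- 2
  print (count 'q' hello)  -- 0, where lookup would have said Nothing
```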

`countOccurences` (with one-too-few 'r's in the function name) becomes easy, given the `Bag` data type:

countOccurences : String -> Bag Char

countOccurences str = co' (unpack str) where

co' [] = Air

co' (char :: rest) = add (co' rest) char

*YAWN!*

*But look at this:*

`depth : Bag x -> Nat`

depth Air = 0

depth (Stuffed _ rest) = 1 + depth rest

sample : ?bag

sample = countOccurences "The quick, brown fox jumped over the lazy dog."

bag = proof search

When we do a `depth sample`, we get the not-surprising answer of `29 : Nat`.
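For anyone double-checking that 29 in Haskell, here's a minimal self-contained sketch (the post's spelling of `countOccurences` kept; `foldr (flip add)` is exactly the `co'` recursion):

```haskell
data Bag x = Air | Stuffed (x, Int) (Bag x)

add :: Eq x => Bag x -> x -> Bag x
add Air x = Stuffed (x, 1) Air
add (Stuffed (z, y) rest) x
  | x == z    = Stuffed (x, y + 1) rest
  | otherwise = Stuffed (z, y) (add rest x)

-- co' [] = Air; co' (c :: rest) = add (co' rest) c, i.e. a right fold
countOccurences :: String -> Bag Char
countOccurences = foldr (flip add) Air

-- one Stuffed node per distinct element, so depth = distinct count
depth :: Bag x -> Int
depth Air = 0
depth (Stuffed _ rest) = 1 + depth rest

main :: IO ()
main = print (depth (countOccurences "The quick, brown fox jumped over the lazy dog."))
-- 29: one node per distinct character ('T' and 't' are distinct,
-- and space, comma, and full stop all count)
```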

Perhaps this could be made a *tad* bit more efficient? Just perhaps.

Well, then, let's do that!

data Bag x = Air | Stuffed (x, Nat) (Bag x) (Bag x)

We make `Bag` a binary search tree, with the `add`-function doing the work of (very simply) branching off new nodes:

add : Ord x => Bag x -> x -> Bag x

add Air x = Stuffed (x, 1) Air Air

add (Stuffed (z, y) less more) x =

case (compare x z) of

LT => Stuffed (z, y) (add less x) more

GT => Stuffed (z, y) less (add more x)

EQ => Stuffed (z, y + 1) less more
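Sketched in Haskell, the tree-shaped `Bag` together with the morphed `count` (my transliteration; `count` now follows the same compare-and-descend path as `add`, rather than scanning every node):

```haskell
-- Tree-backed Bag: less-than elements to the left, greater to the right.
data Bag x = Air | Stuffed (x, Int) (Bag x) (Bag x)

add :: Ord x => Bag x -> x -> Bag x
add Air x = Stuffed (x, 1) Air Air
add (Stuffed (z, y) less more) x =
  case compare x z of
    LT -> Stuffed (z, y) (add less x) more
    GT -> Stuffed (z, y) less (add more x)
    EQ -> Stuffed (z, y + 1) less more

-- count descends one branch per comparison instead of the whole spine
count :: Ord x => x -> Bag x -> Int
count _ Air = 0
count x (Stuffed (z, y) less more) =
  case compare x z of
    LT -> count x less
    GT -> count x more
    EQ -> y

main :: IO ()
main = do
  let bag = foldl add Air "The quick, brown fox jumped over the lazy dog."
  print (count 'o' bag)  -- 4: brown, fox, over, dog
  print (count 'Q' bag)  -- 0: only the lowercase 'q' occurs
```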

Then all the other functions change *('morph')* to work with a tree, not a list, and to work with `Ord` elements, not with (simply) `Eq` ones.

And so, the redefined `depth`-function gives a very different result:

`depth sample` ~> `9 : Nat`

Not bad! Not bad! The improved data structure improves efficiency across the board from O(n) to O(log n), so long as the input keeps the tree reasonably balanced.

Hm, perhaps I'll have `count` return a dependently-typed pair, just as the library function `filter` does on `List` types, but not tonight.

Good night, Moon!

Labels:
Data Structures,
Dependent Types,
Idris

## Sunday, May 25, 2014

### Gödel Numbering for LYNE

What is the Gödel numbering for LYNE puzzles?

I propose there exists a number, L, such that it describes each LYNE puzzle, that is, L_x → puzzle_x and puzzle_x → L_x, uniquely.

That one is simple enough. For each shape node and terminus we introduce a prime; for each board size we introduce a prime (such that a 3x4 board shape is a different prime than a 4x3 board shape), or we could describe the board as numbers: 3 x 4 is sssz x ssssz, with a unique prime each for s, z, and x ('by'). For each empty space on the grid we introduce a prime (2?), and for each n-pass-through unit we introduce a prime (I've seen up to 4; is there a 5?). Or we could go type-theory here and have the different shapes be in one type-family and the different pass-throughs in another type-family (inhabited by singleton type instances). Either way, we have from that a Gödel number that we can wrap a puzzle into and unwrap a puzzle from.
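The prime-power trick underlying this wrap/unwrap step can be sketched in Haskell (my own illustration, not LYNE-specific code): a puzzle description flattened to a list of small naturals round-trips through a single integer.

```haskell
-- Gödel numbering sketch: encode a list of naturals as a product of
-- prime powers, decode by dividing each prime back out.
primes :: [Integer]
primes = sieve [2 ..]
  where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- [n0, n1, n2, ...] becomes 2^n0 * 3^n1 * 5^n2 * ...
encode :: [Integer] -> Integer
encode = product . zipWith (^) primes

-- read k slots back out by repeatedly dividing out each prime
decode :: Int -> Integer -> [Integer]
decode k n = take k (go primes n)
  where
    go (p : ps) m = let (e, m') = divOut p m in e : go ps m'
    divOut p m
      | m `mod` p == 0 = let (e, m') = divOut p (m `div` p) in (e + 1, m')
      | otherwise      = (0, m)

main :: IO ()
main = print (decode 4 (encode [3, 0, 2, 1]))  -- [3,0,2,1]
```

Trailing zero exponents vanish in the product, which is why `decode` has to be told how many slots to read back.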

Now, that's the Gödel numbering, or part I of the two-parter of LYNE-as-Gödel-numbers. Simple enough: it's simply a matter of programming.

Now I further propose the following. There exists some Gödel numbering, L', such that the encoding of the board includes the solution; that is, 'for squares, start at x, N, W, NW, S, S, S, terminate; for triangles, start at y, E, E, S, S, NW, terminate; ... etc.' That is to say, the construction of the board renders a Gödel number, and each move is itself a Gödel number, such that each move, when dividing the board's L' number, yields a new, smaller Gödel number, and the final move, completing the puzzle, yields a 'finished' Gödel number for that board: 'no more moves, all nodes and pass-throughs touched and filled.'
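As a toy model of that 'each move divides the board number' step (entirely hypothetical, just to make the arithmetic concrete): give every required move its own prime, let the board number be their product, and playing a move divides its prime out, with 1 meaning 'finished.'

```haskell
-- Toy model: a board is the product of one prime per outstanding move.
type Board = Integer
type Move  = Integer  -- assume each legal move has been assigned a distinct prime

boardOf :: [Move] -> Board
boardOf = product

-- playing a move divides its prime out; a non-divisor is an illegal move
play :: Move -> Board -> Maybe Board
play m b
  | b `mod` m == 0 = Just (b `div` m)
  | otherwise      = Nothing

finished :: Board -> Bool
finished = (== 1)

main :: IO ()
main = do
  print (play 3 (boardOf [2, 3, 5]))  -- Just 10
  print (play 7 (boardOf [2, 3, 5]))  -- Nothing: 7 isn't one of this board's moves
  print (finished <$> (play 5 =<< play 2 =<< play 3 (boardOf [2, 3, 5])))  -- Just True
```

Repeated moves fall out for free: a move required twice contributes its prime squared, so it divides out cleanly exactly twice.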

Now, to construct such a number may entail a foray into Graph theory, I don't know, for it is demonstrable that each LYNE puzzle is a DAG (how?), or it may require using the Gödel numbers for each board as proofs (and the solving of the puzzle the co-construction of the proof).

That's part II of the two-parter of LYNE-as-Gödel numbers.

Labels:
graph theory,
Gödel numbering,
proof
