Wednesday, May 26, 2010

Math is hard

So, here's an interesting, everyday conundrum, sent to me by a reader:

Hello my mathematical genius friend. :) [should I edit that out? *blush*]

I have been sent the following mathematical joke of sorts. The person who sent it to me claims there are no flaws in it. But obviously there has to be a flaw, because the conclusion is incorrect. The problem is that I don't know how to explain the flaw---but I suspect it happens in that third line where it attempts to equate squared cents with squared dollars. Is there any way that you could explain the flaw in such a way that a seventeen year old Norwegian would understand? Don't worry, you don't have to say it in Norwegian, he speaks English.

If you don't mind having a look at this and explaining, I would be ever so obliged. And no pressure, but...pants may be on the line in this little bet I've entered into.

1. $1= 100¢ (so $0.1 = 10¢)
2. And, 100¢ = 10¢²
3. Then, 10¢² = $0.1²
4. $0.1² = $0.01

5. $0.01 = 1¢

The implied conclusion is
∴ 'a dime squared equals one penny'

Then we say 'Q.E.D.'.

Hm, if pants are on the line for my dear reader, I wonder what's on the line for moi-self (that is faux-French) (and 'faux' is French)? A review? Or two? Or three? Of my stories?

Let's leave my preening aside.

So, who sees the fallacies above that lead to the absurd conclusion?

If you do not see it, please think on this awhile before looking at the answer.




The answer

From basically the get-go this problem statement is erroneous and imprecise, but this comes from a fundamental laxity in understanding of what operators are and what operators do. Certainly, the first premise is correct: One hundred pennies does indeed equate to one dollar, for
1. 100¢ = $1

is a statement of fact about the conversion from one set of units (pennies or ¢) to another set (dollars or $). But already the trickster plays fast and loose, for indeed:
1. ... (so $0.1 = 10¢)

is still correct but the (implied) conclusion makes a statement about the square of dimes, not about the square of tenths of dollars.

Do you see the fallacy now?

No? Let's continue.

So 1. is true, insofar as I can throw it, and days where my back gives out (ah! me poor bones!) that's not very far, but it's far enough for this problem statement, so long as it goes no farther than that.

But it does. *sigh*

So now let's get into the lies:
2. 100¢ = (10¢)² [parentheses implied and erroneous]

This is a lie. It's a lie, lie, lie, lie, lie, lie, lie!

Huh?

The lie is this: a square of a thing is not the thing itself, and even if you know nothing about mathematics, you can prove this to yourself. Socrates did it with an unlettered and untutored slave boy, and you are further along than what Socrates had to work with.

So let's prove 2. false with an analogue.

Take a foot rule (sorry, my readers who do not follow the British Imperial system, which, oddly enough, includes Brits these days, too) and a large piece of butcher paper. Draw a line on the butcher paper measuring one foot.

a. _ = 1 foot.


Now, 'square' that line, by drawing three more lines to make a foot square on the butcher paper.

b. ❏ = 1 foot square.


So, is

c? 1 foot = 1 foot square


Obviously not! for that would be to say:

c? _ = ❏


Or, put another way, 'one thing of one thing is equivalent to one thing of entirely a different thing'. One gulp of water does not equal one gulp of bleach. One I wish to have with my breakfast, the other, I do not, as my father very unfortunately discovered the hard way one not-so-fine morning.

"But, but, but ..." you stutter angrily, "but isn't '10² = 100' a statement of fact?"

Yes, indeed, it is, but please remember '10 ≠ 10 things' is also a statement of fact. Number (with a capital 'N') is a class of classes as, e.g., Introduction to Mathematical Philosophy so clearly and succinctly explains.

The link goes right to the book, all 228 pages of it. It's a quick read, so please (re)read it.


So to state:
2. 100¢ = (10¢)² [parentheses implied and erroneous]

is to state a falsehood.

How do we correct it? Well, by replacing the (implied) error with an explicit (corrected) ordering:
2. 100¢ = 10²¢

Do you see the correction? In the former case, we erroneously squared the units along with the number; in the latter case we do not square the units, we square the number only.
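To spell out the dimensional bookkeeping (a small worked check, using nothing beyond premise 1.):

$1 = 100¢ = 10² ¢,   whereas   (10¢)² = 10² × ¢² = 100 ¢²

and ¢² ("square pennies") is no more a unit of money than a square foot is a unit of length. The number 10 squares to 100; the unit ¢ does not square to ¢.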

Do I need to go any further? Or, do the fallacies fall out obviously in the rest of the assertions?

For completeness' sake, I will review each step.
3. (10¢)² = ($0.1)² [parentheses implied and erroneous]

No. (10 pennies) squared does not equal (1 dime) squared.
But, a statement of fact of conversion is that (10 pennies) = (1 dime), but that's as far as it goes, and no farther.

If we square pennies we have a new unit of measure called, I don't know: (¢²), and (¢²) does not equal dimes. Not in this world.
4. ($0.1)² = $0.01 [parentheses implied and erroneous]

Again, no.
Again: $(0.1²) = $0.01, but again, that is as far as you can go with that statement.
5. $0.01 = 1¢

is just a reformulation of the first statement and is true, yes, but redundant.

Remember from Frege's predicate calculus:
q |- p [read: 'q implied by p' or 'if p then q']

[Shoot! why don't they have an 'implied by' HTML character?]
But if:
¬p [read: 'not p' or 'p is not true (or provable)']

Then you can say anything you like for q, or, more correctly, you cannot say anything at all about q, because q does not depend on ¬p, it depends on p.
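Schematically, the same point in symbols:

p → q, ¬p ⊢ (nothing about q) [both q and ¬q remain consistent with the premises]

because p → q says nothing at all about the case where p is false.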

So, as it were: 'If I had done the laundry, we wouldn't have had this argument' and 'I didn't do the laundry' means I don't have a leg to stand on about why we had this argument, honey.

(Oops, sorry, ... but it's not like I have had this experience at all ...)

And, to the point of this article: 'If a (10 one-hundredths of a dollar) squared were to equal (10 pennies) squared, then ...'

Well, then anything, because '(10 one-hundredths of a dollar) squared = (10 pennies) squared' is false. So say away, because anything coming from a false premise is an absurd conclusion: whirled (black eyed) peas, butterflies flapping in the Amazon, and the Number 23. They may be true enough in their own right, but you can say nothing about them from the false premise.

So, let's take the absurd conclusion:
∴ 'a dime squared equals one penny'

and reformulate it to be a true statement.

Well, the first thing we have to do is to get rid of the '∴', so let's do that, and then state the plain facts:
'a dime squared equals one penny' is an absurdity.

Q.E.D.


What can we take away from this?

Let's examine another absurdity:
'Math is hard.'

No.

No, my dear ladies and gentlemen, 'Math' isn't 'hard.' Math is simple. Math can even be easy, for we learned from the Greeks, after all, that Math is one of the humanities. Math is simply a language. A language that can describe things exactly as they are and exactly as they are not. And precisely at that. It can even describe imprecision precisely. The 'hard'ness of mathematics comes from us, when we don't wish to be precise in what we are talking or thinking about.

Being precise ... well, that can be hard, I suppose, so then perhaps it's more precise not to say 'Math is hard' but to say 'Life is hard.'

Yes. That's true. 'Life is hard' as we choose to make it.

Oh, well. I never promised you a Rose Garden.

Tuesday, June 2, 2009

Realized Constants are Comonadic

An interesting problem that often arises is "to make" constants. Put another way, it often happens that a system acquires information over time. The system may wish to formalize what it has acquired by creating a constant value.

Here's the problem, however: "Variables don't; Constants aren't."

Or, put another way:

1. One man's constant is another man's variable.


To make a constant sometime down the road, as it were, in languages that have logic variables is simplicity itself: once a variable is unified with a value, it keeps that value throughout that proof.

In "pure" functional languages, that is, languages that do not have side effects, the same can be said.

What about the rest of the world? Take Java, for example. One can make a variable keep its value by declaring that variable final, but that is not helpful if we do not know what our constant value is to be at the time of object creation.

In what scenario could this possibly occur? We need a constant value, but we do not know what it is?

Actually, this situation arises quite frequently. Let's take a concrete example. You have many databases in your database management system, and at compile time you do not know on which port your DBMS is running nor do you know which database the user wishes to query. That's fine, you can lazily create the database connection and access the value through a method:

2. Functions delay binding; data structures induce binding. Moral: Structure data late in the programming process.


But what's not fine is this: the user is data-mining, examining chunks at a time, occasionally calling for the next chunk. How does one know that one has fallen off the end of the table? A simple SELECT on the maximum indexed key will tell you that, but to do that with every query? This seems wasteful. So, once we do the query once, let's just store that value in a variable with a flag to say we've already looked it up.

That sounds suspiciously like a lazily initialized constant, right?

But here's the rub: code that sets a flag allows other code to unset it, just as code that stores a "constant" in a plain variable allows other code to overwrite it. Using just plain-old regular variables gives no guarantee that the variable, once set, stays set at that constant value.

What to do?

Well, what is the work-flow of this process? We perform an action, checking that we are still within the bounds of the valid domain, but the domain only becomes valid after program start, so we cannot make the bounds constant using the final keyword on the variable. This is a very common programming action ... sort of like ... dare I say ... a design pattern. Gee, I wish there was one invented that did all this.

In fact, there does exist such a pattern, and it comes to us via Category Theory. The programming language Haskell has incorporated elements of Category Theory in its use of monads and arrows, but the downside of these ways of thinking about computation is that one must "lift" the entire computation into that domain, transforming the original computation to work in that regime.

No, what is needed is something less intrusive, and I found that less intrusive thing when I read an article by sigfpe on comonadic plumbing. In this article he describes three different ways of looking at constants: 1. as a constant, 2. as the Reader Monad, 3. or as the Reader Comonad.

The first one is sometimes untenable, given the programming need, and doesn't work for our case.

The second one is useful if one is already programming in a monad. Umm ... how many OO programmers use monads? Hands up.

[sound of crickets] ... thought so.

The third one allows one to use the constant on demand. And here's the thing: the comonad is very easily comprehended. It has a value that can be extracted "down" from the comonad, and it allows a computation to be extended over the comonad.

This sounds, in OO parlance, very much like an object. Let's ignore the extendability of the comonad for now and simply look at extraction (this narrowed functionality is a thing in and of itself; objects of this type are called Copointed, but let us not be too dogmatic here). Creating such a type is simplicity itself:

> public interface Comonad<T> { public T extract(); }

Um, yawn?

But that is the power of a pattern language: not the ability to create these incredibly complex things in a controlled way (though they do do that), but the ability to recognize that such-and-so is a pattern and then to encapsulate behavior into the pattern.

Using the comonad pattern, we simply make our "maximum row number of Table x" an implementation of the Comonad interface, and then, when we do have enough information to create the database connection (that is, we now have the domain in which our constant resides), we instantiate the comonadic object (which is that domain). Whenever extract is called, it returns the constant value required: the implementation does a database lookup the first time and returns the internally cached value every time thereafter. Since the Comonad interface exposes no methods to change its internals, so long as one holds only the copointed object, the value that extract returns is guaranteed to remain constant.
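For comparison, here is a tiny sketch of the same idiom on the Haskell side (where the comonadic vocabulary comes from). This is my own illustration, not the article's Java implementation; the names Copointed and mkLazyConstant, and the lookup action passed in, are hypothetical:

import Data.IORef

-- only the "extract" half of the comonad: a value we can pull out on demand
newtype Copointed a = Copointed { extract :: IO a }

-- wrap an expensive lookup so that it runs at most once; every later
-- extract returns the cached value, and nothing else can overwrite it
mkLazyConstant :: IO a → IO (Copointed a)
mkLazyConstant lookupIt = do
  cache ← newIORef Nothing
  return . Copointed $ do
    cached ← readIORef cache
    case cached of
      Just v  → return v
      Nothing → do v ← lookupIt
                   writeIORef cache (Just v)
                   return v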

Summary

This article examined a commonly recurring problem: one needs to constify a value at some point during a program run and guarantee that it remain constant after being created. A simplification of the comonad was offered as a pattern that is simple to define and to implement.

Monday, September 29, 2008

Animal as RDR, part III

Examples: Building, running and modifying RDR systems

The previous entries showed the implementation of the model of a simple Ripple-Down Rules (RDR) system. This entry will show how to implement the rules for such a system from scratch as well as how to run and then to modify such a system. Again, we are using the computer game Animal as the basis of these examples.

Let's start off by implementing the RDR system modelled in the first entry on this topic. But first, we need a couple of improvements. The addRule I had originally implemented wasn't a model of ease of use as it was ...
] addRule :: BinDir
]         → RuleTree a b c k v
]         → Environment k v b c
]         → Condition a (Knowledge k v)
]         → Conclusion b c (Knowledge k v)
]         → RuleTreeEnv a b c k v
] addRule dir (Zip hist branch) env cond concl
]   = let rule    = Branch (Rule cond concl) Leaf Leaf
]         newbr   = fromJust $ mutate dir rule branch
]         newtree = return $ settle (Zip hist newbr)
]     in RuleEnv newtree env
... so I changed it so that it fit more neatly into building rules in sequence:
> addRule :: BinDir → Rule a b c (Knowledge k v)
>         → RuleTree a b c k v → RuleTree a b c k v
> addRule dir rule (Zip hist branch)
>   = let ruleB = Branch rule Leaf Leaf
>     in Zip hist (mutate dir ruleB branch)
This new implementation has now replaced the previous one in the implementation entry. Also, constructing Rules themselves was a bit labour-intensive, so I've added the following function to simplify building simple rules:
> type SimpleRule = Rule String String String
>                        (Knowledge String String)

> mkRule :: String → String → SimpleRule
> mkRule key ans = Rule (present key) (assume ans)
Also, recall that:
(>>|) :: Monad m ⇒ m a → (a → b) → m b
This function simply reorders the arguments of liftM, so why have it? I find it useful in the flow of building monadic systems, as demonstrated below.
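(The definition of (>>|) is not reproduced in this entry; from the signature above it is presumably just liftM with its arguments flipped, something like:)

(>>|) :: Monad m ⇒ m a → (a → b) → m b
m >>| f = liftM f m    -- i.e., m >>= return . f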

Building

And with that, let us build our Animal guessing game knowledge base:
> animalTree :: Zipper BinDir (BinaryTree SimpleRule)
>            → Zipper BinDir (BinaryTree SimpleRule)
> animalTree tree = fromJust
>    (return tree                                >>|
>     addRule L (mkRule "has four legs" "pony")  >>=
>     advance L                                  >>|
>     addRule L (mkRule "barks" "dog")           >>|
>     addRule R (mkRule "swims" "fish")          >>=
>     advance L                                  >>|
>     addRule R (mkRule "purrs" "cat")           >>=
>     withdraw                                   >>=
>     advance R                                  >>|
>     addRule R (mkRule "spins web" "spider")    >>|
>     reset)
The function reset is from the Data.Zipper module:
> reset :: (Mutable c dir c, Transitive c dir)
>       ⇒ Zipper dir c → Zipper dir c
> reset z@(Zip [] _) = z
> reset (Zip ((dir, h):t) elt) = reset (Zip t $ mutate dir elt h)
Looking at animalTree above, I say with unmasked pride that I feel (>>|) shows its hidden strength: I could not imagine puzzling out the proper way to write the above definition using liftM and have it follow the natural flow that it does with its current implementation. Also note that it is vital that reset be called after a set of changes to a knowledge base occur, to reset (obviously) the focus to the top-level (default) rule, and to correct the tree containing that knowledge.

Running

Now that we have our animalTree, we need one more function to extract the result (follow the Conclusion) of runRule:
> runConcl :: RuleTreeEnv a b c k v → c
> runConcl (RuleEnv _ (Env ks (Concl _ f))) = f ks
Now, we could set up an interactive question-answer session to tease the animal we are guessing from our hidden thoughts, but, since interactive I/O is a sin in functional languages (see the fall from grace in Lazy K), let's "pretend" our way through an interactive session, recording the results of the questions into the Environment:
> rtests :: IO ()
> rtests = let RuleEnv tree env = initKB "default" (assume "none")
>              newTree = animalTree tree
>              spider  = updateEnv "spins web" "true" env
>              chat    = updateEnv "has four legs" "true" $
>                        updateEnv "purrs" "true" env
>              spy     = runConcl (answer $ RuleEnv newTree spider)
>              cat     = runConcl (answer $ RuleEnv newTree chat)
>          in do print newTree
>                print spy
>                print cat
As expected, spy is "spider" (in answer to the question "Does it spin a web?"), and cat is "cat" (in answer to the questions "Does it have four legs?" followed by "Does it purr?").

Modifying

All is well and good with the world, yes? Certainly, when we receive the expected answers from our knowledge base, but let's explore the world a bit beyond what we've captured. Not everything that swims is a fish:
> fishey = let RuleEnv tree env = initKB "default" (assume "none")
>              newTree = animalTree tree
>              duck    = updateEnv "swims" "true"
>                          $ updateEnv "flies" "true" env
>              noDuck  = runConcl (answer $ RuleEnv newTree duck)
>          in print noDuck
We find that noDuck is a "fish". Perhaps it's a "flying fish", but it definitely wasn't the animal we were guessing, so we need to update our knowledge base to give us the desired answer. Fortunately, the system returns the Rule that rendered the Conclusion, so modifying the system proceeds directly:
> duckey = let RuleEnv tree env = initKB "default" (assume "none")
>              newTree  = animalTree tree
>              duck     = updateEnv "swims" "true"
>                           $ updateEnv "flies" "true" env
>              re@(RuleEnv noDuckTree _) = answer $ RuleEnv newTree duck
>              noDuck   = runConcl re
>              duckTree = addRule L (mkRule "flies" "duck") noDuckTree
>              ducky    = runConcl (answer $ RuleEnv duckTree duck)
>          in print (noDuck, ducky)
With the modification in place, that is, the addition of the new EXCEPT Rule, we find that the animal that swims and flies is, indeed, a "duck", as expected. That's Just ducky!

Knowledge in context

Of course, there is the flying fish conundrum, so a better ordering would be to have the Conclusion of that Rule actually be "flying fish" and its EXCEPT clause (with the Condition being something like "webbed feet" or "feathers") rendering the "duck" Conclusion. While we're on the topic of structuring knowledge, not everything that purrs is a cat. The knowledge base could have had a very different structure if the Condition of the first Rule was "purrs". Trekkers know the answer to that one: "tribble", obviously! The follow-on EXCEPT clause (with the Condition of "four legs") would then clarify to the feline nature.

This demonstrates knowledge in context, where in one context, the context of "having four legs", the attribute of purring leads to "cat", but in another context (the blank context, but that context could be elaborated with some Rules that put us in the context of the Star Trek, um, multiverse?), the very same attribute leads to "tribble". Under this new context, "four legs" leads back to our "chat chapeau" (that is Viennese) [I am really running rampant with my `pataphorisms, I do apologize and will work to check myself, but topic of επιστήμη λόγος does rather lend itself to such openings [which I have relentlessly pursued ... again!]] Furthermore, the quiddity of "four legs" is, itself, context-based. In one sense it leads to every little girl's dream (a "pony") and following (EXCEPTing) that, several other species, and in another context, it leads to non-tribble purring creatures. This is a rather fundamental restructuring of our presumptions from the first article on this topic. I don't have a simple function that restructures knowledge assumptions in fundamental ways; I don't see the benefit of having one, so let's simply rewrite our knowledge base from scratch with our gained experience:
> startrek tree = fromJust
>    (return tree                               >>|
>     addRule L (mkRule "purrs" "tribble")      >>=
>     advance L                                 >>|
>     addRule L (mkRule "has four legs" "cat")  >>|
>     addTree R (firstRule (animalTree tree))   >>|
>     reset)
>   where addTree dir (Zip _ branch) (Zip hist tree)
>           = Zip hist $ mutate dir branch tree
>         firstRule = fromJust . advance L
Not as painful as I thought! There are a couple of points to note, however:
  1. The path to discovering a "cat" is duplicated, redundantly. This is fine, however: real knowledge is messy and contains redundancies, and this redundancy doesn't impact the (speed) efficiency of this knowledge base in any way; and,
  2. We are back to missing our "duck". I leave that as an exercise to you to re-add.
Summary

This concludes the series of articles on the explanation, implementation and demonstration of a simple Ripple-Down Rules (RDR) system. In these articles we showed that such systems are easy to implement in Haskell and then to use. Knowledge management, in and of itself, is a rather deep and tricky topic (we have hinted at such trickiness in our "Trouble with Tribbles"), but RDR, using the concept of knowledge in context provides a method that allows modelling this knowledge more directly and allows manipulation of assumptions without adding too much difficulty to the task of knowledge engineering.

Friday, September 19, 2008

Animal as RDR, part II

Implementation study

In the last entry, we introduced what RDR (ripple-down rules) are and reviewed the types that comprise an example system. This entry will show how those types are used to implement such a system.
> module RDRFinding where
This module scans the RDR tree in context to give BOTH the best-fitting conclusion AND the final Branch that led to the ultimate conclusion (in the form of a zipper so that the branch may be replaced in place using standard operations on the zipper).
> import RDRTypes
> import Data.Transitive
> import Data.Zipper
> import Data.BinaryTree
> import Data.Map (Map)
> import qualified Data.Map as Map
> import Control.Monad.State
> import Data.Maybe
You have already encountered the above imported modules, but the next two modules need an introduction. The first
> import Control.Monad.Utils
contains my weird and wonderful syntax when I'm using monads for parsing or logic tasks. The parsing syntax you've seen before (see the critique), but I do add one new syntactic construct:
(>>|) :: m a → (a → b) → m b
because I'm always doing "m >>= return . f", and liftM seems to feel oddly backwards when I'm visualizing data flow. The next
> import Data.Mutable
provides a generic operation for changing a data structure:
class Mutable t dir val | t → dir, t → val where
   mutate :: dir → val → t → Maybe t
So, what's the game? We have an Environment (a set of attributed values) combined with a RuleTree into the State Monad. What we do is guide the values in the environment through the rule tree (where a successful Condition chooses the EXCEPT branch and displaces the currently saved Conclusion with the one associated with this Rule, and conversely if the Condition fails, the ELSE branch is selected, without displacing the currently saved Conclusion). When we reach a Leaf, we return our current position in the tree (the current state of the Zipper) along with the last valid Conclusion. All this is done by runRule:
> runRule :: RuleFinding a b c k v
> runRule = get >>= λ (RuleEnv root env) . runRule' root env

> runRule' :: RuleTree a b c k v → Environment k v b c
>          → RuleFinding a b c k v
> runRule' tree env@(Env ks curr)
>   = branch tree >>: λ (cond, conc) .
>     let (dir, concl) = liftZdir (testCond cond env conc)
>     in  advance dir tree >>: λ path .
>         put (RuleEnv path (Env ks concl)) >> runRule
>   where x >>: f = tryS curr x f
Whew! This is a mouthful in the number of functions it introduces, but conceptually, runRule is rather straightforward. Let's break it down.

The function runRule, itself, merely destructures the RuleTreeEnv term, passing that information to runRule', so let's move right on to that worker function. First, let's examine the funny syntactic construct, (>>:) — what is this monadic operator doing? We see from its definition that it calls tryS:
> tryS :: a → Maybe b → (b → State c a) → State c a
> tryS x may f = maybe (return x) f may
So, tryS lifts the State Monad into semideterminism (using the Maybe Monad). As an aside, perhaps, then, runRule' could be rewritten as a StateT over the Maybe Monad ... perhaps an intrepid reader will gain a ⊥-trophy for an implementation and explanation?
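(For what it's worth, the type of that suggested reformulation would presumably read something like the following; the rewritten runRule' itself is left to the ⊥-trophy hunter:)

type RuleFinding' a b c k v
  = StateT (RuleTreeEnv a b c k v) Maybe (Conclusion b c (Knowledge k v))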

Using that monadic operator, (>>:), we get the current branch in focus (bailing if the focus is on a Leaf) ...
> branch :: RuleTree a b c k v
>        → Maybe (Condition a (Knowledge k v),
>                 Conclusion b c (Knowledge k v))
> branch (Zip _ (Branch (Rule cond conc) _ _)) = Just (cond, conc)
> branch (Zip _ Leaf) = Nothing
... then we test the condition at that Branch ...
> testCond :: Condition a (Knowledge k v)
>          → Environment k v ca cb
>          → Conclusion ca cb (Knowledge k v)
>          → Either (Environment k v ca cb)
>                   (Environment k v ca cb)
> testCond (Cond _ test) env@(Env kb _) conc1
>   | test kb   = Left $ Env kb conc1
>   | otherwise = Right env

> liftZdir :: Either (Environment k v ca cb)
>                    (Environment k v ca cb)
>          → (BinDir, Conclusion ca cb (Knowledge k v))
> liftZdir test = either (λ (Env _ c) . (L, c))
>                        (λ (Env _ c) . (R, c))
>                        test
I do this little pas de deux between testCond and liftZdir because somehow it just feels right to use the Either type here. Perhaps, sometime later Arrows will come into play. At any rate, liftZdir . testCond can be considered one function that returns the appropriate leg of the branch to continue finding the best viable Conclusion, as well as the best current Conclusion reached from applying the Environment to the Condition.

Given that information, we now advance down that path, updating the state, and continue to test recursively, until we reach a Leaf, at which point we have our answer (the ultimate viable Conclusion).

If we're happy with that answer, we call runRule with a new transaction (in other words, a fresh Environment), and the Zipper pointing back at the top of the RuleTree. If we're not happy, then we're given the ability to add a new Rule to the RuleTree. We do this with addRule:
> addRule :: BinDir → Rule a b c (Knowledge k v)
>         → RuleTree a b c k v → RuleTree a b c k v
> addRule dir rule (Zip hist branch)
>   = let ruleB = Branch rule Leaf Leaf
>     in Zip hist (mutate dir ruleB branch)
The above functions are the meat of the implementation for this simple RDR system. There are a few conveniences that the following functions provide. The first one is answer that scans the rule tree, making the best conclusion, and then backs up one step to provide the user access to the branch in case the precipitating rule finding wasn't exactly giving the desired result.
> answer :: RuleTreeEnv a b c k v → RuleTreeEnv a b c k v
> answer rule = let RuleEnv z ans = execState runRule rule
>               in RuleEnv (fromJust $ withdraw z) ans
The next three functions help to automate the creation of the rule parts, Conditions and Conclusions. The function mkCond creates a test function with the assumption that the knowledge store contains a (k,v) pair. It does the lookup in the knowledge store and passes the extracted values to the test function (which, as with any good predicate, returns either True or False). If we can't find the key, I guess, for now, we'll assume the returned value is False:
> mkCond :: Ord k ⇒ k → (v → Bool) → Condition k (Knowledge k v)
> mkCond key fn = Cond key $ λ ks . maybe False fn (Map.lookup key ks)

> present :: Ord k ⇒ k → Condition k (Knowledge k v)
> present = flip mkCond (const True)

> assume :: k → Conclusion k k env
> assume key = Concl key (const key)
This completes the implementation of this RDR system. The next entry will create a small RDR system, based on the game Animal, to demonstrate how the system works.

Thursday, September 18, 2008

Animal: an RDR implementation study, part I: types

Synopsis

Ripple-down rules provide a fast and efficient representation of knowledge in context for use, e.g., by expert systems. We present here a complete implementation of one type of RDR system in Haskell. But what analogy is sufficient to describe what an RDR system is? The literature, albeit comprehensive, seems to concentrate more on the details of making such a system work, and none of it presents the essence: the computer guessing-game Animal does a good job of this illustration, and we use it here to build an example knowledge base for this RDR system.

Motivation/Introduction

As a knowledge engineer I have worked with Subject-Matter Experts (SMEs) to build various rule-based expert systems. A common pitfall of such systems, their ὕβρις, is that they attempt to abstract decision making from any context and, as such, fail to notice the nuances or to have the situational awareness needed to render useful judgments. In a knowledge-engineered rule-based/bayesian-like hybrid system I developed, the bayesian decisions led to over 99% of the positive findings in the transactions analyzed.

This would be the end of the story if there were no hard limits, but there are always such hard limits. Bayesian-like systems tend to scorn the advice and guidance of SMEs: the data set itself is the expert, not the SMEs. Despite the success of using the data, bayesian-like systems also tend to overreact — only 1 transaction out of 1000 it flagged actually led to a decision — these systems need serious throttling to be successful. Resources, then, are a real-world constraint that rule-based systems model better than bayesian systems in practice. In fact, hard constraints in general are modeled much better by rule-based systems.

But the rule-based systems, popularized by, e.g., iLog JRules™ and used in many expert systems, do not speak the language of the SMEs. Having worked with SMEs across the U.S.A. over a period of years, I find that rules invariably tend to be defined by exception. Whenever we, as the knowledge-engineers, attempt to nail down a definition with the SMEs, the conversations always proceed as follows:
SME: Yeah, a CC transaction of over $57.38 is always suspect.
Me: So, we'll flag those, then [thinking: Ha! that was an easy rule; finally!]
SME: No, no, no! Only from young males or senior citizens in the following three income brackets.
Me: Oh, okay, I'll add that to the constraint.
SME: No, but it needs to be in the following zip-codes ...
Several hours later we're still ironing out the rule, and then, as lunch break approaches, the chair either tables the rule or passes a simplified, useless, version of it.

Note how the rule set was defined above: the SMEs agreed to a general case, and then continued to refine that definition by adding (often conflicting) constraints. In a context-free rule-based system, modelling such a rule set is by no means impossible, but the task quickly becomes a chore in the nightmare of complexity.

RDRs (Ripple-down Rules), on the other hand, embrace the context. The syntax of an RDR system is as follows:
<rule> ::= IF condition THEN conclusion
<knowledge base> ::= ⊥
                   | <rule>
                     EXCEPT <knowledge base>
                     ELSE <knowledge base>
The semantics is as follows. If the condition is met, then that conclusion is saved as a viable result (replacing any prior conclusion) and the EXCEPT branch is followed recursively until ⊥ is reached, at which time the most recently saved conclusion is the result. On the other hand, if the condition fails, the ELSE branch is selected and followed recursively. Of course, the knowledge base must be applied to something. In this system, we have a very simple environment where the condition tests for the presence of a String in that environment.

The initial knowledge base for every RDR system starts as:
IF (const True) THEN none EXCEPT ⊥ ELSE ⊥
As the SMEs interact with the RDR system, they add knowledge to refine the conclusions, guided by refinements in the conditions. The system is very permissive: redundancy is permitted, even encouraged, because a condition at depth along one path of EXCEPTs and ELSEs has a very different meaning, in context, than along another path.

Example

Our RDR system will be the Animal guessing game with the following knowledge base:
IF (const True) THEN "not an animal"
  EXCEPT IF (present "four legs") THEN "pony"
         EXCEPT IF (present "barks") THEN "dog"
                EXCEPT ⊥
                ELSE IF (present "meows") THEN "cat"
                     EXCEPT ⊥
                     ELSE ⊥
         ELSE IF (present "swims") THEN "fish"
              EXCEPT ⊥
              ELSE IF (present "spins web") THEN "spider"
                   EXCEPT ⊥
                   ELSE ⊥
  ELSE ⊥
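To see the semantics in action on this knowledge base: given an environment containing "four legs" and "meows", the default rule fires ("not an animal") and we take its EXCEPT branch; "four legs" is present, so "pony" displaces it and we go EXCEPT again; "barks" fails, so we go ELSE; "meows" succeeds, "cat" displaces "pony", its EXCEPT branch is ⊥, and the answer is "cat".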


Types
> module RDRTypes where

> import Control.Monad.State
> import Data.Map (Map)
> import qualified Data.Map as Map
I must apologize for not introducing the next three modules properly. These modules are part of my canon and will be introduced in depth elsewhere. For now, I must settle for the following descriptions:
> import Data.Transitive
defines a generic protocol of walking a data structure one step at a time, either "forward" (with advance) or backward (with withdraw).
> import Data.Zipper
The "simple Ariadne zipper" illustrated in the Haskell Wikibooks.
> import Data.BinaryTree
The only novel structure here is that the tree is shaped to conform to the structure of RDRs: the data is in the branch, not the leaves.
From Predicate Logic-based Incremental KA, Barry Drake and Ghassan Beydoun (Nov 2000), file named PRDR.pdf

2.1. Ripple Down Rules (RDR)
An RDR knowledge base is a collection of simple rules organised in a binary tree structure. Each rule has the form, "If condition then conclusion". Every rule can have two branches to other rules: a false-branch (also called the “or” branch) and a true-branch (also called the “exception” branch). An example RDR tree is shown in figure 2.1. When a rule is satisfied, the true branch is taken, otherwise a false branch is taken. The root node of an RDR tree contains the default rule whose condition is always satisfied, that is, it is of the form, “If true then default conclusion”. This default rule has only a true-branch.
The RDR IF-THEN rule contains a condition and conclusion that interact with the Environment (defined later) to inform the decision of the system.
> data Condition a env = Cond a (env → Bool)
> instance Show a ⇒ Show (Condition a env) where
>   show (Cond c _) = "IF " ++ show c

> data Conclusion a b env = Concl a (env → b)
> instance Show a ⇒ Show (Conclusion a b env) where
>   show (Concl c _) = "THEN " ++ show c
Given the above, an IF-THEN Rule is simply the conjunction of the Condition and Conclusion:
> data Rule a b c kb
>   = Rule (Condition a kb) (Conclusion b c kb)
> instance (Show a, Show b) ⇒ Show (Rule a b c kb) where
>   show (Rule a b) = show a ++ " " ++ show b
The Environment is composed of a dictionary (keys to values) and the current most valid conclusion under consideration. In our example (Animal), we merely test for the existence of a key, but more complex systems usually treat the keys as attributed values and perform more than simple existence-check tests.
> type Knowledge k v = Map k v
> data Environment k v a b
>   = Env (Knowledge k v) (Conclusion a b (Knowledge k v))
> instance (Show k, Show v, Show a)
>     ⇒ Show (Environment k v a b) where
>   show (Env kv conc) = "{" ++ show kv ++ ": "
>                            ++ show conc ++ "}"
The above elements are what comprise the simple types for the RDR system, so what is left is those elements that form the structure. This system is in the shape of a binary tree, so, of course, we use that data structure. As we append new rule branches to leaves of the tree, we use the Zipper data type to allow us to add these nodes in place.
> type RuleBranch a b c k v
>   = BinaryTree (Rule a b c (Knowledge k v))
> type RuleTree a b c k v
>   = Zipper BinDir (RuleBranch a b c k v)

> data RuleTreeEnv a b c k v = RuleEnv (RuleTree a b c k v)
>                                      (Environment k v b c)
> instance (Show a, Show b, Show k, Show v)
>     ⇒ Show (RuleTreeEnv a b c k v) where
>   show (RuleEnv tree env) = "| " ++ show tree ++ " : "
>                                  ++ show env ++ " |"
The RDR system is built around the concept of context, and the State Monad captures that concept well. The final type is used to shuttle around the knowledge base as well as the currently viable conclusion based on the rule finding.
> type RuleFinding a b c k v
>   = State (RuleTreeEnv a b c k v)
>           (Conclusion b c (Knowledge k v))
The above types describe the RDR system. In the next entry, we will show the implementation of the system when it comes to building and adding rules as well as traversing the rule tree to reach a conclusion.

Wednesday, September 10, 2008

What is declarative programming?

The concept has been bandied about, and has entered into more popular discussion with the broad acceptance of XML. Besides the overall definition, however ("Declarative is the 'what' of programming; imperative, the 'how'"), I haven't heard a definition that sketches, even, what declarative programming is and what it looks like.

For the "quartet of programming styles", being: imperative, object-oriented, functional, and logical, it seems pretty clear that there are well-defined general boundaries (with enough wiggle room to cause fanatics to enjoy flame-wars as the mood struck them) to separate one style from another, with languages easily falling into one or more of those camps:
  • C: imperative
  • Smalltalk/Java: imperative/object-oriented
  • Lisp (and Scheme and Dylan and ...)/Haskell/ML: functional
  • Prolog (Mercury): logical
This was all clear-cut and well and good.

But for classifying something as "declarative programming" it seemed that there has been talk of its benefits or drawbacks, but not much more than superficial talk of what it is. Camps from both the functional programming community and the logic programming community stake claims over the declarativeness of their programming languages, but how does one recognize code as declarative? What is the measure by which the "declarativeness" of such code may be ascertained?

Up until recently, I have been troubled by such misgivings only marginally. I had it from authority, a Lisp giant, Patrick Winston, in a Prolog book (Bratko's 3rd ed of "Prolog Programming for Artificial Intelligence"), that the logical style of Prolog is declarative and the functional style is not. Before you send your flame, here's the quote:
"[...] In my view, modern Lisp is the champion of these [imperative] languages, for Lisp in its Common Lisp form is enormously expressive, but how to do something is still what the Lisp programmer is allowed to be expressive about. Prolog, on the other hand, is a language that clearly breaks away from the how-type languges, encouraging the programmer to describe situations and problems, not the detailed means by which the problems are to be solved.

Consequently, an introduction to Prolog is important for all students of Computer Science, for there is no better way to see what the notion of what-type programming is all about. [...]"
I add here that I also view the bulk of Haskell in this light: although it is possible to code declaratively in Haskell, most Haskell code I see is concerned with solving the problem (the "how") instead of describing the problem (the "what"). Put another way, it is natural to use the functional and imperative (with monadic do) styles, and it takes effort to use the logic style.

That has been my prejudice until recently, but then recent correspondence with colleagues, including David F. Place, who recently had an excellent article in the Monad.Reader about Monoid, has opened this issue for reexamination. So, I turn to you, gentle reader. I present two very different programs below. One written in the logic style; one, functional. Both solve the same problem, and both authors claim their own version is definitively declarative. I view the world through a particular lens, so I see one perspective. But I am burning with curiosity: do you see A) or B) as declarative, or both, or neither? If so, how do you justify your position?

A) the "logical" program approach:
import Control.Monad.State
import Data.List

splits :: (Eq a) ⇒ [a] → [(a, [a])]
splits list = list >>= λx . return (x, delete x list)

choose :: Eq a ⇒ StateT [a] [] a
choose = StateT $ λs . splits s

sendmory' :: StateT [Int] [] [Int]
sendmory' = do
    let m = 1
    let o = 0
    s ← choose
    guard (s > 7)
    e ← choose
    d ← choose
    y ← choose
    n ← choose
    r ← choose
    guard (num [s, e, n, d] + num [m, o, r, e]
             ≡ num [m, o, n, e, y])
    return [s, e, n, d, m, o, r, y]
B) the functional program approach (provided by David F. Place):
solve input accept return
    = solve' [] input [0..9] accept return

solve' bindings [] _ accept return
    | accept bindings = [return bindings]
    | otherwise       = []
solve' bindings ((_,g):xs) digits accept return
    = concatMap f $ g digits
    where f n = solve' (bindings ++ [n]) xs
                       (delete n digits)
                       accept return

num = foldl ((+) . (*10)) 0

sendMoreMoney =
    solve (('m', const [1]) :
           ('o', const [0]) :
           ('s', filter (> 7)) :
           (zip "edynr" (repeat id)))
          (λ [m,o,s,e,d,y,n,r] . num [s,e,n,d]
                               + num [m,o,r,e]
                               ≡ num [m,o,n,e,y])
          (λ [m,o,s,e,d,y,n,r] . [s,e,n,d,m,o,r,y])

Tuesday, September 2, 2008

Fuzzy unification parser in Haskell

Synopsis

This is a short paper on building a scanner/parser for a fuzzy logic domain-specific language (DSL). The system takes as input a file containing an ordered set of fuzzy statements and outputs the equivalent Prolog program. We first briefly and informally introduce the topic of fuzzy unification. Next we provide a Backus-Naur Form (BNF) grammar of the fuzzy DSL. Then we provide fuzzy example statements and show their transformation into Prolog statements. Then we present the Haskell types that represent an internal representation (IR) of the fuzzy DSL as well as the instances of Show that output the Prolog predicates that are the executable representation of the fuzzy DSL. Then we present the scanner/parser of the fuzzy DSL. Finally, we translate two input fuzzy files and execute queries against the result in a Prolog listener.

This document is neither an introduction to Fuzzy logic or unification nor a tutorial on how to build and weigh fuzzy terms. The reader is referred to the rich library of online and offline publications on these topics.

Introduction

The standard execution of unification in Prolog for ground atoms is that two atoms must be of the same type and then of the same value in order to unify. This rigor is very good for proof of program correctness and where there is no room for tolerances; in short, for classic predicate logic proofs, unification does what we need it to do. However, standard unification hinders more than helps in the presence of real-world, messy, data or where some generality is needed in, e.g., the decision-making process of an expert system.

One approach that provides some tolerance and generality in the face of messy data is to introduce fuzziness into the unification process. In this way, we may state facts with some degree of associated certainty. We may also embed in the rule-finding process fuzzy techniques. Three such techniques in fuzzy rule-finding include:
  1. Product logic, where and_prod (x, y) = x * y
  2. Gödel intuitionistic logic, where and_godel (x, y) = min x y
  3. Lukasiewicz logic, where and_luka (x, y) = max 0 (x + y - 1)
These techniques are conjunctive and are implemented in the Prolog file named prelude.pl as follows:
and_prod(X,Y,Z) :- Z is X * Y.
and_godel(X,Y,Z) :- min(X, Y, Z).
and_luka(X,Y,Z) :- H is X+Y-1, max(0, H, Z).
The fuzzy DSL also allows disjunctions of the above. Their implementation can also be found in prelude.pl:
or_prod(X,Y,Z) :- Z is X + Y - (X * Y).
or_godel(X,Y,Z) :- max(X, Y, Z).
or_luka(X,Y,Z) :- H is X+Y, min(1, H, Z).
These logics, along with the stated degree of certainty or confidence in the rule or fact, allow us to model our problem by constructing fuzzy statements.
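(For readers who would rather see those connectives in the language of the rest of this post, here is the same arithmetic as plain Haskell functions; this is merely a restatement of the three conjunctive formulas above, not part of the generated Prolog:)

andProd, andGodel, andLuka :: Double → Double → Double
andProd  x y = x * y
andGodel x y = min x y
andLuka  x y = max 0 (x + y - 1)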

Grammar

A <program> in the fuzzy DSL this scanner/parser supports is as follows:
<program> = <statement>+
<statement> ::= (<rule> | <fact>) <ss> "with" <ss> <float> ".\n"
<float> ::= Float

<fact> ::= <term>
<rule> ::= <term> <ss> <implication> <ss> <entailment>

<term> ::= <name> "(" <arguments> ")" | <name>
<name> ::= String1

<arguments> ::= <argument> <opt-args>
<opt-args> ::= "," <arguments> | ε

<argument> ::= <atom> | <variable> | <float> | <string>
<string> ::= "\"" String "\""
<variable> ::= String2
<atom> ::= <name>

<implication> ::= "<" <kind>
<kind> ::= "prod" | "luka" | "godel"

<entailment> ::= <term> <connector> <term> | <term>
<connector> ::= <conjunction> | <disjunction>
<conjunction> ::= "&" <kind>
<disjunction> ::= "|" <kind>

<ss> ::= " " <opt-ss>
<opt-ss> ::= <ss> | ε

1 no spaces, first character lowercase alpha, rest underscores and alphanums
2 no spaces, first character is "_" or upcase alpha

Transformation

An example of a statement of fact in the fuzzy DSL is as follows:
r(a) with 0.8.
An example of a rule statement is:
p(X) <prod q(X) &godel r(X) with 0.7.
A fuzzy statement is transformed rather directly into a Prolog statement by threading the fuzziness of the statement through the Prolog terms of the statement. This explanation is rather vague, but the examples demonstrate the mechanics of the transformation well enough. The fuzzy statement of fact is transformed into the following Prolog statement:
r(a, 0.8).
The fuzzy rule statement requires quite a bit more threading, and the system uses a chaining of logic variables to affect this:
p(X, Certainty) :-
q(X, _TV1), r(X, _TV2), and_godel(_TV1, _TV2, _TV3),
and_prod(0.7, _TV3, Certainty).


Strategy

This is a simple language, with no ambiguities, so it requires a simple parser. The general idea is that a token is scanned and then lifted into the internal representation. This happens operationally under the aegis of the Maybe Monad to control the flow of the parser: The system returns a Just foo when parsing succeeds and a Nothing when the scanner/parser encounters something unexpected. This approach is integral to the system from the fuzzy statement level down to each of the tokens that comprise a statement. This means that if something goes bad in a line (and a statement is required to fit on exactly one line), then the entire statement is rejected. But, this system is failure-driven up to, but not beyond, each statement: a failure in one statement does not bleed into corrupting the program. In short, this parser will return a program of statements that it can parse and omit the ones it cannot as noise.

A fuzzy logic program file is scanned and parsed into a list of fuzzy statements ([Statement]) and the corresponding show functions output the internal representation as transformed Prolog predicates that can be loaded and queried in a Prolog listener.

Haskell Types

The Haskell types that form the internal representation of a fuzzy program follow the BNF rather closely (recall the technique of parsing via lifting functions; this module uses that technique):
> module FuzzyTypes where

> import Control.Arrow

> data Term = Term String [Arg]

A term requires no transformation from fuzzy DSL to Prolog:

> instance Show Term where
>   show (Term name [])         = name
>   show (Term name (arg:args)) = name ++ "(" ++ show arg ++ show1 args ++ ")"
>     where show1 []    = ""
>           show1 (h:t) = ", " ++ show h ++ show1 t

> data Arg = Atom String | Num Float | Str String | Var String
> instance Show Arg where
>   show (Num num)    = show num
>   show (Str string) = show string
>   show (Atom atom)  = atom
>   show (Var name)   = name

> data Kind = Prod | Luka | Godel
> instance Show Kind where
>   show Prod  = "prod"
>   show Luka  = "luka"
>   show Godel = "godel"
The following lifting function converts an input string to the scanner to the correct connective-type value.
> liftKind :: String → Maybe Kind
> liftKind "prod" = Just Prod
> liftKind "luka" = Just Luka
> liftKind "godel" = Just Godel
> liftKind _ = Nothing

> data Implication = Impl Kind
We don't have a Show instance for Implication because we need to weave in the thread of fuzziness from the consequence and entailment. So, we do the showing from the Rule perspective.
> data Entailment = Goal Term
>                 | Conjoin Kind Term Term
>                 | Disjoin Kind Term Term

> display :: Entailment → (String, Arg)
> display (Goal term) = (show . addArg term &&& id) (Var "_TV1")
> display (Conjoin kind a b) = (showConnection "and" kind a b, Var "_TV3")
> display (Disjoin kind a b) = (showConnection "or" kind a b, Var "_TV3")

> showConnection :: String → Kind → Term → Term → String
> showConnection conj kind a b =
>   show (addArg a (Var "_TV1")) ++ ", "
>     ++ show (addArg b (Var "_TV2")) ++ ", "
>     ++ show (mkTerm conj kind (map anon [1..3]))

> mkConnection :: Char → Kind → Term → Term → Maybe Entailment
> mkConnection conn kind t0 t1 | conn == '|' = Just $ Disjoin kind t0 t1
>                              | conn == '&' = Just $ Conjoin kind t0 t1
>                              | otherwise   = Nothing

> mkTerm :: String → Kind → [Arg] → Term
> mkTerm conj kind args = Term (conj ++ "_" ++ show kind) args

> anon :: Int → Arg
> anon x = Var ("_TV" ++ show x)
We've finally built up enough infrastructure to represent a fuzzy rule:
> data Rule = Rule Term Implication Entailment Float

e.g.: Rule (Term "p" [Var "X"]) (Impl Prod)
(Conjoin Godel (Term "q" [Var "X", Var "Y"])
(Term "r" [Var "Y"])) 0.8

> instance Show Rule where
>   show (Rule conseq (Impl kind) preds fuzz) =
>     let cert         = Var "Certainty"
>         fuzzyHead    = addArg conseq cert
>         (goals, var) = display preds
>         final        = mkTerm "and" kind [Num fuzz, var, cert]
>     in show fuzzyHead ++ " :- " ++ goals ++ ", " ++ show final
Representing and showing fuzzy facts turn out to be a rather underwhelming spectacle:
> data Fact = Fact Term Float
> instance Show Fact where
>   show (Fact term fuzz) = show (addArg term (Num fuzz))

e.g. Fact (Term "r" [Var "_"]) 0.7
Fact (Term "s" [Atom "b"]) 0.9
And a fuzzy statement is either a fuzzy rule or a fuzzy fact:
> data Statement = R Rule | F Fact
> instance Show Statement where
>   show (R rule) = show rule ++ "."
>   show (F fact) = show fact ++ "."
Yes, I realize the following implementation of snoc ("consing" to the end of a list) is horribly inefficient, but since all the argument lists seem to be very small, I'm willing to pay the O(n²) cost. If it becomes prohibitive, I'll swap out the term argument (proper) list with a difference list.
> snoc :: [a] → a → [a]
> list `snoc` elt = reverse (elt : reverse list)

> addArg :: Term → Arg → Term
> addArg (Term t args) arg = Term t (args `snoc` arg)
Haskell Scanner/Parser

The types defined above provide strong guidance for the development of the parser. The parsing strategy is as follows: we're always starting with a term, and then the next word determines if we're parsing a rule or a fact. A rule has the implication operators; a fact, the 'with' closure.

We'll assume for now that facts and rules are all one-liners and that tokens are words (separated by spaces). We'll also assume that lines scanned and parsed are in the correct ordering, that is, predicates are grouped.
> module FuzzyParser where

> import Control.Monad
> import Control.Arrow
> import Control.Applicative
> import Data.Maybe
> import FuzzyTypes
Scans a file of fuzzy information and the parses that info into an internal representation, the output of which is the underlying Prolog representation. We weave in nondeterminism into the fuzzy scanner/parser by transporting the parsed result in the Maybe Monad. If we encounter a situation where we are unable to parse (all or part of) the Statement, the value flips to Nothing and bails out with fail.
> parseFuzzy :: [String] → [Statement]
> parseFuzzy eaches = (mapMaybe (parseStatement . words) eaches)

> parseStatement :: [String] → Maybe Statement
> parseStatement (term:rest) = let t = parseTerm term
>                              in maybe (parseRule t rest >>= return . R)
>                                       (return . F . Fact t)
>                                       (parseFuzziness rest)
The Term is a fundamental part of the fuzzy system, and is where we spend the most time scanning/parsing and hand-holding (as it has a rather huge helper function: parseArgs).
> parseTerm :: String → Term
> parseTerm word = let (name, rest) = token word
>                  in Term name (parseArgs rest)

> parseArgs :: String → [Arg]
> parseArgs arglist = parseArgs' arglist
>   where parseArgs' []         = []
>         parseArgs' args@(_:_) = let (anArg, rest) = token args
>                                 in parseArg anArg : parseArgs rest
For parseArg we try to convert the argument to (in sequence) a number, a variable, a quoted string and then finally an atom. The first one that succeeds is the winner. We do this by using some Control.Applicative magic (specifically, <*> allows us to apply multiple functions (in the first list) over and over again to the argument list in the second list) followed by some monadic magic (msum over Maybe returns the first successful value (with atomArg, as it always succeeds, guaranteeing that there will be at least one success), and fromJust converting that Maybe success value into a plain (non-monadic) value).
> parseArg :: String → Arg
> parseArg arg = fromJust (msum ([numArg, varArg, strArg, atomArg] <*> [arg]))
For the following functions recall how my "implied-by" operator (|-) works: in a |- b, a is returned, given b (is True). Given that, the below functions attempt to convert the scanned argument into a parsed (typed) one: a number, a (logic) variable, a string, or an atom:
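(The definition of (|-) is not shown in this post; from that description it is presumably something along these lines, living in Control.Monad.Utils:)

(|-) :: a → Bool → Maybe a
x |- True  = Just x
_ |- False = Nothing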
Here's how we try to convert an argument ...

First we try to see if it's a number

> numArg :: String → Maybe Arg
> numArg x = Num (read x) |- all (flip elem ('.' : ['0' .. '9'])) x

Next, is it a (n anonymous) variable?

> varArg :: String → Maybe Arg
> varArg x@(h:_) = Var x |- (h == '_' || h `elem` ['A' .. 'Z'])

Maybe it's a string?

> strArg :: String → Maybe Arg
> strArg x@(h:t) = Str (chop t) |- (h == '"')

Okay, then, it must be an atom then

> atomArg :: String → Maybe Arg
> atomArg = return . Atom

... and chop we shamelessly steal from the Perl folks.

> chop :: String → String
> chop list = chop' [head list] (tail list)
>   where chop' ans rest@(h:t) | t == []   = reverse ans
>                              | otherwise = chop' (h:ans) t
Now that we've laid the ground work, let's parse in the statements. A statement is a fact or a rule. Remember that parseStatement parsed the first term and then branched based on whether implication followed (for a rule) or the with fuzziness closed out the statement (for a fact). So, we'll tackle parsing in a fact first; since a fact is just a term, and it's already been parsed, pretty much all we need to do now is to reify the term into the fact type:
> parseFact :: Term → [String] → Maybe Fact
> parseFact term fuzzes = return $ Fact term (read $ chop (head fuzzes))
That was easy! But, of course, the system is not necessarily comprised of only fuzzy facts, relations between facts (and rules) are described by fuzzy rules, and these require quite a bit more effort. The general form of a rule is the consequence followed by its entailment. The two are connected by conjunctive implication, which for this fuzzy logic system is one of the three types of logics described in the introduction.
> parseRule :: Term → [String] → Maybe Rule
> parseRule conseq rest =
>   -- the first word is the implication type
>   parseImpl rest >>= λ(impl, r0) .
>   -- then we have a term ...
>   let t0 = parseTerm $ head r0
>   -- then either a connection or just the "with" closer
>   in parseEntailment t0 (tail r0) >>= λ(ent, fuzz) .
>      return (Rule conseq impl ent fuzz)
Parsing the implication is easy: we simply lift the kind of the fuzzy logic used for the implication into the Implication data type:
> parseImpl :: [String] → Maybe (Implication, [String])
> parseImpl (im:rest) = guard (head im == '<') >>
>                       liftKind (tail im) >>= λkind .
>                       return (Impl kind, rest)
Parsing entailment also turns out to be a simple task (recall my description of how maybe works): we parse in a term, and then we attempt to parse in a fuzzy value. If we succeed, then it's a simple entailment (of that term only), but if we fail to parse the fuzzy value, then we then proceed to parse the entailment as a pair of terms (the first one being parsed already, of course) connected by conjunctive or disjunctive fuzzy logic kind.
> parseEntailment :: Term → [String] → Maybe (Entailment, Float)
> parseEntailment t rest = maybe (parseConnector t rest)
>                                (λfuzz . return (Goal t, fuzz))
>                                (parseFuzziness rest)
The parser for compound entailment is also a straightforward monadic parser: it lifts the connector into its appropriate Kind, parses the connected Term and then grabs the fuzzy value to complete the conjunctive or disjunctive Entailment.
> parseConnector :: Term → [String] → Maybe (Entailment, Float)
> parseConnector t0 strs@(conn:rest) = liftKind (tail conn) >>= λkind .
>   parseFuzziness (tail rest) >>= λfuzz .
>   mkConnection (head conn) kind t0 (parseTerm (head rest)) >>= λent .
>   return (ent, fuzz)
Finally, parseFuzziness reads in the fuzzy value from the stream as a floating-point number, given that it is preceded by "with" (as dictated by the grammar):
> parseFuzziness :: [String] → Maybe Float
> parseFuzziness trail = read (chop (cadr trail)) |- (head trail == "with")
The rest of system are low-level scanning routines and helper functions:
> cadr :: [a] → a
> cadr = head . tail

> splitters :: String
> splitters = "(), "

> token :: String → (String, String)
> token = consumeAfter splitters

> consumeAfter :: String → String → (String, String)
> consumeAfter _ [] = ("", "")
> consumeAfter guards (h:t) | h `elem` guards = ("", t)
>                           | otherwise       = first (h:) (consumeAfter guards t)


Running the system

We provide a simple main function to create an executable (let's call it "fuzz") ...
> module Main where

> import FuzzyParser

> main :: IO ()
> main = do file ← getContents
>           putStrLn ":- [prelude].\n"
>           mapM_ (putStrLn . show) (parseFuzzy (lines file))
... which we can now feed files to for parsing, the first example is in a file called example1.flp:
p(X) <prod q(X,Y) &godel r(Y) with 0.8.
q(a,Y) <prod s(Y) with 0.7.
q(b,Y) <luka r(Y) with 0.8.
r(_) with 0.6.
s(b) with 0.9.
We run the system in the shell...
geophf$ ./fuzz < example1.flp > example1.pl
... obtaining the resulting logic program:
:- [prelude].

p(X, Certainty) :- q(X, Y, _TV1), r(Y, _TV2), and_godel(_TV1, _TV2, _TV3), and_prod(0.8, _TV3, Certainty).
q(a, Y, Certainty) :- s(Y, _TV1), and_prod(0.7, _TV1, Certainty).
q(b, Y, Certainty) :- r(Y, _TV1), and_luka(0.8, _TV1, Certainty).
r(_, 0.6).
s(b, 0.9).
... which can be loaded into any Prolog listener, such as Jinni or SWI:
geophf$ prolog

?- [example1].
yes

?- p(X, Certainty).
X = a, Certainty = 0.48 ;
X = b, Certainty = 0.32 ;
no
Similarly, a different fuzzy system, described in the file example2.flp:
p(X) <prod q(X) with 0.9.
p(X) <godel r(X) with 0.8.
q(X) <luka r(X) with 0.7.
r(a) with 0.6.
r(b) with 0.5.
... results in the following Prolog file (saved as example2.pl):
:- [prelude].

p(X, Certainty) :- q(X, _TV1), and_prod(0.9, _TV1, Certainty).
p(X, Certainty) :- r(X, _TV1), and_godel(0.8, _TV1, Certainty).
q(X, Certainty) :- r(X, _TV1), and_luka(0.7, _TV1, Certainty).
r(a, 0.6).
r(b, 0.5).
... and gives the following run:
geophf$ prolog

?- [example2].
yes

?- p(X, Certainty).
X = a, Certainty = 0.27 ;
X = b, Certainty = 0.18 ;
X = a, Certainty = 0.6 ;
X = b, Certainty = 0.5 ;
no


Conclusion

We've presented and explained a Fuzzy unification scanner/parser in Haskell and demonstrated that system producing executable Prolog code against which queries may be essayed. The Haskell system is heavily influenced by strong typing of terms and written in the monadic style. It is comprised of three modules, totalling less than 250 lines of code. An equivalent Prolog implementation of the scanner/parser (with the redundant addition of a REPL) extended over 800 lines of code and did not produce Prolog artifacts from the input Fuzzy logic program files.