Monday, September 29, 2008

Animal as RDR, part III

Examples: Building, running and modifying RDR systems

The previous entries showed the implementation of the model of a simple Ripple-Down Rules (RDR) system. This entry will show how to implement the rules for such a system from scratch as well as how to run and then to modify such a system. Again, we are using the computer game Animal as the basis of these examples.

Let's start off by implementing the RDR system modelled in the first entry on this topic. But first, we need a couple of improvements. The addRule I had originally implemented was no model of ease of use, as it was ...
] addRule :: BinDir
]         → RuleTree a b c k v
]         → Environment k v b c
]         → Condition a (Knowledge k v)
]         → Conclusion b c (Knowledge k v)
]         → RuleTreeEnv a b c k v
] addRule dir (Zip hist branch) env cond concl
]   = let rule    = Branch (Rule cond concl) Leaf Leaf
]         newbr   = fromJust $ mutate dir rule branch
]         newtree = return $ settle (Zip hist newbr)
]     in RuleEnv newtree env
... so I changed it so that it fit more neatly into building rules in sequence:
> addRule :: BinDir → Rule a b c (Knowledge k v)
>         → RuleTree a b c k v → RuleTree a b c k v
> addRule dir rule (Zip hist branch)
>   = let ruleB = Branch rule Leaf Leaf
>     in Zip hist (fromJust $ mutate dir ruleB branch)
This new implementation has now replaced the previous one in the implementation entry. Also, constructing Rules themselves was a bit labour-intensive, so I've added the following function to simplify building simple rules:
> type SimpleRule = Rule String String String
>                        (Knowledge String String)

> mkRule :: String → String → SimpleRule
> mkRule key ans = Rule (present key) (assume ans)
Also, recall that:
(>>|) :: Monad m ⇒ m a → (a → b) → m b
This function simply reorders the arguments of liftM, so why have it? I find it useful in the flow of building monadic systems, as demonstrated below.
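Since (>>|) does a fair amount of work in what follows, here is a minimal, self-contained sketch of it (the operator and example are re-statements for illustration, not the module's actual source):

```haskell
import Control.Monad (liftM)

-- liftM with its arguments flipped: the data flows left to right.
(>>|) :: Monad m => m a -> (a -> b) -> m b
m >>| f = liftM f m

-- The pipeline reads in the order the computation happens:
example :: Maybe Int
example = Just 3 >>| (+ 1) >>| (* 2)   -- Just 8
```

Written with liftM, the same computation reads inside-out: `liftM (* 2) (liftM (+ 1) (Just 3))`.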

Building

And with that, let us build our Animal guessing game knowledge base:
> animalTree :: Zipper BinDir (BinaryTree SimpleRule)
>            → Zipper BinDir (BinaryTree SimpleRule)
> animalTree tree = fromJust
>     (return tree                                >>|
>      addRule L (mkRule "has four legs" "pony")  >>=
>      advance L                                  >>|
>      addRule L (mkRule "barks" "dog")           >>|
>      addRule R (mkRule "swims" "fish")          >>=
>      advance L                                  >>|
>      addRule R (mkRule "purrs" "cat")           >>=
>      withdraw                                   >>=
>      advance R                                  >>|
>      addRule R (mkRule "spins web" "spider")    >>|
>      reset)
The function reset is from the Data.Zipper module:
> reset :: (Mutable c dir c, Transitive c dir)
>       ⇒ Zipper dir c → Zipper dir c
> reset z@(Zip [] _) = z
> reset (Zip ((dir, h):t) elt) = reset (Zip t $ fromJust $ mutate dir elt h)
Looking at animalTree above, I say with unmasked pride that I feel (>>|) shows its hidden strength: I could not imagine puzzling out the proper way to write the above definition using liftM and having it follow the natural flow that it does with its current implementation. Also note that it is vital that reset be called after a set of changes to a knowledge base, both to return the focus (obviously) to the top-level (default) rule and to push the local edits back up into the tree containing that knowledge.
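The class-based reset above depends on Data.Mutable and Data.Transitive; as a self-contained illustration of what it does, here is a toy version over a concrete binary tree (note the toy mutate drops the Maybe of the real class):

```haskell
-- A toy, self-contained version of the zipper machinery, with a
-- concrete binary tree standing in for the Mutable/Transitive classes.
data BinDir = L | R deriving (Eq, Show)
data Tree a = Leaf | Branch a (Tree a) (Tree a) deriving (Eq, Show)

-- History records, for each step down, the direction taken and the
-- parent node we descended from.
data Zipper a = Zip [(BinDir, Tree a)] (Tree a) deriving (Eq, Show)

-- Reinstall a (possibly modified) subtree into its parent.
mutate :: BinDir -> Tree a -> Tree a -> Tree a
mutate L sub (Branch x _ r) = Branch x sub r
mutate R sub (Branch x l _) = Branch x l sub
mutate _ sub Leaf           = sub

advance :: BinDir -> Zipper a -> Maybe (Zipper a)
advance dir (Zip hist t@(Branch _ l r)) =
  Just (Zip ((dir, t) : hist) (if dir == L then l else r))
advance _   (Zip _ Leaf) = Nothing

-- reset walks back to the root, pushing each focused subtree into its
-- parent on the way up, so local edits become part of the whole tree.
reset :: Zipper a -> Zipper a
reset z@(Zip [] _)             = z
reset (Zip ((dir, h) : t) elt) = reset (Zip t (mutate dir elt h))
```

An edit made at the focus is invisible from the root until reset zips the history back up.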

Running

Now that we have our animalTree, we need one more function to extract the result (follow the Conclusion) of runRule:
> runConcl :: RuleTreeEnv a b c k v → c
> runConcl (RuleEnv _ (Env ks (Concl _ f))) = f ks
Now, we could set up an interactive question-answer session to tease the animal we are guessing from our hidden thoughts, but, since interactive I/O is a sin in functional languages (see the fall from grace in Lazy K), let's "pretend" our way through an interactive session, recording the results of the questions into the Environment:
> rtests :: IO ()
> rtests = let RuleEnv tree env = initKB "default" (assume "none")
>              newTree = animalTree tree
>              spider  = updateEnv "spins web" "true" env
>              chat    = updateEnv "has four legs" "true" $
>                        updateEnv "purrs" "true" env
>              spy     = runConcl (answer $ RuleEnv newTree spider)
>              cat     = runConcl (answer $ RuleEnv newTree chat)
>          in do print newTree
>                print spy
>                print cat
As expected, spy is "spider" (in answer to the question "Does it spin a web?"), and cat is "cat" (in answer to the questions "Does it have four legs?" followed by "Does it purr?").

Modifying

All is well and good with the world, yes? Certainly, when we receive the expected answers from our knowledge base, but let's explore the world a bit beyond what we've captured. Not everything that swims is a fish:
> fishey = let RuleEnv tree env = initKB "default" (assume "none")
>              newTree = animalTree tree
>              duck    = updateEnv "swims" "true"
>                        $ updateEnv "flies" "true" env
>              noDuck  = runConcl (answer $ RuleEnv newTree duck)
>          in print noDuck
We find that noDuck is a "fish". Perhaps it's a "flying fish", but it definitely wasn't the animal we were guessing, so we need to update our knowledge base to give us the desired answer. Fortunately, the system returns the Rule that rendered the Conclusion, so modifying the system proceeds directly:
> duckey = let RuleEnv tree env = initKB "default" (assume "none")
>              newTree  = animalTree tree
>              duck     = updateEnv "swims" "true"
>                         $ updateEnv "flies" "true" env
>              re@(RuleEnv noDuckTree _) = answer $ RuleEnv newTree duck
>              noDuck   = runConcl re
>              duckTree = addRule L (mkRule "flies" "duck") noDuckTree
>              ducky    = runConcl (answer $ RuleEnv duckTree duck)
>          in print (noDuck, ducky)
With the modification in place, that is, the addition of the new EXCEPT Rule, we find that the animal that swims and flies is, indeed, a "duck", as expected. That's Just ducky!

Knowledge in context

Of course, there is the flying fish conundrum, so a better ordering would be to have the Conclusion of that Rule actually be "flying fish" and its EXCEPT clause (with the Condition being something like "webbed feet" or "feathers") rendering the "duck" Conclusion. While we're on the topic of structuring knowledge, not everything that purrs is a cat. The knowledge base could have had a very different structure if the Condition of the first Rule was "purrs". Trekkers know the answer to that one: "tribble", obviously! The follow-on EXCEPT clause (with the Condition of "four legs") would then clarify to the feline nature.

This demonstrates knowledge in context: in one context, the context of "having four legs", the attribute of purring leads to "cat", but in another context (the blank context, though that context could be elaborated with some Rules that put us in the context of the Star Trek, um, multiverse?), the very same attribute leads to "tribble". Under this new context, "four legs" leads back to our "chat chapeau" (that is Viennese) [I am really running rampant with my `pataphorisms; I do apologize and will work to check myself, but the topic of επιστήμη λόγος does rather lend itself to such openings [which I have relentlessly pursued ... again!]]

Furthermore, the quiddity of "four legs" is, itself, context-based. In one context it leads to every little girl's dream (a "pony") and, following (EXCEPTing) that, several other species; in another context, it leads to non-tribble purring creatures. This is a rather fundamental restructuring of our presumptions from the first article on this topic. I don't have a simple function that restructures knowledge assumptions in fundamental ways, and I don't see the benefit of having one, so let's simply rewrite our knowledge base from scratch with our gained experience:
> startrek tree = fromJust
>     (return tree >>|
>      addRule L (mkRule "purrs" "tribble") >>=
>      advance L >>|
>      addRule L (mkRule "has four legs" "cat") >>|
>      addTree R (firstRule (animalTree tree)) >>|
>      reset)
>   where addTree dir (Zip _ branch) (Zip hist tree)
>           = Zip hist $ fromJust $ mutate dir branch tree
>         firstRule = fromJust . advance L
Not as painful as I thought! There are a couple of points to note, however:
  1. The path to discovering a "cat" is duplicated. This is fine, however: real knowledge is messy and contains redundancies, and this redundancy doesn't impact the (speed) efficiency of this knowledge base in any way; and,
  2. We are back to missing our "duck". I leave re-adding it as an exercise for you.

Summary

This concludes the series of articles on the explanation, implementation and demonstration of a simple Ripple-Down Rules (RDR) system. In these articles we showed that such systems are easy to implement in Haskell and then to use. Knowledge management, in and of itself, is a rather deep and tricky topic (we have hinted at such trickiness in our "Trouble with Tribbles"), but RDR, using the concept of knowledge in context, provides a method that models this knowledge more directly and allows manipulation of assumptions without adding too much difficulty to the task of knowledge engineering.

Friday, September 19, 2008

Animal as RDR, part II

Implementation study

In the last entry, we introduced what RDR (ripple-down rules) are and reviewed the types that comprise an example system. This entry will show how those types are used to implement such a system.
> module RDRFinding where
This module scans the RDR tree in context to give BOTH the best-fitting conclusion AND the final Branch that led to the ultimate conclusion (in the form of a zipper so that the branch may be replaced in place using standard operations on the zipper).
> import RDRTypes
> import Data.Transitive
> import Data.Zipper
> import Data.BinaryTree
> import Data.Map (Map)
> import qualified Data.Map as Map
> import Control.Monad.State
> import Data.Maybe
You have already encountered the above imported modules, but the next two modules need an introduction. The first
> import Control.Monad.Utils
contains my weird and wonderful syntax when I'm using monads for parsing or logic tasks. The parsing syntax you've seen before (see the critique), but I do add one new syntactic construct:
(>>|) :: Monad m ⇒ m a → (a → b) → m b
because I'm always doing "m >>= return . f", and liftM seems to feel oddly backwards when I'm visualizing data flow. The next
> import Data.Mutable
provides a generic operation for changing a data structure:
class Mutable t dir val | t → dir, t → val where
  mutate :: dir → val → t → Maybe t
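The module itself isn't shown here, so as an illustration, here is my guess at what a Mutable instance for a binary tree might look like (the class is re-stated verbatim; the instance is an assumption, not Data.Mutable's actual source):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- The class as described in the text.
class Mutable t dir val | t -> dir, t -> val where
  mutate :: dir -> val -> t -> Maybe t

data BinDir = L | R deriving (Eq, Show)
data Tree a = Leaf | Branch a (Tree a) (Tree a) deriving (Eq, Show)

-- A hypothetical instance: replace one leg of a Branch; a Leaf has
-- no legs to replace, hence the Nothing.
instance Mutable (Tree a) BinDir (Tree a) where
  mutate _ _   Leaf           = Nothing
  mutate L sub (Branch x _ r) = Just (Branch x sub r)
  mutate R sub (Branch x l _) = Just (Branch x l sub)
```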
So, what's the game? We have an Environment (a set of attributed values) combined with a RuleTree into the State Monad. What we do is guide the values in the environment through the rule tree (where a successful Condition chooses the EXCEPT branch and displaces the currently saved Conclusion with the one associated with this Rule, and conversely if the Condition fails, the ELSE branch is selected, without displacing the currently saved Conclusion). When we reach a Leaf, we return our current position in the tree (the current state of the Zipper) along with the last valid Conclusion. All this is done by runRule:
> runRule :: RuleFinding a b c k v
> runRule = get >>= λ (RuleEnv root env) . runRule' root env

> runRule' :: RuleTree a b c k v → Environment k v b c
>          → RuleFinding a b c k v
> runRule' tree env@(Env ks curr)
>   = branch tree >>: λ (cond, conc) .
>     let (dir, concl) = liftZdir (testCond cond env conc)
>     in advance dir tree >>: λ path .
>        put (RuleEnv path (Env ks concl)) >> runRule
>   where x >>: f = tryS curr x f
Whew! This is a mouthful in the number of functions it introduces, but conceptually, runRule is rather straightforward. Let's break it down.

The function runRule, itself, merely destructures the RuleTreeEnv term, passing that information to runRule', so let's move right on to that worker function. First, let's examine the funny syntactic construct, (>>:) — what is this monadic operator doing? We see from its definition that it calls tryS:
> tryS :: a → Maybe b → (b → State c a) → State c a
> tryS x may f = maybe (return x) f may
So, tryS lifts the State Monad into semideterminism (using the Maybe Monad). As an aside, perhaps, then, runRule' could be rewritten as a StateT over the Maybe Monad ... perhaps an intrepid reader will gain a ⊥-trophy for an implementation and explanation?
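As a further aside, the lifting tryS performs is not State-specific; the same shape works in any monad. A hypothetical generalization, tryM, makes the semideterminism easy to see in isolation:

```haskell
-- tryM: return a default when the Maybe fails, otherwise feed the
-- value to a monadic continuation (my generalization of tryS).
tryM :: Monad m => a -> Maybe b -> (b -> m a) -> m a
tryM x may f = maybe (return x) f may
```

Specializing m to State c recovers tryS exactly.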

Using that monadic operator, (>>:), we get the current branch in focus (bailing if the focus is on a Leaf) ...
> branch :: RuleTree a b c k v
>        → Maybe (Condition a (Knowledge k v),
>                 Conclusion b c (Knowledge k v))
> branch (Zip _ (Branch (Rule cond conc) _ _)) = Just (cond, conc)
> branch (Zip _ Leaf) = Nothing
... then we test the condition at that Branch ...
> testCond :: Condition a (Knowledge k v)
>          → Environment k v ca cb
>          → Conclusion ca cb (Knowledge k v)
>          → Either (Environment k v ca cb)
>                   (Environment k v ca cb)
> testCond (Cond _ test) env@(Env kb _) conc1
>   | test kb   = Left $ Env kb conc1
>   | otherwise = Right env

> liftZdir :: Either (Environment k v ca cb)
>                    (Environment k v ca cb)
>          → (BinDir, Conclusion ca cb (Knowledge k v))
> liftZdir test = either (λ (Env _ c) . (L, c))
>                        (λ (Env _ c) . (R, c))
>                        test
I do this little pas de deux between testCond and liftZdir because somehow it just feels right to use the Either type here. Perhaps, sometime later Arrows will come into play. At any rate, liftZdir . testCond can be considered one function that returns the appropriate leg of the branch to continue finding the best viable Conclusion, as well as the best current Conclusion reached from applying the Environment to the Condition.

Given that information, we now advance down that path, updating the state, and continue to test recursively, until we reach a Leaf, at which point we have our answer (the ultimate viable Conclusion).
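That recursive descent can be sketched in isolation: the toy below replaces Conditions with bare key tests against a Map and carries the last conclusion whose condition fired (an illustration of the traversal only, not the system's actual code):

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

-- A stripped-down rule tree: condition key, conclusion, EXCEPT, ELSE.
data Tree = Leaf
          | Branch String String Tree Tree

-- Walk the tree, displacing the saved conclusion on success (EXCEPT)
-- and keeping it on failure (ELSE), until a Leaf is reached.
walk :: Map String String -> String -> Tree -> String
walk _  best Leaf = best
walk kb best (Branch key concl yes no)
  | Map.member key kb = walk kb concl yes
  | otherwise         = walk kb best  no
```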

If we're happy with that answer, we call runRule with a new transaction (in other words, a fresh Environment), and the Zipper pointing back at the top of the RuleTree. If we're not happy, then we're given the ability to add a new Rule to the RuleTree. We do this with addRule:
> addRule :: BinDir → Rule a b c (Knowledge k v)
>         → RuleTree a b c k v → RuleTree a b c k v
> addRule dir rule (Zip hist branch)
>   = let ruleB = Branch rule Leaf Leaf
>     in Zip hist (fromJust $ mutate dir ruleB branch)
The above functions are the meat of the implementation for this simple RDR system. There are a few conveniences that the following functions provide. The first one is answer, which scans the rule tree, making the best conclusion, and then backs up one step to give the user access to the branch, in case the rule finding didn't render exactly the desired result.
> answer :: RuleTreeEnv a b c k v → RuleTreeEnv a b c k v
> answer rule = let RuleEnv z ans = execState runRule rule
>               in RuleEnv (fromJust $ withdraw z) ans
The next three functions help to automate the creation of the rule parts, Conditions and Conclusions. The function mkCond creates a test function with the assumption that the knowledge store contains a (k,v) pair. It does the lookup in the knowledge store and passes the extracted values to the test function (which, as with any good predicate, returns either True or False). If we can't find the key, I guess, for now, we'll assume the returned value is False:
> mkCond :: Ord k ⇒ k → (v → Bool) → Condition k (Knowledge k v)
> mkCond key fn = Cond key $ λ ks . maybe False fn (Map.lookup key ks)

> present :: Ord k ⇒ k → Condition k (Knowledge k v)
> present = flip mkCond (const True)

> assume :: k → Conclusion k k env
> assume key = Concl key (const key)
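A quick sanity check of these helpers against a tiny knowledge store (the declarations are re-stated, plus a hypothetical holds accessor, so the snippet compiles alone):

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

data Condition a env = Cond a (env -> Bool)

-- Look the key up in the knowledge store; a missing key means False.
mkCond :: Ord k => k -> (v -> Bool) -> Condition k (Map k v)
mkCond key fn = Cond key (\ks -> maybe False fn (Map.lookup key ks))

-- The simplest useful condition: the key is merely present.
present :: Ord k => k -> Condition k (Map k v)
present = flip mkCond (const True)

-- Run a condition's hidden test against an environment (illustrative).
holds :: Condition a env -> env -> Bool
holds (Cond _ f) = f
```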
This completes the implementation of this RDR system. The next entry will create a small RDR system, based on the game Animal, to demonstrate how the system works.

Thursday, September 18, 2008

Animal: an RDR implementation study, part I: types

Synopsis

Ripple-down rules provide a fast and efficient representation of knowledge in context for use, e.g., by expert systems. We present here a complete implementation of one type of RDR system in Haskell. But what analogy is sufficient to describe what an RDR system is? The literature, albeit comprehensive, seems to concentrate more on the details of making such a system work; none presents the essence. The computer guessing game Animal does a good job of this illustration, and we use it here to build an example knowledge base for this RDR system.

Motivation/Introduction

As a knowledge engineer I have worked with Subject-Matter Experts (SMEs) to build various rule-based expert systems. A common pitfall of such systems, their ὕβρις, is that they attempt to abstract decision making from any context and, as such, fail to notice the nuances or have the situational awareness needed to render useful judgments. In a knowledge-engineered rule-based/Bayesian-like hybrid system I developed, the Bayesian decisions led to over 99% of the positive findings in the transactions analyzed.

This would be the end of the story if there were no hard limits, but there always are. Bayesian-like systems tend to scorn the advice and guidance of SMEs: the data set itself is the expert, not the SMEs. Despite the success of using the data, Bayesian-like systems also tend to overreact (only 1 transaction out of 1000 it flagged actually led to a decision), so these systems need serious throttling to be successful. Resources, then, are a real-world constraint that rule-based systems model better than Bayesian systems in practice. In fact, hard constraints in general are modeled much better by rule-based systems.

But rule-based systems, popularized by, e.g., iLog JRules™ and used in many expert systems, do not speak the language of the SMEs. Having worked with SMEs across the U.S.A. over a period of years, I have found that rules invariably are defined by exception. Whenever we, as the knowledge engineers, attempt to nail down a definition with the SMEs, the conversations always proceed as follows:
SME: Yeah, a CC transaction of over $57.38 is always suspect.
Me: So, we'll flag those, then [thinking: Ha! that was an easy rule; finally!]
SME: No, no, no! Only from young males or senior citizens in the following three income brackets.
Me: Oh, okay, I'll add that to the constraint.
SME: No, but it needs to be in the following zip-codes ...
Several hours later we're still ironing out the rule, and then, as lunch break approaches, the chair either tables the rule or passes a simplified, useless, version of it.

Note how the rule set was defined above: the SMEs agreed to a general case and then continued refining that definition by adding (often conflicting) constraints. In a context-free rule-based system, modelling such a rule set is by no means impossible, but the task quickly becomes a nightmare of complexity.

RDRs (Ripple-down Rules), on the other hand, embrace the context. The syntax of an RDR system is as follows:
<rule> ::= IF condition THEN conclusion
<knowledge base> ::= ⊥
                   | <rule>
                       EXCEPT <knowledge base>
                       ELSE <knowledge base>
The semantics are as follows. If the condition is met, then that conclusion is saved as a viable result (replacing any prior conclusion) and the EXCEPT branch is followed recursively until ⊥ is reached, at which time the most recently saved conclusion is the result. On the other hand, if the condition fails, the ELSE branch is selected and followed recursively. Of course, the knowledge base must be applied to something. In this system, we have a very simple environment where the condition tests for the presence of a String in that environment.

The initial knowledge base for every RDR system starts as:
IF (const True) THEN none EXCEPT ⊥ ELSE ⊥
As the SMEs interact with the RDR system, they add knowledge to refine the conclusions, guided by refinements in the conditions. The system is very permissive: redundancy is permitted, even encouraged, because a condition at depth along one path of EXCEPTs and ELSEs has a very different meaning, in context, than the same condition along another path.

Example

Our RDR system will be the Animal guessing game with the following knowledge base:
IF (const True) THEN "not an animal"
    EXCEPT IF (present "four legs") THEN "pony"
        EXCEPT IF (present "barks") THEN "dog"
            EXCEPT ⊥
            ELSE IF (present "meows") THEN "cat"
                EXCEPT ⊥
                ELSE ⊥
        ELSE IF (present "swims") THEN "fish"
            EXCEPT ⊥
            ELSE IF (present "spins web") THEN "spider"
                EXCEPT ⊥
                ELSE ⊥
    ELSE ⊥
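Before introducing the full types, the grammar and the example base above can be transcribed into a toy Haskell type (an illustration only, with conditions as attribute names tested for presence; the system's real types follow):

```haskell
-- The <knowledge base> grammar rendered directly as a data type:
-- Nothing stands for (const True); then conclusion, EXCEPT, ELSE.
data KB = Bottom
        | Rule (Maybe String) String KB KB

-- Evaluate against the list of attributes present in the environment,
-- keeping the last conclusion whose condition was met.
conclude :: [String] -> String -> KB -> String
conclude _   best Bottom = best
conclude env best (Rule cond concl yes no)
  | maybe True (`elem` env) cond = conclude env concl yes
  | otherwise                    = conclude env best  no

-- The example knowledge base above, transcribed.
animal :: KB
animal = Rule Nothing "not an animal"
           (Rule (Just "four legs") "pony"
              (Rule (Just "barks") "dog" Bottom
                 (Rule (Just "meows") "cat" Bottom Bottom))
              (Rule (Just "swims") "fish" Bottom
                 (Rule (Just "spins web") "spider" Bottom Bottom)))
           Bottom
```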


Types
> module RDRTypes where

> import Control.Monad.State
> import Data.Map (Map)
> import qualified Data.Map as Map
I must apologize for not introducing the next three modules properly. These modules are part of my canon and will be introduced in depth elsewhere. For now, I must settle for the following descriptions:
> import Data.Transitive
defines a generic protocol of walking a data structure one step at a time, either "forward" (with advance) or backward (with withdraw).
> import Data.Zipper
The "simple Ariadne zipper" illustrated in the Haskell Wikibooks.
> import Data.BinaryTree
The only novel structure here is that the tree is shaped to conform to the structure of RDRs: the data is in the branch, not the leaves.
From Predicate Logic-based Incremental KA, Barry Drake and Ghassan Beydoun (Nov 2000), file named PRDR.pdf

2.1. Ripple Down Rules (RDR)
An RDR knowledge base is a collection of simple rules organised in a binary tree structure. Each rule has the form, "If condition then conclusion". Every rule can have two branches to other rules: a false-branch (also called the “or” branch) and a true-branch (also called the “exception” branch). An example RDR tree is shown in figure 2.1. When a rule is satisfied, the true branch is taken, otherwise a false branch is taken. The root node of an RDR tree contains the default rule whose condition is always satisfied, that is, it is of the form, “If true then default conclusion”. This default rule has only a true-branch.
The RDR IF-THEN rule contains a condition and conclusion that interact with the Environment (defined later) to inform the decision of the system.
> data Condition a env = Cond a (env → Bool)
> instance Show a ⇒ Show (Condition a env) where
>   show (Cond c _) = "IF " ++ show c

> data Conclusion a b env = Concl a (env → b)
> instance Show a ⇒ Show (Conclusion a b env) where
>   show (Concl c _) = "THEN " ++ show c
Given the above, an IF-THEN Rule is simply the conjunction of the Condition and Conclusion:
> data Rule a b c kb
>   = Rule (Condition a kb) (Conclusion b c kb)
> instance (Show a, Show b) ⇒ Show (Rule a b c kb) where
>   show (Rule a b) = show a ++ " " ++ show b
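These Show instances recover the IF/THEN surface syntax of the grammar; a quick check (the declarations are re-stated so the snippet stands alone):

```haskell
data Condition a env = Cond a (env -> Bool)
instance Show a => Show (Condition a env) where
  show (Cond c _) = "IF " ++ show c

data Conclusion a b env = Concl a (env -> b)
instance Show a => Show (Conclusion a b env) where
  show (Concl c _) = "THEN " ++ show c

data Rule a b c kb = Rule (Condition a kb) (Conclusion b c kb)
instance (Show a, Show b) => Show (Rule a b c kb) where
  show (Rule a b) = show a ++ " " ++ show b

-- A sample rule; its show output is: IF "barks" THEN "dog"
barksRule :: Rule String String String ()
barksRule = Rule (Cond "barks" (const True)) (Concl "dog" (const "dog"))
```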
The Environment is composed of a dictionary (keys to values) and the current most valid conclusion under consideration. In our example (Animal), we merely test for the existence of a key, but more complex systems usually treat the keys as attributed values and perform more than simple existence-check tests.
> type Knowledge k v = Map k v
> data Environment k v a b
>   = Env (Knowledge k v) (Conclusion a b (Knowledge k v))
> instance (Show k, Show v, Show a)
>       ⇒ Show (Environment k v a b) where
>   show (Env kv conc) = "{" ++ show kv ++ ": "
>                        ++ show conc ++ "}"
The above elements comprise the simple types for the RDR system; what is left are the elements that form the structure. This system is in the shape of a binary tree, so, of course, we use that data structure. As we append new rule branches to leaves of the tree, we use the Zipper data type to allow us to add these nodes in place.
> type RuleBranch a b c k v
>   = BinaryTree (Rule a b c (Knowledge k v))
> type RuleTree a b c k v
>   = Zipper BinDir (RuleBranch a b c k v)

> data RuleTreeEnv a b c k v = RuleEnv (RuleTree a b c k v)
>                                      (Environment k v b c)
> instance (Show a, Show b, Show k, Show v)
>       ⇒ Show (RuleTreeEnv a b c k v) where
>   show (RuleEnv tree env) = "| " ++ show tree ++ " : "
>                             ++ show env ++ " |"
The RDR system is built around the concept of context, and the State Monad captures that concept well. The final type is used to shuttle around the knowledge base as well as the currently viable conclusion based on the rule finding.
> type RuleFinding a b c k v
>   = State (RuleTreeEnv a b c k v)
>           (Conclusion b c (Knowledge k v))
The above types describe the RDR system. In the next entry, we will show the implementation of the system when it comes to building and adding rules as well as traversing the rule tree to reach a conclusion.

Wednesday, September 10, 2008

What is declarative programming?

The concept has been bandied about, and it has entered more popular discussion with the broad acceptance of XML. Besides the overall definition, however ("declarative is the 'what' of programming; imperative, the 'how'"), I haven't heard a definition that sketches even what declarative programming is and what it looks like.

For the "quartet of programming styles", being: imperative, object-oriented, functional, and logical, it seems pretty clear that there are well-defined general boundaries (with enough wiggle room to cause fanatics to enjoy flame-wars as the mood struck them) to separate one style from another, with languages easily falling into one or more of those camps:
  • C: imperative
  • Smalltalk/Java: imperative/object-oriented
  • Lisp (and Scheme and Dylan and ...)/Haskell/ML: functional
  • Prolog (Mercury): logical
This was all clear-cut and well and good.

But for classifying something as "declarative programming", there has been talk of its benefits and drawbacks, but not much more than superficial talk of what it is. Camps from both the functional programming community and the logic programming community stake claims over the declarativeness of their programming languages, but how does one recognize code as declarative? What is the measure by which the "declarativeness" of such code may be ascertained?

Up until recently, I have been troubled by such misgivings only marginally. I had it from authority, a Lisp giant, Patrick Winston, in a Prolog book (Bratko's 3rd ed. of "Prolog Programming for Artificial Intelligence"), that the logical style of Prolog is declarative and the functional style is not. Before you send your flame, here's the quote:
"[...] In my view, modern Lisp is the champion of these [imperative] languages, for Lisp in its Common Lisp form is enormously expressive, but how to do something is still what the Lisp programmer is allowed to be expressive about. Prolog, on the other hand, is a language that clearly breaks away from the how-type languages, encouraging the programmer to describe situations and problems, not the detailed means by which the problems are to be solved.

Consequently, an introduction to Prolog is important for all students of Computer Science, for there is no better way to see what the notion of what-type programming is all about. [...]"
I add here that I also view the bulk of Haskell in this light: although it is possible to code declaratively in Haskell, most Haskell code I see is concerned with solving the problem (the "how") instead of describing the problem (the "what"). Put another way, it is natural to use the functional and imperative (with monadic do) styles, and it takes effort to use the logic style.

That has been my prejudice until recently, but recent correspondence with colleagues, including David F. Place, who recently had an excellent article in the Monad.Reader about Monoid, has opened this issue for reexamination. So, I turn to you, gentle reader. I present two very different programs below. One written in the logic style; one, functional. Both solve the same problem, and both authors claim their own version is definitively declarative. I view the world through a particular lens, so I see one perspective. But I am burning with curiosity: do you see A) or B) as declarative, or both, or neither? If so, how do you justify your position?

A) the "logical" program approach:
import Control.Monad.State
import Data.List

splits :: (Eq a) ⇒ [a] → [(a, [a])]
splits list = list >>= λx . return (x, delete x list)

choose :: Eq a ⇒ StateT [a] [] a
choose = StateT $ λs . splits s

sendmory' :: StateT [Int] [] [Int]
sendmory' = do
    let m = 1
    let o = 0
    s ← choose
    guard (s > 7)
    e ← choose
    d ← choose
    y ← choose
    n ← choose
    r ← choose
    guard (num [s, e, n, d] + num [m, o, r, e]
           ≡ num [m, o, n, e, y])
    return [s, e, n, d, m, o, r, y]
B) the functional program approach (provided by David F. Place):
solve input accept return
    = solve' [] input [0..9] accept return

solve' bindings [] _ accept return
    | accept bindings = [return bindings]
    | otherwise       = []
solve' bindings ((_, g) : xs) digits accept return
    = concatMap f $ g digits
    where f n = solve' (bindings ++ [n]) xs
                       (delete n digits)
                       accept return

num = foldl ((+) . (*10)) 0

sendMoreMoney =
    solve (('m', const [1]) :
           ('o', const [0]) :
           ('s', filter (> 7)) :
           (zip "edynr" (repeat id)))
          (λ [m,o,s,e,d,y,n,r] . num [s,e,n,d]
                                 + num [m,o,r,e]
                                 ≡ num [m,o,n,e,y])
          (λ [m,o,s,e,d,y,n,r] . [s,e,n,d,m,o,r,y])

Tuesday, September 2, 2008

Fuzzy unification parser in Haskell

Synopsis

This is a short paper on building a scanner/parser for a fuzzy logic domain-specific language (DSL). The system takes as input a file containing an ordered set of fuzzy statements and outputs the equivalent Prolog program. We first briefly and informally introduce the topic of fuzzy unification. Next we provide a Backus-Naur Form (BNF) grammar of the fuzzy DSL. Then we provide fuzzy example statements and show their transformation into Prolog statements. Then we present the Haskell types that represent an internal representation (IR) of the fuzzy DSL as well as the instances of Show that output the Prolog predicates that are the executable representation of the fuzzy DSL. Then we present the scanner/parser of the fuzzy DSL. Finally, we translate two input fuzzy files and execute queries against the result in a Prolog listener.

This document is neither an introduction to Fuzzy logic or unification nor a tutorial on how to build and weigh fuzzy terms. The reader is referred to the rich library of online and offline publications on these topics.

Introduction

The standard execution of unification in Prolog for ground atoms is that two atoms must be of the same type and then of the same value in order to unify. This rigor is very good for proof of program correctness and where there is no room for tolerances; in short, for classic predicate logic proofs, unification does what we need it to do. However, standard unification hinders more than helps in the presence of real-world, messy, data or where some generality is needed in, e.g., the decision-making process of an expert system.

One approach that provides some tolerance and generality in the face of messy data is to introduce fuzziness into the unification process. In this way, we may state facts with some degree of associated certainty. We may also embed fuzzy techniques in the rule-finding process. Three such techniques in fuzzy rule-finding include:
  1. Product logic, where and_prod (x, y) = x * y
  2. Gödel intuitionistic logic, where and_godel (x, y) = min x y
  3. Lukasiewicz logic, where and_luka (x, y) = max 0 (x + y - 1)
These techniques are conjunctive and are implemented in the Prolog file named prelude.pl as follows:
and_prod(X,Y,Z) :- Z is X * Y.
and_godel(X,Y,Z) :- min(X, Y, Z).
and_luka(X,Y,Z) :- H is X+Y-1, max(0, H, Z).
The fuzzy DSL also allows disjunctions of the above. Their implementation can also be found in prelude.pl:
or_prod(X,Y,Z) :- Z is X + Y - (X * Y).
or_godel(X,Y,Z) :- max(X, Y, Z).
or_luka(X,Y,Z) :- H is X+Y, min(1, H, Z).
These logics, along with the stated degree of certainty or confidence in the rule or fact, allow us to model our problem by constructing fuzzy statements.
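For experimentation outside a Prolog listener, the same norms can be written as plain Haskell functions (a sketch mirroring prelude.pl; note that the Łukasiewicz disjunction saturates at 1 via min, just as the conjunction floors at 0 via max):

```haskell
-- The three conjunctive norms and their disjunctive duals.
andProd, andGodel, andLuka, orProd, orGodel, orLuka
  :: Double -> Double -> Double
andProd  x y = x * y              -- product logic
andGodel x y = min x y            -- Gödel intuitionistic logic
andLuka  x y = max 0 (x + y - 1)  -- Łukasiewicz logic
orProd   x y = x + y - x * y
orGodel  x y = max x y
orLuka   x y = min 1 (x + y)
```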

Grammar

A <program> in the fuzzy DSL this scanner/parser supports is as follows:
<program> ::= <statement>+
<statement> ::= (<rule> | <fact>) <ss> "with" <ss> <float> ".\n"
<float> ::= Float

<fact> ::= <term>
<rule> ::= <term> <ss> <implication> <ss> <entailment>

<term> ::= <name> "(" <arguments> ")" | <name>
<name> ::= String1

<arguments> ::= <argument> <opt-args>
<opt-args> ::= "," <arguments> | ε

<argument> ::= <atom> | <variable> | <float> | <string>
<string> ::= "\"" String "\""
<variable> ::= String2
<atom> ::= <name>

<implication> ::= "<" <kind>
<kind> ::= "prod" | "luka" | "godel"

<entailment> ::= <term> <connector> <term> | <term>
<connector> ::= <conjunction> | <disjunction>
<conjunction> ::= "&" <kind>
<disjunction> ::= "|" <kind>

<ss> ::= " " <opt-ss>
<opt-ss> ::= <ss> | ε

1 no spaces, first character lowercase alpha, rest underscores and alphanums
2 no spaces, first character is "_" or upcase alpha

Transformation

An example of a statement of fact in the fuzzy DSL is as follows:
r(a) with 0.8.
An example of a rule statement is:
p(X) <prod q(X) &godel r(X) with 0.7.
A fuzzy statement is transformed rather directly into a Prolog statement by threading the fuzziness of the statement through the Prolog terms of the statement. This explanation is rather vague, but the examples demonstrate the mechanics of the transformation well enough. The fuzzy statement of fact is transformed into the following Prolog statement:
r(a, 0.8).
The fuzzy rule statement requires quite a bit more threading, and the system uses a chaining of logic variables to effect this:
p(X, Certainty) :-
q(X, _TV1), r(X, _TV2), and_godel(_TV1, _TV2, _TV3),
and_prod(0.7, _TV3, Certainty).


Strategy

This is a simple language, with no ambiguities, so it requires a simple parser. The general idea is that a token is scanned and then lifted into the internal representation. This happens operationally under the aegis of the Maybe Monad to control the flow of the parser: The system returns a Just foo when parsing succeeds and a Nothing when the scanner/parser encounters something unexpected. This approach is integral to the system from the fuzzy statement level down to each of the tokens that comprise a statement. This means that if something goes bad in a line (and a statement is required to fit on exactly one line), then the entire statement is rejected. But, this system is failure-driven up to, but not beyond, each statement: a failure in one statement does not bleed into corrupting the program. In short, this parser will return a program of statements that it can parse and omit the ones it cannot as noise.
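That noise-dropping behaviour falls out of mapMaybe almost for free. A stripped-down sketch, with a toy line parser (readMaybe on integers) standing in for the real statement parser:

```haskell
import Data.Maybe (mapMaybe)
import Text.Read (readMaybe)

-- A toy stand-in for parseStatement: each "statement" is just a number.
-- A bad line yields Nothing; the statement as a whole is rejected.
parseLine :: String -> Maybe Int
parseLine = readMaybe

-- mapMaybe keeps the successes and silently drops the noise,
-- so a failure in one line does not bleed into the rest of the program.
parseProgram :: [String] -> [Int]
parseProgram = mapMaybe parseLine
```

Here parseProgram ["1", "oops", "3"] yields [1, 3]: the bad line is contained, not contagious.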

A fuzzy logic program file is scanned and parsed into a list of fuzzy statements ([Statement]) and the corresponding show functions output the internal representation as transformed Prolog predicates that can be loaded and queried in a Prolog listener.

Haskell Types

The Haskell types that form the internal representation of a fuzzy program follow the BNF rather closely (recall the technique of parsing via lifting functions; this module uses that technique):
> module FuzzyTypes where

> import Control.Arrow

> data Term = Term String [Arg]

A term requires no transformation from fuzzy DSL to Prolog:

> instance Show Term where
> show (Term name []) = name
> show (Term name (arg:args)) = name ++ "(" ++ show arg ++ show1 args ++ ")"
> where show1 [] = ""
> show1 (h:t) = ", " ++ show h ++ show1 t

> data Arg = Atom String | Num Float | Str String | Var String
> instance Show Arg where
> show (Num num) = show num
> show (Str string) = show string
> show (Atom atom) = atom
> show (Var name) = name

> data Kind = Prod | Luka | Godel
> instance Show Kind where
> show Prod = "prod"
> show Luka = "luka"
> show Godel = "godel"
The following lifting function converts an input string from the scanner into the corresponding connective-kind value.
> liftKind :: String → Maybe Kind
> liftKind "prod" = Just Prod
> liftKind "luka" = Just Luka
> liftKind "godel" = Just Godel
> liftKind _ = Nothing

> data Implication = Impl Kind
We don't have a Show instance for Implication because we need to weave in the thread of fuzziness from the consequence and entailment. So, we do the showing from the Rule perspective.
> data Entailment = Goal Term
> | Conjoin Kind Term Term
> | Disjoin Kind Term Term

> display :: Entailment → (String, Arg)
> display (Goal term) = (show . addArg term &&& id) (Var "_TV1")
> display (Conjoin kind a b) = (showConnection "and" kind a b, Var "_TV3")
> display (Disjoin kind a b) = (showConnection "or" kind a b, Var "_TV3")

> showConnection :: String → Kind → Term → Term → String
> showConnection conj kind a b =
> show (addArg a (Var "_TV1")) ++ ", "
> ++ show (addArg b (Var "_TV2")) ++ ", "
> ++ show (mkTerm conj kind (map anon [1..3]))

> mkConnection :: Char → Kind → Term → Term → Maybe Entailment
> mkConnection conn kind t0 t1 | conn == '|' = Just $ Disjoin kind t0 t1
> | conn == '&' = Just $ Conjoin kind t0 t1
> | otherwise = Nothing

> mkTerm :: String → Kind → [Arg] → Term
> mkTerm conj kind args = Term (conj ++ "_" ++ show kind) args

> anon :: Int → Arg
> anon x = Var ("_TV" ++ show x)
We've finally built up enough infrastructure to represent a fuzzy rule:
> data Rule = Rule Term Implication Entailment Float

e.g.: Rule (Term "p" [Var "X"]) (Impl Prod)
(Conjoin Godel (Term "q" [Var "X", Var "Y"])
(Term "r" [Var "Y"])) 0.8

> instance Show Rule where
> show (Rule conseq (Impl kind) preds fuzz) =
> let cert = Var "Certainty"
> fuzzyHead = addArg conseq cert
> (goals, var) = display preds
> final = mkTerm "and" kind [Num fuzz, var, cert]
> in show fuzzyHead ++ " :- " ++ goals ++ ", " ++ show final
Representing and showing fuzzy facts turn out to be a rather underwhelming spectacle:
> data Fact = Fact Term Float
> instance Show Fact where
> show (Fact term fuzz) = show (addArg term (Num fuzz))

e.g. Fact (Term "r" [Var "_"]) 0.7
Fact (Term "s" [Atom "b"]) 0.9
And a fuzzy statement is either a fuzzy rule or a fuzzy fact:
> data Statement = R Rule | F Fact
> instance Show Statement where
> show (R rule) = show rule ++ "."
> show (F fact) = show fact ++ "."
Yes, I realize the following implementation of snoc ("consing" to the end of a list) is horribly inefficient, but since all the argument lists seem to be very small, I'm willing to pay the O(n²) cost. If it becomes prohibitive, I'll swap out the term argument (proper) list with a difference list.
> snoc :: [a] → a → [a]
> list `snoc` elt = reverse (elt : reverse list)

> addArg :: Term → Arg → Term
> addArg (Term t args) arg = Term t (args `snoc` arg)
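For reference, the difference-list swap alluded to above would look something like the following sketch (DiffList here is a hypothetical local type, not the library one):

```haskell
-- A difference list represents a list as a function awaiting its tail,
-- which makes "consing" to the end an O(1) operation.
newtype DiffList a = DiffList ([a] -> [a])

fromList :: [a] -> DiffList a
fromList xs = DiffList (xs ++)

toList :: DiffList a -> [a]
toList (DiffList f) = f []

-- snoc without the double reverse: just compose on the right.
snocD :: DiffList a -> a -> DiffList a
snocD (DiffList f) x = DiffList (f . (x:))
```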
Haskell Scanner/Parser

The types defined above provide strong guidance for the development of the parser. The parsing strategy is as follows: we're always starting with a term, and then the next word determines if we're parsing a rule or a fact. A rule has the implication operators; a fact, the 'with' closure.

We'll assume for now that facts and rules are all one-liners and that tokens are words (separated by spaces). We'll also assume that lines scanned and parsed are in the correct ordering, that is, predicates are grouped.
> module FuzzyParser where

> import Control.Monad
> import Control.Arrow
> import Control.Applicative
> import Data.Maybe
> import FuzzyTypes
Scans a file of fuzzy information and then parses that info into an internal representation, the output of which is the underlying Prolog representation. We weave nondeterminism into the fuzzy scanner/parser by transporting the parsed result in the Maybe Monad. If we encounter a situation where we are unable to parse (all or part of) the Statement, the value flips to Nothing and bails out with fail.
> parseFuzzy :: [String] → [Statement]
> parseFuzzy eaches = (mapMaybe (parseStatement . words) eaches)

> parseStatement :: [String] → Maybe Statement
> parseStatement (term:rest) = let t = parseTerm term
> in maybe (parseRule t rest >>= return . R)
> (return . F . Fact t)
> (parseFuzziness rest)
The Term is a fundamental part of the fuzzy system, and is where we spend the most time scanning/parsing and hand-holding (as it has a rather huge helper function: parseArgs).
> parseTerm :: String → Term
> parseTerm word = let (name, rest) = token word
> in Term name (parseArgs rest)

> parseArgs :: String → [Arg]
> parseArgs arglist = parseArgs' arglist
> where parseArgs' [] = []
> parseArgs' args@(_:_) = let (anArg, rest) = token args
> in parseArg anArg : parseArgs rest
For parseArg we try to convert the argument to (in sequence) a number, a variable, a quoted string and then finally an atom; the first one that succeeds is the winner. We do this by using some Control.Applicative magic (specifically, <*> applies each function in the first list to the argument in the second list) followed by some monadic magic: msum over Maybe returns the first successful value (atomArg, as it always succeeds, guarantees that there will be at least one success), and fromJust converts that Maybe success value into a plain (non-monadic) value.
> parseArg :: StringArg
> parseArg arg = fromJust (msum ([numArg, varArg, strArg, atomArg] <*> [arg]))
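The applicative-then-monadic combination may be easier to see in isolation with toy converters (asInt and asAtom below are stand-ins of my own, not part of the parser):

```haskell
import Control.Monad (msum)
import Data.Maybe (fromJust)
import Text.Read (readMaybe)

-- A fallible converter: succeeds only on integer-looking strings.
asInt :: String -> Maybe String
asInt s = fmap (\n -> "int:" ++ show (n :: Int)) (readMaybe s)

-- The catch-all converter: always succeeds, guaranteeing msum a value.
asAtom :: String -> Maybe String
asAtom s = Just ("atom:" ++ s)

-- [f, g] <*> [x] applies each converter to x in turn;
-- msum picks the first Just; fromJust unwraps it.
classify :: String -> String
classify arg = fromJust (msum ([asInt, asAtom] <*> [arg]))
```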
For the following functions recall how my "implied-by" operator (|-) works: in a |- b, a is returned, given b (is True). Given that, the below functions attempt to convert the scanned argument into a parsed (typed) one: a number, a (logic) variable, a string, or an atom:
Here's how we try to convert an argument ...

First we try to see if it's a number

> numArg :: String → Maybe Arg
> numArg x = Num (read x) |- all (flip elem ('.' : ['0' .. '9'])) x

Next, is it a (n anonymous) variable?

> varArg :: String → Maybe Arg
> varArg x@(h:_) = Var x |- (h == '_' || h `elem` ['A' .. 'Z'])

Maybe it's a string?

> strArg :: String → Maybe Arg
> strArg x@(h:t) = Str (chop t) |- (h == '"')

Okay, then, it must be an atom then

> atomArg :: String → Maybe Arg
> atomArg = return . Atom

... and chop we shamelessly steal from the Perl folks.

> chop :: String → String
> chop list = chop' [head list] (tail list)
> where chop' ans rest@(h:t) | t == [] = reverse ans
> | otherwise = chop' (h:ans) t
Now that we've laid the groundwork, let's parse in the statements. A statement is a fact or a rule. Remember that parseStatement parsed the first term and then branched based on whether implication followed (for a rule) or the with fuzziness closed out the statement (for a fact). So, we'll tackle parsing in a fact first; since a fact is just a term, and it's already been parsed, pretty much all we need to do now is to reify the term into the fact type:
> parseFact :: Term → [String] → Maybe Fact
> parseFact term fuzzes = return $ Fact term (read $ chop (head fuzzes))
That was easy! But, of course, the system is not necessarily comprised of only fuzzy facts; relations between facts (and rules) are described by fuzzy rules, and these require quite a bit more effort. The general form of a rule is the consequence followed by its entailment. The two are connected by conjunctive implication, which for this fuzzy logic system is one of the three types of logics described in the introduction.
> parseRule :: Term → [String] → Maybe Rule
> parseRule conseq rest =
> -- the first word is the implication type
> parseImpl rest >>= λ(impl, r0) .
> -- then we have a term ...
> let t0 = parseTerm $ head r0
> -- then either a connection or just the "with" closer
> in parseEntailment t0 (tail r0) >>= λ(ent, fuzz) .
> return (Rule conseq impl ent fuzz)
Parsing the implication is easy: we simply lift the kind of the fuzzy logic used for the implication into the Implication data type:
> parseImpl :: [String] → Maybe (Implication, [String])
> parseImpl (im:rest) = guard (head im == '<') >>
> liftKind (tail im) >>= λkind .
> return (Impl kind, rest)
Parsing entailment also turns out to be a simple task (recall my description of how maybe works): we parse in a term, and then we attempt to parse in a fuzzy value. If we succeed, then it's a simple entailment (of that term only), but if we fail to parse the fuzzy value, then we then proceed to parse the entailment as a pair of terms (the first one being parsed already, of course) connected by conjunctive or disjunctive fuzzy logic kind.
> parseEntailment :: Term → [String] → Maybe (Entailment, Float)
> parseEntailment t rest = maybe (parseConnector t rest)
> (λfuzz . return (Goal t, fuzz))
> (parseFuzziness rest)
The parser for compound entailment is also a straightforward monadic parser: it lifts the connector into its appropriate Kind, parses the connected Term and then grabs the fuzzy value to complete the conjunctive or disjunctive Entailment.
> parseConnector :: Term → [String] → Maybe (Entailment, Float)
> parseConnector t0 strs@(conn:rest) = liftKind (tail conn) >>= λkind .
> parseFuzziness (tail rest) >>= λfuzz .
> mkConnection (head conn) kind t0 (parseTerm (head rest)) >>= λent .
> return (ent, fuzz)
Finally, parseFuzziness reads in the fuzzy value from the stream as a floating-point number, given that it is preceded by "with" (as dictated by the grammar):
> parseFuzziness :: [String] → Maybe Float
> parseFuzziness trail = read (chop (cadr trail)) |- (head trail == "with")
The rest of the system is low-level scanning routines and helper functions:
> cadr :: [a] → a
> cadr = head . tail

> splitters :: String
> splitters = "(), "

> token :: String → (String, String)
> token = consumeAfter splitters

> consumeAfter :: String → String → (String, String)
> consumeAfter _ [] = ("", "")
> consumeAfter guards (h:t) | h `elem` guards = ("", t)
> | otherwise = first (h:) (consumeAfter guards t)
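A quick check of how the tokenizer behaves (the two helpers restated in ASCII so the example runs standalone):

```haskell
import Control.Arrow (first)

splitters :: String
splitters = "(), "

-- Consume characters up to (and including) the first guard character.
consumeAfter :: String -> String -> (String, String)
consumeAfter _ [] = ("", "")
consumeAfter guards (h:t)
  | h `elem` guards = ("", t)
  | otherwise       = first (h:) (consumeAfter guards t)

token :: String -> (String, String)
token = consumeAfter splitters
```

So token "q(X,Y)" gives ("q", "X,Y)"), and tokenizing the remainder peels off "X" next.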


Running the system

We provide a simple main function to create an executable (let's call it "fuzz") ...
> module Main where

> import FuzzyParser

> main :: IO ()
> main = do file ← getContents
> putStrLn ":- [prelude].\n"
> mapM_ (putStrLn . show) (parseFuzzy (lines file))
... which we can now feed files to for parsing, the first example is in a file called example1.flp:
p(X) <prod q(X,Y) &godel r(Y) with 0.8.
q(a,Y) <prod s(Y) with 0.7.
q(b,Y) <luka r(Y) with 0.8.
r(_) with 0.6.
s(b) with 0.9.
We run the system in the shell...
geophf$ ./fuzz < example1.flp > example1.pl
... obtaining the resulting logic program:
:- [prelude].

p(X, Certainty) :- q(X, Y, _TV1), r(Y, _TV2), and_godel(_TV1, _TV2, _TV3), and_prod(0.8, _TV3, Certainty).
q(a, Y, Certainty) :- s(Y, _TV1), and_prod(0.7, _TV1, Certainty).
q(b, Y, Certainty) :- r(Y, _TV1), and_luka(0.8, _TV1, Certainty).
r(_, 0.6).
s(b, 0.9).
... which can be loaded into any Prolog listener, such as Jinni or SWI:
geophf$ prolog

?- [example1].
yes

?- p(X, Certainty).
X = a, Certainty = 0.48 ;
X = b, Certainty = 0.32 ;
no
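Those certainties check out by hand. For X = a: s(b) contributes 0.9, so q(a,b) is and_prod(0.7, 0.9) = 0.63; r(b) is 0.6; and_godel(0.63, 0.6) = 0.6; and the rule weight gives and_prod(0.8, 0.6) = 0.48. The same arithmetic, sketched in Haskell (connective names mirror the prelude's):

```haskell
-- The three conjunctive connectives, as in prelude.pl.
andProd, andGodel, andLuka :: Double -> Double -> Double
andProd  x y = x * y
andGodel x y = min x y
andLuka  x y = max 0 (x + y - 1)

-- Certainty of p(a): q(a,b) via s(b) = 0.9, then r(b) = 0.6, rule weight 0.8.
certaintyPA :: Double
certaintyPA = andProd 0.8 (andGodel (andProd 0.7 0.9) 0.6)

-- Certainty of p(b): q(b,_) via r = 0.6 under Lukasiewicz, rule weight 0.8.
certaintyPB :: Double
certaintyPB = andProd 0.8 (andGodel (andLuka 0.8 0.6) 0.6)
```

Both agree with the listener's answers, 0.48 and 0.32, up to floating-point rounding.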
Similarly, a different fuzzy system, described in the file example2.flp:
p(X) <prod q(X) with 0.9.
p(X) <godel r(X) with 0.8.
q(X) <luka r(X) with 0.7.
r(a) with 0.6.
r(b) with 0.5.
... results in the following Prolog file (saved as example2.pl):
:- [prelude].

p(X, Certainty) :- q(X, _TV1), and_prod(0.9, _TV1, Certainty).
p(X, Certainty) :- r(X, _TV1), and_godel(0.8, _TV1, Certainty).
q(X, Certainty) :- r(X, _TV1), and_luka(0.7, _TV1, Certainty).
r(a, 0.6).
r(b, 0.5).
... and gives the following run:
geophf$ prolog

?- [example2].
yes

?- p(X, Certainty).
X = a, Certainty = 0.27 ;
X = b, Certainty = 0.18 ;
X = a, Certainty = 0.6 ;
X = b, Certainty = 0.5 ;
no


Conclusion

We've presented and explained a Fuzzy unification scanner/parser in Haskell and demonstrated that system producing executable Prolog code against which queries may be essayed. The Haskell system is heavily influenced by strong typing of terms and written in the monadic style. It is comprised of three modules, totalling less than 250 lines of code. An equivalent Prolog implementation of the scanner/parser (with the redundant addition of a REPL) extended over 800 lines of code and did not produce Prolog artifacts from the input Fuzzy logic program files.

Monday, September 1, 2008

A stream of primes as Comonad

I figured I've beaten enough around this bush to encircle it with a path half-a-meter in depth. So, I've finally decided to bear down enough to study comonad types and to create my own instance.

The above is a caveat: this is an introductory guide from a beginner's perspective. YMMV. But, perhaps, as the other material out there on comonads is rather scarce to begin with, and rather intimidating where present, this entry may spark the interest of the reader wishing to start off on comonads and doing some basic things with them.

The comonad definition I use is from Edward Kmett's Comonad.Reader category-extras library. What a sweet library it is; ⊥-trophy is in the ether-mail.

Introduction

Let's talk a bit about what a comonad is, first. A comonad is a special way of looking at a data structure that allows one to work with the individual members of that data structure in the context of the entire data structure. It takes the opposite approach to the data in the data structure from the one monads take, so the monadic bind (>>=) ...
>>= :: Monad m ⇒ m a → (a → m b) → m b
... has its dual in the comonad (commonly called cobind, but sensibly named extend (=>>) in Control.Comonad). Its signature is as follows:
=>> :: Comonad w ⇒ w a → (w a → b) → w b
I started gaining insight into comonads by studying the extender function — it takes the comonad in its entirety and resolves that to a value in context. That value is then used to reassemble the comonad into its new form. In short, the extender synthesizes the comonad.

That's extend. Now, just as monads have the unit function (return) that creates a monad from a plain value ...
return :: Monad m ⇒ a → m a
... so, too, comonad has the dual of that function, called counit (or, again, more sensibly called extract for the library I'm using) ...
extract :: (Copointed f, Functor f) ⇒ f a → a
... which, instead of injecting a value into the comonad, it extracts a value from the comonad.

That's comonad, in a nutshell: the dual of monad.

Synthesis

Okay, then, let's jump right in to creating and using a Comonad instance. We'll start with the list data type:
> import Control.Comonad
> import Control.Arrow
> import List

> instance Copointed [] where
> extract = head

> instance Comonad [] where
> extend fn [] = []
> extend fn list@(h:t) = fn list : (t =>> fn)
Just like for monads where m >>= return is the (right) identity, we can show that for comonads that w =>> extract is an identity:
[1..10] =>> extract
What's the answer that you obtain? Why?

Now that we can use the whole list to create each element of the new list, thanks to the Comonad protocol, let's solve the example problem from Why Attribute Grammars Matter, which is we must replace each element with the difference from the average of the list. With comonads, that's easy to express!
> avg list = sum list / genericLength list
> diff list = list =>> (extract &&& avg >>> uncurry (-))
What does diff [1..10] give you, and why?

Now, the comonad did not eliminate the problem of multiple list traversals that Wouter points out in his article (please do read it!), but comonads do show a nice, simple, natural way to synthesize the whole to build each part. Beautiful!

A stream ...

Streams can be considered infinite lists, and are of the form:
a :< b :< ...
Uustalu and Vene, of course, discuss the Stream Comonad, but I obtained my implementation from a site with a collection of Comonad types, modifying to work with Kmett's protocol:
> module Control.Comonad.Stream where

> import Control.Comonad

> data Stream a = a :< Stream a

> instance Show a ⇒ Show (Stream a) where
> show stream = show' (replicate 5 undefined) stream
> where show' [] _ = "..."
> show' (_:t) (x :< xs)
> = show x ++ " :< " ++ show' t xs

> instance Functor Stream where
> fmap f (x :< xs) = f x :< fmap f xs

> instance Copointed Stream where
> extract (v :< _) = v

> instance Comonad Stream where
> extend f stream@(x :< xs) = f stream :< extend f xs

> produce :: (a → a) → a → Stream a
> produce f seed = let x = f seed in x :< produce f x

Um, it is the user's responsibility
to guard these functions below ...

> toList :: Stream a → [a]
> toList (x :< xs) = x : toList xs

> mapS :: (a → b) → Stream a → Stream b
> mapS = fmap
As for lists, it is quite easy to make streams an instance of the Comonad class. So, a stream of 1s is
let ones = 1 :< (ones =>> (extract >>> id))
ones ≡ 1 :< 1 :< 1 :< 1 :< 1 :< ...
The natural numbers are
let nats = 0 :< (nats =>> (extract >>> succ))
nats ≡ 0 :< 1 :< 2 :< 3 :< 4 :< ...
And a stream of primes is ... um, yeah.

... of primes

The ease with which we generated the stream of 1s and natural numbers would lead one to believe that generating primes would follow the same pattern. The difference here is that a prime number is a number not divisible evenly by any other prime number. So, with the schema for streams as above, one would need to know all the prime numbers to find this prime number. A problem perfectly suited for comonads, so it would seem, but, as the current element of the stream depends on the instantiation of the entire breadth of the stream, we run into a bit of a time problem waiting for our system to calculate the entire stream of primes in order to compute the current prime. Hm.

One needs to put some thought into how to go about computing a stream of primes. Uustalu and Vene created the concepts of Delay and Anticipation along with a History. All that is outside the scope of this article. Instead, let's consider Stream in a different light: why not embed the "history" (the primes we know) into the Stream itself. And what is the history? Is it not a list?
> primeHist :: Stream [Integer]
> primeHist = [3,2] :< primeHist =>> get'
With that understanding, our outstanding function to find the current prime (get') reduces to the standard Haskell hackery:
> get' :: Stream [Integer] → [Integer]
> get' stream = let primus = extract stream
> candidate = head primus + 2
> in getsome' candidate primus
> where getsome' :: Integer → [Integer] → [Integer]
> getsome' candidate primus
> = if all (λp . candidate `rem` p ≠ 0) primus
> then candidate : primus
> else getsome' (2 + candidate) primus
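A quick sanity check of the next-prime step, with the Stream wrapper peeled off (this restated helper works on the bare, descending history list; the name nextPrimes is mine):

```haskell
-- Extend a descending history of known odd primes by the next prime:
-- try successive odd candidates until one divides by no known prime.
nextPrimes :: [Integer] -> [Integer]
nextPrimes primus = go (head primus + 2)
  where go candidate
          | all (\p -> candidate `rem` p /= 0) primus = candidate : primus
          | otherwise                                 = go (candidate + 2)
```

Starting from the seed history [3,2], successive applications yield [5,3,2], then [7,5,3,2], then [11,7,5,3,2], just as primeHist does stream-wise.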
So, now we have a stream of histories of primes, to convert that into a stream of primes is a simple step:
> primes = 2 :< (3 :< fmap head primeHist)
And there you have it, a comonadic stream of primes!

Critique

Uustalu and Vene's implementation of primes suffers from its layers of complexity, but my implementation suffers at least as much for its relative simplicity. Each element of the stream contains the history of all the primes up to that prime. A much more efficient approach, both in space and time, would be to use the State monad or comonad to encapsulate and to grow the history as the state ... or, for that matter, just use a list comprehension.

And, of course, there's the "Genuine Sieve of Eratosthenes" [O'Neill, circa 2006] that this article blithely ignored.

Friday, August 29, 2008

Earning ⊥-Trophies

I recall a quote stating that a person learns Haskell by reinventing the standard library. I'm still learning Haskell. Marvel Comics would give out a "No Prize" to each intrepid reader who noted discrepancies of plot or character. In that spirit, and in the spirit of functional/logical programming of my blog, I'll happily award ⊥-trophies to readers who point out improvements to my definitions that elicits a facepalm reaction (so, yes, this is a highly subjective award), and add the reader's id as well as the function I've unintentionally redefined (or the equivalent) to a "Hall of Fame" gadget adorning this blog (if you prefer to abstain from the recognition, that's fine, too).

As this ⊥-trophy set is part of the νούςσφαίρα, there are no limitations: one person can receive several of these things, with my gratitude to the earner for giving me a piece of knowledge on how to be a better programmer.

The ⊥-trophy-winners so far (this will surely be a growing list) are, most recent first:
  • meteficha: takeWhile
  • dmwit: join
  • matti: ap
  • ryani: v
  • david: u
  • edward kmett: Monoid
  • Daniel Lyons: Arrow
Mentor ⊥-trophy-winners:
  • sigfpe
  • Dirk Thierbach
  • Wouter Swierstra
Honorary mentions (nice, but I'm not going to use them for philosophical or stylistic reasons):
  • meteficha: liftM2
  • ryani: unsafeCoerce
The ⊥-trophies up for grabs:
  • comonadic scanner-parser that simplifies the StateT implementation
  • shows for DList
  • A good working (useful) example from the scan-family of functions
...so, yes, I've been thinking about scanning/parsing recently, but, of course, any amazing little thing is up for grabs to earn oneself a ⊥-trophy.

And, now, back to our regularly scheduled programming.

Scanner-parsers II: State Monad Transformers

Okay, let's implement the same Mars rover command language with pretty much the same solution, but this time, for our scanner-parser of the Position, we're going to use the StateT monad transformer over the list monad to erase the housekeeping of the input stream.
> module MarsRoverStateMonad where

> import Control.Monad.State

... other imports and data/instance definitions as before ...
Recall our Scan type was
> -- type Scan = String → (a, String)
This type looks exactly like the type for the State monad, and that documentation page reveals that parsing is very well handled by the StateT String [] a type. Let's transform our Scan type into a monad:
> type Scan a = StateT String [] a
Since, also, everything in the Position data type derives the class Read it becomes a simple matter to constrain our polymorphic type (a) to the values we wish to obtain from our parser, like so:
> next :: Read a ⇒ Scan a
> next = get >>= λ(x:xs) . put xs >> if x ≡ ' '
> then next
> else return (read [x])
This function, next, is very simple for two reasons: first, the only separator that concerns us in this command language is the space (ASCII 32), and second, the tokens it consumes are only one character each. Both of these simplicities in this language can be easily and obviously enhanced when implementing scanner-parsers for richer languages.
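To see next in action on its own, here it is restated in ASCII do-notation and run with runStateT (the list monad underneath means results come back as a singleton list):

```haskell
import Control.Monad.State

type Scan a = StateT String [] a

-- Consume one character from the state; skip spaces;
-- read anything else as a one-character token of the target type.
next :: Read a => Scan a
next = do
  (x:xs) <- get
  put xs
  if x == ' ' then next else return (read [x])
```

For instance, runStateT (next :: Scan Int) "1 2 N" yields [(1, " 2 N")]: one token consumed, the rest of the stream carried along by the monad.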

With this new scanner (next) based on the StateT monad transformer, we can now rewrite our lifting functions in the monadic style. This frees us to concentrate on the token at the current position and lets the monad handle the stream positioning.
> liftLoc = next >>= λx . next >>= λy . return (Loc x y)
> liftOrient = next
> liftPos = liftLoc >>= λloc .
> liftOrient >>= λdir . return (Pos loc dir)
Note that the abstraction of the type signatures of these functions did not change at all, but the implementations simplified significantly (that is, if you consider the monadic style a simplification over the previous implementation that uses let-bindings to sequence computations).

Now to run the system, we replace in runRovers the
let (start, _) = liftPos pos
with
let start = head $ evalStateT (liftPos) pos
and we're done!

Critique

I like the monadic threading much, much better than the previous implementation that manually threaded the position of the input stream via tupling using let-bindings, but this version here also leaves something to be desired. Note that the parsing functions (liftPos and liftLoc) both follow the format of:
grabA >>= λa . grabB >>= λb . return (Τ a b)
I don't like that I need to identify and bind the variables a and b; I don't like that one bit! In my eyes, identifying these variables does nothing to make the program more declarative. I would much rather write these functions in the following manner...
> liftLoc = next >>=> next >=>> Loc
> liftPos = liftLoc >>=> liftOrient >=>> Pos
...with the my weird monadic stream operators defined (obviously) as follows...
> (>>=>) :: Monad m ⇒ m a → m b → m (a, m b)
> mA >>=> mB = mA >>= λa . return (a, mB)

> (>=>>) :: Monad m ⇒ m (a, m b) → (a → b → c) → m c
> mTup >=>> τ = mTup >>= λ(a, mB) .
> mB >>= λb . return (τ a b)
...but then I worry that this syntax is too esoteric for its own good. Thoughts?

Yes, a person leaving a remark pointed out that I've reinvented liftM2. Obviously so, in retrospect. I'm on a roll! After a year of entries, I suppose I'll have reinvented most of Haskell's standard library.
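For the record, here is that equivalence spelled out: the streaming-operator pair and liftM2 produce the same result for any two-argument constructor (sketched over Maybe, with the pair constructor standing in for Loc):

```haskell
import Control.Monad (liftM2)

-- The "weird monadic stream operators" from above, restated in ASCII.
(>>=>) :: Monad m => m a -> m b -> m (a, m b)
mA >>=> mB = mA >>= \a -> return (a, mB)

(>=>>) :: Monad m => m (a, m b) -> (a -> b -> c) -> m c
mTup >=>> f = mTup >>= \(a, mB) -> mB >>= \b -> return (f a b)

-- Same pipeline, twice: once with the operators, once with liftM2.
viaOperators, viaLiftM2 :: Maybe (Int, Char)
viaOperators = Just 1 >>=> Just 'a' >=>> (,)
viaLiftM2    = liftM2 (,) (Just 1) (Just 'a')
```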

Actually for binary situations (that is, for types that take two arguments from the stream to construct), I think I will use these special streaming monadic operators, but when types become more complex, then more, and more complex, operators are required (each new argument for a type doubles the complexity of each new operator). This tells me (as it has already told Uustalu and Vene) that monads are not the best way to go about scanning/parsing streams to extract an internal representation of the grammar.

Thursday, August 28, 2008

Scanner-parsers I: lifting functions

I'll start right off with an apology: I would've loved this entry to be on comonadic parsing, but I'm still trying to get my head around the Uustalu and Vene papers — I keep looking in the papers for something that says the equivalent of "... and this comonadic parsing technique resolves to something very much like Prolog's definite clause grammar", but I didn't see this conclusion in their papers. Would somebody please point out the gap in my understanding?

Instead, we'll concentrate on converting a very simple command language into an internal representation using lifting functions. Daniel Lyons suggested this "Mars rover" problem.
> module MarsRover where

> import Char
> import Data.List
Problem statement: MARS ROVERS
A squad of robotic rovers are to be landed by NASA on a plateau on Mars. This plateau, which is curiously rectangular, must be navigated by the rovers so that their on-board cameras can get a complete view of the surrounding terrain to send back to Earth.

A rover's position and location is represented by a combination of x and y co-ordinates and a letter representing one of the four cardinal compass points. The plateau is divided up into a grid to simplify navigation. An example position might be 0, 0, N, which means the rover is in the bottom left corner and facing North.

In order to control a rover, NASA sends a simple string of letters. The possible letters are 'L', 'R' and 'M'. 'L' and 'R' makes the rover spin 90 degrees left or right respectively, without moving from its current spot. 'M' means move forward one grid point, and maintain the same heading.

Assume that the square directly North from (x, y) is (x, y+1).

INPUT:

The first line of input is the upper-right coordinates of the plateau, the lower-left coordinates are assumed to be 0,0.

The rest of the input is information pertaining to the rovers that have been deployed. Each rover has two lines of input. The first line gives the rover's position, and the second line is a series of instructions telling the rover how to explore the plateau.

The position is made up of two integers and a letter separated by spaces, corresponding to the x and y co-ordinates and the rover's orientation.

Each rover will be finished sequentially, which means that the second rover won't start to move until the first one has finished moving.

OUTPUT

The output for each rover should be its final co-ordinates and heading.

INPUT AND OUTPUT

Test Input:

5 5
1 2 N
LMLMLMLMM
3 3 E
MMRMMRMRRM


Expected Output:

1 3 N
5 1 E
Problem solution

Types

This problem simply decodes into a state machine and a command interpreter, where the interpreter first lifts the input characters into their correspondingly-typed values.
The state machine, a.k.a Mars Rover, is:

> data Location = Loc Integer Integer
> data Orientation = N | E | S | W
> deriving (Show, Enum, Read)
> data Position = Pos Location Orientation

The typed commands are:

> data Direction = L | R deriving Show
> data Command = Turn Direction | Move

... with some Show instances for sanity checks and output

> instance Show Command where
> show (Turn dir) = show dir
> show Move = "M"

> instance Show Location where
> show (Loc x y) = show x ++ " " ++ show y

> instance Show Position where
> show (Pos loc dir) = show loc ++ " " ++ show dir
Our scanner-parser for our very simple language grabs the next thing to be scanned from the input and returns updated input.
> type Scan a = String → (a, String)
Lifting functions

The lifting functions are rather simple affairs, as they convert what's in the input stream to their typed internal equivalents:
> liftOrient :: Scan Orientation
> liftOrient command = head $ reads command

> liftLoc :: Scan Location
> liftLoc command = let (x, locstr):_ = reads command
> (y, out):_ = reads locstr
> in (Loc x y, out)
... I'm not quite getting the intent of reads: it seems to return either empty (denoting failure) or a singleton list containing the value and the continuation (that is, to be precise, the rest of the list after the first value is read). I'm not doubting its utility, but I do wonder why the list type was chosen as the return type when it behaves as a semideterministic function (fail or singleton success) and not as a nondeterministic one (fail or multiple successes). The Maybe data type is more closely aligned with semideterministic functions; lists, with nondeterministic ones. So I believe that either the Either type or Maybe would be a more appropriate return type for reads.

At any rate, with the above two lifting functions, lifting the position of the rover (the first line in our command language) simplifies to chaining the above two lifting functions:
> liftPos :: Scan Position
> liftPos command = let (loc, rest) = liftLoc command
>                       (dir, out) = liftOrient rest
>                   in (Pos loc dir, out)
So, we have the scan and parse of the starting position of the rover done. We're half-way there! Now we simply need to scan and then parse the commands. The neat thing for us is that the commands are atomic (just one character each), so there's no need for the continuation scheme we used for scanning positions above: we simply map the command-lifting function over the line of commands (the "command line") to obtain our internal representation:
> liftCmd :: Char → Command
> liftCmd 'L' = Turn L
> liftCmd 'R' = Turn R
> liftCmd 'M' = Move
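Concretely, lifting a whole command line is one map. This is a standalone restatement of the two command types and liftCmd; the catch-all error case is my addition for the example's sake:

```haskell
-- Standalone restatement of the command types and liftCmd; the
-- catch-all error case is an addition for this example.
data Direction = L | R deriving (Show, Eq)
data Command = Turn Direction | Move deriving (Show, Eq)

liftCmd :: Char -> Command
liftCmd 'L' = Turn L
liftCmd 'R' = Turn R
liftCmd 'M' = Move
liftCmd ch  = error ("unrecognized command: " ++ [ch])

-- An entire command line lifts with a single map, no continuations:
lifted :: [Command]
lifted = map liftCmd "LMR"   -- [Turn L, Move, Turn R]
```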
Command interpreter

What does a command do? It moves the rover or reorients the rover. What is movement? A change in Position (via a change in Location). What is reorientation? A change in Orientation. So, a Command's action (which is either to move or to turn the rover) results in a Δ (pron: "Delta") of the Position:
> type Δ a = a → a

> command :: Command → Δ Position
> command (Turn dir) (Pos loc orient)
>     = Pos loc (turn dir orient)
> command Move pos = move pos

> move :: Δ Position
> move (Pos loc dir) = Pos (dLoc dir loc) dir
>     where dLoc :: Orientation → Δ Location
>           dLoc N (Loc x y) = Loc x (y+1)
>           dLoc E (Loc x y) = Loc (x+1) y
>           dLoc W (Loc x y) = Loc (x-1) y
>           dLoc S (Loc x y) = Loc x (y-1)

> turn :: Direction → Δ Orientation
> turn dir orient
>     = toEnum $ (intVal dir + fromEnum orient) `mod` 4
>     where intVal :: Direction → Int
>           intVal L = -1
>           intVal R = 1
I suppose turn could've been modelled more realistically on radians, and it would work just the same, but since the command language is so simple (turns are exactly 90°), I can get away with using the four points of the compass as (circularly) enumerated values.
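A worked check of that modular arithmetic, as a standalone restatement of turn with N..W enumerated 0..3:

```haskell
-- With N,E,S,W enumerated as 0..3, a turn is just +/-1 modulo 4.
data Orientation = N | E | S | W deriving (Show, Eq, Enum)
data Direction = L | R deriving Show

turn :: Direction -> Orientation -> Orientation
turn dir orient = toEnum ((intVal dir + fromEnum orient) `mod` 4)
  where intVal L = -1
        intVal R = 1

-- e.g. turn L N: (-1 + 0) `mod` 4 = 3, i.e. W. Haskell's mod yields
-- a non-negative result for a positive modulus, so the wrap-around
-- from N leftwards to W needs no special case.
```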

The Glue

Given all the above, the rover simply runs as follows.
Note that I ignore the grid-size command, as it appears to be superfluous to the problem definition (collision detection between rovers and out-of-the-grid bounding errors are NOT covered in the problem statement ... since they choose to ignore these issues, I will, too).
> runRovers :: String → IO ()
> runRovers commnd = let (_:commands) = lines commnd
>                    in putStrLn $ run' commands
>     where run' [] = ""
>           run' (pos:cmds:rest)
>               = let (start, _) = liftPos pos
>                     mycmds = map liftCmd cmds
>                     stop = foldl (flip command)
>                                  start mycmds
>                 in show stop ++ "\n" ++ run' rest

> runTestEx :: IO ()
> runTestEx = runRovers $ "5 5\n1 2 N\nLMLMLMLMM\n"
>                      ++ "3 3 E\nMMRMMRMRRM"
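As an end-to-end sanity check, here is a condensed, standalone restatement of the pipeline (one fold per rover) that reproduces the expected output given above; the names step and rover are mine, not the post's:

```haskell
-- Condensed restatement of the rover pipeline for a sanity check.
data Orientation = N | E | S | W deriving (Show, Eq, Enum)
data Pos = Pos Integer Integer Orientation deriving Eq

instance Show Pos where
  show (Pos x y o) = unwords [show x, show y, show o]

-- One command character moves or reorients the rover.
step :: Pos -> Char -> Pos
step (Pos x y o) 'L' = Pos x y (toEnum ((fromEnum o + 3) `mod` 4))
step (Pos x y o) 'R' = Pos x y (toEnum ((fromEnum o + 1) `mod` 4))
step (Pos x y o) 'M' = case o of
  N -> Pos x (y + 1) o
  E -> Pos (x + 1) y o
  S -> Pos x (y - 1) o
  W -> Pos (x - 1) y o
step p _ = p   -- ignore anything unrecognized

-- A rover run is a single fold over its command line.
rover :: Pos -> String -> Pos
rover = foldl step
```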
Summary

So, there you have it, folks: scanning and parsing as mapped lifting functions. Would that all DSLs were that simple! But that topic is for when I cover how I reinvent the Parsec wheel ... in the meantime, you can read the next entry, which covers scanning and parsing for this problem with the State monad transformer.

Wednesday, August 27, 2008

Ten = 1+2+3+4

Daniel Lyons, on his blog, showed us how to arith our way to 10 in Prolog using the operations of addition, subtraction, multiplication, and division and the numbers 1, 2, 3, and 4. A straightforward logic program, whose feel I was hoping to duplicate in Haskell. Here's my go at it.
> module Ten where

> import Control.Monad
> import Data.List
> import Smullyan


I suppose the standard way people go about enumerating the arithmetic operations is to encode them in the following way ...
> {-
> data BinaryOp = Plus | Minus | Div | Times deriving Eq

> op :: BinaryOp → Int → Int → Maybe Int
> op Plus b a = Just $ a + b
> op Minus b a = Just $ a - b
> op Times b a = Just $ a * b
> op Div b a = if b ≡ 0 ∨ a `rem` b ≠ 0
>              then Nothing
>              else Just (a `div` b)

> instance Show BinaryOp where
>     show Plus = "+"
>     show Minus = "-"
>     show Div = "/"
>     show Times = "x"
> -}


... but I shied away from that approach for two reasons: first and foremost, we have a disjoint type with enumerated values, but later we are going to choose from those enumerated values out of a list of all those values. It seems to me to be wastefully redundant to enumerate the values of the type in its declaration, and then be required to enumerate those values again in their use. Wastefully redundant and dangerous: the type guarantees that I will not use values outside the type, but how can I guarantee that I've used all the values of the type in the program? I mean, of course, short of full coverage using a dependently typed system.

So instead of the above commented out code, I chose the following type definition:
> data BinaryOp = Op (Int → Int → Int) String


This type has the advantage that the arithmetic operation is embedded in the type itself, as well as showing and equality:
> instance Eq BinaryOp where
>     (Op _ a) ≡ (Op _ b) = a ≡ b

> instance Show BinaryOp where
>     show (Op _ s) = s

> op :: BinaryOp → Int → Int → Maybe Int
> op (Op f s) b a | s ≡ "/" = if b ≡ 0 ∨ a `rem` b ≠ 0
>                             then Nothing
>                             else Just (a `div` b)
>                 | otherwise = Just $ f a b
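A few concrete applications show both the guarded division and the flipped argument order (b before a) that lets the operator chain read left-to-right later on. This is a standalone restatement of the type and of op, with ASCII operators in place of the post's Unicode ones:

```haskell
-- Standalone restatement of BinaryOp and op for a quick check.
data BinaryOp = Op (Int -> Int -> Int) String

op :: BinaryOp -> Int -> Int -> Maybe Int
op (Op f s) b a
  | s == "/"  = if b == 0 || a `rem` b /= 0
                then Nothing
                else Just (a `div` b)
  | otherwise = Just (f a b)

-- Note the argument order: op o b a computes a `o` b.
-- op (Op (+) "+") 3 4 == Just 7
-- op (Op div "/") 0 4 == Nothing   (division by zero)
-- op (Op div "/") 3 4 == Nothing   (4 is not divisible by 3)
-- op (Op div "/") 2 4 == Just 2
```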


I know that some of you are going to be raising your eyebrows in disbelief: I've forsaken a pure and dependable, or compile-time, way to distinguish operations with a very weak string, or runtime, representation. The former is "just the way it makes sense" for most; the latter, my "sensible" compromise, sacrificing type safety for a more declarative representation. Again, if we had dependent types, I think there would be a type-safe way to represent arithmetic operators as comparable (distinct) and showable types ... dependent types for Haskell', anyone?

Okay, now we need to write permute, because this isn't C++ or Java. *sigh* That's correct, you heard it here first: Java and C++ are way better programming languages because they have permute in their standard libraries, and Haskell does not.
> permute :: Eq a ⇒ [a] → [[a]]
> permute [] = [[]]
> permute list = [x:rest | x ∈ list,
>                          rest ∈ permute (list \\ [x])]
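A quick usage check, as a standalone restatement of permute:

```haskell
import Data.List ((\\))

-- Standalone restatement of permute: pick each element in turn,
-- then permute what remains.
permute :: Eq a => [a] -> [[a]]
permute [] = [[]]
permute list = [x : rest | x <- list, rest <- permute (list \\ [x])]

-- length (permute [1..4]) == 24, i.e. 4!
-- permute [1,2,3] begins [[1,2,3],[1,3,2],[2,1,3],...]
```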


And from that the solver simply follows: it's the permutation of the row of numbers matched with any arbitrary (ambiguous) arithmetic operation. This has the flavor of the days when I was writing code in MMSForth, push the numbers on the stack and then call the operators. Oh, JOY! (I'll stop now.)
> solver = [twiner perms [op1, op2, op3] |
>               perms@[a,b,c,d] ∈ permute [1..4],
>               op1 ∈ amb, op2 ∈ amb, op3 ∈ amb,
>               (op op1 b a >>= op op2 c >>= op op3 d)
>                   ≡ Just 10]

> amb = [Op (+) "+", Op (*) "x", Op div "/", Op (-) "-"]
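One solution the solver finds is the post's title itself, 1+2+3+4 = 10. Written with op's flipped arguments and (>>=), the chain reads left-to-right; this is a standalone restatement with ASCII operators, and the name titleEquation is mine:

```haskell
data BinaryOp = Op (Int -> Int -> Int) String

op :: BinaryOp -> Int -> Int -> Maybe Int
op (Op f s) b a
  | s == "/"  = if b == 0 || a `rem` b /= 0
                then Nothing else Just (a `div` b)
  | otherwise = Just (f a b)

plus :: BinaryOp
plus = Op (+) "+"

-- ((1 + 2) + 3) + 4, threaded through Maybe:
titleEquation :: Maybe Int
titleEquation = op plus 2 1 >>= op plus 3 >>= op plus 4   -- Just 10
```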


So, all we need is the display mechanic (twiner) to intersperse the arithmetic operators between the provided numbers. I had started down this path ...
> stringIt :: BinaryOp → Int → String → String
> stringIt op num acc = "(" ++ acc ++ show op ++ show num ++ ")"

> twiner :: [Int] → [BinaryOp] → String
> {-
> twiner args ops = doTwine args ops True ""
>     where doTwine args ops binary acc
>               | binary =
>                     let (arg1, rest) = (head args, tail args)
>                         (arg2, done) = (head rest, tail rest)
>                     in doTwine done (tail ops) False
>                            (stringIt (head ops) arg2 (show arg1))
>               | ops == [] = acc
>               | otherwise =
>                     doTwine (tail args) (tail ops) False
>                         (stringIt (head ops) (head args) acc)
> -}


... looking longingly at intersperse in the Data.List module (why, oh why, can't it take a state-function argument? Oh, because I would be the only person using it?), but, figuring that I have already reinvented takeWhile, I had better not reinvent zipWith here:
> twiner (h:r) ops = foldl t (show h) (doTwine r)
>     where doTwine args = zipWith (c stringIt) args ops
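A standalone sketch shows what twiner produces. The combinators c and t come from the post's Smullyan module; I am assuming here that they are the cardinal (flip) and the thrush (flip ($)):

```haskell
-- c and t are assumed to be the Smullyan cardinal and thrush.
c :: (a -> b -> r) -> b -> a -> r
c = flip

t :: a -> (a -> b) -> b
t = flip ($)

data BinaryOp = Op (Int -> Int -> Int) String
instance Show BinaryOp where show (Op _ s) = s

stringIt :: BinaryOp -> Int -> String -> String
stringIt o num acc = "(" ++ acc ++ show o ++ show num ++ ")"

-- Fold the partially-applied stringIt steps over the first number.
twiner :: [Int] -> [BinaryOp] -> String
twiner (h:r) ops = foldl t (show h) (zipWith (c stringIt) r ops)
twiner []    _   = ""

plus :: BinaryOp
plus = Op (+) "+"

-- twiner [1,2,3,4] [plus,plus,plus] == "(((1+2)+3)+4)"
```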


... *ahem* dependent types would guarantee that twiner is never called with an empty list of arguments.

I'd say my Haskell version follows the spirit of the Prolog version in that it uses nondeterminism (of the Maybe and list MonadPlus types) to select the true equations. Weighing in at 18 lines of type declarations and code (with 4 additional lines for the module declaration and the imports), it is also about the same size as the Prolog version.

Goodness! Typeful logic programming in Haskell, with all the functional frosting on the cake applied for free, costs no more than developing a Prolog program, and along with the type-safety, it also carries with it the flavor and the feel of logic programming in Prolog as well.