Friday, August 29, 2008

Earning ⊥-Trophies

I recall a quote stating that a person learns Haskell by reinventing the standard library. I'm still learning Haskell. Marvel Comics would give out a "No-Prize" to each intrepid reader who noted discrepancies of plot or character. In that spirit, and in the spirit of the functional/logical programming of this blog, I'll happily award ⊥-trophies to readers who point out improvements to my definitions that elicit a facepalm reaction (so, yes, this is a highly subjective award), and add the reader's id, as well as the function I've unintentionally redefined (or the equivalent), to a "Hall of Fame" gadget adorning this blog (if you prefer to abstain from the recognition, that's fine, too).

As this ⊥-trophy set is part of the νούςσφαίρα (the noosphere), there are no limitations: one person can receive several of these things, with my gratitude to the earner for giving me a piece of knowledge on how to be a better programmer.

The ⊥-trophy-winners so far (this will surely be a growing list) are, most recent first:
  • meteficha: takeWhile
  • dmwit: join
  • matti: ap
  • ryani: v
  • david: u
  • edward kmett: Monoid
  • Daniel Lyons: Arrow
Mentor ⊥-trophy-winners:
  • sigfpe
  • Dirk Thierbach
  • Wouter Swierstra
Honorary mentions (nice, but I'm not going to use them for philosophical or stylistic reasons):
  • meteficha: liftM2
  • ryani: unsafeCoerce
The ⊥-trophies up for grabs:
  • comonadic scanner-parser that simplifies the StateT implementation
  • shows for DList
  • A good working (useful) example from the scan-family of functions
...so, yes, I've been thinking about scanning/parsing recently, but, of course, any amazing little thing is up for grabs to earn oneself a ⊥-trophy.

And, now, back to our regularly scheduled programming.

Scanner-parsers II: State Monad Transformers

Okay, let's implement the same Mars rover command language with pretty much the same solution, but this time, for our scanner-parser of the Position, we're going to use the StateT monad transformer over the list monad to erase the housekeeping of the input stream.
> module MarsRoverStateMonad where

> import Control.Monad.State

... other imports and data/instance definitions as before ...
Recall our Scan type was
> -- type Scan a = String → (a, String)
This type looks exactly like the type for the State monad, and the Control.Monad.State documentation reveals that parsing is very well handled by the StateT String [] a type. Let's transform our Scan type into a monad:
> type Scan a = StateT String [] a
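(An aside, to keep the machinery in view — unrolled, this transformer stack is very nearly our original Scan type, with the list monad supplying the nondeterminism:)
StateT String [] a   ≅   String → [(a, String)]
(each scan consumes some of the input and returns zero results (failure) or several (ambiguity).)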
Since, also, everything in the Position data type derives the class Read, it becomes a simple matter to constrain our polymorphic type (a) to the values we wish to obtain from our parser, like so:
> next :: Read a ⇒ Scan a
> next = get >>= λ(x:xs) . put xs >> if x ≡ ' '
>                                       then next
>                                       else return (read [x])
This function, next, is very simple for two reasons: first, the only separator that concerns us in this command language is the space (ASCII 32), and second, the tokens it consumes are only one character each. Both of these simplicities in this language can be easily and obviously enhanced when implementing scanner-parsers for richer languages.

With this new scanner (next) based on the StateT monad transformer, we can now rewrite our lifting functions in the monadic style. This frees us to concentrate on the token at the current position and lets the monad handle the stream positioning.
> liftLoc = next >>= λx . next >>= λy . return (Loc x y)
> liftOrient = next
> liftPos = liftLoc >>= λloc .
>             liftOrient >>= λdir . return (Pos loc dir)
Note that the abstraction of the type signatures of these functions did not change at all, but the implementations simplified significantly (that is, if you consider the monadic style a simplification over the previous implementation that uses let-bindings to sequence computations).

Now to run the system, we replace in runRovers the
let (start, _) = liftPos pos
with
let start = head $ evalStateT liftPos pos
and we're done!
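As a quick spot-check at the GHCi prompt (assuming the data and Show definitions from the previous entry are in scope), scanning the first rover's position from the test input:
> head $ evalStateT liftPos "1 2 N"
1 2 N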

Critique

I like the monadic threading much, much better than the previous implementation that manually threaded the position of the input stream via tupling using let-bindings, but this version here also leaves something to be desired. Note that the parsing functions (liftPos and liftLoc) both follow the format of:
grabA >>= λa . grabB >>= λb . return (Τ a b)
I don't like that I need to identify and bind the variables a and b; I don't like that one bit! In my eyes, identifying these variables does nothing to make the program more declarative. I would much rather write these functions in the following manner...
> liftLoc = next >>=> next >=>> Loc
> liftPos = liftLoc >>=> liftOrient >=>> Pos
...with my weird monadic stream operators defined (obviously) as follows...
> (>>=>) :: Monad m ⇒ m a → m b → m (a, m b)
> mA >>=> mB = mA >>= λa . return (a, mB)

> (>=>>) :: Monad m ⇒ m (a, m b) → (a → b → c) → m c
> mTup >=>> τ = mTup >>= λ(a, mB) .
>               mB >>= λb . return (τ a b)
...but then I worry that this syntax is too esoteric for its own good. Thoughts?

Yes, a commenter pointed out that I've reinvented liftM2. Obviously so, in retrospect. I'm on a roll! After a year of entries, I suppose I'll have reinvented most of Haskell's standard library.
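For the record, then, here are the two parsers rewritten with liftM2 (a sketch; liftM2 rides along with the Control.Monad.State import):
liftLoc = liftM2 Loc next next
liftPos = liftM2 Pos liftLoc liftOrient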

Actually for binary situations (that is, for types that take two arguments from the stream to construct), I think I will use these special streaming monadic operators, but when types become more complex, then more, and more complex, operators are required (each new argument for a type doubles the complexity of each new operator). This tells me (as it has already told Uustalu and Vene) that monads are not the best way to go about scanning/parsing streams to extract an internal representation of the grammar.

Thursday, August 28, 2008

Scanner-parsers I: lifting functions

I'll start right off with an apology: I would've loved this entry to be on comonadic parsing, but I'm still trying to get my head around the Uustalu and Vene papers — I keep looking in the papers for something that says the equivalent of "... and this comonadic parsing technique resolves to something very much like Prolog's definite clause grammar", but I haven't found that conclusion in their papers. Would somebody please point out the gap in my understanding?

Instead, we'll concentrate on converting a very simple command language into an internal representation using lifting functions. Daniel Lyons suggested this "Mars rover" problem.
> module MarsRover where

> import Data.Char
> import Data.List
Problem statement: MARS ROVERS
A squad of robotic rovers are to be landed by NASA on a plateau on Mars. This plateau, which is curiously rectangular, must be navigated by the rovers so that their on-board cameras can get a complete view of the surrounding terrain to send back to Earth.

A rover's position and location is represented by a combination of x and y co-ordinates and a letter representing one of the four cardinal compass points. The plateau is divided up into a grid to simplify navigation. An example position might be 0, 0, N, which means the rover is in the bottom left corner and facing North.

In order to control a rover, NASA sends a simple string of letters. The possible letters are 'L', 'R' and 'M'. 'L' and 'R' makes the rover spin 90 degrees left or right respectively, without moving from its current spot. 'M' means move forward one grid point, and maintain the same heading.

Assume that the square directly North from (x, y) is (x, y+1).

INPUT:

The first line of input is the upper-right coordinates of the plateau, the lower-left coordinates are assumed to be 0,0.

The rest of the input is information pertaining to the rovers that have been deployed. Each rover has two lines of input. The first line gives the rover's position, and the second line is a series of instructions telling the rover how to explore the plateau.

The position is made up of two integers and a letter separated by spaces, corresponding to the x and y co-ordinates and the rover's orientation.

Each rover will be finished sequentially, which means that the second rover won't start to move until the first one has finished moving.

OUTPUT

The output for each rover should be its final co-ordinates and heading.

INPUT AND OUTPUT

Test Input:

5 5
1 2 N
LMLMLMLMM
3 3 E
MMRMMRMRRM


Expected Output:

1 3 N
5 1 E
Problem solution

Types

This problem simply decodes into a state machine and a command interpreter, where the interpreter first lifts the input characters into their correspondingly-typed values.
The state machine, a.k.a Mars Rover, is:

> data Location = Loc Integer Integer
> data Orientation = N | E | S | W
>     deriving (Show, Enum, Read)
> data Position = Pos Location Orientation

The typed commands are:

> data Direction = L | R deriving Show
> data Command = Turn Direction | Move

... with some Show instances for sanity checks and output

> instance Show Command where
>   show (Turn dir) = show dir
>   show Move = "M"

> instance Show Location where
>   show (Loc x y) = show x ++ " " ++ show y

> instance Show Position where
>   show (Pos loc dir) = show loc ++ " " ++ show dir
Our scanner-parser for our very simple language grabs the next thing to be scanned from the input and returns updated input.
> type Scan a = String → (a, String)
Lifting functions

The lifting functions are rather simple affairs, as they convert what's in the input stream to their typed internal equivalents:
> liftOrient :: Scan Orientation
> liftOrient command = head $ reads command

> liftLoc :: Scan Location
> liftLoc command = let (x, locstr):_ = reads command
>                       (y, out):_ = reads locstr
>                   in (Loc x y, out)
... I'm not quite getting the intent of reads: it seems to return either an empty list (denoting failure) or a singleton list containing the value and the continuation (that is, to be precise, the rest of the list after the first value is read). I'm not doubting its utility, but I do wonder why the list type is the return type if it behaves as a semideterministic function (fail or singleton success) and not as a nondeterministic one (fail or multiple successes). The Maybe data type is more closely aligned with semideterministic functions; lists, with nondeterministic ones. So I believe that either the Either type or Maybe would be a more appropriate return type for reads.
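To illustrate that semideterministic behaviour at the GHCi prompt (type ascriptions added for clarity):
> reads "1 2 N" :: [(Integer, String)]
[(1," 2 N")]
> reads "N 1 2" :: [(Integer, String)]
[]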

At any rate, with the above two lifting functions, lifting the position of the rover (the first line in our command language) simplifies to chaining the above two lifting functions:
> liftPos :: Scan Position
> liftPos command = let (loc, rest) = liftLoc command
>                       (dir, out) = liftOrient rest
>                   in (Pos loc dir, out)
So, we have the scan and parse of the starting position of the rover done. We're half-way there! Now we simply need to scan and then parse the commands. But the neat thing for us is the following: the commands are atomic (and just one character each). There's no need for the continuation scheme that we used for scanning positions above; we simply need to map the command lifting function over the line of commands (the "command line") to obtain our internal representation:
> liftCmd :: Char → Command
> liftCmd 'L' = Turn L
> liftCmd 'R' = Turn R
> liftCmd 'M' = Move
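So lifting a whole command line is a one-liner; at the GHCi prompt (rendered by the Show instances above):
> map liftCmd "LMR"
[L,M,R]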
Command interpreter

What does a command do? It moves the rover or reorients the rover. What is movement? A change in Position (via a change in Location). What is reorientation? A change in Orientation. So, a Command's action (which is either to move or to turn the rover) results in a Δ (pron: "Delta") of the Position:
> type Δ a = a → a

> command :: Command → Δ Position
> command (Turn dir) (Pos loc orient)
>         = Pos loc (turn dir orient)
> command Move pos = move pos

> move :: Δ Position
> move (Pos loc dir) = Pos (dLoc dir loc) dir
>   where dLoc :: Orientation → Δ Location
>         dLoc N (Loc x y) = Loc x (y+1)
>         dLoc E (Loc x y) = Loc (x+1) y
>         dLoc W (Loc x y) = Loc (x-1) y
>         dLoc S (Loc x y) = Loc x (y-1)

> turn :: Direction → Δ Orientation
> turn dir orient
>     = toEnum $ (intVal dir + fromEnum orient) `mod` 4
>   where intVal :: Direction → Int
>         intVal L = -1
>         intVal R = 1
I suppose turn could've been modeled more realistically on radians, and would work just the same, but since the command language is so simple (turns are exactly 90°), I can get away with using the four points of the compass as (circularly) enumerated values.
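A quick sanity check of that circular enumeration at the GHCi prompt:
> turn L N
W
> turn R W
N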

The Glue

Given all the above, the rover simply runs as follows.
Note that I ignore the grid-size command, as it appears to be superfluous to the problem definition (collision detection between rovers and out-of-the-grid bounding errors are NOT covered in the problem statement ... since the problem statement ignores these issues, I will, too).
> runRovers :: String → IO ()
> runRovers commnd = let (_:commands) = lines commnd
>                    in putStrLn $ run' commands
>   where run' [] = ""
>         run' (pos:cmds:rest)
>             = let (start, _) = liftPos pos
>                   mycmds = map liftCmd cmds
>                   stop = foldl (flip command) start mycmds
>               in show stop ++ "\n" ++ run' rest

> runTestEx :: IO ()
> runTestEx = runRovers ("5 5\n1 2 N\nLMLMLMLMM\n"
>                        ++ "3 3 E\nMMRMMRMRRM")
Summary

So, there you have it, folks: scanning and parsing as mapped lifting functions. Would that all DSLs were that simple! But that topic is for when I cover how I reinvent the Parsec wheel ... in the meantime, you can read the next entry, which covers scanning and parsing for this problem with the State monad transformer.

Wednesday, August 27, 2008

Ten = 1+2+3+4

Daniel Lyons, on his blog, showed us how to arith our way to 10 in Prolog using the operations of addition, subtraction, multiplication, and division and the numbers 1, 2, 3, and 4. A straightforward logic program, whose feel I was hoping to duplicate in Haskell. Here's my go at it.
> module Ten where

> import Control.Monad
> import Data.List
> import Smullyan


I suppose the standard way people going about enumerating the arithmetic operations is to encode it in the following way ...
> {-
> data BinaryOp = Plus | Minus | Div | Times deriving Eq

> op :: BinaryOp → Int → Int → Maybe Int
> op Plus b a = Just $ a + b
> op Minus b a = Just $ a - b
> op Times b a = Just $ a * b
> op Div b a = if b ≡ 0 ∨ a `rem` b ≠ 0
>                 then Nothing
>                 else Just (a `div` b)

> instance Show BinaryOp where
>   show Plus = "+"
>   show Minus = "-"
>   show Div = "/"
>   show Times = "x"
> -}


... but I shied away from that approach for two reasons: first and foremost, we have a disjoint type with enumerated values, but later we are going to choose from those enumerated values from a list of all those values. It seems to me to be wastefully redundant to enumerate the values of the type in its declaration, and then be required to enumerate those values again in their use. Wastefully redundant and dangerous — the type guarantees that I will not use values outside the type, but how can I guarantee that I've used all the values of the type in the program? I mean, of course, besides full coverage using a dependently typed system, that is.

So instead of the above commented out code, I chose the following type definition:
> data BinaryOp = Op (Int → Int → Int) String


This type has the advantage that the arithmetic operation is embedded in the type itself, as well as showing and equality:
> instance Eq BinaryOp where
>   (Op _ a) ≡ (Op _ b) = a ≡ b

> instance Show BinaryOp where
>   show (Op _ s) = s

> op :: BinaryOp → Int → Int → Maybe Int
> op (Op f s) b a | s ≡ "/" = if b ≡ 0 ∨ a `rem` b ≠ 0
>                               then Nothing
>                               else Just (a `div` b)
>                 | otherwise = Just $ f a b


I know that some of you are going to be raising your eyebrows in disbelief: I've forsaken a pure and dependable, or compile-time, way to distinguish operations with a very weak string, or runtime, representation. The former is "just the way it makes sense" for most; the latter, my "sensible" compromise, sacrificing type safety for a more declarative representation. Again, if we had dependent types, I think there would be a type-safe way to represent arithmetic operators as comparable (distinct) and showable types ... dependent types for Haskell', anyone?

Okay, now we need to write permute, because this isn't C++ or Java. *sigh* That's correct, you heard it here first: Java and C++ are way better programming languages because they have permute in their standard libraries, and Haskell does not.
> permute :: Eq a ⇒ [a] → [[a]]
> permute list@(_:_) = [x:rest | x ∈ list,
>                                rest ∈ permute (list \\ [x])]
> permute [] = [[]]
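A quick GHCi check of the enumeration order:
> permute [1,2,3]
[[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]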


And from that the solver simply follows: it's the permutation of the row of numbers matched with any arbitrary (ambiguous) arithmetic operation. This has the flavor of the days when I was writing code in MMSForth, push the numbers on the stack and then call the operators. Oh, JOY! (I'll stop now.)
> solver = [twiner perms [op1, op2, op3] |
>              perms@[a,b,c,d] ∈ permute [1..4],
>              op1 ∈ amb, op2 ∈ amb, op3 ∈ amb,
>              ((op op1 b a >>= op op2 c >>= op op3 d)
>                  ≡ Just 10)]

> amb = [Op (+) "+", Op (*) "x", Op div "/", Op (-) "-"]


So, all we need is the display mechanic (twiner) to intersperse the arithmetic operators between the provided numbers. I had started down this path ...
> stringIt :: BinaryOp → Int → String → String
> stringIt op num acc = "(" ++ acc ++ show op ++ show num ++ ")"

> twiner :: [Int] → [BinaryOp] → String
> {-
> twiner args ops = doTwine args ops True ""
>   where doTwine args ops binary acc
>           | binary =
>               let (arg1, rest) = (head args, tail args)
>                   (arg2, done) = (head rest, tail rest)
>               in doTwine done (tail ops) False
>                          (stringIt (head ops) arg2 (show arg1))
>           | ops == [] = acc
>           | otherwise =
>               doTwine (tail args) (tail ops) False
>                       (stringIt (head ops) (head args) acc)
> -}


... looking longingly at intersperse in the Data.List module (why, oh, why! can't they have intersperse take a state-function argument? Oh, because I would be the only person using it?), but, I figured, as I have already reinvented takeWhile, I had better not reinvent zipWith here:
> twiner (h:r) ops = foldl t (show h) (doTwine r)
>   where doTwine args = zipWith (c stringIt) args ops
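With twiner in place, a spot-check of the whole pipeline at the GHCi prompt (if I've traced the comprehension's enumeration order correctly, the very first solution is the obvious one):
> head solver
"(((1+2)+3)+4)"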


... *ahem* dependent types would guarantee that twiner is never called with an empty list of arguments.

I'd say my Haskell version follows in the spirit of the Prolog version in that it uses nondeterminism (of the Maybe and list MonadPlus types) to select the true equations. Weighing in at 18 lines of type declarations and code (with 4 additional lines for the module declaration and the imports), it is also about the same size as the Prolog version as well.

Goodness! Typeful logic programming in Haskell, with all the functional frosting on the cake applied for free, costs no more than developing a Prolog program, and along with the type-safety, it also carries with it the flavor and the feel of logic programming in Prolog as well.

Tuesday, August 26, 2008

"Lucky you!"?

I'm sure some of you reading this blog share my fortune in that you are getting paid to code declaratively. I also know others of you are stuck in that ghetto of churning out web-pages from JSPs or VB.NET or Oracle Forms and are wondering how to break into that fairy-tale world where you can use a sentence with the word "λ-term" in it and not have it confused with a Revenge of the Nerds reference.

Yes, Virginia, there are actually jobs/contracts out there that pay real money for you to code declaratively. In fact, if you're in the United States or Australia, then there are "lots" of these jobs, and, if you're in Europe or Asia, there are some jobs in your country or in a country near you.

So, here you are, and you're missing the key ingredients that will secure you one of these plum assignments: professional work experience in your LoCh ("Language of Choice") 'cause you're stuck writing VB scripts, not (e.g.) Haskell, and phone calls from recruiters or companies wanting (e.g.) Haskell programmers 'cause you don't have (e.g.) Haskell on your resumé (because you never, ever lie on your resumé) and 'cause no recruiter nor company will ever admit they need something other than a programmer with Java or C# experience when they are hunting.

Real life story: a recruiter solicited me for a Prolog req. with no idea that I had Prolog experience, because she was so used to seeking Java programmers, and all her other contracts needed Java programmers, that the thought never occurred to her. Only my "and, yeah, I also have experience in Mercury, which is like Prolog, and Dylan, which is like Lisp" woke her up to that nagging req. that she thought she was never going to fill.

So, for all of you out there thinking about looking, you probably need a couple of questions answered. Like: how do you prepare for these jobs, and then, how do you find them? Well, I'm not going to give advice here on what should work, I'm going to tell you what worked for me. But since my experience may be atypical, maybe some of the readership that do have functional or logical programming jobs can chime in with your own stories that show easier routes to success.

Here's how I didn't get my paying jobs coding in Dylan and Mercury and Prolog and Haskell (yes, my current job is accepting some code in Haskell and they are building a foreign interface to Haskell in their Prolog engine. Win!):
  • I neither lied on nor fudged my resumé.

    Some people think that reading a book on Hibernate allows them to enter a bullet on their resumé as on-the-job work experience. Those kinds of antics may get one in the door, but the thing that keeps one on the payroll is "am I confident and competent building systems with this technology?" ("did I really do this?"), not "can I learn the technology I listed on my resumé on the new job?" ("do I think I can do this?")


  • I didn't get my job from any of the companies listed on haskell.org or schemers.org or <anything>.org.

    Why? Because these listings are usually from somebody who got to write a code snippet or a pet project in their favorite language under the radar and is crowing about it on their-favorite-language.org website. Sure, it's wonderful that XYZ corp is using Haskell, but does the HR department know about it? No. Is it in their budget to hire a Haskell programmer? No. You don't need to believe me on this one, however. You can send an email to every single one of those companies, just like I did, and get the null-responses and the "this email address does not exist" bounces, just like I did. Knock yourself out.


  • I also didn't get a job or contract working for the luminous companies in the declarative world.

    Don't know why here, but I can guess. It may be that Galois or Jane Street Capital or Ericsson or whatever gets a few solicitations a quarter, and they are already up to their gills in talent. If they already have all the work they want in that domain, and they already have some people with some time on their hands, they aren't going to hire a new person, only to give that person nothing to do, just because that new person is so excited about the opportunity to code declaratively.

    And the larger the company, the harder it is to thread one's way through the layers of bureaucracy to reach the isolated island that uses your LoCh. Sure, IBM (Tivoli) and Microsoft (Windows Registry) use Prolog, but good luck finding a person to speak about that in those empires. And since your LoCh is so niche in those empires, good luck finding an opening.


So, I guess it was pure-D-luck that I was on four contracts where I used Dylan, and one contract where I used Smalltalk, and one contract where I used Mercury, and two contracts where I used Prolog, and one contract (so far) that I'm using Haskell.

... or maybe not.

So, how did I get to use these languages that I (have) love(d) on these contracts?

  1. First and foremost, by being the best d*mned Java or C++ programmer their management had ever seen. In 1996, I took three Java books on my honeymoon, and read them. (Yes, Virginia, 12 years later, we are still married.) Every single night after work on my C++ contracts, I would read something from Design Patterns and Stroustrup. Work assignment scoped to two months, end-to-end, I would complete in two weeks.

    What did that give me? Trust, latitude, tolerance and time. Management knew they didn't need to look over my shoulder and they would allow me to do the jobs my way. It then was a very simple step from writing each web-page as its own JSP to writing a Dylan program that generated 100s of JSPs from templates. Then I would show management the Dylan program. ("This, boss, is how I've been so efficient").

    Most of the time it worked ("That's awesome, Doug." "Would you be willing to be a reference for me?" "Sure."), sometimes it didn't ("How are we going to maintain this when you're gone? Rewrite it in Java." "Yes, ma'am."). Where it worked, I added it to my resumé; where it didn't, I didn't.


  2. Next, I was the best salesman in the organization, better, even than the PR department. At socials with the customer, all the programmers would line up against the far wall ('cause programmers are shy, don't you know), except me. I would cross the room and walk right up to the customer with a ready handshake ("Larry, hi; I'm Doug Auclair, and I'm working on solving your problems"), this also applied to internal customers ("Paul, nice to meet you; I'm glad the VP of ITT is checking out this satellite program").

    What did this get me? Call backs and word-of-mouth that allowed me to set the terms of the contract ("Hi, Doug, this is Bruce at Raytheon, we need somebody good on this new contract, so we called you." "Sure, Bruce, this is my rate." "That won't be a problem").

    If you want what you want, you must be good where you are and you cannot be a wall-flower. People with the power to decide must think of you first because a) you've excelled in your current work and b) you've told them so.


  3. Next, I always had my eyes and ears open and, when I was good, I opened my mouth only after someone had gotten their gripe off their chest. You know what a gripe is? To me, a gripe is a contract. "!@#$%^*, we need a phoneme-based name-matcher but the programmer who built it in assembler retired years ago!" (Trans: Doug, we need you to write a fuzzy ILP system in Mercury) "I sure wish we could use the function object templates you've written about in the C++ system I'm building." (Trans: Doug, come on board to show me how Dylan will make this a whole bunch easier.) "We want your system to extract meaning from text documents, but extracting meaning from the images is too hard; don't bother with that." (Trans: Tell us about image processing and classification with Self-Organizing Maps and Pulse Coupled Neural Networks.). Each of the above gripes led to work, that is, authorization and funding, for me to write in my LoCh (Mercury, Dylan, and Haskell) that was outside the, erhm, comfort-language of the providing company.

    How did I have the confidence to offer those suggested courses of action and then have the competence to build them to delivery? Listening is key. Nobody listens in our industry, so when somebody does listen, the griper is so relieved that somebody cared that they weigh the response with much greater significance. That I had these novel, but workable, courses of action to offer comes from what I did in the next, final, point.


  4. Finally, I became a teacher. Being a teacher, a real teacher, means first you are a learner and learn more than anyone you know about the topic, and second you are a doer and are better at using the technology to solve the given problems than anyone else on the team. Those two things are prerequisites to being a teacher, but they don't make you a teacher. No, to be a teacher, you must write, write, and then write some more about the topic: documentation (users manuals and programmer manuals), white papers, articles, quizzes, aphorisms. Then after the writing comes the lecturing: I spent years developing and then teaching continuing education courses at the local community college after I gave brown bag lunch lectures at my place of work. I got two of my better contracts from company owners tripping over my written materials: a Dylan project and a Prolog-based agent project.

    By the way, teaching is not droning on for an hour or for four hours. Teaching is preparation, to the tune of 13 hours of prep work for each hour in front of the class (I did). Teaching is getting up on top of the desk to do flamenco (I did), teaching is jumping up and down, up and down, jackhammer-style to drill in the point (I did), teaching is having the students wonder what I was on and could they get a prescription ("Class, this is Doug without coffee!"). Teaching is inspiration; it is the flame to ignite the students' imagination.

    I had something like 3,000 students under my tutelage from the brown bags and my continuing education courses (that I was paid to teach), from coders to company presidents ... do you think I have a problem with my contact network?

    Oh, but that's easy for an extrovert like you, Doug! But I'm naturally shy. Dude, don't give me that line! On Myers-Briggs, I'm either INFJ or INTJ, my "I" (introversion) is so extreme that it actually blows the percentile curve. I HATE interacting with people, so much so that it takes an additional 8 hours for me to recover from a meeting or class that I run (and since I'm president of my independent consulting firm, I run all the meetings and teach all the classes), but I LOVE coding declaratively more. Once you know what you must have, you'll find a way to get it. The excuses you used to make for not getting what you must have become just that, excuses; when you have that burn, that's when you start finding ways to the desired end.


Well, unlike Java jobs or C# jobs or VB jobs, where one can scan the newspaper or go to dice.com, these magical jobs are not the ones I get when recruiters solicit me (three times a day). No, they appear between the cracks of the sky when it rends in two. All but one contract (that did come from dice.com) came from the company CEO pulling me aside or phoning me out of the blue from the materials I had published inside the company where I was employed, or from my web-sites on programming languages.

Usually, because of the exclusivity of the work, it takes anywhere from 3 months to 2 years to secure these kinds of contracts, so the adage "Don't quit your day job" is an appropriate one here. When I do get these contracts, however, they usually last longer (3 years) than the Java/C++/XML/web-services-code-grinder ones (which usually last around 6 months), and the peer group is much more intelligent, genteel and just plain more interesting than the code-grinding crowd. Is it harder to find these magical contracts? Yes; they are rarefied air. But are they worth preparing for and then finding? Definitely.

So, yes, I was lucky to work on 7 paying contracts using declarative languages, because I was the best code-grinder on my current job, I used my LoCh effectively and then wrote about the successes and taught techniques to my peers, all of which luckily attracted the notice of hungry companies needing a competitive advantage.

Luck, you see, is when preparedness meets opportunity.

Wednesday, August 13, 2008

Using Difference Lists

I posted my enthusiasm about difference lists for Haskell earlier this month, but then became sidetracked by combinators and reading lists, so, let's get back on track.

Difference lists are a (very) useful data structure in Prolog, particularly when it comes to parsing with definite clause grammars (DCGs) and when it comes to list construction (again, using DCGs). Why? Well, to answer that question, let's examine the structure of difference lists. The Prolog (or relational) representation of a difference list is of the form:
X = [1,2,3|Y] - Y
What that relation describes is that X is the difference of the pair of lists of (first) [1,2,3] appended with the list Y minus (second) the list Y.

Well, what is the value of the list Y? To a Prologian, this question is not important (for Y, being a logic variable, can be any list or, even, not a list at all because the variable is still uninstantiated (a "free" variable)). Its importance is that it gives us an "instant" index into the list after the elements we've already processed. The significance of that index for parsing is that we now have a stream over the parsed (list) data, which gives us the ability to translate BNF grammars directly into working parsers. The significance for list construction is that we "simply" walk "backward" through the DCG to construct a list from a (parsed) internal representation, without paying the linear cost of either (a) traversing the list each time to add a new element (which results in quadratic-time list construction) or (b) building the list in reverse (adding new elements to the head of the list) and then applying reverse to the end result.

"Big deal!" you scoff, "zippers [also called 'finger trees'] already give us indices into data structures. Who needs a funny-looking specialized zipper just for lists?"

"Good point!" is my response. Zippers are powerful and flexible data structures that not only provide that index over any data structure, but, even better than difference list's index that only goes forward (with any dignity) in constant time, the zipper's index can go both forward and backward through that (generic) structure in constant time. So, if you are not familiar with zippers, I request that you run, don't walk, to a good overview to learn more about their utility.

With all their flexibility and power, in certain situations, a zipper is not the correct choice for some list processing tasks; the difference list's simplicity shines in these situations. First, for the unidirectional index of the difference list (it moves forward through the list well, but backwards ... not so much), this, for list construction purposes, is hardly ever an issue, and if it becomes one, then backtracking is easily installed with some MonadPlus magic. Next, you, the coder, are responsible for constructing a zipper list data type each time you need one, but Don Stewart has provided Data.DList; no construction necessary! Last, an example structure of a zipper list, for the list a++b is (reverse a,b), so if we wish to reconstruct the original list, for an a of large magnitude, we still pay the linear (reverse) cost in reconstruction as well as the linear cost of appending b to that result.

You heard it here first, folks — difference lists: [zipper lists] can't touch this!

Okay, I've finished throwing it down for now. Let's look at difference lists from the Haskell perspective and then use this data structure in a list construction example.
> import Data.DList
For my examples, I'm going to be looking at the subset of difference lists that I can compare and see:
> instance Eq a => Eq (DList a) where
>   x == y = toList x == toList y

> instance (Eq a, Show a) => Show (DList a) where
>   show x | x == empty = "\\x -> [x]"
>          | otherwise = "\\x -> [" ++ show (head x)
>                        ++ doShow (toList (tail x)) ++ ":x]"
>     where doShow [] = ""
>           doShow (a:b) = ", " ++ show a ++ doShow b
With the above instance declarations, we see that
(singleton 1) `snoc` 2 `snoc` 3
gives us the representation
λ x . [1, 2, 3:x]
... snoc appends an element to the end of a list (so it's the reverse of cons).
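To make the constant-time claim concrete, a tiny GHCi session (using fromList, append and toList from Data.DList — append, under the hood, is just function composition):
> toList (fromList [1,2,3] `append` fromList [4,5])
[1,2,3,4,5]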

Prologians are always smug with their ability to create an infinite stream of 1s with X = [1|X]. And, no doubt about it, Prolog is a programming language that makes list construction easy ... *ahem* almost as easy as list construction in Haskell ("Your list comprehensions outshine even Prolog for sure..."):
repeat 1
[1,1..]
... and now with difference lists ...
let x = unDL (singleton 1) x in x
But no self-respecting Prologian would freeze their interpreter with such a rash proof; Haskellians have no such concern, thanks to weak head normal form:
take 5 (repeat 1)
So there!

Let's examine an application of difference lists. I have a little scanner that I use to help me in my oration. The meat of the algorithm (before difference lists) was as follows:
> lettersIn word = lettersIn' word []

> lettersIn' [] ans = reverse ans
> lettersIn' (h:t) ans | isLetter h = lettersIn' t (h:ans)
>                      | otherwise = reverse ans
Ugh, yes, reverses galore! This is an oft-recurring pattern in Haskell code that is very easily fixed with some difference list magic:
> lettersIn word = lettersIn' word empty

> lettersIn' [] ans = toList ans
> lettersIn' (h:t) ans | isLetter h = lettersIn' t (ans `snoc` h)
>                      | otherwise = toList ans
A very simple example, I grant you, for if we take a list whose elements are all in the set of the alphabet, the above code is a very complicated way of reinventing the id function. Or, put another way, who's going to teach me about scanl? Hm, would I be required to use the continuation Monad to escape the scan, I wonder? But you get the general idea for list construction, right? Instead of building the list by adding to the head ("cons"ing) and then applying reverse to the result, the difference list allows one to "snoc" each element and return the correctly ordered result. For a single list construction, the benefit may not be all that obvious, but when one builds, e.g., an XML generator for a complex internal representation, reversing the reverse of the reversed reversed child element becomes an odious labour — difference lists eliminate all the internal juggling, replacing that complication with a straightforward constant-time list and element in-place append.
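(And, speaking of reinvention: as the ⊥-trophy list above attests, a commenter pointed out that this whole definition collapses to a standard-library one-liner — a sketch:)
lettersIn :: String -> String
lettersIn = takeWhile isLetter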

So, next time you need to do some list construction, instead of falling back on the reverse standby, flex your new difference list muscles; you'll be pleasantly surprised.

Tuesday, August 12, 2008

Combinatory Birds as Types

This post is composed nearly entirely of the work of readers responding to my previous post that dealt with combinatory logic combinators. That post implemented the Smullyan combinatory birds as functions. This is the obvious approach, as birds and λ-terms have a very simple and straightforward (obviously bidirectional) mapping. A few readers improved and explained some of my combinators (S and W) by using the ((->) a) Monad, and I rolled those improvements into the original post, but these improvements are still in the combinators-as-functions domain.

Two readers, ryani and david, provided implementations for two combinators that I've had difficulty with. I've implemented the "void" combinator from the Unlambda programming language (or as Smullyan calls it, the "hopelessly egocentric" bird) using only the S and K combinators. Haskell is (thankfully) not so restricted, so ryani implemented that combinator as a type and then as a class. The thrust of ryani's void-as-type implementation is as follows:
> {-# LANGUAGE RankNTypes #-}
> newtype Void = V { runV :: forall a. a -> Void }
> v :: Void
> v = V (\x -> v)
Simple and sweet! And, the ranked type resolves the type-circularity ("void x is of type void"). So, for example:
> :t runV v 0
runV v 0 :: Void
> :t runV v "hello"
runV v "hello" :: Void
> :t runV (runV v 0) "hello"
runV (runV v 0) "hello" :: Void
... etc ...
In my implementation, I represented that the "void" combinator is "enough" K combinators (where "enough" was more K combinators than applications); ryani takes that exact approach using a class-based implementation:
> class Voidable r where vc :: r
> instance Voidable () where vc = ()
> instance Voidable r => Voidable (a -> r) where vc = k vc
(so void is represented as the Haskell type and datum () (pron: "unit")) e.g.:
> vc "hello" :: ()
()
> vc "hello" 123 :: ()
()
> vc "hello" 123 (Just k) :: ()
()
As ryani points out, the type-checker supplies the correct number of K combinators to match the number of function applications. Neat!

I had left a question on how to implement the U, or Turing, combinator, and david demonstrated an implementation where the whole class of combinatory birds becomes a type (with two free implementations of good-ole factorial, to boot):
> data Bird a = B (Bird a -> Bird a) | Value a

> app :: Bird a -> Bird a -> Bird a
> app (B f) x = f x

> lift :: (a -> a) -> Bird a
> lift f = B (\(Value x) -> Value (f x))

> unlift :: Bird a -> (a -> a)
> unlift f = \x -> case (f `app` Value x) of Value y -> y

> -- Uxy = y(xxy)
> u = B (\x -> B (\y -> y `app` (x `app` x `app` y)))

> -- sage1 = UU [geophf mumbles: "Sweet!"]
> sage1 = u `app` u

> -- Yx = x(Yx)
> sage2 = B (\x -> x `app` (sage2 `app` x))

> fix f = unlift (sage1 `app` B (\g -> lift (f (unlift g))))
> fix2 f = unlift (sage2 `app` B (\g -> lift (f (unlift g))))

> facR :: (Integer -> Integer) -> Integer -> Integer
> facR f n = if n == 1 then 1 else n * f (n - 1)

> fac = fix facR
> fac2 = fix2 facR
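Both fixpoints check out at the GHCi prompt:
> fac 5
120
> fac2 5
120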
I'm sure both ryani and david would claim that the M combinator (where m x = x x), so useful in Smullyan for implementing several birds (such as the "void" combinator), is not hard to do in Haskell given the above implementations. Do you see how to do it? Wanna race?
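(If you want to check your work: in david's encoding self-application is now well-typed, so one sketch of M is simply the following.)
> -- mockingbird: mf = ff
> m :: Bird a
> m = B (\x -> x `app` x)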

Getting Better, part ][

A reader wrote in follow up to the previous post ("How do I get better? A reading list") and asked "Which of the three following books has the most bang for the buck?
  • To Mock a Mockingbird, Smullyan

  • Algorithms, a Functional Approach, Rabhi, LaPalme

  • How to Solve it, Modern Heuristics, Michalewicz/Fogel"
Not that I particularly agree that the above three are the top three of the larger book list, but I responded thusly:

The difficulty lies in the differences in our opinion as to which direction the bang makes the buck well-invested.

A facile answer, but also useless for you, I suppose, is that they all have a high bang/buck ratio, much more so than most the drivel out there. So let me explain their utility to me, and perhaps that will help you make a more informed decision as to how to proceed.
  • To Mock a Mockingbird, Smullyan: I absolutely consumed this book. But are you ever going to use combinatory logic combinators when coding? I do, and no-one else in the world does (okay, maybe David Turner still does an occasional SK bit of low-level coding). But as an introduction to the magic of first class functions and construction through composition (that is, in other words, "coding in the Haskell-style"), it has no equal in its lightness of touch as well as its density of concepts. You learn from this book and delight in it.

  • Algorithms, Rabhi & LaPalme. I carry this book incessantly, into meetings (where I am to be a passive participant), on trips, etc. Certainly it teaches one about data structures and algorithms in Haskell, using practical examples. But it also opens one's eyes to possibilities of programming. It is a powerful (but not dense) little book. It is, however, dated, especially in light of the explosion of new ways to use more-recently developed mathematical structures (from category theory: particularly monads, arrows, and comonads). It does not cover these recent developments, but the ones it does cover, it covers extraordinarily well.

  • How to Solve It, Michalewicz & Fogel. Thick, dense, impressive. I don't think I've gotten much further than chapter 2 — this is my fault, but it is also the book's unapologetic full coverage, as well as its assumption that it is not addressing 7-year-olds, that makes progression, for me, at least, arduous. Worth reading? Absolutely, as it addresses the fundamental problems of problem-solving (at the top of the list: problem solving is hard, and our education has flippantly ignored that bald fact) and proposes solutions that are novel, scintillating, simple, and, once discovered, obvious. I recommend you read more than the first three chapters in a book store (prepare to spend more than a few days in this task) to see if you can justify purchasing it for continued reading.
So, did I provide accurate guidance for the inquirer? Or were my assessments off-base? I won't disclose the choice the reader made, but I will ask instead: which book would you have selected, given the above reviews?

Sunday, August 3, 2008

How do I get better?

I just fielded a question from a programmer who wishes to improve his (Haskell) programming skill, and I responded with a reading list (below) and a referral to the Project Euler problems site. But I also see this as an excellent question for myself. How do you, genteel reader, recommend I improve my skill as a coder? Or, put another way, what book, or books, and which article, or articles, so totally transformed you from J. Random Hacker to J. Über Hacker (hacker being coder, or mathematician, or logician, or rule developer, or ...)?

My list is as follows. Would you kindly tell me what I need to read right now so I can see the light as you have?

Books for learning:

The Art of the Metaobject Protocol, Kiczales, et al
To Mock a Mockingbird, Smullyan
Reasoned Schemer, Friedman, Byrd, et al
Algorithms, a Functional Approach, Rabhi, LaPalme
Genetic Algorithms, Goldberg
How to Solve it, Modern Heuristics, Michalewicz/Fogel
Gödel, Escher, Bach: an Eternal Golden Braid, Hofstadter
Prolog Programming for AI, Bratko
(and, after a year of programming in Prolog), Craft of Prolog, O'Keefe
A Grammatical View of Logic Programming, Deransart/Maluszynski
An Introduction to Mathematical Philosophy, Russell

Books for Joy:

Testaments Betrayed, Kundera
Last Samurai, DeWitt
Lord of Light, Zelazny
American Gods, Gaiman
Complete Enchanter, de Camp
Moor's Last Sigh, Rushdie

Noosphere:

"To Dissect a Mockingbird" article, Keenan
The lambda papers, Steele/Sussman
Growing a language article, Guy Steele
AI Junkie site
sigfpe.blogspot.com blog
randomhacks.net blog
the Monad.Reader (on haskell.org)
Comonad.Reader blog
"What the Hell are Monads?" article, Winstanley
"Monad Transformers, Step by Step" article, Grabmueller
"Generalising Monads to Arrows", Hughes
"Theseus and the Zipper", on the Haskell Wiki
"Escape from Zurg: Exercise in Logic Programming", Erwig
And for language construction/deconstruction: Lazy K, Jot, Iota and Whirl (well, at least they aren't INTERCAL ...), ... I suppose brainf**k should be mentioned here too ...
"The tale of N-categories" serial, starting with week 73, Baez

Saturday, August 2, 2008

Combinators in Haskell

In this article, instead of continuing along my path of shoving Prolog (predicate logic) into Haskell, I'll take a break from that and shove Haskell into Haskell. This turn comes by way of my recent professional experience working with Prolog: I found myself oftentimes needing to write some purely functional code, but I found the Prolog semantics getting in my way. So, borrowing some of the ideas from my previous professional programming experiences with Mercury and Dylan (when Harlequin was still a going concern, and had picked up the Dylan ball that Apple, then CMU, dropped, making a really sweet Dylan programming environment), I implemented a set of libraries for Prolog, including basic ZF-set operations, propositional logic syntax, list/set/bag utilities, and the combinatory logic of Schönfinkel.
"geophf!" You exclaim, horrified, "you don't really use combinators to write working production code, do you?"

Well, yes, and I also proposed a new one in addition to the ones covered by Smullyan in To Mock a Mockingbird, and have studied the make-up of the "void" combinator of Unlambda, so I'm one of those kinds of programmers.

In short, when I was working with Prolog, I shoe-horned a Haskell interpreter into it to be able to write map and fold when I wished to write map and fold. Since Prolog has DCGs (Definite Clause Grammars), I lived a monad-free and worry-free life. Ah, yes, those were the days.

That was quite a trip down memory lane. Looking back, I marvel at my fortune, not only have I professionally programmed in those languages (and VisualWorks Smalltalk; that was fun when I presented the boss a solution in two weeks that he was expecting in two months — he was so impressed that he bought the VisualWorks system for the contract, made me mentor a junior programmer on Smalltalk (Chris Choe, a good guy to work with), and had me teach a brown bag course at work on Smalltalk, but I digress, again), but it looks like Haskell is now part of our solution at my current place of work.

Life is good.

Of course, it would be so much better if I had access to the primitive combinators in Haskell (did Miranda allow that kind of low-level access, I wonder?). I mean, why should one write out 'const', a whole FIVE characters, when one simply wishes to use the K combinator? I felt that very pain all too recently. I had that freedom using my combinator library when I was working in Prolog ... goodness, even MITRE fabricated the Curry Chip.

So, I could not put it off any longer: I pulled out my dog-eared copy of Mockingbird and wrote the CL module, as I didn't see one in the Haskell standard library. It went surprisingly quickly and easily. There were a few snags: Prolog, being dynamically-typed, allowed me to define the recurring combinators of L, M, and U (definitions of the combinators are available in my combinator library, or in tabled form, or as graphical notation (which includes a very nice write up of propositional logic in CL)), but Haskell's type system complains of an occurs-check error. I have the hack to define the Y combinator ...
fix f = let x = f x in x

... but I'm having difficulty hacking the other self-applying combinators; any suggestions on how to implement those? I'm particularly interested in implementing Turing's universal combinator ...
Uxy = y(xxy)

... because of its interesting properties I'd like to explore.

Anyway, here's the ones I do have.

> module Smullyan where

It was pointed out to me in the comments that if you make the ((->) a) type a Monad, then some of the combinators simplify to monadic operators.

> import Monad
> import Control.Monad.Instances

These are some of the combinators presented in Smullyan's To Mock a Mockingbird. Some have direct Haskell equivalents, e.g.: I → id, K → const, C → flip, B → (.), but then that just makes my job easier here. I also admit a preference for the Schönfinkel combinators over the renamed Haskell equivalents, so here is the library.

We will not be defining combinators here that cause an occurs-check of the type system, e.g. L, M, U, but we can define some interesting ones, such as O and Y by using work-arounds.

If we wish to start with λI, then its basis is formed from the J and I combinators.
> -- identity (I have no idea to which species belongs the 'identity' bird)
> i :: a -> a
> i = id

> -- jay: jabcd = ab(adc)
> j :: (a -> b -> b) -> a -> b -> a -> b
> j a b c d = a b (a d c)

I actually spent quite a stretch of time building the other combinators from the JI-basis, e.g., the T combinator is JII, the Q1 combinator is JI, etc. When I attempted to define the K combinator, I ran into a brick wall for some time, until I reread the section in Mockingbird about how the noble basis has no way to define that abhorred combinator. Since that time I've fallen from grace and have used λK, but I've always wondered since if a complete logic could be reasonably expressed in λI, and if so, how would that logic be implemented? I haven't come across any papers that address these questions.

Musing again, let's define the basis of λK which is founded on the S and K combinators.
> -- starling: sfgx = fx(gx)
> s :: Monad m ⇒ m (a → b) → m a → m b
> s = ap

> -- kestrel: kab = a
> k :: a -> b -> a
> k = const

... okay, that wasn't too hard, so, SKK should be I, right?

--- :t (s k k) :: a -> a ... oh, yeah!

let's continue with some of the other combinators:
> -- bluebird: bfgx = f(gx)
> b :: (b -> c) -> (a -> b) -> a -> c
> b = (.)

> -- cardinal: cfgx = gfx
> c :: (a -> b -> c) -> b -> a -> c
> c = flip

Now we start defining combinators in terms of simpler combinators. Although, we could have started doing that once we've defined S and K, as all other combinators can be derived from those two.
> -- dove: dfghx = fg(hx)
> d :: (d -> b -> c) -> d -> (a -> b) -> a -> c
> d = b b

> -- thrush: txf = fx
> t :: a -> (a -> b) -> b
> t = c i

> -- vireo (pairing/list): vabf = fab
> -- e.g. v 1 [2] (:) -> [1,2]
> v :: a -> b -> (a -> b -> b) -> b
> v = b c t

> -- robin: rxfy = fyx
> r :: a -> (b -> a -> c) -> b -> c
> r = b b t

> -- owl: ofg = g(fg)
> o :: ((a -> b) -> a) -> (a -> b) -> b
> o = s i

> -- queer: qfgx = g(fx)
> q :: (a -> b) -> (b -> c) -> a -> c
> q = c b

-- mockingbird: mf = ff
m = s i i

... ah, well, it was worth a try ...
> -- warbler: wfx = fxx
> w :: Monad m ⇒ m (m a) → m a
> w = join

> -- eagle: eabcde = ab(cde)
> e :: (a -> b -> c) -> a -> (d -> e -> b) -> d -> e -> c
> e = b (b b b)

With the above definitions, we can now type-check that my JI-basis is correct: the type of I already checks, and the type of J should be equivalent to B(BC)(W(BCE)), ...
:t (b (b c) (w (b c e)))
(b (b c) (w (b c e))) :: (d -> e -> e) -> d -> e -> d -> e

and it is ... yay!

-- lark (ascending): lfg = f(gg)
l = ((s ((s (k s)) k)) (k (s i i)))

l :: (a -> b) -> (c -> a) -> b
l a b = a (b b)

... ah, well, another one bites the dust ...

but we can define Y, albeit with a let-trick, thanks to Dirk Thierbach, responding to the thread "State Monad Style" on comp.lang.haskell:
> fix :: (a -> a) -> a
> fix f = let x = f x in x

> -- sage/why: yf = f(yf)
> y :: (a -> a) -> a
> y = fix

So, there you go, 15 combinators to get you started; now you can program a functionally pure and mathematically sound version of Unlambda (which exists and is called Lazy-K, by the way) using your very own Haskell system.

After all, why should someone ever be forced to write \x -> x^2 when they have the option, and privilege, to write w (*) instead?
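(A quick check at the GHCi prompt, with the Smullyan module loaded:)
> w (*) 4
16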

Apologies for late Spring cleaning

It was pointed out to me that the colour cyan is difficult to read against a white background; my syntax highlighter outputs Haskell keywords as cyan, so I understand that reading the code here can be bothersome. So, I'm in the process of converting the cyan to a darker colour (which first involved me converting every <font color='cyan'> to <font color='#00c8c8'>, but then, part way through that process, I moved all font metadata to the stylesheet and now simply mark code, keywords and data types with appropriate metadata tags).

A bit of a long-winded apology for what you may see as several (and repeated) repostings from this blog.

Legal Disclaimer

Parties:

"I", "me", "my", "mine", the author, geophf, aka Douglas M. Auclair
"You", "your", the reader, user, coder, redistributor, referencer. Some non-exhaustive examples of "You" include, J. Random Coder, J. Random Corporation, J. Random Defense Agency, J. Random Financial Institute, J. Random Blog, J. Random for-profit corporation/LLC, J. Random Sovereignty, or http://planet.haskell.org ... just as examples.
"This blog", http://logicaltypes.blogspot.com, its content and entries originating from me.

License:

The words of this blog, http://logicaltypes.blogspot.com, are freely available to be read, to be commented on, to be referenced, or to be copied. I, the author, would like attribution, either to this site (an URL as reference is appropriate) or to the author, geophf, aka Douglas M. Auclair, but do not require it. I would also like to know if you find the topics discussed here useful, by either leaving a comment on this blog in the standard fashion, or by sending me an email at the scrambled email address of doug.at.cotilliongroup.dot.com. Descramble in the standard way.

The code in this blog is also free and freely available for commercial, noncommercial, academic or non-academic (if you happen to be a marauding Visigoth coder) use. Attribution is requested as above, but not required. Notification, as above, is also requested, but not required.

Limitation of Liability:

Note that, although the code and words are free and freely available for use and redistribution, no guarantee is made for any word or code sample published here. Some non-comprehensive examples include the following: if you use this code in a missile guidance system, and that missile falls in a populated area of friendly forces and civilians, I, my words, and my code are not to be held responsible for the resulting damage to life and to property. Or, if you implement systems published in this blog in a clustered system, and that system becomes self-aware, like Cyberdyne or the Matrix, and decides either to wipe humanity off the face of the Earth by triggering an all-out global thermonuclear war or by constructing power plants composed of the now-enslaved human population, I, my words and my code are not to be held responsible for the loss of life, freedom or property. Or, if you use this code or the ideas explored here in a Mars rover, and it offends the space alien natives, and they descend in force, 'War of the Worlds'-style, I, my words, and my code are not to be held responsible for the resulting loss of life and damage to property. Or, if you implement this system in a new Qbit supercomputer, and it results either in a singularity, sucking up the world Donnie Darko-style, or in a readjustment of space-time causing a complete quantum collapse of the universe, then I, my words, and my code are not to be held responsible for the early onset of the rapture. The examples enumerated above serve only to illustrate that I, my words and my code may be used, but that I may not be held responsible, legally, financially, or otherwise, for any and all damage resulting from such use, no matter how small or how catastrophic.

Exemption from obligation:

Conversely, should you use this code, or the words of this blog, and the use results in making, earning or saving money, property or life, such as, non-exhaustively, making profit in the stock market, or gaining a strategic or tactical advantage on the battlefield, or locating 3 missing teenagers, or saving 150 sailors' lives at sea, or seizing 26 million USD-worth of illegal narcotics per month, you are under no obligation to recompense me for profits or property gained or seized, lives saved, or other benefits merited. If possible and desired, I would like attribution in the code or words used and notification of the positive result of said use, but I also understand the whims of corporate and personal desires as well as restrictions of non-disclosure agreements and concerns of national/world security, so attribution and notification are desired, as above, but not required. The examples above illustrating exemption from your obligation are neither exhaustive, meaning any gain is covered in this exemption clause, nor are they meant to illustrate implied or real events that have occurred in the past, present or future: I make no claim as to the efficacy of me, my words, or my code in any of the above illustrations, nor do I make a claim that the above illustrations are based on any past, present or future events that may or may not have occurred.

Of course, I will not be uncivil, either. And gratuities are gratefully accepted (but, as per above, neither required nor expected). If, in gratitude, you do wish to recompense me for beneficial use of my words or code, please contact me at (scrambled) doug.at.cotilliongroup.dot.com to discuss methods of remuneration.

Coverage:

This license is put into effect for all entries posted by me. All comments by me are also covered by this license. Comments published by others are at the risk of the originators: I bear no risk nor liability of words published by others on this blog. This license is put into effect immediately, August 2, 2008, and covers all entries from the first entry ("Trivial Monad solutions" dated May 14, 2008) to all entries and content following it until such time that it may be superseded by a new license from me.

Friday, August 1, 2008

Difference Lists in Haskell!

Oooh! Scrumptious!

I've discovered while exploring one of my Tangents, the Comonad.Reader, that a member of the Galois team, Don Stewart, has developed a Difference List library ... in Haskell. My initial scan of the library is that it is simple, elegant and practical. Three feathers in his cap!

Excuse me while I go replace the 'cons then reverse' list-building pattern in my code with Don's DList. I also can't wait to find other applications for this type.

While I'm away doing that, you can read sigfpe's blog, particularly his clever implementation of the Fibonacci series as a comonad.