In this article, instead of continuing along my path of shoving Prolog (predicate logic) into Haskell, I'll take a break and shove Haskell into Haskell. This turn comes by way of my recent professional experience working with Prolog: I often found myself needing to write purely functional code, but Prolog's semantics kept getting in my way. So, borrowing some of the ideas from my previous professional programming experiences with Mercury and Dylan (when Harlequin was still a going concern, and had picked up the Dylan ball that Apple, then CMU, dropped, making a really sweet Dylan programming environment), I implemented a set of libraries for Prolog, including basic ZF-set operations, propositional logic syntax, list/set/bag utilities, and the combinator logic of Schönfinkel.
"geophf!" you exclaim, horrified, "you don't really use combinators to write working production code, do you?"
Well, yes, and I also proposed a new one in addition to the ones covered by Smullyan in To Mock a Mockingbird, and I have studied the make-up of the "void" combinator of Unlambda, so I'm one of those kinds of programmers.
In short, when I was working with Prolog, I shoe-horned a Haskell interpreter into it to be able to write map and fold when I wished to write map and fold. Since Prolog has DCGs (Definite Clause Grammars), I lived a monad-free and worry-free life. Ah, yes, those were the days.
That was quite a trip down memory lane. Looking back, I marvel at my fortune: not only have I professionally programmed in those languages (and in VisualWorks Smalltalk; that was fun when I presented the boss with a solution in two weeks that he was expecting in two months. He was so impressed that he bought the VisualWorks system for the contract, made me mentor a junior programmer on Smalltalk (Chris Choe, a good guy to work with), and had me teach a brown-bag course at work on Smalltalk. But I digress, again), but it looks like Haskell is now part of our solution at my current place of work.
Life is good.
Of course, it would be so much better if I had access to the primitive combinators in Haskell (did Miranda allow that kind of low-level access, I wonder?). I mean, why should one write out 'const', a whole FIVE characters, when one simply wishes to use the K combinator?
I felt that very pain all too recently. I had that freedom using my combinator library when I was working in Prolog ... goodness, even MITRE fabricated the Curry Chip.
So, I could not put it off any longer: I pulled out my dog-eared copy of Mockingbird and wrote the CL module, as I didn't see one in the Haskell standard library. It went surprisingly quickly and easily. There were a few snags: Prolog, being dynamically typed, allowed me to define the recurring combinators L, M, and U (definitions of the combinators are available in my combinator library, in tabled form, or as graphical notation, which includes a very nice write-up of propositional logic in CL), but Haskell's type system complains of an occurs-check error. I have the hack to define the Y combinator ...
fix f = let x = f x in x
... but I'm having difficulty hacking the other self-applying combinators; any suggestions on how to implement those? I'm particularly interested in implementing Turing's universal combinator ...
Uxy = y(xxy)
... because it has interesting properties I'd like to explore.
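One work-around I've since found for the self-applying combinators is to hide the self-application behind a recursive newtype, so the recursion the occurs-check rejects is made explicit in a datatype instead. This is a sketch of mine, not part of the library; the names Rec, u, theta, and fact are my own:

```haskell
-- the wrapper: a value that can be applied to itself once unwrapped
newtype Rec a = Rec { unRec :: Rec a -> a }

-- Turing's universal combinator, Uxy = y(xxy), with xx routed through Rec
u :: Rec ((a -> b) -> a) -> (a -> b) -> b
u x y = y (unRec x x y)

-- Turing's fixed-point combinator is UU; that self-application also
-- goes through the wrapper, and the types force b ~ a
theta :: (a -> a) -> a
theta = u (Rec u)

-- sanity check: theta f = f (theta f), so it supports anonymous recursion
fact :: Integer -> Integer
fact = theta (\rec n -> if n <= 0 then 1 else n * rec (n - 1))
-- fact 5 == 120
```

The same Rec trick gives typed versions of M and L on wrapped arguments, at the cost of inserting Rec/unRec by hand where the untyped calculus applies a term to itself.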
Anyway, here are the ones I do have.
> module Smullyan where
It was pointed out to me in the comments that if you make the ((->) a) type a Monad, then some of the combinators simplify to monadic operators.
> import Control.Monad (ap, join)
These are some of the combinators presented in Smullyan's To Mock a Mockingbird. Some have direct Haskell equivalents, e.g.: I ≡ id, K ≡ const, C ≡ flip, B ≡ (.), but then that just makes my job easier here. I also admit a preference for the Schönfinkel combinators over the renamed Haskell equivalents, so here is the library.
We will not be defining combinators here that cause an occurs-check failure in the type system, e.g. L, M, U, but we can define some interesting ones, such as O and Y, by using work-arounds.

If we wish to start with λI, then its basis is formed from the J and I combinators.
> -- identity (I have no idea to which species the 'identity' bird belongs)
> i :: a -> a
> i = id
> -- jay: jabcd = ab(adc)
> j :: (a -> b -> b) -> a -> b -> a -> b
> j a b c d = a b (a d c)
I actually spent quite a stretch of time building the other combinators from the JI-basis, e.g., the T combinator is JII, the Q1 combinator is JI, etc. When I attempted to define the K combinator, I ran into a brick wall for some time, until I reread the section in Mockingbird about how the noble basis has no way to define that abhorred combinator. Since that time I've fallen from grace and have used λK, but I've always wondered since whether a complete logic could be reasonably expressed in λI, and if so, how that logic would be implemented. I haven't come across any papers that address these questions.
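To make the JI-basis claim concrete, here is a small sketch of mine (t' and q1 are my names, not part of the module) checking T = JII and Q1 = JI; note the inferred types come out narrower than the fully general birds, because j's type above is already specialized:

```haskell
-- the article's i and j, restated so this aside compiles on its own
i :: a -> a
i = id

j :: (a -> b -> b) -> a -> b -> a -> b
j a b c d = a b (a d c)

-- thrush from the JI basis: (JII) x f = f x
t' :: a -> (a -> a) -> a
t' = j i i

-- Q1 from the JI basis: (JI) f x g = f (g x)
q1 :: (a -> a) -> a -> (a -> a) -> a
q1 = j i
```

So t' 3 (+1) gives 4, and q1 (+1) 3 (*2) gives 7, as the reductions predict.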
Musing again, let's define the basis of λK, which is founded on the S and K combinators.
> -- starling: sfgx = fx(gx)
> s :: Monad m => m (a -> b) -> m a -> m b
> s = ap
> -- kestrel: kab = a
> k :: a -> b -> a
> k = const
... okay, that wasn't too hard, so SKK should be I, right?

:t (s k k)
(s k k) :: a -> a

... oh, yeah!
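As a quick sanity check of that function-monad reading, here is a sketch (the numeric examples are mine, not from the module):

```haskell
import Control.Monad (ap)

-- the article's s and k, restated so this check runs on its own
s :: Monad m => m (a -> b) -> m a -> m b
s = ap

k :: a -> b -> a
k = const

-- in the ((->) a) monad, s f g x = f x (g x)
demo1 :: Int
demo1 = s (+) (*2) 3   -- 3 + (3 * 2) = 9

-- and SKK behaves as I
demo2 :: Int
demo2 = s k k 42       -- 42
```

The ((->) a) Monad instance ships with the Prelude in modern GHC, so ap really is the starling on functions.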
Let's continue with some of the other combinators:
> -- bluebird: bfgx = f(gx)
> b :: (b -> c) -> (a -> b) -> a -> c
> b = (.)
> -- cardinal: cfgx = gfx
> c :: (a -> b -> c) -> b -> a -> c
> c = flip
Now we start defining combinators in terms of simpler combinators, although we could have started doing that as soon as we defined S and K, since all other combinators can be derived from those two.
> -- dove: dfghx = fg(hx)
> d :: (d -> b -> c) -> d -> (a -> b) -> a -> c
> d = b b
> -- thrush: txf = fx
> t :: a -> (a -> b) -> b
> t = c i
> -- vireo (pairing/list): vabf = fab
> -- e.g. v 1 [2] (:) -> [1,2]
> v :: a -> b -> (a -> b -> b) -> b
> v = b c t
> -- robin: rxfy = fyx
> r :: a -> (b -> a -> c) -> b -> c
> r = b b t
> -- owl: ofg = g(fg)
> o :: ((a -> b) -> a) -> (a -> b) -> b
> o = s i
> -- queer: qfgx = g(fx)
> q :: (a -> b) -> (b -> c) -> a -> c
> q = c b
-- mockingbird: mf = ff
m = s i i
... ah, well, it was worth a try ...
> -- warbler: wfx = fxx
> w :: Monad m => m (m a) -> m a
> w = join
> -- eagle: eabcde = ab(cde)
> e :: (a -> b -> c) -> a -> (d -> e -> b) -> d -> e -> c
> e = b (b b b)
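Before moving on, a small usage sketch exercising the birds above on concrete arguments (the examples and the names demo and pair are mine, and the definitions are restated so the aside compiles on its own):

```haskell
-- the article's definitions, restated
i :: a -> a
i = id

b :: (b -> c) -> (a -> b) -> a -> c
b = (.)

c :: (a -> b -> c) -> b -> a -> c
c = flip

t :: a -> (a -> b) -> b
t = c i

v :: a -> b -> (a -> b -> b) -> b
v = b c t

r :: a -> (b -> a -> c) -> b -> c
r = b b t

q :: (a -> b) -> (b -> c) -> a -> c
q = c b

demo :: [Int]
demo = [ b (+1) (*2) 3   -- (3*2)+1 = 7
       , c (-) 1 10      -- 10 - 1  = 9
       , t 3 (+1)        -- (+1) 3  = 4
       , r 1 (-) 10      -- 10 - 1  = 9
       , q (*2) (+1) 3   -- (3*2)+1 = 7
       ]

pair :: [Int]
pair = v 1 [2] (:)       -- the vireo pairs its arguments: [1,2]
```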
With the above definitions, we can now type-check that my JI-basis is correct: the type of I already checks, and the type of J should be equivalent to B(BC)(W(BCE)), ...

:t (b (b c) (w (b c e)))
(b (b c) (w (b c e))) :: (d -> e -> e) -> d -> e -> d -> e

and it is ... yay!
-- lark (ascending): lfg = f(gg)
l = s (s (k s) k) (k (s i i))
-- equivalently: l a b = a (b b)

... ah, well, another one bites the dust ...
but we can define Y, albeit with a let-trick, thanks to Dirk Thierbach, responding to the thread "State Monad Style" on comp.lang.haskell:
> fix :: (a -> a) -> a
> fix f = let x = f x in x
> -- sage/why: yf = f(yf)
> y :: (a -> a) -> a
> y = fix
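To see y earn its keep, a quick sketch (sumTo is my example, not part of the module): the recursive call is abstracted out as an argument, and y ties the knot.

```haskell
-- the article's fix and y, restated so this aside runs on its own
fix :: (a -> a) -> a
fix f = let x = f x in x

y :: (a -> a) -> a
y = fix

-- y f = f (y f): the lambda receives "itself" as rec
sumTo :: Integer -> Integer
sumTo = y (\rec n -> if n <= 0 then 0 else n + rec (n - 1))
-- sumTo 10 == 55
```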
So, there you go: 15 combinators to get you started; now you can program a functionally pure and mathematically sound version of Unlambda (which exists and is called Lazy-K, by the way) using your very own Haskell system.
After all, why should someone ever be forced to write \x -> x^2 when they have the option, and privilege, to write w (*) instead?
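For the record, a minimal sketch of that last claim (square is my name for it), using join as the warbler in the ((->) a) monad:

```haskell
import Control.Monad (join)

-- in the ((->) a) monad, join f x = f x x, so w (*) squares its argument
square :: Int -> Int
square = join (*)
-- square 5 == 25
```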