Sunday, September 25, 2011

Monads in Java

A while back, Brent Yorgey (http://byorgey.wordpress.com) posted a menagerie of types: a very nice survey of types in Haskell, including, among other things, the monadic types.

A reader posted a question, framed with 'you know, it'd be a lot easier for me to understand monads if you provided an implementation in Java.'[1]

Well, that reader really shouldn't have done that.

Really.

Because it got me to thinking, and dangerous things come out of geophf's thoughts ... like monads in Java.

Let's do this.

Problem Statement

Of course, you don't just write monads for Java. You need some motivation. What's my motivation?

Well, I kept running up against the Null Object pattern, because I kept running up against null in the Java runtime, because there's a whole lot of optionality in the data sets I'm dealing with, and instead of using the Null Object, null is used to represent absence.

Good choice?

Well, it's expedient, and that's about the best I can say for that.

So I had to keep writing the Null Object pattern for each of these optional element types I encountered.

I encounter quite a few of them ... 2,100 of them and counting.

So I generalized the Null object pattern using generic types, and that worked fine ...

But I kept looking at the pattern. The generalization of the Null Object pattern is Nothing, and a present instance is Just that something.

I had implemented non-monadic Maybe.

And the problem with that is that none of the power of Maybe, as a Monad, is available to a generalized Null Object pattern.

Put another way: I wish to do something given that something is present.

Put another-another way: bind.

So, after a long hard look ('do I really want to implement monads? Can't I just settle for "good enough" ... that really isn't good enough?'), I buckled down to do the work of implementing not just Maybe (heh: 'Just Maybe'), but Monad itself.

Did it pay off? Yes, but wait and see. First the implementation. But even before that, let's review what a monad is.

What the Hell is a Monad?

Everybody seems to ask this question when first encountering monads, and everybody seems to come up with their own explanations, so we have tutorials that explain Monads as things as far-ranging as spacesuits and burritos.

My own explanation is to punt ... to category theory.

A monad is simply a triple (that's why it's called a monad, ... because it's a triple), defined thusly:

(T, η, μ)

Simple right?

Don't leave me!

Okay, yes. I understand you are rolling your eyes and saying 'well, that's all Greek to me.'

So allow me to explain in plain language the monad.

Monad is the triple (T, η, μ) where

  • T is the Monad Functor
  • η is the unit function that lifts an ordinary object (a unit) into the Monad functor; and,
  • μ is the join function that takes a composed monad M (M a) and returns the simplified type of the composition, or M a.

And that is (simply) what the hell a Monad is. What a monad can do ... well, there are articles galore about that, and, later, I will provide examples of what Monad gives us ... in Java, no less.
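To make the triple concrete with the monad we will implement below (my gloss, in the same Haskell-ish notation the rest of this article uses), here it is instantiated for Maybe:

      T = Maybe
      η x = Just x
      μ (Just (Just x)) = Just x
      μ (Just Nothing)  = Nothing
      μ Nothing         = Nothing

η wraps a plain value into the functor; μ flattens a doubly-wrapped Maybe back down to a singly-wrapped one.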

The Implementation

We need to work our way up to Monad. There's a whole framework, a way of thinking, a context, that needs to be in place for Monad to be effective or even to exist. Monad is 'happy' in the functional world, so we need to implement functions.

Or more correctly, 'functionals' ... in Haskell, they are called 'Functors.'

A functor is a container, and the function of functors is to take an ordinary function, which we write: f: a → b (pronounced "the function f from (type) a to (type) b," meaning that f takes an argument of type a and returns a result of type b), and give a function g: Functor a → Functor b.

Functor basically is a container for the function fmap: Functor f ⇒ (a → b) → (f a → f b)

Well, is Functor the fundamental type? Well, yes and no.

It is, given that we have functions defined. In Java, we don't have that. We have methods, yes, but we don't have free-standing functions. Let's remedy that:


  public interface Applicable<T1, T2> {
      public T2 apply(T1 arg) throws Failure;
  };

And what's this Failure thingie? It's simply a functional exception, so:


  public class Failure extends Exception {
      public Failure(String string) {
          super(string);
      }

      // ... and the serialVersionUID for serialization; your IDE can define that
  }

I call a Function 'Applicable' because in functional languages that's what you do: apply a function on (type) a to get (type) b. And, using Java Generics terminology, type a is T1 and type b is T2. So we're using pure Java to write pure Java code and none will be the wiser that we're actually doing Category Theory via a Haskell implementation ... in Java, right?

Right.
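To get a feel for the shape of the thing, here is a throwaway Applicable (my own example, not part of the framework): a function from String to Integer that fails, functionally, when the text isn't a number:

      // A made-up example Applicable (inside some method that throws Failure):
      // parse a String into an Integer, failing functionally when it isn't numeric.
      Applicable<String, Integer> parseInt = new Applicable<String, Integer>() {
          public Integer apply(String arg) throws Failure {
              try {
                  return Integer.valueOf(arg);
              } catch(NumberFormatException ex) {
                  throw new Failure("Not a number: " + arg);
              }
          }
      };

      Integer fortyTwo = parseInt.apply("42"); // and parseInt.apply("oops") throws Failure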

So, now that we have functions (Applicable), we can provide the declaration of the Functor type:


  public interface Functor<F, T> {
       public <T1> Applicable<? extends Functor<F, T>,
                              ? extends Functor<F, T1>>
          fmap(Applicable<T, T1> f) throws Failure;

       public T arg() throws Failure;
  }

The above definition of fmap is the same declaration (in Java) as its functional declaration: it takes a function f: a → b and gives a function g: Functor a → Functor b

Okay, we're half-way there. A Monad is a Functor, and we have Functor declared.

The unit function, η, comes for free in Java: it's called new.

So that leaves the join function μ.

And defining join for a specific monad is simple. Join for the list monad is append. Join for the Maybe monad and the ID monad is arg. What about join for just Monad?

Well, there is no definition of such, but we can declare its type:


  public interface Joinable<F, T> extends Functor<F, T> {
      public Functor<F, ?> join() throws Failure;
  }

Too easy! you complain.

Well, isn't it supposed to be? Join is simply a function that takes a functor and returns a functor.

The trick is that we have to ensure that T is of type Functor<F, ?>, and we leave that an implementation detail for each specific monad implementing join.

That was easy!™

Monad

So now that we have all the above, Functor, Join, and Unit, we have Monad:

... BUT WAIT!

And here's the interlude of where the power of Monad comes in: bind.

What is bind? Bind is a function that says: do something in the monadic domain.

Really, that sounds boring, right? But it's a very powerful thing. Because once we are in the monadic domain, we can guarantee things that we cannot in the Java category (if there is such).

(There is. I just declared it.)[2]

So, in the monadic domain, a function thus bound can be guaranteed to, e.g., be working with an instance and not a null.

Guaranteed.

AND! ...

Well, what is bind?

Bind is a function that takes a monadic value and a function from an ordinary value to a monadic result, and gives back a result in the monadic domain:

Monad m ⇒ bind: m a → (a → m b) → m b

So the 'AND!' of this is that bind can be chained ... meaning we can go from guarantee to guarantee, threading a safe computation from start to finish.

Monad gives us bind.

And the beauty of Monad? bind comes for free, for once join is defined, bind can be defined in terms of join:

Monad m ⇒ bind (f: a → m b) (m a) = join ((fmap f) (m a))

Do you see what is happening here? We are using fmap to lift the function f one step higher into the monadic domain, so fmap f gives the function Monad m ⇒ g: m a → m (m b). So we are not actually passing in a value of type a; we are passing in the Monad m a, and we get back a value of type m (m b), on which join then does its magic, simplifying it to the monadic type m b.

So to the outside world, it looks like we are passing in an a to get an m b, but this definition allows monadic chaining, for a becomes m a, and then m (m b) is simplified to m b, which — and here's the kicker — is passed to the next monadically bound function.
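To see that definition in action (my own worked example, in the same notation), take m = Maybe, the value Just 3, and f x = Just (x + 1):

      fmap f (Just 3)      ⇒ Just (Just 4)    -- f lifted one step higher
      join (Just (Just 4)) ⇒ Just 4           -- the doubled Maybe simplified

so Just 3 >>= f ⇒ Just 4, and that Just 4 is what flows on to the next monadically bound function.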

You can chain this as far as you'd like:

m a >>= f >>= g >>= ... >>= h → m z

And since the computation occurs in the monadic domain, you are guaranteed the promise of that domain.

There is the power of monadic programming.

Given that explanation, and remembering that bind can be defined in terms of join, let's define Monad:


  public abstract class Monad<M, A> implements Joinable<M, A> {
      public <T> Monad<M, T> bind(Applicable<A, Monad<M, T>> f) throws Failure {
          Applicable<Monad<M, A>, Monad<M, Monad<M, T>>> a
              = (Applicable<Monad<M, A>, Monad<M, Monad<M, T>>>) fmap(f);
          Monad<M, Monad<M, T>> mmonad = a.apply(this);
          return (Monad<M, T>) mmonad.join();
      }

      public Monad<M, A> fail(String ex) throws Failure {
          throw new Failure(ex);
      }
  }

The bind implementation is exactly as previously discussed: bind is the application of the fmap of f with the joined result returned. So, we simply define join for subclasses, usually a trivial definition, and we have the power of bind automagically!

Now fail is an interesting beast in subclasses, because in the monadic domain, failure is not always a bad thing, or even an exceptional one, and can be quite, and very, useful. For example, fail represents 'zero' in the monadic domain, so 'failure' for list is the empty list, and 'failure' for Maybe is Nothing. So, when failure occurs, the computation can continue gracefully rather than always aborting automatically, as happens with the try/catch paradigm.

There you have it: Monads in Java!
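As a quick smoke test of the framework before we get to Maybe (my own sketch, not part of the original write-up), here is about the smallest possible monad under these interfaces: a hypothetical identity-style wrapper, where unit is new, fmap applies the function inside the wrapper, and join unwraps one layer of Id:

  // A hypothetical identity-style monad, just to exercise the framework.
  public class Id<A> extends Monad<Id, A> {
      private final A value;

      public Id(A value) {                 // η: unit comes for free via new
          this.value = value;
      }

      public A arg() throws Failure {
          return value;
      }

      public <T> Applicable<Id<A>, Id<T>> fmap(final Applicable<A, T> f) throws Failure {
          return new Applicable<Id<A>, Id<T>>() {
              public Id<T> apply(Id<A> arg) throws Failure {
                  return new Id<T>(f.apply(arg.arg()));
              }
          };
      }

      public Functor<Id, ?> join() throws Failure {    // μ: Id (Id x) becomes Id x
          try {
              return (Id<?>) value;
          } catch(ClassCastException ex) {
              throw new Failure("Joining on a flat Id!");
          }
      }

      public String toString() {
          return "Id " + value;
      }
  }

With join that trivial, bind comes along for free from Monad, exactly as advertised; Maybe, below, follows the same recipe with one more case to handle.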

The rest of the story: ... Maybe

So, we could've ended this article right now, and you have everything you need to do monadic programming in Java. But so what? A programming paradigm isn't useful if there are no practical applications. So let's give you one: Maybe.

Maybe is a binary type: it is either Nothing or Just x, where x is whatever value we're really looking for (as opposed to the usual case in Java: null).

So, let's define the Maybe type in Java.

First is the protocol that extends the Monad:


  public abstract class Maybe<A> extends Monad<Maybe, A> {
  

Do you see how the Maybe type declares itself as a Monad in its own definition? I just love that.

Anyway.

As mentioned before, fail for Maybe is Nothing:


      public Maybe<A> fail(String ex) throws Failure {
          return (Maybe<A>) NOTHING;
      }

(NOTHING is a constant in Maybe, to be defined later. Before we define it ('it' being NOTHING), recall that a Monad is a Functor, so we have to define fmap. Let's do that now.)


      protected abstract <T> Maybe<T>
              mbBind(Applicable<A, Monad<Maybe, T>> arg) throws Failure;

      public <T> Applicable<Maybe<A>, Maybe<T>> fmap(final Applicable<A, T> f)
              throws Failure {
          return new Applicable<Maybe<A>, Maybe<T>>() {
              public Maybe<T> apply(Maybe<A> arg) throws Failure {
                  Applicable<A, Monad<Maybe, T>> liFted
                          = new Applicable<A, Monad<Maybe, T>>() {
                      public Maybe<T> apply(A arg) throws Failure {
                          return Maybe.pure(f.apply(arg));
                      }
                  };
                  return (Maybe<T>) arg.mbBind(liFted);
              }
          };
      }

No surprises here. fmap lifts the function f: a → b to Maybe m ⇒ g: m a → m b where in this case the Functor is a Monad, and this case, that Monad is Maybe.[3]

There is the small issue(s) of what the static method pure and the method mbBind are, but these depend on the definitions of the subclasses NOTHING and Just, so let's define them now.


      public static final Maybe<?> NOTHING = new Maybe() {
          public String toString() {
              return "Nothing";
          }
          public Object arg() throws Failure {
              throw new Failure("Cannot extract a value from Nothing.");
          }
          public Functor join() throws Failure {
              return this;
          }
          protected Maybe mbBind(Applicable f) {
              return this;
          }
      };

Trivial definition, as the Null Object pattern is trivial. But do note that the bind operation does not perform the f computation, but skips it, returning Nothing, and forwarding the computation. This is expected behavior, for:

Nothing >>= f → Nothing

The definition of the internal class Just is nearly as simple:


      public final static class Just<J> extends Maybe<J> {
          public Just(J obj) {
              _unit = obj;
          }

          public Maybe<?> join() throws Failure {
              try {
                  return (Maybe<?>) _unit;
              } catch(ClassCastException ex) {
                  throw new Failure("Joining on a flat structure!");
              }
          }

          public String toString() {
              return "Just " + _unit;
          }

          public J arg() throws Failure {
              return _unit;
          }

          protected <T> Maybe<T> mbBind(Applicable<J, Monad<Maybe, T>> f)
                  throws Failure {
              return (Maybe<T>) f.apply(_unit);
          }

          private final J _unit;
      }

As you can see, the slight variation to NOTHING for Just is that join returns the _unit value if it's the Maybe type (and throws a Failure if it isn't), and mbBind applies the monadic function f to the Just value.

And with the definition of Just we get the static method pure:


      public static <T> Maybe<T> pure(T x) {
          return new Just<T>(x);
      }

And then we close out the Maybe implementation with:

}

Simple.
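And a quick usage sketch (hypothetical values and a made-up plus39 function of my own, just for illustration), showing both the Just path and the short-circuiting NOTHING path:

  public class MaybeDemo {
      public static void main(String[] args) throws Failure {
          // A made-up monadic function: add 39 inside the Maybe monad.
          Applicable<Integer, Monad<Maybe, Integer>> plus39 =
              new Applicable<Integer, Monad<Maybe, Integer>>() {
                  public Maybe<Integer> apply(Integer x) throws Failure {
                      return Maybe.pure(x + 39);
                  }
              };

          System.out.println(Maybe.pure(3).bind(plus39));                    // Just 42
          System.out.println(((Maybe<Integer>) Maybe.NOTHING).bind(plus39)); // Nothing
      }
  }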

Practical Example

Maybe is useful for any semideterministic programming, that is: where something may be true, or it may not be. But the question I keep getting, from Java coders, is this: "I have these chains of getter methods in my web interface to my data objects to set a result, but I often have nulls in some values gotten along the chain. How do I set the value from what I've gotten, or do nothing if there's a null along the way?"

"Fine, no problems ..." I begin, but then they interrupt me.

"No, I'm not done yet! I have like tons of these chained statements, and I don't want to abort on the first one that throws a NullPointerException, I just want to pass through the statement with the null gracefully and continue onto the next assignment. Can your weirdo stuff do that?"

Weirdo stuff? Excusez moi?

I don't feel it's apropos to point out that functionally pure languages have no (mutable) assignment, assignment being the thing that throws provability out the window and is therefore the root of all evil.

No matter how sorely tempted I am.

So, instead I say: "Yeah, that's a bigger problem [sotto voce: so you should switch to Haskell!], but the same solution applies."[4]

So, let's go over the problem and a set of solutions.

The problem is something like:


  try {
      foo.setD(getA().getB().getC().getD());
      foo.setE(getA().getB().getC().getE());

      bar.setJ(getF().getG().getH().getJ());
      bar.setM(getF().getG().getK().getM());

      // and ... oh, more than 2000 more assignments ... 'hypothetically'

  } catch(Exception ex) {
      ex.printStackTrace();
  }

And the problem is this: anywhere in any chain of gets, if something goes wrong, the entire computation is stopped from that point, even if there are, oh, let's say, 1000 more valid assignments.

A big, and real, problem. Or 'challenge,' if you prefer.

Now this whole problem would go away if the Null Object pattern was used everywhere. But how to enforce that? In the Java category, you cannot. Even if you make the generic Null Object the base object, the null is still Java's ⊥ — it's 'bottom' — as low as you go in a hierarchy, null is still an allowable argument everywhere.

And if getA(), or getB(), or getC() return null you've just failed out of your entire computation with an access attempt to a null pointer.

Solutions

Solution 1: test until you puke

The first solution is to test for null at every turn. And you know what that looks like, but here you go. Because why? Because if you think it or code it, you've got to look at it, or I've got to look at it, so here it is:


  A a = getA();
  if(a != null) {
      B b = a.getB();
      if(b != null) {
          C c = b.getC();
          if(c != null) {
              foo.setD(c.getD());
          }
      }
  }

What's the problem with this code?

Oh, no problems, just repeat that deep nesting for every single assignment? So you have [oh, let's say 'hypothetically'] 2000 of these deeply nested conditional blocks?

And that's on top of the fact that it buries the simple assignment:

foo.setD(getA().getB().getC().getD());

in decision logic that is totally unnecessary and entirely too much clutter.

All I'm doing is assigning a D to a foo! So why do I have to juggle all this conditional code in my head to reach that eventual assignment?

Solution 1 is bunk.

Solution 2: narrow the catch

So, solution 1 is untenable. But we need to assign where we can and skip where we can't. So how about this?


  try {
      foo.setD(getA().getB().getC().getD());
  } catch(Exception ex) {
      // Intentionally empty; silently don't assign if we cannot
  }

  try {
      foo.setE(getA().getB().getC().getE());
  } catch(Exception ex) {
      // ditto
  }

  try {
      bar.setJ(getF().getG().getH().getJ());
  } catch(Exception ex) {
      // ditto
  }

  try {
      bar.setM(getF().getG().getK().getM());
  } catch(Exception ex) {
      // ditto
  }

And I say to this solution, meh! Granted, it's much better than solution 1, but, again, you've turned a computational flow that was described declaratively in the problem into this set of staccato statements, forcing the reader of your code to switch into and out of context for every statement.

Wouldn't it be nice to have all the statements together in one block? Because, computationally, that's what is happening: a set of data is collected from one place and moved to another, and that is what we wish to describe by keeping these assignments grouped in one block.

Solution 3: Maybe Monad

And that's what we can do with monads. It's going to look a bit different from the usual Java fare you're used to, as we have to lift the statements in the Java category into expressions in the monadic one.

So here goes.

First of all, we need to define generic functions (not methods![5]) for getting values from objects and setting values into objects in the monadic domain:


  public abstract class SetterM<Receiver, Datum>
          implements Applicable<Datum, Monad<Maybe, Receiver>> {
      protected SetterM(Receiver r) {
          this.receiver = r;
      }

      // a monadic set function
      public final Monad<Maybe, Receiver> apply(Datum d) throws Failure {
          set(receiver, d);
          return Maybe.pure(receiver);
      }

      // subclasses implement with the particular set method being called
      protected abstract void set(Receiver r, Datum d);

      private final Receiver receiver;
  }

This declares a generic setter, so to define setD on the foo instance, we would do the following:


      SetterM<Foo, D> setterDinFoo = new SetterM<Foo, D>(foo) {
          protected void set(Foo f, D d) {
              f.setD(d);
          }
      };

The getter functional is similarly defined, with the returned value lifted into (or 'wrapped in,' if you prefer) the Maybe type:


  public abstract class GetterM<Container, Returned>
          implements Applicable<Container, Monad<Maybe, Returned>> {

      public Monad<Maybe, Returned> apply(Container c) throws Failure {
          Maybe<Returned> ans = (Maybe<Returned>) Maybe.NOTHING;
          Returned result = get(c);

          // c inside the monad is guaranteed to be an instance
          // but result has NO guarantees!

          if(result != null) {
              ans = Maybe.pure(result);
              // and now ans is inside the monad; it must have a value
              // ... even if that value is NOTHING
          }
          return ans;
      }

      protected abstract Returned get(Container c);
  }

With the above declaration, a getter monadic function (again, not method) is simply defined:


      GetterM<A, B> getterBfromA = new GetterM<A, B>() {
          protected B get(A a) {
              return a.getB();
          }
      };

In Haskell, the bind operator has a data-flow look to it: (>>=), so writing one of the 'assignments' is as simple as flowing the data to the setter:

getA >>= getB >>= getC >>= getD >>= setD foo

But we have no operator definitions, so we must soldier on using the Java syntax:


      try {
          getterA.apply(this).bind(getterBfromA).bind(getterCfromB).bind(getterDfromC).bind(setterDinFoo);
      } catch(Failure ex) {
          // a superfluous catch block
      }

That's one of the assignment expressions.[6]

Let's walk through what happens with a couple of examples.

Let's say that A has a value of a, but B is null. What happens?

  1. getterA forwards Just a to getterBfromA
  2. getterBfromA gets a null, converts that into a NOTHING and forwards that to getterCfromB
  3. getterCfromB forwards the NOTHING to getterDfromC
  4. getterDfromC forwards the NOTHING to setterDinFoo
  5. setterDinFoo simply returns NOTHING.

And we're done (that is: we're done doing, literally: NOTHING).

Let's counter with an example where every value has an instance:

  1. getterA forwards Just a to getterBfromA
  2. getterBfromA gets a b, converts that into Just b and forwards that to getterCfromB
  3. getterCfromB gets a c, converts that into Just c and forwards that to getterDfromC
  4. getterDfromC gets a d, converts that into Just d and forwards that to setterDinFoo
  5. setterDinFoo gets the wrapped d and sets that value in the Foo instance.

Voilà!

The beauty of this methodology is that we can put them all into one block, and every expression will be evaluated and in a type-safe manner, too, as NOTHING will protect us from a null pointer access:


      try {
          getterA.apply(this).bind(getterBfromA).bind(getterCfromB).bind(getterDfromC).bind(setterDinFoo);
          System.out.println("Hey, I've executed the first assignment");

          getterA.apply(this).bind(getterBfromA).bind(getterCfromB).bind(getterEfromC).bind(setterEinFoo);
          System.out.println("...and the second...");

          getterF.apply(this).bind(getterGfromF).bind(getterHfromG).bind(getterJfromH).bind(setterJinBar);
          System.out.println("...and third...");

          getterF.apply(this).bind(getterGfromF).bind(getterKfromG).bind(getterMfromK).bind(setterMinBar);
          System.out.println("...and fourth...");

          // another more than 2000 of these expressions

          System.out.println("...and we're done with every assignment we could assign...");

      } catch(Failure ex) {
          // a superfluous catch block
      }

And, because the monadic Maybe works the way it does, every assignment that can occur will, and the ones that cannot will be 'skipped' (by flowing NOTHING through the rest of the computation), and the proof will be in the pudding ... all of the println statements will show up on the standard output.

Summary

We presented an implementation of monads in Java, with a concrete example of the Monad type: the Maybe type. We then showed that by putting a set of computations into the monadic domain, the work can be performed in a type-safe manner, even in the possible presence of nulls. This is just one example of the practical application of monadic programming in Java: the framework presented here confers the benefits of monadic programming research in general to projects done in Java. Use the framework to discover for yourself the leverage monadic programming gives.


End notes

  
[1] Exact verbiage: Phil says: January 14, 2009 at 7:01 pm "I think that this could all be easily cleared up if one of you FP guys would just show us how to write one of these monad thingies in Java ..." To which a semi-mocking response was: 'you know, objects would be a lot easier for me to understand if you provided an implementation in Haskell.' Exact verbiage: Cory says: April 23, 2009 at 6:26 am "I may be a bit late to the game here, but Phil, that can be rephrased: I think that this could all be easily cleared up if one of you OO guys would just show us how to write one of these object thingies in Haskell ... Of course you can, but it's a different type of abstraction for a different way of thinking about programming ..." I write 'semi-mocking,' because OOP with message passing has been implemented in a Haskell-like language: Common Lisp. The Art of the Metaobject Protocol, a wonderful book, covers the 'traditional' object-oriented programming methodology, and its implementation, quite thoroughly and simply.
[2] I leave it as an exercise to the reader to scope the Java category (consult the Ω-calculus papers of Cardelli, et al.).
[3] Monad<Maybe, A> and Maybe<A> are type-equivalent (but try telling the poor Java 1.6 compiler that).
[4] "The same solution applies!" Geddit? Haskell is an applicative language, so the same solution applies! GEDDIT?
[5] The distinction I raise here between methods and functions is the following: a method is enclosed in and owned by the defining class. A function, on the other hand, is a free-standing object and not necessarily contained in nor attached to an object.
[6] Again, there is a distinction between an expression and a statement. An expression returns a value; a statement does not. A statement is purely imperative ... 'do something.' On the other hand, an expression can be purely functional ... 'do'ing nothing, as it were, e.g.: 3 + 4. A pure expression has provable properties that can be attached to it, whereas it is extremely difficult to make assertions about statements in the general sense. The upshot is that having pure (non-side-effecting) expressions in code allows us to reason about that code from a firm theoretical foundational basis. Such code is easier to test and to maintain. Side-effecting statement code makes reasoning about, testing, and maintaining such code rather difficult.

Thursday, April 14, 2011

Attributed Variables: their uses and one implementation

Here's a problem that comes up enough, computationally: find a value within some range. You don't know what the value is at the moment, but you will know it, eventually, as the range becomes further restricted by refinements of the constraints or data.

Now, in general, this is an easy problem for Haskell to solve, and Haskell provides more than one way to approach the solution. For example, if we have some x in some domain D, one way to look at an arbitrary x until we find the one, true x we are looking for is the following code snippet:

> [ x | x <- D, ... further constraints ]

More variables? Interrelated ones? No problem:

> [ (x, y, z) | x <- D, y <- D, z <- D, x < y, ... further constraints ]

Okay, let's take it to the next level. What if the interrelation is such that one variable consumes from the domain, so that value is now unavailable to other variables? Pithily: the variables' values are mutually exclusive.

Well, one way to go about that is to embed a guard for each variable into the list comprehension:

> [ (x, y, z) | x <- D, y <- D, y /= x, z <- D, z `notElem` [x,y], ...

The drawback here is that this kind of guarded coding becomes tedious and prone to errors. On top of that we see that the guard follows the select, causing unnecessary backtracking through the search space,[1] especially given that we know the guarded values a priori.

Well, then, we can automate guarded selection with a state (transformer) monad:[2]

> ...
> f = do x <- choose
>        y <- choose
>        z <- choose
>        ...
> runStateT f [0..9]


Okay, excellent! All of this works and works well … for a fixed and predetermined variable set.

The Problem



Things start to get tricky for me, programmatically, however, when one set of constrained values spills over onto another set of constrained values, such as when the results of one generation in a cellular automaton or genetic (algorithm) pool affect the next generation:

> evalStateT (gen row1guards) domain >>= \ans1 ->
> evalStateT (gen (row2guards ans1)) domain >>= \ans2 ->
> evalStateT (gen (row3guards ans1 ans2)) domain >>= \ans3 ->
> ... etc


The state transformer isn’t enough, so it needs to be some RWS monad, perhaps pushing the interim results into the Reader monad and the accumulated ones into Writer while the State monad handles the flow of the computation?

It seems to me that the work becomes more about monad management and less about the computation itself.[3]

Then there’s the even more general computation. What if we do not know, a priori, how many values we are generating in our problem solving? The problem space I’m looking at is something like:
For a variable number of variables, map selection and guards for each variable.

How do I write that as a list comprehension expression, or as a monadic one? The problem with Haskell variables is that they aren't (variables), and so something so easily accomplished in a Prolog program:

kakuro([], Vars, Functor) :-
    Fn =.. [Functor, Vars],
    call(Fn).
kakuro([H|T], Acc, Functor) :-
    assign(H),
    kakuro(T, [H|Acc], Functor).


called with:

?- kakuro([A, B, C, D], [], sum_equals_10).

or:

?- kakuro([X, Y, Z], [], sum_equals_10).

... or with a list of any other length isn't so easily written out in Haskell when I go to attack the problem. After all, what is the type of a list of unbound variables? Haskell doesn't (so easily) allow this, and I was stuck: I didn't know how to proceed using monadic programming or list comprehensions.

A solution



Fortunately for me, I've been studying constraint-based programming, and that research has led me to some papers on attributed variables,[4] which facilitate that programming style.

Attributed variables are variables ranging over a domain that can have attributes attached to them that restrict (or constrict) the range of values. It's a very simple step to go from a set of unbound attributed variables to a (simple or facile) constraint solver by adding (constraining) attributes to the variables.

More formally, I declare (denotationally) an attributed variable as:

> data Attribute a = Attribute [Constraint a] (Domain a)

where Constraint and Domain are declared as:

> newtype Constraint a = Constraint { resolve :: a -> Bool }

> type Domain a = [a]


Or, to put words to it, an Attributed Variable is an object that has a set of (constraining) functions over an assigned domain.[5]
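For example (a couple of throwaway constraints of my own, not from the papers or the code that follows), a Constraint is nothing more than a wrapped predicate:

> -- hypothetical example constraints over Int, just to show the shape:
> isEven, atMost7 :: Constraint Int
> isEven  = Constraint even
> atMost7 = Constraint (<= 7)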

The above denotational declaration of attributed variables works, but I find, operationally, grounding an attributed variable once it's reduced to a single value to be useful:

> data Attribute a = Attribute [Constraint a] (Domain a)
> | Ground a


So, that's my declaration.[8] How is this implemented in practice?

Implementation



The approach I take is that an attributed variable is created in a free state over its domain:

> makeAttrib :: Domain a -> Attribute a
> makeAttrib list = Attribute [] list


And then, as constraints are added to the attributed variable, the system 'tries to solve' the attributed variable over the set of the constraints:

> constrain (Attribute constraints dataset) constraint
>     = solve (Attribute (Constraint constraint:constraints) dataset)
> constrain (Ground x) _ = Ground x


And if the solution results in a singleton list, then the attributed variable becomes Ground:

> solve attr@(Attribute _ []) = attr
> solve attr@(Attribute constraints list@(h:t))
>     = let result = solve' constraints list
>       in if length result == 1
>          then Ground (head result)
>          else Attribute constraints result


Of course, the 'solution' to a Ground attributed variable is itself:

> solve (Ground x) = Ground x

The above code is simply a dispatch along the binary type. The worker-bee, solve', is simply (again) a hand-rolled implementation of filter:

> solve' :: [Constraint a] -> [a] -> [a]
> solve' _ [] = []
> solve' constraints (h:t) =
>     if (solveConstraints constraints h)
>     then (h:solve' constraints t)
>     else solve' constraints t

> solveConstraints :: [Constraint a] -> a -> Bool
> solveConstraints [] _ = True
> solveConstraints (f:rest) x = resolve f x && solveConstraints rest x


So something from Control.Applicative to reduce that to a one-liner? I leave that as an exercise to the reader.
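Taking up that exercise myself (one possible answer, a sketch rather than the definitive spelling):

> -- solve' is just filter with the conjunction of all the constraints ...
> solve'' :: [Constraint a] -> [a] -> [a]
> solve'' constraints = filter (\x -> all (`resolve` x) constraints)

> -- ... or, leaning on the ((->) a) Applicative for a point-free flavor
> -- (sequenceA comes from Data.Traversable):
> solve''' :: [Constraint a] -> [a] -> [a]
> solve''' constraints = filter (and . sequenceA (map resolve constraints))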

And there you have it, that is a working implementation of attributed variables. I have test functions for groundedness, solvability and the current state of the domain of an active attributed variable, but those are simple helper functions, easily implemented, and provided in the attached source code.

Uses



So, now, what do we have?

Now we have variables over which we may specify a domain ('loosely' typing a variable) and then constrain. How can we use such a system?

One way is that we can write constraint logic programs in Haskell. The illustration from Prolog (above) has a nearly direct transliteration into Haskell (with the additional benefit of strong typing):

> kakuro :: [Int] -> Int -> [[Int]]
> kakuro list total = nub $ summer (sort $ map trib list) [] total

> trib :: Int -> Attribute Int
> trib x | x == 0    = makeAttrib [1..9]
>        | otherwise = Ground x

> summer :: [Attribute Int] -> [Int] -> Int -> [[Int]]
> summer [] list total = do
>     guard $ sum list == total
>     return $ sort list
> summer (h:t) list total = do
>     a <- domain $ constrain h (flip notElem list)
>     summer t (a:list) total


And with the above code, we can now do the following:

*Kakuro> kakuro [0,0,0] 10
[[1,2,7],[1,3,6],[1,4,5],[2,3,5]]
*Kakuro> kakuro [0,0,0,0] 10
[[1,2,3,4]]


Note that I use sort above, but attributed variables are not (denotationally) ordered types. Operationally, that is: in logical programming, it makes sense to eliminate early, so if we have Ground values, we wish to consume those first. So, to that end, I declare the following instances:

> instance (Eq a) => Eq (Attribute a) where
>     Ground x == Ground y = x == y
>     _ == _ = False

> instance (Ord a) => Ord (Attribute a) where
>     Ground x <= Ground y = x <= y
>     Ground _ <= _ = True
>     _ <= _ = False


And with those instance declarations, sort automagically works. Neat!

Interlude: Haskell is "Neat"



I don't know about you, but I'm guessing that many of you are taking the gift that Haskell is for granted. Or I suppose I should say 'the way of thinking that using Haskell provides.' Over and over again in industry (yes: stone knives and bearskins) I hear 'that can't be done.' And I find myself puzzled. What's the showstopper here?

For example, this past week at work, we needed to convert a data mapping document in Excel (stop snickering, please) into an equivalently flat XML schema representation with an accompanying XML sample file. So, fields of the form:

/Field/Name="BusinessAddress"
/Field/Value="plaintext"

/Field/Name="BusinessAddress"
/Field/Value="encrypted"


And there were enumerated values as well:

/Field/Name="PropertyOccupancyTypeOwnerOccupied"
/Field/Value="plaintext"

/Field/Name="PropertyOccupancyTypeRentedToTenant"
/Field/Value="plaintext"

/Field/Name="PropertyOccupancyTypeVacant"
/Field/Value="plaintext"


There were over 1,000 of these element names. I mentioned I would write a parser to generate the XSD and XML and was told by the development team point-blank that that was impossible and the only way to go about this transcription was manually, line-by-line, by hand.

So I wrote the script. I suppose I could have used Perl, but as I'm on a Windows box, the shell isn't very inviting for Perl-like piping, so I used Haskell, stripping the metainformation and pushing the parsed names into a map (or, to be precise, a Data.Map) where enumerated (mutually exclusive instance) types went into a list named by their type name and other values were singletons. Then for the XSD I simply iterated over the elements of the map (recursively), and for the sample XML file, I just chose an element (in this case, the first element) of each type.

The net result was that all the elements the analysis team had compiled into an Excel spreadsheet by hand over a six-week period, I was able to deliver in two days as a full XSD schema and a comprehensive XML sample file.

Two members from the development team assisted me on this task; one of them was the lead developer. He was intrigued and asked me: "What does this script do?" after being impressed that he could generate a schema definition file and a sample file in seconds. So I explained Haskell as a 'data-flow' language, and I traced the flow of the input file through the monad and function composition, showing the translation of the internal representation into XSD and then, in a different context, into XML. I'm not sure he got my explanation, but he did walk away impressed with what Haskell can do.

Yes, these are simple things: parsing, mapping, iterating. But for most of the software engineering community in industry, it is easier and faster to go through, line-by-line, by hand, than to write a piece of code to automate that process ("Can that even be done?" "No, it's impossible, just do it manually").

I have encountered this over and over again in industry. Teams are willing to expend more than two hundred person-hours on a recurring task and are willing to accept the associated fatigue errors to schedule systems by hand. Then they are shocked that I build a functional or logic-based system that autogenerates a pristine result in sub-second time.

Where does this leave us?

What I see as status quo is that industry is resigned to slog through repetitive tasks manually, because it's too hard in Language X to consider tackling the problem: more time would be spent writing, compiling and fixing a program to do the task than to do the task by hand.

On the academic side, I see researchers isolated from industry, feeling their advances have no practical use as there appears to be no interest from industry. I attended a talk on dependent types where the thesis was that this typing makes coding more declarative, more powerful and less error prone. I approached the lecturer and asked how I could implement such a system and was asked what application dependent types had for industry.

Hm. Let me think on that for a while.

Why would industry be interested in coding more declaratively, more powerfully, and with less errors? I was flabbergasted that the researcher saw no benefit for industry in his presentation.

We have a powerful tool in Haskell. We can do more, cleaner, safer, and do it faster than the mainstream alternatives industry picks over and over again by default.

I've found, however, that talking about typing, or monads, or pure functional programming only causes the eyes to glaze.

What lights a fire in management's eyes? When you rescue an overdue project, delivering before the deadline, and you educate other workers to be as productive ... perhaps not writing Haskell themselves (I've been asked: "Is it like Pascal?" ... and that's after a code walkthrough), but using the tools you've used to deliver these 'tremendous' results, and earning their trust that when you say, 'just let me write a Haskell script to do that: I'll take a day to write it, and we'll start getting results right away,' that that is actually what happens. I've found that 'we should use Haskell because of [all the wonderful arguments put forward]' does not convince anybody. Delivering 'incredible' and 'impossible' results, significantly faster than anticipated (two days instead of two months for the XML project)? That moves mountains.

Where I work, Haskell is installed on two lead developers' systems other than my own, and I'm given permission to code in Haskell on a 'we only allow Language X' project.

Other Uses



So attributed variables can be used to help one solve kakuro puzzles, or to write a sudoku solver. Fine.

And I've been inching my way toward attributed variables so they can be applied to symbolic logic puzzles as well, replacing my Whatever type with attributed variables over the enumerated domain.

So the domain is not restricted to Number, but these are not very interesting or pragmatic examples in and of themselves. Not very gripping.

So how about this example?

*Stocks> makeAttrib [Buy, Sell, Hold]

with Constraints being input feeds filtered through weighted moving averages and other indicators. Now this is a system of keen interest for me and 'currently under active and intense research.'

Will such a system work or not work? I'm not at liberty to say,[9] but here is an area that appears to fit well in the language space of constraint programming.

Of the problem spaces attacked by software projects — workflow analysis, data mining, planning and scheduling, budgeting, forecasting, (war)gaming, modelling and simulation — many seem good candidates for constraint-based reasoning. Attributed variables give Haskell programmers another way to express these problem domains.



Endnotes:



[1] Y’all don’t mind if I speak logically, and in the logic-programming paradigm, do you?

[2] See my article in The Monad.Reader, issue 11, “MonadPlus: What a Super Monad!” for an implementation of choose, et al.

[3] No, I’m not speaking disparagingly of the monadic programming style. I rather like that style, and use it in my domain language of predicate logic that I layer on top of ‘regular’ Haskell programming. The beauty of monads, for me, is that they are invisible (in coding them) and powerful (in their effect). When the code becomes more about monads and less of what they are there to do is when I see this as an opportunity to examine things in a different light.

[4] http://www.swi-prolog.org/pldoc/doc_for?object=section(2,'6.1',swi('/doc/Manual/attvar.html'))

[5] My representation of Domain is straightforward and workable for most cases, but it hides more than a few compromises (or 'sell-outs') beneath the surface. For example, it makes the (gross) assumption that the range of the domain is enumerable. If Domain is in ℜ, that no longer holds (at all), so Domain shouldn't have a list representation but a pair of (bookending) bounds. But then what if Domain extends beyond ℜ and goes into the surreal numbers?[6] Then we have infinitesimals such as '*' (pron: 'star') where:

* ≯ 0, * ≮ 0, * ≠ 0, yet * + * = 0

and:

* || 0 (that is: 'star' is incomparable to 0)

So how does one bounds-check on incomparables?

But these concerns are, in the main, not ones to lose one's head over. For the most part, simple linear programming solves a vast majority of the World's problems,[7] and those interesting cases are exactly that: interesting cases.

[6] http://en.wikipedia.org/wiki/Surreal_number

[7] The preface of How to Solve It: Modern Heuristics by Zbigniew Michalewicz and David B. Fogel makes this assertion, from hard-won experience in the field. Seconded.

[8] This is one approach to attributed variables, that is: constraints or attributes are applied to the variable over the domain until it is reduced to a single value, at which point the constraints and even the domain become superfluous and simply go away. An advantage to this approach is that a test for groundedness becomes trivial. A disadvantage is that once grounded one loses the path or information that got one there except through the data flow through the algorithm (the stack trace, as it were). Another disadvantage is that an attributed variable, once grounded, automatically rejects constraints. This may lead to unexpected program behavior, because an AV grounded to 5 with a new constraint of x > 23 should fail, but my implementation forwards the 5 value in a success state.

[9] An audit trail of at least a year of data is necessary before any claims can be made.