What >>= (pronounced "bind") does is give the value the monad contains to the function, which then uses it. More formally defined (specific to sigfpe's trivial monad example):

W a >>= f ≡ f a

An example of the syntax (again, following sigfpe's post) is as follows:

W 5 >>= f

(where f was defined in sigfpe's post as an increment function for his trivial monad: f x = W (x + 1)). The above example results in the (monadic) value W 6. So, now you see >>= (trivially) in action, and sigfpe asks us to put it to use in the following exercises.

The first exercise that we'll continue on in this entry is exercise 3, which is to prove the three monad laws:
Left Identity: return a >>= f ≡ f a
or, a monadized value bound to a function is the same as applying a function to the plain value.
Right Identity: m >>= return ≡ m
or, a monad bound to return is just that monad again (that is, (>>= return) is the identity function for monads), and finally:
Associativity: (m >>= f) >>= g ≡ m >>= (λ x . f x >>= g)
which is similar in spirit to associativity for addition, i.e.:
(a + b) + c ≡ a + (b + c)
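Before proving anything, it helps to have the pieces in runnable form. Here is a minimal sketch of sigfpe's trivial monad in modern Haskell; the Functor and Applicative instances are boilerplate that current GHC requires, and deriving Eq is an assumption I've added so values can be compared directly:

```haskell
-- A sketch of sigfpe's trivial monad W and his increment function f.
-- Deriving Eq is an assumption added here so values can be compared.
newtype W a = W a deriving (Show, Eq)

instance Functor W where
  fmap g (W a) = W (g a)

instance Applicative W where
  pure = W                       -- pure (return) just wraps the value
  W g <*> W a = W (g a)

instance Monad W where
  W a >>= f = f a                -- the axiom: W a >>= f ≡ f a

-- sigfpe's increment function for the trivial monad:
f :: Int -> W Int
f x = W (x + 1)

main :: IO ()
main = print (W 5 >>= f)         -- prints: W 6
```

With these two definitions as axioms, the proofs below are just equational rewriting.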
Just a side note here: some people become discouraged when they are told "to prove" something, whereas if they are asked "to solve" a problem, they turn right to it. Fogel and Michalewicz rightly point out that proving and solving are basically the same process!
My dear wife asked me, as I was attempting to (re-)prove liftM2 (+) for exercise 2, "Is proving difficult?" The question gave me pause: "Well, sometimes the simplest proof can elude me for days, and sometimes something that appears much more difficult comes in a flash."
So, let's prove each of these laws in turn, using the definitions of >>= and return as axioms. First up, Left Identity:
Prove: return a >>= f ≡ f a

1. return a >>= f  ≡  W a >>= f     (return definition)
2.                 ≡  f a           (>>= definition)

Q.E.D.
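The law can also be spot-checked on a concrete value. A minimal, self-contained sketch (re-stating W and f from sigfpe's post; the Eq instance is an assumption added for the comparison):

```haskell
newtype W a = W a deriving (Show, Eq)

instance Functor W where
  fmap g (W a) = W (g a)
instance Applicative W where
  pure = W
  W g <*> W a = W (g a)
instance Monad W where
  W a >>= f = f a

f :: Int -> W Int
f x = W (x + 1)

-- Left Identity: return a >>= f ≡ f a, checked at a = 5
main :: IO ()
main = print ((return 5 >>= f) == f 5)   -- prints: True
```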
Next, Right Identity:
Prove: m >>= return ≡ m

1. m >>= return  ≡  W a >>= return  (monad identity)
2.               ≡  return a        (>>= definition)
3.               ≡  W a             (return definition)
4.               ≡  m               (monad identity)

Q.E.D.
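Again, a concrete spot-check (same sketch of W as before; the derived Eq is an assumption added for the comparison):

```haskell
newtype W a = W a deriving (Show, Eq)

instance Functor W where
  fmap g (W a) = W (g a)
instance Applicative W where
  pure = W
  W g <*> W a = W (g a)
instance Monad W where
  W a >>= f = f a

-- Right Identity: m >>= return ≡ m, checked at m = W 5
main :: IO ()
main = print ((W 5 >>= return) == W 5)   -- prints: True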
And, finally, let's prove associativity. We'll start out by reducing the left hand side ("lhs") to something simpler:
Simplify: (m >>= f) >>= g

1. (m >>= f) >>= g  ≡  (W a >>= f) >>= g  (monad identity)
2.                  ≡  f a >>= g          (>>= definition)
... but it was here that I became stymied, for I didn't see how to transform f a >>= g into something resembling the right hand side (rhs), which is m >>= (λ x . f x >>= g). Do you? Give it a try! ...
The thing to do, when one becomes stuck solving or proving something, is to acknowledge that fact, and then to try something entirely different. One way to try something different is to rework the simplification of the lhs so it assumes a form much closer to the rhs. I didn't see this approach as fruitful, however: I had reduced the lhs to something pretty simple, so a rework would only make the lhs more complicated -- and clutter often obscures correlation. Another approach is to leave the lhs be and work on simplifying the rhs -- maybe a simplification there would yield a useful correspondence. Let's do that:
Simplify: m >>= (λ x . f x >>= g)

1. m >>= (λ x . f x >>= g)  ≡  W a >>= (λ x . f x >>= g)  (monad identity)
2.                          ≡  (λ x . f x >>= g) a        (>>= definition)
3.                          ≡  f a >>= g                  (β-reduction)
Ha! Now we see that the simplification of the lhs is identical to the simplification of the rhs. That was easier than expected.
Q.E.D.
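And a concrete spot-check of associativity, using a second function g. Note that g here is my own hypothetical example (a doubling function, not from sigfpe's post), and the Eq instance is again an assumption added for the comparison:

```haskell
newtype W a = W a deriving (Show, Eq)

instance Functor W where
  fmap h (W a) = W (h a)
instance Applicative W where
  pure = W
  W h <*> W a = W (h a)
instance Monad W where
  W a >>= f = f a

f, g :: Int -> W Int
f x = W (x + 1)      -- sigfpe's increment
g x = W (x * 2)      -- hypothetical second function for the check

-- Associativity: (m >>= f) >>= g ≡ m >>= (λ x . f x >>= g), at m = W 5
main :: IO ()
main = print (((W 5 >>= f) >>= g) == (W 5 >>= \x -> f x >>= g))
-- prints: True   (both sides reduce to W 12)
```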
To summarize this exercise: proving something or solving something is really a rather simple task -- using the tools you have to match the solution to the problem. The fun part comes in when you discover new ways to use the given tools, or when you find the given tools aren't sufficient, which means finding new tools to solve the problem. Once you have your tools, along with the problem statement and desired outcome, it is often the case that there is a simple walk from the problem to the solution, from the solution to the problem, or both meet somewhere in the middle. Odds are, you are not the first one solving a problem; if you need help, there's a whole big world out there -- someone has probably already left the solution out there for you to find.
Exercise 4: Monads cannot be unwrapped back to the plain value they carry, but if a monad is layered, that is, a monad contains a monad, it is possible to remove one or more layers. join is a function that does this "unlayering". Given a layered monad mm = W (W x), join mm will return the value W x. The declaration for join is:

join :: W (W a) → W a

Define it.
Solution 4: Well, the declaration for join looks suspiciously like an application of >>= (you are pronouncing that operator "bind", right?), as it hands the value carried by the monad to the function being applied:

join mm = mm >>= ?f?

The question then becomes "what function should the carried value be applied to?" The thing is, >>= has already given us the answer, W x, so we just wish to have join return that value unaltered. Fortunately for us, there is such a function from combinatory logic: the identity combinator, I, which has the form λ x . x, and Haskell already has that definition (it is named id). So:

join mm = mm >>= id
Q.E.D.
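The derivation can be checked directly. A minimal sketch (same trivial-monad definitions as before; deriving Eq is an assumption added for the test):

```haskell
newtype W a = W a deriving (Show, Eq)

instance Functor W where
  fmap g (W a) = W (g a)
instance Applicative W where
  pure = W
  W g <*> W a = W (g a)
instance Monad W where
  W a >>= f = f a

-- Exercise 4's join: peel one layer off a layered W.
-- mm >>= id hands the inner W a to id, which returns it unaltered.
join :: W (W a) -> W a
join mm = mm >>= id

main :: IO ()
main = print (join (W (W 5)))   -- prints: W 5
```

(This is exactly the shape of the general law join mm = mm >>= id; the Prelude's id is the identity combinator I mentioned above.)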
In conclusion, I hope these entries have helped you to see that monads are actually a rather simple concept that is easy to use. This is just the tip of the iceberg, however: monads are used pervasively in Haskell, tackling a great many tasks. May I suggest the excellent and practical tutorial on Monad Transformers? Please do not be put off by the imposing front page of the paper, as this tutorial shows how monads and their transformers shine, giving a step-by-step introduction of new monadic functionality into a plain-vanilla system.