The above is a caveat: this is an introductory guide from a beginner's perspective. YMMV. But since the other material out there on comonads is rather scarce to begin with, and rather intimidating where it does exist, perhaps this entry will spark the interest of readers wishing to start off with comonads and do some basic things with them.
The comonad definition I use is from Edward Kmett's Comonad.Reader category-extras library. What a sweet library it is; ⊥-trophy is in the ether-mail.
Introduction
Let's talk a bit about what a comonad is, first. A comonad is a special way of looking at a data structure that allows one to work with the individual members of that data structure in the context of the entire data structure. It takes the opposite approach to the data in the data structure than monads do, so the monadic bind (>>=) ...

>>= :: Monad m ⇒ m a → (a → m b) → m b

... has its dual in the comonad (commonly called cobind, but sensibly named extend (=>>) in Control.Comonad). Its signature is as follows:

=>> :: Comonad w ⇒ w a → (w a → b) → w b
I started gaining insight into comonads by studying the extender function — it takes the comonad in its entirety and resolves it to a value in context. That value is then used to reassemble the comonad into its new form. In short, the extender synthesizes the comonad. That's extend.

Now, just as monads have the unit function (return) that creates a monad from a plain value ...

return :: Monad m ⇒ a → m a

... so, too, the comonad has the dual of that function, called counit (or, again, more sensibly called extract in the library I'm using) ...

extract :: (Copointed f, Functor f) ⇒ f a → a

... which, instead of injecting a value into the comonad, extracts a value from the comonad. That's a comonad, in a nutshell: the dual of a monad.
Synthesis
Okay, then, let's jump right in to creating and using a Comonad instance. We'll start with the list data type:
> import Control.Comonad
> import Control.Arrow
> import List

> instance Copointed [] where
>    extract = head

> instance Comonad [] where
>    extend fn []         = []
>    extend fn list@(h:t) = fn list : (t =>> fn)

Just like for monads, where m >>= return is the (right) identity, we can show that for comonads w =>> extract is an identity:

[1..10] =>> extract
What's the answer that you obtain? Why?

Now that we can use the whole list to create each element of the new list, thanks to the Comonad protocol, let's solve the example problem from Why Attribute Grammars Matter: replace each element with its difference from the average of the list. With comonads, that's easy to express!
> avg list  = sum list / genericLength list
> diff list = list =>> (extract &&& avg >>> uncurry (-))

What does diff [1..10] give you, and why?

Now, the comonad did not eliminate the problem of multiple list traversals that Wouter points out in his article (please do read it!), but comonads do show a nice, simple, natural way to synthesize the whole to build each part. Beautiful!
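For readers checking their answers to the two questions above: with the list instance given here, the extended function is applied to the whole list and then to each successive tail, so the unfolding looks roughly like this (a sketch of the evaluation, not more library code):

[1,2,3] =>> fn      ≡  [fn [1,2,3], fn [2,3], fn [3]]
[1..10] =>> extract ≡  [head [1..10], head [2..10], ..., head [10]]  ≡  [1..10]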
A stream ...
Streams can be considered infinite lists, and are of the form:
a :< b :< ...
Uustalu and Vene, of course, discuss the Stream comonad, but I obtained my implementation from a site with a collection of Comonad types, modifying it to work with Kmett's protocol. As with lists, it is quite easy to make streams an instance of the Comonad class:

> module Control.Comonad.Stream where
> import Control.Comonad
> data Stream a = a :< Stream a
> instance Show a ⇒ Show (Stream a) where
>    -- show only the first five elements, then an ellipsis
>    show stream = show' (replicate 5 undefined) stream
>       where show' [] _ = "..."
>             show' (_:t) (x :< xs)
>                = show x ++ " :< " ++ show' t xs
> instance Functor Stream where
>    fmap f (x :< xs) = f x :< fmap f xs

> instance Copointed Stream where
>    extract (v :< _) = v

> instance Comonad Stream where
>    -- f sees the whole stream, then each successive tail
>    extend f stream@(x :< xs) = f stream :< extend f xs
> produce :: (a → a) → a → Stream a
> produce f seed = let x = f seed in x :< produce f x
Um, it is the user's responsibility to guard the functions below (streams are infinite, so neither ever terminates on its own) ...
> toList :: Stream a → [a]
> toList (x :< xs) = x : toList xs
> mapS :: (a → b) → Stream a → Stream b
> mapS = fmap
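A quick sanity check in GHCi (take a finite prefix yourself, since toList on its own never finishes):

take 5 (toList (produce (+1) 0))              ≡  [1,2,3,4,5]
take 5 (toList (mapS (*2) (produce (+1) 0)))  ≡  [2,4,6,8,10]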
So, a stream of 1s is:

let ones = 1 :< (ones =>> (extract >>> id))
ones ≡ 1 :< 1 :< 1 :< 1 :< 1 :< ...
The natural numbers are:

let nats = 0 :< (nats =>> (extract >>> succ))
nats ≡ 0 :< 1 :< 2 :< 3 :< 4 :< ...
And a stream of primes is ... um, yeah.

... of primes
The ease with which we generated the stream of 1s and the natural numbers would lead one to believe that generating primes would follow the same pattern. The difference here is that a prime number is a number not evenly divisible by any smaller prime number. So, with the schema for streams as above, one would need to know all the preceding primes to find the current prime. A problem perfectly suited for comonads, it would seem, but, as the current element of the stream depends on the instantiation of the entire breadth of the stream, we run into a bit of a time problem waiting for our system to calculate the entire stream of primes in order to compute the current prime. Hm.
One needs to put some thought into how to go about computing a stream of primes. Uustalu and Vene created the concepts of Delay and Anticipation along with a History. All that is outside the scope of this article. Instead, let's consider Stream in a different light: why not embed the "history" (the primes we know so far) into the Stream itself? And what is the history? Is it not a list?
With that understanding, the stream of prime histories is:

> primeHist :: Stream [Integer]
> primeHist = [3,2] :< primeHist =>> get'

and our outstanding function to find the current prime (get') reduces to the standard Haskell hackery:

> get' :: Stream [Integer] → [Integer]
> get' stream = let primus    = extract stream
>                   candidate = head primus + 2
>               in getsome' candidate primus
>    where getsome' :: Integer → [Integer] → [Integer]
>          getsome' candidate primus
>             = if all (λp → candidate `rem` p ≠ 0) primus
>               then candidate : primus
>               else getsome' (2 + candidate) primus

So, now we have a stream of histories of primes; to convert that into a stream of primes is a simple step:

> primes = 2 :< (3 :< fmap head primeHist)
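And a quick GHCi-style check (assuming the definitions above load as written, this should yield the first ten primes):

take 10 (toList primes)  ≡  [2,3,5,7,11,13,17,19,23,29]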
And there you have it, a comonadic stream of primes!

Critique
Uustalu and Vene's implementation of prime suffers from its layers of complexity, but my implementation suffers at least as much for its relative simplicity. Each element of the stream contains the history of all the primes up to that prime. A much more efficient approach, both in space and time, would be to use the State monad or comonad to encapsulate and grow the history as the state ... or, for that matter, just use a list comprehension. And, of course, there's the "Genuine Sieve of Eratosthenes" [O'Neill, circa 2006] that this article blithely ignored.
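To make the list-comprehension remark concrete, the plain-list route can be as short as the classic trial-division "sieve" below. A sketch only; it is exactly the "unfaithful sieve" that O'Neill's paper takes to task, so it is simple rather than fast.

primes' :: [Integer]
primes' = sieve [2..]
   where sieve (p:xs) = p : sieve [x | x <- xs, x `rem` p ≠ 0]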
Comments

For comparison, and to demonstrate how this works in the imperative D language, of which I am a fanboy:
// difference of values from average
// works with any kind of collection
ElemType!(T)[] diff_avg(T)(T inp) {
  auto res = new ElemType!(T)[inp.length];
  real avg = 0.0;
  foreach (v; inp) avg += v;
  avg /= inp.length;
  foreach (i, v; inp) res[i] = abs(v - avg);
  return res;
}
And to define a stream of primes:
int delegate() primes() {
  return stuple([2], 3) /apply/ (ref int[] primes, ref int current) {
    while (true) {
      scope(exit)
        current ++;
      auto limit = cast(int) (sqrt(current) + 1);
      bool found;
      foreach (prime; primes) {
        if (prime > limit) break;
        if (current % prime == 0) {
          found = true; break;
        }
      }
      if (found) continue;
      primes ~= current;
      return current;
    }
  };
}
A bit longer, I guess, but not that much.
Sorry for the strange indentation. It appears Blogger lacks a code tag.
--feep
Your diff is wrong. The average of the list keeps getting taken on a different list... i.e.
diff [1..10]
returns
[-4.5,-4.0 .. 0.0]
when it should return
[-4.5, -3.5 .. 4.5]
And how does expressing this computation comonadically help to eliminate extra passes of the list? I'll make two points:
1. Your incorrect implementation of "diff" makes many, many needless passes. ;-) It's quadratic in the length of the list when the real diff is linear. Even the incorrect version could be implemented linearly.
2. Laziness and tying-the-knot can indeed be employed, sometimes, to eliminate multiple passes of data. However, applying that technique to "diff", a la "Why Attribute Grammars Matter", is a red herring. That computation _cannot_ be done in a single pass. Period.
In that paper, the "optimized" version of diff builds a list of thunks, not Nums, making the second necessary pass implicit. Compiled with GHC -O, the "optimized" version (1 explicit pass + 1 implicit pass) takes basically the same amount of time as the naive version. (3 explicit passes.)
Moreover, the "optimized" version breaks GHC's list fusion scheme, whereas the naive version is a "good producer." (though not a good consumer.) Thus the optimized version can actually perform _worse_ than the naive version when inlined inside the right context.
You are much better off combining the two passes in the naive version of "avg" into a single pass, implementing "diff" as 2 explicit passes. This will actually speed things up.
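For concreteness, here is a sketch of the two-pass version the commenter describes (my rendering of the idea, not code from the post): one fold computes the sum and the length together, and a second pass subtracts the mean.

import Data.List (foldl')

diff2 :: Fractional a ⇒ [a] → [a]
diff2 xs = map (subtract mean) xs    -- pass 2: shift every element by the mean
   where
      -- pass 1: accumulate sum and length in a single traversal
      (total, len) = foldl' (λ(s, n) x → (s + x, n + 1)) (0, 0) xs
      mean         = total / len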
("Why Attribute Grammars Matter" is an ok paper, probably worth reading. But that example is bad and my feeling is that it doesn't really make it's case of why they matter as a result.)
I'd love to see a comonadic version of diff, with the optimizations needed to reduce the number of passes to 2.