For the "quartet of programming styles", being: imperative, object-oriented, functional, and logical, it seems pretty clear that there are well-defined general boundaries (with enough wiggle room to cause fanatics to enjoy flame-wars as the mood struck them) to separate one style from another, with languages easily falling into one or more of those camps:
- C: imperative
- Smalltalk/Java: imperative/object-oriented
- Lisp (and Scheme and Dylan and ...)/Haskell/ML: functional
- Prolog (Mercury): logical
But when it comes to classifying something as "declarative programming", there has been plenty of talk of its benefits and drawbacks, yet not much more than superficial talk of what it actually is. Camps from both the functional programming community and the logic programming community stake claims over the declarativeness of their programming languages, but how does one recognize code as declarative? What is the measure by which the "declarativeness" of such code may be ascertained?
Until recently, such misgivings troubled me only marginally. I had it on authority from a Lisp giant, Patrick Winston, writing in a Prolog book (Bratko's 3rd edition of "Prolog Programming for Artificial Intelligence"), that the logical style of Prolog is declarative and the functional style is not. Before you send your flame, here's the quote:
"[...] In my view, modern Lisp is the champion of these [imperative] languages, for Lisp in its Common Lisp form is enormously expressive, but how to do something is still what the Lisp programmer is allowed to be expressive about. Prolog, on the other hand, is a language that clearly breaks away from the how-type languages, encouraging the programmer to describe situations and problems, not the detailed means by which the problems are to be solved."
(I add here that I also view the bulk of Haskell in this light: although it is possible to code declaratively in Haskell, most Haskell code I see is concerned with solving the problem (the "how") instead of describing the problem (the "what"). Put another way, it is natural to use the functional and imperative (with monadic do) styles, and it takes effort to use the logic style.)
Consequently, an introduction to Prolog is important for all students of Computer Science, for there is no better way to see what the notion of what-type programming is all about. [...]"
That had been my prejudice until recently, but correspondence with colleagues, including David F. Place, who had an excellent article on Monoid in The Monad.Reader, has opened this issue for reexamination. So, I turn to you, gentle reader. I present two very different programs below: one written in the logic style; one, functional. Both solve the same problem, and each author claims his own version is definitively declarative. I view the world through a particular lens, so I see one perspective. But I am burning with curiosity: do you see A) or B) as declarative, or both, or neither? And how do you justify your position?
A) the "logical" program approach:
import Control.Monad.State
import Control.Monad (guard)  -- guard is not re-exported by all versions of Control.Monad.State
import Data.List

-- all the ways to pick one element from a list, paired with the leftover elements
splits :: (Eq a) => [a] -> [(a, [a])]
splits list = list >>= \x -> return (x, delete x list)

-- nondeterministically choose one unused digit, removing it from the pool held in the state
choose :: Eq a => StateT [a] [] a
choose = StateT $ \s -> splits s

-- num (defined with program B below) folds a list of digits into the number they spell
sendmory' :: StateT [Int] [] [Int]
sendmory' = do
  let m = 1
  let o = 0
  s <- choose
  guard (s > 7)
  e <- choose
  d <- choose
  y <- choose
  n <- choose
  r <- choose
  guard (num [s, e, n, d] + num [m, o, r, e] == num [m, o, n, e, y])
  return [s, e, n, d, m, o, r, y]
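To actually run program A), the initial state supplies the pool of candidate digits; a minimal sketch, assuming the definitions above are loaded in GHCi:

evalStateT sendmory' [2..9]
-- starting the pool at [2..9] keeps the chosen digits distinct from the fixed m = 1 and o = 0;
-- each answer is an assignment [s, e, n, d, m, o, r, y]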
B) the functional program approach (provided by David F. Place):

solve input accept return
  = solve' [] input [0..9] accept return

solve' bindings [] _ accept return
  | accept bindings = [return bindings]
  | otherwise       = []
solve' bindings ((_, g) : xs) digits accept return
  = concatMap f $ g digits
  where f n = solve' (bindings ++ [n]) xs
                     (delete n digits)
                     accept return

-- fold a list of digits into the number they spell, e.g. num [5,4,2] = 542
num = foldl ((+) . (*10)) 0

sendMoreMoney =
  solve (('m', const [1]) :
         ('o', const [0]) :
         ('s', filter (> 7)) :
         (zip "edynr" (repeat id)))
        (\[m, o, s, e, d, y, n, r] -> num [s, e, n, d] + num [m, o, r, e]
                                       == num [m, o, n, e, y])
        (\[m, o, s, e, d, y, n, r] -> [s, e, n, d, m, o, r, y])
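A quick usage note: sendMoreMoney is already fully applied, so evaluating it (say, in GHCi) should yield the solutions directly; if I have traced the code correctly, that is the classic answer to the puzzle:

sendMoreMoney
-- ==> [[9,5,6,7,1,0,8,2]], i.e. [s,e,n,d,m,o,r,y] for 9567 + 1085 = 10652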
>Lisp(and Scheme and Dylan and ...)/Haskell/ML: functional
Just one thing. Lisp (Scheme, Common Lisp) is a multi-paradigm programming language with functional aspects, while Haskell is a purely functional, modular programming language.
Just wanted to point that out, because I have been seeing a lot of posts recently about Lisp being a functional language, and this is incorrect; if you are allowed to say that, you could just as well say that Python is a functional language.
And Dylan likewise is a multi-paradigm programming language with functional and object-orientation support.
Common Lisp is multi-paradigm, with functional, object-orientation, and reflection support.
I agree with Call. This is actually something I've been meaning to write about for a while. I see two major axes in languages: declarative vs imperative, and applicative vs procedural. C is a typical imperative/procedural language; OO languages and Lisp-like programming are typically imperative/applicative; conventional Prolog is declarative/procedural; pure functional and category-theoretic programs are examples of declarative/applicative.
In my experience the applicative and declarative traits are often not distinguished, though they are quite orthogonal. Applicative languages have compositional properties that are lacking in most logical languages (with hybrid paradigm logic-functional languages being the obvious exception). Similarly, the sequentialism in the evaluation ordering of Prolog-like languages has a procedural quality, even though it's non-imperative about it. Of course most logic languages aren't entirely declarative, since they inherit impurities like cut from Prolog.
Given all this, the Lisp heritage languages and the ML heritage languages should be discussed separately since they are quite different. While both are applicative, the latter are more declarative, whereas the former are more imperative.
As for your question, I think the "logical" version is more declarative, but I also think it's more functional. "Functional" means declarative/applicative to me. The latter example captures the applicative part of things, but it's typically Lispish to my eyes: too focused on primitive recursion and not higher-order enough or compositional enough to say what it means.
The problem, of course, is that a Prolog programmer actually needs to know exactly how their particular Prolog implementation evaluates their program (i.e. the search algorithm) to have any kind of hope of a) efficiency, and b) termination within the lifetime of the universe. Cuts (etc.) are a symptom of this.
Sure, it might theoretically be "more declarative", but that doesn't matter if the program only (effectively) semidecides your problem.
Declarative programming does not exist (at least if it is conveniently defined). If you describe what you want, but not how, the computer is required to solve the problem; but it can't solve non-trivial and important problems, which are precisely the things you want it to solve. If a computer solves some restricted type of task, it's because an algorithm for exactly that type of task was entered into the computer beforehand. Declarative programming is a delusion similar to that of conscious machines. The only kind of compiler for a declarative PL is a programmer. ;-)
Actually, I didn't mean that I thought my version (the "functional" one) is more declarative, only more perspicuous. The meaning of the more "logical" one is hidden in the interaction of the implementation of StateT and the List Monad. The choose function reveals that interaction in a way I'd rather not know about. :-) Whereas in my version, the meaning can be investigated by looking up the definition of simple library functions.
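For example, with the definitions from program A) above, that interaction can be seen by running choose on a small pool:

runStateT choose [1,2,3]
-- ==> [(1,[2,3]),(2,[1,3]),(3,[1,2])]
-- each branch of the list monad pairs a chosen element with the pool that remains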
ReplyDeleteI think programs which can be interpreted with some kind of equational reasoning are all equally declarative.
Cheers,
David
I'd say the difference is whether or not I need Hoare triples to analyze the code. If I need them, it's imperative; if I don't, it's declarative (that is, another program will compile it down to something that requires Hoare triples to analyze).
From this point of view, your logic-ish version is imperative: it invokes a State monad, which by definition means Hoare triples.
That, however, is an artifact of how you chose to do it. Personally I think unification and first class relations should be typical control structures provided in a programming language, a la Kanren.
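As a rough sketch of that criterion, consider a hypothetical toy counter in Control.Monad.State, with the pre/post-conditions a Hoare-style reading would track written as comments:

tick :: State Int Int
tick = do
  n <- get          -- { counter == n }
  put (n + 1)       -- { counter == n + 1 }
  return n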
Look at "A History of Haskell", chapter 4.4. I think sometimes it is about the feeling...
As our discussions evolved, it became clear that there were two different styles in which functional programs could be written: "declaration style" and "expression style". For example, here is the filter function written in both styles:
filter :: (a -> Bool) -> [a] -> [a]

-- Declaration style
filter p [] = []
filter p (x:xs) | p x       = x : rest
                | otherwise = rest
  where
    rest = filter p xs

-- Expression style
filter = \p -> \xs ->
  case xs of
    []     -> []
    (x:xs) -> let rest = filter p xs
              in if (p x)
                 then x : rest
                 else rest