Wednesday, December 19, 2012

That's a pretty good guess... Naïve Bayesian Classifiers

... so, I don't need to write any more, correct? Classification problem? Apply a naïve Bayesian system to it, and you are way ahead of the curve, and you've expended a fraction of the effort to stand the system up.

... shortest posting ever written.

So, anyway.

So, a Bayesian classifier is a predictor. It says (that is, if it could speak, it would say), "Hm, I've seen this before, so I think this is a that." And, well, given a reasonable tolerance for error or noise, it's pretty good at doing its job. In fact, it works supremely well in noisy environments, shines in them, even, where more rigidly defined models tend to become unstable at the slightest irregularity (the inductive-logic-programming model comes to mind).

Yes, I've worked on classification systems. We had a hybrid system composed of a deductive rule-based system (hashed out with Subject Matter Experts over months of establishing, parameterizing, then fine-tuning each rule) running in parallel with a Bayesian classifier. Guess where 99% of our usable results came from?

Sir Bayes walked away with the win, nearly every time.

How does the naïve Bayesian classifier work?

The algorithm is simplicity itself ... embarrassingly so. It rests on one observation: the probability of something happening, or being so, depends on all of its antecedents.

Obviously. All these effects occurred, lining up to cause this thing to be.

Let's give a real-world example.

The probability of Tennessee winning last Monday's game (they did) was predicated on:
  • It was a Monday night game -- Monday, or F1
  • They were playing on their home field -- Home, or F2
  • They were favored to win (with an even 2-point spread advantage) -- 2.0 spread, or F3
  • They were playing against the New York Jets -- vs(NYJ), or F4
or p(win(Tennessee) | Monday, Home, 2.0 spread, vs(NYJ))

That works, but only when everything lines up exactly so.

So the brilliance of Bayes (together with the 'naïve' assumption that the antecedents are independent of each other, given the class) was the observation that the above probability is proportional to:

p(win(Tennessee)) * Πi p(Fi | win(Tennessee))

And there you have your classifier: you compute this score for each class from the probabilities of its antecedents, and the class with the greatest score is the winner.

Simple, and the more input data the system sees, matched against labeled classes, the better it gets at classification.
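To make the recipe concrete, here is a minimal sketch in Haskell (my scratch language for these notes, not the language of the classifiers above; classify, prior, and likelihood are names I've made up for illustration). It scores each class by its prior times the product of its per-feature likelihoods, estimated from the labeled training data, and picks the best:

    import Data.List (maximumBy)
    import Data.Ord  (comparing)

    type Feature = String
    type Class   = String

    -- prior c        ~ p(c), estimated from the labeled training data
    -- likelihood c f ~ p(f | c), likewise estimated from the training data
    classify :: (Class -> Double) -> (Class -> Feature -> Double)
             -> [Class] -> [Feature] -> Class
    classify prior likelihood classes features =
        maximumBy (comparing score) classes
      where score c = prior c * product [ likelihood c f | f <- features ]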

Problems

Well, I never promised you a rose garden...

One of the problems is a real-world concern, not a mathematical one, and that is that many of these probabilities are minute, and multiplying them together ramps the scale down to the minuscule, very quickly. These days computers are ... well: 'eh' at handling very small fractional numbers, and they are architected in such a way that floating-point rounding error gets rolled into these numbers, compounding the problem with each multiplication (until the product underflows to nothing).

How to handle these very small numbers?

Well, obviously, scale them. And this is very simply handled, for, after all, if it's a sense of scale that concerns you, you just rescale your domain. The logarithm does that. 'Automagically.'

So, the above classifier can be rewritten:

log(p(win(Tennessee))) + Σi log(p(Fi | win(Tennessee)))

And the scaling issue for computers just goes away.
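Continuing the sketch above (same caveats: illustrative names, not the production classifiers), the only change is that the score becomes a sum of logarithms:

    -- same types and imports as the sketch above; only the scoring changes
    classifyLog :: (Class -> Double) -> (Class -> Feature -> Double)
                -> [Class] -> [Feature] -> Class
    classifyLog prior likelihood classes features =
        maximumBy (comparing score) classes
      where score c = log (prior c) + sum [ log (likelihood c f) | f <- features ]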

There's another problem, however, and that is how to deal with the messiness of real-world input data.

The naïve Bayesian classifier assumes that all the evidence is present for each class. If a single piece of evidence is missing, its probability will be zero, and that single zero factor instantly voids the entire product.

So, what to do?

I've observed that voiding the formula for one datum often leads to classifiers that oscillate too wildly and are therefore unusable.

So, I neutralize the nulled probability: instead of letting it zero out the formula, I set it to unit (one), so it acts as a 'pass-through.'

This is an artifice, I know. I'm intentionally saying a missing datum is actually fully present, but I've found that this works 'eh' ... that is 'passably well.'
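In the running sketch (hypothetical names again, not my actual classifiers), the pass-through amounts to treating a missing likelihood as 1, which contributes log 1 = 0 to the sum:

    -- likelihood now returns Nothing when the (class, feature) pair was never
    -- observed; a missing datum contributes a factor of 1 (log 1 = 0)
    scoreWithPassThrough :: (Class -> Double) -> (Class -> Feature -> Maybe Double)
                         -> Class -> [Feature] -> Double
    scoreWithPassThrough prior likelihood c features =
        log (prior c) + sum [ maybe 0 log (likelihood c f) | f <- features ]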

I've done this on two classifiers so far, and the classifiers are pretty darn good, but not good enough to my mind.

This is a fundamental problem with the naïve Bayesian classifier, and I'm sure everyone who undertakes this approach has to deal with this, one way or the other. What are good ways to make the naïve Bayesian classifier much more effective, but without excessive effort or thought around null data points that come up?

A problem, but not a crippling one.

I won last week's picks, after all, and that's not the first time.

Tuesday, December 18, 2012

Logical Types, LLC, Archived News

Archived News


2007-09-05:
For those of us not familiar with computing with intervals, this Google talk provides an engaging introduction. As Dr. Walster points out, we have become very concerned about the speed of computations but have ignored their accuracy -- sometimes with deadly results. Intervals provide a type-safe, exception-free approach to numerical computing with built-in feedback on the accuracy. A central repository of interval computing can be found at http://www.cs.utep.edu/interval-comp/
2007-03
29:
I have eliminated my "corrections" to the Mercury extras xml library (I had an outdated version of the library, as outlined in the 2007-03-18 news post, so my corrections were redundant to the Mercury team's). At the same time, I have moved my XML enhancements into the utils library and promoted the doug_graph module from alpha; it is now also included in utils.
18:
A new library and several improvements can be found in the shared repository:
  • The xml library provided in the Mercury extras distribution is out-of-date; it no longer compiles. I have fixed the compile errors and added several modules (to assist in XML transformations and pretty-printing) and tests.
    Update 2007-03-28:
    I was in error in the above item: my copy of the xml library provided by the Mercury team was out of date; the version supplied with the extras distribution by the Mercury team is working correctly.
    This means my fixes to the xml library are redundant, so I withdraw them. The extended functionality, however, I do continue to find (very) useful, so I am moving these enhancements as an xml library under the utils umbrella.
  • Several improvements are available for qcheck2:
    1. I have modified the qcheck2 library so that it now uses the RNG protocol as proposed in the Mercury users' maillist. I have also modified the reporting feature to accept a polymorphic type for module.predicate unit tests ... this improvement 'upgrades' qcheck2 to be an independent library (qcheck was also an independent library).
    2. The program qcpt that generates the module.predicate test points for a system has also been updated to use the new qcheck2 reporting protocol. qcpt is bundled with ltq.
  • I have entirely changed the utils library:
    1. Although useful for a small number of repetitions, the peano module becomes unwieldy for large cycles (1,000,000 is represented as 1,000,001 cons cells!). So, I have discarded it in favor of a slightly more sophisticated counting algorithm (where 1,000,000 is represented by 7 cons cells; a sketch of the flavor of such a representation follows this list) in the utils.series module, which now also includes loop abstraction with func unfold/3 and pred svunfold/6 (the latter being used when one must also update a dependent state variable).
    2. Julian Fondrant on the Mercury Users maillist proposed a RNG typeclass and protocol, and published a module implementation using the tausworthe3 algorithm. I have incorporated this module (as utils.random) with a simplified façade and other minor corrections.
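(For the curious: the sketch below is not the utils.series module, just my illustration, in Haskell, of how a counting representation can get 1,000,000 down to 7 cons cells -- store the count as a little-endian list of decimal digits.)

    -- 1,000,000 is [0,0,0,0,0,0,1]: seven cons cells instead of a
    -- million-and-one Peano successors
    type Count = [Int]   -- least-significant digit first

    inc :: Count -> Count
    inc []       = [1]
    inc (9 : ds) = 0 : inc ds
    inc (d : ds) = (d + 1) : ds

    toInteger' :: Count -> Integer
    toInteger' = foldr (\d acc -> fromIntegral d + 10 * acc) 0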
2007-02
14:
  • The dynamic syntax compiler ltq ("Logical Types Quicksilver") and qcpt (the QuickCheck type generator) have been modified to work with the new structure of Melbourne's Mercury ops module. The new system has been updated and is available in our shared repository.
  • The qcheck2 library has been updated to work with the current ltq; it includes two sample modules with different reportage options. It can be found in the shared repository.
  • The peano module has received some new counting functions to simplify the user's task of constructing small peano numbers.
02:
We present a very tiny foreign interface to libtiff. The module tiffany allows reading and writing simple RGBA TIFFs with two-dimensional arrays. As the matrix module matures, we will use that protocol for more comprehensive image filtering. This interface is presented with some samples and (currently) no documentation (other than code comments) and is available from our shared repository. Both libtiff and the matrix protocol must be available to run the samples.
Update 2007-02-06:
Per the suggestions of the Mercury team, I have updated the foreign protocol calls from the old C-specific interface to the generic, supported interface. I also made the library "more" threadsafe by eliminating the global file pointers and subsuming those values into the image structure.
Update 2007-02-07:
Changing the c_pointer type to a foreign_type pragma (again) to avoid the old C-style foreign interface and also to eliminate unnecessary warnings about int-to-pointer casts.
2007-01
31:
As the Mercury standard library does not have a matrix protocol, I am building a module that handles some basic matrix operations; this module and its test suites will be added to the utils library when in serviceable condition.
30:
Thanks to the efforts of the Mercury research group at Melbourne, their Mercury compiler (mmc) is now available for Macintosh computers with the Intel chipset. This is the alpha ROTD (2007-01-21) as opposed to the currently sanctioned 0.13.1 release, so some things are experimental in this version of the compiler. Email logicaltypes.com if you wish to have a binary distribution.
Our compiler, ltq (which allows syntactic extensions with the op/3 directive; see the news archives at 2006-03-02), may need some work, as the Mercury research team has changed the ops module. This, of course, also means that systems that depend on these improvements, such as the test-building framework qcpt (see news archive post 2006-05-12), must also be retested. We will post an update when ltq is again tested and working.
2006-07
17:
The various data structures provided in the ROTD extras/ distribution that handle solver types (particularly the various any modes) do not compile because of purity issues; also, since their release, the coding style has changed for module qualification. The fixes to get these types compiled are here. For the Mercury team's review are the diffs to fold back into the distribution.
2006-05
13:
We present a complete rewrite of Mercury's QuickCheck implementation: qcheck2 (the sample unit test module (peano_unit_tests) requires the peano module in order to run). The essence of testing à la qcheck -- type discernment to obtain (random) test values with a test specification language -- remains unchanged. The new features of this new version enhance the system with:
  1. complete control over what is reported and when it is reported;
  2. dynamic control over ranges of random number-like values (ints, floats and chars) as well as the random number generator type itself;
  3. and for the code-hacker: qcheck2 is broken up into separate modules along functional lines
The documentation for qcheck (upon which qcheck2 is built) is a model for any system to follow; so documentation for qcheck2, including a system description and transition guide, is under development.
Update 2006-05-20:
As the test description (Description) may now be of any type -- even non-comparable types, such as, for example, function types -- using a map with Description keys will cause errors when using Description types that cannot be compared. As such, the qstate now uses an assoc_list to accumulate the test results.
Update 2006-05-21:
Added information to the summary report: this report now shows which predicates were not tested. The work-in-progress documentation now covers the reporting facility comprehensively.
12:
ltq (Logical Types Quicksilver compiler) now comes with qcpt (QuickCheck Predicate Types), a system that discerns the interface predicates and functions of a system (to facilitate comprehensive unit testing). Perhaps even more important is the inclusion of a README that doubles as a HOWTO and INSTALL document.
11:
Completed the document describing an implementation of mutable syntax for Mercury. This article is available in the literature section (see above).
2006-04
30:
The graph module takes a rather non-directed approach to locating a path from Node1 to NodeN. This is fine if the path has no associated cost, but if one is looking for the best path (where a path has an associated cost), this laissez-faire approach becomes problematic.
As I often need an efficient path in a graph, I have added best_path/5 and other supporting predicates that cannot be implemented well operationally given the protocol of module graph, and submitted these changes to the Mercury team for review. While they consider these changes, I provide the implementation here as module doug_graph, with an associated example. Locating the best path in the given example using the naive path implementation took over 20 seconds; the reimplementation here completes the computation in less than a second.
27:
The following Mercury compiler distributions are available from here on request:
  • mercury-2006-04-26-rotd-powerpc.apple.darwin8.5
  • mercury-2006-04-26-rotd-sparc.sun.solaris2.8
These distributions include the extras/, the samples/ (in extras/) and the HTML documentation (in doc/).
2006-03
29:
The Mercury development team has fixed the problem with nesting of disjunctive terms when output with term_io.write_term/4 with the ROTD-2006-03-07 compiler distribution. It, along with the op/3 declaration enhancement (ltq), is available from us. Send an email (see contact information below) if you wish to obtain one of the following distributions:
  • mercury-2006-03-25-rotd-powerpc.apple.darwin8.3
  • mercury-2006-03-28-rotd-sparc.sun.solaris2.8
These distributions include the extras/, the samples/ (in extras/) and the HTML documentation (in doc/).
03:
term_io.write_term/4 does not properly handle disjunctions. I've submitted a patch to the Mercury development team, but in the interim, I've included the change in distributions (as dopp relies on write_term/4 to output the op/3-free code results). Also, compiler/prog_io.m has quite a few disjunctions handling (now dead) declaration types; I've submitted this patch to the Mercury team, but have also patched local distributions. The most recent distributions available are:
  • mercury-2006-03-01-rotd-fixed-write_term-sparc.sun.solaris2.8
  • mercury-2006-03-01-rotd-fixed-write_term-powerpc.apple.darwin8.3
Both these distributions have ltq included. Email me if you wish a copy of one of the above distributions or of the stable release (0.12.2) on either architecture.
Update 2006-03-08:
The Mercury team has corrected the compiler so that terms are appropriately parenthesized. This eliminates the need for my term_io.write_term/4 hack, so future releases of the distributions from this site will revert to straight-up Mercury. Enhancements, such as op/3 declarations, will be included in extras/ (the source code) and in bin/.
02:
There is quite a debate going on at the developers discussion forum as to the merits, extent, and implementation of the op/3 declaration. Up to this point, we have patched the Mercury compiler distribution so that it accepts and processes op/3 declarations. This has proved to be rather onerous, as new distributions have been coming out regularly. Instead, prompted by the Mercury team, I have developed an op/3 preprocessing system. ltq ("Logical Types Quicksilver") creates build/Makefile, then executes the Makefile, which calls dopp ("Dynamic Op PreProcessor") to translate op/3-enhanced files into plain vanilla Mercury ones. The Makefile then builds the executable with the 'mmc --make --infer-all' command. This build system is locally available from this site, and also included in Mercury distributions that we produce from this month onward.
Update 2006-03-10:
I have updated ltq so that it takes any number of arbitrary command-line arguments (these precede the argument). These arguments are passed, unmolested, to mmc.
Update 2006-03-13:
Currently, we are fully integrating op/3 declarations into working products. In building auxiliary libraries we have found that we need to mirror mmc's library-building indicator (prepending 'lib' to the target module's name). Adding this functionality required a modification to ltq's build process. This change is reflected in the build system offered here. We leave the library installation process to mmc; that is, noddingly: in the generated Makefile, there is a command that passes the proper arguments to mmc with a make install command.
Update 2006-03-24:
The library building and installation process differed slightly in ltq. Eliminated that difference and published a new version of this system.
Update 2006-03-29:
ltq now automatically creates the build/ directory (the repository of build products, including the files converted from op/3-enhanced sources to canonically represented sources). This eliminates errors in ltq's build process when it cannot find the nonexistent directory. The new distribution is available, as always, here.
2006-02
17:The bleeding-edge releases of the Mercury compiler (the ROTD (release of the day) 2006-02-11), both for standard Mercury and the Quicksilver enhancements from Logical Types (op/3 declarations and binary-trees and -sets with externalizable structure), are compiled and available for Mac OS-X.4 systems. These releases are alpha quality, but do contain several interesting developments, such as improved constraint logic programming syntax, injections (bi-directional maps), and improved term-as-XML handling. Please email if you wish to obtain a copy (see contact information).
Update 2006-02-19:
The Quicksilver distribution has been updated to the ROTD-2006-02-16.
Update 2006-02-26:
There are a new set of distributions available:
  • ROTD-2006-02-23 distribution now available for sparc.sun.solaris2.8
  • ROTD-2006-02-23 distribution for powerpc.apple.darwin8.3
  • Quicksilver ROTD-2006-02-25 distribution for powerpc.apple.darwin8.3 and for the sparc.sun.solaris2.8 as well
2006-01
31:Quicksilver version 0.12.2 for Mac OS-X.4 is available along with installation instructions.
Update (2006-03-04):
I have removed all distributions from this repository. Please email me if you wish to obtain a copy of any distribution mentioned here.
31:Quicksilver version 0.12.2 for Solaris 8 is available along with installation instructions.
31:Logical Types, LLC is no longer producing binary distributions of Mercury that do not support op/3 declarations. Quicksilver (Mercury with op/3 support) is a strict superset of Mercury and is what Logical Types produces.
16:I've posted a review of the PADL-2006.
06:Here is a module that prints out a parsed term in its canonical form (write_canonical.m; useful for inspecting terms with op/3 syntax) and a test module (test_op.m) that exercises write_canonical. This can be used with the regular Mercury compiler, but is designed for Quicksilver compiling modules with embedded op/3 declarations.
04:Mercury compiler enhanced with op/3 declarations for both PowerPC architecture Mac OS-X.4 systems (with installation instructions) and Sparc architecture Solaris 8 systems (with its own installation instructions) available for download. The compiler executable is named "lqc" for "Logical Types Quicksilver Compiler". A supporting document, detailing the changes to the compiler to permit users to insert op/3 declarations (followed eventually by the typed operators in context) into module implementations, will follow in short order.
Update (2006-01-31):
These distributions (and the ones mentioned below) are obsolete. Email me for the current stable release.
2005-12
27:Mercury compiler for Sparc architecture Solaris 2.8 systems available for download. See the installation instructions to get Mercury running on your SPARC.
24:Mercury compiler for PowerPC architecture Mac OS-X.4 systems available for download. See the installation instructions to get Mercury running on your Mac.

Logical Types Libraries

Libraries, etc.

Logical Types has the following libraries, systems, and sample code. All are provided with the usual caveats (they are not guaranteed to work, and Logical Types is not liable for your downloading and using this code):
 
Libraries
utils
a set of utilities useful for building production systems
contains modules:
Module                 test status               documentation
utils                  1/4 submodules tested     no
utils.graph            0/1 new preds tested      no
utils.random           0/10 preds tested         no
utils.series           3/3 preds/funcs tested    no
utils.xml              2/3 submodules tested     no
utils.xml.facade       1/1 pred tested           no
utils.xml.pprint       2/2 preds tested          no
utils.xml.transform    4/9 funcs/preds tested    no
qcheck2
A testing/verification framework for Mercury programs
see qcheck2 justification (work in progress)
 
Systems
ltq
Extends Mercury with op/3 declarations, allowing syntax modification
see installation instructions
see ltq doc
see write_canonical that interprets op/3 declarations

samples: hello.m and play.m
 
Fixes/Patches
anys
A set of fixes to the utilities provided in the Mercury extras distribution supporting operations with the any mode
see anys-diffus.zip simply to apply patches
 
Alpha (You Have Been Warned!)
matrix
Work in progress to add a viable matrix protocol; will be rolled into the utils library.
tiffany
A very small, thread-un-safe, foreign interface to libtiff; also requires matrix (which it already bundles).
 

Installation Instructions for Quicksilver

Installation Instructions for Quicksilver
PowerPC/Apple architecture
  1. Decompress the archive in a directory of your choice (alias==$dir)
  2. Modify the script mmc in directory $dir/quicksilver-0.12.2.powerpc-apple-darwin8.3/bin so that the MERCURY_COMPILER variable points to $dir/quicksilver-0.12.2.powerpc-apple-darwin8.3/lib/mercury/bin/powerpc-apple-darwin8.3/lib/mercury_compile and so that the MERCURY_CONFIG_DIR variable points to $dir/quicksilver-0.12.2.powerpc-apple-darwin8.3/lib/mercury
  3. Export the following environmental variables with the following values:
    MERCURY_HOME = $dir/quicksilver-0.12.2.powerpc-apple-darwin8.3
    MERCURY_STDLIB_DIR = $MERCURY_HOME/lib/mercury
  4. Add the following paths to your DYLD_LIBRARY_PATH environmental variable:
    $MERCURY_STDLIB_DIR/lib/reg.gc/powerpc-apple-darwin8.3
    $MERCURY_STDLIB_DIR/lib/powerpc-apple-darwin8.3
  5. Add the following path to your PATH environmental variable:
    $MERCURY_HOME/bin
  6. Add the following path to your MANPATH environmental variable:
    $MERCURY_HOME/man
  7. You should be able to do the following with the file hello.m:
    $ mmc --make hello
    $ ./hello
  8. Since this compiler allows op/3 declarations, the following module, play.m, demonstrates this capability. I intentionally left out some declarations, so compilation is slightly different:
    $ mmc --infer-all --make play
    $ ./play
Creating syntax with op/3 can become complicated when several operators interact to create a term. I've provided a module that prints the canonical representation of a parsed term (write_canonical.m) and a testing module (test_op.m) that allows prototyping of operator declarations and allows submitting terms under that syntax. The whole test system may be built in the usual way:
$ mmake test_op.depend
$ mmake test_op

Logical Types Company Mission Statement



Rule-based Systems

Rule-based systems process data according to sets of constraints established by the user. The results of these systems are a collection of rule findings that can be used to construct the final product or to assist the user in rendering an informed decision.
The above description may seem all-too-general, as it describes what most programs do. This is indeed correct: rule-based programming is sufficiently powerful to describe any computable system. In fact, the rule-based approach is now being viewed by the mainstream as the preferred method for workflow analysis, process, resource scheduling, service-based systems, etc. Nearly every software system has a set of rules, explicit or implied, to which it adheres. The rule-based programming style models these rule constructs directly and facilitates their manipulation as the system grows and changes.
Deductive Logic

The traditional approach to building logic systems is to construct a set of clauses where the head of the clause matches a condition and the rest of the clause verifies the match through a set of goals for that match. These clauses are known as rules and a set of rules is a predicate, and this predicate can be used, in turn, as a goal in a new rule. Deductive logic matches problem specifications very closely and is a very effective way to convert a set of requirements into a production system. Some examples of types of problems that are easily modelled in deductive logic are expert systems, planning systems and scheduling systems.
Deductive systems are used when the rules are clear and the user requires certain outcomes; they are very good at "explaining" what the rule findings are and how they were arrived at.
Inductive Logic

The opposite approach to a deductive system is an inductive one. Whereas in a deductive system the user has very exacting control over the process and outcome, in an inductive system the rules are obtained by deriving the relations between input data and their outcomes, with very little guidance, if any, from the builder of the system. Traditional inductive systems required very clean data and had little tolerance for deviation -- a slight perturbation in the data set could cause the system to fall into an undefined state. Modern inductive systems have taken a different approach: reaping the benefits of recent advances in probability and statistics, these systems (such as Bayesian systems and neural networks) are highly redundant and adaptive. These new systems consistently perform well: they have excellent success narrowing to a classification from apparently unrelated attributes, and they have a high rate of stability, being very fault-tolerant, even in the presence of very noisy data. It is also trivial to convert a statically trained inductive system to one that learns continuously from new inputs and outcomes.
Inductive systems are used when users cannot explain how they arrive at decisions (attributing the outcomes to a "feel" for the situation), and where gradual trends result in eventual changes to outcomes. These systems excel at making the correct decision with a very high degree of confidence, but are poor at explaining what prompted the decision.
Our Approach
Logical Types, LLC uses both deductive and inductive logic to build systems as the needs of the customer demand:
  • To rediscover the implicit rules of a phoneme-based name matching system, we created a purely inductive system that output a new program that had the phoneme construction rules explicit.
  • A human resource scheduling system that required a set of clearly stated rules that filled duty times with a pool of personnel under a guiding principle of "fairness" was built using purely deductive logic.
  • A combined learning system and knowledge-engineered rule-based expert system was designed using a deductive rule manager with a supervised learning, Bayesian-like, component.

PADL Symposium 2006 Post-Mortem


After Action Report: PADL 2006

Practical Aspects of Declarative Languages

Introduction

This is a highly opinionated report on the presentations at PADL 2006. The sessions are presented chronologically, but are tabled in the contents by topic, and cross-referenced by their practicality (i.e., what I found useful in the presentation).

Presentations

Cross References

Topic
  Invited Speakers
    Phil Wadler
    Erik Meijer
    No show: David Roundy
  "Practical" "Applications" 1
    CHRs for testing
    Probabilistic music
    Genomes with distribution type
  Constraints
    Optimal paths with dominators
    Constraints in Mercury
    Informing constraints with failure
  Analysis and Verification
    Typing Prolog with cliques
    Model Checking model checkers
  Logic Programming
    Generic (foreign) Cuts
    Tabling in Mercury
    Correct Tabling in XSB
    Declarative debugging
  Browsing and Querying
    Logical Eclipse plugin
    description logic queries
    Queries with sets with regex

Practicality
  Practical Results
    Dominators
    Probabilistic Prolog
    Distribution type
    Theorem Provers (no, really!)
    Declarative debugging
    Improving the search from failed branches
    Cliques as power sets
    Complexity measures
  "Theoretical" 2 (including compiler implementations and optimizations) 3
    Tabling and tabling 4
    Constraints 5
    Model checking
    Reimplementing cut
    Description logics 6

Retrospective

As you can see below, my criticisms are very harsh. This raises two questions: harsh against whom? and would I go again to a PADL? The answer to the latter is resoundingly in the affirmative (as William Byrd would pipe up: 't'): the density of useful concepts I took away from this conference was much greater than at most other conferences I've attended, including, not surprisingly, the POPL.
So, why so harsh in my reviews? Was I criticizing the speakers? In some cases, I was indeed criticizing the speakers, particularly when they sacrificed rigor for herd thinking (and particularly when they did not acknowledge such lapses). But, in the main, a criticism such as 'this topic does not interest me' was not a criticism of the speaker, nor even of the choice of topic (with the caveat that, of the 17 papers accepted from the 33 submitted, the accepted papers should have been more practical and less esoteric; if the reviewers chose academic curiosities over less polished papers on the topic of applications, then I do fault the selection process), but a severe criticism of me. What I mean here is that the PADL's mission is to gather reports from the front lines, as it were. That's where companies, like mine, operate. So, companies like mine, including mine, should be submitting these papers. We do not, and the PADL has suffered for it.
In short, the PADL was what it set out to be -- a free exchange of ideas on the application of research and an open channel between researchers and industry. AND the PADL, as good as it was, could be even better with more practical, real applications developed in the declarative style showcased for all to see. It motivates the researchers ('my work has relevance! an audience! a fan club!'), and it improves the tools industry uses to deliver these applications ('you can do that?'). Win/win. QED.

The Challenge

So you companies that use declarative programming ... you know who you are (*cough* galois *cough* ... um, Doug, what about you? Yeah, me, too), belly up to the bar, 'cause drinks are on the house! (trans: submit papers; you are not divulging company secrets, and you're gonna get lots of neato ideas that will help you on your current projects).

Day 1: Monday, January 9, 2006

9:00 am  Links: Linking Theory to Practice for the Web
 Phil Wadler
 
Phil presented a proposal for a functional language to cover the three tiers of building web apps. *YAWN* What came out of this talk was nothing immediately useful for us, but he did ask a series of questions to which he didn't have answers. He had the guts to go to lunch with a bunch of us logic programmers and was willing to listen to the logic programming side of designing a language and the trade-offs using that methodology.
What I learnt ("Life Lessons")
  • first, tell everybody what you do not know how to do currently, and ask how they do it
  • second, mingle among the group that has a different perspective than you or that knows stuff you don't
What to learn ("Talk pointers")
  • Timbre is a functional concurrent programming language (typed?)
  • Hope: the first functional language to implement the Hindley-Milner type system
  • "Antinomies": n. things that are directly opposed.
10:30 am  Using CHRs to generate functional test cases for the Java Card Virtual Machine
 Sandrine-Dominique Gourand (presenter), Arnaud Gotlieb
 
Sandrine (or is it 'Dominique'?) presented a test generating system that examines disjoint clauses of predicates and, by using pattern-, or type-, matching, generates appropriate unit test cases. She pointed out the language JSL ('Jakarta Specification Language', which has nothing to do with the much more popular Jakarta Struts), which was used to generate the system that generates the test cases -- it's very Lisp-like. JSL is now an unsupported branch of COQ, a theorem-prover.
I talked with both Sandrine and Arnaud after the presentation re: automating unit testing; Sandrine gave me two papers to review along those lines. The first was on QuickCheck for Haskell; the second was on using Isabelle to generate test cases from a specification.
Life Lessons
  • CHRs (constraint handling rules) have real-world applicability (beyond Sudoku): generating test cases.
  • Theorem provers (COQ and Isabelle) have real-world applicability -- generating comprehensive test cases.
Talk Pointers
11:00 am  Probabilistic-logical Modeling of Music
 Jon Sneyers, et al.
 
Jon presented a learning system for music. The learning system was implemented on "probabilistic Prolog" (PRISM). It took 75 selections from Bach pieces and ~25 selections from Mozart pieces. He used a description language to model the significant parts of the musical selections (rhythm, tone (polyphony), etc., but not many other facets of music) as an ordered list of triples (triples of pairs for two voices, triples of triples for three voices). He then fed these triples under the appropriate selections as training sets. The learning system was a Markov chain ("It took two lines of code to implement in PRISM"). It properly classified all test cases (~30); when he removed rhythm, it properly classified most of the test cases (~26/~30).
This program not only classifies, but also generates fresh example selections from the learnt selections. Jon was pleased to report the snippets sounded like things from the composers, but didn't sound very good -- his pleasure was based on the implication that good music is not machine generated (with a consequence that composers still have a role).
Jon is currently pursuing a new approach, using CHRs for the learning process. The work he presented is now two years old.
Life Lessons
  • In the after-session Q&A, I asked Jon about the memory footprint. Jon admitted that learning with the Markov Chains grows exponentially with the size of the selection, so that's why he used selections from music pieces, and not the entire piece.
    Markov Chains or connectionist systems are nice and all, but there is a large price to be paid for using them.
Talk Pointers
  • PRISM: a probabilistic Prolog.
11:30 am  Modeling Genome Evolution with a DSEL for Probabilistic Programming
 Martin Erwig and Steve Kollmansberger (presenter)
 
Steve wrote a system in Haskell using monads to describe genome interaction. As these interactions are stochastic (not deterministic), the standard way to go about this is algorithmic: writing a random number generator and assigning values. Steve chose a novel approach: he created a data type called "Distribution" that encompassed all possible (finite) values as well as their probabilities. Then, using a monadic transform, he described several basic operations (addition being the one that sticks in my mind) that could be used directly under the transformation. Using this approach, modeling the genetic interaction (which is more like an inhibitor) became facile, reducing nearly to the point of simple arithmetic.
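For flavor, here is a minimal sketch (in Haskell, but it is not Steve and Martin's DSEL; Dist, coin, and twoFlips are names I invented) of a finite "Distribution" type made monadic, so ordinary operations lift over it:

    -- a finite distribution: values paired with their probabilities
    newtype Dist a = Dist { runDist :: [(a, Double)] }

    instance Functor Dist where
      fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

    instance Applicative Dist where
      pure x = Dist [ (x, 1.0) ]
      Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

    instance Monad Dist where
      Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

    coin :: Dist Int                 -- a fair 0/1 coin
    coin = Dist [ (0, 0.5), (1, 0.5) ]

    twoFlips :: Dist Int             -- addition lifted over distributions
    twoFlips = (+) <$> coin <*> coin -- 0, 1, 1, 2, each with probability 0.25

The point of the data-modeling approach is that last line: the arithmetic reads as ordinary arithmetic, and the bookkeeping of the probabilities rides along in the type.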
Life Lessons
  • Of course, data modeling eliminates the alternative algorithmic implementation, so ...
  • Consider novel data structuring approaches before grabbing that same old algorithmic hammer.
Talk Pointers
2:00 pm  Using Dominators for Solving Constrained Path Problems
 Luis Quesada, et al.
 
Luis demonstrated that using information from the graph of the search for the shortest path (e.g., nodes 5 and 12 must be on the path; these nodes are said to dominate the path, and if a particular mandatory node precedes another waypoint, that mandatory node is said to dominate the waypoint) allows one to eliminate paths that do not lead to an eventual solution.
Talking with Luis after his presentation, I asked about ACO (Ant Colony Optimization). He was familiar with the approach, having talked with a research group studying it, but opted for this approach. ACO is not guaranteed to terminate; dominator-search does. He and I also talked about the problem our company is facing, and I drew out the problem on a napkin. He said he had studied resource-constrained search paths for the problem (vehicles that have a limited fuel supply), and asked me to provide a more detailed problem summary, so we could look at ACO vs. dominator approaches.
Life Lessons
  • Luis' talk, while given in an abstract, theoretical, graph-theory style, reeked of experience from solving real-world problems. More presentations at PADL should have this same smell.
  • I was sure that ACO was the only way to go with our problem-space, and after talking with Luis, I'm still pretty sure of this, but I've also become aware of another possibility: that the problem, once specified, may turn out to be so trivial as to need none of this -- the "shortest path" may actually turn out to be the smallest great circle, or some such. If one knows exactly what the problem is (intuitively), then a specification description might itself contain the solution.
  • Know what else is out there. I'm glad that Luis knew about ACO, considered its merits and decided on the dominator approach. I don't necessarily agree with his conclusions, but I admire his thoroughness.
Talk Pointers
2:30 pm  Adding constraint solving to Mercury
 Ralph Becket, et al.; presented by Zoltan Somogyi (co-author)
 
Zoltan covered what the Mercury team is doing to add constraint solvers as native constructs in the Mercury language. One interesting thing that came out of this topic is that the predicates that add constraints to the solver are declarative, but those that read the state are not. This, of course, raised eyebrows in the room (particularly Phil Wadler's, who asked a question about it). Zoltan explained this choice by stating that the solver should be logically consistent no matter the order in which constraints are added (making adding constraints declarative, or order-independent), but reading out the solver's state (finding the current solution set) actually results in a "write" elsewhere (into users of the constraint solvers), in a way destructively changing the state of the program at large, and therefore a read is not declarative. Zoltan owned that this is completely opposite to prevalent declarative wisdom, but, when examined in this light, is consistent with the prevalent view.
As always with these presentations, a holy war exploded. This case: Prolog vs. Mercury, on the subject of typing. How is this even remotely related to the topic of the presentation? Isn't it sweet that one can go to these genteel sessions to peer into the vaulted chambers of academe? I always love a good brawl. *sigh* I would offer some advice about how such fisticuffs over trivialities cement the view of the world at large that researchers are out of touch with reality, and how that consequently makes adoption of research ideas all the more difficult, but I know I would only be 1) wasting my breath and 2) starting another pointless flame war.
Life Lessons
  • Constraints ... who needs 'em? Dunno; but that's a good thing that came out of this symposium: it points out things I don't know that I don't know, so I bought Constraint-Based Local Search by Pascal Van Hentenryck and Laurent Michel, so I can learn a bit more about the topic.
Talk Pointers
3:00 pm  A Hybrid BDD and SAT Finite Domain Constraint Solver
 Peter Hawkins (presenter) and Peter Stuckey
 
Peter's presentation piggybacked on Zoltan's in that he used the Mercury constraint solver for this work. Peter noticed that SAT solvers are very fast (at the cost of expressivity: their expressions are booleans in propositional logic, ugh!), because SAT solvers 'learn' from rejected branches, whereas regular constraint solvers perform not so well (but allow inequalities, etc.). So, what he did was use a binary decision diagram (the BDD) to accumulate learned knowledge from rejected branches on the regular constraint solver in the form of propositions from a SAT solver. He also did some redundancy elimination. The profiled results for his hybrid were very good, execution-time-wise. Not only that: because of this 'learning', his constraint solver was able to tackle problems that other constraint solvers choked on (SAT solvers also suffered because their limited syntax prevented formulating some constraint problems). Very cool.
BUT after the presentation I approached Peter. After complimenting his presentation, I stated that it made no sense to me, as I did not see a real-world application for constraint-based programming. Peter was unable to provide examples to me [that was exceedingly disappointing], but did refer me to Pascal as an authoritative source, as his area of research is constraints.
Life Lessons
  • Use information learnt from rejected branches to improve the search on new branches.
4:00 pm  Efficient top-down set-sharing analysis using cliques
 Jorge Navas, Francisco Bueno and Manuel Hermenegildo (presenter)
 
This presentation was about proving properties of Prolog programs, such as parameter types and determinism, by using cliques (sets of relations). Manuel demonstrated the system in action; it was slick! It gave runtime feedback on a series of asserts he made about the program (the assertions were ':-'/1 declarations in Ciao Prolog). The key point of this presentation was that cliques could be represented as power sets, so that 6 relations (xyz, xy, xz, x, y, z) could be reduced to one powerset (over xyz). Then, by widening some assumptions, the system was able to make more general inferences, but at the cost of complete correctness.
Life Lessons
  • Data representation! Using the power set, instead of enumerating all the relations explicitly, saves both space and time, simplifying the programming task and the subsequent debugging/maintenance.
4:30 pm  Automatic Verification of a Model Checker by Reflection
 Bow-Yaw Wang
 
Bow needed to verify the correctness of his model checker (built for what practical purpose? I was unable to determine this), and had an epiphany: the model checker could itself be checked by reflection; that is, by building a model of the model checker and verifying the correctness of that model. Of course, this raises the question (which, of course, was asked after the presentation, but, surprisingly, not by me (someone else beat me to it)): how does one stop this line of induction? Bow really didn't have a satisfying answer for this (but, practically speaking, one can make a pretty good determination that "enough" rigor has been applied...).
Model checking is all well and good, I suppose, but, from what I've seen of it, it seems like a silly exercise: pre-production model checking is based on requirements that will change during the development process, and post-production model checking is a foregone conclusion ("Hey! this working system really works! Fancy that!"). I'm filing model checking under 'works of "theoretical" interest'.

Day 2: Tuesday, January 10, 2006

9:10 am 8  Generic Cut Actions for External Prolog Predicates
 Tiago Soares, Ricardo Rocha and Michel Ferreira (presenter)
 
The presentation noted that some Prologs, including XSB (*cough* Big Surprise *cough* tabling *cough*), had problems interacting with, e.g., C, particularly with database cursors or other persistent structures that required eventual deallocation. The problem is especially noticeable when a cut ('!'/0) in the Prolog code prunes away the goal that calls the C deallocation routine (go figure).
Michel's solution was to modify the (in this case, Yap) Prolog compiler so that the foreign interface required a C function to call when Prolog encountered a cut. This C function is an extra argument for each predicate defined by a foreign (C) function.
This solves the problem for memory leaks, but I asked a different question of the group. Dynamic languages pay a penalty whenever they make a foreign call, something on the order of a factor of 40 for marshalling and unmarshalling data between the languages. I asked whether there was a way to represent the foreign interface natively, e.g. a database wrapper in Prolog, not C; or, generally, whether there is a way to eliminate this penalty.
The group shot me down. Zoltan said you either pay the penalty, or write your own database, which (the latter) is a losing proposition. Others concurred, saying that was in the nature of dynamic languages. Zoltan later expressed surprise at the size of the penalty, saying he observed only a factor of 2 for foreign calls in Mercury [*cough* strongly typed means less marshalling *cough* Mercury's not dynamic *cough*].
In a separate conversation over lunch, Michel explained to me the raison d'être for this system was they built a classic inductive system and stored the deductive database in MySQL (?), as the deductions were numerous. I asked if his group had solved the problem of inductive systems: a very slight perturbation in the data causes such systems to fall into an undefined state or causes wildly incorrect resulting rules. He said they hadn't, but were looking at that problem.
Life Lessons
  • Use statistical methods for induction (Bayesian or neural); classical inductive systems are just too delicate to be useful with real-world problems.
9:40 am  Tabling in Mercury: Design and Implementation
 Zoltan Somogyi (presenter) and Konstantinos Sagonas
 
Zoltan presented tabling in Mercury given the constraints of the type system. Given the system we are developing, tabling is not apropos; I didn't get much out of this talk.
10:10 am  Incremental Evaluation of Tabled Prolog: Beyond Pure Logic Programs
 Diptikalyan Saha and C. R. Ramakrishnan
 
Okay, here's the thing: you all now know that tabling is not one of my favorite topics, so the point of having two talks on a compiler implementation detail is what, again? To prove that XSB Prolog doesn't do tabling correctly presently? Isn't that clear enough already?
Anyway, what's the point of going beyond pure logic? Isn't that going in a very bad direction? Perhaps one should explore what happens when one moves toward pure logic, not away from it? (hint, hint, Mercury, *cough*)
Life Lessons
  • Never! Ever! use XSB Prolog: it tables incorrectly and requires the user to turn off tabling explicitly!
  • Accommodating errors as a goal of design is erroneous! Instead of accommodating flaws, eliminate them (perhaps by going to a better system).
10:40 am  Controlling search space materialization in a practical declarative debugger
 Ian MacLarty and Zoltan Somogyi (presenter)
 
Holy Smart Debugger Design, Batman! This debugger, instead of having you step through the code in the traditional manner, does an inductive debugging search: it presents you with a predicate with instantiated arguments and the result, and asks if the result was expected; if it wasn't, it drills into the predicate, seeking the cause of the disconnect; if it was, it moves on. Groovy!
Life Lessons
  • Prolog's traditional debugging system is wonderful, much better than most everything else out there, so it's unthinkable that something could be better -- imagine the impossible; then create it
  • Having the computer handle drudge work, repetitive tasks, and assisting the user in decision making ... What a concept!
2:00 pm  LINQ: Reconciling objects, relations and XML in the .NET framework: a Personal Perspective
 Erik Meijer
 
Erik talked about the sociological aspects of linking a semi-equivalent of Haskell's typeclasses into Visual Basic, making sure it was backwards compatible, even down to the IL. He also showed how XML syntax could be woven into VB procedures, allowing the 'copy-and-paste' coding style (the usual XML element instantiation remains for those who wish to do it the longer, or more programmatic, way). He followed with a whole side monologue about how the DOM-XML is bad (oh, my goodness, how can that possibly be!), but not for the usual reasons (ummmm, that would be, all of them?) but because an element is entirely dependent on a document instance to exist, so things like moving a set of elements from one document to embed into another are ridiculously hard. He solves this problem by reifying elements independently of the document. He solved this problem, and probably thousands of others have, too, using the same approach. That's called 'reuse', right? One of the thrusts of his presentation was that results from research were working their way into the mainstream software engineering community.
Okay, a question: anything that permits and encourages bad programming ('copy-and-paste' coding) ... that's good?
3:30 pm  A Generic Code Browser with a Declarative Configuration Language
 Kris De Volder
 
Finally! a practical application! Kris demonstrated a plugin for Eclipse (the Java IDE) that used a Mercury-like scripting language to create on-demand code browsers. Slick! Kris was also well-informed and reasonable: fully recognizing that he had a small acceptance window ('5 seconds') and a tiered expertise base. So, he designed the tool so that most features were automatic (GUI-selectable), some features were obtainable via a regex-like query system, and finally advanced features required the full language. He also saw his niche: smack in between the rigid prebuilt GUI browsers and the nothingness of rolling your own browser from scratch, and he documented this niche thoroughly. Bravo! an unqualified success!
4:00 pm  Translating Description Logic Queries to Prolog
 Zsolt Nagy, Gergely Lukácsy (presenter) and Péter Szeredi
 
Gergely presented a system that he and his colleagues had developed that converts description logic assertions and queries (in the ALC description language, which is developed from, and is, a frame logic) down to Prolog terms. He spent some time showing the differences between ALC and Prolog, particularly highlighting Prolog's closed-world model (where if a fact is not known, it is false) and ALC's open-world assumption (OWA) model (where something must be stated as false to be false; if something is unknown, then it is simply unknown). This, of course, impacts the semantics of negation: Prolog's negation as failure is not congruent with the OWA model. But it also affects transitive closures: he demonstrated Prolog's weakness (and the strengths of frame logic) with the canonical Oedipus/Iocaste patricide example (which brought forth chuckles from the audience when he blushed at the moral implications of his example).
Given all this, Gergely demonstrated the system efficiently reducing ALC terms and queries down to Prolog. He also compared the results against other systems. This system was very much faster than the others. Gergely pointed out, in fairness, that the other systems also handled other description logic languages, so they were perhaps not fully optimized for ALC. No matter: this system is a full frame logic implementation, and it performs well, even finding solutions to queries on which other systems do not terminate, and it is very effective at ignoring "noise" in the knowledge base (noise being in the form of many other terms that have no relation to the queries being posed -- other systems lag exponentially in the face of larger, noisier knowledge bases).
Life Lessons
  • If you are a father, do not allow any of your sons to be named 'Oedipus'!
  • Too much generality in a tool can kill what you need to do. The best tool for the job may not be the most featureful.
Talk Pointers
4:30 pm  Querying Complex Graphs
 Yanhong A. Liu (presenter) and Scott D. Stoller
 
Annie presented a system based on sets and regular expressions that allowed intuitive querying of (semi-?)structured data. This system converted the queries to datalog equivalents and eventually rendered the query as C code. The interesting thing about this system was that the query itself was only half the answer; the other half was a measure of the complexity of the query. In this way the query could be reformulated so that the result would entail less complexity. Very sweet!
Life Lessons
  • You don't miss what you don't know. Anyone, on seeing how complex their system is, sees the solid benefit of this measurement. Why is this not everywhere? Because nobody knows it is possible.

Endnotes

1As all of the talks in the "Practical" "Applications" session were neither practical (they were all used to further research or as dissertations) nor applications (several speakers throughout the conference, when I queried them about aspects of their presentations, stated that the works they discussed were still under development), I question the accuracy of the topic heading for these talks. A saving grace is that all of the talks on this topic did have results that were practical and immediately usable for real-world applications, unlike what some of the other presentations had to offer.
2
3
4Yes, there were two presentations on tabling. Both of them I found to be very "useful" in a "theoretical" sense of the word.
5Okay, okay, so I am looking into constraints. This does point out an issue with the symposium: it may point out things that I don't know I don't know, but it wasn't very helpful in educating me about the practical applications in these areas.
6I think a more appropriate term for 'description logics' or 'ontologies' is 'hooey'. Bluntly, description logics are as innovative and as useful as Java is ('the most distressing thing to hit computing since MS-DOS', Alan Kay). With the set of claims proponents are making grandiosely about knowledge representation, which I must remind everyone is a similar set of claims the AI community was making for symbolic representation of knowledge (what, again, is description logic? Oh, the 'symbolic representation of knowledge'? Ah, yes, we have a winner here!), in the same self-satisfied tone, just before AI went dark for twenty years, AND with the bedfellows it has (whispered: 'eggs'-'em'-'ill'), I am shocked that more gullible consumers of this hogwash haven't made the obvious connections and avoided this dead end...
7The 'random' in "'random' testing" means 'comprehensive with random generation of parameters', not 'a random selection [e.g. not comprehensive] of possible tests'.
8This is a departure from the published schedule; the invited speaker for the morning was a no-show.


(originally published January 16, 2006)