Extensions to the Basic Learning Rules

SNoW provides a large number of options that modify the behavior of the basic update rules. These include feature eligibility, the discarding of features, conditional prediction based on a prediction threshold, and others. All of them are selected on the command line, and every command-line parameter is described in Chapter 5.

Below we describe only the main algorithmic extensions of the basic update rules. These training policies and update rules are implemented independently of one another. As a consequence, for example, the true multi-class (Constraint Classification) training policy [Har-Peled et al., 2002] can be invoked with either the Perceptron or the Winnow update rule. Similarly, one can enable a function approximation (regression) training policy with Perceptron to perform a stochastic approximation of Gradient Descent [Mitchell, 1997], or with Winnow to perform Exponentiated Gradient Descent [Kivinen and Warmuth, 1997]. In addition, each example can be applied only to selected targets when using Winnow or Perceptron, as in the sequential model [Even-Zohar and Roth, 2001]. Finally, the outputs of individual nodes can be cached and processed along with the outputs of other nodes to produce a more complicated decision support mechanism [4.8] (as in, e.g., [Golding and Roth, 1999]).
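
To make the regression contrast concrete, the following minimal Python sketch shows one update of each kind under a squared-error loss; it is an illustration under assumed names, not SNoW's own code. The additive step is a single iteration of stochastic gradient descent, while the multiplicative step is a single iteration of (unnormalized) exponentiated gradient descent; both correct the same prediction error, but the former moves the weights additively and the latter multiplicatively.

    import numpy as np

    def perceptron_gd_step(w, x, y, eta=0.1):
        # Additive update: one stochastic gradient descent step on
        # the squared error (1/2)(y - w.x)^2 [Mitchell, 1997].
        y_hat = np.dot(w, x)
        return w + eta * (y - y_hat) * x

    def winnow_eg_step(w, x, y, eta=0.1):
        # Multiplicative update: one unnormalized exponentiated
        # gradient step on the same loss [Kivinen and Warmuth, 1997].
        # (The normalized EG variant would also rescale the weights
        # to sum to one.)
        y_hat = np.dot(w, x)
        return w * np.exp(eta * (y - y_hat) * x)

    # Illustrative usage on a single sparse example.
    x = np.array([1.0, 0.0, 1.0])
    y = 1.0
    w_additive = perceptron_gd_step(np.zeros(3), x, y)
    w_multiplicative = winnow_eg_step(np.ones(3), x, y)

Note that the multiplicative rule starts from positive weights (here all ones), in keeping with Winnow's requirement that weights remain positive.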



Footnotes

[4.8] See Section 4.3.5 for a more concrete description of this mechanism.

