Naive Bayes
In the case of naive Bayes, a feature's weight in a target node is simply the
logarithm of the fraction of positive examples (positive with respect to that
target) in which the feature is also active (see, e.g.,
[Roth, 1999, Roth, 1998]). In addition, the relative weight of the target is
used as a "prior", and a fixed smoothing weight is substituted when evaluating
active features that were never observed in training (see footnote 4.7).
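The weight computation described above can be sketched as follows. This is an illustrative reconstruction, not SNoW's actual implementation; the function name and data layout are assumptions:

```python
import math

def train_naive_bayes(examples):
    """Learn naive Bayes weights as described above.

    `examples` is a list of (target, active_features) pairs.  A feature's
    weight in a target node is the log of the fraction of that target's
    positive examples in which the feature is active; the target's prior
    weight is the log of its relative frequency among all examples.
    """
    counts, positives, total = {}, {}, 0
    for target, features in examples:
        positives[target] = positives.get(target, 0) + 1
        total += 1
        node = counts.setdefault(target, {})  # per-target co-occurrence counts
        for f in features:
            node[f] = node.get(f, 0) + 1

    weights, priors = {}, {}
    for target, node in counts.items():
        n = positives[target]
        priors[target] = math.log(n / total)  # relative weight of the target
        weights[target] = {f: math.log(c / n) for f, c in node.items()}
    return weights, priors
```

Note that each target's weights are computed only from the examples labeled with that target, in line with the training policy described below.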
Notice that when using Perceptron and Winnow with the default training policy,
all examples are presented to every target node, and an update may occur
whether or not that target is active in the example. This is not the case for
naive Bayes: each target node takes into account only the examples labeled
with it (i.e., the target's hypothesis is learned from positive examples
only, and training is not mistake-driven).
Naive Bayes does not apply a sigmoid function; its sigmoid activation is
simply its raw activation.
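Evaluation of a target node might then be sketched as below. The function name and the default smoothing value are illustrative assumptions; in SNoW the smoothing weight is controlled by the -b option:

```python
import math

def nb_activation(prior, weights, active_features, smooth=math.log(1e-10)):
    # Activation of one target node: the prior weight plus the weight of
    # every active feature.  Features never observed with this target in
    # training contribute a fixed smoothing weight instead (the default
    # here is purely illustrative, not SNoW's actual value).
    act = prior
    for f in active_features:
        act += weights.get(f, smooth)
    return act
```

Since no sigmoid is applied, this sum of logarithms is reported directly as the node's activation.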
Footnotes

4.7 See the -b command line parameter for more information on smoothing in naive Bayes.
Cognitive Computations
2004-08-20