In addition to the function approximation mode in
Section 4.3.3, SNoW implements another update rule in which the
weight update rate is not constant (as the promotion and demotion
parameters $\alpha$ and $\beta$ are for Winnow, or the learning rate
is for Perceptron) but rather depends on how far the activation
value is from the threshold. With Threshold-Relative Updating (option -t), an update always causes the activation of an example on which the
network has made a mistake to jump directly to the immediate vicinity of the
threshold, instead of taking a small step towards it.
Let $A_t$ be the set of active features in a
given example that are linked to target node
$t$, let $w_{t,i}$ be the weight
of feature $i$ in target
$t$, let $s_i$ be the strength of feature
$i$ in the
example, and let $\theta_t$ be the threshold at target node
$t$. Then
$$\Omega_t = \sum_{i \in A_t} s_i\, w_{t,i}$$
is the activation of target $t$
before updating.
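For concreteness, the activation computation can be sketched as follows. This is an illustrative Python sketch, not SNoW's implementation (SNoW itself is a standalone package); the dictionary representation and names are assumptions made for the example:

```python
# Sketch of the activation computation: the activation of target t is the
# sum, over active features i linked to t, of strength s_i times weight w_{t,i}.
# (Illustrative only, not SNoW's actual code.)

def activation(weights, example):
    """weights: {feature_id: w_ti} for one target node t;
    example: {feature_id: s_i}, the active features and their strengths."""
    return sum(s_i * weights[i] for i, s_i in example.items() if i in weights)

# Example: three active features, all linked to the target.
weights = {1: 0.5, 2: 1.0, 3: 2.0}
example = {1: 1.0, 2: 1.0, 3: 0.5}
print(activation(weights, example))  # 0.5 + 1.0 + 1.0 = 2.5
```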
The Winnow threshold-relative updating rule is:
$$w_{t,i} \leftarrow \frac{\gamma\,\theta_t}{\Omega_t}\, w_{t,i},$$
where $\gamma$ is a constant slightly greater than $1$
if the example is positive and slightly less than $1$ if it is
negative. Notice that in this case, following the update, we get:
$$\Omega_t' = \sum_{i \in A_t} s_i\, \frac{\gamma\,\theta_t}{\Omega_t}\, w_{t,i} = \gamma\,\theta_t.$$
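A minimal Python sketch of this multiplicative update (illustrative, not SNoW's code; `gamma` stands for the near-$1$ multiplier above):

```python
def winnow_threshold_relative(weights, example, theta, gamma):
    """Scale every active weight by gamma*theta/activation, so that the
    new activation is exactly gamma*theta (just past the threshold)."""
    omega = sum(s * weights[i] for i, s in example.items() if i in weights)
    factor = gamma * theta / omega
    for i in example:
        if i in weights:
            weights[i] *= factor
    return weights

weights = {1: 0.5, 2: 1.0}
example = {1: 1.0, 2: 2.0}        # activation = 0.5 + 2.0 = 2.5
winnow_threshold_relative(weights, example, theta=5.0, gamma=1.001)
new_omega = sum(s * weights[i] for i, s in example.items())
print(round(new_omega, 6))        # 5.005, i.e. gamma * theta
```

One multiplicative step thus replaces the usual sequence of promotions (or demotions) that plain Winnow would need to cross the threshold.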
Similarly, the update rule for Perceptron becomes:
$$w_{t,i} \leftarrow w_{t,i} + \frac{\mu\,\theta_t - \Omega_t}{m\, s_i},$$
where $\mu$ is the Perceptron algorithm parameter, slightly greater than $1$ if the
example is positive and slightly less than $1$ if it is negative, and
$m$ is the total number of active features in the example.
Again, it is easy to see that, following the update, we get:
$$\Omega_t' = \sum_{i \in A_t} s_i \left( w_{t,i} + \frac{\mu\,\theta_t - \Omega_t}{m\, s_i} \right) = \Omega_t + \left( \mu\,\theta_t - \Omega_t \right) = \mu\,\theta_t.$$
That is, the example's new activation will be roughly equal to the threshold, with a small buffer between the updated activation and the threshold. See option -t for details.
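The additive Perceptron version can be sketched the same way (illustrative Python, not SNoW's code; `mu` is the near-$1$ parameter and `m` the number of active features):

```python
def perceptron_threshold_relative(weights, example, theta, mu):
    """Add (mu*theta - activation)/(m*s_i) to each active weight, so
    that the new activation lands exactly at mu*theta."""
    active = [i for i in example if i in weights]
    m = len(active)                       # number of active features
    omega = sum(example[i] * weights[i] for i in active)
    for i in active:
        weights[i] += (mu * theta - omega) / (m * example[i])
    return weights

weights = {1: 0.5, 2: 1.0}
example = {1: 1.0, 2: 2.0}                # activation = 2.5
perceptron_threshold_relative(weights, example, theta=5.0, mu=1.0)
new_omega = sum(example[i] * weights[i] for i in example)
print(new_omega)                          # 5.0, i.e. mu * theta
```

Each active feature absorbs an equal $1/m$ share of the activation gap, which is why the per-feature correction is divided by both $m$ and the feature's strength $s_i$.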