Regularization in SNoW is implemented as a small modification to the Winnow and Perceptron update rules so that they try to fit a ``thick hyperplane'' between positive and negative examples [Dagan et al., 1997; Grove and Roth, 1998; Li et al., 2002]. In fact, SNoW allows a different thickness for positive and negative examples, which can be used to incorporate a non-symmetric loss function. It is not difficult to show that the modified update rules still have a mistake bound that depends on the margin of the data (with the additional thickness parameter).
Specifically, if the floating point thickness parameters are set to $\alpha$ and $\beta$, and target node $t$ encounters a positive example, its activation will have to be greater than or equal to $\theta_t + \alpha$ (where $\theta_t$ is $t$'s threshold) for SNoW to interpret the node's prediction as correct. Otherwise, it will be promoted. Similarly, if $t$ encounters a negative example, SNoW will interpret $t$'s prediction as correct if its activation is less than $\theta_t - \beta$. Otherwise, it will be demoted.
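As a concrete illustration, the following Python fragment sketches a Winnow-style update with a thick separator under the assumptions above; the names (\texttt{alpha}, \texttt{beta}, \texttt{theta}, \texttt{promotion}, \texttt{demotion}) are illustrative placeholders and do not correspond to SNoW's actual internals.
\begin{verbatim}
# Sketch of a Winnow update with a thick separator.  alpha/beta are
# the positive/negative thickness parameters and theta is the target
# node's threshold; all names are illustrative, not SNoW's internals.

def thick_winnow_update(weights, features, label, theta=1.0,
                        alpha=0.5, beta=0.5,
                        promotion=1.5, demotion=0.5):
    """Update sparse weights in place for one example.

    features: set of active feature ids (sparse binary example)
    label:    +1 for a positive example, -1 for a negative one
    """
    # Unseen features start with weight 1.0, as in Winnow.
    activation = sum(weights.get(f, 1.0) for f in features)
    if label == +1 and activation < theta + alpha:
        # Positive example not beyond the thick separator: promote.
        for f in features:
            weights[f] = weights.get(f, 1.0) * promotion
    elif label == -1 and activation >= theta - beta:
        # Negative example not below the thick separator: demote.
        for f in features:
            weights[f] = weights.get(f, 1.0) * demotion

w = {}
thick_winnow_update(w, {0, 3, 7}, -1)  # 3.0 >= 0.5, so demote
print(w)  # each active feature's weight demoted to 0.5
\end{verbatim}
Note that the thickness parameters only change when an update fires; the promotion and demotion steps themselves remain the usual multiplicative Winnow updates.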
See option -S for usage details.