The number of class labels considered as possible predictions for a given example, also called the size of the ``confusion set'', plays an important role in the performance of multi-class learners. Typically, the larger the confusion set is, the lower the average prediction accuracy is.
SNoW supports a dynamic decision, per example, on which targets should be
included in the confusion set. Assume that $t_1, t_2, \ldots, t_n$ are the
class labels in a learning scenario. SNoW allows the user to include, along
with the example, a subset of targets in $\{t_1, t_2, \ldots, t_n\}$. That subset
is viewed as a specification of which target nodes this example should be
presented to. This can be done both in training and testing.
The determination of this subset can be done by an external process if that
process is capable of recognizing, for a given example, that certain labels
can clearly be ruled out. See [Even-Zohar and Roth, 2001] for a study of the
sequential model and [Li and Roth, 2002] for an application.
The Sequential Model can be thought of as a training policy, although no command-line parameter needs to be set to enable it. Examples simply need to have the appropriate information added to them. See Chapter 6 for details on an example's format both with and without the sequential model.
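The effect of a per-example confusion set can be sketched as follows. This is an illustrative Python sketch of the idea, not SNoW's actual implementation; the scoring dictionary, label names, and the `predict` function are hypothetical, standing in for the activations that SNoW's target nodes would produce.

```python
def predict(scores, confusion_set=None):
    """Return the highest-scoring target label.

    If confusion_set is given, only targets in that set are
    considered -- mimicking an example that was presented to a
    restricted subset of target nodes.
    """
    if confusion_set is not None:
        scores = {t: s for t, s in scores.items() if t in confusion_set}
    return max(scores, key=scores.get)

# Hypothetical activations for one example across three target nodes.
scores = {"noun": 0.4, "verb": 0.9, "adj": 0.7}

full = predict(scores)                         # all targets compete
restricted = predict(scores, {"noun", "adj"})  # "verb" ruled out externally
```

In this sketch, restricting the confusion set changes the prediction: with all targets competing the highest activation wins, while with `"verb"` excluded the decision is made among the remaining candidates only, which is exactly the benefit of a smaller confusion set described above.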