`public class SparseWinnow extends LinearThresholdUnit`

`Learner.labeler` is a single discrete classifier whose returned feature values are available through the `Classifier.allowableValues()` method. The second value returned from `Classifier.allowableValues()` is treated as "positive", and it is assumed there are exactly 2 allowable values. Assertions will produce error messages if these assumptions do not hold.
This algorithm's user-configurable parameters are stored in member fields of this class. They may be set via either a constructor that names each parameter explicitly or a constructor that takes an instance of `Parameters` as input. The documentation in each member field in this class indicates the default value of the associated parameter when using the former type of constructor. The documentation of the associated member field in the `Parameters` class indicates the default value of the parameter when using the latter type of constructor.
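The multiplicative update this class implements can be sketched outside the library. The following stand-alone class is an illustrative version of sparse Winnow (it is not the LBJava implementation, and all names in it are hypothetical): weights default to 1, a missed positive example promotes active weights by `alpha^x_i`, and a false positive demotes them by `beta^x_i`:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sparse Winnow sketch (not LBJava's SparseWinnow).
// Weights default to 1.0; promotion multiplies active weights by alpha^x_i,
// demotion by beta^x_i, with alpha > 1 > beta > 0.
class WinnowSketch {
    final double alpha, beta, threshold;
    final Map<Integer, Double> w = new HashMap<>();

    WinnowSketch(double alpha, double beta, double threshold) {
        this.alpha = alpha;
        this.beta = beta;
        this.threshold = threshold;
    }

    double score(int[] features, double[] values) {
        double s = 0;
        for (int i = 0; i < features.length; ++i)
            s += w.getOrDefault(features[i], 1.0) * values[i];
        return s;
    }

    boolean predict(int[] features, double[] values) {
        return score(features, values) >= threshold;
    }

    // Mistake-driven: promote on a missed positive, demote on a false positive.
    void learn(int[] features, double[] values, boolean label) {
        if (predict(features, values) == label) return;
        double base = label ? alpha : beta;   // w_i *= base^x_i
        for (int i = 0; i < features.length; ++i) {
            double old = w.getOrDefault(features[i], 1.0);
            w.put(features[i], old * Math.pow(base, values[i]));
        }
    }

    public static void main(String[] args) {
        WinnowSketch learner = new WinnowSketch(2.0, 0.5, 1.5);
        int[] pos = {0, 1};            // features 0 and 1 active => positive
        int[] neg = {2, 3};
        double[] ones = {1.0, 1.0};
        for (int round = 0; round < 5; ++round) {
            learner.learn(pos, ones, true);
            learner.learn(neg, ones, false);
        }
        System.out.println(learner.predict(pos, ones)); // true
        System.out.println(learner.predict(neg, ones)); // false
    }
}
```

Because the weights only ever change multiplicatively, features that are never active on mistakes keep their initial weight, which is what makes the algorithm attractive for sparse, high-dimensional feature spaces.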
| Modifier and Type | Class and Description |
|---|---|
| `static class` | `SparseWinnow.Parameters` Simply a container for all of `SparseWinnow`'s configurable parameters. |
| Modifier and Type | Field and Description |
|---|---|
| `protected double` | `beta` The rate at which weights are demoted; default equal to `1 / LinearThresholdUnit.learningRate`. |
| `static double` | `defaultInitialWeight` Default for `LinearThresholdUnit.initialWeight`. |
| `static double` | `defaultLearningRate` Default for `LinearThresholdUnit.learningRate`. |
| `static double` | `defaultThreshold` Default for `LinearThresholdUnit.threshold`. |
Fields inherited from class `LinearThresholdUnit`: allowableValues, bias, defaultThickness, defaultWeightVector, initialWeight, learningRate, negativeThickness, positiveThickness, threshold, weightVector

Fields inherited from class `Learner`: candidates, encoding, extractor, labeler, labelLexicon, lcFilePath, lexFilePath, lexicon, lossFlag, predictions, readLexiconOnDemand

Fields inherited from class `Classifier`: containingPackage, name
| Constructor and Description |
|---|
| `SparseWinnow()` `LinearThresholdUnit.learningRate`, `beta`, and `LinearThresholdUnit.threshold` take default values, while the name of the classifier gets the empty string. |
| `SparseWinnow(double a)` Sets `LinearThresholdUnit.learningRate` to the specified value, `beta` to `1 / LinearThresholdUnit.learningRate`, and `LinearThresholdUnit.threshold` takes the default, while the name of the classifier gets the empty string. |
| `SparseWinnow(double a, double b)` Sets `LinearThresholdUnit.learningRate` and `beta` to the specified values, and `LinearThresholdUnit.threshold` takes the default, while the name of the classifier gets the empty string. |
| `SparseWinnow(double a, double b, double t)` Sets `LinearThresholdUnit.learningRate`, `beta`, and `LinearThresholdUnit.threshold` to the specified values, while the name of the classifier gets the empty string. |
| `SparseWinnow(double a, double b, double t, double pt)` Use this constructor to fit a thick separator, where both the positive and negative sides of the hyperplane will be given the specified thickness, while the name of the classifier gets the empty string. |
| `SparseWinnow(double a, double b, double t, double pt, double nt)` Use this constructor to fit a thick separator, where the positive and negative sides of the hyperplane will be given the specified separate thicknesses, while the name of the classifier gets the empty string. |
| `SparseWinnow(double a, double b, double t, double pt, double nt, SparseWeightVector v)` Use this constructor to specify an alternative subclass of `SparseWeightVector`, while the name of the classifier gets the empty string. |
| `SparseWinnow(SparseWinnow.Parameters p)` Initializing constructor. |
| `SparseWinnow(String n)` |
| `SparseWinnow(String n, double a)` Sets `LinearThresholdUnit.learningRate` to the specified value, `beta` to `1 / LinearThresholdUnit.learningRate`, and `LinearThresholdUnit.threshold` takes the default. |
| `SparseWinnow(String n, double a, double b)` Sets `LinearThresholdUnit.learningRate` and `beta` to the specified values, and `LinearThresholdUnit.threshold` takes the default. |
| `SparseWinnow(String n, double a, double b, double t)` Sets `LinearThresholdUnit.learningRate`, `beta`, and `LinearThresholdUnit.threshold` to the specified values. |
| `SparseWinnow(String n, double a, double b, double t, double pt)` Use this constructor to fit a thick separator, where both the positive and negative sides of the hyperplane will be given the specified thickness. |
| `SparseWinnow(String n, double a, double b, double t, double pt, double nt)` Use this constructor to fit a thick separator, where the positive and negative sides of the hyperplane will be given the specified separate thicknesses. |
| `SparseWinnow(String n, double a, double b, double t, double pt, double nt, SparseWeightVector v)` Use this constructor to specify an alternative subclass of `SparseWeightVector`. |
| `SparseWinnow(String n, SparseWinnow.Parameters p)` Initializing constructor. |
| Modifier and Type | Method and Description |
|---|---|
| `double` | `computeLearningRate(int[] exampleFeatures, double[] exampleValues, double s, boolean label)` Returns the learning rate, which is `LinearThresholdUnit.learningRate` (alpha) if it is a positive example, and `beta` if it is a negative example. |
| `void` | `demote(int[] exampleFeatures, double[] exampleValues, double rate)` Demotion is simply `w_i *= beta^x_i`. |
| `double` | `getBeta()` Returns the current value of the `beta` variable. |
| `double` | `getLearningRate()` Returns the current value of the `LinearThresholdUnit.learningRate` variable. |
| `Learner.Parameters` | `getParameters()` Retrieves the parameters that are set in this learner. |
| `void` | `promote(int[] exampleFeatures, double[] exampleValues, double rate)` Promotion is simply `w_i *= learningRate^x_i`. |
| `void` | `read(edu.illinois.cs.cogcomp.core.datastructures.vectors.ExceptionlessInputStream in)` Reads the binary representation of a learner with this object's run-time type, overwriting any and all learned or manually specified parameters as well as the label lexicon but without modifying the feature lexicon. |
| `void` | `setBeta(double t)` Sets the `beta` member variable to the specified value. |
| `void` | `setLearningRate(double t)` Sets the `LinearThresholdUnit.learningRate` member variable to the specified value. |
| `void` | `setParameters(SparseWinnow.Parameters p)` Sets the values of parameters that control the behavior of this learning algorithm. |
| `void` | `update(int[] exampleFeatures, double[] exampleValues, double base)` This method performs an update `w_i *= base^x_i`, initializing weights in the weight vector as needed. |
| `void` | `write(edu.illinois.cs.cogcomp.core.datastructures.vectors.ExceptionlessOutputStream out)` Writes the learned function's internal representation in binary form. |
| `void` | `write(PrintStream out)` Writes the algorithm's internal representation as text. |
Methods inherited from class `LinearThresholdUnit`: allowableValues, classify, clone, discreteValue, featureValue, forget, getAllowableValues, getBias, getInitialWeight, getNegativeThickness, getPositiveThickness, getThreshold, getWeightVector, initialize, learn, score, score, scores, setInitialWeight, setLabeler, setNegativeThickness, setParameters, setPositiveThickness, setThickness, setThreshold, shouldDemote, shouldPromote

Methods inherited from class `Learner`: classify, classify, classify, classify, countFeatures, createPrediction, createPrediction, demandLexicon, discreteValue, discreteValue, doneLearning, doneWithRound, emptyClone, featureValue, featureValue, getCurrentLexicon, getExampleArray, getExampleArray, getExtractor, getLabeler, getLabelLexicon, getLexicon, getLexiconDiscardCounts, getLexiconLocation, getModelLocation, getPrunedLexiconSize, learn, learn, learn, learn, read, readLabelLexicon, readLearner, readLearner, readLearner, readLearner, readLearner, readLearner, readLexicon, readLexicon, readLexiconOnDemand, readLexiconOnDemand, readModel, readModel, readParameters, realValue, realValue, realValue, save, saveLexicon, saveModel, scores, scores, scoresAugmented, setCandidates, setEncoding, setExtractor, setLabelLexicon, setLexicon, setLexiconLocation, setLexiconLocation, setLossFlag, setModelLocation, setModelLocation, setParameters, setReadLexiconOnDemand, unclone, unsetLossFlag, write, writeLexicon, writeModel, writeParameters

Methods inherited from class `Classifier`: classify, discreteValueArray, getCompositeChildren, getInputType, getOutputType, realValueArray, test, toString, valueIndexOf
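The `beta` field's documented default, `1 / LinearThresholdUnit.learningRate`, makes promotion and demotion exact inverses of each other: one promotion followed by one demotion of the same feature restores its original weight. The sketch below (all names are illustrative, not LBJava's API) mirrors the `computeLearningRate` contract described above, choosing alpha on positive examples and beta on negative ones:

```java
// Illustrates the documented default beta = 1 / learningRate, under which a
// promotion and a demotion on the same feature cancel exactly.
// WinnowRateDemo and chooseRate are hypothetical names, not part of LBJava.
class WinnowRateDemo {
    // Mirrors the documented computeLearningRate contract:
    // alpha (the learning rate) on positive examples, beta on negative ones.
    static double chooseRate(boolean positiveExample, double alpha, double beta) {
        return positiveExample ? alpha : beta;
    }

    public static void main(String[] args) {
        double alpha = 2.0;
        double beta = 1.0 / alpha;            // the documented default for beta
        double w = 1.0;                       // initial weight
        w *= Math.pow(chooseRate(true, alpha, beta), 1.0);   // promote: w = 2.0
        w *= Math.pow(chooseRate(false, alpha, beta), 1.0);  // demote:  w = 1.0
        System.out.println(w); // 1.0
    }
}
```

Choosing `beta` independently (via the two-rate constructors) breaks this symmetry, letting demotions be more or less aggressive than promotions.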
**Field Detail**

`public static final double defaultLearningRate`
Default for `LinearThresholdUnit.learningRate`.

`public static final double defaultThreshold`
Default for `LinearThresholdUnit.threshold`.

`public static final double defaultInitialWeight`
Default for `LinearThresholdUnit.initialWeight`.

`protected double beta`
The rate at which weights are demoted; default equal to `1 / LinearThresholdUnit.learningRate`.

**Constructor Detail**

`public SparseWinnow()`
`LinearThresholdUnit.learningRate`, `beta`, and `LinearThresholdUnit.threshold` take default values, while the name of the classifier gets the empty string.

`public SparseWinnow(double a)`
Sets `LinearThresholdUnit.learningRate` to the specified value, `beta` to `1 / LinearThresholdUnit.learningRate`, and `LinearThresholdUnit.threshold` takes the default, while the name of the classifier gets the empty string.
Parameters: `a` - The desired value of the promotion parameter.

`public SparseWinnow(double a, double b)`
Sets `LinearThresholdUnit.learningRate` and `beta` to the specified values, and `LinearThresholdUnit.threshold` takes the default, while the name of the classifier gets the empty string.
Parameters: `a` - The desired value of the promotion parameter. `b` - The desired value of the demotion parameter.

`public SparseWinnow(double a, double b, double t)`
Sets `LinearThresholdUnit.learningRate`, `beta`, and `LinearThresholdUnit.threshold` to the specified values, while the name of the classifier gets the empty string.
Parameters: `a` - The desired value of the promotion parameter. `b` - The desired value of the demotion parameter. `t` - The desired threshold value.

`public SparseWinnow(double a, double b, double t, double pt)`
Use this constructor to fit a thick separator, where both the positive and negative sides of the hyperplane will be given the specified thickness, while the name of the classifier gets the empty string.
Parameters: `a` - The desired value of the promotion parameter. `b` - The desired value of the demotion parameter. `t` - The desired threshold value. `pt` - The desired positive thickness.

`public SparseWinnow(double a, double b, double t, double pt, double nt)`
Use this constructor to fit a thick separator, where the positive and negative sides of the hyperplane will be given the specified separate thicknesses, while the name of the classifier gets the empty string.
Parameters: `a` - The desired value of the promotion parameter. `b` - The desired value of the demotion parameter. `t` - The desired threshold value. `pt` - The desired positive thickness. `nt` - The desired negative thickness.

`public SparseWinnow(double a, double b, double t, double pt, double nt, SparseWeightVector v)`
Use this constructor to specify an alternative subclass of `SparseWeightVector`, while the name of the classifier gets the empty string.
Parameters: `a` - The desired value of the promotion parameter. `b` - The desired value of the demotion parameter. `t` - The desired threshold value. `pt` - The desired positive thickness. `nt` - The desired negative thickness. `v` - An empty sparse weight vector.

`public SparseWinnow(SparseWinnow.Parameters p)`
Initializing constructor; sets all parameters to the settings in the given `SparseWinnow.Parameters` object.
Parameters: `p` - The settings of all parameters.

`public SparseWinnow(String n)`
Parameters: `n` - The name of the classifier.

`public SparseWinnow(String n, double a)`
Sets `LinearThresholdUnit.learningRate` to the specified value, `beta` to `1 / LinearThresholdUnit.learningRate`, and `LinearThresholdUnit.threshold` takes the default.
Parameters: `n` - The name of the classifier. `a` - The desired value of the promotion parameter.

`public SparseWinnow(String n, double a, double b)`
Sets `LinearThresholdUnit.learningRate` and `beta` to the specified values, and `LinearThresholdUnit.threshold` takes the default.
Parameters: `n` - The name of the classifier. `a` - The desired value of the promotion parameter. `b` - The desired value of the demotion parameter.

`public SparseWinnow(String n, double a, double b, double t)`
Sets `LinearThresholdUnit.learningRate`, `beta`, and `LinearThresholdUnit.threshold` to the specified values.
Parameters: `n` - The name of the classifier. `a` - The desired value of the promotion parameter. `b` - The desired value of the demotion parameter. `t` - The desired threshold value.

`public SparseWinnow(String n, double a, double b, double t, double pt)`
Use this constructor to fit a thick separator, where both the positive and negative sides of the hyperplane will be given the specified thickness.
Parameters: `n` - The name of the classifier. `a` - The desired value of the promotion parameter. `b` - The desired value of the demotion parameter. `t` - The desired threshold value. `pt` - The desired positive thickness.

`public SparseWinnow(String n, double a, double b, double t, double pt, double nt)`
Use this constructor to fit a thick separator, where the positive and negative sides of the hyperplane will be given the specified separate thicknesses.
Parameters: `n` - The name of the classifier. `a` - The desired value of the promotion parameter. `b` - The desired value of the demotion parameter. `t` - The desired threshold value. `pt` - The desired positive thickness. `nt` - The desired negative thickness.

`public SparseWinnow(String n, double a, double b, double t, double pt, double nt, SparseWeightVector v)`
Use this constructor to specify an alternative subclass of `SparseWeightVector`.
Parameters: `n` - The name of the classifier. `a` - The desired value of the promotion parameter. `b` - The desired value of the demotion parameter. `t` - The desired threshold value. `pt` - The desired positive thickness. `nt` - The desired negative thickness. `v` - An empty sparse weight vector.

`public SparseWinnow(String n, SparseWinnow.Parameters p)`
Initializing constructor; sets all parameters to the settings in the given `SparseWinnow.Parameters` object.
Parameters: `n` - The name of the classifier. `p` - The settings of all parameters.

**Method Detail**

`public void setParameters(SparseWinnow.Parameters p)`
Sets the values of parameters that control the behavior of this learning algorithm.
Parameters: `p` - The parameters.

`public Learner.Parameters getParameters()`
Retrieves the parameters that are set in this learner.
Overrides: `getParameters` in class `LinearThresholdUnit`

`public double getLearningRate()`
Returns the current value of the `LinearThresholdUnit.learningRate` variable.
Returns: The current value of the `LinearThresholdUnit.learningRate` variable.

`public void setLearningRate(double t)`
Sets the `LinearThresholdUnit.learningRate` member variable to the specified value.
Parameters: `t` - The new value for `LinearThresholdUnit.learningRate`.

`public double getBeta()`
Returns the current value of the `beta` variable.
Returns: The current value of the `beta` variable.

`public void setBeta(double t)`
Sets the `beta` member variable to the specified value.
Parameters: `t` - The new value for `beta`.

`public double computeLearningRate(int[] exampleFeatures, double[] exampleValues, double s, boolean label)`
Returns the learning rate, which is `LinearThresholdUnit.learningRate` (alpha) if it is a positive example, and `beta` if it is a negative example.
Overrides: `computeLearningRate` in class `LinearThresholdUnit`
Parameters: `exampleFeatures` - The example's array of feature indices. `exampleValues` - The example's array of feature values. `s` - The score. `label` - The example label.

`public void promote(int[] exampleFeatures, double[] exampleValues, double rate)`
Promotion is simply `w_i *= learningRate^x_i`.
Overrides: `promote` in class `LinearThresholdUnit`
Parameters: `exampleFeatures` - The example's array of feature indices. `exampleValues` - The example's array of feature values. `rate` - The learning rate at which the weights are updated.

`public void demote(int[] exampleFeatures, double[] exampleValues, double rate)`
Demotion is simply `w_i *= beta^x_i`.
Overrides: `demote` in class `LinearThresholdUnit`
Parameters: `exampleFeatures` - The example's array of feature indices. `exampleValues` - The example's array of feature values. `rate` - The learning rate at which the weights are updated.

`public void update(int[] exampleFeatures, double[] exampleValues, double base)`
This method performs an update `w_i *= base^x_i`, initializing weights in the weight vector as needed.
Parameters: `exampleFeatures` - The example's array of feature indices. `exampleValues` - The example's array of values. `base` - As described above.

`public void write(PrintStream out)`
Writes the algorithm's internal representation as text: `LinearThresholdUnit.learningRate`, `beta`, `LinearThresholdUnit.initialWeight`, `LinearThresholdUnit.threshold`, `LinearThresholdUnit.positiveThickness`, `LinearThresholdUnit.negativeThickness`, and finally `LinearThresholdUnit.bias`.

`public void write(edu.illinois.cs.cogcomp.core.datastructures.vectors.ExceptionlessOutputStream out)`
Writes the learned function's internal representation in binary form.
Overrides: `write` in class `LinearThresholdUnit`
Parameters: `out` - The output stream.

`public void read(edu.illinois.cs.cogcomp.core.datastructures.vectors.ExceptionlessInputStream in)`
Reads the binary representation of a learner with this object's run-time type, overwriting any and all learned or manually specified parameters as well as the label lexicon but without modifying the feature lexicon.
Overrides: `read` in class `LinearThresholdUnit`
Parameters: `in` - The input stream.

Copyright © 2016. All rights reserved.