private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException
    Strings are intern()ed.
    Throws: IOException, ClassNotFoundException
int arrayIndex
int arrayLength
int arrayIndex
int arrayLength
DiscreteFeature left
DiscreteFeature right
short valueIndex
short totalValues
ByteString identifier
    The identifier string distinguishes this Feature from other Features.
ByteString value
DiscreteFeature referent
ByteString identifier
    The identifier string distinguishes this Feature from other Features.
String identifier
    The identifier string distinguishes this Feature from other Features.
private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException
    Strings are intern()ed.
    Throws: IOException, ClassNotFoundException
FVector features
FVector labels
double weight
FeatureVector realCache
    Caches the result of the FeatureVector.makeReal() method.
int arrayIndex
int arrayLength
int arrayIndex
int arrayLength
ByteString identifier
    The identifier string distinguishes this Feature from other Features.
double value
String identifier
    The identifier string distinguishes this Feature from other Features.
double value
RealFeature referent
ByteString identifier
    The identifier string distinguishes this Feature from other Features.
String identifier
    The identifier string distinguishes this Feature from other Features.
Classifier labeler
String value
edu.illinois.cs.cogcomp.infer.ilp.ILPSolver solver
Object head
Learner weakLearner
int rounds
Learner[] weakLearners
double[] alpha
edu.illinois.cs.cogcomp.core.datastructures.vectors.OVector allExamples
String[] allowableValues
Learner weakLearner
int rounds
double learningRateA
String lossFunctionA
double[] diagonalVector
double[] weightVector
double[] gradientVector
boolean areVectorsInitialized
double learningRateP
String lossFunctionP
double bias
double initialBias
    The initial value of BiasedWeightVector.bias.
double bias
double beta
    Default: BinaryMIRA.defaultBeta. The learning rate changes as a function of beta.
double beta
    Default: BinaryMIRA.defaultBeta. The learning rate changes as a function of beta.
edu.illinois.cs.cogcomp.core.datastructures.vectors.IVector parents
    The counts in this vector (parallel to Lexicon.lexiconInv) serve a dual purpose: first, to indicate by absolute value the number of other features currently stored in this object that have the corresponding feature as a child, and second, to indicate by sign if the corresponding feature has been marked for removal (see the sketch below).
Lexicon parentLexicon
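The sign/magnitude convention used by the parents field above can be sketched as follows; this is a minimal illustration using a plain int[] in place of the IVector, not ChildLexicon's actual code.

    class ParentCounts {
        private final int[] counts;   // one entry per feature in the lexicon

        ParentCounts(int lexiconSize) { counts = new int[lexiconSize]; }

        // Another stored feature has this feature as a child: grow the magnitude, keep the sign.
        void addChildReference(int f)   { counts[f] += counts[f] < 0 ? -1 : 1; }

        // Absolute value = number of stored features that have feature f as a child.
        int childCount(int f)           { return Math.abs(counts[f]); }

        // Negative sign = feature f has been marked for removal.
        boolean markedForRemoval(int f) { return counts[f] < 0; }
        void markForRemoval(int f)      { counts[f] = -Math.abs(counts[f]); }
    }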
Classifier labeler
Classifier extractor
Lexicon lexicon
    The feature Lexicon.
Lexicon labelLexicon
    The label Lexicon.
String encoding
FVector predictions
URL lcFilePath
URL lexFilePath
boolean readLexiconOnDemand
boolean lossFlag
int candidates
int rounds
Map<K,V> lexicon
FVector lexiconInv
String encoding
boolean encodingSet
    Indicates whether Lexicon.encoding has been assigned a value yet or not. Using this flag, we enforce the constraint that once an encoding has been set, it can never be changed. This way, a user will only be capable of using the same lexicon object in two different learners if they have the same encoding. See the implementation of Learner.setLexicon(Lexicon) and the sketch below.
edu.illinois.cs.cogcomp.core.datastructures.vectors.IVector featureCounts
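A minimal sketch of the set-once constraint that the encodingSet flag above enforces; this illustrates the described behavior and is not Lexicon's actual implementation.

    class EncodingGuard {
        private String encoding;
        private boolean encodingSet = false;

        // Once an encoding has been set, any attempt to change it is rejected.
        void setEncoding(String e) {
            if (encodingSet && !java.util.Objects.equals(encoding, e))
                throw new IllegalStateException("lexicon encoding cannot be changed once set");
            encoding = e;
            encodingSet = true;
        }
    }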
edu.illinois.cs.cogcomp.core.datastructures.vectors.IVector2D perClassFeatureCounts
int pruneCutoff
    Features in Lexicon.lexiconInv at this index or higher have been pruned; -1 indicates that no pruning has been done.
ChildLexicon lexiconChildren
double learningRate
    Default: LinearThresholdUnit.defaultLearningRate.
SparseWeightVector weightVector
double initialWeight
    Default: LinearThresholdUnit.defaultInitialWeight.
double threshold
    Default: LinearThresholdUnit.defaultThreshold.
double bias
double positiveThickness
    Default: LinearThresholdUnit.defaultThickness.
double negativeThickness
    Default: the value of LinearThresholdUnit.positiveThickness.
String[] allowableValues
double learningRate
    Default: LinearThresholdUnit.defaultLearningRate.
SparseWeightVector weightVector
double initialWeight
    Default: LinearThresholdUnit.defaultInitialWeight.
double threshold
    Default: LinearThresholdUnit.defaultThreshold.
double thickness
    Sets both LinearThresholdUnit.Parameters.positiveThickness and LinearThresholdUnit.Parameters.negativeThickness; default LinearThresholdUnit.defaultThickness.
double positiveThickness
double negativeThickness
Learner baseLearner
    Default: null.
edu.illinois.cs.cogcomp.core.datastructures.vectors.OVector network
String defaultPrediction
    Used when the selected Learner doesn't exist; default MuxLearner.defaultDefaultPrediction.
Feature defaultFeature
    A Feature representation of MuxLearner.defaultPrediction.
Learner baseLearner
    Default: null.
String defaultPrediction
    Used when the selected Learner doesn't exist; default MuxLearner.defaultDefaultPrediction.
double smoothing
    Default: NaiveBayes.defaultSmoothing.
edu.illinois.cs.cogcomp.core.datastructures.vectors.OVector network
    Contains one NaiveBayes.NaiveBayesVector for each observed prediction value.
private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException
    Ensures that NaiveBayes.Count.updateLog is set to true.
    Throws: IOException, ClassNotFoundException
double count
edu.illinois.cs.cogcomp.core.datastructures.vectors.OVector counts
    Indexed by Lexicon key.
NaiveBayes.Count priorCount
    Updated whenever the scaledAdd method has been called.
double smoothing
    Default: NaiveBayes.defaultSmoothing.
.double stddev
int instanceNumber
Random random
SparseAveragedPerceptron.AveragedWeightVector awv
    LinearThresholdUnit.weightVector cast to SparseAveragedPerceptron.AveragedWeightVector.
double averagedBias
edu.illinois.cs.cogcomp.core.datastructures.vectors.DVector averagedWeights
    Together with SparseWeightVector.weights, this vector provides enough information to reconstruct the average of all weight vectors arrived at during the course of learning (see the sketch below).
int examples
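The averaging described above can be illustrated with a simplified dense sketch: after every training example the current weight vector is added into a running sum, and dividing that sum by the number of examples recovers the average. This is only a sketch of the general technique; SparseAveragedPerceptron's actual bookkeeping is sparse and may differ in detail.

    class AveragedWeights {
        final double[] w;     // current weight vector (stand-in for SparseWeightVector.weights)
        final double[] sum;   // running sum of w after each example (stand-in for averagedWeights)
        int examples = 0;

        AveragedWeights(int numFeatures) {
            w = new double[numFeatures];
            sum = new double[numFeatures];
        }

        // Call once per training example, after any perceptron update to w has been applied.
        void accumulate() {
            examples++;
            for (int i = 0; i < w.length; i++) sum[i] += w[i];
        }

        // The average of all weight vectors arrived at during training.
        double[] average() {
            double[] avg = new double[w.length];
            for (int i = 0; i < w.length; i++)
                avg[i] = examples > 0 ? sum[i] / examples : 0.0;
            return avg;
        }
    }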
double confidence
    Default: SparseConfidenceWeighted.defaultConfidence.
double initialVariance
    Default: SparseConfidenceWeighted.defaultInitialVariance.
SparseWeightVector variances
double variancesBias
    The bias term of the SparseConfidenceWeighted.variances vector.
double confidence
    Default: SparseConfidenceWeighted.defaultConfidence.
double initialVariance
    Default: SparseConfidenceWeighted.defaultInitialVariance.
SparseWeightVector variances
    Default: LinearThresholdUnit.defaultWeightVector.
edu.illinois.cs.cogcomp.core.datastructures.vectors.OVector network
boolean conjunctiveLabels
LinearThresholdUnit baseLTU
    Default: SparseNetworkLearner.defaultBaseLTU.
edu.illinois.cs.cogcomp.core.datastructures.vectors.OVector network
int numExamples
int numFeatures
boolean conjunctiveLabels
LinearThresholdUnit baseLTU
    Default: SparseNetworkLearner.defaultBaseLTU.
edu.illinois.cs.cogcomp.core.datastructures.vectors.DVector weights
    Indexed by Lexicon key.
double beta
    Default: 1 / LinearThresholdUnit.learningRate.
double beta
    Default: 1 / LinearThresholdUnit.learningRate.
SparseWeightVector weightVector
    Default: StochasticGradientDescent.defaultWeightVector.
double bias
double learningRate
    Default: StochasticGradientDescent.defaultLearningRate.
SparseWeightVector weightVector
    Default: StochasticGradientDescent.defaultWeightVector.
double learningRate
    Default: StochasticGradientDescent.defaultLearningRate.
boolean warningPrinted
String solverType
    Default SupportVectorMachine.defaultSolverType unless there are more than 2 labels observed in the training data, in which case "MCSVM_CS" becomes the default. Note that if you are doing multi-class classification, you can still override the "MCSVM_CS" default to use another solver type (see the sketch after the value list below).
    Possible values:
        "L2_LR" = L2-regularized logistic regression
        "L2LOSS_SVM_DUAL" = L2-loss support vector machines (dual)
        "L2LOSS_SVM" = L2-loss support vector machines (primal)
        "L1LOSS_SVM_DUAL" = L1-loss support vector machines (dual)
        "MCSVM_CS" = multi-class support vector machines by Crammer and Singer
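A minimal sketch of the default solver selection described above; chooseSolver is a hypothetical helper, not part of SupportVectorMachine's API.

    // An explicit setting always wins; "MCSVM_CS" only becomes the default when
    // more than two labels have been observed in the training data.
    static String chooseSolver(String requested, String defaultSolverType, int observedLabels) {
        if (requested != null) return requested;
        return observedLabels > 2 ? "MCSVM_CS" : defaultSolverType;
    }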
double C
    Default: SupportVectorMachine.defaultC.
double epsilon
    Default: SupportVectorMachine.defaultEpsilon.
double bias
    If SupportVectorMachine.bias >= 0, an instance vector x becomes [x; bias]; otherwise, if SupportVectorMachine.bias < 0, no bias term is added (see the sketch below).
int biasFeatures
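The bias handling described above amounts to appending one constant feature to each instance vector when bias is non-negative. The sketch below uses plain arrays rather than LBJava's feature types; withBias is a hypothetical helper.

    static double[] withBias(double[] x, double bias) {
        if (bias < 0) return x;                                   // no bias term added
        double[] augmented = java.util.Arrays.copyOf(x, x.length + 1);
        augmented[x.length] = bias;                               // appended constant bias feature
        return augmented;
    }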
boolean displayLL
    Whether liblinear-related messages are output.
int numClasses
int numFeatures
boolean conjunctiveLabels
double[] weights
    Learned by liblinear.
edu.illinois.cs.cogcomp.core.datastructures.vectors.IVector allLabels
edu.illinois.cs.cogcomp.core.datastructures.vectors.OVector allExamples
String[] allowableValues
Lexicon newLabelLexicon
    Built during SupportVectorMachine.doneLearning() in case the training examples observed by SupportVectorMachine.learn(int[],double[],int[],double[]) are only a subset of a larger, pre-extracted set. If this is not the case, it will simply be a duplicate reference to Learner.labelLexicon.
String solverType
    Default: SupportVectorMachine.defaultSolverType. Possible values are as listed for SupportVectorMachine.solverType above.
double C
    Default: SupportVectorMachine.defaultC.
double epsilon
    Default: SupportVectorMachine.defaultEpsilon.
double bias
    If SupportVectorMachine.bias >= 0, an instance vector x becomes [x; bias]; otherwise, if SupportVectorMachine.bias < 0, no bias term is added.
boolean displayLL
    Whether liblinear-related output should be displayed; default false.
String attributeString
weka.classifiers.Classifier baseClassifier
    Default: weka.classifiers.bayes.NaiveBayes.
weka.classifiers.Classifier freshClassifier
weka.core.FastVector attributeInfo
    Must be filled in before the Learner.learn(Object) method can be called. Here is an example of a valid attribute string:
        nom_SimpleLabel_1,2,3,:str_First:nom_Second_a,b,c,d,r,t,:num_Third:
    nom stands for "Nominal", i.e. the feature SimpleLabel was declared as discrete and had the value list {"1","2","3"}. str stands for "String", i.e. the feature First was declared to be discrete, but was not provided with a value list. When using the WekaWrapper, it is best to provide value lists whenever possible, because very few WEKA classifiers can handle string attributes. num stands for "Numerical", i.e. the feature Third was declared to be real.
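The attribute string format above can be broken down as in the following sketch. The split scheme is inferred from the example string; it is an illustration, not WekaWrapper's actual parser.

    String attributeString = "nom_SimpleLabel_1,2,3,:str_First:nom_Second_a,b,c,d,r,t,:num_Third:";
    for (String decl : attributeString.split(":")) {
        if (decl.isEmpty()) continue;
        String[] parts = decl.split("_", 3);
        String kind = parts[0];                               // "nom", "str", or "num"
        String name = parts[1];                               // the feature's name
        String values = parts.length > 2 ? parts[2] : "";     // value list (only for "nom")
        System.out.println(kind + " " + name + (values.isEmpty() ? "" : " in {" + values + "}"));
    }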
weka.core.Instances instances
boolean trained
    True if the WekaWrapper.doneLearning() method has been called and the WekaWrapper.forget() method has not yet been called.
String[] allowableValues
weka.classifiers.Classifier baseClassifier
    Default: WekaWrapper.defaultBaseClassifier.
String attributeString
    Default: WekaWrapper.defaultAttributeString.
LinkedVector parent
LinkedChild previous
LinkedChild next
int start
int end
String label
Feature[] vector
int size