

Perceptron

The Perceptron update rule is implemented similarly. It takes only two parameters: a threshold $ \theta_t$ and a learning rate $ \alpha_t$. As in Winnow, the weights of active features are updated whenever a mistake is made. Here, however, the update is additive: the learning rate is added to each active weight after a mistake on a positive example, and subtracted after a mistake on a negative example.
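This mistake-driven additive update can be sketched as follows. The sketch assumes a sparse binary feature representation (a set of active feature ids per example); the function and variable names are illustrative, not taken from the source.

```python
def perceptron_update(weights, active_features, label, theta, alpha):
    """Update `weights` in place if the example is misclassified.

    weights         -- dict mapping feature id -> weight
    active_features -- ids of the features active in this example
    label           -- +1 for a positive example, -1 for a negative one
    theta           -- activation threshold
    alpha           -- learning rate
    """
    # Raw activation: sum of the weights of the active features.
    activation = sum(weights.get(f, 0.0) for f in active_features)
    prediction = 1 if activation >= theta else -1
    if prediction != label:
        # Add alpha on a mistake on a positive example,
        # subtract alpha on a mistake on a negative one.
        for f in active_features:
            weights[f] = weights.get(f, 0.0) + alpha * label
    return weights
```

Note that, unlike Winnow's multiplicative update, weights here can become negative.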

A Perceptron update proceeds as follows:

Perceptron's sigmoid activation is calculated with the following formula:

$\displaystyle \sigma(\theta, \Omega) = \frac{1}{1 + e^{\theta - \Omega}} $

This is exactly the formula used by [Mitchell, 1997] as a sigmoid function in the context of neural networks.
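The activation formula above translates directly into code. This is a small sketch of the computation only; the function name is illustrative, and $ \Omega$ is taken to be the raw (weighted-sum) activation as in the preceding formula.

```python
import math

def sigmoid_activation(theta, omega):
    """Sigmoid activation relative to the threshold theta:
    sigma(theta, omega) = 1 / (1 + e^(theta - omega)).

    Approaches 1 as omega grows above theta, 0 as it falls below,
    and equals 0.5 exactly at omega == theta.
    """
    return 1.0 / (1.0 + math.exp(theta - omega))
```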



Cognitive Computations 2004-08-20