Perceptron rule
This is a way to train a perceptron that uses the binary step activation function, so that it classifies all of its training data correctly.
To simplify the treatment of the threshold theta, every input vector is extended by a constant component x_{n+1} = -1; the threshold then becomes an ordinary weight w_{n+1} = theta, and the activation reduces to comparing a weighted sum against zero.
The algorithm then repeatedly checks a random misclassified training example and nudges the weights toward predicting its label correctly.
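The extra component works because the threshold comparison can be rewritten as a comparison against zero, in the notation of the pseudocode below:

    sum_{i=1}^{n} w_i x_i >= theta
    <=> sum_{i=1}^{n} w_i x_i - theta >= 0
    <=> sum_{i=1}^{n+1} w_i x_i >= 0,    since x_{n+1} = -1 and w_{n+1} = theta.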
Pseudocode
perceptron_rule(T):
Input: Training data T = {((x_1, x_2, ..., x_n, x_{n+1} = -1), y)}
Output: perceptron weights w_1, w_2, ..., w_n, w_{n+1} = theta.
1. Set w_i = 1 for i = 1, 2, ..., n+1.
2. Set hat{y}_t = set_y_hat(T, w) for all t in T
3. While there exists t in T such that hat{y}_t not = y_t (let count be the number of iterations of this loop so far):
3.1. Choose a random t in T such that hat{y}_t not = y_t
3.2. Set eta = 1/count
3.3. for i = 1, ..., n+1:
3.3.1. Set delta w_i = eta (y_t - hat{y}_t) x_i
3.3.2. Set w_i = w_i + delta w_i
3.4. Set hat{y}_t = set_y_hat(T, w) for all t in T
4. Return the weights w_1, ..., w_{n+1}
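A direct Python translation of this pseudocode might look as follows. This is a sketch under the assumption that T is a list of ((x_1, ..., x_n, -1), y) pairs with labels in {0, 1}; it relies on the set_y_hat helper, implemented after the next listing.

import random

def perceptron_rule(T):
    # T is a list of pairs ((x_1, ..., x_n, -1), y) with labels y in {0, 1};
    # the trailing -1 folds the threshold theta into the last weight.
    m = len(T[0][0])                    # n + 1 inputs, including the constant -1
    w = [1.0] * m                       # step 1: initialise every weight to 1
    y_hat = set_y_hat(T, w)             # step 2: initial predictions
    count = 0
    while any(y_hat[t] != y for t, (_, y) in enumerate(T)):   # step 3
        count += 1
        wrong = [t for t, (_, y) in enumerate(T) if y_hat[t] != y]
        t = random.choice(wrong)        # step 3.1: random misclassified example
        x, y = T[t]
        eta = 1.0 / count               # step 3.2: decaying learning rate
        for i in range(m):              # step 3.3: update each weight
            w[i] += eta * (y - y_hat[t]) * x[i]   # steps 3.3.1-3.3.2
        y_hat = set_y_hat(T, w)         # step 3.4: recompute predictions
    return w                            # step 4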
set_y_hat(T, w_i):
Input: Training data T = {((x_1, x_2, ..., x_n, x_{n+1} = -1), y)} and
perceptron weights w_1, w_2, ..., w_n, w_{n+1} = theta.
Output: For each t in T a new prediction hat{y}_t
1. for t = ((x_1, ..., x_{n+1}), y_t) in T:
1.1. Set hat{y}_t = bool(sum_{i=1}^{n+1} w_i x_i >= 0)
2. return hat{y}_t for all t in T
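The helper and a small usage example might look like this in Python; the AND-gate dataset is an illustrative choice, not part of the original.

def set_y_hat(T, w):
    # One prediction per training example: binary step on the weighted sum.
    return [int(sum(w_i * x_i for w_i, x_i in zip(w, x)) >= 0) for x, _ in T]

# Toy usage: learn the AND function; each input ends with the constant -1.
T = [((0, 0, -1), 0), ((0, 1, -1), 0), ((1, 0, -1), 0), ((1, 1, -1), 1)]
w = perceptron_rule(T)
print(w, set_y_hat(T, w))   # the predictions should equal the labels 0, 0, 0, 1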
Runtime
To make sure the loop in step 3 terminates, the training data must be linearly separable. The loop only stops once hat{y}_t = y_t for every t in T, and a perceptron with a binary step activation can only achieve this when some hyperplane separates the two classes. For linearly separable data, the perceptron convergence theorem guarantees that only finitely many weight updates are needed; for non-separable data (such as XOR) the loop never terminates.
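In practice it is therefore common to bound the loop so a call always returns. The sketch below adds a hypothetical max_iter cap, which is not part of the pseudocode above, and reuses random and set_y_hat from the earlier listings.

def perceptron_rule_capped(T, max_iter=10000):
    # Identical update rule, but gives up after max_iter updates so the
    # call returns even when T is not linearly separable.
    w = [1.0] * len(T[0][0])
    for count in range(1, max_iter + 1):
        y_hat = set_y_hat(T, w)
        wrong = [t for t, (_, y) in enumerate(T) if y_hat[t] != y]
        if not wrong:
            return w                    # converged: every example classified
        t = random.choice(wrong)
        x, y = T[t]
        eta = 1.0 / count
        w = [w_i + eta * (y - y_hat[t]) * x_i for w_i, x_i in zip(w, x)]
    return None                         # cap reached; data may not be separable

xor = [((0, 0, -1), 0), ((0, 1, -1), 1), ((1, 0, -1), 1), ((1, 1, -1), 0)]
print(perceptron_rule_capped(xor))      # prints None: XOR is not separable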