Adaptive Linear Neuron
ADALINE (Adaptive Linear Neuron) is a single unit (neuron) that receives inputs from
multiple units. It was invented by Widrow & Hoff (1960). Its architecture is similar to
that of the perceptron; the difference lies in how the weights are modified. ADALINE's
weights are modified by the delta rule (often called the least mean squares rule).
Initialize all weights and biases (generally wi = b = 0). Set the learning rate (= α).
For simplicity, α is usually given a small value (e.g. α = 0.1).
Set the allowable error tolerance.
As long as max Δwi > the tolerance limit, do:
a. Set the input unit activations: xi = si (i = 1, ..., n)
b. Calculate the output unit response:
   net = Σi xi wi + b
   y = f(net) = net
c. Correct the weights of patterns containing errors (y ≠ t) according to the
   equations:
   wi(new) = wi(old) + α (t - y) xi
   b(new) = b(old) + α (t - y)
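The training loop above can be sketched in Python. This is a minimal illustration, not the textbook's own code; the function name and the stopping test (the largest weight change over a full epoch) are assumptions based on the steps above:

```python
# ADALINE training with the delta (LMS) rule, following the steps above.
# During training the activation is the identity: y = f(net) = net.

def train_adaline(patterns, alpha=0.1, tolerance=0.05):
    """patterns: list of (inputs, target) pairs with bipolar values."""
    n = len(patterns[0][0])
    w = [0.0] * n              # weights start at 0
    b = 0.0                    # bias starts at 0
    while True:
        max_dw = 0.0
        for x, t in patterns:
            net = sum(xi * wi for xi, wi in zip(x, w)) + b   # net = sum_i xi wi + b
            y = net                                          # y = f(net) = net
            for i in range(n):                               # delta rule update
                dw = alpha * (t - y) * x[i]
                w[i] += dw
                max_dw = max(max_dw, abs(dw))
            b += alpha * (t - y)
        if max_dw <= tolerance:   # stop once all weight changes are within tolerance
            break
    return w, b
```

For example, on bipolar patterns whose target happens to be exactly linear in the inputs (say t = x1), the weights converge toward that linear rule as the tolerance is tightened.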
Once the training process is complete, ADALINE can be used for pattern recognition. For
that, a bipolar threshold function is generally used (although it is possible to use other forms).
The method is as follows:
1. Initialize all weights and biases with the training weights and biases.
2. For each bipolar input vector x, perform:
   a. Set the input unit activations: xi = si (i = 1, ..., n).
   b. Calculate the net output: y_in = Σi xi wi + b
   c. Apply the activation function:
      y = 1 if y_in > θ
      y = 0 if -θ ≤ y_in ≤ θ
      y = -1 if y_in < -θ
Example
Use the ADALINE model to recognize 'AND' logic function patterns with bipolar input and
target.
Use tolerance limit = 0.05 and a = 0.1
Answer:
With α = 0.1, the weight change is Δwi = α (t - f(net)) xi = 0.1 (t - y) xi.
1. Epoch 1
The iterations for epoch 1 are shown in the table, with f(net) = net.
The maximum Δwi = 0.07 > tolerance, so the iteration continues with a second epoch,
shown in the following table.
2. Epoch 2
The maximum Δwi = 0.02 < tolerance, so the iteration is stopped, and the last weights
obtained (w1 = 0.29, w2 = 0.26, b = -0.32) are the weights used for pattern
recognition.
In pattern recognition, the activation function is:
y = 1 if y_in ≥ 0
y = -1 if y_in < 0
The following table shows the pattern recognition of the "AND" function using the
training weights. The network output equals the target for every pattern, so the
patterns are recognized perfectly using the training weights.
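The worked example can be reproduced in code. A minimal sketch, assuming bipolar AND patterns, α = 0.1, tolerance 0.05, and a stopping test applied to the weight changes of each epoch's final pattern (an interpretation that reproduces the tabulated values 0.07 and 0.02); note the trained bias comes out negative (about -0.32):

```python
# ADALINE on the bipolar AND function: alpha = 0.1, tolerance = 0.05,
# weights and bias start at 0.

patterns = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]

alpha, tolerance = 0.1, 0.05
w, b = [0.0, 0.0], 0.0

while True:
    for x, t in patterns:
        net = w[0] * x[0] + w[1] * x[1] + b   # y = f(net) = net during training
        delta = alpha * (t - net)             # delta rule step
        w[0] += delta * x[0]
        w[1] += delta * x[1]
        b += delta
    # Since the inputs are bipolar (|xi| = 1), |delta| equals the largest weight
    # change for the epoch's final pattern; stop once it is within tolerance.
    if abs(delta) <= tolerance:
        break

# Pattern recognition with the bipolar threshold function
def recognize(x):
    y_in = w[0] * x[0] + w[1] * x[1] + b
    return 1 if y_in >= 0 else -1

for x, t in patterns:
    assert recognize(x) == t   # every AND pattern is recognized correctly
```

Training stops after the second epoch with weights that round to w1 = 0.29, w2 = 0.26, b = -0.32, and the bipolar threshold then reproduces the AND targets exactly.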