Hey everyone! Welcome back to the Neural Network lectures. In this lecture, we will be discussing some questions related to the Hebbian mechanism and the single-layer perceptron. So let's move on to the first question.
The question says: classify the given two-dimensional input patterns using the Hebb rule, and also draw the network architecture. So this is pattern 1 and this is pattern 2, and we are also given that the output of pattern 1 is +1 and the output of pattern 2 is -1. So let us first vectorize these patterns. Let me denote the black squares as -1 and the white squares as +1.
If you write this in vector form, let me denote pattern 1 as X1; it can be written by reading off the squares in this order, in this fashion. Similarly, X2, that is pattern 2, can be written in the same way.
Since the initial value of the weights is not given, let's take them as 0. Therefore W(0) = 0, where the dimension of W(0) is the same as the dimension of the input pattern.
Now, while studying the Hebbian mechanism, we learned that the change in weight ΔWkj is given by the activity product rule, ΔWkj = Xj * Yk (taking the learning rate as 1). Here the desired output for pattern 1, that is Y1, is 1, and similarly the desired output for pattern 2, Y2, is -1. Now, since the input patterns are 1x9 and the output is 1x1, we will write this equation as ΔWkj = Xj^T * Yk; the transpose is taken so that we can apply matrix multiplication.
Therefore ΔWkj for the zeroth iteration is given by ΔWkj(0) = X1^T * Y1, which is this vector transposed times Y1, that is, this value. Also, we know that the weight update rule is given by W(n+1) = W(n) + ΔWkj(n). Therefore, in order to find the updated weight W(1), we need to add W(0) and ΔWkj(0). I am taking the transpose of W(0) so that we can add the two matrices: W(1) = W(0)^T + ΔWkj(0), which is this matrix transposed plus ΔWkj(0), that is, this value. This addition gives us W(1), shown here.
Now, in a similar fashion, we have to find ΔWkj for iteration one, and that is given by ΔWkj(1) = X2^T * Y2. The value of Y2 is -1, so this is X2 transposed times -1, which gives this value. Now the updated weight W(2) is given by W(2) = W(1) + ΔWkj(1), and this addition gives the result shown here. So this is the final updated value of the synaptic weights.
Now let's move on to drawing the architecture. If you draw it, you can see that the network has 9 input nodes feeding a linear summing unit, from which the output is taken, and the weights are the ones we found in this equation.
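Before moving on, here is a minimal Python sketch of the same Hebb-rule procedure. The actual 3x3 patterns are only shown on screen, so the two vectors below are placeholders (any length-9 vectors of ±1 values, with black = -1 and white = +1, will do); the steps themselves, W(0) = 0 and W(n+1) = W(n) + X^T * Y, are exactly the ones we used above.

```python
import numpy as np

# Placeholder bipolar patterns (the actual 3x3 patterns are shown on screen only):
# black squares = -1, white squares = +1, read off into length-9 vectors.
x1 = np.array([-1, +1, -1, +1, +1, +1, -1, +1, -1])   # pattern 1, target y1 = +1
x2 = np.array([+1, -1, +1, -1, -1, -1, +1, -1, +1])   # pattern 2, target y2 = -1

w = np.zeros(9)                  # W(0) = 0, same dimension as the input pattern

# Hebb rule (activity product rule, learning rate taken as 1):
# delta_W = x * y, then W(n+1) = W(n) + delta_W(n)
for x, y in [(x1, +1), (x2, -1)]:
    w = w + x * y

print("Final weights W(2):", w)
print("Response to pattern 1:", np.sign(w @ x1))   # +1 for these placeholder patterns
print("Response to pattern 2:", np.sign(w @ x2))   # -1 for these placeholder patterns
```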
Now, moving on to the next question.
 
 
 
So firstly, let's draw the neuron model: this is the bias X0 = 1, these are the inputs X1 and X2, there is a linear summing unit Sigma (Σ), there are weights W11, W12 and W13, there is an activation function phi (Φ), and we take the output y. Now that we have drawn the neuron model, let's initialize the weights. Let me take W11 = 1, W12 = -1 and W13 = -1. This is just a random choice; you can take any values you like.
Let's take the activation function as a threshold function, that is Φ(v) = 1 if v ≥ 0, and Φ(v) = 0 if v < 0.
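In code, this threshold function is just a comparison against zero; a one-line Python version (assuming the v ≥ 0 convention, which matches the values we use below) could look like this:

```python
def phi(v):
    """Threshold activation: 1 if v >= 0, else 0."""
    return 1 if v >= 0 else 0
```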
For the input set P1, we will have X0 = 1, i.e. the bias is 1, times W11, plus X1 times W12, plus X2 times W13. We get the final value as Φ(1), which is equal to 1 according to our definition. So this set of weights satisfies the first input pattern.
Let's take the second input pattern P2: we get Φ(1), which is 1, which is also the desired output. Next we'll check the third input pattern, that is this one. In a similar fashion, we get the value as Φ(1), which is 1, which is the desired value for input pattern 3. Therefore it is satisfied for input pattern 3 also. Now let's check input pattern 4.
We get the value as Φ(1), which is 1, and we can see that this is not equal to the desired value T4, so we need to update the weights. Now let's see what happens if we take W11 as 0.5: we get Φ(-0.5), which is 0, and this is equal to T4. Therefore, if you take W11 as 0.5 and W12 and W13 as -1, the equation is satisfied for the input pattern P4. Now we need to check whether this new set of weights, that is W11 = 0.5, W12 = -1 and W13 = -1, satisfies the first three input sets, so let's recheck them.
For y1, we get Φ(0.5), which is equal to 1, which is also the desired value. Similarly, checking y2, we get Φ(0.5), which is equal to 1, and this is equal to the desired value T2. Let us check y3: we get Φ(0.5), which is equal to 1, and the desired value is also 1. Therefore this new set of weights, that is W11 = 0.5, W12 = -1 and W13 = -1, satisfies the input sets P1, P2, P3 and P4. Now let's take P5: we get Φ(-0.5), which is equal to 0. Therefore it is satisfied for input set P5
also. Now, drawing the architecture: this is X0 = 1, these are X1 and X2, this is the linear summing unit, W11 is 0.5, W12 is -1 and W13 is -1, this is the activation function, which is the threshold function, and we take the output, where V is given by V = W11·X0 + W12·X1 + W13·X2 = 0.5 - X1 - X2.
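As a quick check of this neuron in code, here is a small Python sketch. The actual coordinates of P1 to P5 are only shown on screen, so the values below are placeholders chosen to lie on the correct sides of the boundary X1 + X2 = 0.5 that we derive next; substitute the real values from the question.

```python
def phi(v):
    """Threshold activation: 1 if v >= 0, else 0."""
    return 1 if v >= 0 else 0

# Final weights from the walkthrough above.
W11, W12, W13 = 0.5, -1.0, -1.0      # bias weight, weight for X1, weight for X2

# Placeholder patterns and targets (the real P1..P5 values are on screen only).
patterns = {
    "P1": ((-1.0,  1.0), 1),
    "P2": (( 0.0,  0.0), 1),
    "P3": (( 1.0, -1.0), 1),
    "P4": (( 1.0,  0.0), 0),
    "P5": (( 0.0,  1.0), 0),
}

for name, ((x1, x2), target) in patterns.items():
    v = W11 * 1 + W12 * x1 + W13 * x2    # X0 = 1 is the bias input
    y = phi(v)
    print(f"{name}: v = {v:+.1f}, output = {y}, target = {target}, ok = {y == target}")
```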
Now, moving to the second part of the question, that is, draw and describe its decision boundary. So let's draw the input space for these patterns. After drawing, we get something like this, where these are P1, P2, P3, P4 and P5. The outputs of P1, P2 and P3 are 1, so the red circles denote the points whose output is 1, and the outputs of P4 and P5 are 0, so the gray circles represent the points whose output is 0. Now we can see that
if we draw a line like this, this line divides the input space into two parts, such that if an input pattern lies on this side of the decision boundary, its output is taken as 1, and if it lies on the other side, its output is taken as 0. The decision boundary is the set of points where V = 0, that is W11·X0 + W12·X1 + W13·X2 = 0. In this particular case it is 0.5 - X1 - X2 = 0, or X1 + X2 = 0.5. Why? Because our W11 was 0.5, our X0 was 1, W12 was -1 and W13 was -1.
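If you want to draw this boundary programmatically, here is a short Python/matplotlib sketch; as before, the plotted points are placeholder coordinates, not the exact patterns from the question.

```python
import numpy as np
import matplotlib.pyplot as plt

# Decision boundary: W11*X0 + W12*X1 + W13*X2 = 0  ->  0.5 - X1 - X2 = 0
x1 = np.linspace(-1.5, 1.5, 100)
x2 = 0.5 - x1                                  # the line X1 + X2 = 0.5

plt.plot(x1, x2, "k--", label="0.5 - X1 - X2 = 0")

# Placeholder points (the actual P1..P5 coordinates are shown on screen only).
ones  = [(-1, 1), (0, 0), (1, -1)]             # patterns with output 1
zeros = [(1, 0), (0, 1)]                       # patterns with output 0
plt.scatter(*zip(*ones),  c="red",  label="output 1")
plt.scatter(*zip(*zeros), c="gray", label="output 0")

plt.xlabel("X1")
plt.ylabel("X2")
plt.legend()
plt.show()
```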
So that concludes this lecture. If you have any doubts, please do ask in the comment section.
Thanks for watching and have a nice day :D
