Hello everyone, welcome back to Neural
Network Lectures. In the previous video, we
we discussed and implemented the logical
AND gate, OR gate, NOT gate and NOR gate
using the McCulloch-Pitts model. Also, I
asked you to implement the NAND gate as
homework. I hope that everyone did that,
okay! So, continuing from where we left off, in
this video we'll be implementing the XOR and
XNOR gates using the McCulloch-Pitts model.
So without wasting time, let's move on to that. As always, we'll write the
truth table first. Therefore, X1, X2 and
output Y, and the inputs are 0 0, 0 1, 1 0
and 1 1, so the corresponding outputs are
0, 1, 1, 0. Now, we'll write the expression
for the XOR gate: Y = X1'.X2 + X2'.X1.
I hope that everyone is familiar
with this. Now, we know the Boolean
expression A.A' is equal to 0,
right?
Therefore, using the same logic, I will
rewrite this equation as Y = X1'.X2 + X2'.X1 + X1'.X1 + X2'.X2.
See that these last two terms are
0, therefore it doesn't change the
equation. Now, using distributive law, I
will write this as X1.(X1' + X2')
+ X2.(X1' + X2'),
which can again be written as
(X1' + X2')(X1 + X2), right? Now
using de Morgan's law, you know that
(A.B)' = A' + B'. Using the same logic, we will write this
part as (X1.X2)'.(X1 + X2), okay? So, if we look into it,
we'll have a NAND gate over here, we have
an OR gate over here and this total
expression is "ANDed", right? So we need a NAND gate, an OR gate and an AND gate to implement the
XOR gate. Therefore, the AND gate can be written
as... remember the previous video? This is
the AND gate: X1 + X2 - 1.5. And we need an OR gate, right?
See, OR gate... and the OR gate can be written as
Yk = X2 + X1 - 1.
Therefore, OR = X2 + X1 - 1, right? Now, what we need is the NAND gate.
And the NAND gate can be... I hope that everyone did the homework... can be written as
-X1 - X2 + 1. Now, we will
try to implement this. Therefore, first we
have three input nodes, right? One is X0 = 1, and this is X1 and this is X2.
So, first we will implement this OR gate,
okay?
OR gate is X2 + X1 - 1.
Therefore, we can write it as OR with inputs X1 and X2,
where the weights are 1, 1 and the
weight of the bias is -1, right? Now, we
will implement the NAND gate part, okay? And...
this is the linear summariser for the NAND
gate. For the NAND gate, the weight of X1 is -1, the weight of X2 is
-1 and the bias is 1, okay? Now, what
you have to do is take the AND of
the NAND and OR gates. Therefore, we take the AND gate,
and the weights are 1, 1 and we
have a bias also. The bias is -1.5.
Therefore this is -1.5. Now, if you feed this
to an activation function Phi(Vk),
which is obviously the threshold
function, we will get the output as the XOR gate, okay? Thus, we have implemented the
XOR gate, and just study this carefully
because this is a bit different from
what we have implemented previously, okay?
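If you want to check this network on a computer, here is a minimal sketch. It assumes the threshold function fires (outputs 1) when the weighted sum plus bias is at least 0, and the helper names `mcp_neuron` and `xor_gate` are illustrative, not from the lecture.

```python
def mcp_neuron(weights, bias, inputs):
    # McCulloch-Pitts unit: weighted sum plus bias, then a hard threshold at 0
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s >= 0 else 0

def xor_gate(x1, x2):
    # First layer: OR gate (weights 1, 1, bias -1) and NAND gate (weights -1, -1, bias +1)
    or_out = mcp_neuron([1, 1], -1, [x1, x2])
    nand_out = mcp_neuron([-1, -1], 1, [x1, x2])
    # Second layer: AND of the two outputs (weights 1, 1, bias -1.5)
    return mcp_neuron([1, 1], -1.5, [or_out, nand_out])

# Reproduces the XOR truth table: 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor_gate(x1, x2))
```

The same `mcp_neuron` helper works for any of the single gates from the previous video, since each is just one weighted sum and a threshold.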
Now, we will try to implement the XNOR
gate. Before going on to that, I want all
of you to pause the video and try it
yourself. Then you can watch the video. That way you will be able to learn, right?
Okay, in order to implement XNOR gate
also, proceeding as before, first we'll
write the truth table: X1, X2 and Y. The inputs are 0 0,
0 1, 1 0 and 1 1, and the outputs are 1,
0, 0, 1. Now, writing the Boolean
expression for XNOR gate,
Y = X1.X2 + X1'.X2', right? Now,
using De Morgan's law, you can again
write this as X1.X2 + (X1 + X2)'.
So, you can see an AND gate
over here... this is AND... you can see a NOR gate over here, and this is an OR
gate, right? And in the previous video, we
have derived the expression for AND
gate as X1 + X2 - 1.5, the OR gate as
X1 + X2 - 1 and the NOR gate as
-X1 - X2 + 0.5. Now, we will
try to implement this over here, okay? So
first we have the input nodes... These are the
input node. this is X0 which is the
bias. This X1, this is X2, then we'll
first implement the AND gate, okay? So this
is the linear summariser for the AND gate. So the weight of X1 is 1, of X2 is 1, and we have the bias as -1.5.
Therefore this is -1.5. Now
we will implement the NOR gate, and the NOR
gate is this. This is the NOR gate: the weight of X1 is -1, X2 is
also -1, and the weight for the bias is
0.5. Therefore this is 0.5, okay? Now
we'll take the OR of these. Therefore, let
this be the linear summariser for OR
gate. From the AND gate, we have a weight of 1; from the NOR gate, we have a weight of 1;
and the bias is, see, -1. Therefore, this
bias can be taken as -1, and this
will be passed on to an activation
function Phi of Vk, which is the
threshold function and we will get the
output Y which is the XNOR gate,
okay? Thus, we have successfully
implemented the XOR gate and XNOR gate
using the McCulloch-Pitts model. Now, we
will move on to the next topic, that is, the
implementation of logic gates using the Hebbian
mechanism, okay? Before going on to that,
let's have a quick glance at how the weights are
updated in the Hebbian mechanism. So, in the Hebbian mechanism, we have Delta Wkj(n)
= eta.Xj.Yk, remember? And we had Wkj(n+1)
equal to Wkj(n) + delta Wkj(n).
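These two update equations can be written out directly in code. This is a minimal sketch, assuming plain Python lists for the vectors; the function names `delta_w` and `update` are my own, not from the lecture.

```python
def delta_w(eta, x, y):
    # Hebbian weight change: Delta Wkj(n) = eta * Xj * Yk, per input component
    return [eta * xj * y for xj in x]

def update(w, dw):
    # Wkj(n+1) = Wkj(n) + Delta Wkj(n)
    return [wj + dwj for wj, dwj in zip(w, dw)]

w = [0, 0, 0]                  # initial weights, taken as zero
dw = delta_w(1, [1, 1, 1], 1)  # one input vector (with bias input) and its output
w = update(w, dw)
print(w)                       # [1, 1, 1]
```

This is exactly one pass of the rule; training just repeats these two lines for every input-output pair.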
So, I hope that everyone remembers this
part or else please watch the previous
video on Hebbian mechanisms, okay? Now
coming to the implementation part
initially we will take all the weights,
that is, all Wkj's, as 0 only. Now, let
the input vectors be given by X'
= {X1', X2', ..., Xn'},
okay? And the corresponding output
vectors Y' = {Y1, Y2, ..., Yn}, okay? Now, each of the input
vectors may have multiple inputs, like x1
x2, etc., okay? And the output can also
be in a similar way, that is, each output
vector can have multiple outputs, like
y1, y2, ..., ym, okay?
Now, to find Delta Wkj(n), what we do is,
we'll take the vector product of
Xi'.Yi'. Note that this Xi'
is the input vector, okay, not the
individual inputs, okay? And this eta is
either given in the question or, if it is
not given in the question, we can take
eta as 1, okay? After this, we will update
the weight such that Wkj(n + 1) is
equal to Wkj(n) + Delta Wkj(n), okay? So this is the manner in which you
need to proceed and you need to repeat
these steps until we have taken all the
input and output pairs, okay? Now, let us
move to the first question, that is,
the implementation of the logical AND gate using the Hebb mechanism.
See, this is the truth table for AND
gate. Note that, instead of 0, we have
written -1. Why? Because if it was
0, then when we try to update the weight
this Xj will become 0, therefore
the updated weight or the change in weight
delta Wkj will be 0. In order to tackle
that problem, we have written this -1
instead of 0. Therefore, 1 AND 1 is 1, 1 AND -1 is -1, because 1 AND 0
was 0, right? And -1 AND 1 is -1, and -1 AND -1 is
-1. Now, we are also given a
bias and note that bias is also an input
in the neuron model we have discussed,
right? Therefore we will try to implement
the steps. So
proceeding in the manner we have
discussed earlier: Delta Wkj(0)...
wait, see, since we are given the input
vectors starting with X1, let's name it as
1, okay? Instead of 0, I am naming it as
1... So Delta Wkj(1) is equal to: we are not provided
with eta, so we're taking eta as 1.
The first vector is X1, therefore
[1 1 1].
See, these are the 3 inputs for the first
input vector, right? And we have the
output Y1 equal to 1. Therefore, this is 1.
Then we get it as [1 1 1]. Now we will try
to update the weights
that is, Wkj(2) is equal to Wkj(1)
+ Delta Wkj(1). See, I have
already told you that the initial
weights are always taken as zero,
therefore this is a zero vector, [0 0 0], okay?
Therefore this is [0 0 0] + [1 1 1]; see,
Delta Wkj(1) we have taken as [1 1 1], right?
Therefore we can take this as [1 1 1]. Now we'll
move on to the second input output pair
Therefore, Delta Wkj(2) is equal to: the
input vector is X2 = [1 -1 1],
and we have eta = 1, and the output
is -1, therefore times -1.
We can write this as [-1 1 -1].
Therefore Wkj(3) can be written as
Wkj(2) + Delta Wkj(2), which is [1 1 1]
from here plus [-1 1 -1], which can
be written as, see, [0 2 0], right? Now we'll
move on to the third input-output pair.
Therefore Delta Wkj(3) can be given
as: the learning parameter eta is 1,
the input vector X3 is [-1 1 1], and
the output is -1, therefore times -1.
This can be written as [1 -1 -1],
right? Therefore Wkj(4) can be
given as Wkj(3) + Delta Wkj(3);
Wkj(3) is [0 2 0] and Delta Wkj(3) is [1 -1 -1],
therefore, adding these, we will get
it as [1 1 -1], right? Now, moving
on to the fourth input-output pair, we
will write it as
Delta Wkj(4) = the learning parameter,
that is 1, times [-1 -1 1]; see, the inputs are -1, -1
and 1, and the output is -1,
therefore times -1. This can be
written as [1 1 -1], and we can
write Wkj(5) = Wkj(4) +
Delta Wkj(4), and we have Wkj(4) as,
see, this [1 1 -1], therefore [1 1 -1]
plus Delta Wkj(4), which we
have found as [1 1 -1]: [1 1 -1] + [1 1 -1],
which can be written as [2 2 -2].
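The four update steps above can be rolled into one loop. This is a minimal sketch, assuming bipolar inputs with the bias carried as a third input component that is always 1; the name `hebb_train` is my own, not from the lecture.

```python
def hebb_train(samples, eta=1):
    # Hebbian learning: start from zero weights, add eta * x * y for each pair
    w = [0, 0, 0]
    for x, y in samples:
        w = [wj + eta * xj * y for wj, xj in zip(w, x)]
    return w

# AND gate in bipolar form; the third input component is the bias input (always 1)
and_samples = [
    ([1, 1, 1], 1),
    ([1, -1, 1], -1),
    ([-1, 1, 1], -1),
    ([-1, -1, 1], -1),
]
print(hebb_train(and_samples))  # [2, 2, -2]
```

Swapping in the OR targets (1, 1, 1, -1) gives [2, 2, 2], the result derived for the OR gate next in the lecture.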
So, for the implementation of the logical AND
gate using the Hebb mechanism, we have the
final weights of the system as W11 = 2,
W12 = 2 and W13 = -2, where
the inputs are X1, X2 and X3, where X3
is the bias, okay? And the output is Y,
right? With this, we have implemented the AND
gate. Now we will move on to the OR gate,
okay? And before moving on to the OR gate,
it is better if you pause the video and
implement it yourself, okay? Now
we'll implement the logical OR gate
in the similar fashion as before. We'll
take the initial weights as Wkj = [0 0 0].
Then, for the first input-output pair,
we have Delta Wkj(1): since the
learning parameter eta is not given,
we will take it as 1, times the first
input, that is [1 1 1], and I hope that
everyone is familiar with this notation,
that is, instead of 0 we are using -1,
okay? And the output is 1. This can be
written as [1 1 1], and you can write Wkj(2)
as Delta Wkj(1) + Wkj(1),
right?
Therefore, Wkj(1) is, see, 0,
therefore [0 0 0] + [1 1 1], which is
nothing but [1 1 1]. Now we'll move on to
the second input-output pair. Therefore
Delta Wkj(2) can be written as: eta
is 1 and our input vector is [1 -1 1],
the output is 1, which can be written as
[1 -1 1], and we have Wkj(3) as Wkj(2)
+ Delta Wkj(2), which is [1 1 1] + [1 -1 1].
This can be written as
[2 0 2], right? Now, moving on to the third pair,
we can write Delta Wkj(3) as 1 times
the input vector, which is [-1 1 1],
and the output is 1, therefore [-1 1 1].
Therefore we can write Wkj(4) as
Delta Wkj(3) + Wkj(3); Wkj(3) is
[2 0 2], therefore [2 0 2], and Delta Wkj(3)
is [-1 1 1].
This can be written as [1 1 3].
Now we move on to the fourth input-output
pair. We can write it as Delta
Wkj(4): the input vector is [-1 -1 1],
see, the inputs are -1, -1 and 1, and the
output in this case is -1, therefore this
is times -1, and we get Delta Wkj(4) as [1 1 -1].
Now we can write Wkj(5)
as Wkj(4) + Delta Wkj(4), and
we have Wkj(4) as [1 1 3], therefore
[1 1 3], and Delta Wkj(4) as [1 1 -1],
right? This can be written as
[2 2 2]. Therefore, in the final Hebbian
network, the final modified weights are W11
= 2, W12 = 2 and W13
= 2. So we have implemented the
logical OR gate using the Hebb mechanism, okay? Now,
in a similar fashion, try to work out the
NOR gate also. In the case of the NOR gate,
we can take the first input vector with
X1 as 1, the bias taken as 1,
and the output vector will be 0, right?
In a NOR gate, whenever the input
is high, we get the output as low,
therefore this should be 0, but, as stated
before, we will be taking it as -1. And
in the second input vector this will be -1,
the bias, as always, will be 1, and this will be
1, okay? The only thing you need to take
care of is: do not take 0 anywhere; take
-1 instead of 0, because of the
reason I have stated before, okay? Now,
another very important thing you need to
notice is that you cannot implement the XOR
gate using the Hebbian mechanism. The reason is
that the XOR gate inputs are not
linearly separable, okay? Let me explain
this to you by plotting the XOR gate
inputs on a graph, okay?
Therefore, let this axis be the first input
and let this axis be the second input; let this be
1 and let this be 1, and this point is 0.
So we know the XOR gate inputs: X1, X2 and
Y, with inputs 0 0, 0 1, 1 0 and 1 1, and the outputs
as 0, 1, 1, 0. So whenever exactly one of the
inputs is high, we get the output as high.
Let me plot this. So in this case it is 1 1,
and when it is 1 1 we have the output
as 0, and let me use the red colour to
plot the zero outputs, okay? So when the
input is 0 0, the output is also 0,
so this point is also red. Now let me
use the green colour to plot the outputs
when the output is 1: therefore, when it is
0 1 the output is 1, and when it
is 1 0 the output is 1. Now you
can see that there is no way to separate
the ones and zeros using a single line,
that is, to separate the greens and reds
using a single line. Wait, let me draw the
input graph for AND here, so it will be
more clear. So let this be X1, let this be
X2, and this is 1 and this is 1, and we
are drawing for the AND gate, okay? This is for
AND. So, as usual, we will use the green
colour for ones, and AND is 1 only when
both inputs are 1, right? Therefore
this point is green and all the other points are
red, right? Now, in this case, we are able
to separate the green segment, that is,
the ones, from the zeros using a line, right?
See, this line will separate all the
zeros to one side and all the ones to the
other side. In a similar fashion, we are not
able to draw such a line in the XOR graph, right?
That is the problem with the XOR gate, that is,
it is not linearly separable and we can
use heavier mechanism only for linearly
separable inputs okay therefore it is an
important criteria to check whether the
inputs are linearly separable before
proceeding to implement the havean
mechanism okay otherwise all the answers
you get will be bold because it is
impossible to implement a heavier
mechanism for that input right so always
keep this not in mind while doing a
question based on kayvyun mechanism okay
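You can also see this failure numerically: running the same Hebbian update on the four XOR pairs makes every contribution cancel out. A minimal sketch, with the bias as a third input component and `hebb_train` as an illustrative name of my own:

```python
def hebb_train(samples, eta=1):
    # Hebbian learning: start from zero weights, add eta * x * y for each pair
    w = [0, 0, 0]
    for x, y in samples:
        w = [wj + eta * xj * y for wj, xj in zip(w, x)]
    return w

# XOR gate in bipolar form; the third input component is the bias input (always 1)
xor_samples = [
    ([1, 1, 1], -1),
    ([1, -1, 1], 1),
    ([-1, 1, 1], 1),
    ([-1, -1, 1], -1),
]
print(hebb_train(xor_samples))  # [0, 0, 0]
```

The four weight changes sum to zero in every component, so the network learns nothing at all, which is exactly the linear-separability problem plotted above.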
I hope everyone understood the concepts in
this video, and if you have any doubts,
please do ask in the comment section, okay?
Thanks for watching. Have a good day.
