Hi, today I will show you what a perceptron is
and how you can train it. I will also show
you how to train your perceptron to predict
the outputs of the OR logical gate in C#.
A perceptron is an algorithm
for binary classification. This means our algorithm
can tell us whether a given input belongs
to a specific class.
Let's see a very simple example. The OR gate
is a logical gate that has two input values and
one output. Each input and the output
can only be one or zero. The gate is called OR
because if at least one of the two inputs has
the value 1, the output will also have the
value 1. If both inputs are 0, then the
output will also be 0.
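The truth table above can be printed with a few lines of C# (a minimal sketch; the class and variable names here are my own, not from the video):

```csharp
using System;

class OrGateDemo
{
    static void Main()
    {
        // The four possible input combinations of the OR gate.
        int[][] pairs = { new[] { 0, 0 }, new[] { 0, 1 }, new[] { 1, 0 }, new[] { 1, 1 } };

        // The | operator gives exactly the OR gate output for 0/1 inputs.
        foreach (var p in pairs)
            Console.WriteLine($"{p[0]} OR {p[1]} = {p[0] | p[1]}");
    }
}
```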
Now let's talk about the perceptron. Of course,
our perceptron will have two inputs.
Each input value can only be one or zero,
just as in the OR logical gate, and we need
another value to control each input,
and that's what we call a weight. We also need
one more value, which we call the bias. The bias
does not depend on any input value.
Our task is to calculate the layer value
so we can map it through the activation function.
This can be done with a simple dot product:
we multiply each input by its weight and sum the results.
To this, we also add the value of
the bias. If the dot product plus the bias is greater
than or equal to 0, then the value of the activation
function will be 1. If it is less than 0,
then it will be 0.
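The activation step just described can be sketched like this (a minimal sketch; the method and variable names are my own assumptions, not taken from the video's code):

```csharp
using System;

class ActivationDemo
{
    // Returns 1 if the dot product of inputs and weights plus the bias
    // is greater than or equal to 0, otherwise 0.
    static int Activate(double[] inputs, double[] weights, double bias)
    {
        double sum = bias;
        for (int i = 0; i < inputs.Length; i++)
            sum += inputs[i] * weights[i];   // dot product term by term
        return sum >= 0 ? 1 : 0;
    }

    static void Main()
    {
        // Example weights and bias chosen here just for illustration.
        double[] weights = { 0.5, 0.5 };
        double bias = -0.2;

        Console.WriteLine(Activate(new double[] { 1, 0 }, weights, bias)); // 0.5 - 0.2 = 0.3, so 1
        Console.WriteLine(Activate(new double[] { 0, 0 }, weights, bias)); // -0.2, so 0
    }
}
```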
Let's see how it looks with an OR gate in a
real example. Our training set will be the
four possible combinations of the logical gate
inputs. This means 0||0 = 0, 0||1 = 1, 1||0 = 1,
1||1 = 1. We have to generate random weights
and a random bias.
We create a do-while loop that runs until the actual
values match the expected values.
In each pass, we calculate the activation function
for every input pair. If an actual value does not
match the expected value, our error value
won't be zero.
We calculate the error value by subtracting
the actual value from the expected value.
If it is not zero, then we have to calculate
a new weight and a new bias. For the new weight
value, we add the input value multiplied
by the error. For the new bias value, we simply
add the error value.
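Putting the training steps above together, the whole loop can be sketched like this (a minimal sketch of the approach described, not the video's exact code; names like `Train`, `w0`, and `w1` are my own, and I assume starting weights in [-1, 1)):

```csharp
using System;

class PerceptronOrDemo
{
    // Trains a perceptron on the OR gate and returns { w0, w1, bias }.
    static double[] Train()
    {
        // The four possible OR gate input combinations and their expected outputs.
        int[][] inputs = { new[] { 0, 0 }, new[] { 0, 1 }, new[] { 1, 0 }, new[] { 1, 1 } };
        int[] expected = { 0, 1, 1, 1 };

        // Random starting weights and bias.
        var rnd = new Random();
        double w0 = rnd.NextDouble() * 2 - 1;
        double w1 = rnd.NextDouble() * 2 - 1;
        double bias = rnd.NextDouble() * 2 - 1;

        bool anyError;
        do
        {
            anyError = false;
            for (int i = 0; i < inputs.Length; i++)
            {
                double sum = inputs[i][0] * w0 + inputs[i][1] * w1 + bias;
                int actual = sum >= 0 ? 1 : 0;     // activation function
                int error = expected[i] - actual;  // expected minus actual
                if (error != 0)
                {
                    anyError = true;
                    w0 += inputs[i][0] * error;    // new weight = old weight + input * error
                    w1 += inputs[i][1] * error;
                    bias += error;                 // new bias = old bias + error
                }
            }
        } while (anyError);                        // repeat until every error value is 0

        return new[] { w0, w1, bias };
    }

    static void Main()
    {
        double[] p = Train();
        Console.WriteLine($"w0 = {p[0]:F2}, w1 = {p[1]:F2}, bias = {p[2]:F2}");
    }
}
```

Because the OR gate is linearly separable, this loop is guaranteed to terminate with weights that classify all four input pairs correctly.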
Now let's run the program.
As you can see, the inputs are 0 and 0, so the
expected output should also be 0, but unfortunately
our actual value is 1. This means the error value
will be negative one. Because both of the
input values were 0, the weights won't change,
but the bias does. At the second line, if
we calculate the activation function, the
actual value matches the expected
value, so we don't have to do anything.
At the third line, the actual value
calculated from the activation
function is 0, but unfortunately the expected
value is one, so the error value will also be
one.
We get the fourth line's weight0 by multiplying
the previous input0 value by the error value
and adding the previous weight0 value. So negative
0.13 plus 1 will be positive 0.87.
With these weights and bias, the actual
value will match the expected value.
Because not all of the errors were 0, we have
to run the loop again.
We do this until all of the error values
are 0.
I hope it was simple and understandable, and
if you liked the video, don't forget to subscribe
and hit the like button.
Thank you for watching, see you next time.
Bye.
