Friends, welcome to my YouTube channel. Dhanesh here. In this video, I am going to discuss the fully connected layer of a convolutional neural network.
We have already discussed the convolution layer, the ReLU layer and the max pooling layer; those layers we finished. Let me quickly recap the convolution layer. As I already discussed, convolution is a mathematical operation in which we combine two functions to produce a third function. In our case, we combine the image and the kernel to produce a feature map. That's what we do in the convolution operation.
Why are we doing this operation? When we take an image of an animal, for example a dog, different animals of the same species look slightly different. So how can we identify them? That is exactly why we use the convolution operation. The convolution algorithm we discussed in the previous video.
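To make that idea concrete, here is a minimal sketch of the convolution operation in plain NumPy. The 7x7 image of ones and the 4x4 averaging kernel are arbitrary illustrative values, chosen only so the sizes match the 7x7 → 4x4 example in this video:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid (no-padding) 2D convolution: slide the kernel over the
    image and sum the element-wise products to build a feature map."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.ones((7, 7))            # toy 7x7 input image
kernel = np.ones((4, 4)) / 16.0    # toy 4x4 averaging kernel
feature_map = convolve2d(image, kernel)
print(feature_map.shape)           # a 7x7 input and 4x4 kernel give a 4x4 map
```

Note how the output size follows from the input and kernel sizes: 7 - 4 + 1 = 4, which is exactly the shrinking we see on the board.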
 
The ReLU function is used to introduce non-linearity into the neural network. Most of the functions a CNN has to learn are nonlinear, so we go for the ReLU function. Why can't we go with sigmoid or tanh? They are also nonlinear. The advantage of ReLU is that it does not saturate for positive inputs. If you analyze tanh or sigmoid, they flatten out (saturate) at both extremes, so their gradients vanish and learning slows down.
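As a small numerical illustration (the sample inputs are arbitrary), you can see sigmoid squashing everything into (0, 1) and flattening at the extremes, while ReLU keeps growing for positive inputs:

```python
import numpy as np

def relu(x):
    # Passes positive values through unchanged; zeroes out negatives.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes any input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-5.0, 0.0, 5.0, 50.0])
print(relu(x))     # grows without bound for positive x: gradient stays alive
print(sigmoid(x))  # saturates toward 0 and 1 at the extremes
```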
Why are we using max pooling? It's for shrinking: we shrink the output of the previous operation to a smaller size by using the max pooling layer.
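For example, a 2x2 max pooling with stride 2 (the values in the map below are arbitrary) shrinks a 4x4 map to 2x2 by keeping only the largest value in each window:

```python
import numpy as np

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the largest value in each
    size-by-size window, shrinking the map by that factor."""
    h, w = x.shape
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 1, 1],
                 [0, 2, 9, 8],
                 [5, 1, 7, 3]], dtype=float)
print(max_pool(fmap))   # 4x4 map shrinks to a 2x2 map
```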
So all these operations we have discussed. Today, in this video, I am getting into the fully connected layer of the artificial neural network.
 
Before getting into the fully connected layer, you may be aware that after the ReLU operation we got a 4 by 4 matrix. This is the feature map, and its size is 4 by 4. See, we started with a 7 by 7 matrix and it shrank to 4 by 4. Now we will stack up these layers: we will do one more round of convolution, ReLU and pooling. The first time we did a convolution, then ReLU, then max pooling; now we repeat that same process one more time: convolution, ReLU, plus pooling. So here you had a 4 by 4 matrix, and after the second round the map will be shrunk to 2 by 2. So this is the output of all the stacked layers: convolution, ReLU, pooling, and the 4 by 4 map becomes 2 by 2. This is what we are going to give to the fully connected layer. So you understood this now.
One important point you need to understand about the fully connected layer: the input of the fully connected feed-forward network is flattened data. So what we are giving here is flattened data; that is the one important point. Into that fully connected feed-forward network, the data we are feeding is the flattened data.
So what is the process of flattening? Flattening collapses the spatial dimensions of the input into a single dimension: we take the output of the max pooling layer and put it into a one-dimensional tensor known as a vector.
 
In all the previous operations we used three filters (kernels). I told you that you can use any number of filters; the more filters you use, the higher the prediction accuracy tends to be. We used three filters, and from each filter we got an output. Let us assume that each output is a 2 by 2 matrix, a 2 by 2 map. So we have three outputs.
 
 
Let's assume some arbitrary values. Say the first map holds 1, 0.5, 0.5, 1; the second map holds 1, 0.5, 0.5, 0.5; and the third map holds 0.5, 1, 1, 0.5. We put all of these values into a tensor known as a vector. So I am arranging the values one after another: first the four values from the first map, then the four values from the second map, then the four values from the third map. The result is a single twelve-element vector: 1, 0.5, 0.5, 1, 1, 0.5, 0.5, 0.5, 0.5, 1, 1, 0.5. This process of putting all the values into one vector is what we call flattening.
This is very important. When you use libraries like Keras, the tools take care of these operations for you. This is the process of flattening, and this flattened data is given to the fully connected layer.
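As a small sketch, here is the same flattening done in NumPy, using the three 2x2 maps with the arbitrary values from the example above:

```python
import numpy as np

# Three 2x2 maps (one per filter), filled with the example's arbitrary values.
maps = np.array([[[1.0, 0.5],
                  [0.5, 1.0]],
                 [[1.0, 0.5],
                  [0.5, 0.5]],
                 [[0.5, 1.0],
                  [1.0, 0.5]]])

vector = maps.flatten()   # 3 maps x 2 x 2 = 12 values in one vector
print(vector)             # five of the twelve entries are 1 (positions 1, 4, 5, 10, 11)
```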
 
This flattened input is what we are going to give to the fully connected layer. That's what I am going to explain to you now. So, the fully connected layer: you have the inputs x1, x2 and so on, and then you have a hidden layer. In a convolutional neural network, this hidden layer is what we call the fully connected layer; that's another point you need to understand. And here you will be having multiple outputs for classification; let's have two outputs. Every input is connected to every neuron in the hidden layer through a weight, and every hidden neuron is connected to every output neuron in the same way; everything is interconnected, and that is why we call it fully connected. So, one important point: to the fully connected feed-forward network we are feeding the flattened data; I told you, it is in a vector. Now, the real classification, the exact classification, happens in this fully connected layer. This is very important. And if you look at the output, you have multiple neurons there; that's very important, you have multiple neurons for classification.
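To sketch this, here is a tiny fully connected forward pass in NumPy. The layer sizes (12 inputs from the flattened vector, 8 hidden neurons, 2 outputs) and the random weights are illustrative assumptions, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, W, b):
    # Fully connected layer: every input feeds every neuron via a weight.
    return W @ x + b

def softmax(z):
    # Turn the output scores into class probabilities.
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.array([1, 0.5, 0.5, 1, 1, 0.5, 0.5, 0.5, 0.5, 1, 1, 0.5])  # flattened input

W1, b1 = rng.normal(size=(8, 12)), np.zeros(8)   # hidden (fully connected) layer
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)    # two output neurons (two classes)

hidden = np.maximum(0.0, dense(x, W1, b1))       # ReLU on the hidden layer
probs = softmax(dense(hidden, W2, b2))           # probabilities for the two classes
print(probs, probs.sum())                        # two values that sum to 1
```

With trained weights instead of random ones, the output neuron with the highest probability is the predicted class.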
See, when we analyze the vector I showed you previously, certain values are high for X, and when we repeat the same process for another letter, say Y, certain other values will be high. Let me show you the vector once again: 1, 0.5, 0.5, 1, 1, 0.5, 0.5, 0.5, 0.5, 1, 1, 0.5. Please understand that it is a single vector; it is one continuation, and you have to understand that. If you analyze this vector, the values at the first, fourth, fifth, tenth and eleventh positions are high. So when you do the convolution, ReLU, max pooling and finally come up with a vector after the flattening operation, these specific values are high; from this we understand that the letter is X. For another letter, Y, some other values will be high. So that is the way the classification happens: by using these values, the fully connected layer does the classification.
For the real classification, you need to understand that training has to be done first; our model has to be trained. So how is the training happening? The fully connected feed-forward network is trained using the backpropagation algorithm. You are familiar with this algorithm; it is one of the most important algorithms you need to learn. Backpropagation is a supervised learning algorithm for training neural networks: we update the weights and biases using gradient descent until the loss function reaches its minimum. So we use this backpropagation algorithm and we train our fully connected network, our convolutional neural network.
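As a toy sketch of that idea, gradient descent driving a loss to its minimum, here is a single-weight example. The training pair and learning rate are arbitrary; real backpropagation computes this same kind of gradient for every weight in the network, layer by layer:

```python
# Learn one weight w so that w * x matches the target, by gradient descent
# on the squared loss (pred - target)**2.
x, target = 2.0, 4.0      # hypothetical training pair (ideal weight is 2.0)
w = 0.0                   # initial weight
lr = 0.05                 # learning rate

for _ in range(200):
    pred = w * x
    grad = 2 * (pred - target) * x   # derivative of the squared loss w.r.t. w
    w -= lr * grad                   # gradient descent update

print(round(w, 3))  # converges to 2.0, where the loss is at its minimum
```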
Now our trained network knows this vector is X: for the trained convolutional neural network, this is X. If we give a new input image for X, the same process is repeated: convolution, ReLU, max pooling, flattening, everything is done, and finally our network gets a new vector to compare with the trained one. Let us assume the values it got are 0.9, 0.65, 0.5, 0.96, 1, 0.5, 0.55, 0.55, 0.55, 0.9, 0.9, 0.5. So this is the vector it got for the new input image, and that's what we are using for prediction.
 
 
It checks the positions that are high for X: the first, then the fourth, fifth, tenth and eleventh. In the trained data those values are all 1, so when you sum them you get 1 plus 1 plus 1 plus 1 plus 1, which is 5. Now, in the new vector, the values at those same positions are 0.9, 0.96, 1, 0.9 and 0.9; when you sum them you get 4.66. Dividing by 5, the answer you are getting is about 0.93, which is somewhat near to 1.
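A small sketch of that position-matching score in NumPy, using the illustrative numbers from the walkthrough (the 1/0.5 trained vector and the new readings from above):

```python
import numpy as np

trained = np.array([1, 0.5, 0.5, 1, 1, 0.5, 0.5, 0.5, 0.5, 1, 1, 0.5])
new_x = np.array([0.9, 0.65, 0.5, 0.96, 1, 0.5, 0.55, 0.55, 0.55, 0.9, 0.9, 0.5])

high = np.where(trained == 1)[0]                 # the positions that are high for X
score = new_x[high].sum() / trained[high].sum()  # sum at those positions / trained sum
print(high + 1, round(score, 3))                 # positions 1, 4, 5, 10, 11; score ~0.93
```

A score near 1 means the new image lights up the same positions as the trained X vector.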
Then, if you give another letter, say Y, and show it as if it were X, it will produce different values. Let us assume that at those same positions you are getting 0.5, 0.5, 0.5, 0.5 and 0.5. So when you add them, 0.5 plus 0.5 is 1, and so on, you will get 2.5. This 2.5 divided by 5 will be 0.5. This 0.5 is well below the first image's score, which means the second input, whatever I showed, is not X; the first one is X.
So you got the point. The convolutional neural network has been trained using the backpropagation algorithm, and this is the trained vector it is holding. When we give an input, it adds the values at the corresponding positions, the first, fourth, fifth, tenth and eleventh, the positions where the trained values are high, and divides by the trained sum. For the first image it got about 0.93. In the second case, the corresponding values are 0.5 each, the sum is 2.5, and 2.5 divided by 5 is 0.5. Since 0.93 is higher than 0.5, the first image you have given is X and the second image is not X. So this is the way the classification is done in the fully connected network. Thanks for watching.
