Welcome to another Skymind video tutorial. In this screencast I'm going to demonstrate adding a neural net to the DataVec image pipeline example.
In the DataVec image pipeline example, we read a directory of images and apply labels to them.
In this example I'm going to take that previous example and add a multi-layer network to it. I'm going to train the model against our training data set, and then we're going to test the model with the test data set.
This example will be added to our examples repo. It is available at this location, and the instructions for downloading it are available at the second URL. Thank you.
In our previous example, I built an image data pipeline that processes a collection of image files stored in a directory, so the directory name is the label of the image. That example is here, and it is detailed in another video.
Now I'm going to take the next natural step: adding a neural net to that example. I'm working in IntelliJ, and I recommend you do the same. I'm going to copy the class that we had previously, and I'll just call this one AddNeuralNet.
What do we need to do to convert this into a real working neural net example? First, this batch size of 1 was there for demonstration purposes only, so let's go ahead and set it to 128. When we process our data, we want to specify the number of… Down here, we no longer need this little loop that just shows us the format of the data.
I think we're ready to start adding our neural net. As I work through this example, it might be useful to switch into presentation mode. So let's go ahead and build our model, and let's output some log information. And how do we go about building our model? There it is: MultiLayerConfiguration. I'm going to call that conf, and a NeuralNetConfiguration.Builder is what we need.
In addition to setting the random seed to a particular value so that we can get reproducible results, we can also specify the optimization algorithm; we're going to use STOCHASTIC_GRADIENT_DESCENT. We specify the number of iterations as 1 and a learning rate; I'm going to set the learning rate to 0.006. We specify an updater and our momentum, and set regularization to true.
Now we can start adding layers. For the first layer, layer 0, we're going to use a DenseLayer.Builder. As our number in, we specify the height and the width of the images. Our number out specifies the number of neurons in the next layer; let's set that to 100. Then we specify our activation function, specify the weight initialization, and build.
Now that we're done with our first layer, we can add our second layer, which is going to be our output layer. We specify the loss function; the number in, which has to match the number out from the previous layer; and the number out, which would be 10, right? Because we have 10 digits that we're classifying in this example. Then we specify our activation function, specify our weight initialization, and build.
Now we've configured our network and added our two layers. Let's set pretrain to false and backprop to true. We need to set the input type, and our record reader is going to feed us an input type we can call convolutional. Then we specify the height, width, and depth (actually height, width, and channels) for the size of our images and their number of channels: 28 x 28 and a single channel, because the images are grayscale. And then build, and that's it for our network.
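The configuration narrated above would look roughly like this in the DL4J API of that era. This is a sketch, not the exact example code: variable names such as rngSeed, height, width, channels, and outputNum are assumptions from context, the "relu"/"softmax" activations and the l2 term are assumed values (the transcript only says an activation and regularization are set), and several of these builder methods were renamed in later DL4J releases.

```java
int rngSeed = 123;                          // assumed seed value, for reproducible results
int height = 28, width = 28, channels = 1;  // 28 x 28 grayscale images
int outputNum = 10;                         // 10 digit classes

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(rngSeed)
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .iterations(1)
        .learningRate(0.006)
        .updater(Updater.NESTEROVS).momentum(0.9)
        .regularization(true).l2(1e-4)      // l2 value is an assumption
        .list()
        .layer(0, new DenseLayer.Builder()
                .nIn(height * width)        // one input per pixel: 28 * 28 = 784
                .nOut(100)                  // 100 neurons feeding the next layer
                .activation("relu")         // assumed activation
                .weightInit(WeightInit.XAVIER)
                .build())
        .layer(1, new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
                .nIn(100)                   // must match nOut of the previous layer
                .nOut(outputNum)            // 10 digits to classify
                .activation("softmax")      // assumed activation
                .weightInit(WeightInit.XAVIER)
                .build())
        .pretrain(false).backprop(true)
        .setInputType(InputType.convolutional(height, width, channels))
        .build();
```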
After we build our model, let's go ahead and use it. Let's set a listener so we can get output to see if it is training, listening every 10 updates. Then let's go ahead and train the model: a for loop over the number of epochs, and in that loop we call fit using our dataIter. And that should be it.
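Putting those training steps together, a minimal sketch (conf, dataIter, and numEpochs are names assumed from context; numEpochs is 15 in the run shown later):

```java
MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();
// log the training score every 10 iterations so we can see it training
model.setListeners(new ScoreIterationListener(10));

for (int i = 0; i < numEpochs; i++) {
    model.fit(dataIter);   // one full pass over the training iterator per epoch
}
```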
Let's go ahead and see if that runs.
A few things before we run this example. You absolutely want to make sure that you've commented out this LogRecordListener; it's going to write one line of output for every image it reads, which is 60,000 lines of output. I also had a typo here: I had height comma width, and it should be height times width. And now I should be able to run this example.
As we can see while our model runs, the training score output shows that it is training; the score is getting smaller. So we do have a somewhat successful network here.
Now that we have a model that is training
that seemed like it's going to work for our data set
we need to evaluate that model.
So what do we need to evaluate our model?
Well, we need to take our test data
and compare it to the output of the neural net.
So to step through our test data, we're going to reset our record reader, and then we need to read the test data using the input split we labeled as test up above. Here's the code that we're taking a look at.
Remember, in the beginning we specified a file split for the train data and a file split for the test data. We're going to take the record reader that we used against the training data and now use it against the test data, and we're going to pass those data sets through to the network to validate. So we specify the record reader we defined earlier, the batch size we defined earlier, the offset for the label, and the total number of labels. We scaled the images that we used to train, so we need to scale these images as well.
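A sketch of that test setup, assuming the names recordReader, testData, batchSize, and outputNum carried over from the pipeline example, and a label index of 1 (the position the label occupies in each record in that example):

```java
// reuse the same record reader, now initialized with the "test" file split
recordReader.reset();
recordReader.initialize(testData);

// same batch size as training; label at index 1; outputNum (10) total labels
DataSetIterator testIter =
        new RecordReaderDataSetIterator(recordReader, batchSize, 1, outputNum);

// scale pixel values from 0-255 down to 0-1, just as we did for training
ImagePreProcessingScaler scaler = new ImagePreProcessingScaler(0, 1);
testIter.setPreProcessor(scaler);
```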
Let's start by creating an evaluation object with 10 possible classes.
And then let's loop through the test iterator.
While we're looping through the test iterator, you'll recognize some of this code
from the first example where I displayed
the contents of the iterator as a data set.
So we want the data set
from next.
While in this loop, we want to compare the model's output
the model's prediction
to the label
for the data set, based upon the directory that it was in.
And we do that like this: we pass our data set's features through to the model and get the output from that, and then we compare the output to the labels.
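The evaluation loop described above, sketched against the DL4J API of that era (getFeatureMatrix was later renamed getFeatures; testIter, model, outputNum, and log are names assumed from context):

```java
Evaluation eval = new Evaluation(outputNum);   // 10 possible classes

while (testIter.hasNext()) {
    DataSet next = testIter.next();
    // the model's prediction for this batch of test images...
    INDArray output = model.output(next.getFeatureMatrix());
    // ...compared against the labels derived from the directory names
    eval.eval(next.getLabels(), output);
}

log.info(eval.stats());
```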
Let's log the statistics of that evaluation. So now we've built our model, we've run it briefly to verify that it was training, and we've added some evaluation code to check our test data against our trained network's output.
Let's go ahead and run this.
This is going to take some time to run, so I'm going to stop the recording.
And then I'll just show you the final evaluation output.
Once our model completes fitting for 15 epochs and runs the test data through, we can take a look at the statistics. It looks like label 0 was labeled as 0 a thousand times, and there were a few times it was misclassified. The same goes for most labels: we have right over 90% accuracy. We have a little trouble with our 1 being labeled as a 2, which makes a little bit of sense. So that's our output: fairly accurate, trained fairly quickly, though it's not so good with 9s. So there you have it.
In our next video in this series,
we will store the trained model, load it back, and then evaluate new images.
