[BELL RINGS]
And we're back.
I am ready, in this
video, to show you
how to save the trained
model with ml5.
So if you recall, what
I did in the previous video
is I added a feature
to my example which
will load a data set
and immediately start
training the model.
So you can see here a whole
bunch of labeled x,y points.
The model is now training all
the way up until 200 epochs.
And when it gets to the end--
we've got a train model and
I can click around and have
the model guess a particular
note for a particular x,y
coordinate in the canvas.
But if I press stop
and run it again,
I have to retrain the model.
And this is incredibly
useful, because I
might want to try
adjusting the data set,
recollecting the data,
retraining the model,
trying different
parameters, adding
more epochs, fewer epochs--
all sorts of possibilities.
But once I'm done,
once I feel like I'm
satisfied with this
model, I would also
like to be able to
just save the model
and reload it so that I
don't have to do anything
with the data again.
In other words, we're finished
with these first two steps,
collect the data,
train the model.
We've used saveData()
and loadData() previously.
And now, I just want
to save the model
so that I could deploy it.
So to do that, there's really
just one other function that I
need: save().
So if I just call save(), that
saves the model, not the data.
And then, also, of
course, I'm going
to make use of this
load function as well.
So let's go back to my
trusty key press interface
and add one more key.
Let's use m for
saving the model.
Change the key to m, change
the function to save,
and then I'm going to call
this mouse-notes as well.
Actually, maybe I need
some more arguments here.
Let's go and check
the ml5 reference.
So indeed, it's
the same as before.
I need to give it
a name and then
a callback for when
it's completed.
But, in this case,
I don't need to know
when it's completed, because
I'm just going to see
that the files are there.
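Here's a sketch of what that key handler might look like, assuming a global ml5 neural network stored in a variable called model (the 'm' key and the name come from the video; everything else is an assumption):

```javascript
// Sketch of the p5.js key handler for saving the trained model
// (assumes a global ml5 neural network in `model`).
function keyPressed() {
  if (key == 'm') {
    // Saves the trained model; three files are downloaded:
    // a .json, a _meta.json, and a .weights.bin file.
    model.save('mouse-notes');
  }
}
```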
Let's run it, train
it, and save it.
[MUSIC PLAYING]
[NOTES PLAYING]
I'm just testing to make sure
the model seems reasonable,
which it does.
And now, I can hit m.
The files have been downloaded.
Interestingly, there
are now four files
in my downloads directory.
Now, I should point out that
this file is from before.
This is the data file.
And I'm just going to delete
it right now, so that we're left
with just the three new files.
So saving the model is
different than saving the data.
With the data, you
just have one file--
the actual data in JSON format.
When you're saving the
model, there are three files.
The first file is model.json.
The next file is
model_meta.json.
And one more file,
model.weights.bin.
Now, I think, while recording
this video tutorial,
I just discovered a bug in ml5,
because these should actually
be called mouse-notes.json,
mouse-notes_meta.json,
mouse-notes.weights.bin.
But it just used the
default name, model.
So maybe by the time
you're watching this,
there'll be a new version
of ml5 that fixes that.
But those are the default names.
And the reason why
there are three files
is there's a lot of
information to store related
to a machine learning model.
If we travel back in
time for a moment,
you might remember a diagram I
drew in this very first video
as part of this series.
[MUSIC PLAYING]
This is the neural
network diagram.
Looking at this, we can see
the overall architecture
of the network.
There are two inputs,
there is a hidden layer,
and there is an output layer.
This architecture is
described in model.json.
This is what that model.json
file actually looks like.
It's a little terrifying.
There's lots of
lower level details
related to how machine learning
and neural networks work.
But we could even start to pick
and choose little bits here
that we can begin to understand.
For example, this is a
sequential neural network.
There's a sequence-- the inputs
feed into the hidden layer
and then to the output--
feed forward.
There is a dense layer.
A dense layer means every
single input is connected
to every single hidden node.
With all of those connections,
it's very dense.
Everything is connected
to everything.
We can see that the input
shape has a 2 in it.
There are 2 inputs.
We could go down and
find the output layer,
and the output
layer has 7 units.
Well, that doesn't match up with
3 here, but, if you remember,
the new data set that I
trained in this example
has C, D, E, F, G, A, B--
seven notes.
So that output layer
has 7 units to it.
A lot of stuff that's in here
matches up with the properties
that you set for your
ml5 neural network.
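To give a sense of it, here's a heavily trimmed sketch of the kind of structure model.json holds-- the exact fields, hidden-layer size, and activations will vary, so treat this as illustrative only:

```json
{
  "modelTopology": {
    "class_name": "Sequential",
    "config": {
      "layers": [
        {
          "class_name": "Dense",
          "config": { "units": 16, "batch_input_shape": [null, 2], "activation": "relu" }
        },
        {
          "class_name": "Dense",
          "config": { "units": 7, "activation": "softmax" }
        }
      ]
    }
  }
}
```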
So that is model.json.
Now, what about model_meta.json?
I mean, isn't this already
all the metainformation
for the model?
So while that is
true, everything
that you're seeing here
is the metainformation,
the architecture of the
model that TensorFlow.js
is expecting.
But ml5 also keeps track
of additional information
that's for the ml5
library only and not
related to TensorFlow.js.
And that's what's in model_meta.json.
ml5, for example, keeps track
of what the actual label
names are.
That's not something that
the neural network has--
everything's just
numbers, but that's
something that we want
to use in our code.
So ml5 is keeping
track of that for us.
If you recall, there was this
whole normalization process
that I talked about in
the first video, where
you need to take
the input values
and squash them into a
standardized range between 0
and 1.
So you need the minimum
and maximum ranges
for those data fields, and
ml5 is storing that for you
in these inputMin and
inputMax variables.
So it's keeping track of that.
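The normalization itself is simple arithmetic. Here's a minimal sketch of the idea-- this is the concept, not ml5's internal code:

```javascript
// Squash a raw value into the 0..1 range using the stored
// min and max for that input field (concept sketch, not ml5 code).
function normalize(value, min, max) {
  return (value - min) / (max - min);
}

// e.g. a mouse x of 300 on a 400-pixel-wide canvas
console.log(normalize(300, 0, 400)); // 0.75
```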
So this is just additional
metainformation related
to the model, specific to ml5.
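Putting those pieces together, the metadata might look roughly like this-- the field names and layout here are illustrative, not ml5's exact format:

```json
{
  "inputs": {
    "x": { "min": 0, "max": 400 },
    "y": { "min": 0, "max": 400 }
  },
  "outputs": {
    "label": {
      "uniqueValues": ["C", "D", "E", "F", "G", "A", "B"]
    }
  }
}
```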
Now, there's one more file.
This file is a bit mysterious--
model.weights.bin.
It's not a JSON file,
it's a binary file.
And that's why if I tried to
open it in Visual Studio Code,
I just see a lot
of gobbledygook.
Because it's just binary data.
And what it's storing is the
secret sauce of the neural
network: the values that result
from the training process, the
weights of all of these
connections.
So every single connection
between any given node--
an input node and a hidden node,
a hidden node and an output
node, has a weight
associated with it.
And those weights are like
dials that the neural network
is tuning as it's
being trained to try
to optimize towards getting
the proper outputs to match
with given inputs.
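To make that concrete, here's a toy illustration of what one set of weights does-- a node's output is a weighted sum of its inputs pushed through an activation function. The numbers here are made up; a real trained network stores thousands of these in the weights file:

```javascript
// Toy single-node example: weighted sum of inputs plus a bias,
// passed through a sigmoid activation (all numbers made up).
function neuronOutput(inputs, weights, bias) {
  let sum = bias;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return 1 / (1 + Math.exp(-sum)); // sigmoid activation
}

// Training nudges the weights; different weights, different output.
console.log(neuronOutput([0.5, 0.2], [1.0, -2.0], 0.1));
console.log(neuronOutput([0.5, 0.2], [0.3, 0.8], 0.1));
```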
So every time we train
the neural network,
we might have different weights.
The architecture is fixed.
We've sort of defined
this neural network model
as a fixed architecture
of fixed metainformation.
But the weights are
going to be different
based on the nuances of how the
training process actually went.
So now that we've established
how to save the model
and what files you get
when you save the model,
we're ready for the next step.
And that is, when we
first run this sketch,
can we load all of these
files into the model
and have what is, in essence, a
pre-trained model ready to go?
And the function we need to
do this is the load function.
So I need to say model.load.
First things first, I want
to upload those model files.
So I'm going to create a folder.
I'm going to call it model.
And then here I'm
going to add file,
and I'm going to upload
these three files.
So now, we can see that
in the web editor I have,
in a model directory,
model.json, model_meta.json,
and the weights.
So I don't want to
load data anymore.
There might be a reason why
I also want to load the data,
but in this case, I don't.
I just want the trained model.
Say model.load-- and then I
need to give it those files--
I'm just going to
say files, right now,
and then I'm going to write a
callback called modelLoaded.
[MUSIC PLAYING]
In the callback, I'm
just going to give myself
a message that the
model is loaded.
And then what goes here?
What goes in files?
This is a little bit tricky,
because there are three files.
But there's an easy
way to handle this.
And if I go back to the
ML5 documentation page,
it's right here.
So I'm going to grab this
little bit of code right here,
and I'm going to paste
it into my example.
And this is actually
the model info,
which I will put right here.
And the path is where I put
it, which is under model.
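Put together, the loading code might look something like this. The modelInfo object with model, metadata, and weights paths follows the ml5 documentation; the options and function names here are assumptions:

```javascript
// Sketch of loading a pre-trained ml5 model in setup(),
// pointing at the three files uploaded to the "model" folder.
let model;

function setup() {
  createCanvas(400, 400);
  let options = { task: 'classification' };
  model = ml5.neuralNetwork(options);
  const modelInfo = {
    model: 'model/model.json',
    metadata: 'model/model_meta.json',
    weights: 'model/model.weights.bin',
  };
  model.load(modelInfo, modelLoaded);
}

function modelLoaded() {
  console.log('model loaded!');
}
```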
So now, if I run
this sketch, I should
see, in the console, "model
loaded," and no errors.
That's what I'm hoping for.
Model loaded, no errors.
But did it really work?
In order to test it, I've got
to send my sketch straight
into the prediction state.
So before I had a collection
state, a training state,
and a prediction state.
But if I'm loading
the model directly,
I can just set the state
equal to prediction right away,
run it again, and see.
[NOTES PLAYING]
Looks like it works.
Now, if I wanted to
look at the data,
I could also load
the data, and we
could have everything in here.
I do not want to
train the model, though.
So let me try running this.
I should see the training
data, but without training--
I load the pre-trained
model and have it work.
[NOTES PLAYING]
Hurray!
OK.
[BELL RINGS]
This example is complete.
We have seen all of the steps--
how to collect the data,
how to train the model,
and then how to deploy
and use the model.
Now, we've also added
being able to save
the data after collecting it,
so that if we rerun the sketch
later, we could reload it, as
well as save the actual trained
model so we could go
and just load it again.
So one thing that I might say
is I've kind of done everything
all in one sketch, which
is quite useful, actually,
and there's a lot of
interactive possibilities
there, but you might also
consider breaking it apart.
Maybe you want to have
three different sketches--
one that's a data
collection sketch, one
that is a loading the data
and training the model sketch,
and then one that
is just a loading
a pre-trained model
and inference sketch.
So that could be an
exercise for you as well,
to divide all these pieces up.
I've got more that
I want to show.
So in the next video, now that
I have all these features,
and I have this fully
working example,
I want to show you a regression.
And what I mean by
regression is that the output
of the neural network,
instead of being a label--
a single note-- is a
predicted number.
In this case, I'll
use a frequency.
So hopefully it'll
make more sense
what I mean by that when
I do the actual example,
and I will talk through it a
bit more in the next video.
So thanks for watching,
and I'll see you soon.
Goodbye.
[MUSIC PLAYING]
