YUFENG GUO: In this episode
of Cloud AI Adventures,
find out how to train on
increasingly complex datasets
by converting a linear model
to a deep neural network.
As the number of feature
columns in a linear model grows,
it can be increasingly difficult
to achieve high accuracy
in your training
as the interactions
between different
columns get more complex.
This is a known problem, and one
particularly effective solution
for data scientists to use
is deep neural networks.
Deep neural networks are able to
adapt to more complex datasets
and generalize better to
previously unseen data because
of their multiple layers.
That's why they're called deep.
These layers allow them to
fit more complex datasets
than linear models can.
However, the trade-off is
that the model will generally
take longer to train,
be larger in size,
and be less interpretable.
So why would anyone
want to use one?
Because it can lead to
higher final accuracies
on complicated datasets.
One of the tricky things
about deep learning
is that getting all the
parameters to be just right
can be very tough.
Depending on your dataset,
the configurations
can seem virtually limitless.
However, TensorFlow's
built-in DNNClassifier
and DNNRegressor classes
supply a reasonable set
of defaults, allowing you
to get started quickly and easily.
Let's see an example of
how to update our Iris
example from a linear model to
a deep neural network, often
abbreviated as DNN.
Now, we don't have time in
this video to properly show off
a 2,000-column model that
a DNN would leverage,
so I'm just going to use the
four that we've been using
throughout this series.
The main change comes from
replacing our LinearClassifier
class with DNNClassifier.
This will create a deep
neural network for us.
Virtually everything
else remains the same.
There is one
additional parameter
that is required for
the deep neural network
that we did not
include previously.
Since deep neural networks
have multiple layers,
and each layer has potentially
a different number of nodes,
we'll add a hidden_units
argument.
The hidden_units
argument allows you
to supply an array with the
number of nodes for each layer.
This allows you to create
a neural network simply
by considering
its size and shape
rather than writing the whole
thing from scratch by hand
and wiring everything up.
Adding or removing layers is
as simple as adding or removing
an element in the array.
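As a minimal sketch of that change, assuming the tf.estimator API and the four Iris feature names used earlier in the series (the layer sizes here are illustrative):

```python
import tensorflow as tf

# The four numeric feature columns we've been using for Iris.
feature_columns = [
    tf.feature_column.numeric_column(key)
    for key in ["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"]
]

# Before: the linear model.
# classifier = tf.estimator.LinearClassifier(
#     feature_columns=feature_columns, n_classes=3)

# After: a deep neural network with two hidden layers
# of 10 nodes each. Adding or removing a layer is just
# adding or removing an element in the hidden_units list.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 10],
    n_classes=3)
```

Everything else, including the input functions and the train and evaluate calls, stays as it was.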
Of course, with any
preconstructed system,
what you gain in
convenience, you often
lose in customizability.
DNNClassifier tries
to get around this
by including a number
of additional parameters
that you can optionally use.
When left out, some
reasonable defaults are utilized.
For example, the optimizer,
the activation function,
and dropout ratios are all
available to customize, along
with many others.
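Here's a sketch of overriding a few of those optional parameters; the specific values are illustrative, not recommendations:

```python
import tensorflow as tf

# The same four Iris feature columns as before.
feature_columns = [
    tf.feature_column.numeric_column(key)
    for key in ["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"]
]

# The same DNNClassifier, but customizing a few of the
# optional parameters instead of relying on the defaults.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 20, 10],
    n_classes=3,
    optimizer="Adagrad",       # string shorthand for a built-in optimizer
    activation_fn=tf.nn.relu,  # activation used in the hidden layers
    dropout=0.1)               # probability of dropping a given coordinate
```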
What else do you
have to do in order
to convert our model
from linear to deep?
Nothing.
That's the beauty of using
the Estimators framework:
it gives you a common way to organize your
data, training, evaluation,
and model exporting, while
still giving you the flexibility
to experiment with
different models
and different parameters.
Sometimes, deep neural networks
can outperform linear models.
In these situations,
TensorFlow makes
it easy to switch from a linear
model to a deep one with very
few code changes,
by leveraging
the Estimators framework to
replace just one function call.
This means more time to work on
the data, model, and parameters
rather than wiring up
your training loop.
I'm Yufeng Guo, and if
you want to learn more, be
sure to subscribe
to the channel,
and check out the rest
of Cloud AI Adventures.
Don't forget to use
TensorFlow estimators
for easy deep neural networks.
Thanks for watching that
episode of Cloud AI Adventures.
You can catch the
whole series here.
And if you have any
thoughts and comments,
feel free to leave
them down below.
