Artificial neural networks are very useful
tools that can learn to recognize objects
in images, or learn the style of Van Gogh
and paint new pictures in his style.
Today, we're going to talk about recurrent
neural networks.
So, what does the recurrent part mean?
With an artificial neural network, we usually
have a one-to-one relation between the input
and the output.
This means that one image comes in, and one
classification result comes out, such as whether
the image depicts a human face or a train.
With recurrent neural networks, we can have
a one-to-many relation between the input and
the output.
The input would still be an image, but the
output would not be a single word; it would
be a sequence of words, a sentence that describes
what we see in the image.
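The recurrent part can be sketched in a few lines: the network keeps a hidden state that it updates once per element of the sequence, so information from earlier steps can influence later ones. Here is a minimal toy version of that recurrence in plain Python; the weight values are made up purely for illustration, not learned.

```python
import math

def rnn_step(x, h, W_xh, W_hh, b):
    # One recurrent step: the new hidden state mixes the current input
    # with the previous hidden state, then squashes the result with tanh.
    return [
        math.tanh(
            sum(W_xh[i][j] * x[j] for j in range(len(x)))
            + sum(W_hh[i][k] * h[k] for k in range(len(h)))
            + b[i]
        )
        for i in range(len(h))
    ]

# Tiny hypothetical weights, just for illustration:
W_xh = [[0.5, -0.2], [0.1, 0.3]]    # input -> hidden
W_hh = [[0.4, 0.0], [-0.1, 0.2]]    # hidden -> hidden (the "recurrent" part)
b = [0.0, 0.0]

h = [0.0, 0.0]                       # initial hidden state
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:  # a sequence of 3 inputs
    h = rnn_step(x, h, W_xh, W_hh, b)
# h now summarizes everything the network has seen in the sequence.
```

Because the same step is applied over and over, the input and the output can each be one item or a whole sequence, which is exactly what gives us the one-to-many, many-to-one, and many-to-many setups discussed here.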
For a many-to-one relation, a good example
is sentiment analysis.
This means that a sequence of inputs, for
instance a sentence, is classified as either
negative or positive.
This is very useful for processing movie reviews,
where we'd like to know whether the user liked
or hated the movie without reading pages and
pages of discussion.
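To make the many-to-one idea concrete, here is a deliberately simplified toy classifier: it reads a whole sequence of words, updates one running state per word, and emits a single label at the end. The word scores below are invented for illustration; a real recurrent network would learn its representation from data instead of using a hand-made table.

```python
# Hypothetical word scores, hand-picked for this toy example only.
WORD_SCORE = {"loved": 2.0, "great": 1.5, "boring": -1.5, "hated": -2.0}

def classify_review(words):
    state = 0.0                          # plays the role of the hidden state
    for w in words:
        state += WORD_SCORE.get(w, 0.0)  # one update per input step
    return "positive" if state >= 0 else "negative"

# Many inputs in, one output out:
print(classify_review("i loved it great fun".split()))    # -> positive
print(classify_review("boring and i hated it".split()))   # -> negative
```

The shape of the computation, a loop over the sequence feeding a single final answer, is the part that carries over to a real recurrent network.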
And finally, recurrent neural networks can
also deal with many-to-many relations, translating
an input sequence into an output sequence.
A good example of this is machine translation,
which takes an input sentence and translates
it into an output sentence in a different language.
For another example of a many-to-many relation,
let's see what the algorithm learned after
reading Tolstoy's novel War and Peace by asking
it to write new text in that style.
It should be noted that generating a new novel
happens letter by letter, so the algorithm
is not allowed to memorize words.
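The letter-by-letter setup can be illustrated with an even simpler stand-in: a model that learns only which letter tends to follow which, and then samples new text one character at a time. No whole words are ever stored, mirroring the constraint described above. A real character-level recurrent network replaces this lookup table with a learned hidden state, so this is only a toy sketch; the training text below is invented for the example.

```python
import random

# A tiny made-up training text, standing in for a real corpus.
text = "the cat sat on the mat. the dog sat on the log. "

# Learn which characters follow each character (a bigram table).
follows = {}
for a, b in zip(text, text[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)  # make the sampled output repeatable
out = "t"
for _ in range(40):
    out += random.choice(follows[out[-1]])  # sample the next letter only
print(out)
```

Even this crude table already produces word-like runs of letters separated by spaces, which is exactly the first thing the real algorithm is observed to pick up during training.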
Let's take a look at the results at different
stages of the training process.
The initial results are, well, gibberish.
But the algorithm seems to recognize immediately
that words are basically a big bunch of letters
that are separated by spaces.
If we wait a bit more, we see that it starts
to get a very rudimentary understanding of
structure - for instance, a quotation mark
that has been opened must be closed, a sentence
can end with a period, and the next one starts
with an uppercase letter.
Later, it starts to learn shorter and more
common words, such as fall, that, the, for,
me.
If we wait for longer, we see that it already
gets a grasp of longer words and smaller parts
of sentences actually start to make sense.
Here is a piece of Shakespeare that was written
by the algorithm after reading all of his
works.
You see names that make sense, and you really
have to check the text thoroughly to conclude
that it's indeed not the real deal.
It can also try to write math papers.
I had to look for quite a while before I realized
that something was fishy here.
It is not unreasonable to think that it can
very easily deceive a non-expert reader.
Can you believe this?
This is insanity.
It is also capable of learning the source
code of the Linux operating system and generating
new code that also looks quite sensible.
It can also try to continue the song "Let
it Go" from the famous Disney movie, Frozen.
Or, it can write its own grooves after learning
from other people's works.
So, recurrent neural networks are really amazing
tools that open up completely new horizons
for solving problems where either the inputs
or the outputs are not one thing, but a sequence
of things.
And now, signing off with a piece of recurrent
neural network wisdom:
Well, your wit is in the care of side and
that.
Bear this in mind wherever you go.
Thanks for watching, and I'll see you next time!
