Dear Fellow Scholars, this is Two Minute Papers
with Károly Zsolnai-Fehér.
This paper discusses possible roadmaps towards
building machines that are endowed with humanlike
thinking.
And before we go into that, the first question
would be, is there value in building machines
that think like people?
Do they really need to think like people?
Isn't it a bit egotistical to say "if they
are to become any good at a given task,
they have to think like us"?
And the answer is, well, in some cases, yes.
If you remember DeepMind's Deep Q-Learning
algorithm, it was able to play at a superhuman
level in 29 out of 49 different Atari games.
For instance, it did quite well in Breakout,
but less so in Frostbite.
And by Frostbite, I mean not the game engine,
but the Atari game from 1983 where we need
to hop from ice floe to ice floe and construct
an igloo.
However, we are not meant to jump around arbitrarily
- we can gather these pieces only by jumping
on the active ice floes, which are shown
in white.
Have a look at this plot.
It shows the score the algorithm achieved
as a function of game experience in hours.
As you can see, the original DQN is doing
quite poorly, while the extended versions
of the technique can reach a relatively high
score over time.
This looks really good...until we look at
the x axis, because then we see that this
takes around 462 hours and the scores plateau
afterwards.
Well, compare that to humans, who can do at
least as well, or even a bit better, after
a mere 2 hours of training.
So clearly, there are cases where there is
an argument to be made for the usefulness
of humanlike AI.
The paper describes several possible directions
that may help us achieve this.
Two of them are understanding intuitive physics
and intuitive psychology.
Even young infants understand that objects
follow smooth paths and expect liquids to
go around barriers.
We can try to endow an AI with similar knowledge
by feeding it with physics simulations and
their evolution over time to get an understanding
of similar phenomena.
This could be used to augment already existing
neural networks and give them a better understanding
of the world around us.
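One way to picture "feeding an AI physics simulations and their evolution over time" is to generate pairs of consecutive simulation states that a network could learn to predict. The sketch below is a hypothetical illustration with made-up names, not a method from the paper: a bouncing ball under gravity, producing (state, next state) training pairs.

```python
# Hypothetical sketch: generate training pairs from a simple physics
# simulation, so a network could learn to predict how objects move.
GRAVITY, DT = -9.81, 0.05

def step(state):
    # One Euler integration step for a ball; state = (height, velocity).
    height, velocity = state
    velocity += GRAVITY * DT
    height += velocity * DT
    if height < 0.0:                      # bounce off the ground,
        height, velocity = 0.0, -velocity * 0.8  # losing some energy
    return (height, velocity)

def make_dataset(initial_state, n_steps):
    # Pairs of (current state, next state) - supervision for a
    # "what happens next" predictor.
    pairs, state = [], initial_state
    for _ in range(n_steps):
        nxt = step(state)
        pairs.append((state, nxt))
        state = nxt
    return pairs
```

A network trained on many such rollouts would, in effect, be distilling the simulator's knowledge of smooth trajectories and collisions - the kind of expectations young infants already have.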
Intuitive psychology is also present in young
infants.
They can tell people from objects, and distinguish
social from anti-social agents.
They also learn goal-based reasoning quite
early.
This means that a human watching an experienced
player play Frostbite can easily derive the
rules of the game in a matter of minutes.
Kind of like what we are doing now.
Neural networks also have a limited understanding
of compositionality and causality, and often
perform poorly when describing the content
of images that contain previously known objects
interacting in novel, unseen ways.
There are several ways of achieving each of
these elements described in the paper.
If we manage to build an AI that is endowed
with these properties, it may be able to think
like humans, and through self-improvement,
may achieve the kind of intelligence that
we see in all these science fiction movies.
There is lots more in the paper - learning
to learn, approximate models for thinking
faster, model-free reinforcement learning,
and a nice Q&A section with responses to common
questions and criticisms.
It is a great read that is easy for everyone
to understand; I encourage you to have a look
at the video description for the link to it.
Scientists at Google DeepMind have also written
a commentary article where they largely agree
with the premises described in this paper,
and add some thoughts about the importance
of autonomy in building humanlike intelligence.
Both papers are available in the video description
and both are great reads, so make sure to
have a look at them!
It is really cool that we have plenty of discussions
on potential ways to create a more general
intelligence that is at least as capable as
humans in a variety of different tasks.
What a time to be alive!
Thanks for watching and for your generous
support, and I'll see you next time!
