There are all kinds of games that computers
are better at than humans these days.
Chess.
Jeopardy.
Go.
An artificial intelligence, or AI, is a computer
system designed to solve the problems you’d
normally expect to need a human brain to solve.
And the latest advance?
AIs are getting really good at beating humans at poker.
I’m Stefan Chin.
And you might recognize me from my riveting
performance on the SciShow Quiz Show.
And today I’m here to tell you about how
artificial intelligence is taking over the
world.
In January, an AI called Libratus beat four expert human players after playing about 120,000 games of poker.
And in a paper published yesterday in the
journal Science, a separate research group
announced that their AI, called DeepStack,
beat 10 out of 11 expert human players after
playing about 45,000 games.
Both AIs played a version of poker known as Texas Hold ‘em, where each player gets two face-down cards that only they are allowed to look at.
There are also five face-up cards that everyone can see, and four rounds of betting.
Most of the games that AIs have conquered
so far, like the strategy game Go, are what
are known as perfect-information games, meaning
that all the players have the same information
about the game.
For example: in chess or Go, both players
can see all the pieces on the board, so they’re
making decisions based on the same information.
But Texas Hold ‘em is an imperfect-information
game.
Since players can’t see each other’s face-down
cards, they don’t all have the same information.
That makes things much more complicated, because
you have to make guesses about the other player’s
hand.
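To make that concrete, here’s a toy sketch of why players don’t share the same information: everyone sees the shared cards, but each player’s view includes only their own face-down cards. The cards and player names below are made-up examples, not anything from either AI.

```python
# Toy model of imperfect information in Texas Hold 'em: the full state
# exists, but each player can only see part of it. Cards are examples.

full_state = {
    "shared": ["A♠", "K♦", "7♣", "2♥", "9♠"],   # face-up, visible to all
    "hole": {"player1": ["Q♠", "Q♦"],           # face-down, private
             "player2": ["8♥", "3♣"]},
}

def view(state, player):
    """Everything a single player actually knows about the game."""
    return {"shared": state["shared"], "my_hole": state["hole"][player]}

p1, p2 = view(full_state, "player1"), view(full_state, "player2")
print(p1["shared"] == p2["shared"])    # True: public info is identical
print(p1["my_hole"] == p2["my_hole"])  # False: private info differs
```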
Like, say your opponent raises the bet.
Is that because they actually have great cards?
Or are they bluffing because they think you’re
bluffing?
And do they think you’re bluffing because
in the last round of bets you thought they
were bluffing?
Those kinds of brain-bendy questions come
up in imperfect-information games all the
time, and these two new AIs each used different
techniques to figure out the most likely answers.
They both played against only one opponent per game, which helped because the more players there are, the more ways the game can play out.
But they also both played the no-limit version
of Texas Hold ‘em, meaning that the players
could bet however much they wanted.
And that made things harder, because when
you can bet whatever you want, the results
of each round affect the way you bet in the
later rounds, so the game has way more possible
outcomes.
Specifically, there were about 10^160 possible outcomes for each game.
That’s a 1 with 160 zeroes after it.
It’s a number so big that there’s no way even the most powerful computer could actually consider all of those possibilities.
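For a back-of-the-envelope sense of that scale, here’s the arithmetic; the evaluation rate is a made-up round number, not a real benchmark.

```python
# Why 10^160 outcomes can't be brute-forced: even an absurdly fast
# machine would need far longer than the age of the universe.

outcomes = 10 ** 160                 # possible outcomes per game
per_second = 10 ** 18                # hypothetical: a quintillion checks/sec
seconds_per_year = 60 * 60 * 24 * 365

years_needed = outcomes // (per_second * seconds_per_year)
print(f"{years_needed:.1e} years")   # on the order of 10^134 years
```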
For the Libratus AI that won against four people in January, researchers first had it play literally trillions of games against itself.
They programmed it to learn from those games
so it could work out the best strategies in
different situations, based on how the rest
of the game would play out.
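Self-play training like this is often built on regret-style learning: after each game, the AI asks how much better each alternative action would have done, and shifts toward actions it regrets not taking. Here’s a minimal sketch of regret matching on rock-paper-scissors, a toy stand-in game; it illustrates the general idea, not Libratus’s actual algorithm.

```python
import random

# Regret-matching sketch on rock-paper-scissors: two copies of the
# learner play each other and drift toward the equilibrium strategy.

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's payoff

def strategy(regrets):
    """Mix over actions in proportion to positive accumulated regret."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positives]

def train(iterations=100_000):
    random.seed(0)  # for reproducibility of this sketch
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strategy_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strats = [strategy(regrets[p]) for p in (0, 1)]
        moves = [random.choices(range(ACTIONS), s)[0] for s in strats]
        for p in (0, 1):
            sign = 1 if p == 0 else -1
            payoff = sign * PAYOFF[moves[0]][moves[1]]
            for a in range(ACTIONS):
                # how the hypothetical action a would have done instead
                alt = sign * (PAYOFF[a][moves[1]] if p == 0
                              else PAYOFF[moves[0]][a])
                regrets[p][a] += alt - payoff
            for a in range(ACTIONS):
                strategy_sums[p][a] += strats[p][a]
    # the average strategy converges to equilibrium (uniform for RPS)
    return [s / iterations for s in strategy_sums[0]]

print(train())  # each probability approaches 1/3
```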
Then, they unleashed Libratus on the four
human players in a massive tournament that
lasted 20 days.
At first, the human players found some weaknesses
in the AI’s gameplay, and for the first
six days or so, they weren’t losing too
badly.
But the researchers also designed the AI to
learn from its games against the human players.
So every night, it would refine its strategies
before the next day’s games.
And around the seventh day, the AI started
beating the humans by a wider and wider margin.
By the end of the tournament, it had won more
than $1.2 million.
On the other hand, the researchers behind the DeepStack AI designed it to use neural networks.
Neural networks involve layers of processors working together to solve a problem, with each layer feeding its results into the next layer’s calculations.
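As a toy example of that layered structure, here’s a two-layer forward pass where each layer’s outputs become the next layer’s inputs. The weights are arbitrary made-up numbers, not anything from DeepStack.

```python
import math

# A tiny two-layer feedforward network: each layer's outputs become
# the next layer's inputs. Weights are arbitrary illustration values.

def layer(inputs, weights, biases):
    """One dense layer with a tanh activation on each unit."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

hidden = layer([0.5, -1.0], [[0.8, -0.2], [0.1, 0.4]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(output)  # one value between -1 and 1, the network's "answer"
```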
It’s a strategy that’s modeled after the
way brains work, and it’s being used in
some of the most advanced AIs in the world.
Like Libratus, DeepStack trained itself by
studying random games — although it only
looked at about 11 million of them.
But DeepStack wasn’t designed to consider
how a move would affect the whole rest of
the game before deciding on a strategy.
Instead, it looked at how different decisions
would affect only the next few moves, then
used what it had learned about the game to
calculate whether those next moves brought
it closer to winning.
So DeepStack tries to forecast how the next
part of the game might go, without trying
to predict the whole thing.
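That forecasting idea can be sketched as a depth-limited search: look a few moves ahead, then fall back on an estimated value instead of playing the game out. Everything below, the moves and the value function alike, is a simplified stand-in, not DeepStack’s real machinery.

```python
# Depth-limited lookahead sketch: search a few moves deep, then trust a
# value estimate (standing in for a trained network) for the rest.

MOVES = [-1, 0, 2]  # toy "moves" that just shift a numeric game state

def estimated_value(state):
    """Stand-in for a neural network's estimate of how good a state is."""
    return state

def lookahead(state, depth, maximizing=True):
    if depth == 0:
        return estimated_value(state)  # stop searching, use the estimate
    values = [lookahead(state + m, depth - 1, not maximizing)
              for m in MOVES]
    return max(values) if maximizing else min(values)

print(lookahead(0, 3))  # best score three moves out, opponent resisting
```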
And when the researchers had DeepStack play
against 11 expert human Texas Hold ‘em players,
it outperformed 10 of them across thousands
of games.
So even though Libratus and DeepStack were
designed very differently, both AIs mastered
a complicated, imperfect-information game.
And now there’s one more thing that computers
are better at than humans.
But this is a big step toward some broader
advancements, too.
There are lots of real-world situations where
you have to make decisions even though you’re
missing some information, just like in Texas
Hold ‘em.
And the success of these two AIs means we’re
on our way to creating systems that can analyze
those kinds of problems better than a human
can.
For things like deciding on the best treatment
for a disease, that’s an awesome plus.
And these AIs could also be useful for things
like stock trading and diplomacy.
One thing’s for sure, though: the future
is gonna have some amazing AI poker players.
And I for one welcome our new robot overlords.
Thank you for watching this episode of SciShow
News, and thanks especially to all of our
patrons on Patreon who make this show possible.
If you want to help us keep making episodes
like this, just go to patreon.com/scishow.
And don’t forget to go to youtube.com/scishow
and subscribe!
