-Artificial intelligence, or AI,
is everywhere.
It's in your conversations
with Alexa and Siri.
It's what's making sure
all of those shows
you plan
on binge-watching on Netflix
keep popping up
in the recommendations.
But can we really call
artificial intelligence "intelligence"?
The concept of AI has been
around since the '50s.
Mathematician Alan Turing
was the first
to explore the question,
"Can machines think?"
when devising the imitation
game,
better known
as the Turing test.
The term "artificial
intelligence" itself
was coined at
the Dartmouth conference in 1956,
where computer scientists
came together to discuss
how a machine could demonstrate
the ability to learn.
Out of this,
the Perceptron was born.
Created by Frank Rosenblatt,
this machine-learning algorithm
was the first concrete attempt
to emulate a human brain
by categorizing patterns,
such as shapes or letters
of the alphabet.
In the end, it wasn't clear
if the Perceptron had learned
to recognize the right patterns
needed to separate its inputs
into distinct categories.
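For a sense of how simple that first algorithm was, here is a minimal sketch of a Rosenblatt-style perceptron in Python. The toy 3x3 "bar" patterns and all of the names are invented for illustration; this shows the general learning rule, not Rosenblatt's original setup.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Rosenblatt-style perceptron: nudge a linear boundary until
    the two categories of patterns are separated (if they can be)."""
    w = [0.0] * len(samples[0])   # one weight per pixel
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):   # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                   # misclassified: shift the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Invented toy patterns: 3x3 pixel grids flattened to 9 numbers.
vertical_bar   = [0, 1, 0,  0, 1, 0,  0, 1, 0]   # category +1
horizontal_bar = [0, 0, 0,  1, 1, 1,  0, 0, 0]   # category -1
w, b = train_perceptron([vertical_bar, horizontal_bar], [1, -1])
print(sum(wi * xi for wi, xi in zip(w, vertical_bar)) + b > 0)   # True
```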
Today, AI has advanced
and is able to work
with larger data sets.
Professionals in the health
industry
are hoping to use
these advances
to effectively look
for patterns of disease.
Dr. Jae Ho Sohn is working
with an AI algorithm
to look at thousands
of PET scans
to build up
a digital library
of what Alzheimer's disease
looks like.
-The particular technology we use
is called deep learning,
also known as an artificial
neural network.
And it's able to analyze,
pixel by pixel, the entire
PET scan of the brain.
And with the AI algorithm
seeing a lot of these examples,
it's able to learn over time
what features in the brain scan
correspond to those of
Alzheimer's disease patients.
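As a rough illustration of the kind of model Dr. Sohn is describing, here is a toy convolutional network in PyTorch that scores a single-channel image into two classes. Every detail, the layer sizes, the input shape, the two-class head, is an assumption made for this sketch, not his actual model.

```python
import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    """Toy 2-D CNN over a single-channel scan slice (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn local pixel patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # combine into larger features
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # pool over the whole image
        )
        self.head = nn.Linear(16, 2)   # two assumed classes: Alzheimer's vs. not

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

scan = torch.randn(1, 1, 128, 128)   # stand-in for one scan slice
print(ScanClassifier()(scan).shape)  # torch.Size([1, 2])
```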
-Dr. Sohn uses
a popular type of PET scan
that looks
at glucose metabolism,
or the amount of sugar
being used by your neurons
in order for the brain to work.
-For our algorithm,
our neural network
was looking at the entire brain
as a whole,
which initially, we were
a little bit perplexed about.
But it soon came to us that
Alzheimer's disease
is really a diffuse process
throughout the entire brain,
which is why it's so difficult
for radiologists
to make this early prediction.
-Through Dr. Sohn's research,
AI-assisted algorithms could
potentially catch early signs
of Alzheimer's years
before symptoms begin to appear.
As of now, we still need humans
to interpret this type of data.
So, how will we know
if machine-learning algorithms
can confidently see
the right patterns?
In order to help build
a better artificial brain,
researchers are looking
at how our own brains work
with pattern recognition.
Inside the neocortex,
or that wrinkly,
2 1/2-millimeter-thick
outer layer of our brain,
are billions of neurons
receiving sensory information
from other parts of the body.
Those billions of neurons
are organized
into structurally uniform
units called cortical columns.
Jeff Hawkins has been
studying the brain for 33 years
and thinks these structures
are significant
to understanding
intelligence.
-Every one of these
cortical columns on its own
builds a model of the world.
It's not like we have this
one place
where all this information
comes together.
You actually have thousands
and thousands of models
of the world.
And that model
is built through movement.
It's what's called
sensory-motor interaction.
When it gets an input,
it figures out where
in the world that input is.
You cannot learn the structure
of a house
without walking
through the house.
You cannot learn how
a computer works
unless you type on the keys
and move the mouse and so on.
Today's AI for the most part
doesn't do that at all.
If you look at today's
convolutional neural networks,
they're just, like,
image classifiers
and pattern classifiers.
You cannot learn about the world
without moving.
-Despite the growth in
practical applications,
Hawkins believes AI systems,
even with their greater speed
and capacity,
are still at a roadblock.
-You can have a system
that recognizes images,
and it's very easy to fool it.
Meaning you can say,
"What's this a picture of,"
and you say,
"Oh, that's a dog,"
and an AI system says,
"Yeah, that's a dog."
And then you can just tweak
that image very slightly
so that you or I could not
even see the difference.
And all of a sudden,
the computer says,
"No, that's a car."
And it's wrong.
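The fooling Hawkins describes is what researchers call an adversarial example. One standard recipe for making one is the fast gradient sign method: nudge every pixel a tiny step in the direction that increases the model's loss. The sketch below shows the idea in PyTorch; the stand-in model, shapes, and epsilon are all assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "photo"
label = torch.tensor([3])                              # class it currently gets right

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()                                        # gradient of loss w.r.t. pixels

epsilon = 0.01                                         # too small for a person to see
adversarial = (image + epsilon * image.grad.sign()).detach().clamp(0, 1)
# `adversarial` differs from `image` by at most 0.01 per pixel,
# yet can be enough to flip the model's answer.
```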
-Companies like Google and IBM
are developing tools
to look closer at how
those decisions are being made
in an attempt to get AI
to explain them.
While artificial intelligence
is taking on a more
significant role
in everyday decision-making,
it's still incredibly limited
and will continue
to need oversight.
-It can be argued that AI
can do "X" percent
of a doctor's job
really, really well.
Or potentially, maybe,
in the coming decades,
let's suppose it can do
98% of what doctors can do.
But there's always that 2%
of critical errors that it makes.
Is this a deadly error?
How bad is the error
that it makes?
And those errors
oftentimes turn out
to be pretty critical
in human eyes.
Humans really need
to intervene and make sure.
-Being able to see
where there are any errors
in pattern recognition
will be important
as we open these systems
up to more data.
While our brains continue
to inspire machine learning,
knowing where
artificial intelligence
has its limitations
can be just as informative.
