This video is brought to you
thanks to Surfshark,
a VPN that encrypts your online data
to help you stay private and protected.
♪ Do you like milk with Flavor Straws ♪
♪ Magic straws, Flavor Straws ♪
- So, here I am.
(TV buzzing)
- [Announcer] The thing
with almost human brain,
Elektro the robot.
(intense music)
You asked for it.
(intense music)
- I am Elektro, mightiest of all robots.
(television buzzing)
- [Ankur] Artificial
Intelligence has been a topic
of growing prominence in the media
and mainstream culture since 2015,
as well as in the investment world
with companies that even mentioned the word in their business model gaining massive amounts of funding.
While to many, the hype
around AI may appear sudden,
the concepts of modern
artificial intelligence
have been around for over a century,
and extending further still, the concepts of artificial intelligence and artificial beings have been in the minds of humans for thousands of years.
To better understand and
appreciate this technology
and those who brought it to us,
as well as to gain insight
into where it will take us,
sit back, relax, and join me in an exploration of the history of artificial intelligence.
(upbeat music)
Since at least the
times of ancient Greece,
mechanical men and artificial
beings have been dreamt about,
such as the Greek myths of Hephaestus, the Greek god of smithing, who designed mechanical men and other autonomous machines.
Progressing forward toward the Middle Ages
and away from fables and
myths of ancient times,
realistic humanoid automatons and other self-operating machines
were built by craftsmen
from various civilizations.
Some of the more prominently known are those of Ismail al-Jazari of the Turkish Artuqid dynasty in 1206,
and Leonardo da Vinci in the 1500s.
Al-Jazari designed what is believed to be the first programmable humanoid robot, a boat carrying four mechanical musicians powered by the flow of water, and da Vinci, among his various mechanical inventions, built a knight automaton that could wave its arms and move its mouth.
Moving forward to the 1600s,
brilliant philosophers and mathematicians,
Thomas Hobbes, Gottfried
Leibniz, and René Descartes
believed in the concept that all rational thought could be made as systematic as algebra or geometry.
This concept was originally birthed by Aristotle in the fourth century BC, referred to as syllogistic logic, where a conclusion is drawn from two or more propositions, for example: all men are mortal; Socrates is a man; therefore, Socrates is mortal.
As Thomas Hobbes stated in his book "Leviathan",
"When a man reasons, he does nothing else but conceive a sum total from addition of parcels, or conceive a remainder from subtraction of one sum from another. These operations are not incident to numbers only, but to all manner of things that can be added together and taken one out of another. The logicians teach the same in consequences of words, adding together two names to make an affirmation, and two affirmations to make a syllogism, and many syllogisms to make a demonstration."
Leibniz took Hobbes's philosophies a step further and laid the foundations for the language machines communicate in today: binary. His motivation for doing so was that he realized mathematical computing processes can be carried out much more easily in a number encoding with fewer digits.
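To make that concrete, here's a rough sketch in Python showing the same number written in binary, and a small addition that only ever involves the digits zero and one:

```python
# Illustrative sketch: the same quantity written in Leibniz's two-digit encoding.
n = 13
print(bin(n))          # '0b1101'  -> 13 = 8 + 4 + 0 + 1

# Adding binary numbers only ever involves the digits 0 and 1,
# which is what makes the encoding so convenient for machines.
a, b = 0b1101, 0b0110  # 13 and 6
print(bin(a + b))      # '0b10011' -> 19
```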
Descartes examined the
concept of thinking machines,
and even proposed a test
to determine intelligence.
In his 1637 book, "Discourse on the Method," Descartes famously stated the line, "I think, therefore I am." He also stated in that book,
"If there were machines
"that bore a resemblance to our bodies,
"and imitated our actions
as closely as possible,
"we should still have
two very certain means
"of recognizing that
they are not real humans.
"The first is that such a machine
"should produce arrangements of words,
"as to give an appropriately
meaningful answer
"to whatever is said in its presence.
"Secondly, even though some machines
"might do things as well as we do them,
"or perhaps even better,
"they would inevitably fail in others,
"which would reveal
"that they are not acting
from understanding."
Also in the 1600s and throughout the Middle Ages, on the other side of the spectrum, in entertainment and spirituality growing from Greek myth, the concept of artificial beings continued to be explored, such as in fields like ancient chemistry, in other words alchemy, which was more of a pseudoscience with the goal of transforming the pure into the rare, and matter into mind.
Countless stories during this time period
also portray this concept,
such as the Golem in Jewish folklore, which is a being created from inanimate matter.
Progressing forward, we see
this trope again in stories
such as Frankenstein,
first published in 1818,
with a being reanimated
from inanimate flesh.
- It's alive.
(thunder droning)
- It's alive.
(thunder droning)
It's alive, it's moving!
It's alive.
Oh, it's alive.
It's alive, it's alive!
- [Ankur] After the height of the first Industrial Revolution in the mid-1800s, when machines began replacing human muscle, and the beginnings of the field of modern computing, we see these stories take a turn towards modern sci-fi elements, portraying technology as evolving into human form.
Take for example this clip from
the silent film Metropolis.
(intense music)
The field of modern
computing was officially born
with Charles Babbage's mechanical Analytical Engine in the 1840s. Although it was never built, due to a variety of reasons, rebuilds of his designs in the present day show that they would have worked. This then means Ada Lovelace was the world's first programmer, with her algorithm for calculating Bernoulli numbers on Babbage's machine.
Early computers had to be hard-coded to solve problems, and Lovelace, being the first programmer, had serious doubts about the feasibility of artificial intelligence.
Nearly 200 years after Descartes, she shared similar sentiments, stating about the Analytical Engine,
"It has no pretensions
whatsoever to originate anything.
"It can do whatever we know
how to order it to perform.
"It can follow analysis,
"but it has no power
"of anticipating any analytical
relations or truths."
This is referred to as
Lovelace's Objection.
As a side note, be sure to check my video
on the history of computing
if you want more background knowledge
on the evolution of
the field of computing.
Back on topic,
a decade after Babbage's Analytical Engine,
in the 1850s, George Boole,
an English mathematician and philosopher,
revolutionized the field of computing
and took the first true steps toward computing-based artificial intelligence.
Boole, like those before him, also believed human thinking could be mastered by laws described by means of mathematics. He took the principles of syllogistic reasoning from Aristotle and expanded much more deeply on the relationship between logic and math that Leibniz had set out, resulting in the birth of Boolean logic: essentially replacing multiplication with AND and addition with OR, with the output being either true or false.
This abstraction of logic by Boole
was the first step in giving
computers reasoning ability.
This is because, as the field of computing evolved,
a number of researchers noticed
that binary numbers, one and zero,
in conjunction with Boolean
logic, true and false,
could be used to analyze
electrical switching circuits.
This is referred to as
combinational logic.
In other words, logic gates
that output a resultant
based on their inputs.
There are a variety of different types of gates, AND, OR, XOR, NOT, et cetera, and as the connections between different gates became more complex, they led to the design of electronic computers.
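To make the idea concrete, here's a rough sketch in Python of a few gates written as Boolean functions and wired into a simple piece of combinational logic, a half adder:

```python
# Illustrative sketch: basic logic gates as Boolean functions.
def AND(a, b): return a and b   # behaves like multiplication on 0/1
def OR(a, b):  return a or b    # behaves like (capped) addition on 0/1
def NOT(a):    return not a
def XOR(a, b): return a != b    # true when exactly one input is true

# Combinational logic: the output depends only on the current inputs.
# A half adder built from gates adds two one-bit numbers.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)  # (sum bit, carry bit)

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", half_adder(a, b))
```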
Combinational logic is the first layer in automata theory, in other words, the study of abstract and self-operating machines.
As computing evolved,
additional layers began to be established,
with the next one being
finite state machines.
These machines essentially black-box sets of logic gates and use logic between the black boxes to trigger more complex events.
For an illustrative example
of a type of state machine,
think of an oven that has three states: off, heating, and idle. In state diagrams, we can illustrate state transitions and the values that will trigger them, for example, the on and off button presses on the oven, the oven being too hot, the oven being too cold, et cetera.
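Here's a rough sketch in Python of that oven as a finite state machine driven by events:

```python
# Illustrative sketch: the oven as a finite state machine.
# States: "off", "heating", "idle".  Events: button presses and temperature readings.
TRANSITIONS = {
    ("off",     "power_on"):  "heating",
    ("heating", "too_hot"):   "idle",
    ("idle",    "too_cold"):  "heating",
    ("heating", "power_off"): "off",
    ("idle",    "power_off"): "off",
}

def step(state, event):
    # Stay in the current state if the event doesn't trigger a transition.
    return TRANSITIONS.get((state, event), state)

state = "off"
for event in ["power_on", "too_hot", "too_cold", "power_off"]:
    state = step(state, event)
    print(event, "->", state)
```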
The next layer in automata theory is pushdown automata, in other words, machines with memory, which were pioneered by many individuals, such as William Eccles and Frank William Jordan, who invented the first circuits capable of memory, flip-flops, and John von Neumann, who abstracted the role of memory in a computing system.
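As a rough illustration, a pushdown automaton adds a stack as its memory, which lets it handle things a finite state machine cannot, such as matching parentheses at arbitrary nesting depth:

```python
# Illustrative sketch: a stack as the "memory" of a pushdown automaton,
# used here to check balanced parentheses.
def balanced(text):
    stack = []
    for ch in text:
        if ch == "(":
            stack.append(ch)   # remember an unmatched opening bracket
        elif ch == ")":
            if not stack:
                return False   # a closer with nothing left to match
            stack.pop()
    return not stack           # balanced only if nothing is left over

print(balanced("(()())"))  # True
print(balanced("(()"))     # False
```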
Finally, the last layer of automata theory, and the class of machines we use today, is Turing machines. Before continuing, I want to point out that this was an extremely simplistic overview of a subset of automata theory, and I'd definitely recommend researching other sources for a more in-depth overview.
The final layer of automata theory was based on a mathematical model of computation that Alan Turing proposed in 1936, dubbed the Universal Turing Machine.
Once again, like those before him,
Turing broke down logic
into a mathematical form,
in this case, translating it to a machine
that reasons through
abstract symbol manipulation,
much like the symbolic
reasoning done in our minds.
As stated earlier, early computing devices were hard-coded to solve problems. Turing's belief with this universal computer was that instead of deciding what a computer should do when you build it, you design a computer in such a way that it can compute anything that is computable, so long as it is given the right instructions.
This concept is the basis
of modern computing.
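As a toy illustration, here's a tiny table-driven Turing machine in Python; the machine itself is generic, and its behavior comes entirely from the instruction table it is handed:

```python
# Illustrative sketch: a minimal table-driven Turing machine.
def run(tape, table, state="start", head=0, max_steps=100):
    tape = list(tape)
    for _ in range(max_steps):
        symbol = tape[head]
        write, move, state = table[(state, symbol)]
        tape[head] = write                 # write the new symbol
        if state == "halt":
            break
        head += 1 if move == "R" else -1   # move the head one cell
        if head == len(tape):
            tape.append("_")               # extend the tape with a blank
    return "".join(tape)

# One possible instruction table: invert every bit, halt at the first blank.
INVERT = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110_", INVERT))  # -> 01001_
```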
At this point in the 1930s,
with the field of modern
computing officially born
and rapidly evolving,
the concept of artificial beings
and intelligence based
on computing technology
began permeating across mainstream society at the time.
The first popular display
of this was Elektro,
the nickname of a humanoid robot
built by the Westinghouse
Electric Corporation,
and shown at the 1939 New York World's Fair.
- Ladies and gentlemen.
I'll be very glad to tell my story.
I am a smart fellow as I
have a very fine brain.
- [Ankur] Elektro wowed many, and one could say it is the basis of how mainstream society thinks of computing-based artificial intelligence, as evidenced by the various
movies, TV shows, books,
and other entertainment
media portraying the concept.
As a side note, Westinghouse's Elektro draws many parallels to the modern-day Hanson Robotics' Sophia.
They are not truly intelligent,
but are more of a way
for mainstream society
to get a glimpse of future technology,
in other words, they're
imitating intelligence.
Going back to Alan Turing, in the 1950s he pondered this dilemma of true versus imitated intelligence
in section one of his paper,
"Computing Machinery and Intelligence",
titled "The Imitation Game".
In this paper, he lays the foundations
for what we now refer
to as the Turing test.
The first serious proposal in the philosophy of computing-based artificial intelligence, the Turing test essentially states that if a machine acts as intelligently as a human being, then it is as intelligent as a human being.
An example often thrown
around is an online chat room,
in which if we are talking to an AI bot
but aren't told this until after,
and believed during the
conversation that it was a human,
then the bot passes the Turing test
and is deemed intelligent.
Around the same time as Turing's proposal,
another titan of the field of computing,
the father of the Information
Age, Claude Shannon,
published the basis of information theory
in his landmark paper,
"A Mathematical Theory of
Communication" in 1948.
Information theory is the backbone
of all digital systems today,
and a very complex topic.
In layman's terms, and in relation to computing, Shannon's theory states that all information in the entire universe can be represented in binary. This has profound implications for artificial intelligence, meaning we can break down human logic, and more so the human brain, and replicate its processes with computing technology.
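As a quick illustration, any piece of information, here a short string of text, can be written as nothing but zeros and ones and recovered intact:

```python
# Illustrative sketch: text encoded into bits and back.
message = "AI"
bits = "".join(format(byte, "08b") for byte in message.encode("utf-8"))
print(bits)       # 0100000101001001

recovered = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("utf-8")
print(recovered)  # AI
```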
This fact was demonstrated a few years later, in 1955, by what is dubbed the first artificial intelligence program, called Logic Theorist,
a program able to prove 38 of the first 52 theorems
in "Principia Mathematica",
a three volume work on the
foundations of mathematics.
This program was written by Allen Newell, Herbert Simon, and Cliff Shaw, who, like philosophers and mathematicians before them, also believed human thought can be broken down, with them stating, "The mind can be viewed as a device operating on bits of information according to formal rules."
That being, they realized that a machine that can manipulate numbers could also manipulate symbols, and that symbol manipulation is the essence of human thought.
As a fun side note, Herbert Simon stated, "A system composed of matter can have the properties of mind," a throwback to the alchemy of the Middle Ages, in which attempts were made to convert matter into mind.
Also during this time period
in 1951, Marvin Minsky,
one of the founding fathers
of the field of artificial intelligence,
built the first machine
incorporating a neural net,
the Stochastic Neural Analog
Reinforcement Calculator,
SNARC for short.
As you can see,
at this point in the mid 1900s
with computers becoming
more capable every year,
increasing research into
abstracting human logic
and behavior, development
of the first neural net,
and various other innovations,
the field of modern computing-based artificial intelligence was being born.
We'll cover the official birth
of AI leading to present day
in the next video on this AI series.
Speaking of present day
AI and advanced algorithms
that require more and more
of your personal data,
now more than ever, privacy
online is essential,
and tools such as VPNs
are able to provide this
by encrypting your internet connection.
One such VPN I use myself, and the sponsor of this video, is Surfshark.
They use military-grade AES-256-GCM encryption, which is used by banks and financial institutions around the world.
Surfshark is also one
of the few truly no log VPNs out there,
as they are based in the
British Virgin Islands,
which has no data retention laws.
Moving on to the features of this VPN, it is supported on all major platforms, including Android and iOS.
Some other great features that give me peace of mind are the kill switch, which will automatically terminate your internet connection if the VPN connection drops unexpectedly, and HackLock, a feature that will scan the web and notify you of vulnerable emails and passwords from data leaks.
While there are many
VPN options out there,
Surfshark has become one of my favorites.
And as a bonus, it is also one of the most affordable.
To support Futurology and
learn more about Surfshark,
go to surfshark.deals/futurology.
By using that link and entering
the promo code Futurology
at checkout, you will save 83%
as well as get one additional
month to your plan for free.
(screen buzzing)
(upbeat music)
At this point the video has concluded.
We'd like to thank you for
taking the time to watch it.
If you enjoyed it,
consider supporting us on
Patreon or YouTube membership
to keep this brand growing,
and if you have any topic suggestions,
please leave them in the comments below.
Consider subscribing for more content,
and check out our website and
our parent company, EarthOne,
for more information.
This has been Ankur.
You've been watching Futurology,
and we'll see you again soon.
(upbeat music)
