The history of computer science is heckin’
cool.
It features the upending of basic questions,
like—what is information?
It has biographical oomph—only this story’s
war-hero scientist, Alan Turing, was punished,
not celebrated, after helping the Allies win
World War Two.
And the history of computer science raises
profound questions about technology and society,
like—how do we know that our big complex
beautiful brains aren’t really just big
complex… computing machines?
And, if we can one day build machines that
think as fast as humans, will we have to grant
them human rights?
[INTRO MUSIC PLAYS]
Questions about thinking machines are relatively
recent in history.
But all kinds of doing machines are not, and
some of this doing involves solving mathematical
problems and other high-level functions.
Sometime before 60 BCE, the Greeks constructed
an analog computer now called the Antikythera
mechanism.
Using many gears, the mechanism may have been
used to predict eclipses or other astronomical
events.
But the mechanism appears to have been a one-off.
So historians often give credit for the first mechanical computer to the Artuqid-Turkmen engineer Al-Jazarī, who died in 1206 CE.
We met him way back in episode seven when
dude built a robotic musical band.
And a robot toilet helper!
And Al-Jazarī built an astronomical clock
that showed the signs of the zodiac and could
be reprogrammed to compensate for changing
lengths of the day.
Then, in 1642, French mathematician Blaise
Pascal invented a mechanical adding machine
that used a collection of rotating numbered
wheels, similar to a car’s odometer.
Our friend from episode seventeen, German mathematician Gottfried Leibniz, built mechanical calculators in the late 1600s.
And in 1801, in the early days of the Industrial
Revolution, French merchant Joseph Marie Jacquard
incorporated the punch card into a textile
loom to control patterns—arguably the first
industrial use of computing!
But devices like calculators and looms are
pretty far from the computers we rely on today.
So then the question becomes, like... what is a computer?
Well, that word has changed a lot over the
years.
In fact, up until the 1950s, a “computer”
was a person who computes—usually a woman.
The basic idea today is that a “computer”
is a machine that can be programmed to perform
logical tasks—like math problems—automatically.
For many historians, the dream of a somewhat recognizable modern computer, one that can be programmed to perform all sorts of calculations without continuous human number-punching, dates back to 1837.
That’s when British mathematician Charles Babbage fully conceived a digital, programmable—but mechanical—computer called the Analytical Engine.
This was a general purpose information processor:
it wasn’t just for a single task, but for
solving general logic problems.
Sadly, the Analytical Engine was never completed.
Babbage started building it, but never finished due to cost overruns and fights with his machinist.
But we have his notes and those of his chronicler, British mathematician Ada Lovelace, who in 1843 wrote the first algorithm intended to be carried out by a machine—basically, the first computer program!
Fun fact, Lovelace was the daughter of Romantic
poet Lord Byron!
Another early computer was actually made and
put into use in the United States.
A young mathematician–inventor named Herman
Hollerith combined the old technology of punch
cards with the new technology of electrical
circuits to produce a sorting and tabulating
machine.
With his machine, the 1890 census was finished
in weeks instead of years.
Hollerith went on to found the Tabulating
Machine Company.
And it’s still in business today—as the
International Business Machines Corporation,
or IBM.
But neither Babbage and Lovelace’s way-ahead-of-their-time
designs nor Hollerith’s super-sorter established
computing as a science.
Some important developments happened in the
years before World War Two.
For example, starting in the late 1920s, influential
American engineer Vannevar Bush created an
analog computer called a differential analyzer,
which could solve calculus problems with as
many as eighteen independent variables.
But the war shoved computer science into the
scientific limelight.
In the 1930s, British mathematician, linguist,
cryptographer, philosopher, and all-around
smarty pants Alan Turing laid the foundation
for a mathematical science of computing.
ThoughtBubble, introduce us:
Turing proposed the aptly named Turing machine—a
thought experiment to figure out the limitations
of mechanical computation.
A Turing machine can, in theory, carry out any algorithm, or programmed operation: it’s a universal computer.
Turing couldn’t actually build this abstract, perfect computer, but he could lay out how the logic of writing and reading programs should work, and how a relatively simple device could, given enough memory, accomplish any logical operation.
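To make that concrete, here’s a minimal sketch of a Turing machine in Python (the rule table and names here are just our illustration, not Turing’s notation). It increments a binary number using nothing but a tape, a read/write head, and a small table of rules:

```python
# A minimal Turing machine sketch: a rule table, a tape, and a head.
# This particular rule table increments a binary number (illustrative only).

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine until it halts (or max_steps runs out)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Rules: (state, symbol read) -> (symbol to write, move L/R, next state).
# "start" walks right to the end of the number; "carry" adds one from the right.
increment_rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 + 1 = 10: write 0, carry left
    ("carry", "0"): ("1", "L", "halt"),   # absorb the carry
    ("carry", "_"): ("1", "L", "halt"),   # number was all 1s: new leading 1
}

print(run_turing_machine(increment_rules, "1011"))  # -> "1100" (11 + 1 = 12)
```

That tiny rule table is the whole “program”: swap in a different table, and the exact same device computes something else. That interchangeability is the universality Turing was after.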
During the war, Turing went to work in the
super-secret “Ultra” program at Bletchley
Park, which was an estate for British codebreakers.
Turing wasn’t the only computer innovator
at Bletchley.
For one thing, eight thousand women worked
there!
Also, an engineer named Tommy Flowers designed
some for-the-time hyper-advanced computers
called the Colossus series, which also helped
the Allies a lot.
And they were kept secret until the 1970s!
But Turing’s job, leading Ultra Hut Number
Eight, was to decipher encrypted messages
about German naval movements.
The Germans used a device called an Enigma
machine to create supposedly unbreakable ciphers,
or ways of encoding messages so that only
someone with the same cipher could read the
message.
But Turing broke through, using an electromechanical machine he built called the bombe, based on an earlier Polish codebreaking machine, the bomba.
These wartime computers weren’t super fast
or sophisticated.
They were smart ways of automating a lot of
dumb tasks.
Thanks, ThoughtBubble!
After the war, Turing kept working on computers.
His 1948 essay “Intelligent Machinery”
gave more details on the Turing machine.
Then, in 1950, he published “Computing Machinery
and Intelligence” in the journal Mind.
Go read it, it holds up!
Basically, this article became a foundational text in artificial intelligence, or AI.
Turing famously stated that the appearance
of intelligence is proof of it.
Turing arrived at this idea by thinking about
a limit case: consider a computer that appears
truly intelligent, like a human.
How would we know whether or not it is intelligent?
Turing proposed a game to test the computer:
talk to it like a person!
This game is called the Turing Test and is
still used as a challenge in AI:
a human asks questions of both a computer and another human, through a terminal, and tries to guess which is which from their responses.
The Turing Test was based on an old party
game, in which you did the same thing via
written notes, and tried to guess which
of two unknown people was a man and which
a woman.
The Turing Test connects to the Church–Turing thesis, which actually dates back to Turing’s computability work in the 1930s: computation power is computation power.
It doesn’t matter if that power comes from
electrical circuitry or a human brain, or
how fast the individual parts of the machine
are.
So any machine of sufficient power should
be able to do any computation that a brain
can do.
So… a sufficiently complex machine would
be as intelligent as a brain—or more.
The only limit to computational power is memory.
But in real life, no computer—whether brain
or series of electrical circuits—has infinite
memory.
Even more ahead of his time, in his 1950 paper,
Turing suggested that—instead of trying
to straight-up build a computer as intelligent
as an adult human—it would be smarter to
build a child’s mind and then teach it how
to learn on its own.
BAM, machine learning!
So what recognition did Turing get for all
of his hard work?
In 1952, during a police investigation of a burglary at his home, officers learned that he was in a relationship with another man, and the British government pressed charges.
Turing was convicted of “gross indecency”
and sentenced to take libido-lowering hormones.
He died in 1954, possibly of suicide by cyanide-poisoned
apple, possibly by inhalation of cyanide while
working.
Either way, one of the greatest minds to ever
live died at age forty-one.
He was not pardoned until 2013.
But before Turing died, he met with some important
folks in the United States…
Hungarian-American physicist John von Neumann
met Turing in the 1930s
and worked on foundational aspects of computer
science and AI.
Von Neumann proposed the idea of storing a computer program in the computer’s memory.
So instructions could be loaded and changed like data, instead of having to be wired permanently into a given machine.
Turing also met American mathematician Claude Shannon during the 1930s, sharing his ideas about the Turing machine.
Shannon introduced the word “bit” and founded digital circuit design theory while still a graduate student at MIT.
He also conducted some Turing-like codebreaking during World War Two.
But he’s most well known for publishing
a series of papers after the war that founded
information theory, which examines how information
is encoded, stored, and communicated.
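To give a taste of that theory: Shannon measured information in bits, and his entropy formula pins down how many bits a message source carries. A fair coin flip, for example, carries exactly one bit. Here is a tiny sketch in Python (our illustration, not anything Shannon himself wrote):

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))  # fair coin flip: 1.0 bit of information
print(entropy([0.9, 0.1]))  # biased coin: ~0.47 bits, more predictable, so less informative
```

The more predictable a source is, the fewer bits it carries, which is the trade-off at the heart of compressing and transmitting data.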
We could do a whole episode on information
theory, but some of the effects of Shannon’s
work were to help transition computers, televisions,
and other systems of moving around information
from analog to digital.
And information theory led to the Internet!
And over at Harvard, American physicist Howard
Aiken worked with the military and IBM to
design and build a computer, the Harvard Mark
I, in 1944.
This device was used by von Neumann to run
a program to help design the atomic bomb.
One of the other first programmers of the
Mark I was American computer scientist and
rear admiral Grace Hopper,
who invented one of the first compilers: a tool that translates a human-readable programming language into machine code.
She then worked on machine-independent programming
languages, developing the early programming
language COBOL.
Computers after World War Two quickly became bigger, faster, and more complex—like the U.S. Army-sponsored Electronic Numerical Integrator and Computer, or ENIAC, completed in 1946, which filled up a large room, and UNIVAC in 1951, which was commercially mass-produced.
These general-purpose computers were based
on the principles laid out by theorists like
Turing, von Neumann, and Shannon, and they
used the languages developed by programmers
like Hopper.
These computers were built using a digital
code—binary, with values of only “one”
or “zero”.
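For a quick sense of what that means, here is the decimal number thirteen written in binary, where each digit stands for a power of two (a sketch in Python, a language that came along decades later):

```python
# Binary digits stand for powers of two: 1101 = 8 + 4 + 0 + 1 = 13.
n = 13
print(bin(n))          # '0b1101', one way a computer represents 13
print(int("1101", 2))  # 13, converting the binary string back to decimal
```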
And real-world computing really took off after the 1947 invention of the solid-state transistor by John Bardeen, Walter Brattain, and William Shockley at Bell Laboratories, and the rise of room-filling “mainframe” computers for businesses.
In a later episode, we’ll get back to
computers—and introduce one of our very
best friends in the history of technology,
the Internet.
But for now, let’s remember that, up until the 1950s, a computer was a person, usually a woman, who was a number cruncher—that is, someone who computes, often with the help of a machine.
One of those human “computers” who went on to become an engineer working with electronic computers was African-American rocket scientist Annie Easley.
In the era of Jim Crow laws, Easley left Alabama
and went to work for NASA in Ohio.
She developed computer code for NASA missions
for decades.
Next time—humans finally get to play
golf on the moon.
It’s the birth of air and space travel!
Crash Course History of Science is filmed in the Dr. Cheryl C. Kinney studio in Missoula, Montana, and it’s made with the help of all these nice people.
Our animation team is Thought Cafe.
Crash Course is a Complexly production.
If you wanna keep imagining the world complexly
with us, you can check out some of our other
channels like SciShow, Eons, and Sexplanations.
And, if you’d like to keep Crash Course
free for everybody, forever, you can support
the series at Patreon,
a crowdfunding platform that allows you to
support the content you love.
Thank you to all of our patrons for making
Crash Course possible with their continued
support.
