Here's an idea-- it's
unethical to not develop
artificial intelligence.
OK.
Forgive me my double
negative for one second
and just let me explain.
Artificial intelligence--
specifically
robots which can learn, problem
solve, and be creative--
has been a signifier
for futureness
since about the
mid 20th century.
Sure, in the present
we have Siri and Watson
and Deep Blue and even
the Kinect, all of which
are built on
artificial intelligence
but lack a central feature
of that shiny metal
future from the movies--
they are not embodied.
They are stuck
inside little boxes
and left to interact
with the world
through their computerized
voices and not-- well, bodies.
Siri, tell me a joke.
SIRI: I can't.
I always forget the punchline.
Charming as they
may be, these AI
aren't much more than
simple information valets,
though that might soon change.
If you've yet to
have the pleasure,
allow me to introduce
you to Baxter.
At $22,000 a pop, this
robot-- which has hands, eyes,
is trainable, and
comes preprogrammed
with a certain amount
of common sense--
is cheaper than a year's
worth of most repetitive
Fordist human labor and is
smarter than his robo factory
ancestors.
For some people that is pretty
threatening, and rightfully so.
If you could have a
staff of trainable,
multitaskable Baxters why
would you hire people?
People need
managing and health
care and birthday parties
and casual Fridays on which
to wear their Hawaiian shirts.
And Baxter is not the
only smarty pants robo
bro out there.
The Google self-driving
car is just a predecessor
to the [INAUDIBLE].
Pro Joe, the teaching
robot, could become
a CGP-crazed digital Aristotle.
Watson-- yes, Watson
from "Jeopardy"--
is being rejiggered to work
in diagnostic medicine.
He could be the grandfather
to Dr. Perceptron.
There are even some
very smart people
developing a Perry Mastron for
all of your legal bot needs.
What I'm saying is that
robots might someday
replace us, professionally.
There's an idea.
Now, you probably
think you know
what I'm going to say next.
You think I'm going to say,
all of these robots doing
these jobs-- that's bad.
Putting all those people
out of work, it's unethical.
Except I'm not
going to say that.
I'm actually going to say
that replacing human laborers
with steely automatons is
arguably one of the most
ethical things you could do.
Now, there's an easy
line here saying
that we would just
make the robots do
the jobs that human beings
shouldn't be doing anyway.
It's a well-traveled path, so
we're not going to go down it.
Besides, we're not talking
about artificial intelligence
replacing just the dangerous
and menial jobs but also
the complex, fast paced,
extremely precise,
and the knowledge heavy jobs.
That's a lot of jobs.
And so that might cause
a lot of problems.
And the problems that
come with large scale
social, economic, and
corporate restructuring
are many and varied and
some are real scary.
But the more complex
ethical discussion
doesn't involve the
relationship between humans
and each other or robots, but
between humans and the future.
Is it ethical to
stop improvement?
Would it be ethical for us
to stop making and deploying
Baxters and Google cars
and warehouse robots
and stop streamlining,
simplifying, cost-reducing,
and increasing the reliability
of any number of processes
because people are
currently doing them,
because up until this
point, people were all we had?
Philosopher Alain
Badiou describes what
he calls the ethic of truths.
The most important
thing is the event
which seizes humanity and
breaks it from the norm.
That event, and the way
people are faithful to it,
can contribute to
humanity's immortality
as a group of beings
which create and continue
and-- there's a
lot of hand waving
when you talk about the future.
Badiou writes that there
is only one question
in the ethic of truth-- how
do I, as someone, continue
to exceed my being?
How will I link
the things I know
in a consistent fashion
via the effects of being
seized by the unknown?
Not following through,
not continuing
to make plastic pals
that are fun to be with
could be seen as a betrayal
of that norm breaking event.
For instance, do we
deny future generations
the possibility of cheaper,
better medical care
from robot doctors because
we want to maintain
the no robo status quo?
I mean the printing
press threatened so much
about the status
quo and we're all
very glad we saw that
through, aren't we?
Not to mention the computer
and the automobile.
But hm.
Because human progress gave
us medicine and the internet
and cup holders,
but it also gave us
the atomic bomb and Furbies.
Any ethics of progress
has to account
for the fact that on the
horizon of that progress
lies some terrible atrocity.
It has to approach ideas
of progress unobjectively.
"You can't stop progress" isn't
exactly the most comforting
ethic, is it?
Progress, as an ethic,
can't unequivocally
prioritize that progress
before everything else.
Happiness and physical safety
are both pretty important,
but an ethics of progress
can help us organize
what comes next in line.
We have to accept the
possibility of the bad stuff
that comes packaged
with progress
and focus on what happens after
the progress dust has settled.
We have to keep going.
And why?
Because of the greater,
grander human experience
we'd only be able to
achieve with the help
of our artificially
intelligent robot friends.
Arigato.
What do you guys think?
Is it unethical to
stop the development
of artificial intelligence?
Let us know in the comments.
And I, for one, welcome
our new robot overlords.
Subscribe.
Subscribe.
Subscribe.
I think the saddest
music in the world
is probably any song
written by the Vengaboys.
Let's see what you guys had to
say about the source of emotion
in music.
CB George says that our
emotional response to music
might be a kind of chicken or
the egg problem in that we are
trained, in some way, watching
movies and other visual media
to associate certain kinds
of sounds with certain images
and that's kind of-- he uses the
word Pavlovian, which is great.
Ciscoql asks, if sad music
isn't sad does that mean
that a sad picture isn't sad?
I think a lot of the
same stuff applies,
but a picture can be
a lot more literal.
It's not nearly as encoded
as a piece of music.
But, you know,
personal experience
still plays a huge
role in determining
what your emotional reaction
to an artwork or something
is going to be.
Congratulations to
DOOSH MASTA, who
managed to summarize the
entire episode rather well
in one comment.
If you can find it, the
conversation between OpDday2201
and Symbiotisism
is really great.
I suggest you run
out and get some--
get some Control-F happening,
see if you can find it.
Jesse Harris makes
the astute point
that the only piece
of music which
contains objective
emotion is "Yakety Sax"
and I think I'd totally agree.
[INAUDIBLE] has a
further correction
on a previous correction,
saying that MTGox actually
was Magic the Gathering
and not Mining Team Gox.
I'm-- I-- I don't know
what to think anymore.
[INAUDIBLE] makes a
really interesting point
about the enjoyment of
non mainstream music
and music that doesn't contain
standardized emotional content
and wonders whether,
when people enjoy that music,
it's a kind of reaction or whether
they are comforting themselves
because they are responding
to the mainstream, which
I think is-- is really
interesting, really good.
This week's episode
was brought to you
by the work of these
diligent people
and the Tweet of the Week
comes from O. WolfgangSmith,
who imagines the internet as
one building, which it is.
