The subtitle of our book is Finding Cassandras
to Stop Catastrophes.
Cassandra in Greek mythology was someone cursed by the gods: she could accurately see the future but would never be believed.
When we say “Cassandras” throughout the
book, we’re talking about people who can
accurately see the future.
People who are right, as Cassandra was right, about the future but who are being ignored.
Having derived what we think are the lessons
learned from past Cassandra events, we then
looked at people today who were predicting
things and being ignored.
And we looked at issues first and then tried
to see if there was someone warning about
them.
So the book is about people, 14 people: seven whom we know were Cassandras, and seven whom we are examining to find out if they are.
Usually Cassandras are people who are not
directly involved in the thing that they worry
about.
They are people who observe it, or people who study it.
But in the case of Jennifer Doudna at the University of California, Berkeley, she’s the person who created it, and she’s also our Cassandra. The “it” in this case is CRISPR-Cas9, a method that she invented (and I’m sure will someday win a Nobel Prize for): a way of doing gene editing that allows for the removal of genetic defects from a strain, or the addition of new capabilities to it.
Now this is going to revolutionize human life.
It’s already beginning.
It’s going to mean that all of the genetic defects that have caused so much pain and suffering for people for millions of years could potentially be removed.
So why does the great woman who invented this
wake up in the middle of the night worrying
about it?
What she told us was she’s afraid that she
might have become Dr. Frankenstein.
That the technique that she developed could
be misused in horrible ways.
It could be misused, for example, to create
biological weapons, to create new forms of
threat to human beings, threats for which
we don’t have any known antidote.
Or it could simply be used to create human
beings of far superior capability.
Not just taking genes and removing defects
but adding new super capabilities.
And so one scenario we discussed with her
was what if the North Koreans or the Chinese
decided that they would create super soldiers?
Physically large people with great athletic
ability designed to be soldiers, designed
to be aggressive, designed to be able to fight
for long periods of time.
Or what if they simply created people who
were brilliant at computer programming and
had IQs off the charts?
What if in the process of that kind of gene
editing we created a caste society where some
people were genetically designed to do menial
tasks and didn’t have the capability of
doing anything else?
And other people were designed to be the rulers, with huge IQs and the capability of understanding things beyond the grasp of lesser humans.
That’s something that scared the creator
of CRISPR-Cas9 and it scared us.
When we heard Jennifer's story, we asked ourselves, "Does she fit the template of a Cassandra that we developed in the first half of the book, looking at the first seven?"
Is she an expert?
Absolutely.
She is the expert.
She created it.
Is she data-driven?
Yes.
She has a wealth of data on CRISPR-Cas9 and
what it can do.
Is she predicting something that is first-occurrence
syndrome?
Something that's never happened before?
And the answer to that is "yes."
Is it kind of outlandish? Is it the stuff of Hollywood fiction? Yes, it is.
What about the audience—the decision maker?
One of the things we saw with the earlier
Cassandras was it wasn't always clear there
was a decision maker.
People always pointed at each other, saying "that's your job," or at least "it's not my job."
And in this case, making decisions about what gene editing can and can't happen, and enforcing that, is a matter of law, and international law, and it's not at all clear whose job that is.
One of those issues we looked at was artificial
intelligence.
Now frankly my co-author R.P. Eddy and I disagreed
about whether or not to do artificial intelligence.
I said, “I don’t think this is a problem.” After all, if a computer acts up, you can unplug it.
Obviously I didn’t understand the issue.
And the way that my co-author, R.P. Eddy,
convinced me that we should look for someone
on this issue was by saying, “Who are the people who are talking about this today?”
Not the experts in AI but the people who are
generally concerned about it.
And who are they?
Bill Gates, the co-founder of Microsoft. Elon Musk, the founder of Tesla. Stephen Hawking, the great physicist from Cambridge.
And when I heard that I said, “Okay, fine. If those guys think this is a problem, maybe we should look for the expert who is predicting that this could be a future disaster.”
And we found Eliezer Yudkowsky, who not only
thinks this could become a disaster, he’s
dedicated his life and all of his work to
dealing with the future threat of artificial
intelligence.
That’s because he doesn’t think it’s inevitable that artificial intelligence will become a problem. But he does have a scenario whereby it could, if we don’t do some of the things he has in mind.
What’s the problem?
The problem could be that artificial intelligence
starts writing software.
Complex software.
Maybe even encrypted software that human beings do not understand and can’t deal with.
That future is just around the corner.
Already we have software writing software.
Already at Google we have artificial intelligence
writing software for further artificial intelligence.
And the Google program is getting to the point
where they’re afraid they don’t fully
understand how it’s doing what it’s doing.
What Eliezer Yudkowsky fears most is that
superintelligence will come into existence.
That means artificial intelligence programs that are significantly smarter than human intelligence, and even smarter than human intelligence augmented by computers today.
And what he sees as possible, looking at the rate of advance in technology, is that the growth in the capabilities of software will not be linear. It could happen overnight.
One day, artificial intelligence might be under the control of human beings, and the
next day it might have jumped into superintelligence—far
more capable than anything we could possibly
understand.
If you then put artificial intelligence onto
networks that are running critical infrastructure—the
Internet of Things, another subject we look
at in the book—it’s possible in the worst
case scenario that human beings will lose
control of the infrastructure of society.
In even worse scenarios than that, artificial intelligence will decide it doesn’t need humans at all.
And it is that fear that causes him to argue that now, as a planet, as a number of different countries and societies, we should put limits on the development of artificial intelligence, do that by international treaty, and have observation to make sure artificial intelligence doesn’t break out of pre-determined limits agreed by human beings and their governments.
Now you’ve seen that plot before.
You’ve seen that in a Hollywood movie.
And that’s part of the problem with so many of the possible Cassandras that we looked at today: humans have seen these threats before, in science fiction.
So whether it’s the possibility of an asteroid
hitting the Earth or human beings being genetically
engineered or artificial intelligence taking
over, part of the reason we don’t take these
Cassandras seriously is we’ve seen it in
the movies, we’ve seen it in science fiction.
A corollary issue to artificial intelligence
is the rise of robotics.
And already in this country we’re hearing debates about the possibility that the next wave of automation, rather than just shifting jobs from one function to another, as automation has done in the past, will be far more advanced and complex and will actually throw humans out of work.
It’s a debate that’s going on and we don’t
know who’s right.
Some people say, "people will be thrown out
of work and there’ll be less need for humans
to do work and we’ll have to pay humans
for doing nothing."
Tax computers is one—tax robots is a proposal.
And the other theory is that, just as in the past, when technology advances it may displace certain jobs but it will create new ones.
We don’t know who’s right there, but we do know that all of our future Cassandras, our present-day Cassandras predicting things about the future, need to be listened to, and that there needs to be examination of the theories they’re putting forward and the data they’re putting forward, even if they are outliers holding a minority view among experts.
