I think it's absolutely possible
that we can have conscious robots,
conscious AIs.
I don't think it's desirable.
I think we can learn
what we want to learn,
which is a lot about the nature
of human consciousness,
with simpler models.
Smart tools, yes,
but we don't need artificial colleagues,
because if we really succeeded,
then they would be precisely
as autonomous as we are,
and we're very dangerous.
And unless we could be sure
that they would have
the same opportunities
for learning how to behave
and what to value in culture,
it would be reckless for us to
give them that much autonomy.
Given that I'm pessimistic about
the safety of conscious AIs,
I'm optimistic about the
difficulty of getting there.
I don't think we're anywhere close;
I don't think it's coming in 10 years
or 20 years or 50 years,
and the wonderful
successes of deep learning,
machine learning,
the latest wave of enthusiasm in AI,
are giving us some great fabrics,
but they're not giving us
the architectures of conscious agency.
Time and memory are the basis of agency,
of human life.
Learning depends on being
able to extract information
from your past and apply it in the future.
All of life is a matter
of exploiting the past
to anticipate the present and the future.
And the sense of the passage of time
is a byproduct of our agency,
of the fact that we're
evolved to take care of ourselves,
to fend off the inexorable demands
of the second law of thermodynamics
that say that we're going to be
ground down to dust eventually.
So time matters for us
the way it doesn't matter
for planets or mountains or the ocean.
It matters for living things
because their time is limited.
Well, one of the hardest lessons to accept
is that people that don't agree
with you are not flaming idiots.
There's something behind what they think,
and if you don't understand it,
you're not going to be
able to communicate.
However gratifying,
however satisfying it may be
to your sense of righteousness and order,
you can't beat people over
the head with a logic stick
and get them to change their minds.
You have to meet them on their own ground,
and it's hard work.
And it often doesn't work,
but we just have to keep trying.
Belief is not a voluntary act.
If I offered you $100 to
believe that snow was blue,
you couldn't.
No way to collect that money.
You could pretend.
Beliefs are established by our experience
and by our interaction with the world.
And they can be very hard to dislodge,
and it takes a lot of effort to examine
and reconsider and particularly
to abandon opinions and convictions
you've held, when you learn
that there are alternatives.
And I think that one of the hardest tasks
a human mind can face
is reserving judgment
about a conviction that has
always seemed to you obvious
and thinking about how
that might not be right.
Well, I think one of the
things that I've learned,
and have made a big part of my practice
as a philosopher, is that you
almost never see somebody
change their mind because they've been
given a formal argument with premises
and a logical conclusion
and that's just been beaten into them.
Accept this conclusion or die.
That doesn't work.
You have to enliven the imagination.
You have to sometimes even trick,
cajole, seduce people into
seeing things in a different way,
inverting their view in some way.
I think philosophy is not
so much a science as an art,
and it's the art of instructing
the imagination, in a way.
I think one of the
things that I've learned
is that when somebody in a class
or in an audience says,
"This may be a dumb question, but..."
it's usually the best question.
The experts seldom ask
really surprising questions.
The undergraduates are great
at asking good questions.
In fact, one of my sort of maxims
over the years is that if you're not
teaching undergraduates in philosophy,
you're going to get
inbred and artifactual.
Undergraduates, bold as brass,
they ask the good questions.
And if you can't explain
what you're doing to
bright undergraduates,
then you don't know what you're doing.
