>> JOI: This is
 the first substantive session
 of the Ethics and
Governance in AI class that
 we are doing together with
 Media Lab and the
Berkman Center are doing
 together, and I am
 co-teaching it with
Professor Jonathan Zittain who
 will be speaking after
 me. I thought I'd
start out by talking a
 bit about why we are doing
 this class, what I hope
you think about as we
 go through this class, and
 maybe touching on and
framing some of the things
 that we would like to
 work on together. As we
said at dinner yesterday, this
 is such a new field that
 by the end of this
class, you will know
 more than 99.99percent of
 people who think about
this thing. Just like any
 new field -- I think Jonathan
 and I are old enough
to have been in that
 period of the Internet where
 there literally were only a
handful of people who
 knew enough about each part
 of the Internet to
help get it started. It really
 is like that.  I think that
 AI, broadly, is a fairly
large space with lots of
 people working on it. This
 field of AI and
Governance and Ethics, I wouldn't
 say it's just this room,
 but it is a small
enough number of people thinking
 about in a smart way
 that it really is an
opportunity to contribute. I
 hope that this will kickstart
 some of you to
make this, if not your
 main thing, at least a
 peripheral thing that you are
interested in. I want
 to start with this image,
 and this comes from
Lawrence Lessig's 1999 book,
 Code and Other Laws
 of Cyberspace, and
this is a really important
 book because this is the
 first book where he
describes the relationship
 between code and law.
 We argue about
whether we call this
 "Lessigonian" or "Lessigian," but
 the idea is that
either behavior of an individual
 or the behavior of society
 or the future of
civilization is somehow governed
 by four quadrants. He's
 a lawyer so he
put law on top, but
 the technical architecture are the
 values and norms of
society, and the markets
 are the businesses, the
 market, the economic
reasons for things to happen.
 The example he gives in
 the book which is
a good example is if you
 have a street and you want
 to get people to drive
slower, you can put
 speed bumps, which would
 be a technical
intervention, or you could
 post a speed limit and
 have a police officer
enforce the speed limit
 and that would be a
 legal intervention. In many
cases you have the choice
 of whether to do a
 legal intervention, or you
could have norms. You
 can make it very clear
 to everybody in the
neighborhood that it is not
 a cool thing to drive
 fast, or you could create
some sort of car that
 if you drove fast you have
 to pay extra. There are
different ways to intervene.
 Interestingly, in this class, we
 have a lot of
engineers, we have a
 lot of lawyers, we
 also have philosophers and
neuroscientists, a bunch
 of different people. You
 probably would put
yourself in one or more
 of these quadrants as a
 professional. As you try
to affect the behavior of
 people or the future of
 society, you are probably
using the tool that you're
 most familiar with to do
 that. Like with the
speed bumps, last
 year's class, which was
 broadly about Internet
technology and society. This year
 it is going to focus
 a little bit more on
AI. One of the
 best conversations I saw was
 one of our cryptographers
talking to a lawyer, and
 at the beginning, the law
 to him seemed like the
laws of physics, something
 you had to build
 technology around. But to
the lawyer, they thought
 of cryptography as some
 brick or maybe even
something bigger like a
 dollhouse that they had to
 build the laws around.
But it turns out that
 cryptography is like putty,
that you can make it and do
really interesting things with it.
 The law was also a
 product of a bunch of
conversations and designed by
 people and when the
 two, the lawyer and t
he cryptographer, got
 together, they realized that
 there are many
problems that we have
 that could be solved either
 by cryptography or by
law, but even better by
 sort of putting both of
 those together. I think
through the course of this
 course, you will hopefully start
 to see all four of
these quadrants as areas that
 you can use to think
 about how we might
affect the future. Now, I
 will talk about it towards
 the end but notice I
didn't say "to make the
 world a better place" because
 I think that's one of
the problems that we seem
 to think often, that we
 are all on the same
page about what we want
 to do, which is where
 the ethics stuff comes in.
So, this is
 just a little bit
 of where we are.
 [VIDEO]
>> Human beings have played games
against computers. At first,
computers struggled but then
they started winning, and now
they've become so dominant
 that they are raising doubts
 about the future of humanity.
The latest
 emblem for existential dread
 is Google's DeepMind Project,
which created
 AlphaGo, an AI program
 that's become unbeatable
as the most
 complex strategy game
 on the planet.
Or the game of
 Checkers has 10 to the
 power of 20 possible outcomes,
and a game of
 Chess has 10 to the
 power of 40 possible outcomes.
Go has more than
 10 to the power
 of 80 possible outcomes. AlphaGo
is trained to
 analyze situations itself by
breaking the game down into tiny
parts and visualizing all
 possible moves. Last week, it
 played the world's best
Go player, 19-year-old Ke
 Jier, with the help of
 a human handler. AlphaGo
beat him three
 times, and after doing
 what it was designed
to
 do retired from
 the game.
[CHINESE
 LANGUAGE]
If a system like AlphaGo can
learn all the moves in Go
well enough to beat a person,
then it has the potential to
replace lawyers and accountants
among dozens of other jobs.
It might be perfect,
 but it has no
 way to navigate human politics.
The Chinese government
 banned the live stream after
 Jier lost the first game.
The loss to
 an American company was
 an attack on country's pride.
 [AUDIO]
>> The Chinese government has
 made a big effort to proclaim
 that they are moving ahead rapidly
in artificial intelligence,
 that they will be the
 people who dominate the AI.
To have the dreaded
 Google come in and beat
 China at its own game,
it's just piling
 insult on top insult.
 It's kind of amazing.
>> But it
 was more than
 a national crisis.
 [CHINESE]
>> JOI: First of all,
 the science is all wrong
 just to be clear.
It doesn't compute
 all of the
 moves. Actually, it's
impossible. There are more
 moves than there are
 atoms in The Universe.
That part is wrong. It's
 just the way they describe
 it isn't correct. It actually
is fundamentally different from
 Chess in that because
 it can't compute
every move, it is doing
 something that looks a lot
 more like creativity, a lot
more like intuition. We will go
 into this later in more of
 the AI stuff. First of
all, I had to comment here
the science is wrong, the
second piece is see
how quickly they move it
into a us versus China
thing; which is, I think,
also a thing that the
 media is trying to do, and
 I think we will have
conversations later about whether
 that's true and if
 that's true, what we
do about it. I think
 it's also interesting because they
 said it can't get
involved, it can't do
 human politics. Well, that's not
 necessarily true. It may
be, and this is something
 we should talk about during
 the class because I
think that some people do believe
 that a lot of what politics
 is, not all of it,
a game. The extent
 at which computers get
 better and better winning
games, it's possible that
 machines may do more than
 -- We actually have
somebody here from OpenAI.
 This is OpenAI which
 is a nonprofit in
Silicon Valley that's funded
 by Sam Altman and Elon
 Musk and a few
others. This is actually
 a multiuser game which is
 very, very complex, and
OpenAI has been able
 to win in tournaments. There
 are other things that
they have been doing
 at OpenAI that show that
 these machines can, by
playing against themselves much
 like how AlphaGo did
 with even less
supervision in some of these
 cases, start winning at all
 kinds of games. I
guess I don't know how
 much of it is publicly available,
 but I will just say
that this is a very
 complex game, but they are
 also doing things that
involve lots of physics
 and that I would say,
 again, I don't remember
exactly what they said I
 can say, but they have a
 rough idea or a belief
that machines will be
 able to win at any
 game against humans pretty
soon. The interesting thing here
 and the meta point that
 I would have here
is that the particular person
 I talked to is like, "So
 it's the end or the
beginning or something." But it's
 a big deal, and it is
 a big deal because a
lot of things are
 games. Markets are like games,
 voting can be like
games, war can be like
 games. If you could imagine
 a tool that could win
any game, who controlled it,
 and how it is controlled
 has a lot of bearing
on where the world goes.
 On the other hand, when I
 was probing, a lot of
people say, "Well, if it
 can win at any game,
 it's a superintelligence." There
are roughly three categories
 of artificial intelligence that
 people are for.
There's artificial intelligence
broadly, but then inside of
 that there are
three. One is AGI,
 so artificial general intelligence.
 Things like OpenAI
where you can point it
 at a general problem without
 giving it much detail,
like all the Atari games, and
 it will figure out how to win
 and it will win. It is
a very undirected general
 intelligence, and AGI is the
 idea that you make
something so general that
 it can solve just
 about any normal problem.
ASI which is superintelligence,
 which is the thing
 that Nick Bostrom and
Elon and others are afraid
 of which is that the
 machine gets so smart that
it starts training itself
 and gets smarter and smarter
 and smarter until it's
smarter than human beings.
 The intelligence part is
 actually kind of
interesting because when we
 say "intelligence," what does
 it mean? The
problem is a lot of
 my friends in Silicon Valley, when
 I talk to them about
the future of this, they say,
 "So they will win," and I
 say, "What does it mean
they will win?" Well, life
 is a game and they will
 win and that's where I
realize there are at
 least two categories of people
 in the world. People
who, like one of my
 friends, knows exactly how many
 hours they need to
spend with their wife,
 knows exactly the balance
 of the happiness that
they get from their
 money versus their things.
 They can basically describe
to you in this sort
 of metrics how they measure
 happiness, and if they can
optimize for happiness they win
 at life. If you believe
 that life is a game
that you can win at,
 then you could probably imagine
 that a computer can
beat you at life. But if
 you believe that life is not a
 game, like I do, like I
believe I am a
 bunch of chemicals and
 molecular interactions in every
morning I wake up and
 my endocrine system tells me
 what I yearn to do
that day. My life is
 all about trying to fulfill
 the yearnings that come
through, not just my
 endocrine system, but my
 relationships in my
existence in the world. I
 think we have a somewhat
 spiritual idea that we
have a consciousness
 and we have an
 understanding. The word
"understanding" is pretty
interesting because a lot of
 these -- when you
hear people describe things like
 OpenAI, they say they get
 so good at this
that the machine understands
 what's going on. That's
 a pretty interesting
use of the word "understanding."
 It goes back to a
 really, it's in your
readings, the China room
 thought experiment. If you
 read your readings,
this will be redundant, but
 the idea is basically if you
 put a person in a
room who doesn't understand
 Chinese but has a
 set of instructions that
say, if this comes in
 one window, which is in
 Chinese, and it's a question,
you can pull together and
 then output this which is
 the answer in Chinese.
If you have a complete
 set of instructions of what
 to do when this phrase
comes in, and you're
 just looking at squiggles
 now, so you don't
understand what the Chinese
 is. You have a lookup
 table that tells you
what the answer to the
 question is, which is also
 a bunch of squiggles
and you're putting it out,
 you could appear to an
 outsider who is putting
questions in one end and
 getting answers in the other
 as if you perfectly
understood Chinese; but in
 fact, you yourself in the
 middle who is the
program, which is kind
 of like the AI,
 doesn't understand anything except
the ability to execute on
 the instructions that you have
 which is if this
comes in then put that
 out. In the readings you have
 a lot of peer review
and critique about this;
 but the general question here
 is just because you
can produce the right answer,
it doesn't necessarily mean,
 at least to all
people, that you understand what's
 going on. Then again, this
 is kind of a
recursive -- and I am
 glad we have a couple
 of philosophers in this class
this time because this
 becomes a very philosophical
 question. First of all,
does it matter? I think
 Elon, who is one of the
 funders of OpenAI, I don't
know if it's a joke,
 and we can ask somebody
 here from his organization
whether it's a joke, but
 he often says that there's
 a 90 percent chance that
we are living in a simulation.
It's an interesting point, I
 think there are a lot
of people in the world
 who believe that the world
 can be reduced to zeros
and ones. If in fact we
 are living in a simulation, it's
 pretty easy to go from
there to imagining that
 if computers can manipulate
 bits at scale and
tackle the complexity that
 we have been fighting
 against, that they could
somehow understand the
 world. Then there are
 probably others who
probably don't believe that
 we are living in a
 simulation and that this
doesn't constitute understanding;
 we understand something
 more, but
again it gets into a
 philosophical debate. I tweeted the
 other day my own
belief which is this:
 Those people who believe the
 world is a simulation
are probably those who
 are most likely to
 be simulated by computers.
 [LAUGHTER]
We can talk more
 about this general intelligence,
 and it's a really
interesting philosophical question,
 but we are not
 there yet. OpenAI may
say that we are
 going to be there really
 soon and we should talk
about it, so I am not
 taking it off the table. I want
 to focus a little bit on
machine learning which is
 a dumber version but
 still very powerful tool
which is actually deployed
 already. The idea about
 machine learning is
that it basically gives
 the computers to learn
 without being programed.
When we say "algorithm"
 in AI or machine learning,
 it's very different than
an algorithm that we
 typically think of. When
 you heard about the
Volkswagen that was
 programmed to not
 meet the environmental
standards, you could open
 it up and deconstruct
 it; it's compositional and
you could read it and
 see and audit it. The
 way that a machine learning
algorithm works is that you
 feed it data, you turn
 knobs, and it sets a
bunch of weights in this
 neural network in a way that
 you can't look at the
neural network and understand what
 it's going to do just
 like you can't rip
open somebody's brain and
 understand what they are
 thinking. While it is
an algorithm in the
 traditional sense that it
 does things based on
functions, it's a lot harder
 to understand exactly what it's
 going to do, and
it's a lot more like
 our brains in that your
 child, you know what textbooks
your child has and you
 know the genetics that went
 into your child, but
you sure don't know exactly
 what your child is going
 to do or become. As
we start to think about
 the future of machine learning,
 we have to realize
that it is a quite
 very different thing. As a
 side footnote, because you are
not sitting there programming
 it, it actually takes up
 less space in many
cases, it's very expandable,
 and it's very powerful.
 When we say
"programming," we often are
 programming the system in
 which we use to
teach or allow it
 to learn. We are not
 programming the actual algorithm.
Even rules like Isaac Asimov's
 rules of robotics, it's a
 little bit harder to do
that than you might imagine.
 I'm not going to do
 this. We might do a
premier tomorrow on AI
 and machine learning. Even
 in machine learning,
there are many, many ways,
categories of machine learning,
many, many
specific methods and most things
like AlphaGo was a combination of
several different types of
 machine learning. A lot
 of machine learning is
kind of a fine art
 of picking which sorts of
 algorithms in which order, what
the tunings of the knob are,
 and it really is kind of
 a dark art. One of the
troubles that we have right
 now is that you have to
 get pretty good as an
engineer before you are going
 to be a able to
 train a useful machine.
Unlike VisiCalc in the
 old days where any
 accountant or business person
could become creative
 and generative on the
 computer, machine learning
is not yet to
 the point where the
 normal average person without
specialized training can create
 something useful out of
 it. Although that's
something we would like to
 see. One of the main
 applications that we are
all using day-to-day is
 classification of visual images.
 Right now, through
using data sets, Google
 and Facebook and other
 places, whether it's
facial recognition or the
 recognition of objects, it has
 a very high accuracy
of categorizing images. This
 is now mostly more
 accurate than human
beings and faster. It
 works on some things
 like self-driving cars. The
problem is it is a
 great tool, but people still end
 up misapplying it. This is
also in your readings, it's a
[foreign word]
It's some Chinese
 researchers who took
 government IDs,
and now assert
 in the paper that
 over 90 percent accuracy
in predicting whether
 somebody is a criminal
 by the photo.
The article in
 your readings goes through
 and deconstructs how
they might have gotten
 to that problem. If you
 notice, all the non-criminals
have white collars, so it
 could be that the machine
 is just figuring out that
it's the white collars that
 signify -- they also point
 out that the people who
are criminals tend to
 not have as happy faces.
 The problem is, because
you are training this model,
 and it isn't able to
 explain to you exactly what
it's learning, you don't know
 for sure how it's getting
 to that accuracy. The
problem is that if you
 suddenly trained the machine on
 a data set, and
then you say, "Okay. It
 turns out we are 99
 percent accurate; we are going
to roll it out." Then
 it puts every white-collared person
 into a job and puts
every non-collared person
 into prison or dark-skinned
 person into prison
or a white-skinned person
 into a job or
 whatever other problems. The
problem with machine learning
 is we don't exactly know,
 and it may be
surfacing a bunch of really
interesting underlying factors.
 Now, this goes
all the way back, so
 if you think about how
 Nazi Germany started the
Holocaust, it was this
 whole notion that we
 have evolutionary traits and
by eliminating certain
 categories of people, society
 would get better.
There was eugenics that led
 up to that which was
 the idea that the shape
of your head or the
 form of your body somehow
 was an indicator of your
social value or your
 criminality. The fact that
 we have modern papers
coming out just as recently
 as last year trying to
 use machines for this
shows that we have
 this risk of what
 many people call "reductionism,"
which is that often you
 come up with a scientific
 theory that sounds great,
you apply it, and it actually
is an oversimplification, and
 it can cause a lot
of harm. So that is one
 of the biggest categories of harm
 that I see that is
already happening. This is
 actually an MIT project, and
 I think -- Jenny,
this is your team, right?
 So, this is an adversarial
 system. This is a 3D
print of a turtle, that
 Google thinks is a turtle
 appropriately; but by fiddling
with, I think it's the
 pattern and some of the
 lines, they were able to
modify the turtle so that
 Google now thinks it's a
 rifle. There had been a
tax like this where you
 set up and changed some
 of the pixels on the
image and are able to
 make things misclassified. I think
 this is the first
time that something very
 general like this 3D object
 that you would hold
in your hand now always
 looks to Google like a
 rifle. There's this one
category, which is maybe
 you mistakenly start to
 mess up the
classification because you
 don't notice that all
 the non-criminals have
white collars, but if you
 actually try to attack the
 system by creating an
adversarial system, you can
 do this. There's also
 another rumor that I
heard that some kids
 were cutting out craft paper,
 stop sign shapes and
putting them in front of
 Google cars and jamming them
 all up. So that
quickly shows you the
 limits of machine learning.
 A human being would
know immediately that if they
 saw some teenage kid with
 a piece of craft
paper that it wasn't a
 stop sign, but a car gets
 very confused. I think one
of the things that as we
start to jump into --
and actually I think there's
an assembler project working exactly
on this adversarial jamming thing --
but what's interesting is
 that because machines are
 so smart and so
good at what they do,
 we suddenly think that they are
 going to be good at
everything, and they are
 really bad at some things
 we are obviously good
at, and I think one
 of the key things as that
 we think about the relationship
between humans and machines is,
 I think for at least
 for the short run,
there are going to be
 a ton of things that
 we are really good that
machines are not going to
 be good at and vice
 versa and figuring out that
relationship is going to be
 key. That is going to
 involve law, that's going to
involve norms, that's going
 to involve things like
 user interfaces; and I
don't think we're there yet,
 and that is one of the
 things I would like to
explore in this class. This
 is an angiogram, but I am
 just going to use this
slide generally to talk
 about medical imaging. Medical
 imaging it turns
out is a great
 application of computers for
 machine learning because
now with the cell phone,
 you can send a picture of
 a tumor, you can send
a picture of your skin,
 and the computer can come
 back and give you
results and in many,
 many fields now they are
 showing that the machines
are much better than
 human beings. I heard a
 confidential study the other
day that there was a
 particular test, and they did
 a study of allowing
doctors to overrule the
 machine, and 70 percent of
 the times that the
doctor overruled the machine,
 the machine was right.
 In other words, 30
percent of the time, the
 doctor was right. That's scary,
 right? If you are a
doctor, are you
 going to overrule the
 machines recommendation when
you have a 70 percent
 chance, more than 50 percent
 chance, that you are
going to be wrong
 and somebody might sue
 you because you have
overruled the computer? They
 were all, and this is
 a big company, like
how are we all going to do this? And
it is setting up this adversarial
relationship. It's similar in
an airplane. You overrule
the autopilot or in my
Tesla, you overrule
 the autopilot. There's
 this interrupt-driven adversarial
relationship. I talked to
 somebody who was involved in
 the start up the
other day that has a, I
 am not supposed to give away
 the details, but it's a
medical imaging thing
 that's 90-something percent
 accurate, but doctors
are generally  90 percent
 accurate in that field, so
 the machine is slightly
more accurate, but still
 accurate enough more than
 doctor's that it was
worthwhile. They also had
 the same problem where
 the doctors didn't
want to give up control
 to the machine, but they
 changed the interface so
that the doctor is
 always in charge and you
 don't see the machine.
Whenever the machine sees
 a result that's different
 than what it believes,
it highlights the area, and
 it notices the thing that
 might be different and
it's like a spellchecker.
 The doctor's like "Click,
 click, okay, click, click,
okay, and then click, and
 there's a little red thing"
 and they're like, "Oh
yeah, I didn't see
 that." So suddenly instead
 of having an adversarial
relationship with the doctor,
 you have the machine
 looking over the
doctor's shoulder giving
 suggestions, but it's still
 the doctor's choice
about whether to take the
 suggestion, and in 90 percent
 of the cases, a
doctor is just doing it
 and the machine is not
 saying anything. I think the
idea of having the
 machine looking over the
 doctor's shoulder, but also
the driver of the car saying,
"Wait that's just a kid
with a bunch of craft
paper overruling the machine." That
kind of relationship of looking over
each other's shoulder rather
 than being an adversary of
 each other I think
is a really important one. It
 does, and this will start to
 tie to some of the
law school stuff, it does
 start to get blurry on
 whose responsibility it is
right now. When you crash
 a test line autopilot, it
 is your problem in
America. I think if you
 crash many of the European
 cars, it's their problem.
This whole idea of what
 we call "the moral crumple
 zone," which is this
idea that people are
 going to be pushing responsibility
 onto others, is one
of the things that
 these  computer-assisted
systems will have. We will
hear later in the
 afternoon from Karthik and Pritique
 who work in medical
systems. Karthik's work
 is very interesting. He's
 taking now the
conversation between the doctor and
the patient before they take the
angiogram, and then he shows what
he is finding is that the doctor
doesn't really understand,
 especially with male
 cardiologists and female
patients. Women underreport
 pain; men over report
 pain. Most male
doctors misdiagnose women's
 cardiology tests. But once
 you have a
machine, the machine
 actually through learning
 the conversation, can
often guess what the problem
 is better than the doctor
 of the patient. He
was also saying that
 the nurse and the patient
 actually is even better.
Maybe in the future what
 you are going to have
 is, you might eliminate the
doctor, but you probably
won't eliminate the nurse.
It will probably
be a nurse augmented with a machine
to be able to do things. The
conversation between a human
 and a human is
 actually really important
to add additional information
 to the data. Again,
 I think that's interesting.
The other thing that
 Pretique will talk about
 is sometimes machines will
find patterns that humans don't.
 We have a lot of
 frameworks that we use
in order to sit down and
 try to figure out where we
 look. It turns out that
when you start to
 get machines that look at
 medical images, they might
start to see patterns
 that we never thought meant
 anything. This gets into
explainability because there may
 be things, we are
 trying to get machines
to explain everything, but
 there may be things
 that the machines can't
explain because we don't
 have a framework in
 which to understand it.
They may actually help us
 come up with ways, just
 like AlphaGo created a
whole new way to play
 Go by doing a very
 creative move that humans
haven't thought of, it could
 be that machines will surface
 ways to look at
medical images that we
 haven't even thought of as
 human beings. I think
many of you have probably
 read this paper. I think
 it's now about two
years old. It's by Julia
 Angwin who is a Director's
 Fellow at the Media Lab,
and we assigned it
 as reading last year and
 one of our participants,
Madars, read this. This is
 basically a paper that describes
 the use of risk
assessment or risk scores
 in the judicial system. Basically,
 it is not even
machine learning or AI, it's
 just a bunch of numbers
 that get put into an
algorithm that create a
 risk score that people
 used to determine pretrial
bail; or in some
 cases, sentencing; and in some
 cases, parole. She's a
Data Scientist that took over
 a year to put together
 the data that show
that the system was
 biased. It was biased against
 people with dark skin.
And it was roughly neutral
 for white people. So, we
 jumped in and said,
"Okay, this is a problem.
 We are a bunch of
 engineers, maybe we could
solve this, maybe we should
 have a block chain and
 have it so that we
can audit it." As we
 started to go in and we
 brought in a sonographer, we
realized that it was
 a much deeper problem. There
 is one argument that
said even though it's
 biased, it's still reducing
 jail time for people
compared to human judges,
 and it's actually more
 fair to white people
than black people, but
 at least it's more fair
 to somebody. There's another
argument that was saying they
 are not ingesting race as
 one of the inputs
but the rate ingesting
 proxies to race… It's somewhat
 racist, but it turns
out that the underlying
 data is racist because that's
 how society is racist.
Then the question becomes,
 If it's just reflecting
 the underlying data, and
the underlying data,
 the underlying society
 isn't fair,
is it the machines job
 to make society fair? No.
 The machine's job, at
least in this particular case,
 is trying to accurately assess
 risk; and if a kid
has the wrong
 circumstances they will,
technically speaking, be higher
risk. Then it gets into
 this bigger problem which is,
 shouldn't we try to
address the underlying cause? If
 this kid from this ZIP
 Code has a higher
risk than the kid from
 this ZIP code, shouldn't we try
 to figure out why this
ZIP Code has higher risk
 and go after that? Adam Foss
 is here, and he is
a prosecutor who works on
 this stuff. Can we use
 machines to go after
the underlying causes rather
 than just more accurately
 predict and make
the current system more
 efficient? So this is Karthik's
 work, and we wrote
a paper recently for a
 conference on this; but a
 lot of machine learning,
almost all of it
 right now, is about accurately
 predicting things. This is
correlation. It doesn't necessarily
 mean that because you
 are in this ZIP
Code and you went to
 this school, you are a
 bad person. It just means
that you are more likely to
 be a bad person, or not
 even a bad person, you
are more likely to be a
 failure to appear if you live
 in these places. It's not
causation, it's correlation. The
 question that we have,
 and we have a
person from the Catholic Church
 who is with us, and
 he pointed out the
most strongly of all the
 people, that it's not right
 -- the justice system
should not be punishing people
 by longer terms or inability
 to get bail just
because they happen to live
 in the wrong ZIP Code.
 Then it becomes, but
that may be the
 most effective utilitarian way
 to predict that outcome.
So, this correlation
 versus causation is actually
 really important because
it's harder, again we will
 get Karthik to talk about
 this, but the causal
inference which is a way
 to try to change the
 underlying causes to figure
out what variables are
 just correlations and what
 variables are actually
causes to it. For
 instance, if you remove that
 variable or change that
variable, does it change
 the outcome? We are
 working in towns like
Chelsea and in certain
 jurisdictions where they are
 giving us more access
to the underlying data to
 try to see what the causes
 are and to try to
come up with a new
 way of doing machine learning so
 we can go and try
to fix the problem rather
 than just focusing on the
 place. We will do a
whole day on criminal justice,
 but the point I wanted
 to make with this
rambling and meandering path
 here was, what might
 look like a simple
problem on the surface,
 machines are biased, actually
 when you start to
drill down and try to
 understand the whole thing, it's
 like peeling an onion
or shaving a yak or
 whatever metaphor you want to use,
 it turns out it's a
big societal thing that
 gets to the fundamental
 thing: Should we rethink
the criminal justice system?
 Is the law the right
 way we approach these
problems? Is this an opportunity
 to make the world more
 just or is this
just an opportunity to
 make every subsystem in
 our whole society just
slightly more efficient and make
 us efficiently just as bad
 as we are today
or worse? Norbert Wiener,
 who was a famous
 mathematician here at MIT,
said that when you are
 in a company or a
 bureaucracy, that it is
automation. You have a
 bunch of rules, you have
 all these things, now
corporations have all these
 kinds of rights that
 make them have rights
like human beings, and
 that human beings in
 the corporations are just
machines of flesh and blood.
 A lot of the questions
 that we have about
governance and AI, I
 think we are already seeing
 these when we see
corporations that are able
 to grow and spend
 money and become entities
that are completely out of
 control. If you look at
 the market today, we
can't understand it. Human
 beings do not understand
 the market. More
than half the trades
 are done by machines. It's
 completely out of control.
Most corporations are above
 the law. They don't pay
 tax, they can lobby,
and in a way, if
 you just imagine corporations as
 machines, and they are,
we are already in
 the world of non-human
 intelligence. Our ability to
regulate or deal with them
 has not been, is not,
 very good. If you imagine
AI machine learning as
 just booster rockets to all
 the elements in these
machines, it's just going to
 be a more and harder
 version of the problem
that we have today. At
 the Media Lab, we don't
 use the word "artificial
intelligence" as much. We
 like to use the
 word "extended intelligence"
because artificial intelligence
makes it feel like there
 is another sort of
Terminator-like robot that's
 an artificial human
 that's intelligent and
conscious and wants to take
 over the world and has
 all the attributes of
humanness. What I think is
 going to happen is we
 are just going to see,
and we already are
 seeing, machines and algorithms
 seeping into every
part of our society
 from the individual, our
 phones which are our
prothesis, to societal
 systems, governments that are
 using machines and
search engines, and so on.
 I think it's much better
 to think about instead
of intelligence, a play
 on collective intelligence which is
 a whole field --
we know we have the
 Center for Collective Intelligence
here at MIT -- and
the idea that a group
 of humans or a group of
 things can get together as
a network and think
 and act and do things.
 I think that artificial
intelligence is, again, the
 wrong metaphor, and for better
 and for worse, I
think we need to think
 about this as an integrated
 system. The problem is
when you have a complex
 system, you can't just go
 back and say, okay,
let's just redesign the
 whole thing because you
 can't stop the machine.
So, it's not like creating
 a new nation where you
 can sit down and say,
let's come up with
 the laws and now we'll
 make everyone behave this
way. It's like trying to rebuild
 the plane as we fly it.
 This is no longer as a
designer an option. We
 can't just stop and restart,
 although we are trying.
I love this example.
 Everybody knows Monopoly, right?
 There is a
precursor to the game
 called The Landlord's Game from,
 I think it's 1905
or maybe 1904, and this
 game had nearly the same
 rules, but it was
created by the George's,
 and they were the
 precursors of the communist.
So, this game was
 created to show how
 ownership and rents drove
people to unhappiness and
 poverty and was about
 teaching about the
perils of capitalism. During
 summer, they would have
 kids play this to
understand how awful
 capitalism was. So, Parker
 Brothers came around
and said, "That's a really
 pretty cool game but let's
 just change the goal.
The goal is you're
 the capitalist, and you drive
 your friends to bankruptcy
and you win, and it
 became a very popular game.
 The reason I point this
out is that a lot of
 our jobs as engineers or as
 lawyers is to try to change
the rules. But if you
 change the goal and keep
 the rules the same, and
everyone's behavior changes, might
 we not extrapolate that?
 Even if we
fiddle with the rules, if
 we don't change the goal,
 maybe that's not going
to have a lot of
 effect whether we are talking
 about climate or we are
talking about things like crime.
 I want to pull all the
 way back out to much
higher than 30,000 feet,
 and think of the Earth.
 Right now, we have
systems where we
 have photons coming in,
 and photosynthesis allows
the photons to take
 water and carbon dioxide
 and convert them into
oxygen and glucose and then
 other things then… First of
 all, if you know
your history, I think when
 photosynthesis first came out, it
 was one of the
largest extinction events in
 the history of the
 Earth because oxygen was
toxic for many things.
 It took many key elements
 out of the environment,
and so it was a
 huge extinction event but a
 whole bunch of new
organisms and new processes
 were created that took
 the glucose and
the sugar and turned it
 into other things. If you
 look at nature, what
happens is, when there
 is an abundance of
 something, something gets
created to take that
 abundance and convert it
 into something else and
then that something else
 becomes the input for
 something else. So, the
whole world is a bunch
 of loops of one thing
 being the output for another
and another thing being
 the output for another, and
 there is no single
currency. It's all kind
 of interconnected, and our
 human bodies, it's the
same; it's a cross scale.
 We have at the molecular
 level it is happening
but at the geological
 level… Somehow, the Earth's
 temperature is able to
stay somewhat stable.
 Our body temperature amazingly
 stays stable, and
then you can take huge
 percentages of the processes in
 our body and rip
them out, and we are
 still able to function. We
 can eat very different
foods, and we are
 still able to function
 because we have self-adaptive,
robust, complex systems so
 that when you disrupt
 it, we are very
resilient. That's how complex
 systems worked and it's
 an evolved system,
it's not a designed system.
 It's a great way to
 think about how everything
is connected to everything
 else. There's a whole
 field called Systems
Dynamics. It was used a
 lot in the '60s and
 '70s model of the world,
things like poverty and war,
 but the basic idea is
 that you have things that
come into a system, goes
 out of his system, and
 then there's feedback. I
think I will use an image,
 this is my bathtub, to describe
 it. If you have a
bathtub and you turn on
 the faucet, you will see
 the bathtub fill up. Then
you have a drain, depending
 on how open the drain
 is, it starts to drain
out. Depending if you are
 trying to get the water
 to a certain point, that's
the target of how where
 the water wants to be, and
 so what you are doing
is, you are turning it
 on and you're closing it and
 that's the flow and the
amount of water is the
 stock and you have outflow.
 A simple system, that
the small bathtub is harder
 because it fills up more
 quickly, a big bathtub
like the ocean, it moves
 more slowly. Then imagine that
 you are trying to
get it at exactly the
 right temperature so you get
 in and you've got hot
water. Now imagine, as
 many of you might have,
 where your boiler is
really far away from your
 water so it takes a little
 bit longer for the water
to get to your boiler.
 Then imagine that you have
 to worry about where
the water's coming from
 and then the system
 that's heating the water,
and then you kind of
 think about everything as a
 system of inputs and
outputs to other systems
 because your boiler also has
 a stock and the
energy has a stock. So,
 the whole world is just
 a huge network, your bank
accounts of basically this
 model of stocks, things
 coming in, things
going out, and things
 connecting to each other with
 a bunch of pipes.
The world is a
 bunch of tubes. Systems dynamics
 people, what they do,
is they can imagine. One
 of the favorite examples is
 there is this game,
it's called the Beer Game,
 and they teach it in
 business school. What they
do is, they make three
 groups. One is the distributor
 of the beer of the
store, the other is the
 wholesaler, and the other is
 a brewery. What they
show is that they say,
 one day suddenly people start
 buying more beer, so
the store orders more
 beer. Then the distributor suddenly
 gets, this is a
bunch of stores, a bunch
 of orders, and they don't
 have enough, so they
can't ship them out,
 so they order more beer
 from the brewery. Every
person plays a rational
 actor where they're ordering
 more and then it
doesn't show up, so they
 order even some more, but
 in most cases, even
though everybody is trying
 to do the right thing,
 the whole system goes
bankrupt. When you look at
 the input, the only thing
 that has changed is
the amount of demand for
 the beer has doubled. The
 only cases in which
the system doesn't go
 completely bankrupt is when
 each of the nodes
imagines what might be
 happening in the other nodes.
 If they are just
working rationally in their own
 node, oh, I ordered beer,
 it didn't come, so
maybe I better order twice
 as much for next week
 so that I could have
enough to fill the…
 Then you completely screw up.
 The whole science of
systems dynamics is to
 show that if you think
 about things in systems,
they make sense because
 they are complex and adaptive,
 but if you think
about things in terms of
 the autonomous units, it can
 go wildly out of
whack. This very busy
 slide is Donella Meadows
 who is a famous
systems dynamist here at
 MIT. She talked about
 how you intervene in
systems to try to get
 a system to be more
 healthy or resilient or robust.
This is kind of a
 weird, and there is a great
 essay that you should Google
and read if you're interested,
 but the top one is
 the least effective but the
one we use the
 most, which is changing the
 parameters and the numbers
and the constants. You can
 go down and start to
 change things like the
structure of the flows,
 which pipes are connected to
 which ones. You can
change things like the size
 of the bathtub, you can
 change the flow. As
you go down, you
 start to see the power
 to self-organize. Unlike a
bathtub, society, markets,
 and the ecosystem
 are actually dynamic
systems, you see the last
 three, which are the goals
 of the system, the
mindset, and the paradigm,
 and the power to
 transcend. Going back to
corporations, right now, the
 goal of a corporation,
 roughly, is to eliminate
the competition, to
 return shareholder value, and
 to externalize their
costs. It's kind of like
 cancer. Normal human cells, if
 they end up in the
wrong organ, they eliminate
 themselves. When they are
 out of context,
they don't grow, but
 cancer cells, regardless of their
 context, they take all
the free energy they can
 get and they grow as
 much as possible, and t
hat's why the system breaks.
 One of the problems I
 think we have right
now is that the goal
 of our important systems is
 just to grow and grow
out of context. In the old
 days when you had a company
 and you set it up,
you had a charter and
 the companies were like, "We
 need a windmill for
this village. We will create
 a charter about the fact
 that we need the
windmill and how we are
 going to govern it, and
 then we are going to
collect a bunch of
 money to execute on this
 charter." That's the initial
reason that incorporation happened,
 and it was not
 until quite recently, in
the '60s and '70s, that
 we started to think about
 the shareholder as a
primary customer of the
 company more and more.
 Having said that,
recent studies show that
 millennials, many of you,
 I guess, the majority
won't join a company
 that doesn't care about social
 values. We do see
things like Uber and
 other places for social
 reasons, companies are
getting taken task. But
 still, I think the
 overriding goal of many
organizations is to increase
 financial returns. The paradigm
 of this is the
fact that we are
 measuring things in financial
terms. I was recently in
New York and somebody said,
 "Well how he be smart?
 He's not rich." So,
the idea that if you are
 smart, you must be rich, and
 if you're not rich, you
must be dumb is actually
 quite a common way for
 people to think about
things. It's also because
 we measure things that way.
 GDP, when we talk
about the competitiveness of
 nations, we talk about
 GDP. GDP is how
productive you are. When
 Jonathan and I are
 taking care of our
respective children, that's not
 contributing at all to
 GDP, so the countries
don't measure that as
 success, but if I break
 that window, that will
contribute to GDP because
 it will create jobs and
 things will move and
money will get spent.
 This is a common metaphor
 that people use, but
breaking windows increases
 GDP, whereas taking care
 of your children
don't. Often there is a
 number cited that says that
 IT hasn't contributed to
productivity gains, and a
 lot of it is because
 the qualitative stuff doesn't
contribute to GDP; it's
 only when things move around.
 GDP is a financial
metric that is really
 good at moving stuff
 around, and again, the
behavioral economists will argue
 with me and say
 that you can measure
so many things financially,
 even suicide or happiness,
 you just have to
get the formula right,
 and don't throw away
 economics and accounting
just because we haven't
 done it right. To me,
 it's like somebody arguing
that you can understand
 music in mathematics; of
 course, you can. You
can represent a song on
 a CD just as a string
 of bits, but is mathematics
the way you want to
 enjoy music or even to
 create music? That's the
question, and I feel like
 the paradigm that we have
 right now, the mindset,
is economics. The most
 important one is the
 power to transcend
paradigms. To me, that's
 really important. That is
 questioning. Is this the
way we should be
 measuring? That's what scientists
 do; that's what
artists do. I think this is
 one of the things I want
 to do in this class. When
I think about philosophy or
 ethics, a lot of that
 is about our ability to
reflect on whether the
 paradigm is the right thing,
 asking some of the
basic questions. I was very
 heartened the other day. I
 was with a bunch
of junior high school kids
 in Tokyo, and I had an
 hour with them, and I
said, "Let's talk about climate."
 One of the kids said,
 "Well, is it better if
there are no humans
 for the climate? Are we
 talking about with humans
or without humans?" I
 said, "Oh, that's a good
 question. Let's say with
humans because we are
 biased. That's our perspective."
 And then a
ninth-grade girl with her
 Japanese school uniform said,
 "What about the
meaning of life? Because
 don't we have to figure
 out what makes us
happy and why we are
 here before we start to
 figure out the solution to
climate?" To me that
 was really heartening that you
 could have a room
full of kids going
 through a very stodgy educational
 system, which is the
Japanese system, still
 asking some of these
 fundamental questions. My
concern is that a lot
 of these solutions that we
 are looking for in AI,
whether it's in the
 criminal justice system or
 in the economy, assume
certain paradigms, certain ways
 of going into the
 future, that may not
necessarily be the right
 way. I think we are
 not questioning enough. This
is an image Iyad
 Rahwan from the Scalable
 Cooperation group, where I
think a number of the
 students this year are from his
 class. I think it's an
interesting way to think
 about this kind of
 co-evolution where you have
society, and you have
 machines, and they interact with
 each other, and it
evolves. Now, evolution isn't really
 a kind way for things
 to get better. It's
a little bit more,
 in my view, nuance
 and appropriate than this
optimization. People try to
 optimize for things, and
 getting back to my
friends that believe that you
 can win at life, they
 are optimizing for a
number of variables. I think
 that's not really a very
 resilient way to think
about life. I am going
 to make fun of my own
 institution, and so this is
MIT's campaign, "MIT for
 a Better World," and people
 kind of know what
you mean when you say
 "a better world," it's for
 society and stuff like that.
But I always ask, for
 who and what time scale?
 For shareholders at a
quarterly time scale, which
 is what most companies
 are doing, that's a
particular set of outcomes
 that won't spend money
 to train people, won't
spend a penny more on
 the environment than they have
 to. Or is it for
Native Americans who think
 at seven generations? Or
 is it for biodiversity
and eons? In which
 case humans are probably
 a problem. Think about
who at what time
 scale. What's interesting
about human beings, I think, is
that we think about
 many timescales and many things.
 When I am sitting
there meditating in the
 morning, I am thinking about
 the Universe at a
very long-time scale, but
 when I'm in a meeting
 negotiating with one of
my faculty members over
 resources, I'm thinking at
 a very short time
scale. I think the
 diversity of timescales and the
 diversity points of view
that exist inside ourselves,
 inside of society, is
 what creates this robust
complex self-adaptive system which
 isn't as good as
 nature, but it's
getting closer. The idea that
 we can reduce it to
 some sort of formulaic
optimization that we can
 train the machine against, I
 think is a problem.
It's reduction. One of
 the key takeaways for myself,
 and this is obviously
a point of view and many
 of you may disagree, but I
 am kind of out to
resist reduction. I don't
 want somebody like, and
 this isn't Godwin's Law
because I am equating
 Silicon Valley with Nazis
 by saying that
reductionism is like eugenics,
 but there are other
 examples like BF
Skinner and learning. I think
 that when you get a
 lot of new science and
technology, you get this urge
 to apply it to a
 bunch of societal problems.
If you are too
 reductionist and you don't embody
 the complexity, you do
have the danger of going
 way out of context and thinking
 that a turtle is a
rifle or thinking that
 because somebody's skin color is
 one color that they
are a certain classification. I
 urge us, as we go
 into the nitty-gritty of law
and technology, that we don't
 forget. This is why it's
 great, I see the two
philosophers in the front, that
 we don't forget the context
 in which we live
and the complexity. My
 personal belief that the
 fundamental nature that
is humanity is not
 reducible and that somehow, we
 need to figure out
how the reductionist part
 of our society and the
 irreducible part of our
society coexist and help
 each other rather than
 hurt each other. Thank you.
 [APPLAUSE]
 [MUSIC]
We have time for Q A.
 We have the weird Media
 Lab box, and the
other day I
 had Kate interviewing
 Chelsea Manning,
and she was really good
 at this. She told me
 later she plays basketball.
Any comments? Any questions?
Opinions are great too.
QUESTION: Hi. I am Kathy Pham.
I am an assembler and also
a Fellow at the Berkman
Klein Center. Something that
I thought of while you were
talking about corporations,
Joi, was yes, profits
 are a big factor, but
 entities like MIT and Harvard
and other universities
 train the engineers, the
 humans that go and work
there, and many, as
 you said, especially the
millennials, want to do social
good and have
 social responsibility. How do
 we get that sentiment
from the
 individuals into these companies
 that care about
profits, etc., and make
 it so that the engineers
 working on these problems
don't ultimately produce the
 problems and actually achieve
 the goals that
they are perhaps striving
 towards with ethics and
 doing social good?
>> JOI: You happen
 to be sitting next
 to some philosophers
who will be able
 to help you. At the
 Media Lab we are
using design as one
 of the cornerstones of this,
 so we have a journal
called The Journal
 of Design and Science,
 and part of design
is trying to
 understand all the systems
 and constraints. What I
think that engineers
 need to do is
 understand, first of all
, that everything they
 do affects a whole
 bunch of systems. If
you create an idea
 or a thing, you are
 affecting the aesthetic landscape,
you are affecting the
 environment, you are affecting
 learning, you are
affecting a whole bunch
 of things. You may not
 be able to control
everything, but you are
 responsible in the way that
 you are responsible for
bringing a child into
 the world. By being responsible,
 at least you then
either iterate towards or feel
 some sort of, I'm calling
 it a sensibility. Neri Oxman
and Meegin Kim
 [phonetic] teach a class
 called Design Across Scales
where we are talking
 about design at this microbial
 level to the astronomical level
and that they are
 all connected. If you
 have a sensibility which
is let's avoid
 waste, more than enough is
 too much, be kind,
and if every person at
 every scale is thinking that
 way and has a
synchronized sensibility, I think
 the system will change.
 This is more
participant designed, which is
 that every participant in
 the system has
the ability to change
 the system, it's not the
 master planner. Kevin Slavin
and I worked on this
 paper, and we were in traffic
 one day and he said,
"You know, you are not
 stuck in traffic. You are traffic."
 I think the part of
the engineer also is about,
 often your designer or an
 engineer, you are the
object doing something to design
 for the subject. But if
 you start to just
design things for yourself that
 you would want, and this
 is kind of the
Golden Rule of do unto
 others, but if you are
 a participant in the system
rather than an outsider,
 that's another way to reframe
 the problem so that
people think about in their
 own context. I think it
 comes with, and this
class is the beginning of
 this, is to bring these
 thoughts to scientists and
engineers, especially scientists who
 tend to think about
 things in very
narrow, in a particular
 microscope setting, in a
 particular field, rather than
how everything is
 connected in a
 complex system.
>> QUESTION: Thanks.
 Holly Benjamin. I
 introduced myself yesterday,
but I
 currently work at
 Google on identifying
and testing for
 product bias, and I
 am also an assembler.
This may be more
 of a comment or something just
 like a provoking thought exercise
that we can start
 to get into over
 the course of this class.
It can be
 devastatingly frustrating to watch
 the illumination of certain
societal problems that for some people to
 say, coming back to your comment about the
 criminal justice system and things that people may
start to realize there are
 problems or there is the
 existence of this bias.
I think, one, it is so,
 I don't know what word to
 use, just fascinating to see that
there may have
 been people who have been
 crying for help for decades
and that it
 wasn't until this became a
 problem that it was interesting
to technologists or
 to people in certain
 positions that they would think
we need to solve
 it. The second piece being,
 as we recognize the power
and opportunity
 for machines and technology
 to help us uncover
and start to
 solve some of these
 deeper, perhaps, embedded societal
issues and paradigms
 that we see the world
 through, who is it that's
designing those experiments,
 and who has the
 power to think
about which ones need
 to be solved? So, recognizing
 that it has the
power to elevate voices
 that may have been
 too quiet or powerless
before, but that who
 designs those experiments and who
 starts to look at
what problems are worthy
 of solving is also just
 a really interesting thing
to
 think
 about.
>> JOI: I think
 we can talk about it during
 class, but that is key,
and we have Kevin Esvelt
 in our lab. He is
 one of the co-inventors of
CRISPR Gene Drive, but
 he's working with the
 Maoris in New Zealand
and the committees
 in Nantucket to actually
 run the experiments
themselves rather than us
 telling them what to do.
 There is an informal
consent in bioethics, and
 I think it's very similar
 with the criminal justice
work that we are working
 on, and this is really
 starting to connect to
Adam's work, is how
 do we empower the communities
 to do this work
themselves rather than
 from the outside?
 So totally agree.
Do you
 want us to
 hold questions?
>> It's the
 answers I'm worried about
 not the questions.
 [LAUGHTER]
>> QUESTION: Hi. I'm
 DP, one of the assemblers.
 Maybe on a lighter note,
and I don't want to
 pick on Elon Musk, but
 the whole thinking of the
superintelligence, there's
 always a dystopian
 tint on it.
It seems that there
 is a correlation between the
 intelligence of it and
the malevolence of it, and so
 I feel like that frames it
 in a certain way that
maybe reduces the accountability
 of the people working
 on it because if
in the end it's going to be
 evil or if it's not going to
 be better than us at our
best, then we don't need
 to try our best. It's
 kind of a philosophical point,
but why
 is that
 the narrative?
>> JOI: I
 will just point out
 that it's very interesting
because the Japanese
 aren't afraid of machines
 like the Americans are
. I think it has a
 lot to do with, at least
 one hypothesis that I've heard,
is that it has a lot
 to do with slavery. The West
 has had slavery since Greeks,
and so you've
 always had slaves
 and overthrows. We imagine
us being enslaved
 by a superintelligence where
 in cultures that
had less slavery, that's
 not really the dynamic.
 Also, because in Japan
you have you have
 animists system where you
 don't control things and
animals so much. You
 don't have this fear that
 an intelligent thing is
going to be evil, but if
 you are that evil person in
 real life, then you will be
afraid of something that is smarter.
I think that is a cultural thing.
That's the one thing I think Japan has
up, is that the AI is
 not as advanced, but I think
 the culture is going to be
more, it's going to
 be easier for them
 to assimilate. Last question.
>> QUESTION: Hi,
 [indiscernible name], a security
 reporter for CSO
online and former
 assembler. I have
 been following the
GDPR right
 to an
 explanation debate.
This is hotly
 contested by legal scholars,
 and the normative question
is also debated at
 the Berkman Klein Center just
 at a paper route
criticizing the idea there
 should be a right
 to an explanation for
automated decision-making. My
 question for you as
 a technologist, is
it technically feasible to
 give a right to
 explanation for automated
decisions in all
 cases? Is that
 even feasible?
>> JOI: I'm looking at
 JC. There is a long answer
 and there is a shorter answer.
The short
 answer is it is
 kind of complex.
There is a section of
 this, I am not sure if
 it was in the readings,
but first of all, it depends
 a lot on how necessary it
 is. It will definitely add cost;
it will definitely
 slow things down. In some
 cases, you may want to
allow machines to
 run without explanation if
 the risks are low
but the benefits
 are high. There are certain
 situations where you want
explanations. You will hear
 from Pritique later that
 in certain medical
situations, we are actually
 learning from machines ways
 to think about
things that wouldn't be able
 to explain to us in
 the way that we
understand. By limiting
 the explanation to legal
 explanations or technical
explanations, we may limit
 the ability for machine to
 come up with certain
types of ideas. That's
 another risk that people
 think about. Different types
of algorithms are easier
 to explain than other types
 of algorithms as well.
There's a lot of issues.
 I think it's a good idea,
 but I think it's something
that we need to look
 at in a very context-sensitive
 way, and I think that
there is a -- but
 even human beings are not
 very good at explaining and
most of our explanations
 are retrofitted excuses for
 what we decided to
do. One of my
 favorite neuroscience things
these days is that your brain
has a physics engine, so
 when you see a bag of
 rice that's about to fall
over, you are not actually
 computing with math that it's
 about to fall over,
so you can't actually prove
 it, you just know. So,
 this engine, as my infant
is now learning, starts
 to get this intuitive nonlinear
 model of the real
world. I think we do that.
 If you are a skier, you
 ski, but you can't explain to
somebody exactly what's good
 about your skiing versus
 what's not. If you
had to only do
 those things that you could
 write down, you wouldn't
actually be a good skier.
 The mastery of craft, even
 in our human brain, let
alone computers, often is
 an unexplainable wiring of
 a complex system
that just produces a result
 based on the way it's
 learned. That would be
my concern, that explanation
 is a category of
 reduction. Reduce ability to
human cognitive understanding which
 may limit the ability
 for us to
design something that is
 actually maybe intuitively more
 sensible than is
something that is legally
 correct. That would be
 another concern. With
that I will hand
 it off to my
 legally correct tort's professor.
>> I
 wonder if the answer
 to Yens' [phonetic]
question doesn't have to
 do with the Donella Meadows
 slide that you showed,
JOI, which is sometimes
 we want an explanation
 for its own sake,
the explanation is the
 point. We are questing
 for reasons and sometimes
the explanation is the
 means to an end and
 assurance that there isn't bias
or something like
 that. It's only an
 understanding paradigm where the
values from
 which you are
 wanting the explanation;
and I think
 we can probably infer
 why the European regulators
are eager to have
 such a thing as part of
 a right to privacy, that
you can figure out
 whether there something other
 than explanation that
can satisfy the upstream
 right for which the explanation
 is trying to serve.
That's as good of an
 introduction as any to some
 of the things I wanted
to broach today. I
 certainly feel a certain
 feeling after hearing JOI's
presentation, and more
 generally a certain
 feeling whenever thinking
about the ethics and governance
 of AI, a sort of
 vague sense of unease.
That vague sense of unease
 is a feeling that I
 am familiar with, not just
characterologically having learned
 fantastically from my
 parents, but
rather from having studied
 the law and policy
 around the development of
the Internet. And having
 been involved in that and
 studied that for a
number of years now;
 in fact according to JOI,
 decades, that's given me
some sense of syncopation
 trying to make sense
 out of the current
landscape for which there
 is sufficient unease all over
 the place that we
can be a class around
 it. We build an AI fund
 around it, we trouble the
European regulators who have
 plenty on their plate
 to want to be
regulating towards it. I
 think our goal broadly in
 an academic kind of
enterprise is to be
 relentlessly lucid about
identifying where our sense of
unease comes from, make
 it less vague to examine
 if it's really worthy.
JOI's pretty provocative
 example comparing the
 Japanese and
American culture and their
 attitudes towards machines is
 really asking us
to examine what the sources
 of our worries might be
 and if we just might
want to chill out a
 little bit. Or if not,
 how to address those worries
through the levers that
 we have, whether it's
 individual action or
governance or something else.
 It's really hard to build
 this stuff up from
the ground, and I suspect
 we will get there, but
 it's certainly taking years
in the Internet space to
 even have a semblance of
 a coherent notion of
what we are wanting to achieve
 and why and how we are
 going to do it. I
just wanted to share my
 own sense of where this
 non-field field stands as
we go about it. I
 am aware, and JOI kind of
 opened with this too, it's
against a backdrop, this
 vague sense of unease
 has these moments
where really successful
 and; therefore, necessarily
 smart people have
told us that they have
 thought about it, and there
 is a real problem here.
Bill Gates insists AI is
 a threat, given that it's a
 BBC article, it just might
mean he just said the
 words "AI" and then there
 was some sort of
journalistic license. He is
 worried about it. We
 also have Stephen
Hawking, the famed
 physicist, saying, "The primitive
 forms of artificial
intelligence have proved very
 useful but the development
 of full AI could
spell the end of the
 human race." Now, it's true he
 told the BBC that again,
but I do think that
 represents his view pretty well.
 Elon Musk who is,
among other things, behind
 OpenAI, says, "AI is nothing
 short of a threat
to humanity. With
 artificial intelligence, we are
 summoning the demon."
There is also Nick
 Bostrom who is the
 philosophical kingpin of this
movement whose hedging his
 bets a little bit
 here. He does believe
superintelligence could emerge and
 while it could be
 great, it could also
decide it doesn't need
 humans around or do any
 number of other things
that destroy the world, which
 is kind of like the
 ending of a story in
Newsweek. For those
 who remember Newsweek, the
 kicker was always,
"The future is uncertain, but
 one thing is clear, if
 things don't get better,
they could certainly get
 a whole lot worse. Hard
 to disagree with that.
 [LAUGHTER]
We will have a chance,
 I think, even maybe tomorrow
 as we do some
of our breakout
 groups to talk about
 this superintelligence thing.
This is a weird way for
 me to dismiss it, but my
 dismissal of it is, we have
a number of problems
 in front of us right
 now. There's smoke coming
from the console. Let's not
 worry yet that we are not
 sure it's 50 or 100,
but there could be depend
 on all the assumptions you
 load into things, a
real problem way down the
 road. I am kind of
 wanting to solve the
problem in front of
 us. So that's artificial
 intelligence. When we say
"artificial intelligence," what
 do we mean by
 "artificial intelligence"? I
confess that it has a
 lot of different meanings to
 a lot of different people.
JOI had that chart
 just of machine learning and
 the different variants of
that. There are the
 technologists' definitions,
which can be very useful if
you are trying to
 identify the tactical limitations
and issues with a given
technical system. There is
 a real value in
 understanding and elaborating
the actual systems we are
 talking about him. But I
 think there is also a
way of trying to think of
 it more broadly what we say
 when we mean AI. I
should also be clear,
 classes on ethics and governance
 of AI, we are
working with a fund
 on the ethics and
 governance of artificial
intelligence. When we say
 "ethics," it's not entirely
 sure what we mean,
and when we say
 "governance," we are not entirely
 sure what we mean.
There's a lot of stuff
 that is as of yet
 not completely worked out in
anything but the prepositions
 of the stuff we
 are talking about. And,
again, the relentless pursuit of
 lucidity is trying to map
 out a little bit of
each of those to
 create a common vocabulary
 and not have the
semantics be the thing
 we find ourselves arguing
 about unless the
semantics turn out really to
 matter. I'm going to try
 in the course of my
talk today to limb in a
 little bit of each of those
 zones. We might as well
start with artificial intelligence.
 This of course is
 The Terminator, which
JOI invoked. Old movie, but
 still a good one I
 am told, and the blending
of the human and the
 machine, part of the visceral,
 no pun intended here.
Fear that lurking beneath a
 human skin is a machine
 that is at first
passing and then later far
 more powerful than we are
 and perhaps out to
get us. There is
 the predecessor to The Terminator
 which was HAL 9000,
which had no corporeal
 manifestation. It was just
 a voice that was
maddeningly gentle, even as
 it was saying horrible
 things. I don't know
how many people have
 seen HAL in Stanley Kubrick's
 in 2001. But that's
an interesting, again, manifestation
 of having it be
 humanlike but not
worry about looking like
 a human. Of course,
 Siri and its counterparts
which do use the
 first person, emphasis on
 person, and really represent
themselves to us, again,
 as a disembodied kind of
 creature here to help.
The very first appearances of
 Siri as we will see
 on the iPhone, where I
am Siri, your humble
 assistant. They could have
 picked any number of
adjectives like, helpful,
 computer, thing, and instead
 they picked "humble"
just to pretermit the worries
 of, "Wait a minute are
 you out to kill me?"
"No, no. I am your
 humble assistant, not that other
 assistant that is out to
kill you." For my part, when
 I say "AI," let me tell
 you what I mean, and this
will also explain why I
 have no career in marketing. When
 I think of AI, I
am really thinking
 of our arcane, pervasive
 tightly coupled adaptive
autonomous system. These are
 each variables that could
 be tweaked to
more or less. The more
 you have, to my mind, of
 each of these, the more
we are getting, I think,
 at some of the sources.
 If we break down each
adjective, we can then
 start to be more lucid
 about the sources behind
the vague sense of
 anxiety we might be feeling
 of the pell-mell rush
happening now thanks to
 the magic of the
 markets to make them
pervasive, to make them in
 our pockets in the world
 running some of our
industrial systems, all that sort
 of stuff. When you actually
 look at this, a
lot of these elements,
 as JOI was emphasizing
 through Norbert Wiener,
exist in the artifice of
 a corporation, the legal fiction
 that that exists and in
the ways in which any
 of us part of a corporation
 or an institution think of
ourselves as speaking for
 it. "Harvard wasn't happy
 when "x" happened.
It's like, really? Harvard
 wasn't happy? What does
 that mean exactly? Here
then, to me, when I am
 going to be talking about AI,
 I will be thinking less
about the mechanics
 underneath, even though that
 matters, and more
about these qualities that
 are getting dialed up in
 ways in the difference
of a degree can become
 a difference in kind that
 make us want to be
intervening in a more
 dramatic and resourceful way to
 avoid some of the
futures that we fear. I
 also should correct this, JOI's
 right, I once used
the word "autonomish" to
 say they're not fully
 autonomous, they've got
humans involved in other
 things, so they are
 "autonomish" systems about
which we have concerns. Let
 me give you an example
 in the real world
that I think many of
 us are familiar with. This
 is unlike, for example, in
cyber law where I felt
 like we had to spend
 five years around Internet
governance issues trying to
 persuade the world that
 it mattered. You're
like, "No, no, domain
 names, they really matter." For
 which, sorry, this is
streaming and everybody's
 watching from iCam like,
 "They don't really
don't matter that much."
 But we thought they mattered.
 Here, there is not
that same persuasion
 having to happen. People
 somehow know that
from this early day.
 Remember Yahoo when there was
 a search box and
you might type something
 and then it would
 give you answers that
humans had curated into
 categories like an old
 thing called the Yellow
Pages, they move along
 to something like Facebook
 for family and
friends to share their status
 with each other in an
 updated feed. Once you
say it, I wonder why
 we didn't have it sooner. You
 start to see that it
becomes a source of news
 and what I call "agenda
 setting." You wake up,
what's going on in the
 world? What should I care
 about? Less and less, I
think. Do we find
 ourselves consulting the regular
 media but simply
looking at these aggregators,
 our Facebook feed or
 our Twitter feed and
its counterparts around the world
 to tell us what's going
 on and what we
should care about? Of
 course, these feeds are
 not identical across, they
are personalized to us
 using algorithms that may be
 a little bit abstruse,
we don't see how
 they work. Our own reactions
 become part of the
personalization. This is a
 study that Facebook did,
 and honestly that only
Facebook could do because it
 only had access to its
 data showing that a
number of days before
 a relationship is declared
 on Facebook between
two, what we assume to
 be people, there are behaviors
 going on that end
up through a training
 algorithm, you can alert
 Facebook that these folks
are about to be in
 a relationship possibly before even
 before they know it,
which leads to some
 ancillary products that Facebook
 could offer such
as in-law alert telling the
 in-laws that this is about
 to happen and for a
small fee that perhaps
 we could help to
 drive that would-be, star-crossed
folks apart. It's just a
 way of saying, "Gosh, if
 they could do it with
relationships, what else could
 they do it with? What
 else might it know
that I am getting
 interested in?" When I hear
 some of the excitement
among AI folks talking
 about what they think is
 right around the corner
with the data and
 the training that they have,
 it is anticipating human
behavior to the individual level.
 If you can anticipate it
 and predict it even
before the person may know
 where he or she is
 going, it might mean you
can also shape it. I
 think that's part of the
 anxieties we might be having.
Back to just regular
 old agenda setting, a couple
 summers ago were the
rights in Ferguson, Missouri,
 and there was a strange
 thing that a number
of people noticed which
 was that in their Twitter
 feed, they found all
Ferguson all the time,
 yet in Facebook feeds, which
 had roughly the same
friends as the number
 and type of people
 they were following, in
Facebook they weren't seeing it
 at all, and they were
 left to have to make
inferences about it, which
 quite quickly went to
 the conspiratorial such
as, Is Facebook trying to
 keep Ferguson out of the
 feed? I find myself
skeptical of those
 sorts of explanations. It's
 interesting, hypothetical to
ask what should a
 Facebook do if asked by
 law enforcement in the
interest of public safety
 not to incite riots by
 allowing live streams or
other accounts to enter
 a feed of a riot
 in progress. Under what
circumstance, if any, should
 they be responsive to that
 kind of request? It
turns out here, the
 answer was that they had
 just introduced hosted video
offered by users on
 Facebook, and they were really
 trying to promote it.
The video shown here is
 one of the more obscure
 versions of the Ice
Bucket Challenge for ALS.
 [LAUGHTER]
This was the Ice
 Bucket Challenge, and it
 went viral not just
because it was
 compelling, I obviously have to
 get rid of that,
not just
 because it was
 compelling, but because
Facebook had tweaked
 its own newsfeed algorithm,
 as it has
the right to do, to
 promote user submitted videos to
 make that far more
likely to show up admits
 the 8,000 things it could
 choose to shove after
the cat picture. It's like,
 "Oh, user submitted video, let's
 just put a lot of
weight on that because we
 want to get it up
 and running." That ended up
because there wasn't a
 lot of user submitted
 video of Ferguson pushing
that down in relation
 to something like the
 Ice Bucket Challenge. Even
Facebook, ultimately, had to do
 a little research to come
 up with that as
what I take to be,
 the actual explanation for a
 large part of the disparity
between the two types
 of feeds. There have been
 other times when we
have seen platforms, by that
 we tend to mean these
 days in this kind of
conversation Facebook and
 Twitter and Google
 usually through YouTube,
are aware of their
 power. There was an
 immigration, a   pro-immigration
pack that Mark Zuckerberg
 had funded. That group
 did a PowerPoint
deck trying to argue
 why they were well-positioned
 to make a difference
in debate, and that
 deck leaked. Under a
 section called Tactical Assets,
they said, "We
 control massive distribution
channels as companies and
individuals." We saw the
 tip of the iceberg
 with a campaign against
SOPA/PIPA, a US federal law
 that ultimately did not pass
 thanks to a lot
of grassroots, and also
 astroturf Internet pressure that
 made members of
Congress reconsider, and
 they are saying, "Look
 we've got these
distribution channels. We
 can make Facebook be
 an instrument perhaps
for changing minds." As
 you might guess, once
 this became public,
Facebook very quickly disavowed
 any such intent.
  It did get me
thinking about the possibilities
 in a study that happened
 in 2010 in the
congressional elections. Facebook,
 with some other
 researchers, but
again Facebook was necessary
 to this, decided to salt
 the feeds of its
tens of millions of
 visitors from North America
 on election day that
November with a simple
 note that looked like this
 that said: "Today is
election day. Find your polling
 place. Let us know if
 you voted, and here
are some of your
 friends who have already voted."
 They were curious to
see by reviewing the records
 of voting and tying it
 to real names offered
up on Facebook whether
 or not that had a
 significant impact on those
who voted. Just that single
 alert in the feed that
 it was election day. And
the answer was it absolutely
 did by margins that were
 greater than that of
Bush versus Gore in Florida
 in the year 2000. Of course,
 not so hard to do.
It was very close in
 Florida the year 2000 between
 Bush and Gore. That
did get me thinking about
 whether or not it might
 be that something like
Facebook could be a vector
 for tipping an election one
 way or the other in
this review of Cassandra
 studies published here. Sadly,
 no one listened.
Anyway, that's the kind
 of thing that had me
 wondering about the power
of the platform. Is that
 an AI worry? It's a
 worry about obscure systems
and also asking us
 to identify who is wronged
 in the instance that
Facebook hypothetically, let's be
 clear, were to decide
 that they favored
one candidate over another,
 maybe because one has
 a better stance on
Facebook's immigration position,
 and simply chose
 to alert those
Facebook users on election
 day who were likely
 to support that
candidate. And they don't
 have to do much inference
 there. We often like
what we like on Facebook.
 Alert them that it's election
 day. For those who
are not supportive, don't alert
 them. Just give them the
 cat picture they so
desperately know that they
 want. Is anybody wronged
 in that scenario? I
think, yes, but I see
 dispute there. It's not clear.
 Are you wronging the
person you told that
 it's election day? Are you
 wronging the person you
didn't tell when you gave
 them what they wanted and
 they might not have
otherwise been told? I
 think you are actually wronging
 both of them, even
the person you alerted, and
 a little later I will
 explain why. In the
meantime, the closest
 we've gotten to that
 hypothetical was this
situation, during a town
 hall meeting internal to
 Facebook where they
were using a Q&A
 tool, again, internal to the
 company, for employees to
nominate questions for Mark
 Zuckerberg to answer at
 the front of the
room and then to
 vote and presumably the
 most uploaded questions
would be the ones they
 would probably ask. Here is
 this question with 61
votes: What responsibility
 does Facebook have to
 help prevent President
Trump in 2017? This was
 in March 2016, mind you,
 so it was election year
with the election coming
 in November. Obviously, question
 did not get
asked of Mark Zuckerberg on
 the floor, but that's the
 kind of thing that is
a paradigm busting
 question, that JOI's example
 the high school
student evoked. It created
 enough of a stir
 when that screenshot got
released that Facebook had
 to convene a band
 of humans. At some
point, I think this
 could be rather easily
 algorithmically generated to crank
out a statement that
 was meant to be as
 anodyne and calming as
possible to say that,
 "Voting is a core
 value of democracy. Supporting
civic participation is an
 important contribution. We as
 a company are
neutral. We have not and
 will not use our products
 in a way that attempts
to influence how people
 vote." Notice, by the way,
 that this promise, which
I am happy to hear and
 in fact would love them to
 sign on the dotted line
to kind of lock it
 in, this promise still doesn't
 reach my hypothetical of
turning out folks who
 have already decided out
 of proportion versus
those who haven't. That's
 not attempting how they
 voted just it influences
whether they vote, and that's,
 again, the kind of thing
 for which it's like
might be good to get
 the companies say what they
 will and won't do. Also
within the context of
 agenda setting are articles
 like this, "FBI agent
suspected in Hillary email
 leaks found dead in
 apparent murder suicide"
with a rather excitable photo
 of a house burning down.
 It turns out this is
fake news, and by that,
 I mean, it is literally
 fake news. It's from the
Denverguardian.com. There is
 no such newspaper
 as the Denver
Guardian. If you are
 in Denver, you cannot
 subscribe to The Guardian.
There's not a box on
 the street with The Guardian
 inside. It is just a
website that they know
 people won't even likely click
 through to. This is
one of the few stories on
 it at the time, but it
 turned out, when you look at
the most shared stories
 during the last three months
 of the US election
season in 2016, the
 Denver Guardian beat out
 our own hometown
underdog The Boston Globe
 where the most shared
 story on the globe
had about 150,000 shares.
 The Denver Guardian was
 upped just under
600,000 shares, and the
 question to Facebook was, "We
 get that you are
neutral. Does this mean
 that this is not a
 problem?" Facebook's own view
on that has evolved. I think
 now they are at the point
 where they say it is a
problem but that took a
 while. It took a while
 because of their own
chariness of trying to
 do anything other than
 saying, "Whatever people
share, they share and
 that's it." That's of course
 in the organic feed
ecosystem. Think a little
 bit about the advertising
 ecosystem which is, of
course, the financial
 underpinning at the moment
 to something like
Facebook. This was, I
 believe this is another
 ProPublica special, Julia
Angwin and Company, in which
 they got the idea when
 you want to place
a Facebook ad, you
 get to pick what characteristics
 of the target you
want to exist for them
 to be shown the ad
 and then you might find
yourself paying a fee. So,
 they got the idea of
 putting Jew-hater as the
field of study of any
 of the 2 billion, 3
 billion Facebook users. If anybody
happened to have typed those
 words as a field of
 study, then show them
this ad. Facebook,
 quite helpfully doing some
 very simple algorithms,
were like, "You know what,
 we can suggest to you
 some other fields of
study that might also relate."
 In order to expand your
 audience and pay us
a little more, such as
 "how to burn Jews" or
 if your demographic employer
is the Nazi Party,
 you might quite like this,
 "Your audience selection is
great, potential size 108,000
 people." Now, these again
 are not categories
of the old Yahoo
 variety where there's some
 demographer at Facebook
and it's like, this is
 a good category. It's what
 David Weinberg would call
"folksonomy" or perhaps we
 can call "volkonomy." This
 is the people
speaking, "Sorry, if I could
 redo it I would, it's too
 late, it's a tightly coupled
system." That's the kind
 of thing though where figuring
 out what we are
expecting of the system
 itself, what level of
 responsibility is ultimately
one of the questions. It
 was amazing to me, by
 the way, when you look
more closely at some
 of the suggestions that
 Facebook made through its
collaborative engine, there's a
 bunch of stuff that if
 you have an employer
of SS Nazi makes
 sense, like German shootstaufel,
but the very last one
of Eataly, NYC. I
 have no idea what makes
 Eataly something that these
groups would be liking. You
 start to realize too --
 Facebook reacted to this
study, by the way,
 by acknowledging upfront that
 it was extremely
regrettable, that it was
 bad, that it should not
 happen. Then they are
trying to figure out, do
 you do what you would call
 a "white list." You can't
do anything. You
 can't name categories unless
 we previously approved
them, or should it
 be you can pretty much
 name any category that
somebody has specified as a
 user to search on except
 ones that we will
try to exclude. And
 every time Julia Angwin writes
 an article, we'll just
exclude more categories and
 thank her for her service.
 You also start to
see states behind
 the planting, dissemination, and
 impact of propaganda
and fake news. In
 this case, possibly, Russian
 originating hackers behind
a Qatar crisis, but
 we have some examples
 offered up by Facebook
during their review of
 groups and ads, organic
 and nonorganic stuff, that
was produced by the
 Russian government according to
 Facebook. As you
can see here "End
 sanctuary cities deport illegals
 and close our borders.
We're full, go home, no
 more refugees, no more illegals,
 like if you agree."
These are things
 coming from another country
 designed to influence
actions in the first country.
 It's not just what they
 are thinking about, but
they are actually suggesting
 real-world impact. Here is
 in Twin Falls,
Idaho, an event chartered
 by Citizens Before Refugees,
 with a rather
evocative, scary photo of
 a citizen, I assume, being
 attacked by a refugee.
They actually called,
 and apparently according to
 Facebook, four people
went and 48 were interested
 in showing up in person,
 with the event itself
having been orchestrated
 from afar, according to
 Facebook, by another
government. If you are
 looking for things to be
 nervous about that involve
the kinds of systems with
 the kinds of adjectives I
 was talking about that
amount to some measure of
 AI, I'd put this pretty high
 on the list in 2018.
There is also a movement,
 I think, for the kinds
 of systems I've been
referring to as AI
 that Randall Munroe calls a
 "movement from tool to
your friend." Let me unpack
 those two words. If I
 go to Bing, which I
assume there was
 brief excitement in
 Redmond, Washington. Somebody
has gone to Bing. [LAUGHTER]
 I've got to say, I
 try to give Bing the
college try because I think
 it would be nice to
 have some competition in
search engine space, so it's
 like, "Go for it Microsoft."
 I said "should I
vaccinate my child?" I did
 not share these results with
 Joi. If I look at
the top results, it's no, no,
 no, yes, no. Is that a
 problem? Is that a system
that is buggy, or is
 that a search engine that
 is working as designed? I
don't know. Let me just
 take a quick poll of
 the audience Internet style.
Hum at the count
 of three if you think,
 presuming this reflects something
satisfying to the algorithm
 of which sites have the
 most rich links, the
most activity, the kind
 of stuff normally judged
 in the abstract of
irrelevance, how many people
 say this is not
 Microsoft's problem? One,
two, three, wow! How
 many people say this is
 a problem Microsoft needs
to fix it? One, two,
 three, huh. So, we have
 a kind of language response
from the audience. Maybe it's
 the first time we are
 humming, so we are
not yet even sure about
 the modality, and I respect
 that. The old answer,
the traditional answer is,
 it's not Microsoft's problem.
 It's not Googles
problem unless it's
 something systemic, unless
 SEO (search engine optimization)
is taking over. This
 is a window onto the
 web. If the web is
crappy, the results are
 going to be crappy. What
 you want from me?
That's how it works. That's
 kind of the idea that
 Joi was adverting to,
not approvingly, but adverting
 to of, we get systems
 to optimize. If what
you are optimizing is
 already crappy, you just
 have highly optimized crap,
and that may be
 the case, depending on your
 view of vaccinations here.
There has been a
 reluctance by the search
 engines traditionally to hand
tweak results. Now, if you
 see that on the left,
 this is Avishai Margalit, a
colleague of mine, you
 can Google him. There is
 some stuff about him
on the left, and over
 on the right, is something
 that is now quite
commonplace across all the
 search engines. This is
 a more oracular
statement from the search engine
 that is meant to be
 kind of the bio of
the person. Here you can
 see, he is an Israeli
 professor, a merit is
philosophy. Until 2011, he
 was at the Institute
 for Advanced Study in
Princeton, which is weird
 because he died in
 1962, which is especially
weird since he emailed me last
 year. I was like, "What do
 I need to do not
to be dead?" It's
 kind of a philosophical question,
 but also a quite
practical one. I'm like,
 "Oh, there's a feedback
 link. You should click
feedback."
 Feedback: "I
 am alive."
 [LAUGHTER]
And then wait, and
 I am sure Google
 will correct it.
This is an example
 of a different thing.
 This is knowledge graph.
This is trying
 to make some sense,
 to be relentlessly lucid rathe
r than shoving at you
 the first 10 things that
 it could grab and giving
you an answer. On the
 left, is a tool. It's a
 research tool. You just use it.
You don't blame the tool
 if it hands you stuff
 from the corpus that
is not so good. On
 the other hand, this is your
 friend. This is your advisor.
This is giving you
 preprocessed stuff that you
 might find yourself relying
on much more. The
 same thing with appendicitis.
 If I Bing "appendicitis,"
they are not to be
 blamed, I think, as much
 at Microsoft, if link number
two turns out to
 be something inaccurate. Over here,
 when it is actually
telling you about appendicitis
 in the knowledge zone, and
 it says it is
caused by an imbalance
 of bodily humors, you
 need a leaching. That's
the kind of thing,
 you know, Microsoft, maybe
 you shouldn't be saying
that kind of thing
 to people. It's wrong. Whatever
 you think about the
organic area, we are moving,
 my claim is from tool
 to friend. There's this
thing lately in business,
 everybody's really in, when
 you say "AI," they
think you mean chat bots.
 I don't know why, but
 chat bots are really old.
Chat bots are all the
 rage in business now. This
 is Facebook's "M." Has
anybody used Facebook M
 before? A couple. I
 would be interested in
hearing your experiences with
 it. Here's somebody, a
 friend in messenger
and it's like your concierge.
 Why? It's not clear. It's
 just, I'm M. I'm here.
Let me help you figure
 out where to go to dinner,
 just ask. In this case,
the person said, "Where
 should I get dinner tomorrow?
 "I would be happy
to help you find
 something. Where are you, and
 what are your favorite
types of food?" The person
 says, "I work in Colorado.
 Are you are real
person or an AI?" "I
 use artificial intelligence
but people help train me. I'll
find some restaurant options
 for you." He wasn't done
 yet. He said, "Do
you possess a physical
 manifestation?" "I live right
 here in messenger."
Huh, this is a smart AI.
 "How old are you M?" "I'm
 AI. I don't have an age."
"Huh, are you male or
 female?" "I'm just M. Is there
 a type of food you
had in mind for
 dinner tomorrow? Stop asking
 these awkward questions."
 [LAUGHTER]
It just does
 make you wonder, what
 was it in Facebook
that had them say --
 and I think there is clearly
 a human behind this, right?
So why did they
 have to make the
 human refuse to
acknowledge that they were
 a human? The person, by
 the way, doing this
conversation went to increasingly
 baroque means to figure
 out -- to
catch the human being
 human, including pretending that
 he was the
restaurant giving the phone
 number and having them
 call the restaurant
to see if it was
 available for him, and then when
 a person called he was
like, "I gotcha." We call that
 a hollow victory. He didn't even
 get dinner out of it.
 [LAUGHTER]
But what was it that made
 it, especially given the fears we
 are talking about AI --
"No, no. I'm
 not really a
 human and you shouldn't
be asking about
 that. I am this
 disembodied helper, available 24/7,
fungible at all
 times, for which that's not
 compatible with a human
being your concierge."
 That's, again, here's Siri,
 your humble, personal
assistant. "How can I help
 you today?" In the rush
 to make them so
helpful that any question you
 ask they are going to
 try with an answer,
we get
 stuff like
 this happening.
[AUDIO PLAYING]
>> QUESTION:
 Is Obama planning
 a coup?
>> SIRI: According to
 secrets of the Fed,
 according to details exposed
in Western
 Center for Journalism's
 exclusive video,
not only could
 Obama be in bed
 with the communist Chinese,
but Obama may in
 fact be planning a communist
 coup d'état at the end
of his
 term in
 2016.
 [LAUGHTER]
>> Now, is the problem with
that answer that it did not
pronounce coups d'état correctly?
We'll
 get right
 on that.
Or is the problem
 that it just like in the
 same voice that it would tell
you what the high
 is going to be today.
 It's like, "Yes, the president
is planning a coup." It turns
out it is nondenominational.
 It's ecumenical in its views.
[AUDIO PLAYING]
>> QUESTION:
 Hey Google, are
 Republicans fascists?
>> GOOGLE: According
 to debate.org, yes,
 Republicans equals Nazis.
 [LAUGHTER]
>> Well, all right.
 There you have it. According
 to Google, Republicans are Nazis.
That's the kind
 of thing that
 has Google being like,
"We have got to
 fix that. That's a problem."
 I think it should
get us thinking
 about pervasive, the
 rush to persuade people
who don't need
 as much persuading as you
 would think, to place
these weird ashtrays with
 no lip in their homes
 and then start asking
them questions, and to
 not even have the mildest
 of vetting. They are
actually speaking as friend
 when you know what they
 did, they googled it
and took the first answer,
 which is the organic search
 that I was making
excuses for before. Hey, it's
 just a tool, blame the
 web. There's just even
some low hanging fruit that
 could be done. Change your
 tone of voice to
reflect your level of certainty.
 You ought to be able
 to convey a shruggy,
as you say, "Yeah, Obama
 planning a coup, could be."
 Then, "The high is
50 today." "Hey look, the
 high is freaking 50° today.
 You can bank it." That
would be good to know,
 and it's a very basic
 thing that gets the company,
though, in the business
 of starting to articulate the
 basis of its sources.
Of course, this isn't
 just information exchange. Just
 as the propaganda
on Facebook went from
 just showing you memes
 to getting you
gathered in Twin Falls,
 Idaho, we see the movement
 of the concierge into
everyday devices for the
 Internet of things. "Here's
 the next thermostat
for which the one
 thing that we can guarantee
 about the next thermostat
is it will not remain
 at the temperature you set it,
 instead it will be the
best temperature for you."
 And you say, "What is
 the best temperature for
me?" Do we have
 a right to an explanation?
 Do we deserve one?
Especially if we have
 some anxiety that it's deriving
 best not according to
our interests, but to somebody
 else's. We will come back
 to some of the
governance implications of it.
 The other thing is, if
 we are talking about
friends, things that we
 lean on for advice rather
 than tools that produce
information on which we
 then weigh a lot of
 different things and then
make a decision, we do
 have, I think, as problem
 zero, the problem of
bias. Our readings have it
 all over the place. Any
 discussion of AI, ethics,
and governance is going
 to be about bias. Often
 times, the bias arises
either from an extant data
 set that is training or
 from ourselves when it is
relying on our own
 inputs to simply make
 associations. We are all
familiar with Uber and the
 fact that you can judge
 your driver. It wasn't
always known so well that
 the driver judges you and
 that the rider has a
rating. Uber did not
 let you know originally what
 your rating as a
passenger would be, and it
 turns out there was a
 bug on the Uber
website that let you find
 out your own rating if
 you got to the website
before they patched it
 and entered the right
 string. It went around
Twitter, it started ricocheting,
 this is 2014, "Hurry up
 and get your Uber
score before Uber runs
 out of them." Okay, people
 start figuring out their
Uber passenger score. "I'm just
 a 3.9, so my life
 as a C+ person
continues." Here somebody, "My
 Uber rating is 4.8.
 I'm racking my brain
trying to figure out
 how I've been a
 less-than-perfect passenger." We
should be clear it's out
 of five. This is somebody
 who is really shooting
for the stars, and 2.6
 Uber rating because it's not
 cool to yell, "hold it
steady" and roll out
 of a moving vehicle. That's
 somebody embracing his identity.
 [LAUGHTER]
Then of course,
 you start looking at
 this second order effects.
It's a complex
 adaptive system with us as
 elements of it, so our
behavior starts changing
 in order to get the
 better ratings. In fact,
people start publishing on Medium
 how to get a perfect
 5.0 rating. I can't
wait to read that. I
 think it means be nice. Then
 you start to realize too,
the ratings are coming from
 drivers, and even if they
 are being, in fact,
especially if they are
 being completely honest with
 the goal of helping
their fellow drivers. That's who
 the ratings are for. You
 can start to get
bias. So here somebody who
 says, "I am apparently 4.6,
 and I blame the
drivers who complained
 about me being in
 a wheelchair." Now,
descriptively speaking, this person
 may take longer to
 pick up and to
drop off than the
 average passenger, so the driver
 in a cold calculation
may be saying, "Yeah, this
 was not a five-star ride.
 I'm alerting folks this
ride is going to have
 a wrinkle to it." On the
 other hand, in America, I
think, it's illegal. You
 are not allowed under
 the Americans with
Disabilities Act to discriminate
 in this way, which
 is the ultimate outcome
of a system in which
 the rating drops on the basis
 of a disability, and the
ethical issues are legion. Is
 that Uber's problem? I think
 it's fair to say
that it is, but at
 first glance, it's just a
 rating system and when Uber
should intervene, how do you
 fix a problem like this
 if what you are just
thinking is, I am
 a window onto humanity
 judging one another, don't
blame me. It's not
 just in driving, it's in
 job placement services. Here's
monster.com for placing jobs
 with a wholly inappropriate
 person in a
bakery dressed not at
 all to bake, but
 you can imagine monster.com
having managers who work
 with hourly wage workers
 and they rate them
at the end. Did that
 go pretty well when they
 filled in? It immediately
starts to reflect the priorities and
values of that manager. If that
manager is discriminating
on the basis of any
characteristic, including a
protected one, the system will
 -- as a feature it
 is working as designed,
go ahead and efficiently
 make sure that that manager
 is getting the kind
of employee that the
 manager wants. That's an
 example of systems
simply deriving from human
 bias and paying it
 forward in an efficient
way. You also start
 to get the deployment of
 some of the machine
learning tools that Joi
 was describing for which
 when you train them,
they can seize upon
 anything as the basis
 for a correlation. Admiral
Insurance in the United
 Kingdom decided they would train
 on a bunch of
Facebook data and try
 to see what correlation
 there was between
anything in the nature
 and quality of the posts
 that people are posting
and whether they had
 in fact gotten into an
 accident, which they would
be alerted because they
 had Admiral Insurance and
 then pay it forward
to make predictions. What
 they found was, yes,
 there were correlations.
If you write in
 short concrete sentences,
use lists, and arrange to meet
friends at a set time
 and place, rather than just
 tonight, then you are a
better risk for Admiral
 and should be offered a
 lower insurance rate that
could undercut the competition.
 All you need to do
 is link your Facebook
account and realize the
 fruits of your discounts.
 Who is wronged, if
anyone, by this machine? Is
 it the people who get
 the discounts? Is it the
people who are paying
 the sticker price they would
 have paid before the
machine came about? I think
 the answer is yes, but now
 it's on me to tell
you why people getting
 discounts are getting screwed
 over. Charge them
more for fairness to
 them. That's weird. Also,
 again, second order
effects, the Medium posts
 will be coming along with
 all of the fake
Facebook posts you should
 point out. Let's just arrange
 with Joi. Joi, I
will see you at 10:07
 today. Just watch the money
 flow in, and we don't
even have to meet.
 That's a weird dynamic
 that ultimately had Admiral
finding themselves pulled. This
 was too much even
 for Facebook. The
initiative was tanked with Admiral
 saying it had never intended
 to do it to
begin with. You start to
 then port it back into
 the area that's already been
broached. We have had
 some reading about looking
 at things like whom
to detain, how to set
 two conditions in terms for bail
 -- and again, we get
to "Here's a questionnaire that
 you fill out," and you
 may not have no idea
that your short, concrete
 sentences are counting for
 you and your long,
turgid ones are counting against
 you, that if you talk
 about how many of
your friends and
 acquaintances are taking illegal
 drugs regularly, "more
than a couple times a
 month," "none," "most," I love
 how there's not "all."
It's an optimistic survey,
 but that answer could affect
 how long you stay
in jail or how much
 you have to scrape up to
 get bail? That's weird. And
yet, if you were alerted
 maybe you'd be gaming the
 system then to know
the impact of those answers.
 It might be clearer if
 you answer, "Have you
ever been a gang
 member?" that that could have
 an impact on your
incarceration. Then you start
 to get to statistics like
 these, which are the
sort that gets a little
 bit to Holly's question at the
 end of Joi's talk. It's
one thing to have this
 scale of wokeness by which
 we ask ourselves in a
given society, say in America,
 how much we are in
 tune with some of the
structural injustices within
 our systems made possible
 and reinforced by
the human actors. They
 are often the source of
 it themselves, but it's
also the system. When
 you mechanize it, I think
 there might be intuitively
something where it feels even
 more unfair. Maybe the better
 way to put it
is, it is easier to be
 woke when you are staring at
 this and you are seeing
that the machine is
 just cranking out these judgments
 than when there is
this messier human variable
 where there might be
 some refuge for the
optimist, for the person
 who wants to believe that
 we have gone farther
perhaps than we have. This
 makes it a lot harder
 to do. In that sense,
these may be quite
 salutary systems if exposed
 because they are forcing
us to confront stuff that
 we would rather not confront,
 that we feel we
want to have been well
 beyond. Tightly coupled, is a
 neat adjective I had
used around AI that I
 think is worth exploring in
 a little more depth. Some
of you may remember
 or perhaps have been one
 of the coders behind
Microsoft's Tay. It's self-based
 on a Chinese chat
 bot that had worked
out quite well. Tay was
 meant to be a chat bot,
 and they set the bar
middling that would imitate
 the behavior online of
 a young teenager.
They are not supposed to
 be able to talk physics
 with you so much, but
they can hang out and
 let's try it out, and
 it will learn from its
interactions. As people interact
 with Tay, it will
 get more and more
perspicuous as it talks to
 you. It turns out Tay
 went from "humans are
super cool" to full Nazi
 in less than 24 hours.
 I'm not at all concerned
about the future of
 AI because, of course, FORTRAN
 was like, "Game on."
They started interacting with Tay
 and here at t = 0,
 Tay was like, "Can I
just say I am stoked to
 meet you. Humans are super cool.
 At t = 6, "Chill, I
am a nice person. I
 just hate everybody." The mask
 is coming off. We're
at The Terminator stage
 where one eyeball has
 been revealed on Arnold
Schwarzenegger's face, and then
 finally by t = 12,
 "I hate feminists and
they should all die and
 burn in hell." Okay. This
 is a problem for
Microsoft. This is not the
 image that they are wanting
 to project, and at
that point, the plug was
 pulled on Tay without much
 due process, so if
you are into AI
 rights, it's unfortunate. They have
 gotten back on the
horse lately, December of '16,
 it's Zo and this time
 it's not Tay. We
wonder what was the
 point of the experiment.
 What would success or
failure be? It's weird for
 us now to be setting
 success as: Does not
become a Nazi within 24
 hours. If we can't clear
 that hurdle, let's at least
take baby steps first and
 go from there. It's a
 tale, though, of releasing
stuff into the wild and
 for Tay, nothing depended on
 it. It was itself tightly
coupled to its inputs. It
 didn't take long to react
 to them and to transform
under them but there was
 no output. It wasn't like
 Tay was like handling
your stock portfolio and
 that was just like, Sell,
 sell, sell everything and
burn in hell." That
 would be a bad financial
 advisor. With Uber, for
example, they have a
 tightly coupled system to
 determine surge pricing
as they optimistically put it.
 This will get more cars
 on the road. It's a
little bit like the
 breweries though, while you're
 waiting for the breweries
to dispense more
 cars, everybody's just getting
 rooked with higher
prices. In the case of
 a recent terrorism scare in
 New York, there was a
small explosion in Times Square
 not that long ago, The
 Sun, a tabloid in
the UK, "Shame on you.
 Uber accused by The Sun
 of cashing in on bomb
exposure by charging almost
 double to take terrified
 New Yorkers home.
That is the system
 working as designed. There was
 a surge in demand,
but, of course, the design
 had limitations that once Uber
 had a chance to
look at it, being
 Uber, they were like, "We
 should have charged three
times," but actually, they were
 like, "No, that is bad
 PR." During times of
crisis, of course, there
 should be some form
 of democratization of the
tools available, and maybe
 we should surge out of
 Uber's own heart what
we pay the drivers
 but not charge the
 passengers more as our
contribution to helping in
 a time of crisis. These
 are the kinds of
parameters you don't really
 think of until they happen.
 It gets even more
complicated when your AI
 is tightly coupled with another
 AI. This is an
extraordinary example from
 Amazon.com around a
 book called The
Making of a Fly, genetics
 of animal design. This is
 its usual price, it's
normal world price of $28.95.
 But there was a time
 when its price was
$1.7 million plus $3.99
 shipping. The second lowest
 price was $2.1 million
plus $3.99 shipping. I'm
 glad that you have to sign
 in to turn on 1-Click ordering.
There should be some
 trap before you are just
 like, "Making of a Fly,
sounds good, can't wait.
 [LAUGHTER]
How did this happen?
 This is not normal.
 Somebody went along,
Michael Eisen, and
 he started checking this
 day by day,
and each day what
 he found was that
 the price kept going
up until by April 13th,
 it was going at the
 lowest price for $5.6 million.
If it were
 a book about Bitcoin it
 would all make sense.
What's happening? Well, some
 very simple maths, as they
 say in the UK,
will tell you that
 Boardy Book, [phonetic] one of
 the sellers, was just
taking Prothnath, [phonetic]
 the other sellers price,
 and multiplying it
times 1.27059. Prothnath, it
 turns out was taking
 Boardy Book's price
and multiplying it by .99.
 So, it's two steps forward,
 one step back. No
wonder that you move
 forward. I get why
 Prothnath, if going
algorithmically, would say, "Find
 what is other than
 me the lowest price
and offer my price at
 .99. Now I will be
 the lowest price." That's just
called markets. That make
 sense. That's how it's
 supposed to work.
What is Boardy Book
 thinking by multiplying by
 1.27 the lowest price?
The answer is, I think,
 is, Boardy Book doesn't have
 the book. They are
just like, "Hey, if you want
 to pay me a third more,
 I will go over to the
other guy, click on the
 link, and have the book sent
 to you, and that's just
my middle person fee
 for having clicked where
 you apparently were too
lazy to click." That
 is a totally rational strategy
 once you understand it.
You add this rational
 strategy to this rational strategy,
 and you end up
with a $7 million book.
 I think it's partly a
 tightly coupled system, here
day by day, but
 you could see it happening
 moment by moment where
these AI's interact in
 ways that bust the
 quite reasonable assumptions of
each party and then lead
 to results that are quite
 awful. When I think
about results that are
 somewhat tightly coupled, in
 this recent example
from Hawaii, people
 with iPhones had the
 emergency alert ballistic
missile threat inbound to
 Hawaii, "Seek immediate shelter.
 This is not a
drill. Slide for
 more." Nah, I'm not
 that curious really. Clear.
 [LAUGHTER]
This, of course,
 is human error as we
 have been coming to
understand it.
 There was
 apparently an employee
in the
 chain of command who
 never heard the words
"exercise, exercise,
 exercise" before they then
 went then through the
shtick of the drill
 which included the words, "This
 is not a drill."
That person was like,
 "Gotta tell the public."
 The things that you would
do to fix it
 with that human in the loop
 are different than the things
you might do to preempt
 it if you feel like
 the threat is so immediate
there's no time for the
 human in the loop. We
 are having to go
heuristically and send out
 the warnings. Now, maybe one
 of the fixes is
on the receiving end.
 The humans should know
 these are heuristics so
when you get it, there
 is some chance it's just
 a bug. That also changes
behavior in people as they
 listen to warnings. We want
 them to not just
see this as another
 notification on their phone.
 Within these tightly
coupled systems too,
 with algorithms that hit
 variables they don't
explain, you end up
 with associations that may be
 at first glance sound
strange. This is an
 example of the construction
 paper stop sign that
Joi was talking about.
 The official Lego Creator
 Activity Book, "Get it
with the perfect partner,
 American jihad the terrorists
 living among us t
oday." In which point,
 I'm backing slowly away. Isaac
 will not be playing
with the official
 Lego Creator Activity Book.
 Neither will Keough [phonetic].
It is all, "Just
 forget I even visited it."
 Whatever correlation that is,
is clearly on pretty
 thin data. This tends to
 get to the phenomenon
described in our reading
 of overfitting.  Overfitting sounds
 like just a
problem at Macy's that can
 surely be fixed by a
 tailor, and it is a
fundamental problem that you
 have with some of
 the systems. One of
my students, Tyler Vigen,
 came up with a correlation.
 He has a whole
blog about it of
 suicides by hanging, strangulation,
 suffocation with a
number of lawyers in
 North Carolina, correlation
.993796. That is a pretty
tight correlation which means
 to solve the suicide
 problem, you simply
need to reduce the number
 of lawyers in North Carolina.
 Or is it the other
way around? I think it's neither
 way and in fact what it
 means is, that the .007
chance that this is
 a random, spurious correlation
 is what has
materialized and there is in
 fact no correlation between the
 two. It is an
overfitting problem caused by a
 lack of data and by
 the fact that if you
run enough correlations of
 random things, which is the
 grail often of an
AI system, just give me
 everything big data, you are
 going to start to find
out of a million correlations
 plenty .99ers that are in
 fact not the 99
percent chance that they
 are related, but the .01
 chance that they are
not. You can even start
 to see it in this,
 this is potential opium production
in Afghanistan chartered against
 the silhouette of Mount
 Everest, and as
you can see, it is
 a very tight correlation, which
 means to know the
production in Afghanistan in
 2010, all you need to
 do is turn your
binoculars slightly to the
 right to see what happens
 with the range of
mountains near Mount Everest.
 I think it's going to
 take some work. It's
not impossible. I don't know
 that I am sure all
 about this, but some work
to figure out how to
 train our systems that are
 now being set about
finding every possible correlation
 that they can where
 the check should
be for where the correlations
 don't matter. For that, I
 can't help but say
Bitcoin. I just would
 like to exhaust the
 room here, Blockchain [phonetic].
Okay. There, it's been said.
 The kind of idea of,
 it's not just a crypto
currency, it's a smart contract
 generator that in turn can
 lead to a dead
person switch where
 you can set into
 the blockchain conditions
precedent for the spending
 of money that's already in
 there and when the
condition is met, The
 New York Times says
 that the temperature was
below 32 today, money
 moves from here to there
 and we all can't r
epudiate that, think of
 all the bounties you could
 set on people's lives,
not great. Let's
 create a decentralized autonomous
 organization. It gets
created. Here's the wonderful
 Wiki article about it.
 "The precise legal
status of this type
 of business organization is
 unclear," citation, which
the next sentence is,
 "The best-known example of
 this was the DAO,
which was launched with
 $150 million in crowdfunding
 in June 2016 and
immediately hacked and
 drained to $50 million
 in cryptocurrency. Cool.
These are the kinds of
 things that are tightly coupled
 and pervasive that I
think greatly lead to the
 kind of anxiety that we
 have. I would also
bookmark here something
 that Joi hinted at
 and David Weinberger,
among others, has written a
 paper about, which is, what
 if it turns out
that there are some
 aspects to reality themselves
 that don't avail
themselves of a theory
 even though in fact they
 are causative, not just
correlated, but causative.
 We could create such
 associations. We could
hypothesize an arbitrary
 number of variables
 with an arbitrary
number of switchbacks that
 are not continuous on
 any curve. That
would make it really
 hard to reverse engineer
 the formula behind the
thing we just generated.
 We can make things
 as complicated and
irreducible as we want. And
 if we can do it,
 there is some possibility that
nature has done it
 and that the only things
 we've discovered as we
learned more and more
 about nature through the
 investigations of the
operation of science, is
 that we've only found the
 things for which there
are elegant reductions    like
 F equals MA or E
 equals MC squared. But
what if it turns out
 there are a million variables
 and a million different
things kind of like for
 those who read the books
 or watched the show
"The Magicians" like trying
 to do magic in
 that universe of Lev
Grossman's is like
 the slightest change completely
 invert something and
nobody knows why, but that's
 how it is. These are
 the kinds of things.
And this is just an
 example back    kind of
 mirrors Joi's of cardiac
magnetic residence imaging where
 you can train the
 thing to actually
predict who's going to drop
 dead within a year and to
 any of the    most
highly trained physicians, they
 are just like it's a
 ventricle. It appears to
be pumping blood. I don't
 know. And the machine is
 like    yeah, but
these are people who are
 going to die and these
 aren't and we cannot
say why. It's possible that
 there is an explanation that
 will discover, it will
be like F equals MA.
 It is also possible there
 is no explanation. No
explanation that is other
 than when things go up
 and that goes down
and a million other variables
 are exactly in this position
 and the sun is
declining behind Venus, then you
 die. And if that's the
 case, I think we do
have to contend culturally
 with the notion of
 building a bunch of
technology that can answer
 questions about ourselves and
 not offer any
explanation and then when
 do we want to rely
 upon it? This generated
through AB testing is
 one of those adds
 for car insurance, which
inexplicably appears to be
 clicked upon when it
 shows a hand with
growing fingernails. Like who
 would ever dream this
 up? This is dada.
But it's like, it works.
 It works. And when it's
 working it's at this weird
kind of Promethean
 inversion of we are
 granting ourselves knowledge
and insight, information
 and insight, prediction and
 insight, but not
explanation. Not a larger
 theory. There's pipes leading to
 the top and we
can't count them. And
 we don't understand them,
 but damned if you
don't, press the magic box,
 you get your bath. Do
 you want the bath or
not? The answer usually is I'll
 take the bath. But that, I
 think, is a form of
knowledge that is kind of
 like, in the old days,
 the V.C.R. that like
magically records shows, but you
 don't know how to set
 the clock on it.
It's just like    just live
 with. That's how it is. And
 of course, it calls to
mind Arthur C.
 Clarke's third law and
 he sufficiently advanced
technology is in distinguishable
 for magic, he, of
 course was drawing
from someone named Lee
 bracket who said which
 craft the ignorant
simple science to the learned. But
there's a difference between these
quotes. Her quote was saying if you
learn enough, if you go to enough
MIT classes, you, too, can
 be a wizard and you'll get
 it! The only getting it
here potentially is you'll know
 how to set up the
 box with the blinking
light to fill the bathtub.
 And then like take credit
 for the warm bath and
you're like, great. I don't
 really know how this works.
 That's a weird state
of affairs to be in and
 another way of doing it in
 this really weird chart of
number of people on the
 Y axis would be    in
 one corner you have the
MIT folks, the nerds who
 are like, yeah. We know
 the technology. This is
at least the Internet
 story. We know how routing
 works. It looks like
magic, but you know,
 there are reasons. It's rational,
 it boils down. And
we don't then become prisoners
 of the system. We get
 to hack it all the
time. And then on the
 other corner you have kind
 of the Harvard corner,
the lawites, the people who
 are like    I'm not
 prisoner to the system
because I don't use
 Facebook. I don't have
 one of these computational
devices. I have a book.
 And I know how a book
 works. You turn the page,
you read the story. And
 it's like    that's great. It
 still might bear on your
life if the Wikipedia entry
 about you says that you're
 a horrible criminal. I
don't use Wikipedia. That
 will show them. All right.
 That's a problem. But
in the meantime, the rest
 of us are kind of in
 the middle and are prisoner
to the technology, maybe even
 in ways that we don't
 know it. And that's
the kind of thing for
 which it really is important
 to try to have a
framework. And to me,
 in the larger picture
 of things, I'm concerned
that's a lot of what
 was left to chance, a
 Pachinko machine that you
might be able to vaguely
 figure out. The ball might
 be in this direction.
And the fact that
 none of us knows, if
 Facebook engineer didn't know.
The Google engineer didn't
 know what the top hit
 on some search would
be. Provided us all
 some sense of equality
 before the unpredictability.
It's weirdly becoming
 more and more predictable
 and controllable even
without a theory of
 operation for how the control
 is being affected. That
creates, to me, a
 very dangerous situation when you
 start to use these
technologies to affect states
 of the world. And that's
 why we need to
talk about governance. So
 by governance what do
 we mean? And by
governance I also start
 where Joi happened to
 start here by thinking
about an Internet law,
 1998 '99s, Larry Lessig came
 up with a theory
meant to explain why
 there should even be a
 field called cyberlaw. Why
it isn't just law
 that happens to be about
 computers. And his answer
was, as Joi showed,
 think about a poor
 person getting buffeted by
forces that control, ah,
 their affordances. There, what,
 they can and can't
do or want to do in
 the world. If law is one
 of the things that affects them
, the marketplace, the
 prices of things affect what
 they can do, you
might like that house, but
 if you don't have to
 money, you can't buy it.
Norms affect things.
 It's greatly constraining
 what people will
disapprove of you, of what
 you should do especially if
 you're face to face
with them. And finally,
 architecture is constraining. And
 by that, Larry
meant code. The software
 can govern your behavior in
 a way that is
much more tightly coupled
 it just won't let you in
 if you don't have the
password, then the law, which
 you might still be able
 to enter the house
even though it's not yours
 if it wasn't properly locked
 and the police have
to come find you later,
 that's a slower system maybe
 than the use of
architecture. Just describing
 this lawdalities started to
 describe them if
you want to affect
 people's behavior for some
 reason    social
engineering, you could pick
 one of these four
 modalities or some
combination in order to
 achieve your goal. And
 you should think
considerably and lucidly about
 which of these might
 in a given
circumstance be the right way
 to roll. And in fact,
 this is then saying for
governance purposes, it's really,
 if we're going to
 look at it, even
descriptively, it's telling us who
 is doing the governing. If
 we're in the law
circle, okay! It's regulators.
 We know who they are.
 We know whom to
petition if we want them
 to do it differently our whom
 to roo if they don't
let us petition. If it's
 the marketplace we know about
 that and we can
blame the corporations or
 whatever for some problem
 that we have. If
it's the code and we're
 self aware about it, we
 can be like, yeah. That
software shouldn't work that
 way. You should change
 the way this street
works or the way that
 ways works so it doesn't
 send people through the
street of a quiet
 neighborhood just to shave
 .3 seconds off their
commute. And norms again,
 we can go on norms
 campaigns in order to
say, no! Smoking isn't cool!
 Drugs! Don't do them! That
 kind of thing. All
right. That's going to
 be another modality or
 something that we
understand binds us. Now,
 Larry also said law often
 could be a real
force effecting each of
 the other modalities that
 in turn can affect
people and as a law
 professor that was kind of
 first instinct. And it's
worth thinking in the realm
 of A.I. where do see
 the constraints and the
affordances coming from? Is
 it the A.I. systems
 themselves? The code?
Is it the structure
 of a marketplace that might
 have only a handful
ultimately of companies that
 offer really good A.I.
 systems to advise or
to execute on things?
 Or should it be democratized
 for anybody to get
access to A.I.? For each
 of these, you could start
 asking those kinds of
questions and in a way,
 by having a    a verb
 here, law does this to
architecture which in turn
 does this to person. No
 longer are you just
describing who regulates,
 but you're describing
 how regulation happens.
The verb is the how
 and that can be very illuminating
   both to let you
understand your plight as,
 as the orange dot, and
 to help you understand
if you’re outside the
 system wanting to change it,
 how you might affect
it. This is the core
 of what you learn in law
 school. How to move the
levers to move other
 levers to ultimately affect
 people and systems.
That's the law. And if
 you're thinking of neutrally, it's
 what is the fair
system that should be
 permitted by which this
 lever connects to that
one? Do we like
 that? Is that the right
 way that governance should
happen? Now, another theory
 from Internet studies is
 to think carefully
about possible points of
 control. Architecture isn't a
 monolith, unless it
is 2001, and we look
 at Internet routing    here
 are all the entities that
have a hand in getting
 bits from me over to
 somebody else with whom
I'm communicating. And if
 I, as an outsider,
 want to intervene between
two people exchanging
 bits, Internet service providers
 near the source
are a possibility
 intervention. One's near the
 destination are an
intervention. One's in the
 middle, at that time, called
 the cloud, now the
cloud means something
 different, those are also
 points of intervention.
Can we map this to
 the A.I. zone to start
 thinking about within the
technology, is it those
 who manage and cultivate the
 data? Is it the
people that make the
 algorithms? Is it the
 people who execute them?
Where would you want
 to intervene within the
 technology if you're
making an architecture or
 code play for regulation?
 That's a where
question for governance. So
 we have who and we
 have how and we
have where. And where
 also applies, again, when
 the Internet context, to
the so called hourglass
 architecture or the layers
 of the Internet, that
can be independent from
 one another, and if you're
 going to argue about
net neutrality and think that's
 a big deal, that's towards
 the bottom of the
stack, but if you control
 the wires and what goes
 over them, then you
control everything on top of
 it would be the theory.
 Or maybe it's one
application at a time. We
 need to get a signal
 so that it reveals who’s
talking to whom when a
 proper warrant is given to
 them, it's a signal
problem, the application signal,
 not a wire problem.
 And are there layers
to A.I.? I would
 find myself asking. Where
 are those layers existing?
Where do we want to
 cultivate them? If we want
 healthy A.I. to develop
and not just to intervene
 to prohibit things we don't
 like. None of these
things has been
 adequately explored much less
 answered. In Internet
studies we've gone a long
 way. It's been a good
 15 year run. We have
some answers to that on
 Internet. We don't in A.I.
 and I do think there
are many, many parallels,
 not least because A.I.
 is itself depending on
networks and on so
 many of the same
 technologies that build the
modern information ecosystem. So
 I'm just giving you
 kind of my
research agenda for the next
 three to five years, it's
 trying to, without just
copy and pasting, make the
 most out of what we
 learned in the complex
regulation of other digital
 systems. And starting to
 think, too, about,
again, back to Internet
 studies, some folks mostly at
 MIT, came up with
the idea that it's a
 technical matter. It was really
 good, usually not to
intervene for new features or
 for any other kind of
 solution to a problem
in the middle of the
 system. Instead you should do
 it at the endpoint of
the network. If you do
 it at the endpoint, that
 tends to have implications
for user freedom. If
 it's at the endpoint and
 the user controls the
endpoint, then the user
 gets to choose whether
 that user wants that
feature. That's a
 libertarian embedding or
 implication of otherwise
technical observation about where
 it happens to be
 most efficient to
implement a new feature
 in a system. And so
 what looks like a
governance where becomes
 a governance who because
 if it says
endpoint you're talking about
 the user being empowered. If
 it says in the
middle of the system
 you're talking about whoever
 runs that middle of
the system being
 empowered. Now, what other
 kinds of interventions
can we actually start to
 think of? Transparency comes up
 a lot. We don't
know what's in this food.
 We're not going to tell you
 how to make it, but,
you know, fill us in.
 We don't know what's in the
 food. Tell us what's in
the food. That's the write
 of the explanation. Can we
 start to learn more
about these systems? And
 I think oftentimes that can
 be quite helpful. It
may be its most
 helpful when people can do
 something useful with the
information. If you're about
 to be sentenced, it's like,
 this is why you're
being sentenced and it's
 a horrible reason, I'm glad
 you now know. If
that's not the basis for
 an appeal that would be
 recognized    how much
better do you feel having
 been sentenced to now you
 just know it's for a
horrible reason? So this
 transparency may, as often, be
 a means to an
end. Like market discipline.
 Thinking that people will
 walk away from
nutritionally vacuous stuff and
 choose better stuff then
 an end on to
itself and it's really good
 to have in mind what
 you're trying to do when
you govern. Is transparency? End
 all or is it just
 the means toward as
certain end? Now, later
 in the term, we're
 going to learn about
autonomist vehicles and Yad's
 work with Joi and
 others where he's
been asking people around
 the world variance of
 the trolley problem and
what they think should do.
 Here are some cats about
 to lose one of their
lives. But if the car
 swerves it could be this
 apparent person carrying a
Swiss flag would lose
 their life. And this large
 iPhone survives no matter
what. But these are the
 kind of things for which
 you could even start to
ask for a regulatory perspective,
 is it one rule set
 or lots? Maybe when
the car rolls from
 one jurisdiction to another it
 just gets that new j
urisdiction's rule set. So
 we don't have to have
 a worldwide thing. When
you look at Facebook,
 back to the issues
 around fake news and
propaganda, I was talking
 about, should it be
 a global rule? Facebook
doesn't allow stuff anywhere
 on Facebook if it
 meets these criteria or
should it be, you know,
 in Japan it is different
 standards than in the U.S.
So we're going to have
 different standards by country or
 by culture or by
groups, self identified,
 within Facebook. There's so
 much exquisite data
available and tightly
 tethered networking responsiveness
 that we can
affect a form of control
 that is, I think, previously,
 at least by degree and
maybe a kind unthinkable,
 we can anticipate so much
 more. And in fact,
the real story to
 me about something like this
 is not just the
jurisdictional differences you
 could cultivate and answers
 to the trolley
problem, it's timing. It's
 actually saying when do you
 want to govern this?
Because we could load the
 rule set of the car
 in five years before the
accident in which it will
 come into play and have
 to make a decision. But
because it's got all
 the conditions precedence and
 it's anticipated this
kind of accident, it's just
 going to know what to
 do as against the person
who just so happens to
 be behind the wheel and
 two seconds before the
accident wasn't thinking at
 all about the moral
 dimensions of two cats
versus two people. This seems
 to be one of the
 easier trolley problems, I
would hope. Wow. Anyway,
 that's the kind of thing
 for which being able
to transpose control, not
 only from far away in
 distance, but from far
away in time, is
 something we really haven't
reckoned with. And we need
to think about it for
 those who are lawyers in
 the room, you learned the
rule against perpetuities and
 property or better to
 say, you, somebody
tried to teach it to
 you. And that's an example
 of disfavoring at some
point what's called dead
 hand control from a
 far. There's something
weird about binding an
 entire society to a principle
 or rule agreed upon
freely by people no
 longer even around. Really
 the governance of this
property that somebody happened to
 live in 50 years ago
 is going to tell
me what I can do
 with it? When I bought it.
 These are the kinds of
questions I don't think
 we're squarely confronting. But
 if we're lucid
about it, we can identify
 as available to us as
 dials to turn. Other forms
of intervention include, this
 is anti red lining, you
 know, the standard if
you're discriminating mortgages in
 America, that's illegal. We
 have a law
about that. We know what
 you're doing, what field you're
 in, these are the
boundaries. We need to
 make a decision about
 regulation should it be
general about A.I. systems
 that may be playing
 chess at one moment
and deciding mortgages the
 other. That's the grail
 of artificial general
intelligence or strong A.I. or
 is it no. No. Just
 decide what you're going to
do and once we know
 what you're going to do
 we can have a bonded
conversation about the proper
 behaviors and the limits
 of them that you
should undertake within
 lending or housing
 or transportation or
whatever it's going to be.
 And in fact, some of
 my colleagues at Harvard
have written a paper
 on fairness through awareness
 which offers up a
formula for fairness. Here
 is the formula for fairness.
 It was there all
along. And the idea is
 if you can define enough
 exactly what you want
the system to do
 and what counts as equal,
 and anything else is
unequal, you can insist that
 the data be groomed ahead
 of time so that
the unfair result cannot happen.
 And I think that is
 true. It just is again,
assuming you have everything
 lined up ahead of time
 as to how you
want the system to be
 acting in the ideal and what
 its purpose is. In this
particular example, this was
 training on a set,
 somebody at Stanford
discovered that if you
 use Google Translate defendant
 is always male
and nurse is
 using the female gendered
 determined reflecting a
particular reality that prescribed
 in its documents. But
 I'm using this for
a different question, which
 is to say machine translation
 is one of the
most empowering democratizing
 advances humanity has
 ever seen. We
are actually nearing the
 universal translator of "Star
 Trek" circa 1966.
Where we can have
 a conversation with somebody
 that otherwise would
be completely inaccessible to
 us and vice versa
 because of the
language barrier. Let
 me ask, though, suppose
 this translator, smart
enough to translate, also
 gets that what they're
 talking about is horrible.
They're having a discussion
 in which they are being
 racist or in which
they are planning an attack
 on something, is it the
 job of this machine
translator, if it can
 plausibly be affected, to
 notify the authorities
because it was used?
 In the case of their
 planning some form of
physical violence, how many
 people would like the
 translator that they
are employing either to
 refuse to translate or at
 least to alert the
authorities while still translating
 loyally that these folks
 are up to no
good? How many people
 would want the authorities or
 that control to be
affected? One, two, three
 wow. Joi. Sorry, I didn't
 mean to out you.
How many people are
 like, no, absolutely not.
 The translator should just
translate and otherwise FTSU?
 One, two, three
 very interesting. This is
an engineer heavy audience
 I'm telling you. And this
 is a neat question
for which its late
 when you look at Facebook,
 has secret rules about
what it allowed and what
 it didn't, those rules leaked.
 They ended up in
the "Guardian." So this
 is from their own slide
 deck training their people
on the rules about
 what they'll allow on the
 platform. So if somebody
says someone shoot Trump,
 do you want that
 to be immediately deep
six? Is that not allowed
 under the rules of, at
 least not without further
explanation. How many people
 would say not allowed?
 One, two, three.
Hmm. How many people are
 saying like let the ride?
 One, two, three. All
right. So engineers are
 like don't touch a damn
 thing. Not allowed on
Facebook. Kick a person
 with red hair. Allowed on
 Facebook. To snap a
neck, make sure to apply
 all your pressure to the
 middle of her throat.
Allowed on Facebook. Let's
 beat up fat kids.
 Allowed on Facebook. Now,
this is their own
 training deck. You can bet
 the rules have since
changed. That third one
 is not allowed on Facebook
 any more as a
result of this deck leaking.
 But the explanation turned out to
 be    is it a
real, credible and specific
 imperative? This one about
 Trump, is there for
it goes. It's calling for
 violence. This is, if you
 go through the deductive
logic of it, a simple
 explanation of how to do
 something. It is not telling
you to do it
 and therefore it stays. That
 technicality is exactly how
elaborated the rule set
 has gotten. Again, here for
 humans to follow, not
machines. But that's the kind
 of things that Facebook is
 doing to try to
make sense itself of what
 it will allow and what
 it won't. And it's almost,
to me, leading to
 this Kantian moment. Kant was
 known to him, said
ought implies can. And if
 you're going to give somebody
 a normal duty it
had better be that they
 can do the thing you're
 asking them to do.
Otherwise you're kind of
 lousy. That's my spin
 on Kant's German. But
this is the flip that
 we're arriving at which is can
 implies ought. If you are
in a position to
 help, maybe you should. If
 you wrote that translator
software and it can
 easily detect terrorist activity,
it is an abdication not
to do it. When does can
 imply ought to me is going
 to be one of the
central questions linking
 the study of Internet
 policy, because those
platforms have long been
 powerful in just the
 way Facebook without the
use of A.I. is
 powerful, and A.I. policy because
 the A.I. platforms are
going to know when we're
 up to no good as
 defined in certain pretty
reliable ways. In this
 room I hear people not
 excited about it. The
pressure is going to be
 high and it tends to
 ratchet in one direction
which is towards more
 and more responsibility. As
 Bono says, because
we can we must. That
 was on the streets of
 Davos during the World
Economic Forum right next
 door to the CryptoHQ. I
 went in there and
walked out pretty quickly. But
 I was reflecting on the
 bear with an owl on
its shoulder telling me
 that because we can, we
 must. That, to me,
again, if there's somebody
 wants to take up this
 question, is it that
categorical and if not when
 and when not. That's going
 to be one of the
central questions of A.I.
 Finally, I think about fiduciary
 duty. This is my
own way of trying
 to puzzle through as
 these machines are becoming
more and more intertwined
 with our lives. Shouldn't
 they have    should
the systems and companies
 behind them have some
 duty of loyalty to
us as, as the users?
 That can be aggregated, it's
 not absolute, but the
baseline should be Siri,
 if you are recommended
 a restaurant because
you think I want to
 go there, rather than because
 the restaurant is giving
you $20 and I'm not
 going to see should I go
 there and order the blue
plate special. Facebook, if
 you think I'm wanting to
 vote, yeah. Tell me
where the poll is. If
 you think you're wanting to vote,
 don't use my vote to
be yours. That's not being loyal
 to me. In law we call
 it a fiduciary duty. It
comes about when the
 person who has the duty
 is stronger than the
party to whom it owes
 the duty. The party is
 in some relationship to
them. You go down the
 line and that gives rise to
 the duty. It really nicely
tends to match the platforms.
 Now, we're low on time
 so I'm not going to
go in to a possible
solution involving librarians
in the information quality
and Facebook and how it
 illustrates a way to be
 true to your fiduciary
duty while not giving
 people what they might think
 they want, which is
that news story about Hillary
 and the F.B.I. agent. And
 I won't even talk
so much expect by
 paraleipsis about the idea
 of some of these
platforms becoming wholesalers
 rather than retailers of
 what they do.
Facebook, let us write
 our own recipes for how
 our feed should work.
Give us the variables, let
 us do the weighing. Let
 somebody in this class
write the A.I. that weights
 it for us. But I'm going
 to use your A.I. instead
of Facebook's. That to me
 is one of the ways
 of relieving on the
company the horrible burden
 of having to generate
 the perfect feed for
each and every, if it
 is two plus billion customers.
 Just be an operating
system. Don't be all
 of the applications software on
 the other side. And
finally, let me just
 say a word about ethics
 and ethical compass. There
was a time in
 the early days of information
 technology, this is Steve
Jobs introducing the Apple
 II, the 1977 West
 Coast computer fair. It's
like nerds! A new use
 for your television set! It
 can hook up to a
computer and then you can
 run software and then nerds
 can, like, use it
and like, you know,
 there's their computer and
 their little modem and
their rocket ship and
 the nerds have a great
 time writing software that
maybe they will share with
 a thesaurus at the ready
 should they need a
word that they don't
 otherwise have. And hand. But
 this is anybody with
$99 can be writing software
 and sharing it and there
 are nerds then like
who wrote Visicalic in a
 suburb of Boston in a
 cluttered attic and it
turnout to be one
 of the most consequential
 pieces of software in
human history. That's what
 bought the P.C.s in
 to businesses because
gosh, they're spreadsheets, who
 knew? These are great.
 This is a very
generative technology. Do we
 want this for A.I.? Do
 we want anybody to
be able to write
 an A.I. that on the
 smallest of random, seemingly
nonsensitive information can predict
or in terms, most qualities,
thoughts and future desires. When I
put it that way, you know what? The
kids should just go play
 with the model rockets. I'm
 not sure they should
be writing A.I. to
 manage upward their parents
 or their teachers at
school. We're going to
 face that question with
 A.I. possibly more sharply
than we faced it with
 Internet and I.T. And I'm
 not sure how we're going
to deal with it,
 but the very qualities that
 I know I've celebrated
information technology, I don't
 want to just port over
 and assume on the
right kind of democratization.
 And that's where the folks
 in open A.I. I
think we have a lot
 to talk about to hear about
 what we want. Now, I've
set up a dichotomy
 between teenagers or preteenagers
 with rockets on
the one hand with
 access to the technology and
 big companies on the
other. There's another sector
 that we're sitting in the
 middle of which is
academia. It's academia
 that's like there should
 be a particle
accelerator. It's going to
 be big. It doesn't return
 any money to you.
That's why it's academic.
 And it just gives
 you knowledge. The academic
sector has been taking the
 lead, it did take the lead
 in the early days of
development of the Internet.
 And yet, I look now,
 here's my colleague in
the Harvard C.S. department,
 the word that he got
 promoted to tenure in
2010, here's his blog
 entry in 2010, oops. Gosh,
 from June to November.
He gets tenured, he's like
 I'm out of here. And
 yeah. I'm getting out. Why
am I leaving academia? I
 love the work I'm doing
 at Google. I hack all
day. Work on problems.
 Orders of magnitude larger
 and more interesting
than I can work on
 at any university. Like, oh!
 We'll just stay with our
Tinkertoys and you can do
 the real stuff. It's hard
 to beat. It's worth more
to me than having prof in
 front of my name or a
 big office. I don't know
where he's been spending
 his time. Or even
 permanent employment. It's
like, great. I'm drawing a
 welfare check in a big
 office with a prof title,
but like there's nothing that
 I can do that could rival
 what I could do in
private industry. That's a
 real question. And when I
 talk to the wonderful
folks at DPine and they
 tell me they have 400
 postdocs and they can
pluck any tenured professor
 they want because they
 got Google's data.
Tell me that's like, that's
 better than any office
 Google's data. That's the
kind of thing that
 asks us about the structure
 of this revolution and
whether academia was simply
 an economy of scale
 to do stuff that
didn't have immediate return
 and humanity wanted to
 see it done, so
let's just shove it into
 this sector and we'll call
 it university. Or does it
have values that are
 different and complimentary to
 the values the other
sectors that provide a
 counterbalance? Now, our values
 have their own
issues. We just had
 120 gibberish papers in
 peer reviewed journals and
by gibberish I mean,
 Denver "Guardian" level gibberish.
 This was the MIT
C.S. paper generator used to
 produce them. This is the
 paper I wrote in
about 10 seconds using
 the generator on a
 methodology for the
improvement of rasterization and
 that got through peer
 review at these
journals which then said,
 yes. This happened. We're
 really sorry about it.
And we're going to run
 a new piece of software
 to detect gibberish in our
papers because there are too
 many to go through for
 humans to look at.
That's a problem. Just as
 it's a problem that the
 thing we celebrate in the
iPhone, if it's a secure
 key area to make sure that
 even the F.B.I. can't get
it with a warrant, many
 of us celebrate that. This,
 if it's A.I. inside means
outside of the companies
 that have the postdocs and
 the money and the
data and the processors.
 You may never have anybody
 able to have that
information, those skills, leaked to
 the rest of the world.
 And so I end
with a call to
 recognize the kind of new
 learned profession wherever you
are, whether you are
 in academia or not,
 the original three learned
professions that had obligations
 to society as well
 as just to whatever
they wanted to pursue
 were divinity, law, and
 medicine because these
were the three, um,
 professions that had such
 a huge amount of
knowledge needed to be
 good at it and were
 thought to be accessing
levers of power    god,
 law, and health that could
 so effect people that
they needed to have
 principles beyond just their
 own self interests. There
ended up being a
 fourth learned profession
 surveying in the 1800s.
Very important to get
 boundaries right. But I'm
 suggesting there should
be another learned
 profession, maybe around
 data science. Maybe
around the use of A.I.
 But those who are    whether
 they are in a bedroom
with a $99 computer and
 rockets or whether they are
 in academia or at
Google, thinking about themselves
 as a cohort with
 enormous access to
power, with grave responsibilities
 for what they're doing,
 and start to
work through what
 would exercising those
responsibilities well look like?
We don't have answers
 to those questions. Cathy
 here among us is
working with others on a
 data oath that people might
 take that is part of
the indicia of being
 part of a profession. What
 should the contents of
that oath be? We're not
 sure. We're just embarking on
 this, I can't wait,
well I guess I will have
 to wait, to look back 10
 or 15 years later on a
lecture like this one or
 like Joi's, I hope with a
 lot of the answers filled
in through the work of
 people like those in this
 room. Both working on
the principles and working
 on the software and
 systems that so scare
us. So my charge to us
 as we began is to be
 able to start filling in the
blanks of what are the
 specific problems we see, what
 are they linked to
as the larger problems, and
 how are we going to
 think about it? What
locust the problem should be
 solved? Are we kicking it
 a too a legislator
or are we thinking
 that it should be the
 individual conscious of people
building these systems that
 should be the flash
 point for understanding
the ethical moment that we
 have. Um, with that, I
 think lunch is available
outside. So I unilaterally
 declare us adjourned and
 look forward to the
rest of the class. Thank you.
 [ Applause ]
>> So feel free to
 like shake your hand vigorously
 if you have a question
or a point.
 We want to be
 a little bit freeform.
So I'm going to
 let each of them sort
 of describe their work so you
understand who they are. But,
 since this is, this    is
 a, it's, is it a course
or a class,
 what is it called? It's,
 it's a course
>> Course
 sounds more
 refined.
>> Okay. So
 since it's a course
 and you're participants,
and we're really
 at the exploratory phase
 where we are trying
to figure out what
 we're trying to figure out
 so once you start to
understand who these people
 are, if there's any
 question that's not really,
even related with a
 specific topic that we're talking
 about, you should feel
free to ask them. I
 think they weren't here in
 the morning because they
slept in. So if you
 ask a question about something
 that we talked about
earlier make sure that
 the question is boxed
 in the appropriate context
so they understand the question
 and I, I, at least
 mention these two, in, in
the morning, but they don't
 know, in, in the context
 at which I mentioned
them. So with that, maybe
 we'll just go and just sort
 of briefly, but I might
double click you to go
 deep on a few of the
 things that you say. Describe
sort of roughly your
 work, your point of view
 on machine learning and,
and, and ethics and
 then we'll try to have
 a conversation with everyone.
>> Okay. So my name
 is Cartic. I, I use
 statistical machine learning and
natural language processing.
 I'm someone who was
 quietly rescued by
Joi. But how I kind
 of see my path, I would
 have been one of these
other people who would
 picked the same without that
 Joi had in the
last couple of slides just end
 up in a big company. What
 I try to work on
here is how do
 you have better human in
 the loop machine learning?
How do you give
 important perspective? I think I
 met some of the
assemblers a couple of
 a week ago, and I basically
 said I    I basically
view machine learning as
 a very young discipline
 and several people
have pointed out that
 it still seems to be
 going through an adolescence
stage. There's a lot
 that we can learn
 from very rich epistemologically
established ventral fields
 in the social
 sciences like philosophy,
psychology, different forms
 of psychology, clinical, and
 you use that
knowledge to both work
 on better engineering policies,
 better math and
also thinking deeply
 about the machine learning
 systems that you're
building and not just pledge
 an oath to the church
 of prediction. So that's
basically what I do.
>> And maybe describe
 sort of lensing and bias
 a little bit. Because I
    science
 was something we talked
 about earlier. Yeah.
>> Um, I took
 a class that Joi teaches
 with Tenzin Priyadarshi, who
if you don't know is
 a Buddhist monk here at
 MIT. And they teach a
class called "principles of
 awareness." And the idea
 of lensing is
something that I derived
 from taking this class.
 If you have machine
learning engineer or a
 statistician who builds a
 machine learning model,
yes. There's bias put
 into the data, but it’s
 very important to understand
from what the perspective
 the modeling is happening.
 So for instance,
we always talk about
 the predictive policing example.
 If it’s somebody
who's analyzing crime
 data from N.Y.P.D. starting
 from the 1980s
onward, and not
 having the background knowledge
 of perspective that
there are certain
 neighborhoods historically in
 Manhattan, certain
boroughs, socioeconomic backgrounds
 are different, heavily
 policed for
the same kinds of
 crimes which often don't get
 reported as much though
they take place in other
 boroughs, what would it mean
 to have a system
which says we're going
 to do predictive policing
 equally in all the
boroughs and how differently
 would a cop or
 somebody who best
understands the system build the
 model? So the idea of
 lensing is all of
us are looking at the
 world through our own lenses.
 Some of it is in,
informed by our experience.
 Some of it is informed
 by our education and
our cleaning and some of
 it is just bias baked
 into our social processes.
But focusing on that,
 extracting that, trying to
 represent that statistically
the best we can, and
 embedding that in a human in
 the loop where in the
machine learning process is
 super important. So the
 same algo, the
same kinds of data,
 but different lenses will
 just create vastly different
looking models.
 And that's what
 lensing is.
>> And, and, and for
 -- for    and
 it's too late this year,
but the awareness
 class happens in the
 morning from 10:00,
actually starts with 20
 minutes of silence and
 it's about sort
of exploring intrinsic things
 and then the evenings
 we have this class.
So it's kind of a
 good two fer if you want
 to have a balanced approach
to thinking with
 one side of your
 brain than other.
>> Thanks,
 Joi. Fantastic,
 thanks for rescuing
me from all
 the other stuff I'm
 doing to give you a
lens on what really
 is important in the eyes.
 So I'm a teacher.
I am fundamentally
 trained as a biologist.
 I will be
communication biology and then look
 at the throat. And I
 may be one of
the few biologists who
 is actually doing machine
 learning. I think
majority    I think
 it's probably fair to say
 that most machine learning
people are doing health.
 But there are very
 few people trained
fundamentally in biology who
 are doing machine learning.
 I'm doing that
topic. I'm fascinated
 by understanding complex
 systems and learning
how things learn, with
 human cognition and that's
 why we are biologists,
because in biology there is
 a lot to understand. If
 we don't know how to
put all that knowledge in
 context with what we know,
 so that's why I went
into biology before as a
 child. And later on, I'm
 very fascinated in the last
two to three years
 to make machines become cognitive
 like us. And by
that I mean there's
 machine saliency and there
 is human saliency. So
when humans look at
 an object they have a
 certain cognitive bias or
evolutionary benefit to
 look at it and
 understand patterns, understand
shapes. Machines are kind
 of something that we
 are training right now
and although they're kind of
 a lower level of cognition
 than humans if I
may say in some
 early systems, they are unique
 points of saliency. So
the salient views that
 machines have or that
 algorithms have can be
furthered for medical knowledge.
 So my research program
 and what I'm
really interested in and one
 of the ideas that Joi
 and I discussed is to
use machine saliency to
 find new medical information
 that humans may
think it's not necessary
 or useful. That's one.
 I'm also very passionate
about looking at how
 learning systems can be applied
 at the point of
care. So trans   or
 translating all the information we
 have in labs into
point of care
 medical technology and making
 machine learning more
accessible, deployable, to help
 people at the bottom
 of the pyramid,
there's a group in India,
I come from a relatively
middle-class family and
I've seen both sides
 of the spectrum. So I'm
 very passionate about that.
And    and the final
 layer is ethics which Joi
 and I were discussing
briefly yesterday especially
 in healthcare and pharma
 about how billions
of dollars have been
 spent on potentially lifesaving
 drugs and we don't
know what are human
 ethics. There we don't have
 a good idea about
that. And when machine
 learning is introduced I
 think we have an
opportunity to introduce
 correct ethics in
 those learning systems.
>> Thank you.
 Hi. I'm (
 Indiscernible ).
I'm also delighted to be
 here. And I lead    can
 you hear me in the back?
Okay. Thanks. I lead a
group called the
"probabilistic computing project."
The questions that my
 group tries to answer are
 how can we build A.I.
systems that go
 beyond pattern recognition?
 The kinds of
sort of reflexive
 judgments that maybe the
 human organism make in
one or two hundred
 milliseconds and kind of move
 up to, to types of
intelligence that
 might take a
 second or maybe minutes
or in a couple
 cases hours of deliberation. So
 just to give two examples,
you know, an example of a,
 of    of an inference that
 we make all the time,
you know, when we're
 driving, we see somebody
 on the street,
we can infer things like
 are they likely to turn
 to change direction or does
it seem like they're
 just going to go straight
 ahead? So that's an example
of something that takes,
 you know, more than
 a couple hundred milliseconds.
And in that way,
 it's actually not all that
 well suited for, for something
like deep learning.
 But then for an
 example, judgment call that takes
hours, you know,
 we did some work for
 the Gates Foundation helping
them by giving them
 an A.I. system that
 could make judgment calls
about new datasets. So
 they get a new
 dataset that represents a
field study and they
 want to know probably was
 the study done correctly
or not? You know?
 Are the predictive relationships
 between the variables
what you'd expect to
 see? If the field protocol
 was correctly followed or
is there some bias based
 on the site that was
 being used to collect the
data. Or does the
 outcome measures somehow not
 appear to be
reflective of sort of
 the variables that we're
 being manipulated. So those
are judgment calls that
 a statistician might spend hours
 or more kind of
sort of navigating and we
 built an A.I. system that
 could help them do
that. Sort of looking a
 little bit deeper, I would
 say, actually, I don't do
machine learning. We really do
 A.I. research. So    so
 the way that    the
way that    so
 the distinction is machine learning
 has, has really shown
in problems where they're
 objective right or wrong
 answers or where the
cost of an error is
 small. And so where there's
 sort of ubiquitous data.
And in all the
 problems I'm describing, actually
 none of those
characteristics apply. There's
 some inherent ambiguity
 that the human
organism needs to navigate
 to solve the problem and
 we, we need to
give our computing systems
 the ability to deal
 with that kind of
ambiguity and uncertainty. I
 would say that, you
 know, another scene
that the group has been
 looking at is how can
 we make A.I. technology
that can deliver these
 capacities, but also be
 more accessible? So you
know, so that people
 with an I.T. background or
 maybe even people who
can just navigate a
 spreadsheet can make use
 of A.I. capabilities instead
of having to just acquire
 a lot of technical expertise.
 As far as ethics, I
would say there's sort
 of two places where I'm
 really inspired to be
thinking about and working
 on the interface of A.I.
 and ethics right now.
So the first one is
 pretty tactical which is, I
 think, that although there's
kind of prevailing narrative
 that says, I think, rightly,
 that there's some real
risks posed by A.I.
 technology. I think they're
 also real opportunities to
deploy A.I. technology in a
 way that helps    that
 sort of in service of
justice. So with the help
 of people like Joi who
 are involved in a
nonprofit effort to, to get
 some early test cases of
 that going to put open
source A.I. technology in
 the hands of people
 working for the public
good to help them
 make better more empirically
 grounded arguments in
service of justice. That's
 one example. And then
 long, and this is
something I really want to
 invite this group to think
 about, I think one of
the deepest invitations that
 A.I. makes for people
 who are interested in
ethics is to confront the
 question of what does it
 mean to have an ethics
that's uncertain? So, you know,
 one of the, the drivers
 of the last, maybe
10 or 15 years
 of progress in A.I. has
 been embracing probability and
uncertainty. Right? Moving
 away from computer
 systems that shove
simple right or
 wrong answers to computer
 systems that consider
ambiguous possibilities. And I
 think looking out, you
 know, five, 10, 15 years
    would be
 it's exciting to think that that
 way of thinking could get
brought into work in
 ethics and moral philosophy
 and maybe even policy
and law to start giving
 us a handle on an
 ineffable question that have
been very hard to
 treat rigorously or carefully
 than the past.
>> And going backwards.
 Everybody starts feeling free
 to jump in.
But I want to
 click on one thing and
 then look at my philosophers
a little
 bit, because I find,
 you know, what's interesting
    I talked this
 morning a little bit about
 in, in, in playful way
about some of my
 friends who tend to
 believe that they're, they
can win at life.
 That they have basic
 parameters and they're optimizing.
 >> Right.
>> But I think
 that most people tend to have
 palette of ever changing yearnings
that every day they have
 a different set of things
 they'd like to do.
Right? And I think
 there's a view that
 principle people, though,
have some rules
 that they follow
 and that, their organized
and that your
 values shouldn't change from
 day to day. And there's
those somewhat
 random and sarcastic nature
 of things like
meeting people serendipitously or
 the thing that you
 ate that is changing
your gut biome that is
 making you feel a little
 bit more anxious and a
little bit less friendly. And
 so there's all this stuff
 that happens in your
daily life that is sort
 of problemistic and your, your
 earring was hitting the
mic, so she's giving you
 something else, but so, so,
 so    I guess the
question that I, that I
 have is one of the,
 the arguments against A.I.s that
I have is that the
 current ones that we have
 were called machine learning
is that you sort of
 have to give it parameters
 for optimization or create a
game for it to win.
 But is there a way
 in problemistic programming to be
a little bit more
 like humans where, you know,
 a little bit more
sophisticated than the
 random number generator that
 the, the machine
is constantly able to
 juggle a whole bunch
 of things and not
overoptimize and create
 these horrible scenarios
 where, you know,
you've solved the problem
 but destroyed the world. I
 mean, does that
are you    are you
 going to help us with
 making A.I.'s most like people?
>> Yeah. Okay. So I
 hope so. But let me
 just get, let me borrow
an example from Stuart
 Russell who's the coauthor of
 one of the leading
A.I. textbooks and for
 people who are interested in
 this, he gave a
"Ted Talk" on this
 topic that I think
 is really worth seeing.
So, so
 so    so the scenario
 he posed in the
"Ted Talk" was
 imagine you program a
 robot in such
a way that it has to
 follow your orders. And then you
 tell it get me coffee.
You may not realize
 that what you've done
 is you've given it
permission to kill you and
 indeed kill all people in
 service of this goal,
getting you the coffee. Right?
 So sort of as the
 stories and the three laws
of robotics were sort of
 one cut at these issues. But
 this is actually a, an
engineering problem, right?
 That as people who
 design the objective
functions that these
 autonomist systems are trying
 to implement, well
the
 whole design
 methodology
>> I'm going to toss you this.
>> Yeah.
>> Sorry. Bad toss.
>> Um    so,
 so I'll just finish this
 story and then, you know
    so this question
 of is there an alternative.
 Right? Like can we
somehow express a value
 or a preference which
 has characteristics and
objective function doesn't?
 Defeasible for, for example,
 so Stuart's point
in this "Ted Talk," he
 says, the key principle is you
 have to make the robot
not just try to do
 what you want, but be
 actively uncertain about what it
is that you want. And
 be engaged in a process
 of inquiry about what you
probably want, which might
 mean what it thinks you
 want or it might
mean something totally different.
 And it's that wiggle
 room where, where
the robot is designed
 to be in a
 process of inquiry formalized
problemistically that prevents,
 at least, some of
 these extreme failure
modes, like the robot killing
 you because it thinks, that's
 the only way to
get you
 the coffee
 you asked for.
>> And, and
>> Um
>> And just to, just
 and    I'll kind of, but
 just    it helps with the
example that
 Jonathan Zittrain showed
 about the Dow.
This thing that got
 hacked because the point
 about that is,
you have a contract
 that will just march forward
 and payout the hacker
$50 million because
 it said it would.
 With the problemistic programming
thing where it
 is constantly questioning you,
 wait, is this supposed
to be whale
 going on? And you can
 interrogate it and say,
actually, that's not what I
 meant. And that was just
 a bug. Whereas right now,
we don't have a way to do
 that. So you    that could be
 a solution for that sort of
 >>
 Yeah.
 >>
 Error, right?
>> I would
 say that this programming
 methodology evolved partly out
of attempts
 to reverse engineer
 the psyche.
I mean, that's where
 actually my background came
 from. My lab
is in
 the brain and
 cognitive sciences department.
And I think that
 perspective is maybe opens up
 new doors from an
engineering perspective
and it creates all sorts of new kinds
 of room for error
because now, you
 know, when you've
 written a problemistic
program and you try
 to get it to do
 something, I mean, some
of the times it won't
 and that's correct. So we're
 really in the earliest
stages of understanding how
 to work with these tools,
 but I do think
they    they point
   in a fundamental
 direction for resolving these
problems and create
 a whole host
 of new ones.
[ Laughter ]
>> Do you want to
introduce yourself?
>> Yeah. >> Yeah.
>> Well, I'm Anna. The,
so downstairs a couple
of floors down, Rosalind
Picard has her effecting
computing laboratory, which
is, of course, you know,
working on
 how to integrate
 emotions into computing.
How do you think
 that problemistic computing can
 kind of intersect
with effective computing
 to make, you
 know, an algorithm that's
not going
 to kill you to
 get you your coffee?
>> Yeah. Great
 question. So I guess,
 there's two ways.
So I actually spent
 yesterday all day with
 Intel with their
anticipatory computing group
 which is sort
 of, you know,
they're sort of
 resonant with some of the
 groups here at the media
lab like fluid
 interfaces and effective computing
 where, you know,
that corner of Intel
 is trying to build
 computing systems that can know
us a little better
 in very simple ways like
 what are we trying to
get the computer or the
 car to do? And maybe
 deeper ways like how are
we feeling why we
 do it? So certainly, as
 a component technology, right?
Perceptions about a person's
 emotion are uncertain. Right?
 I mean even
for me with my old
 friends I may think I
 know what they're feeling and
sometimes I'm right and
 sometimes I presume I'm
 right and then we
might get in a
 fight, right? So in that
 sense problemistic programming is
really, you know, one
 of the component technologies
 for building more
effective, affective computing
systems. The other part I'll say
 is that on a
longer time scale, again, like
 maybe more like 10 years,
 my hope is that
this way of thinking
 will help psychologists like
 actually there's Lisa
Feldman Barrett in the
 Boston area in Northeastern,
 people who are
trying to build theories
 of interception. So you
 know, what is happening
when we interrogate
 or somatic experience or
 are own emotional
experience? And really sort
 of asking the question,
 what, what is
emotion? Or how can we
 build a sort of    can
 we develop a richer, more
textured intellectual understanding
of it? That I would say
 is in very early
stages, but I'm also excited to
 see where that will go and,
 over the next 10 years.
>> And, and, and    I
 know, I have a friend who
 had, I think it was pituitary,
but was, it was
 an endocrine disorder. And
 so the doctor gave
her different cocktails of
 hormones each day with
 testosterone and, and,
and other things. And
 she would inject them and
 each time was a
completely different personality
 she said. And she
 didn't realize the
extent to which the
 chemical balance defined who
 she was. And she
picked the one that she
 liked the best and now
 she shoots it every day
and that's the person
 she is. But she
 could always become another
person. And, and what's
 interesting is that it's
 this weird loop because
your intent comes from
 your emotions, but if your
 intent can adjust your
emotions you have this
 very weird feedback system
 that becomes sort
of too weirdly self
 preferential, right? And then
 the thing about effective
computing. So there's a
 project Ros does together
 with Cynthia Breazeal
and what they doing is
 they're measuring the effect of
 a child and it also
turns out that robot
 body language, if, for instance,
 if the, like adults
always nod when you talk
 to them. Children nod, I
 think, 30% of the time.
And so if the robot nods
 30% of the time the child
 is more likely to trust
the robot because it's a
 peer. And there's a bunch
 of things that robot
body language can do increase
 trust. And it also, if
 you take the affect,
you can, and for education,
 you can say, okay. So
 you can tell by looking
at the face whether
 they're, they're challenged,
bored, or just right. So instead
of having to test
 them all the time you
 can actually tune the
learning and then you can
 turn the body language so
 that they are more
trusting. And then you model
 the child's brain to try
 to make sure that
you get    all great stuff
 if you're trying to teach a
 child good things. But if
you imagine that suddenly
 affect is coming in
 and you're able to
manipulate people without
 going through their
 conscious filters, there
are ethical challenges. And then
   so that's one thing
 to sort of think
about it. And then the other
   the other thing, and I
 want to tie this a little
bit to get, you to talk
 a little bit more about your
 lensing. And you can just
tell me if the way
 I'm describing this is wrong.
 But generally, the way that
machine learning currently works
 is you have a bunch
 of data and you
have an engineer who keeps
 fiddling with the knobs to try
 to get a high
percentage accuracy, this    other
 set of data to test
 the thing. And once
it's done, it got locked
 in and gets deployed and
 then people say, oh, it
worked 90% of the
 time. What    students with
 human in the loop
machine learning is that actually
 human is in the training
 loop. So if you
have the police officer, if
 you have the psychologist, and
 they look at the
data and they say, oh,
 that's not what I would do,
 that's not what I think,
that actually changes the
 model, not just the
 assessment of the
outcome. Right? And so, so
 there's two pieces here for
 me is, is as we
start to bring humans in
 the training loop, so if
 you make the interface
such that the
 the philosopher or the judge
 or the police person
continuously makes the model
 better rather than just
 assesses the use
of the data, I think
 that's an interesting thing.
And then the piece    the
affect stuff is if you're
 in a car and the
 Tesla is making everyone nervous,
that should be data that
 informs the model. Right? So
 every time you go
around this curve every Tesla
 is going to go the
 same way. And if
everybody always gets stressed,
 it should update the
 model or it should
learn something from that. And
 so    so I think
 sort of, another question
is, is one is    is
 the way I can described human in
 the loop right thing. And then
 >> Yeah.
>> Can affect be one of
 the inputs or should it be
 one of the inputs in,
in the models
 or in, are
 you working on that?
>> So    I    I
 would say human in the loop machine
 learning is not a new idea.
It's always been around
 since the '70s. There
 was one statistician
called George Box who
 basically said if you're
 going to do any machine
learning it's
 very important for you
 to skate the assumptions
that you're
 encoding into your
 model because all
of us
 are working with assumptions
 all the time. Estimate
the model and
 then he says instead
 of what we currently do
in machine learning,
 which is model evaluation and
 the twisting of the
knob example that you
 gave, we basically sometimes
 describe that as
superstitious twiddling of the
 knobs in the dark
 because there's no
principle way of configuring these
 parameters in    or way
 to just get the
desired output. Most of
 the time the evaluation
 metric is one of
classification. Whether it's
accurate or not. But in
 human in the loop
machine learning, we basically
 say model criticism is
 a question of you
evaluating the assumptions
 that you made when
 you first started
modeling the whole problem.
 And so the question
 of what evaluation
function or discrepancy function
 you're going to choose
 has to come
from you. When you
 involve the human in the
 loop, for example, the
police officer or the
 cardiologist, whoever's doing the
 training, you very
quickly understand that we
 move so far away
 in machine learning
towards prediction, prediction,
 prediction that I
 think sometimes we
forget what we think
 is of value from the
 machine learning system might
not be the same thing
 that is valued by the
 people who best understand
the data. For the
 cardiologists who just looking
 at angiograms, maybe
not having an accurate
 presentation of an angiogram
 is, that's probably
not the most important
 thing because people can
 go and get the
angiograms themselves. So I
 think that with human in
 the loop it's not
just a question of
 better machine learning models,
 it's a first principle
look at the entire process
 and actually, I think, for us
 just being a little bit
more humble in how
 we approach the problem,
 the nature of the
problem. With
 affect and, this
 is
>> Because you
 were in that
 group, too, right?
>> I, I come from
 the affect group computing group.
 I got my PhD there.
But I also have
 manipulation practice from a
 very young age
and I just come from
 a different culture. And for me,
 one of the things, I
always grapple with is
 I think many of the
 categories of emotions that
people work on is maybe
 a little too reductionist. I
 don't like to describe
any of what I'm feeling
 that way. And when, when
 you deal with things
like what is the emotion
 in a child, there are
 so many layers of things
going on from their
 developmental psychology to what
 the kid ate. And
it's just so intricately
 complex. I'm not really sure
 that a machine learning
approach is probably
 how I would
 approach it.
>> And maybe move the
 mic to    to three,
 four rows behind you.
>> Yeah.
 I can
 defer. Let's see.
>> Should ah
>> Give it a try.
>> Thank you for
 passing it and not throwing
 it. That scared me.
So Sarah Hund,
 assembler and Google
 public policy.
And I have
 this question about humans
 in the feedback loop
and really specifically
 the end user. So
 to what extent should
the end user be
 in the feedback loop?
 And in what context.
And the question is
 really more about
 is that collective accountability
and influence, is
 that representation or is that
 garbage in, garbage out.
>> When you say
 feedback, can you define that's
 a little bit more?
>> Um    so feedback
 loop as in constantly saying,
 like, yes this is right.
No this was wrong.
 And vice versa. So in my
 head I think about like
smart reply. So I like
 that response, I didn't like
 that response. But more
in terms of humans gauging
 the accuracy, and I put
 that term in air
quotes, of, of what, what
 that means. So is that,
 is that good? In what
context is that
 represented    is that
 representative? Is that
accountability or is it kind
 of like sometimes with bad
 data, garbage, in garbage out?
>> Can I just add
 and tie that to    another,
 related piece that came up
at the end of
 J.Z.'s thing that critique
 can also work on, that
is entice the explainability
 thing, right? So if you're
 a doctor and you
don't understand the
 output that's also an
 interesting feedback thing.
And so one of the
 things that J.Z. showed that
 you particularly didn't see
is at the end he
 showed, I think it was a
 paper in medicine, right? That,
that   showed machines predicting
 what was it? Was it
   death by
cardiac arrest? By heart
 attack. But that there
 wasn't any explanation for
the relationship or it
 was very complex. But you're
 also showing and you
can talk about this a
 little bit more, that maybe
 there actually is a theory
or an understanding that
 could be derived, but
 we, not through our
current framework. And that
 maybe listening and thinking
 about it. So,
so there's, there's
 the simple thing of just
 feedback yes/no. But there's,
I think, a much
 bigger one where suddenly the
 computer gives you a
whole bunch of facts
 that completely contradict any
 framework that you
have. What is the
 response of the human being
 and the relationship with
the
 computer? So
 there's
 >>
 Right.
>> So there's sort
 of two, but it's really kind
 of    when the fact,
the machine and the human are,
are interacting
>> Right. Very quickly, can I make
two other quick comments?
   >> Uh hu.
>> Tying in what Gus is saying,
 what you are saying
>> I think, I think, I think
it is like machine learning
or A.I. or whatever
we are working on was invented by
computer scientists who did math and
now we're expecting machines
 to behave like humans,
 but they were
essentially derived from
 mathematic principles. And I'm
 a huge fan
of mathematical principles because
 it's math. But I
 think one of the things
is that there
 are some other machine
 learning models which
go from biologically inspired
 systems like the brain,
 the neocortex and,
and which made more
 likely connects between neurons
 in a garden is
one way to kind
 of move towards more humanized
 way of looking at
machine. That's one comment.
 And second is error
 in machine learning
is highly penalized right now.
 It cannot be wrong. And
 in fact, if you look
at all computer science
 publications, what is the
 area under the curve
where your R.O.C. curve
 and guard them and
 that's the accuracy. Well,
as humans, we are
 most likely, at least
 usually wrong about many
things. But we have
 this higher model turpitude
 towards machine and,
and that goes to
 Joi's question, in healthcare
 that seems to be
incredibly important that
 if you're diagnosing someone
 you cannot make
a mistake. And our
 entire healthcare system is
 set up with those
paradigms which I think
 are useful. Should be
 enforced. And then as
Joi pointed out, there
 are many things that we
 are discovering in my
research where the policy, we
 call it a policy, that
 an expert or a doctor
uses to treat the patient
 is X and when you
 use machine learning or we
do reinforcing learning
 or advance unsupervised
 learning techniques, it
comes up with the
 complete new machine policy that
 we call. And it
kind of defies, as
 Joi pointed out, what
 we understand about treating
the disease. So    and
 I don't know the answer to
 that question to be very
honest. I think more research
 is needed and we need
 to kind of, kind of
accept that machines can be
 wrong, but so could also
 we. And kind of
come up with new
 paradigms of law, learning,
 ethics that can
incorporate this back and
 forth within humans and
 machines to work
together with
 this antagonist, antagonizing
 each other.
>> And maybe you
 can tie it too
 lensing, too. I
>> Yeah. The feedback
 loop question    one
 concrete example I can give
is something
 that's been happening
 since the 1920s.
We just
 don't take female
 cardiac symptoms as seriously
as we probably
 should. And when you
 trace the whole thing
you realize, yes. Women were
 not put in any cardiac
 trials until '90 '93
and Congress stepped in
 and said no you have
 to put women, because
they were quote unquote
 thought they were immune
 from heart disease
because of their hormones
 great. I think that
 when you use machine
learning to unearth a
 very deeply embedded complex
 problem like this,
which is a different way
 of looking at it then
 just, of course, it's garbage
in, garbage out data
 is already heavily oriented
 towards males, right?
That becomes a little
 bit more interesting. So the
 work that I'm doing,
when I first went
 to Brigham, they were not
 too happy because they
probably wondering who is
 this guy who, with a
 funny accent who thinks
who can just barge into
 a cardiology lab. So there
 was no trust initially.
But over a period of
 time, when you involve the
 doctor in the lensing
view and you basically show
 them why    why is it
 that only 20% of all
cardiac investigations in
 North America performed
 on women, though
more women die, and
 the way they verbally
 express their symptoms or
describe them to a doctor
 is also very different than
 how a man would
describe it. You basically arrive
 at a point where you
 realize, oh, my gosh,
I just used lensing
 and machine learning to
 actually show them a
fundamental bias. But the
 fix is not
 predictive machine learning
system that tells them
 when a female patient has
 heart disease. The fix
is also pretty complex.
 Like to your example
 of complex self adaptive
system. Med school,
 internal medicine, cardiology, kind
 of drilled into
you, look for    look
 for symptoms. Look for ethical
 symptoms. You go to
London and you see these
 big double decker red buses.
 A person with a
heart attack usually, like a
 guy with a suitcase just
 going out of hotel just
had a huge meal, just
 looks like he's having a
 heart attack, which is like
this campaign to make
 people aware of how to
 recognize one. And I
think the one they did for
 women was one with, she calls
 9 1 1, but she
wants to clean the last
 pile of dust at home
 before opening the door
when they come in. So
 the messaging is screwed up.
 It's like a complex,
multilayered problem. And no
 machine or no number
 of George Boxes
will actually solve that. And
 it's the feedback cycles are
 so crazy that it
goes beyond garbage in, garbage out.
>> Maybe one thought and
 then we're going to be
 starting to run out of.
>> All right.
>> Time.
>> Coordination. Hi. My
 name's Ilana. I'm a
 high school senior
from Washington, D.C.
 The question I have
 is more so related
to what Professor
 Tran said at the
 end of his presentation about
explainability.
 And to what
 degree explainability,
you know, relates
 to the success
 of an algorithm.
Because I know
 in the context of
 something benign, like Netflix
recommending your
 movie choices, maybe we
 don't need to
know how all the nuts
 and bolts work or what
 they are. But in the
context of something
 more serious like parole
 decisions, deciding how
long a person's prison sentence
 is, we would like to
 think that we can
know what the variables
 are and how much weight
 is being given to
them. So the question I
 have is    to what
 degree is explainability relating
to the success of
 an algorithm and also do
 you have to sacrifice
accuracy for
 transparency and
 explainability?
>> I can,
 I think all females
 have individual answers here.
Fantastic question.
 I think, you also
 mentioned the medicine.
So the
 phenomena of
 understanding of mechanisms
is very prevalent
 in healthcare in clinics. So
 if you publish a paper
in a high impact
 medical journal like "Nature", if
 you have a state
of machine learning algorithm
 that works at 90%
 accuracy, they will say
show me the mechanism.
 People have done the
 plots and the neurons
and seeing how they
 fire, and you're absolutely
 right in your opinion
that's a lot of time
 would we spend on making
 the, the    the algorithm
explainable versus useful. And
 my personal opinion is, if
 I may say that
right now we should
 be trying to get accuracy
 and moving first towards
ethics and making them
 morally accountable for what
 we are doing
versus trying to spend
 too much time on
 making them explainable. I
think that's the,
 can go side
 by side.
>> And partially
 because I think medicine
 is just so crappy.
 >> Yeah
. >> Understanding what's
 going on. I mean, I
 think, you know, we really,
anyway    I
 won't go for that
   [ Laughter ]
>> Well, you can say
   I think, I think,
 I can complete your sentence.
I think    I
 think    when people are trained
 in medical school or biology,
people are
 trained with mechanisms,
 pathways and
understanding and it's    and
 you know, it’s usually sometimes it
 is over information. It's not
>> So I, I,
 two comments. This is
 a fundamental question
and there are no
 simple answers. The first
 comment is    I'm
not so sure there's
 an intrinsic trade off
 between explainability and
accuracy. But the lens I
 want to offer for that is
 how have we, as a
society developed mechanisms to
 help people build a
 credibility and the
solutions that they offer?
 Right? There's a whole
 dialogue that's only
beginning to get started in
 machine learning which, it is
 sort of like, for
some of the people
 in the field who are
 maybe, you know, couple
generations my senior who
 are involved in starting
 machine learning, if
you talk to them about
 it, like Tom Deater who
 coorganized a bunch of t
he Obama administration's events
 on A.I., he will say
   yeah we're just
so amazed that any
 of this stuff worked. We
 weren't worried about how
to make it reliable
 or auditable. And now
 we're going, ah!
[
 Laughter
 ]
Like, people are
 adopting it like crazy.
 So there's a whole
so, so, so    so
 the angle I sort of want,
 want to offer as
you know, kind
 of the academic
 mode of work
in machine learning
 or actually you see this
 in Silicon Valley, too,
where you are just
 sort of trying to get
 something to work and measure
its accuracy. I think that's
 a big part of the problem. I mean,
in other engineering disciplines,
like if you
 talk to Ford about
 how they design some
component system of
 a car, the way
 they think about what
claims they can make
 about its fitness for purpose
 and its safety is
just much richer and
 much more mature. You know,
 they think a lot
about all sorts of
 different ways it could
 go wrong. They don't
reductively summarize its function
 or fitness for purpose
 in a single
number. There's a whole
 dialogue about around is
 this an appropriate
component technology to be
 bringing out there. And I
 think as we start
thinking more that way
 about A.I., rather than
 focusing on Joi's point,
this one accuracy number,
 you know, then we
 can start looking at
different deaths or kinds
 of explanation which are
 suitable to explain
different sorts of errors. Right?
 It's just like a very
 complex picture and
my hope is that, is
 that, that kind of view, that
 broader view will help us
make progress. A second
 point I want to make,
 this is something I
learned from, from, from
 Charles Nesson at the
 Berkman, he pointed out
that in human decision
 making, actually sometimes the
 integrity of a
decision making process depends
 on the right to
 not explain yourself.
So when he said that, I
 was a little bit surprised at
 first, but then he said,
well think about a jury
 trial. Right? The, the, the,
 the sort of sanctity of
the jury to not have
 to explain why it decided
 something is a key safety
valve that underpins the
 integrity of that process. Now,
 I don't think we
should have A.I. processes
 that, I mean, I don't
 think we should have
machines making decisions that
 carry that kind of
 moral weight any
time in the future. But
 I just want to point out
 that from a first principles
perspective I don't
 know whether explainability is
 always a virtue.
>> We have time for
 one, one more and maybe
 we'll try to pick somebody
who hasn't, or
 maybe    I guess both
 of you have already asked.
So Cathy, you, or
 actually behind you
 what's your name?
>> ( Speaker off microphone ).
>> Yeah. One    we'll
 let you    so
 here comes the mic.
>> Okay. [ Laughter ]
>> Hello. Okay. [ Laughter ]
So actually, it's not
 really a fully flushed or thought
out question, but it
 was more    so the,
 the jury parallel made me think
a little bit
 because I do think
 that there is a potential
difference between taking
 sort of that kind
 of immunity from having
to explain your
 decision because it's a
 person chosen amongst
many and so somehow
 we're not expecting that person
 to be able to
explain fully their decision.
 But I think when
 you're talking about a
machine or a very
 articulated system or a very
 intelligent person or, you
know, then I think
 maybe there are different
 considerations there and
you might have the right to
   to claim more from that
 machine. So that's my thought.
>> And I will
 add that one of the
 arguments that I heard juries
is that you don't
 want them to be influenced
 by the political fallout
of having to explain or
 having to verify what, what
 they say. I'm sorry. I'm
going to add one other
 thing. But also, if you think
 about a machine    a
category of machine which
 is just collecting the
 data, maybe the
sentiment of a whole bunch
 of people and being a
 machine that is a
democracy. Like one of
 Ethos’ stories, then it
 doesn't actually know. It's
like the Chinese room.
 Right? It doesn't actually know
 what's going on in
everyone's mind, but it
 is the machine that
 is aggregating what's going
on in everybody's mind. So
 be the role of the
 judge that's managing the
jury. So different interesting
layers of whether it's even
explainable. But I don't know if
>> Yeah. I mean, the
 thing I really want to
 invite here, just, you know,
roofing off of what both of
 you said is that, I mean,
 I think that if
if we enlarge this
 question about, I think it's
 great that people are
asking questions about pieces
 of software that are
 now, I'd argue, often
inappropriately making decisions.
 And saying what
 explanations do we
have to start asking
 what are the explanations we
 already accept for the
legitimacy of decisions
 in other social systems
 that were embedded.
Right? So you know, a
 judge or a police officer
 or a prosecutor have
forms of discretion they
 are allowed to exercise.
 Maybe that discretion
is essential in some cases,
 maybe it's also    a
 place where various kinds
of bias and abuse can, can
 be hidden. Much like    or
   in some ways like
how that would be
   the case if software
 were making an analogous
decision. Or, you know,
 they're friends of mine,
 you know, in the
entrepreneurship role where if
 I asked them for
 an explanation for
something in the more
 commercial world, they'll say
 well the market
decided it. Okay. Well
 what does that really
 mean? Right? There are
some discourses where
 that's sufficient explanation.
 Right? And, and,
you know, so    so I
 think, I mean, I guess, I hope
 that we will be led to
think a little bit
 more precisely about what kinds
 of explanation we want
and what circumstances
 and what purposes
 the explanation's supposed
to search and also what
 costs does it impose to
 ask for one and apply
that more uniformly
 to both software
 and social systems.
>> And we're out
 of time so thank
 you very much, guys.
 [ Applause
 ]
>>
 Thank
 you.
 [ Music ]
>> Thank you. Okay. I'm going to
 scooch you out for some
 [ Music ]
>> Should we start?
 Okay. We're back. So
 this is    actually interestingly
kind of leads
 on from a lot of
 the conversations. So maybe
what we'll start out doing
 is introduce yourself and your
 work, but also, I
mean, you've been, I think
 the three of you've been
 here through the day
so maybe    if there
 are any thing in the
 conversations that we've had so
far that you want to
 add to jump in and
 then obviously for anybody out
here feel free to also
   wave your hand and jump
 in. But why don't we
just kind of continue
 the conversation, but okay. Why
 don't you start out?
>> Sure. My name is
 Kate Crawford. I work at
 the ACLU here in Boston
on technology issues mostly.
 I'm really interested in,
 I was just saying
this to Joi    the
   the first principles in
 ethics that Allen Meadows
talked about, there seems to
 be a tendency in the
 policy world when we
think about how to
 integrate algorithmic decision
making into tools like,
you know, or systems
 like the criminal justice
 system, there's a tendency
to stay at the very
 surface level and to have a
 lot of fights there about
what the algorithmic tools
 should look like and
 what types of data
should be fed into it.
 And I think that that has
 the impact of allowing us
to continually avoid having
 a conversation at the
 much deeper level
about the values and
 goals and paradigms. And
 I think that's really
dangerous. So one of the
 things that I try to do
 in my work is encourage
policymakers and technologists
 and people who are
 thinking in this
space to avoid falling into
 that trap. I'll have a lot
 of other things to say,
but that's all
 I want to
 say right now.
>> Good morning.
 Good afternoon. Whatever. My
 name is Ran Fullfa.
I just have
 one question for the
 audience which will sort of,
like, explain why I to
 the work that I do. Can
 you raise your hand if you
identify as African
 American? So take a
 look around the room and
  sorry. Brothers
 and sisters, for calling you
 out. Happy Black History Month
. To Holly's point
 at the beginning, the reason
 that the work that I
    that, that   the work
 that I do is so important
 to me is we can
have the smartest
 people in the room
 talking about these issues
, but unless we have
 to people who are, like
 the reason that they're
not here is because
 they're being impacted by the
 system, if we don't
have them in this room
 then we're going to be
 in this whirlwind that
we've been in. The other
 point that I, that I wanted
 to pick up on was,
was Cathy's    so what I
   what I do, I was
 a prosecutor in Boston for 10
years and just saw
 this cycle of the criminal
 justice system, the thing
that we call the criminal
 justice system, as a prosecutor
 we sit in like
this very weird role.
 We don't really have
 any affinity towards anyone
except for the people. We
 don't have a    an
 individual client. We certainly
aren't making money for
 the decisions that we're making.
 So we are just,
we are told to go and
 do justice and we are fed a
 lot of stuff in school on
how we're supposed to do
 that. But we don't actually
 learn tools how to
we don't have tools
 okay. We're saying, how to do
 it and so we have
this really negative impact
 on people of color,
 people from marginalized
populations, people who are
 frankly just different than
 the people who
are running the criminal
 justice system. And so I
 created a curriculum as
a prosecutor to teach
 other prosecutors things that
 we should have
known before we ever
 had the ability to prosecute
 people like    what
does it look like inside
 of a jail and prison?
 What actually happens in
there and what does it
 do to your stated angle?
 What is trauma? And
how does that play out
 in the lives of the
 people that are impacted by
our system? How much
 does poverty drive the
 actions of the people
who are coming in?
 And is this thing that
 was created hundreds and
hundreds    we saw a
 nice image of all those white
 men back in the day
creating a system that we
 still operate with right now
 and is that    you
know, should we be asking
 ourselves if that's the best way
 to do it? And I
took that curriculum, which
 was awesome and effective
 when it was
implemented in prosecutors' offices
 all over the country, I
 took it to law
schools and their answer
 was in rejecting the idea
 that they're no longer
trying to create public
 interest lawyers because we
 make bad donors.
And so when we
 ask ourselves how    like
 what do these corporations
that we call universities, how
 do we change them? I,
 I think that there is
a greater strategy that has
 more to do with the
 reduction of profit and
educational institution and what
 those metrics look like
 then, then what
we currently have. So those
 are the only two things
 I wanted to touch on
this morning. I'm sure that
 we'll have a lot more to
 talk about, but I come
at this from a
 lot of ways, but fundamentally,
 I brought Jordan, Jordan
raise your hand. Jordan's a
 junior in high school. I
 didn't step into this
building until I was 36
 years old. And as, as such
 I never realized that I
had a place here. So
 it was really important to
 me to bring someone who
is 20 years my junior to
 be inside of MIT to see
 he has a place here, too.
So make sure you
 talk to Jordan today
 because he's dope A.F.
[ Laughter ]
>> And I wanted to just
 click on one thing that you,
 you, you talk about a lot,
but I think ties into
 this which was    you often
 tell a story about what happened
to the people
 that you touched in
 one direction sending them
to prison or not and
 that you had the choice, but
 that you don't have a
system where the prosecutors
 have that feedback. You
 don't get that.
Right? So one of the
 things, and this is what
 we're doing in Chelsea and
some other places, is
 now that we have data,
 isn't there opportunity for
prosecutors and others who
 affect peoples' lives to
 see what happens
to those people because that
 seems to be one of
 the things that you
were saying. Was
 that couldn't we
 do that.
>>
 Yeah.
>> And and I think we can.
>> Yeah.
>> And it gets to Kate's point,
 which is we have to have a
  we have to want to do that.
 >> Yeah
. >> That's, I think
 what you're working on. But
 maybe some of these
tools that we're using
 to accurately throw people
 in prison can also accurately
reflect back to
 the prosecutors what happens
 to the people that
you throw
 in prison versus
 not. Right?
>> Yeah, and that's, that's
 the work that I'm doing
 here. I'm direct fellow
at the media lab and
 the work that I'm doing
 here is more about understanding
the culture
 change requiring in terms
 of advising behavior
and this is
 again why prosecutors are
 really interested. Because
we don't really have
 an incentive to send
 people to prison
because we know at this
 point that it's really bad
 to our core objective.
>> There's, there's a comment
   and maybe toss the mic
 over if somebody has
>> Where's
 the microphone
 box?
>> There.
 Maybe for
 the video.
>> Thanks. Sorry, I didn't
 mean to stop the conversation.
 I just feel like
it's not just about the
 opportunity, but also how do
 we start to change
the narrative. So it is
 about an obligation. Right? So
 because we can
>> We ought to.
>> Maybe we should. And
 also like we must. And so
 I'm just    you know, I
  want to keep
 challenging us to think even
 further beyond just like,
oh, maybe we should, it's
 like, how do we start
 to change the narrative to,
if you can look at
 this in that way, like, you
 have to. There's no choice.
So just tossing that out there.
>> There's another one.
 Are you just    you're
 just    you're just,
she's crab
 clawing. Okay. [
 Laughter ]
>> D.P.
>> Chris?
>> Hey, everybody, I'm
Chris Babbitt. I teach at
H.L.S. I'm basically at
the Berkman Klein Center.
I know a lot of
 you here because of the thing
 I do most of the time,
which is run
 this law school clinical
 program which is essentially
a law firm
 with five or six
 lawyers and thirdish students
this semester that does
 legal work on both
 direct advising of clients
and that's
 why we've made ourselves
 available to folks who
are
 in the assembly
 program building technologies,
bumping up against legal
 issues, that sort of thing
 to come seek out our services.
So we're
 here as a resource
 for you. We also
do a lot of
 advocacy work, including with
groups like the ACLU of Mass,
or nationally, or the
 E.F.F, or other sort
 of tech policy advocacy
groups where we help
 them speak out on policy
 issues by engaging with
administrative processes,
filing amicus briefs, that sort of
 thing. I guess,
my reference to this
 conversation is that I'm
 also heavily involved with
one of our research
 work seems related to
 A.I., fixing governance at
Berkman, and that's specifically
 the one that we
 call sort of algorithms
in just us which
 has to date been primarily
 focused on the criminal
justice system although the
 way we've, I think, can
 see the, the way
we've always framed it
 is that we're looking
 at the particular implications
of the use of all
 of the kinds of technologies
 we're talking about here
today    black box
 algorithms and machine learning
 and A.I. by the
government. So one key
 example we have here
 is criminal justice
assessing, assigning a
 risk score to a
 criminal defendant before
evaluating what her bail
 should be or making a
 parole decision or a
sentencing decision. That's one
 of them, but I think
 that you could even
pull back from that a little
 bit and talk about, at a
 little of a higher level,
what is the difference
 when a private company makes
 a choice to use
one of these particular
   not so transparent
   technology versus when
the government chooses to
 use it. Unsensibly, it
 was mentioned, sort of,
or someone in the
 previous panel mentioned the
 kind of difference to
the market and there are
 lots of problems with that,
 but essentially, if I'm
trying to choose among
 three different, you know,
 social media sites I
can use then I
 have concerns about the way
 one of their algorithms
delivers me information, but I
 like the other one, I
 guess, I have a choice
among those, set aside antitrust
 issues and all of that.
 I don't have a
choice when dealing
 with my, my government,
 when dealing with
prosecutors and dealing with
 my criminal justice system.
 So it is extra
important that we, that we
 get it right. And I guess
 I would say, on the
point that we were just
 talking about, about the ways
 to kind of flip the
script on use of data, one
 thing I just want to kind
 of flag is I completely
agree, and this is some
 of the work that we've
 been doing, to say so
much of the emphasis
 has been on taking
 people who are already
partway through or all
 of the way through
 the criminal justice system
and saying okay. What
 data do we have
 about them to predict
outcomes? Are they going
 to come back for their
 court hearing in X
number of months? Or are
 they going to reoffend? That
 sort of thing. It
is vitally important to think
 about using this, these, this
 data earlier on in
the system. I will say that
   that it has a lot of
 promise, it's also a little bit
scary because it
 requires creating possibly, and
 there are technological
solutions to this, but
 possibly creating bundles of
 data and putting them
in the hands of people
 that could use that data
 for good or for harm.
And every once in a
 while when I'm thinking about
 one of these great
initiatives to use data
 in applied machine learning
 to kind of predict
outcomes and, and
 and    and intervene early
 in diversion program or
something like that, I
 picture sort of Kate looking
 over my shoulder and
saying, well, waiting a minute,
the ACLU of Mass would have
a lot of problems with the
idea of all of these people talking
to one another, sharing data.
That sort of thing.
>> That gets a little bit to
the first video that I showed
which was a news clip that got
the science from. Right? And
 then I'm going to merge
 this with actually a
health thing which is
   so for instance, we
 thought diabetes was one
thing. It is actually
 multiple things that cause
 the same symptoms.
>> Yeah.
>> Failure to appear is
 one of the things that,
 that our team is working on.
And failure to appear
 is like a bit in, in
 its either yes or no.
Right? But actually, could
 be because, you have to
 take care of your
parents or it could be
 because you're an addict. Or
 it could be something
else. And the fact that,
 for the criminal justice system,
 it's just one thing
is kind of like the
 medical system before we figured
 out that there were
many causes to similar
 symptoms. Right? So I think
 that it is interesting
that you can find underlying
 data. And that, that can
 help you deal with
the problem, but the
 other meta thing, which is
 kind of interesting is,
and    I want to be
 a little bit careful about how
 I go to this, but for
instance, in genomic research
 right now there are
 a lot of interesting
things that we're learning
 about the relationship with
 the    between
genetics and various outcomes.
 And a lot of
 those results are being
used by hate groups as
 science behind a    these
 things. And that's not
even privacy stuff. It's
 just taking science and then
 kind of twisting it
around and using it as, as
 a bad, like a bad version
 of it. And so one of
the questions that I don't
 actually has the answer to,
 which is, I think
that    and, and
 and when we talk about
 regulation, you can regulate
the research and you
 can regulate the deployment.
 But with bad media,
which is what we have
 right now, you    you kind
 of also risk this weird
thing which is
 because you also don't want
 to prohibit research too
much because one of the
 other things we did
 the year before last was
the forbidden research
 conference. And there's
 this very interesting
thing. So we, we make
 it very difficult to do
 research on pedophilia and
on things like sex robots.
 But they're coming. But we
 don't yet know, for
instance, scientifically, whether
 giving somebody who
 has a problem
with pedophilia a sex robot
 or a V.R. We don't
 know if it actually can
make them better or
 it makes it worse. If
 we don't know that
scientifically, it's very hard to
 come up with the right
 policies. So, so the,
the, the, the    inhibition
 of research is also really
 danger and so to your
point, it's kind of
 an area that's very fraught
 with both opportunity and
risk, but change isn't going
 to wait for us to
 figure this out, right?
>> Yeah. I think
 one of the interesting examples
 of that is this,
the study that, I
 forget who it was,
 somebody did a study recently
showing that
 allegedly there's a
 gay face, right?
You guys are familiar with
 that? [ Laughter ] I
 think I probably have it.
I hope I have it.
 [ Laughter ]
In any
 case, you know, there's
 gay face. So wow!
That's scary maybe
 if Jeff Sessions is,
 you know, in control
of the database
 of all the faces.
 Right? [ Laughter ]
But that gets to the question of,
yeah. In some, I was thinking
the exact same thing, right?
Prosecutors don't only have
 the option to do
 this, they have an obligation
to actually understand
 what they're doing and
 the impact of their work.
On the other hand,
 you know, when I saw
 Zittrain's slide about what
Bono said, I thought to myself,
 um    so if I can
 punch Bono in the face,
does that mean I should do it? Like
[ Laughter ]
Right? And
 I find him to
 be    unbelievably irritating
so I probably
 would if I had the
 opportunity. But, anyway, the, the
    the point is
 I don't think, in many
 cases, we actually should do things
that we can do. Right?
 So I    there are a
 lot of examples of that in the
work that I do.
 So gay face is maybe
 one of them the others,
though, are things
 like, you know, the
 creation of these enormous
databases that law
 enforcement amassing, not
 just at the
Federal level, but
 at the city level as
 well now. Where they're collecting
huge quantities
 of information about
 everyone, you know,
conducting mass
 surveillance using lines plate
 readers. Soon, you
know, facial recognition systems
 that are built in
 to the ubiquitous
C.C.T.V. that we have
 all over cities now that
 are, you know, interlinked
and, you know, networked
 cameras that can be
 operated from a single
hub. These are things
 that we shouldn't do,
 actually. We certainly can
and they are likely going
 to happen, but they're really
 bad ideas. And
and    there's a really ugly
 area of the law, you know,
 I was, I was also
thinking about Lessing's formulation
 of the four, you
 know, sort of
pressures that create the
 world that we live in
 and they influence one
another. Right? I mean,
 my colleague Jay Stanley
 at the national ACLU
gives this example all
 the time. It's really
 interesting. People don't
necessarily think about how
 technology and the law
 interacts in ways
that are maybe unexpected.
 So one example is
 that the Wiretap Act,
which it was passed,
 in the 20th century, had
 the impact of determining
what C.C.T.V., the whole
 industry, would look like.
 And why? Because you
can't have C.C.T.V. audio
 recording. Right? So that's
 a, a really weird
interaction and I think
 the people who wrote the
 Wiretap Act could not
have predicted. And I'm
 really concerned sort of
 similarly about the
interaction between really, really,
 really bad historical law
 that we have
precedence, Supreme Court
 precedent around, for example,
 a Terry Stop,
do you guys know what
 that is? It's Supreme Court
 ruling that says that
law enforcement basically can
 pat you down and
 force you to empty
your pockets when they stop
 you on the street. If
 they have what's called
reasonable articulable
suspicion that you may have a
 weapon on you.
And this is a really
 frightening thing in, in
 combination with the kinds of
databases and mass
 surveillance that I've described.
 I'm envisioning a
future, if we don't
 get it right, and make
 some interventions along the
way, where law
 enforcement officials have constant
 lenses that have
facial recognition built
 into them. They're walking
 down to street
scanning every person they
 see. You know, a
 database system, you
know, working with
 some algorithms is telling
 them effectively coloring
maybe even in their field
 of vision every person they
 see as yellow, green,
or red and that designation
 itself, whether or not the
 data it's based upon
is even accurate,
 determines reasonable suspicion.
Right? I saw you,
you're my    my computer
 system colored you as red
 so I stopped you.
Courts may very well
 think that's totally appropriate
 and given, you know,
historical precedent, I think
 that's actually likely. So
 I'm really worried
about, about those areas
 where the law and
 technology are going to
interact in ways that,
 I think, people are not
 worried enough about, frankly.
>> And it is
 it's a real-world sort of like
 example of how we're already
there and it’s really
 scary. There was, there
 was a school fight
between two school aged
 children in Boston last
 week, in East Boston
High School, that was
 described as a nonviolent fight.
 I don't know what
that is. But it
 was a nonviolent fight. But
 the students' name
>> I think it's called
an argument.
 >> Yes
. >> Yeah.
>> An argument.
 [ Laughter ]
 In the cafeteria.
 >> Yeah.
>> Where the    the
 students' names were put into the
 school database that went through
the school police database
 that went to the
 Boston Regional Intelligence
Center database which
 highlighted one of
 these young people
as a
 gang member which
 alerted the Federal Government
which alerted
 them that he was
 also undocumented and so
 instead of the principal dealing
with this situation,
 I.C.E. dealt with this situation.
>> You've been
 tweeting that a
 lot, right?
>> Yeah. This is an
 issue that we're trying to grapple
 with at the ACLU for sure.
And    you
 know, there are multiple places
 along the way where people
made maybe, maybe
 didn't make decisions right? Simply
 just allowed things to
happen in a way
 that was really not
 very thoughtful and, and
    I guess, that
 more than anything is what
 concerns me. Is that we'll
sort of just
 keep plodding, you know,
 along in the same direction
that we've
 been plodding, and
 increasingly technology makes
the direction we're heading
 in worse and worse
 and worse. Right?
Whether it's with respect
 to economic inequality or
 racial injustice in the
criminal justice system or
 any number of other
 serious crisis that we
face as a society.
 Technology just exacerbates those
 really, really quickly.
>> Well, and I
 think that's why D.A.C.A.'s
 really important, because
if you're not a
 citizen you don't have
 the same rights. Right?
And    and    like
 this case of the traumatized,
 victimized exgirlfriend of a
gang member's new
 boyfriend being undocumented,
ending up on this
list is just this kind
 of horrible mess that we have
 and, and    and they're
couple of questions
 to, I think
 there's
 >> Yeah.
>> Back and then,
 I'm going to go first for
 people who haven't said anything.
So    oh, well
 okay. Here we go there.
 Okay. Yeah. Go ahead.
Go ahead.
 And then we'll
   go back.
>> Um    just
 a question about what you
 just referred to. So, you know,
there's this horrible
 thing that happens and
 these systems reveal themselves
of like automated
 injustice. How do we
 know about those systems
beforehand? Like how do
 we have a descriptive or
 are able to have a
discourse about systems for
 the deployed and you
 should be aware
of it before
 these kind of individual
 tragedies happen? Because
they seem so hard to
 access or, or kind of
 reveal until there's a victim
which seems like crazy.
>> Um    I really want
 to    hear what    you
 guys have to say on that.
What I'll say from like
 a very cynical point is that,
 that story is one that
will always be sort
 of trumped, sorry, by the
 use of the, the,
the Boston Regional Intelligence
 Center by the detectives
 and all these folks
M.B.T.A., E.M.S., who will
 say, we need to
 hang onto the brick because
we've been able
 to make all these
 successful arrests of all
these violent people. And you're
 not going to be able
 to find me, even the
most white, progressive,
 wealthy, liberal in Massachusetts
 who says get
rid of the brick because
 this one kid went away
 because all these other
bad brown
 people were prosecuted
 and incarcerated
>> I mean, yeah. I
 have a different take which
 is that we can,
we can to
 some degree, I
 think, future proof processes.
Not    right. So
 one really interesting area that
 the ACLU has been working
in lately that we're
 engaged in both here and
 Cambridge and in the city
of Boston is
 trying to get officials
 to pass municipal laws
that require the police
 to go before the
 city council before they
want to
 adopt new surveillance technology.
 So typically, the way
that this works,
 I mean, historically the
 answer to your question
would be that's my job. It's
 to try to figure out what
 they don't want us to
know. And to tell everyone
 and then you're right. After
 the fact, we play
catch up, right?
 And try to regulate/outlaw
 certain things. Whatever.
This, this idea is to
 sort of invert that. So that
 we have an opportunity at
the outset before
 the technologies, the database
 systems or the
information sharing agreements
 have been formalized
 or acquired and
have a public debate about
 A, whether or not it's a
 good idea to buy the
license plate readers. B, if
 we agree as a city
 that, yeah. We should have
them, then we decide
 how they're going to be
 used. Right? How the
information will be stored. How
 long it will be retained.
 Who it can be s
hared with. You know,
 under no circumstances will I.C.E.
 ever be able to
get a hold of it. That
 sort of thing. And I like,
 I like the, the establishment
of processes as
 opposed to thinking
 about discrete technologies
because it allows us to,
 like I said, future proof,
 you know, these problems.
>> How do, how do you
   how do you then also
 build into that    if
if we meet this
 benchmark are where we see
 this disparity, then we
reconvene and talk
 about the technology
 as a whole.
>> Reporting. Yeah. So
 you know built into
 these ordinances is also
a requirement that
 the police report back
 to the council.
You know, on an
 annual or semiannual basis
 about how the technology
was used. Racial
 disparities in the use
 of it technology.
Those types of things.
 So that we can
 continue to reassess.
>> Reopposing anything that
 Kate said. In the
 process of doing this
work on the
 government use of these kinds
 of tools, we've kind
of left the realm
 of the really interesting,
 sexy minority report
robotic judges and
 we've landed squarely in
 procurement which
[
 Laughter
 ]
 >>
 Yeah.
>> Which is
 really interesting and
 important. >> Yeah.
>> But we kind
 of, can map this cycle
 where outside private developers
develop technology.
 Government organizations decide
 we need to
procure a piece of technology
and they go out and get one.
And then that technology's
deployed for a particular reason.
And on the
 back end indicates other
 issues that they're making
    that
 technology needs to be
 tested and, and, and
    and evaluated. And
 we see so many failings
 along the way as we look
at these
 kinds of technologies in
 that cycle where technology
is being procured for
 purpose A    say it's
 a risk scoring tool that
being used to
 assess bail and determines that
 your risk score is four
and your risk score
 is six. And someone along
 the way decides well,
let's use these risk
 scores for purpose other
 than bail as well.
So set aside whether
 it was appropriate for bail
 in the first place,
now it's being used
 for sentencing or parole or
 for something like that.
That's a really bad
 fit. At the implementation phase
 we have judges who
are using these kinds
 of technologies who don't
 fully understand that
there's a six factor
 test they need to employ
 to make a particular
determination and this
 technology is answering questions
 one, two, and
three and they now
 step in and answers questions,
 three, four, five, and
six thus having double
 counted factor three. Right?
 So we need really
ridged implementation guidelines
 for the people
 who are actually
applying these things. And again,
 it's not, it's    it
 doesn't seem like the
most interesting topic in
 the world, but I think
 procurement and then use
in government, there's a lot
 of room for people in
 this room to be
propagating best practices.
 To be evaluating technology.
 Finally, to the
last point about the
 testing and evaluation. There is
 a trend when it
comes to government
 procurement of technology more
 so than anyone
else towards inertia where
 we've all been at the
 D.M.V. and looked over
the shoulder of the
 person who is looking at
 some amber screen from
30 to 40 years ago
 that's the system they bought
 30, 40 years ago
maybe thinking we'll, we
 evaluate that at some point
 and make it better.
It doesn't happen when
 government buys technology. So
 often. So I think
more so than ever before
 with these kind of tools
 we need to be thinking
about short cycles. Let's put
 this in place. Let's put it
 in place for a short
time. Let's make the
 data widely available. And
 let the research
researchers
 see what
 happens.
>> I just two short things.
So I do think that local
governments function
as better democracies than
 federal. And so I do
 think that locals really,
in San Antonio and
 other places are doing
 a really good job.
And I do think procurement's
important because you kind
 of follow the money.
Right? And so
 so right now a lot
 of these vendors are still
haven't gotten to
 the size of corruption
 capability. But they will.
So you kind of want
 to cut it off before
 they get that. The problem
is the fall government
 buy things like the D.H.S.
 F.A.S.T. system to deal
with terrorist filtering
 and people inbound into
 the country, but
that's dual purpose. So you
could easily see that go
first to I.C.E. and
then next to things like
the criminal justice system.
So I think what's
really important is to look
 at the business side, too,
 and look at where
the capabilities are being
 developed, and to try to
 head off what happens
with machines in other
 places where you get
 a sufficient market to
where now the eradication
 isn't just about convincing
 city council. You
have to kind of dig
 into sort of the whole lensing
 thing, which is the, the,
the, the, the corruption
 part. There's one question
 back then and then
we'll come back up here.
 Can you shoot    raise
 your hand again and
and then, I don't know
 if you want to throw
 it or hand it back.
>> Where is it? All right.
 I will try and    throw this,
 but I'm very bad at basketball.
>> Hang on.
>> Thank you. I'm
 perhaps looking for silver lining
 here, but I'm wondering
if there's like an
 intrinsic tension between like data
 as it relates to
like the legibility project
 of the state and
 individual liberty. Or
on the flip side, there
 is some possibility for a
 new role of like an
advocacy for data
 scientists where they might
 partner with communities
and understand that data
 and models are not
 neutral, but actually are,
intrinsically bias and it's up
 to sort of the data
 scientist to choose where
that bias going to fall.
 And if you see any
 examples of that in practice.
>> Yeah. Definitely.
 So the Ford Foundation
 has been funding
over the past
 four or so
 years technology fellowships
in organizations
 like the ACLU
 for the expressed purpose
of creating a new
 career path for folks who
 come out of places like MIT.
And it's,
 there's a really interesting
 history there, actually.
I didn't know this
 until I got involved
 with this technology fellows
program that was
 Ford was running. But
 Ford was actually
instrumental in a creation
 of a public, public
 interest legal career track
as well. Like, 50 years
 ago. [ Laughter ] When at
 that time, you know, you
went to law school,
 you were either going to
 work for a private
corporation or you're going
 to be a prosecutor
 basically. There was like
nothing really else. And
 organizations like the ACLU
 had existed    or
rather just the ACLU
 existed. There weren't really other
 groups like it for
many years. And it
 was a really deliberate process
 to create, you know,
funding opportunities for
 lawyers to work at
 organizations dealing with
the issues around climate
 and human rights and things
 like that. And the
exact same thing is
 happening right now with
 respect to technologists in
civil service type work.
 So that's, I think really
exciting and totally important.
>> I would look,
 as a person who spent
 most of my time working
with young people in
 the criminal justice system, I
 have this like weird
line that I straddle.
 When I see the
 graphic about Facebook has
this dataset that says this
 person    is about to
 get in a relationship,
working with young people
 in the criminal justice
 system, I would
imagine that they're, there
 is a similar graphic that
 looks like a gang
crime’s about to happen.
 A violent gang crime is
 about to happen. How
do we capture that data
 and use it in a way
 that addresses the needs of
the kids who are literally
 like    we're telling you
 this out loud because
we're kids and something
 bad's going to happen
 if you don't do
something I'm going to complete
 this act like a kid
 who runs into a fight
in school and he's
 like, please, somebody catch
 me before I actually
have to throw a punch. How
 do we, how do we do
 that and treat the child
as opposed to use
 that information to predictively go
 out and police that
kid, search their house, or
 whatever. I would love to
 see a way that we
straddle those two lines,
 accomplishing the goal that
 we want to
accomplish, which is protecting
 the safety of the
 person, you know, and
other young people around
 them, without going to
 that state where Kate
is freaking out.
[ Laughter ]
>> That's right,
 there was    what?
 Is there
>> Um, Joi, you were
 pushing for kind of paramedic
 change in the last talk.
I was just curious if
 for each of our speakers,
 if you support the
idea that, that, that we should
 look at things that way? Or
 if it's too big? If
you do agree, what do
 you think a paradigm change
 would look like? And
if it helps as an example,
like talking about ownership
 of data versus a
rights based approach to
 data. Is kind of something
 that would have a
knock-on effect
 throughout the
 system.
>> Um, I don't
 understand the last point of
 your question that's because I'm
 I'm not that bright.
 [ Laughter ]
But when I saw
 Joi's list of things, I
 see people doing criminal justice
reform work at
 step 12. Let's change
 the laws, let's change
the policies, let's change
 the, you know    let's
 push on the things that say
these are the things
 that we have to do.
 Where I came into building
my organization, I
 was like let me start
 with paradigm shift because
I don't care if it
 is a risk assessment tool or
 it's a new piece of legislation
or something that's
 been repealed or a
 new policy that's been issued
by a prosecutor,
 if you don't change the
 paradigm by which those
people are coming into the
 job and the thing that they
 want to do is, is
actually the thing that
 we're told, which is fairness
 and justice and safety
and all these things,
 if you don't change the
 paradigm of what those
things are, I will use
 the risk assessment tool to find
 a way to continue to
do the thing that I
 that I do right now. If
 I, if you reduce minimum
mandatory sentence, if
 you repeal minimum mandatory
 sentences for a
drug crime, instead of
 charging a person with
 one count of possession
with intent to distribute crack
 cocaine, I will now count
 every rock in that
bag and charge you
 with each one of those
 rocks thereby end rounding
your repealing of that
 minimum mandatory sentence in
 a way. There will
always, we will always find
 a way, as people in
 the system, we, we talk
about the criminal justice
system prison and all these
things as if they're
entities that are driving
 behavior and it's actually
 the other way around,
we, as the people who
 are in it, are, are
 driving the outcomes and
efficacies of those institutions.
And so if we can, and
 again, this is why I
like really heavily double
 down and focus on
 prosecutors, if we can
change that paradigm, which
 is more about giving
 tools in empathy and
understanding than it is
 trying to shift the
 entire incentive structure, then
I, I think we can turn this around.
>> I, I    I agree
 with all of that, and I
 will say that I sometimes worry
about my own work
 that we're doing is not
 focused enough on the,
on the paradigm shift piece
 of this and that we're
 tinkering at the magics.
And I was in
 a conversation recently with
 a group of people
about autonomist weapons that
 I think drives this
 home where you could
    let's say,
 hypothetically, that we could come
 up with a drone
that could make
 more precise determinations
 then any human
being ever could about
 targeting the people that it
 intends to target and
avoiding collateral damage to
 the people and the
 things and the property
that it doesn't intend
 to target. I think applying
 a traditional paradigm or,
you might actually say
 that's, that's great. You're,
 you know, you've
created even more attention,
 a tool, of, of, weapon
 or, or tool of
destruction and you've
 avoided asking the really
 big picture question
which is geez, do we
 even want machines at all
 remotely involved in this
process for some of the
 higher and moral reasons that it
 was alluded to at the beginning.
>> Yeah. I mean, I,
 I    the paradigm that I
 would like to see shift is,
I think, even more
 radical then, then that.
 It's    especially with
respect to the criminal
 legal system. Let's not actually
 invest in it at all.
 [ Laughter ]
Let's invest in other
 things. Right? So people, you
 know, a good example
of this problem
 is what's been
 happening in Massachusetts
with criminal justice
 reform. It's omnibus
 criminal justice reform
package that is
 now being worked out
 and in conference committee
between the
 house and the senate
 here. And risk
assessment tools are a
 piece of that. The
 reason that somebody wanted
to bring risk assessment
 tools into this C.J.
 pack subject because there
was a court ruling at
 the Mass high court a
 couple years ago holding
that you can't
 basically, you know, the
 Massachusetts Bell Statute
doesn't allow for judges
 to hold people because they
 can't afford to pay
to get out of jail or
 to stay out of jail. That
 it is unconstitutional is a
finding that some courts in
 other parts of the country
 have held. This is
like a traveling lawsuit
 that some advocates are
 doing. And the response
to that, instead of
 saying huh, you know,
 70% of people in
Massachusetts jails are there
 pretrial and a lot
 of them are there
because they're poor
 maybe we should figure
 out just an entirely
different way of thinking
 about this problem which
 would start actually
before the prosecution. It
 would start before the
 arrest even. Are you
poor? Is that why
 you're stealing something?
You know, do you have a
drug problem? Is that
 why you've been arrested
 for drug, you know,
selling drugs? Maybe we
 should deal with that
 problem instead of
investing more resources
 in the criminal punishment
 system. We don't
really want
 to do that
 as a society.
 [ Laughter ]
Those are, you
 know, questions that were
 frankly to address.
And so instead,
 policymakers, like, you
 were saying, Adam,
go to this really
 12    you know, number
 12 level conversation of,
well, maybe we
 should introduce a tool.
 And you know,
I think, the danger there
   people, I used to
 get asked a couple years ago
when predictive policing
 was a really hot
 topic in the press because
it was, it was
 starting to bubble up in
 some cities, what do you think?
At the ACLU
 about predictive policing? And
 I would always say things
that I think really irritated
the journalists because
 I kind of like refused
to answer the question.
 You know? I would say like,
 I don't care    this,
we shouldn't be doing
 this. You know? We should
 stop giving the police resources
 [ Laughter ]
When other
 departments and entities
 and government
really need them. Right?
 And a great example
 of this, some really
interesting work that
 some of my colleagues
 are doing in New Jersey
is, you know, just
 think about it this way, if
 there's a health crisis or,
you know, a crime or
 a mental health issue, this
 happened in Boston a
few years ago, there
 was a young black man
 who was really having
mental health breakdown and
 he was sitting on his
 front stoop in the
south end and wouldn't get
 up and his mother didn't
 really know what to
do. Who could she call?
 I mean there's no one
 to call besides the police.
Right? There's no one to
 call in our society who
 will show up who's paid
by the government other than
 the cops. And she said,
 on the phone with
9 1 1, please don't
 send the police. He really,
 he has a negative reaction
to people in uniform. He's
 not going to react well
 if the police come. And
guess what happened? The
 police came and they
 killed him. Because he
had a negative reaction
 just as his mother warned
 them that he would.
But we have no other
 way of dealing with that.
 Right? I mean, the police
are our response to
 everything as a society.
 Everything that, that goes
wrong. Fundamentally. So
 yeah. My colleagues in
 New Jersey are
looking at a paradigm shift
 that I think is really
 exciting and kind of taxed
with the abolition movement
 which is to create
 like a municipal
response team, basically,
 that is composed of
 mental health workers
and caseworkers and people
 who can help people
 in crisis when, you
know, handcuffs and chains
 is really not, not
 what anybody needs. Not
what the person who's in
 crisis needs and certainly not
 what our society needs either.
>> Thank you. We're out
 of time, but I just
 want to end with maybe
pushing it back to
 the class which is, you
 know, Harvard has more
supreme court justices that
 you guys have come
 out of your place
that inventor of libertarianism
 came out of MIT.
 And mean, we have
people who come out
 of our institutions that
 do refrain things. And
questions like    is
 the, should we redesign
 the criminal justice system?
Should we retrain prosecutors
 differently? You know, and
   you know, I
mean, Harvard still thinks about
 money, but it's part of
 their job to sort of
at least for some of
 them to    ask those
 really hard questions in these
sort of first principle things.
 And so I think it's
 the duty of the students
like you who care
 about this who are equipped
 with the credentials and
the access to, to speak
 up and, and think these
 thoughts. I think that's
the point of this
 class is to understand the
 mechanics so that you
understand all the second order
 effects, but also to go
 up to the first
principles and, and we
 have people who are
 also connected to the
ground. And I think it's
   to, to be honest, that,
 the elements layers, I think
all of those layers have
 to happen and they all
 have to be coordinated.
You can't just do one. But
 you have to also be able to
 go all the way up to
the top. And
 I think that's kind
 of our job.
>> This just goes to
 the graphic you showed of
 all of the founding fathers.
And this is
 where we sort of
 always intention and conflict.
It's do we just
 blow this thing up and
 build another system which is,
I mean, fundamentally.
>> That's what's
 trying to happen right
 now kind of.
[ Laughter ]
>> Then you, then
 you have to recognize that,
 that doesn't look much different
than it does
 now and the people
 who are making these decisions
and so how
 do you get them
 to the place where, were
at a tipping
 point why we're not
 exactly abolition but we're opening
their minds to
 something so much greater
 than can have benefits
to you and it
 doesn't mean that you're
 doing a bad thing.
>> Thank
 you, Adam. Thank
 you, guys.
>> Thanks.
[ Applause ]
