[MUSIC PLAYING]
AVI GOLDFARB: This project was trying to get a simple, easily understood view of what the current excitement around AI means.
And we came to this
because of where we are
at the University of Toronto.
And a few blocks down from
us in the Business School
at the University
of Toronto, there's
this Computer
Science Department.
And in the Computer
Science Department,
a lot of the key innovations
around machine learning and AI
were developed.
And we together-- me,
Ajay, and Joshua--
run an organization called
the Creative Destruction Lab.
And what the Creative
Destruction Lab does
is it helps early stage
science-based start-ups scale.
We started it in 2012.
And as we started,
in our first year,
we had a couple
of companies that
were calling
themselves AI companies
before anyone was really
thinking about AI companies,
because they were driven by PhD
students out of the Computer
Science Department--
mostly Hinton's PhD students.
And in the second year,
there were a couple more.
And in 2014, there
were a few more.
And by 2015, there
was this flood
of companies calling
themselves AI companies.
And we realized
this was something
we should get our heads around.
Luckily at the same time Ajay
and I were on sabbatical.
So we had some time,
we could think,
and we were actually on
sabbatical just down the road.
We were at Stanford.
And we got to see both the
flood of companies coming
into our lab in Toronto
and the excitement that
was starting to percolate
here in the Bay Area.
And we realized that
we had a bit of a lead,
as we saw these
companies first, and it
was time to really get our heads
around what this all meant.
And so the insights
in the book and what
we're going to talk about
for the next 40 minutes or so
are explicitly
based on one, seeing
all these now hundreds
of science-based AI
start-ups coming
through the lab,
and two, our experience of
being right here in essentially
the center of it all
on the innovation
side, the commercialization
side, to figure out, well,
what's really going on.
So in this crowd, we don't need to be shy about the fact that there's a lot of hype around AI.
There's some sense that
artificial intelligence
is going to change everything.
We see these headlines
almost every day.
And we also see
these other headlines
with a little bit of anxiety.
Wait, if the machine is
intelligent, what about me?
I thought that was what I did.
That's what humans do.
If the machines are
intelligent, what's left for us?
And underlying this
is a lot of confusion
really about what is
artificial intelligence.
So if you read the press,
you have some sense
that artificial intelligence
is either on the happy side,
something like C-3PO, which
is a robot who essentially does
everything that a
human can do, but is
friendly and nice and helpful.
Or perhaps, we have Skynet
from "The Terminator,"
where intelligent machines are
going to take over the world.
Now, you may or may
not think that's crazy,
but it's important to
recognize the reason we're
talking about AI in 2018, and
we weren't talking about it
in 2008 or 1998, is not
because of this technology.
It's because of advances
in machine learning.
And so when we think about what
these advances are, why are we
talking about AI today?
It's because of
prediction technology.
And so we should think about
the recent advances in AI
as advances in prediction--
better, faster,
cheaper prediction.
To try to get your heads
around why that might matter
and why that might
be fundamental,
it's useful to go back a
technology and remember 1995.
So looking around
the room, there
might be six of you
who remember 1995.
I remember 1995.
It was a really exciting
year in technology.
So why was 1995 such an
exciting year in technology?
Well, the last vestiges of the public internet, NSFNET, were privatized.
Netscape had their IPO, where they were valued at billions of dollars with zero profit.
At the time that
was really crazy.
And Bill Gates wrote his
internet tidal wave email
saying, this is the
technology that we
need to focus our attention on.
So Microsoft--perhaps they had missed it, perhaps it had been in the background--but in 1995, he realized this was the future of computing, and the whole company started to change direction toward the internet.
And so everything
seemed to be changing.
And people stopped talking about the internet as an exciting new technology and started talking about it as a new economy.
That the old rules
didn't seem to apply
and it was a whole new set of
rules that were going to apply.
We didn't need our
economics textbooks.
We had to write new ones.
Now, there was one set
of people who said,
it's not a new economy.
It's the same old
economy, we just
need to understand that
the costs of certain things
have fallen.
The costs of search have fallen.
The costs of
reproduction have fallen.
The costs of
communication have fallen.
And once we understand
which costs have fallen,
we can apply the
same old economics.
And the dominant academic economics textbook writer at the time--who happens to be sitting in this room, your chief economist, Hal Varian--was perhaps the leader in really thinking that through.
And he and Carl Shapiro
wrote this book,
"Information Rules," which
laid out explicitly that idea.
That the old
economics still apply.
You just need to think through
what's changed, what's cheaper,
and then we can draw on
decades, if not centuries,
of economic ideas to
understand the consequences.
Now, let's jump back another
technology generation to think
this through a little bit more.
So this is a semiconductor.
This is the technology that's
underlying your computer.
And when we talk
about Moore's Law,
we think about,
well, it's doubling
the number of transistors
in a semiconductor every so
many months.
How do we really
think that through?
What does your
computer really do?
Well, as an economist,
I think of it this way.
We think of it as: OK, it drops the cost of something. Something used to be expensive, and then semiconductors came along and computers came along, and that thing became cheap.
And so what does your
computer really do?
It actually only does one thing.
Your computer does arithmetic.
That's it.
All your computer
does is arithmetic.
But once arithmetic
is cheap enough,
we find all sorts of
opportunities for arithmetic
that we might not have
thought of before.
This is economics 101.
This is the idea that demand
curves slope downward.
When something is
cheap, we do more of it
and we buy more of it.
But because arithmetic
became so much cheaper,
there were all sorts of crazy
applications for arithmetic
that most of us might
not have thought of.
So the first applications for machine arithmetic, for machine computing, were the same as the applications for human computing.
So we had artillery tables.
So we had cannons,
they shot cannonballs.
It's a pretty difficult
arithmetic problem
to figure out where those
cannonballs are going to land.
We used to have teams of humans whose job title was "computer."
And you might have seen the
movie, "Hidden Figures."
That's what they were doing.
These were humans
doing arithmetic
to figure out classic
arithmetic problems that
were of first-order
importance to space
exploration and the military.
Then a handful of other
human arithmetic problems
started to be replaced
by machine arithmetic.
Accounting-- accountants used
to spend their time adding.
If you look at what the accounting curriculum in the 1940s and 1950s was, a classic homework problem was to open the white pages of the phone book and literally add up all the phone numbers on the page.
Why was that your homework?
Because that's what you
would spend your time
doing after graduation.
Your life as an accountant
was spent adding.
Accountants don't add anymore.
This is just not what they do.
On the positive
side, there's still
lots of jobs for accountants
and there's lots of accountants,
because it turned out the
people who are best positioned
to do the arithmetic
were also best positioned
to understand what to
do when the machine did
the arithmetic for them.
But as arithmetic became
cheaper and cheaper and cheaper,
we found all sorts of new
applications for arithmetic
that we might not have
thought of before.
It turns out that when
arithmetic is cheap,
games are an arithmetic problem.
Mail is arithmetic.
Music is arithmetic.
Pictures are arithmetic.
And once arithmetic
became cheap,
we found all these new
applications for arithmetic
that we might not have
thought of before.
Your computer does arithmetic,
but because it does it
so cheaply, we end up finding
arithmetic problems everywhere.
And so that gets us to the
current excitement around AI.
Here is one of the foundational
technologies behind it.
A representation of a
convolutional neural net.
What should we think about here?
Well, same graph.
It drops the cost of something.
But in this context,
we think it's
useful to think of it as a
drop in the cost of prediction.
Prediction is using information
you have to fill in information
you don't have.
It could be about the
future, but it could also
be about the
present or the past.
It's the process of filling
in missing information.
And what we've seen is that
as prediction gets cheaper,
we're finding more and more
applications for prediction,
just like with arithmetic.
So the first applications for machine prediction are exactly the same as the applications where we were doing prediction before we had these new tools.
Loan defaults-- you walk into
a bank, you want to get a loan,
the bank has to
decide whether you're
going to pay them back or not.
That's a prediction problem.
And increasingly, we're
using machine learning,
we're using AI tools,
to make that prediction.
The insurance industry
loves these tools.
The insurance industry
is based on prediction.
Are you going to
make a claim or not,
and how big is that
claim going to be?
That's a prediction problem.
And so over time, we've
seen increasing use
of machine prediction
replacing other older tools.
Now, as prediction's
gotten cheaper,
we found a whole bunch
of new applications
for prediction, new
ways of thinking
about prediction that we might
not have thought of before.
Medical diagnosis is
a prediction problem.
What does your doctor do?
They take information
about your symptoms
and fill in the missing
information of the cause.
That's prediction.
If you asked a doctor
20 or 30 years ago
if they were doing
prediction, they
might not have realized it.
But now it's pretty
clear that diagnosis
is a prediction problem.
Object classification is a prediction problem. Your eyes take in light signals and fill in the missing information of what that object is.
Autonomous driving is
a prediction problem.
There's the obvious
prediction problem
of predicting what those
other crazy drivers are doing.
But actually the key insight
in the recent advances
in autonomous driving is
much more about, well,
all we have to do is predict
what a good human driver would
do.
Once we can predict what a
good human driver would do,
then we can create vehicles that
drive like good human drivers.
So it's a reframing of
this prediction problem.
And that's a key
element of the art
of understanding and
identifying new opportunities
from cheap prediction.
How do we reframe old
problems, whether it's
medical diagnosis, object
classification, or driving,
as prediction problems, as
processes focused on filling
in missing information?
That's all well
and good and we've
thought about, OK, well, we're
going to do more prediction.
But other things change
in value as well.
That's where the
anxiety comes from.
The anxiety is, well,
if the machine's
doing the prediction, what's
the human going to do?
And so this is the
other econ 101 concept
that still applies in the
context of cheap prediction.
So when the price of coffee
falls, we buy more coffee.
That we've talked about already.
Demand curves slope down.
The second thing to note is
when the price of coffee falls,
we buy less tea.
So when coffee is
cheap, we're going
to buy coffee instead of tea.
When machine
prediction is cheap,
we're going to have machines do
the prediction and not humans.
But the important
thing to remember
is there are complements
to prediction.
So just like when coffee
becomes cheap, we buy more cream
and we buy more sugar.
So when coffee is cheap, cream and sugar become more valuable.
The key question that you
need to ask both yourself
and your organization is,
what are the core complements
to prediction?
What are the cream
and sugar that become
more valuable as
prediction becomes cheap?
And the way to
think that through
is to recognize that
prediction is valuable
because it's an input
into decision-making.
That's why prediction is useful.
That's why this is a
transformative drop
in the cost, as opposed
to an incremental one.
And decision-making
is everywhere.
You make big decisions.
You make decisions on
what job should I take?
Who should I marry?
Should I marry?
When should I retire?
And you make small decisions.
Should I write that down?
Should I scratch my face?
Should I watch that bit again?
These decisions are everywhere
and because prediction
is an input into
decision-making,
prediction ends up being foundational.
The important thing to
remember, though, as well
is that prediction is
not decision-making,
it's a component
of decision-making.
And as we try to identify the cream and sugar--the things that become more valuable as prediction becomes cheap--we need to think through the other elements of a decision.
And so in thinking
through decision-making,
what we found it useful to
do is put some structure
around the components
of a decision.
And broadly speaking-- we
put prediction at the center,
because that's what's changed--
but all sorts of
other things are
inputs into a final
decision and an outcome.
So your data is key.
That's not news to
most people here.
Data is increasingly valuable
because prediction is cheap
and it's an input
into prediction.
Actions are in many
ways more valuable,
because there's no point
in making a decision if you
can't do anything about it.
And so being able
to do something
with your cheap prediction
is increasingly important.
But the one of these that I want to talk about today is judgment.
And judgment,
broadly speaking, is
knowing which
predictions to make
and what to do with those
predictions once you have them.
You can't make a decision
unless you know what
to do with your predictions.
And I don't know if you guys
have seen the movie "I, Robot."
Some people have,
some people haven't.
But in this movie,
there's one scene
that makes it very clear
what this distinction
between prediction
and judgment is.
So Will Smith is the
star of the movie
and he has a flashback scene
where he's in a car accident
with a 12-year-old girl.
And they're drowning
and then a robot
arrives, somehow miraculously,
and can save one of them.
And the robot apparently
makes this calculation
that Will Smith has a 45% chance of survival and the girl only an 11% chance.
And therefore, the
robot saves Will Smith.
And Will Smith
concludes that the robot
made the wrong decision.
11% was more than enough.
A human being would
have known that.
So that's all well and good, but it assumes the robot values his life and the girl's life the same.
But in order for the
robot to make a decision,
it needs the
prediction on survival
and a statement about
how much more valuable
the girl's life has to
be than Will Smith's life
in order to choose.
So this decision that
we've seen, all it says
is Will Smith's life is worth
at least a quarter of the girl's
life.
That valuation decision
matters, because at some point
even Will Smith would
disagree with this.
At some point, if her chance
of survival was 1%, or 0.1%,
or 0.01%, that
decision would flip.
That's judgment.
That's knowing what to
do with the prediction
once you have one.
And so judgment is the
process of determining
what the reward is to
a particular action
in a particular environment.
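To make that split concrete, here's a minimal sketch of the decision rule in code. The survival probabilities come from the scene; the value weight is the hypothetical judgment input that the prediction alone can't supply:

```python
# Prediction (survival probabilities) vs. judgment (relative values).
# The probabilities are from the movie scene; the value weight is a
# hypothetical judgment parameter.

def who_to_save(p_smith, p_girl, value_girl_vs_smith=1.0):
    """Save whoever has the higher expected value of rescue."""
    ev_smith = p_smith * 1.0
    ev_girl = p_girl * value_girl_vs_smith
    return "Smith" if ev_smith >= ev_girl else "girl"

# With equal valuations, the prediction alone decides: save Smith.
print(who_to_save(0.45, 0.11))  # Smith

# Judgment flips the decision once the girl's life is weighted more
# than 45/11 (about 4.1) times Smith's -- equivalently, Smith is saved
# only if his life is valued at at least roughly a quarter of hers.
print(who_to_save(0.45, 0.11, value_girl_vs_smith=5.0))  # girl
```

Turning the value weight until the answer flips is exactly the judgment step: the prediction machine supplies the probabilities, and a human has to supply the weight.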
And to understand
the consequences
of cheap prediction and its
importance in decision-making,
I'm going to turn it over
to Ajay to talk about tools.
AJAY AGRAWAL: Great.
OK.
Thanks, Avi.
And thanks Hal and your
colleagues for inviting us
here to talk about our book.
So this triangle pyramid is
the representation of the five
sections of the book.
And we start off
with prediction,
which goes over the bits
that Avi just covered.
And in essence, there
are parts that you'll
be very familiar with, in terms
of just the technical parts
of prediction.
Also, in what ways machine prediction is similar to or different from our traditional prediction tools. And the economics of prediction.
And so the essence of Avi's point is that there are three key insights.
Insight number one is that
when prediction becomes cheap
we use more of it.
Insight two is that when prediction becomes cheap, it lowers the value of the substitute for machine prediction--that is, it lowers the value of human prediction. And insight number three is that as machine prediction becomes cheap, it increases the value of complements to prediction, like input data.
So if data is the new oil, why is it the new oil? It's not new--we always had data. But it's oil now when before it wasn't so much, because prediction has become cheaper. So the data we had before is more valuable as a complement.
And our human judgment
becomes more valuable
as prediction becomes
cheaper, because it's
a complement to prediction
and decision-making.
And actions become
more valuable,
because we can apply our actions
to higher fidelity predictions.
So that's section one.
Section two is on decision-making, which is about how these components come together.
So effectively,
what we're doing is
we're taking on the one hand,
these recent advances in AI
are a new technology
for prediction,
but we're applying them to
50 years of decision theory.
So we've got a
well-established theory
and we're just dropping this
super power prediction tool
inside a well-established
theory to understand
what are the implications
for decision-making.
Section three, on tools, is perhaps the most practical bit of the book.
I'm just going to describe a
couple of the highlight bits.
When we are building
these tools--
effectively, every AI that
we build right now is a tool.
It performs a particular
prediction task.
And the way we think of this is we divide things up. Within an organization like this one, there'll be a bunch of workflows. A workflow is anything that turns an input into an output. So, for example, producing a product would be a workflow.
We can divide the
workflows into tasks
and each task is predicated
on one or a few decisions.
And AIs work at the task level.
So AIs don't do
workflows, they do tasks.
And so just to give an
example-- a lot of you
will probably have
seen this-- this
was an interview with
the CFO of Goldman Sachs.
It starts off with this very
dramatic opening sentence
that at its height back in 2000,
the US Cash Equities Trading
Desk at Goldman
employed 600 traders,
and now there are just two left.
But then it goes on to, I
think the more important bit--
it talks about the AIs
moving to more automation,
moving to more complex
problems, like trading
currencies and credit.
They emulate as closely as possible what a human trader would do. You can change "emulate as closely as possible" to "predict"--effectively, they predict what a human trader would do.
But then, most interestingly, further down, they break up the IPO process into tasks.
So Goldman has already
mapped 146 distinct steps
taken in any IPO.
So what we do when we're
working on these problems,
we take the workflow,
we divide it into tasks.
In this case, the Goldman
Sachs IPO process workflow
can be broken into 146 tasks.
And then we effectively
just estimate
what's the ROI for building
an AI to do that task.
And then we rank
order the tasks.
Putting the ones with
the highest ROI--
the Return On Investment--
at the top of the list.
And then we work our way down.
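The ranking exercise itself is simple enough to sketch in a few lines. The task names and ROI figures below are invented for illustration--they are not Goldman's actual 146 steps:

```python
# Hypothetical sketch of the task-ranking exercise: break a workflow
# into tasks, estimate the ROI of building a prediction machine for
# each one, and work down the list. All names and numbers are invented.

tasks = [
    ("draft prospectus sections", 1.8),
    ("predict investor demand",   4.2),
    ("screen comparable IPOs",    3.1),
    ("schedule roadshow",         0.6),
]

# Rank tasks by estimated ROI, highest first.
ranked = sorted(tasks, key=lambda t: t[1], reverse=True)
for name, roi in ranked:
    print(f"{roi:>4.1f}  {name}")
```

The point of the exercise isn't the arithmetic; it's that the workflow, not the company, is the unit of analysis, and each task gets its own build-or-skip decision.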
And so in terms of just when
organizations show up and say,
where do I even start?
This is just a coarse description of how we start.
And obviously, there are many AI projects here. And this slide is old--I suspect the number of AI tools here has at least doubled or probably tripled by this stage.
We have large
companies that show up
to our Creative Destruction
Lab in Toronto and they'll say,
hey, we've got three AI pilot
projects in our company,
or four or five, are we
at the frontier of AI?
And once we start breaking
these things down--
workflows into tasks and
figuring out where we can get
a lift from building
a prediction machine--
we see that there are often
hundreds if not thousands
of opportunities to do that.
And of course, this organization
is at the frontier of that.
So one of the tools that
we have found very helpful
for companies that
are just starting
to wade into applications
of AI is this thing
that we call the AI Canvas.
But effectively, it is just
taking those components
that Avi described and writing
down the elements in English.
So first of all, what is the
key prediction of this task?
And you'd be surprised: people who do a task every day struggle at first to identify the prediction that underlies the task.
In other words, what
we do is we look
for elements of uncertainty.
And prediction
doesn't add any value
when there's no uncertainty.
So our first clue for where we're going to get some traction from deploying an AI is: where are we operating under conditions of uncertainty?
And then what is
the key prediction?
And then once an AI
delivers that prediction,
what's the human judgment that's
applied to the prediction?
And what's the action that we take as a function of having the prediction and the judgment, and what's the outcome?
And then three types of data.
The data we use to train
the AI, the data which
we use to run the
AI, and the data we
use to enhance the
AI as it operates.
But the key point here is that senior-level people--whether it's at a bank, or an insurance firm, or a manufacturing firm, or in drug discovery--who have never written a line of code can sit down and start filling these things out.
And within a day, a
senior management team
can have a dozen or
a couple of dozen
of these and all of a sudden
feel a level of comfort around,
OK, I get the basic idea.
Of course, they can't build an AI, but they now have a framework that they can hand to people who can build it.
And their thinking largely centers around: what is the core prediction of this task?
So we found this thing to be
just a useful way to get people
started.
So most of the AI tools that we
build are like any other tools.
We build them and we use them
in the service of executing
against a given strategy.
So the organization has some kind of strategy, and--just like a word processor or a spreadsheet--we build an AI tool that simply makes us more productive.
So tools are generally there
to enhance productivity,
to enable us to better execute
against the given strategy.
But occasionally,
these AI tools so
fundamentally change
the underlying economics
of the business, that they
change the strategy itself.
And so I want to just spend a
couple of minutes on AI tools
that impact strategy.
And when they do that,
a common vernacular
for this type of phenomenon
is what some people
refer to as disruption.
So again, this is not an audience where this is any surprise, but if we were giving this talk two or three years ago, it would have been largely a talk about "if." It would have been: if we can achieve this in AI, then wouldn't this be interesting. Or: this would be possible if we could achieve that.
Now, over the last 24 to 36 months, given the number of proofs of concept that we've had--whether it's in vision, or natural language, or motion control--I don't think most of these are ifs anymore.
We know now that
they are plausible
and so now it's just
turning the crank
and moving the predictions
up to commercial grade.
So this is now all a
conversation about when,
not if.
So here's a thought experiment that we use for strategy. The basic idea is what we call science fictioning.
And the thing here,
though, is that it's
very constrained
science fictioning,
which is that the science
fictioning is predicated
on a single parameter
that can move.
And so the thought experiment
is imagine a radio knob,
but instead of turning up
the volume when you turn up
the knob, you are turning up the
prediction accuracy of your AI.
So that's the only thing that
you're allowed to manipulate.
And so you do that
and then you just
think through what are the
consequences of doing that.
So a useful thought experiment
is to take this idea
and apply to an AI that
everyone's familiar with, which
is the recommendation engine.
So for example, the
recommendation engine
on Amazon.
And so what's interesting about
this is that it's a useful way
to think about how
this could have
an effect that is non-linear.
So we go on to Amazon.
Everybody here has shopped on Amazon and has a feeling for how this recommendation engine works.
You're shopping around, it
recommends you some stuff.
And for Avi, Joshua, and me, it is on average about 5% accurate, meaning out of every 20 things it recommends to us, we buy one.
And given the fact
that it's pulling
from a catalog of millions
of possibilities, the fact
that it serves us up 20 and
we choose one, is not too bad.
And so the process, of course,
is that we go on their website,
we browse around, we
see things we like,
we put them in our
basket, we pay for them.
An order shows up at an
Amazon fulfillment center,
and some human gets
that on their tablet,
and the Kiva robots dance
around the fulfillment center.
They bring the stuff to the human. The human picks the items out, puts them in the box, puts the label on the box, and ships it to your house.
It arrives in the
back of a truck.
And then someone knocks on your door. They ring the doorbell. They put the thing on your porch.
You open the door.
You bring in the box.
You open the box.
And then you've got
your thing from Amazon.
We can generalize that by saying
that this is a business model
of shopping, then shipping.
So we shop for the stuff
and then Amazon ships it.
And so the thought experiment is: now imagine the recommendation engine, and every day the people on the machine learning team at Amazon are working away at turning that knob.
And so maybe now
it's at 2 out of 10.
And they enhance the algorithms,
they collect more data.
3 out of 10.
They acquire a data set, like Whole Foods.
They learn more about our
purchasing behavior offline
and they get up to a 4
out of 10 or 5 out of 10.
And there is some number--and it doesn't have to be a Spinal Tap level of prediction accuracy--maybe it's a 6 out of 10, maybe it's a 7 out of 10--but there's some number where, when they get to that level of prediction accuracy, somebody at Amazon says, we're good enough at predicting what they want, why are we waiting for them to order it?
Let's just ship it.
And so why would they
do that even when
they know that they're not at
a 10 out of 10 or 11 out of 10?
Because let's say that they
ship us a box of 10 things.
And we open the door,
we open the box,
and we like six of the things.
We keep them and we put four
of them back in the box.
In the absence of them having
preemptively shipped it to us,
we might have only ordered two
of those things from Amazon.
And now we are
taking six of them
and preempting four
things that we might have
bought from their competitors.
And the benefit of selling
us that extra stuff
may outweigh the cost of
dealing with the extra returns.
And then, to lower the cost of dealing with those returns, maybe they invest in a fleet of trucks that drive down your street once a week and pick up all the things from you and your neighbors that they dropped off that it turned out you didn't want.
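The economics of that flip can be put on the back of an envelope using the item counts from the talk; the per-item margin and return-handling cost below are assumed numbers, not Amazon figures:

```python
# Back-of-the-envelope sketch of the ship-then-shop trade-off, using
# the item counts from the talk. The per-item margin and the
# return-handling cost are hypothetical assumptions.

margin_per_item = 10.0  # profit per item the customer keeps (assumed)
return_cost = 4.0       # cost of handling one returned item (assumed)

# Shop-then-ship: the customer orders 2 items on their own.
profit_shop_first = 2 * margin_per_item

# Ship-then-shop: send 10 items; the customer keeps 6 and returns 4.
profit_ship_first = 6 * margin_per_item - 4 * return_cost

# Preemptive shipping wins here whenever the margin on the 4 extra
# items sold exceeds the cost of handling the 4 returns.
print(profit_shop_first, profit_ship_first)  # 20.0 44.0
```

Under these assumptions the model flips as soon as the margin on an extra item exceeds the cost of processing a return, which is why the prediction accuracy doesn't need to be perfect, only good enough.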
So why is this interesting?
It's interesting because as you
think about that recommendation
engine-- we've all seen
it, we've been using it--
and it's been getting
a little bit better,
a little bit better, a
little bit better over time.
But it's not dramatic.
It doesn't change a strategy.
It's just a slightly better
recommendation engine.
But the thought experiment
here is the non-linearity.
In other words, it gets
better, better, better.
And we just feel it getting
incrementally better.
And some of us don't even
feel it getting better.
But when it crosses a line
that doesn't mean perfect,
there's a potentially
step function change
in the effect on the business.
And all of a sudden,
they start shipping--
so they change the model
from shopping then shipping,
to shipping then shopping.
They ship to us and then
we shop on our doorstep.
And so we find that to be a
very useful thought experiment.
We go literally AI by AI by AI.
We go through each AI and we say, OK, what happens as we turn the knob?
And is this just a tool that incrementally enhances some part of the process, or is this something that, when the knob gets far enough along, will have a transformational effect on the business?
And in the case of Amazon, who
knows whether they'll do it,
but it's not like they've
never thought of it.
They have this patent, and a couple of others, on what they call anticipatory shipping.
And they're piloting versions of
this already in narrow markets.
But whether or not they
do it is not the point.
It is that you can imagine
how this type of thinking
about the process can have
non-linear effects on strategy.
So from our perspective, when
we see at this organization
an announcement that you're
moving from mobile first
to AI first as a strategy,
from an economics lens
our question is, well,
what does that mean?
In other words, to what extent is this just the pixie dust that everyone in the Bay Area talks about--being AI first--because if you sprinkle AI on something, its valuation doubles? And so our thesis is: no, there's really an underlying strategy here--and what is it?
So here's an outsider's perception of what it means at Google to have an AI-first strategy: what it means to us is that Google has put the knob at the very top of its strategic priorities.
And so in other words, from
an outsider's perspective,
when someone says AI first--
so first of all, the strategy
here before was mobile first.
So what does that even mean?
What does mobile first mean?
So from an economist's
point of view,
mobile first means
not just that you're
going to be good at mobile,
because no company will put up
their hand and say, well, we
want to be mediocre at mobile.
Everybody wants to
be good at mobile.
But what mobile
first means to us
is that the company is
prioritizing performance
on mobile, even at the
expense of other things.
So that when there's a trade-off to be made, the trade-off will be made in favor of performing well on mobile.
So what does it
mean to be AI first?
When Google announced
AI first, Peter Norvig
answered this on Quora.
And so his description
was effectively,
"With information retrieval,
anything over 80% recall
and precision is pretty good.
Not every suggestion
has to be perfect,
since the user can ignore
the bad suggestions.
With assistance, there is a much higher barrier.
You wouldn't use a
service that booked
the wrong reservation 20% of the
time, or even 2% of the time.
So an assistant needs to be
much more accurate, and thus
more intelligent, more
aware of the situation.
That's what we call AI first."
And what we would add
on to his definition
is even if it means at the
expense of other things.
So even if it means at the expense of the user's experience in the short term, or revenues in the short term, or potentially privacy in the short term.
In other words, prioritize the things that help us crank the dial, because we can transform our capabilities--in terms of what we can do--if we get our prediction accuracy high enough.
So that's why we're
making prediction accuracy
such a priority.
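Norvig's retrieval-versus-assistant contrast can be made concrete with a little arithmetic. The sketch below uses hypothetical numbers (not from the talk): a user can simply ignore a bad search suggestion, but an assistant's mistakes compound across every task it performs.

```python
# Why 80% accuracy is fine for search but not for an assistant:
# a bad search suggestion can be ignored, but every wrong booking
# is an error the user has to live with.
# Illustrative numbers only, not from the talk.

def p_at_least_one_error(error_rate: float, n_tasks: int) -> float:
    """Chance an assistant makes at least one mistake over n tasks."""
    return 1 - (1 - error_rate) ** n_tasks

# A search engine wrong 20% of the time is still useful per query.
# An assistant wrong even 2% of the time fails often over 50 bookings:
print(round(p_at_least_one_error(0.02, 50), 2))  # ~0.64
```

Even a 2% error rate leaves roughly a two-in-three chance of at least one botched booking over 50 uses, which is why the bar for an assistant is so much higher than for a suggestion engine.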
Other forms of trade-offs--
so when other CEOs heard
your announcement, we
started getting calls: well,
what does it mean that
Google is going AI first,
and should we be AI first, too?
This is another allocation
of scarce resources:
putting AI as a priority.
And so when we saw this
about moving the Google Brain
team into the office
right next to the CEO--
a year ago the Google Brain
team of mathematicians, coders,
hardware engineers sat in
a small office building
at the other side of
the company's campus.
Over the past few months,
it switched buildings
and now works right
beside the area
where the CEO and other
top executives work.
When this story came
out in the "Times,"
the part that we
felt was missing
was they never
covered who got moved.
Who became second?
In other words, when
AI becomes first,
something has to become second.
And that's what makes
something a strategy.
Is that it means an allocation
of scarce resources.
In this case, it was
the scarce resource
of space next to the CEO.
And so, as a strategy, when
people put turning the knob at
the top of their priority list,
it means that they're
doing that potentially
at the expense of other things.
So I'll just conclude
with this point here.
What we've been feeling is
some level of dissonance.
Which is on the one hand,
people coming in and saying,
hey, look, I get it.
I see the AIs, like the
recommendation engines
of Amazon and other sites, and
things like Siri, and so on.
All these different AI tools.
And they're neat.
And they're impressive.
But they're not
transformational.
They're not
transforming industries.
And so on the one
hand, they see this.
Things that are neat,
but not transformational.
On the other hand,
this is a graph
of venture capital
going into AI.
Then there are the
various countries,
whether it's France, or
England, or the US, with policies
making significant bets on AI.
Google and then a series of
other companies announcing they
were going to be AI first.
Then governments
like China having
a very aggressive
strategy on AI,
with a fair amount of capital
to support that and goals
of being the leader in AI
in some fields in 2020,
and more fields by 2025, and
dominant across all fields
by 2030.
And now potentially
accelerating that.
The president of Russia making
remarks like the leader of AI
will be the country
that rules the world.
And then a conference
that Hal was at,
that we hosted in Toronto,
where a number of people
spoke including
Danny Kahneman, who's
the author of the popular book,
"Thinking, Fast and Slow,"
that many of you may have read.
He made the following remarks.
So in other words, we expected,
partly because of his age
and partly because
he's been thinking about human
thinking for so long, that he
would be a defender
of all the things
that make us human and
distinct from machines.
So we expected him to be
the conservative wise view
at the end of the conference.
And instead, he closed our
conference with the following.
He said, "I want
to end on a story.
A well-known novelist
wrote me some time ago
that he's planning a novel.
The novel is about
a love triangle
between two humans and a robot,
and what he wanted to know
is how would the robot be
different from the people.
I proposed three
main differences.
The first is obvious.
The robot will be better
at statistical reasoning.
The second is that the
robot would have much higher
emotional intelligence."
And so here he had earlier
made reference to the fact
that robots, with
vision systems,
are able to detect
changes in emotion
from happy to sad, or
to jealous or to angry,
with a much higher
accuracy level than humans.
And not just from a visual
signal, but also an audio signal.
So with a very short
amount of audio signal,
they're able to detect
changes in emotion
much faster than humans.
"The third is that the
robot would be wiser.
Wisdom is breadth.
Wisdom is not having
too narrow a view.
That is the essence of wisdom.
It's broad framing.
A robot will be endowed
with broad framing.
When it has learned enough, it
will be wiser than we people
because we do not
have a broad frame.
We are narrow thinkers,
we are noisy thinkers,
and it is very easy
to improve upon us.
I do not think that
there is very much
that we can do that
computers will not eventually
learn to do."
So on the one hand,
we have people saying,
well, wait a minute,
these AI tools are neat
and they are impressive, but
they're not transformational.
On the other hand, we
have so many people
of power and
influence making
very big bets on AI.
How do we reconcile
these two things?
On the one hand,
nothing transformative.
On the other hand, such big
investments and speculation.
And, of course, in our view,
the way to reconcile this
is having a thesis on time.
Which is if you think
the knob will turn,
and you think that knob will
take 10 years or 20 years
to turn, then you'll make
a set of investments today
that are very different than if
you think that knob will turn
in three years, or two years,
or something much shorter term.
And, of course, that knob
will turn at different rates,
in different applications,
and with different access
to different types of data.
But in our view, from
a strategy perspective,
one of the most
important starting points
is having a thesis on time.
So two people in
the same industry
may make very different
bets based on their thesis
on how fast the dial will turn.
And so this was another
reasonably recent article
in the "Times,"
and they're quoting
Robert Work, former Deputy
Secretary of Defense.
And in this article,
talking about US versus
China, he says this is
a Sputnik moment.
And this really speaks to the
point about people's thesis
on time.
I don't think there's any
company in this country
and maybe in the
world that has treated
this technology with such a
level of urgency as Google has.
In other words, you were
early out of the gates.
As Avi was saying in the
beginning of the talk,
some of the
foundational innovations
in this field of
machine learning
came out of our backyard in
the University of Toronto.
But this is the organization
that capitalized on it first.
And so I think and we think
that there are organizations now
across industries
who are just starting
to come to the realization
that you came to three or four
years ago, and starting to
make bets in this domain.
And they're realizing that
this is their Sputnik moment.
In other words, if you
are a manager or
a leader in some part
of your organization,
these moments don't
come around once a
quarter, once a year,
or even once every few years.
This is the type of thing
that comes around once
in a generation.
And so from an individual's
perspective,
some fraction
of people are betting
their careers on what
this is going to do.
And same with some companies.
And one of the reasons
this is a privilege for Avi
and me to come and talk
about our book here
is that some are so far
ahead, in our view,
that they are making decisions
that have a humanity-level
impact.
And there is no company
more so than this one.
So it's a pleasure
for us to be here.
That's it.
Thanks.
[APPLAUSE]
AUDIENCE: So a very
interesting talk.
It occurred to me
that one thing that
might be missing from the
picture that you described
is that there's a back
reaction from society that
occurs when you deploy
some of these AI machine
learning technologies.
And I'm thinking in
particular, one thing
you mentioned that if Google,
for instance, is putting AI--
turning up this knob
on AI at the top
of their priority list--
does that mean that
they are putting things
like data privacy second?
And I don't think
anyone at this company
would agree that we would
sacrifice user data privacy
at the expense of promoting AI.
And another example
is, for instance,
with the recent Facebook
developments with respect
to Cambridge Analytica.
I mean, they've turned up
their AI knob so that users
are spending as much time as
possible on that platform,
even if it's in a
kind of echo chamber.
And what they've seen is that
there's a back reaction because
of its possible
political effects
on the outcomes of
elections, that users
aren't happy with that.
And in that case,
they might have
to turn that knob
back down, or at least
point it in a completely
different direction from where
they were going.
So I wondered if you had
any comment on that angle?
AVI GOLDFARB: So privacy
is tricky for a whole bunch
of reasons.
And the way to think
about privacy strategy
is, as I think we like
to think about it,
is it's also a trade-off.
But in the sense that
you have to have both
as a nation and a company
enough freedom to use user data
so that you can do
something with it
and train your AIs, but
enough restriction so that
your customers trust you.
And that latter point is of
first-order importance.
So if any company is
seen to be abusing
their users and
their users' privacy,
that is almost surely
going to be a bad strategy.
They're not going
to get any data,
and so that's actually going
to backfire on their AI efforts.
So I think, to reinforce your
point, it's exactly as you say:
you have to be
respectful enough.
You have to be respectful
of user privacy
and you have to
respect your users,
or else they won't let you
be AI first, because you
won't get the data to do it.
AUDIENCE: I think I wanted
to push back on the--
it seemed almost like the
Amazon recommendation algorithm
was being discounted
a little bit there,
because I've actually discovered
some great books through that.
As well as on YouTube, the
recommendation for videos based
on oftentimes it's talks
that I'm watching on YouTube,
it will recommend
someone who I've never
heard of who I actually later
become really interested in
and learn a lot from.
So I'm really interested
in this question
of how those
algorithms can get even
better at showing people things
that they didn't know existed.
And sometimes, I think with
both Amazon and YouTube,
I'm not sure exactly
how it works,
but I'm sure it's optimizing
for some sort of proxy,
like whether they're
buying, or whether they're
clicking, or watching.
But I think with things
like books and videos where
they're complex products,
there's an opportunity to get
feedback from the user, like
a qualitative feedback--
like what I liked about this
book or these types of things--
and have that be a function to
better inform the algorithm,
rather than just some proxy.
My question, I guess, is do
you know of any work being
done like that?
I guess it would
probably be more
in the domain of a university
or something than maybe
in the private sector.
But I guess the idea of using a
qualitative feedback mechanism
to better inform the AI.
AVI GOLDFARB: So there
were two questions there.
I'm going to actually answer
your first question, which
is what do we mean by the
current recommendation system
not being transformative.
I think that's
the underlying point.
What we mean by that is,
Amazon's business model
is the same business
model in many ways
as the Sears catalog
was 100 years ago.
And so how did the
Sears catalog work?
Well, you got a
catalog in the mail,
and you looked through
it, and you told
Sears what you wanted to buy.
And then they sent
that request to their warehouse
and they shipped it to you.
And as Sears improved the
development of their catalogs,
they started to
figure out things
like different
kinds of customers
want different things
in their catalog.
And so in some sense, their
recommendations got better.
And Amazon's recommendations
are, don't get me wrong,
way better than the
Sears catalogs were.
But at the end of the day,
it's the same business model
just done better.
A lot better, but done better.
Where it becomes
transformational
is when those
recommendations get
so good that they no longer
have to have that business model
and they can have a
different business model.
That's what we meant by that.
AUDIENCE: Probably a prediction
that machines cannot do yet,
I'd like to ask humans.
From where you see
the world, do you
have a short list of areas where
this transformative threshold
will be crossed,
beyond Silicon Valley?
AJAY AGRAWAL: So my view is
that just as a thematic change,
many more things
will be personalized.
So in other words, we do so much
delivery of goods and services
based on averages.
So the one that
everybody's familiar with
is medical services.
In other words, given
your age and some very basic
characteristics about you,
you and I would probably
get the same treatment
if we had some
kind of ailment,
because we're both males
of roughly the same age.
And so, in other words,
the fidelity of the predictions,
in our view, will lead
to personalization of
so many things--
when we talk about
shopping, that's
just another form
of personalization--
so that, as a thematic change,
we'll move from mass
to personalized.
People have been talking about
personalization for a long time,
but now we're starting to
actually see it in action.
AUDIENCE: So I had a question
about the judgment aspect
in the model that you
guys were mentioning.
And it's a two-part question.
So the first one was,
with the improvements
in the lowering
cost of prediction,
will judgments become
more polarized?
And what I mean by that
is, as the model turns up
and AI becomes
smarter, providing
more accurate judgments, I think
human judgment will be pushed
into a corner of yes or no.
Because, for example,
the AI might come up
with a prediction,
like, oh, 96% or 97%
says you should choose
option A over option B.
And so the human
judgment aspect for that
is, well, if it's only a
3% gap, I'm going to go with A.
So that was the other thing.
And the second
thing is, if someone
were to reject the highly
favored option out of the list
and go with option B,
wouldn't their judgment
be more scrutinized and maybe
even held more responsible
since they chose human
judgment over the AI?
AVI GOLDFARB: So we think about
judgment as something that can
come before or after you get
the prediction.
But here's where it
gets really complicated,
within the legal system or not.
Which is that you actually have
to say explicitly how much you
value different things.
So in a car accident
context, the machine--
you have to pre-specify
what you think
a life is worth relative
to other types of damage
and other lives.
And that opens up-- in
the health care context,
we have all sorts
of similar things.
Once you have a good
prediction on survival
under different
treatments, for example,
you need to explicitly say,
this is the threshold where it's
worth it to save this person.
And so that becomes
a first-order issue.
Because you've specified
it, it can be audited,
and that can become a legal
challenge and a liability
challenge.
You cannot use the prediction
without explicitly saying
what you value, and that
opens up risk.
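The point about having to state your values explicitly can be sketched in a few lines. This is a toy decision rule with entirely hypothetical numbers, not anything from the talk or from any real health-care system: once the machine supplies survival probabilities, choosing between treatments forces you to write down a number for the value of survival, and that number becomes auditable.

```python
# Once you act on a machine's prediction, the value judgment has to
# be written down explicitly -- and can then be audited.
# All numbers here are hypothetical.

def choose_treatment(p_survival_a: float, p_survival_b: float,
                     cost_a: float, cost_b: float,
                     value_of_survival: float) -> str:
    """Pick the treatment with the higher expected value.

    value_of_survival is the explicit, auditable number that
    using the prediction forces you to specify.
    """
    ev_a = p_survival_a * value_of_survival - cost_a
    ev_b = p_survival_b * value_of_survival - cost_b
    return "A" if ev_a >= ev_b else "B"

# The same survival probabilities flip the decision depending on
# the stated value -- the judgment lives in that one parameter:
print(choose_treatment(0.97, 0.96, 100_000, 10_000, 1_000_000))   # B
print(choose_treatment(0.97, 0.96, 100_000, 10_000, 20_000_000))  # A
```

Nothing about the prediction changed between the two calls; only the stated valuation did. That is exactly the threshold that, once specified, can be challenged legally.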
AJAY AGRAWAL: Let me just
add to that one, which is--
so first of all, you mentioned
AI judgment and human judgment.
In our view of the world,
AIs have no judgment.
AIs never have judgment.
All they do is prediction.
Humans do judgment.
AIs don't.
Now, that doesn't mean that
sometimes AIs can't look
like they're doing judgment.
Because if they get enough
examples of our judgment,
they can learn to
predict the judgment.
But they don't have judgment.
They are simply
making predictions.
So that's the first bit.
So then what you said-- and
this is a thing that I think is
really a first order issue for
us to get our heads around--
is you said, well, as they're
doing more and more of this,
is this--
I think you used the words
like push us into a corner--
where the AIs are
doing 98% of the work,
they're doing all
these predictions,
and then they're just
tossing them over
for us to make the
final judgment,
and we're doing
less and less stuff.
And I think that I also
end up often having
a thought like that in my head.
And that's because I think what
we're really good at as humans
is we're good at extrapolating.
Doing this knob
exercise, and saying,
OK, if that prediction gets
better and better and better,
then our bit gets
smaller and smaller,
and the AIs do more work
and we're doing less.
I think what we're
very poor at--
what humans are very poor at--
is imagining what other things
we will now apply judgment
to because we have these low
cost, high fidelity predictions
that we've never had before.
So the thought experiment that
I offer the room is imagine--
I just saw Henry Winkler being
interviewed on Stephen Colbert,
so he's on my mind--
so imagine walking up to
the cast of "Happy Days,"
and saying, imagine having
a handheld device that's got
super good, super
fast arithmetic.
What would you do with it?
And chances are, in
that era, people are not
going to imagine
any of the stuff
that we're currently doing.
And so I think our barrier is
just imagining all the things
that we're going
to do and that we
will apply our judgment
to because now we
get to apply that judgment
to much higher fidelity
cheaper predictions
than we currently have.
So an example to think about
is, imagine accountants.
Accountants used to
effectively have two tasks.
One is the one Avi
described, which
is where they would add
up a bunch of numbers.
So they would type
them in and add them.
And then the second one
was after they added it up,
then they ask questions
of their data.
They say, well, what would
happen if interest rates went
up by 1%?
Or let's say they're
calculating net present value
of some investment.
They'll say, what would happen
if our sales were 3% higher
in the fourth quarter?
And then they would type
it all back in again
with the variable changed and
come up with a new answer.
So they had two
parts to their job.
One was the typing
adding part and the other
was the asking
questions of their data.
Now spreadsheets roll into town.
And if Avi was a faster
typer and adder than me--
so that was a valued
skill he had--
then when spreadsheets arrived,
he and I became the same.
Now there's a much higher
return to the accountant who's
good at asking questions,
because the adding typing
part is super fast and cheap.
And so for the person who asks
good questions of their data,
there's a higher return to
that part of the skill set.
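The accountant's two tasks can be sketched as code. With made-up cash flows (not from the talk), the "adding and typing" part is the NPV formula itself, and the "asking questions of the data" part is just calling it again with one variable changed, which is what spreadsheets made nearly free.

```python
# The accountant's two tasks: (1) add up the numbers, and
# (2) ask "what if?" of the data. Spreadsheets made (1) cheap,
# raising the return to (2). Figures below are made up.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 undiscounted."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-1000, 400, 400, 400, 400]  # initial outlay, then 4 years

base = npv(flows, 0.05)
# "What would happen if interest rates went up by 1%?" --
# answered instantly instead of by retyping every number:
stressed = npv(flows, 0.06)
print(round(base, 2), round(stressed, 2))  # 418.38 386.04
```

The recomputation costs nothing; the valuable skill is knowing that the 1% rate question is the one worth asking.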
And so here our thesis is that
there will be higher returns
to judgment.
Because when people say,
oh, you guys, do you think
machines are so great,
that they're going to do
all these wonderful things,
that they're going to be
so spectacular?
The answer is not really.
It's just that we think
that humans are not quite as
great as we think we are.
We're very poor predictors
and the machines
are going to just become much
better predictors than we are.
AVI GOLDFARB: Thank you.
[APPLAUSE]
