(calm music)
- All right, welcome and
good evening, everybody,
glad to see everybody here.
My name is J.P. Eggers,
I'm the Vice Dean
for MBA and Graduate Programs
here at NYU Stern.
And I'm excited
to be here tonight
for this incredible program
that we've got lined up.
And I'm very excited
to see so many alumni
and friends of Stern in the
room back for this as well.
It's my distinct pleasure
to introduce Paul Daugherty,
the co-author of the
recently published book
Human + Machine: Reimagining
Work in the Age of AI.
In conversation
this evening with
our own Professor
Arun Sundararajan.
Paul is the Chief Technology
Officer at Accenture,
leads the company's
Technology Innovation
and Ecosystem group.
In this role, he oversees
technology strategy,
R&D, and ecosystem relationships,
and is responsible
for Accenture's business in
emerging tech such as AI,
cloud, and blockchain.
He's a frequent
speaker at conferences
on industry and
technology issues
and published articles in
a variety of publications.
He sponsors Accenture's
technology initiatives
for the World Economic Forum.
He's received a variety
of honors, Adweek 50 2017,
Business Transformation
150, I could go on,
but I think we wanna
get to the main event
as opposed to hearing me lay out
a whole bunch of cool awards
that he has won at this point.
And our moderator this
evening, as I said,
is Professor Arun Sundararajan,
professor of business
and Robert L. and Dale
Atkins Rosen Faculty Fellow.
Human + Machine was
published in March 2018
and has been a favorite among
academics and industry experts.
And it's based on personal
experience and research
with 1500 organizations.
The book reveals how companies
are using the new rules of AI
to leap ahead on innovation
and profitability,
as well as what you can do
to achieve similar results.
And if you look within
the book, it really talks
a lot about the future of work
and the ways in which,
from Paul's perspective,
humans and machines
will be interacting
in getting work
done in the future.
It marches through a
number of different areas
and use cases, from things
that are maybe more obvious,
around manufacturing, to things
that are maybe less obvious,
around research and development
and marketing and
things like that.
So we're quite honored
to have Paul here
with us this evening.
So without further
ado, please join me
in welcoming Paul Daugherty
and Arun Sundararajan.
(audience applauding)
- Thank you, J.P.
Paul, it's always a pleasure
to converse with you.
It's even more fun when
we have an audience.
- Yeah, we've done this
a lot of times before,
so this is gonna be a
good opportunity to chat.
- So I just wanna start
with, first of all,
thank you for being here,
thank you for taking the time,
I know that you're a busy guy.
I've read your book,
you were kind enough
to send me a copy, enjoyed it.
What was the motivation,
what made you decide
that you're gonna take
time out of doing all
the other things
that you have to do
and actually knuckle
down and write the book?
- Yeah, no, good question,
and thanks to all of you
for joining and thanks
to NYU Stern for hosting,
and it's great to be,
to have the opportunity
to talk to you in this
forum, as we've talked
in many other forums.
The reason for the
book, it started
now about four years ago.
As you said, the book was
published a year ago this week,
so it's one year, it's its
official birthday this week.
And we started the research
about four years ago.
And it was, I remember the
moment I was in Cambridge
in Massachusetts with
my co-author coming out
of some meetings where,
my co-author leads
our technology research,
including our research
in artificial intelligence.
And we came out of some meetings
and we were just discouraged
because we felt the narrative
that we kept encountering
at that meeting and others
was the wrong narrative and
different than what we saw
in our own experience with
artificial intelligence,
and the narrative at the
time was AI's this thing
we need to be afraid
of, it's a thing
we might not be able to
control, it's gonna eliminate
all the jobs, and people
making a lot of claims
without a lot of
substantiation behind them.
And our concern at the time
was, in these types of areas,
the wrong prophecy
or the wrong beliefs
can turn into
self-fulfilling prophecy.
If you interpret the
technology as bad,
it's gonna influence
what you do,
if you interpret
it as a technology
that's there to
eliminate people,
you're gonna figure
out ways to do it.
And we felt that was
the wrong approach,
and so we felt that
we wanted to set
the record straight
based on our experience.
So we had our own hunch, we had
our own personal experience,
'cause at the time, we'd
done a lot of work already
with artificial intelligence.
But we launched this
research project
to look at 1500 organizations
and many more thousands
of workers with a lot
of firsthand research
to understand what was
really happening in the jobs.
And our goal was to try to
educate and provide a roadmap
to business executives and
people in organizations
to help them understand
what we thought
was the right way to apply AI.
- Okay, so what is the
right way to apply AI?
- So after all that--
- I'm gonna keep
my questions short.
- So there's three key things
we talk about in the book,
and we can dive into
any one of them.
But I'll start with the
title. So the way I summarize
the book is, not
many people read.
How many of you still read
books end to end, cover to cover?
Oh wow, this is a good audience.
- Yeah, this is not a
representative sample.
- Yeah, not a good, but a lot
of people don't read books.
So if you don't
read the whole book,
a lot of people just read
the table of contents
and the introduction
or conclusion.
- This is a group of people
who have come for a book talk.
- Yeah, yeah, yeah, good.
Self-selecting.
So the most important point
in the book is the plus sign,
the human plus machine.
And the view we have
that the real potential
is in how you combine
the human capability
and machine capability,
the human intelligence
with the machine intelligence,
that's the main
message of the book.
And then there's three
core findings in the book,
the first is that you need
to take a different approach
to business and business
process to unlock
the potential of AI.
And we talk about the third
generation of business process
and the third generation
of work in the book,
the first being automation,
the second being reengineering,
and the third being what
we call reimagination,
which is thinking about your
business in a different way
to really unlock
the potential of AI.
I can give some examples of
that if you're interested,
but it's a new approach
to your business.
This reimagination approach
is the first finding.
The second is what we
call the missing middle,
which is a new view of
how to identify jobs,
a new view of what
jobs look like
as AI matures and rolls
through companies.
And we call it the missing
middle because it's the jobs
that are in the middle
in between the human
and merging the human
and machine capability.
And we call it the
missing middle because people
were largely not thinking,
at least four years ago
and still even today,
about a lot of these new
categories of jobs
that are already
starting to emerge.
And we believe that if you
wanna properly prepare people
and properly prepare
your organization,
you need to know what
those new roles look like.
So that's the missing
middle, and we can get into
what some of those
jobs look like.
And then the third
finding of the book
is around responsible AI.
And we found that
companies that are pursuing
what we deem a
responsible approach to AI
are having greater success
and having greater returns
than those who are not.
And responsible AI, we could
probably spend the whole time,
the whole session on it,
'cause that's very much
in discussion now, it was early
when we were looking at
this a few years ago.
We define responsible AI
as accountability for
decisions, transparency,
fairness, a lack
of bias, honesty,
and human agency in the
way that you approach AI.
And we believe that
this is a big moment
for companies and
organizations to really embrace
a new code of conduct,
a new way of operating
around responsible AI if
you wanna get the benefits
and avoid the pitfalls of AI.
- Okay.
And I'll come back
to each of these,
each of these are
really interesting,
I also wanna come
back at some point
to talking about skills
for the future, and this is
a question that I see
asked by so many companies.
As we're retraining, what
should we retrain people for?
But I wanted to
go back and start
with your definition of AI.
'Cause I'm a professor
and I like definitions,
but also 'cause AI is one of
the most poorly defined
things in general.
It's those two
extremes that you point to,
right: there's stuff
that humans do, and then
the other extreme is machines
magically becoming human-like,
and that's conceptually what a
lot of people think of as AI.
And you've got a very
precise definition,
which I've written down here.
Systems that extend human
capability by sensing,
comprehending,
acting, and learning.
Right, and so just, can
you tell us a bit more
about that, why
do you draw these
as the boundaries around
artificial intelligence?
When people encounter
new technologies,
how should they
decide that this is AI
that we're dealing with?
- Yeah.
That's the way we, it's
our business definition
of how to think about AI.
So AI is a system that
can sense, so it can,
think computer vision or
natural language capabilities,
so it can sense and take
input, it can comprehend,
so it can understand and
reason and see patterns
and make distinctions,
act, it can make decisions
or take action in the case
of autonomous vehicles
and the like.
And then learn and
improve and cycle around,
so that's the way we define
an intelligent system
or artificial intelligence.
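The sense-comprehend-act-learn cycle Paul describes can be sketched in a few lines of code. This is a toy illustration only, not any Accenture system: the class name, the threshold logic, and the "approve"/"decline" actions are all invented for the example.

```python
# A minimal sketch of the sense-comprehend-act-learn loop from the
# definition above. All names and logic here are hypothetical.

class IntelligentSystem:
    def __init__(self):
        self.threshold = 0.5  # decision boundary, adjusted by learning

    def sense(self, raw_input):
        """Take input from the world (stand-in for vision/NLP)."""
        return float(raw_input)

    def comprehend(self, signal):
        """Reason over the input: score it against the threshold."""
        return signal - self.threshold

    def act(self, score):
        """Make a decision or take an action."""
        return "approve" if score >= 0 else "decline"

    def learn(self, signal, correct_action):
        """Adjust the threshold from feedback, closing the loop."""
        predicted = self.act(self.comprehend(signal))
        if predicted != correct_action:
            # nudge the boundary toward the observed outcome
            self.threshold += 0.1 if correct_action == "decline" else -0.1

system = IntelligentSystem()
print(system.act(system.comprehend(system.sense("0.7"))))   # prints "approve"
system.learn(0.4, "approve")  # feedback shifts the threshold down to 0.4
print(system.act(system.comprehend(system.sense("0.45"))))  # prints "approve"
```

The point of the sketch is only that the four capabilities form a cycle: output feeds back into learning, which changes future comprehension.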
The simpler
definition I sometimes use
in audiences who are less
into AI is it's systems
that approximate
human-like capability
to do different things, that's
another easy definition.
And then there's the
technology side, the way
we look at it technology-wise;
a lot of people are calling
a lot of different things AI.
Broadly speaking,
the way I look at it,
there's a machine
learning domain of AI,
which is focused
on the statistical
reasoning, data science,
deep learning, all those
types of technologies.
And then you've got the
symbolic reasoning side of AI,
which isn't talked
about as much now,
but was the big thing
about 20 years ago.
And that's another branch of AI
that's underappreciated
maybe today
because of some of the
advances we've seen
in the machine learning branch.
From a technology perspective,
one thing I believe
is really important to
look at is the full breadth
of AI capability, because I
think a lot of organizations
are over-pivoting on
supervised learning
and very specific
techniques that we've had
a lot of advances recently and
maybe missing the opportunity
to combine different
AI techniques
and achieve greater
outcomes as a result.
- Yeah.
And I was struck
by the fact that,
but all the big fans
of deep learning here
at NYU, especially
today, like Yann LeCun--
- It's a big day,
yeah, congratulations
to Yann LeCun and Geoff
Hinton and who was the third?
Oh, and Yoshua Bengio won
the Turing Award today,
for those who didn't
see that, yeah.
- But the, I also
noticed that you've got,
you explicitly talk about
extending human capability.
So I interpreted that
in two different ways.
One is really
broadly meaning that
the capabilities of
humanity at large
are extended by these systems.
But the other was it seemed
to be a nudge towards
what you get at in the
book, that you think of
this missing middle of AI
as being really important,
right, where there's
the complementarity
between the human
being and the AI.
So was that intentionally
to orient us
towards thinking
about the humans
and the machines
working together?
Or did you just mean
the broad humanity?
And I'll get past the
definition in a minute.
- Yeah, no, no, it's fine,
'cause I think it's important,
when we talk about
human plus machine,
we mean very literally
an individual person
equipped with tools and
technology powered by AI
in this case that can help
them do things differently.
So we're very intentionally
talking about the change
in the way an individual
works and lives,
empowered and augmented by
technology in a different way.
But we're also then talking
about the collective impact
that has on our ability
to solve problems
and achieve solutions
in a different way.
Both are really important, and
I think it's really important
to look at, that's why we
spend so much of our time
in the research looking
at the individual impact,
and our belief,
based on our research,
is that on this issue of jobs
we are cautious optimists.
I don't believe we'll
have a jobs issue
in the next 10 years, probably
not the next 20 years,
due to automation or AI.
We will have a lot
of skill issues,
but we won't have job issues,
because fundamentally,
AI isn't eliminating as
many jobs as we think.
AI is automating a
lot of tasks and tasks
aren't always the same as jobs.
And the pace of
automation isn't always
what we think it is.
So we can talk about specific
numbers and statistics
and everything if you like,
but it's really important
to look at that
individual level,
'cause that's where
you can really see
the impact on individual jobs.
And then in the
macro level, there's,
that's why it's important to
think about responsible AI
and these types of things,
'cause the individual use
of AI can have unintended
consequences societally.
And it's a new obligation
for business to look out for
and think about the collective
impact that their AI
that they're deploying is having
on communities around them.
For example, deviating from
the responsible AI point
for a minute, there's a bank
who got themselves into trouble
by developing a lending
algorithm that they trained
with demographic data from a
more rural application area
that they then applied
in an urban area.
And very instantaneously scaled
a racially biased
lending algorithm,
'cause it had been trained
on characteristics that
didn't reflect the population
they were deploying it in.
A basic, basic issue that
they should've caught
and never let happen, but those
are the types of things
we need to think about, the
mindset and the types of impact
that organizations can very
easily have collectively
on a community that you need
to think about differently
in the era we're moving into.
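The lending example above can be made concrete: a scoring rule tuned for one population, applied to another, produces very different approval rates across groups. Everything below, the groups, incomes, and income threshold, is invented for illustration; the point is that a simple per-group check would have flagged the problem before deployment.

```python
# Hypothetical sketch of the bank failure described above: a rule fit
# to one area's demographics, deployed in another. All data is invented.

from collections import defaultdict

def approval_rate_by_group(applicants, approve):
    """Measure approval rate per demographic group; large gaps flag
    potential disparate impact before an algorithm is deployed."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, income in applicants:
        total[group] += 1
        if approve(income):
            approved[group] += 1
    return {g: approved[g] / total[g] for g in total}

# A rule fit to the training area: income above 50 gets approved.
approve = lambda income: income > 50

# Deployment-area applicants: group B has systematically lower recorded
# income due to demographic differences the model never saw.
applicants = [("A", 80), ("A", 60), ("A", 40),
              ("B", 55), ("B", 45), ("B", 30)]

rates = approval_rate_by_group(applicants, approve)
print(rates)  # group A approves 2/3, group B 1/3: a gap worth catching pre-launch
```

A check like this is cheap to run on every candidate model, which is why governance, rather than technology, is the missing piece.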
- So let's go a little
bit further on that,
'cause I think this is a topic
that comes up frequently.
And a lot of academics
are deeply worried
about bias in algorithms
and ethical AI.
And some of us have got
beyond the trolley problem,
for many years, the
ethics of AI was all
about an autonomous car
hurtling down a street
and has to make two
difficult choices between,
in different variants,
killing a certain number
of people, but some--
- The baby in the stroller.
- Like one with,
yeah, there's a baby
in the stroller version as well.
But my reaction to that is like,
if the car is so smart,
how did it get itself
into this situation
in the first place?
Maybe we should build
more intelligent cars.
But there's a deeper concern
about both understanding
the sources of bias, but
also putting ourselves
on a pragmatic path
toward making sure
that the systems are,
in fact, ethical.
'Cause what you just talked
about is one form of,
you're training the model
on the wrong sample.
I think this is, a lot of
facial recognition systems
have been trained
on biased samples
and so are not working as well.
But there's also systems that
are gonna be naturally trained
on biased data 'cause you,
'cause some of us generate
more data than others, right.
Socioeconomically
advantaged people
in an era of 5G will
generate way more data.
And then you've got the
reinforcement of biases
that existed in the
data, you've got
the predictive policing problem
where you send the police
to one particular place and
there will be more arrests,
because you can't get arrested
without the police being there.
And so it seems like,
once we've sorted out
all these different
sources of bias,
the big challenge to me seems
to be how do we get people
to adopt a responsible
code of conduct?
And so you've witnessed a
lot of AI implementations,
what's gonna put us
onto the right path?
- Yeah, I think it's, frankly,
I don't think it's that hard.
- My questions have
gotten longer, as you see.
- Yeah, no, I'll try to
not make my answer too long
because I think we
can tease it out more
and get questions
from the audience.
I frankly don't think
it's that hard to do,
but organizations simply
aren't doing it now.
I would say any organization
that's deploying AI
that doesn't have
somebody in the C-suite
that's accountable for
responsible use and outcomes
from the AI is gonna get
themselves into serious trouble,
it's just a matter of time.
This applies to every
single company, in my view,
and I can give you hundreds
of examples of ones
who already have.
So a new code of conduct
is the way to think about it,
something you need
to take into account.
The trolley problem
is a great example,
if you're developing
products that include AI,
the issue with the
trolley problem
isn't a societal decision
on, or a human decision
on whether the baby
in the stroller
is more important than
the driver, the issue is,
the real dilemma of
the trolley issue
is who gets to
make the decision.
And as a company,
you have to decide.
Is it a policy, or are you
leaving an engineer
to do whatever they wanna
do in solving the problem?
Issues like the bias
in facial recognition
are because companies
let their engineers
figure out what
datasets to use to train
the facial recognition
algorithms,
and they were massively
biased as a result.
If people didn't
hear the numbers:
these systems could identify
white males with 90+% accuracy,
but African American females
with less than 65% accuracy,
all because they were trained
on non-diverse datasets.
Basic problem, no reason
it ever should've happened
in the first place other
than it was an abdication
of responsibility, in my
view of those organizations,
and letting engineers make
decisions they shouldn't make.
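One pragmatic guard against the failure just described is to report accuracy per demographic group instead of a single aggregate number. This is a generic sketch with made-up toy data, not any vendor's benchmark; notice that the aggregate accuracy looks acceptable while one group fares much worse.

```python
# Per-group accuracy check: disparities like the facial-recognition
# numbers above can hide inside a single aggregate score. Toy data only.

from collections import defaultdict

def accuracy_by_group(examples):
    """examples: (group, predicted, actual) triples.
    Returns accuracy per group, so gaps can't hide in an average."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in examples:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {g: correct[g] / total[g] for g in total}

# Toy evaluation set: aggregate accuracy is 6/8 (75%), but the
# per-group view shows group_b doing far worse than group_a.
examples = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
print(accuracy_by_group(examples))  # group_a: 1.0, group_b: 0.5
```

Making a report like this mandatory before release is exactly the kind of governance decision that should not be left to individual engineers.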
So that's why I believe
it's a C-suite issue,
it's about governance
and accountability,
and it's not that hard to solve.
You're gonna have to decide
who can make what decisions
and what's a policy decision
versus a technology decision.
And so we have, we've hired
a head of responsible AI
who has an ethics background,
looking at these issues for us.
I believe an organization
needs something like that.
We have a responsible
AI steering committee
that includes myself,
our general counsel,
other senior executives, that's
focused not just on doing
the right things in
our organization, but
what should we do,
are there projects
we shouldn't do
because of the AI implications?
Then there's the code, I
can talk more about this
if you want, but there's,
what principles do you set?
And one of our principles
is that AI should be fair
and AI should be
used to reduce bias
rather than increase bias.
And there's ways to do
that, which gets then
into the next layer of what
you need to do in companies,
which is processes and tools.
So we have something
called an AI fairness toolkit,
which our teams use. It
isn't a silver bullet
that eliminates bias,
'cause it's not possible
to eliminate bias entirely; bias is
human, bias is in the data.
The question is, how
do you get things
into reasonable tolerances that
are consistent
with your policies?
And there's ways to
do that, and we have
an AI fairness
toolkit that we use
in this type of work to do that.
So I guess maybe my answer
was longer than I intended,
but I think there's
very pragmatic steps
any organization can take
to be more responsible
in their deployment of
AI, very few are doing it,
they're viewing it as
a technology issue.
And by virtue of that,
they're delegating
very serious issues to design
or implementation choices
by technologists, which
isn't the right way
to solve these problems.
- So it almost seems like
you need a set of values
or a set of principles
that permeate
the product management
and product development process.
- Yeah, exactly.
Let me give you another
example: black box AI
is the other thing
that everybody has a
lot of debate around,
and this idea that, well, we'll
never be able to explain
these algorithms because
deep learning in
neural networks
is probabilistic; it's hard to
explain exactly what happens.
Very true.
But what that means is,
you better not let, again,
you better not let
your teams decide
where to apply deep learning
versus the other techniques.
There's techniques you
can use to automate things
and make informed decisions
that are explainable.
So if you're doing criminal
sentencing guidelines,
if you're doing termination
decisions for an employee,
things you wanna
be able to explain,
you should use explainable
forms and again,
the policy should be use
explainable forms of AI
when you need to be
able to explain things.
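The policy argued for here, explainable models where explanations are required, can be illustrated with a linear scorer whose output decomposes into named, per-feature contributions. The features, weights, and threshold below are hypothetical; the point is only that every decision can be justified term by term, which a deep network cannot offer.

```python
# Sketch of an "explainable form" of AI: a linear model whose decision
# decomposes into per-feature contributions. Weights and threshold
# are invented for illustration, not a real HR policy.

WEIGHTS = {"years_of_service": 0.4, "performance_score": 1.2, "absences": -0.3}
THRESHOLD = 2.0

def explainable_decision(features):
    """Score with a linear model and return per-feature contributions,
    so the outcome can be justified term by term."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

keep, why = explainable_decision(
    {"years_of_service": 3, "performance_score": 2, "absences": 4}
)
print(keep)  # prints True: 1.2 + 2.4 - 1.2 = 2.4 >= 2.0
for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {contribution:+.1f}")
```

For sentencing guidelines or termination decisions, a model like this trades some predictive power for the ability to show exactly why it decided what it did.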
And again, not enough
organizations have
a disciplined policy
view like that,
which is one of the things
we argue for in the book.
- Okay, that's
really, we could spend
the rest of our session
talking about that,
but I do wanna get back
to this missing middle,
'cause to me, that's
one of the core ideas
of the book that makes it
interesting and different.
I'm used to reading
books about automation
and the future of work
that either have this
machines substitute humans view
or that are
excessively futuristic,
and you and I have talked
about this, that are looking 30
or 40 years ahead and
imagining that future,
but very few that are
talking about the path
that is going to be trodden
over the next decade.
And so what leads you to believe
that a core part of the
impact of AI on business
is gonna be this
humans complement AI
and machines empower humans?
How did you end up there?
It seems plausible, but for you,
it seems to be the
majority, right,
of what we'll see of AI's impact.
- Yeah, I'll break
down the numbers.
So if you look at jobs overall,
we believe something around
90% of jobs, of work,
will be impacted
by AI, so most jobs
will be impacted in some way.
- Okay.
- If you break it
down underneath that,
we believe, and this is a
combination of our research,
this is also from OECD
and a number of other,
I'll cite some others as we go.
We believe about 15 to 20%
of jobs will be eliminated,
there's jobs that
will be eliminated,
some reasonably quickly, because
of automation through AI.
And we believe that's
in the 15 to 20% range.
- So what's an example of a job
that you feel will
be eliminated?
An easy example is one of
the areas we're working
a lot on in the financial
services industry.
I'm guessing there's a number
of you from financial
services in the room.
Compliance processing, anti-
money laundering in banks,
using machine learning to
better detect patterns,
you need fewer compliance roles
because you can empower
the compliance officers
with better tools to
enforce the policies.
That's an easy example.
We're working with some banks
on applying machine learning
in interesting ways to
drive efficiency there.
So that's an example.
So 15 to 20% get eliminated,
but of the remaining jobs,
about half get
transformed substantially
and about half change
to some degree.
So the thing you
really need to look at
is how do the jobs change?
'Cause the jobs that
are eliminated,
they're going away; we'll
come back to that,
because I believe the
grand challenge for
our generation,
the next 10-plus years,
is how do we help the people
that are impacted that way?
And I'll come back to that.
But the real thing
for organizations,
what do you do about all those
jobs that are transformed
that still exist, and how
do you prepare your people
for the new jobs?
And so what we believe,
there's two big categories
of new jobs that are
created, one where people
are needed to help AI and
help machines, and another
where AI helps people do
things more effectively.
So in the category where
people are needed to help AI,
this is a category of jobs
not many people think about.
So it's one that we spend
a lot of time talking about
in the book and
following the book,
I've spent a lot of time
with organizations on.
So again, where people
are needed to help AI,
we talked about trainers,
explainers, and sustainers,
and they all rhyme, which
we liked as authors.
Trainers, explainers,
and sustainers,
so an example of a
trainer job, again,
where people are
needed to help AI,
isn't the tagging;
it's not just training
an algorithm with data tagging.
An example of a trainer,
one of many types of jobs
we're seeing is a role
we're hiring at Accenture
and a number of our
clients are hiring,
which is training
the personality of
the virtual agents
that are interacting
with customers.
So we're using, many companies
are using different types
of technologies, AI
chat bots, et cetera,
to automate customer
interaction.
How do you train, so
what's happening is,
your interaction is now
happening through AI,
through a virtual agent
rather than a person.
So your customer's
experience of your brand
is happening through AI.
So we say in the book
that AI becomes your brand,
which we believe is
increasingly true,
and in a five year
period, AI will become
your brand in many companies.
- It can have huge
impact if AI--
- It has huge implications.
So the way, how is
your AI differentiated
from the brands
you compete with?
How, does the AI have the right
personality that you want,
is it conservative
or is it starchy
or is it gonna do
the right behaviors?
How is it answering the
questions and following up?
And it turns out that these
aren't engineering issues,
these are human
behavioral issues
that you need
different profiles for,
you need sociologists,
you need psychologists,
we have poetry
majors, drama majors.
Other people who are just
good at understanding dialogue
and human interaction that can
help shape the personality,
then work with the
engineers on tuning
the behavior of the
systems in the proper way.
And that's a job that's
more of a liberal arts
type of job, if you wanna
characterize it that way,
than an engineering job.
- Fascinating, but it's also,
it's like a brand management
job of sorts, right,
because you're shaping
the personality
of how people
perceive your company.
- That's right.
But also a job that
doesn't exist now,
it's an incremental job.
And there's a lot
of jobs like this,
that if you were back in
1995 and you tried to argue
with people that you would have
millions of people employed
as search engine optimizers,
as eBay commerce merchants,
as new successful
GoDaddy entrepreneurs,
people wouldn't have
been able to understand
what you're talking about.
That's the same way
these are the new jobs
that are starting to
appear, and we believe
there are millions of jobs.
Sustainers is another category
where people are needed
to help AI. Think of this
as the HR organization for
the AI: who's managing the AI,
who's diagnosing whether
the AI is achieving
the right business impacts,
who's assessing
how you improve the AI.
So those are sustainers.
And a good example
of that, I think,
is what Facebook has done
post Cambridge Analytica
when Mark Zuckerberg
came out and said
we've concluded algorithms
can't police algorithms,
that was his quote, I think.
They decided they need humans
to police algorithms,
not just for the short term.
And they were hiring
something like 20,000 people
in different categories of job,
but we'd say they're all
in the sustainer roles,
which are overseeing and
working with curation
around the algorithm, so
that's an example there.
And then explainers are
another category there,
which is people
who are explaining
the implications of algorithms.
And we're seeing this
appear in many companies,
so when the Uber car
crashed in Tempe, Arizona
last April, a year ago, April,
how did you do the diagnosis,
what really happened?
It was a lot of factors,
it wasn't just looking
at an algorithm, which
engineers could've done.
What was happening,
what was the weather,
what was the nature of
the person that stepped
in the street,
how did it happen?
And the explainer is a
person who can understand
the whole context of a
situation and experience
and understand and
explain what's happening
and then make the
improvements they need to.
So they may sound
like abstract roles,
but we're seeing these
start to appear at scale
in organizations,
and again, we believe
these are millions and
millions of new jobs
that are in organizations
that deploy AI.
- But just to be clear,
the primary actor here,
as far as interacting with the
customer or doing the work,
is the AI, and the human being
is shaping it, managing it.
- In those cases, those are
generally not engineering roles,
but they're other roles where
you need broader experience,
people who understand
digital technology
and how AI might work,
some understanding,
but they're really bringing
a broader set of skills here,
which I think is important,
because it's saying that
to be relevant
in the age of AI, not everybody
needs to be an AI expert.
In fact, I think the
minority of people
need to be AI experts, the
majority of jobs are gonna be
in those categories we talked
about as we look at it.
- And then the second half is--
- The second half are
the people, we call it,
it's where AI helps people
do things in new ways,
and this is, these are the ones
that people think about more,
so they're more intuitive.
We talk about it as AI
gives people superpowers,
so categories of jobs here,
so a simple one is interact,
which we've talked
about a little bit.
I met with a very
interesting company here
called ASAPP based, anybody
know ASAPP, based in Manhattan,
you know ASAPP?
So interesting
organization, using AI
to automate text to
voice interaction in,
do you work for ASAPP?
- [Male Speaker] Yeah.
- Okay.
A very interesting startup
here, very well funded,
very successful startup working
in the telco and cable space.
Augmenting human operators
by understanding text
and voice interactions so that
they can make the operators
more efficient; they're
like the wingman
for the customer service
agent, not the eliminator
of the customer service agent.
So things like, they learn
from what successful agents do,
this is the question the
best agents ask next,
and they pop that up
for the agent to ask,
and a variety of other
things, but it's,
that's an example of giving
the human superpowers,
enabling every agent to
act with the capability
of your best agents, which
I think is very powerful.
- Yeah, I saw a
comparable example of that
with a Chinese
company called VIPKid.
And they're connecting
high school teachers
in the United States
to kids in China
to teach them English
as a second language.
So they've got about
100,000 teachers
in mostly socioeconomically
disadvantaged parts
of the US, and millions
of kids in China who are
in the evening there,
at 4 a.m. here,
getting English
lessons, English as
a second language
lessons, and there's
this constant, they've
got this system
that is learning constantly
and improving the performance
of every teacher by
figuring out how,
what techniques
from what teachers
are getting the best
reactions from the students,
so yeah, that's
really interesting.
- Or just an example,
just playing off that one,
another one in this
interact category,
another application of AI
which is fascinating to me
is work we've done
for an organization,
I think it's called
My Second Chance.
And it's for, it's an
organization dedicated
to helping women in India
reenter the workforce
after they've had children.
And the biggest issue with
women reentering the workforce
in this situation is confidence,
developing a level of confidence
to operate effectively
in interviews so that
they can get hired,
the confidence level
was a big factor,
along with the content to be confident about.
So we developed an
AI application that,
it's a virtual coach for women
that are reentering
the workforce, again,
this is in India, just
in India right now,
that uses AI, visual and
facial analytics,
social AI, and other
things to interview a woman
and understand her
response and look at,
and recognize the cues that
indicate a lack of confidence.
When you answer this
question, your body language
and your eye contact indicated
you were losing confidence.
And so you could do it again
and practice and develop,
practice how you
project confidence
in addition to getting
the content needed.
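The feedback step described, recognizing cues that indicate a lack of confidence and telling the candidate which ones to practice, could be sketched like this. The cue names, the 0.5 threshold, and the feedback strings are all assumptions for illustration, downstream of the real vision and voice models:

```python
# Invented sketch of the coaching feedback step: given per-cue scores
# from upstream models (0.0 = weak, 1.0 = strong), flag the cues
# that signal a loss of confidence so the candidate can practice them.
FEEDBACK = {
    "eye_contact": "Your eye contact dropped while answering.",
    "body_language": "Your body language indicated you were losing confidence.",
    "voice_steadiness": "Your voice wavered on that answer.",
}

def coach(cue_scores, threshold=0.5):
    """Return practice feedback for every cue scored below threshold."""
    return [FEEDBACK[cue] for cue, score in cue_scores.items()
            if score < threshold]

print(coach({"eye_contact": 0.3,
             "body_language": 0.7,
             "voice_steadiness": 0.4}))
```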
So it's helping
women get the jobs
they need to reenter
the workforce, it's
a great example,
I think, of helping
people be better
by using AI, and helping
women in this case get jobs.
So those are examples
of this interact category.
Amplify is how do
you take a person
and multiply their
capability in powerful ways,
and a great example of
this that we talked about
in the book is it's what's
happening in the field
of design, PLM, product
lifecycle management,
product design and
the like, where,
did any of you work in that
field, design, design field?
So generative
design is the innovation
in this field, which
is AI-enabled design,
generative meaning AI can
generate a lot of designs
from parameters, so
the designer, you
still have a designer,
you still have the same
number of designers.
But their productivity
and their creativity
is multiplied, and if
you talk to the designers
who use these tools,
they love them.
And they actually,
one of the designers
used Autodesk's
Dreamcatcher to design
an award-winning chair
called the Elbo Chair,
they talked about their
experience and said
they would've never
come up with this design
without the generative
design technology,
as it goes through an
iterative creative process
where it leads you to
ideas you might not
have thought about before.
The designer is still
making the decisions,
doing the curation and getting the
design at the end of the day.
But really coming out
with a phenomenal,
more effective and
more creative design.
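Generative design as described, the AI generates many candidate designs from the designer's parameters, and the designer curates, can be sketched as a generate-filter-rank loop. The parameters, constraint, and scoring below are invented stand-ins for what a real solver like Dreamcatcher does:

```python
import random

# Rough sketch of the generative-design loop: propose many candidates
# within the designer's parameters, discard ones that violate a
# constraint, and shortlist the best for human curation.
random.seed(0)

def generate_candidates(n, max_weight_kg=5.0):
    """Generate n random chair designs, keeping those under the weight budget."""
    candidates = []
    for _ in range(n):
        design = {
            "leg_count": random.choice([3, 4]),
            "weight_kg": round(random.uniform(2.0, 8.0), 2),
            "strength": round(random.uniform(0.0, 1.0), 2),
        }
        if design["weight_kg"] <= max_weight_kg:
            candidates.append(design)
    return candidates

def shortlist(candidates, k=3):
    """Rank by strength per kilogram; the human designer picks from these."""
    return sorted(candidates,
                  key=lambda d: d["strength"] / d["weight_kg"],
                  reverse=True)[:k]

options = shortlist(generate_candidates(200))
print(len(options))  # 3
```

The division of labor matches the point above: the machine explores a space far larger than a person would sketch by hand, and the designer still makes the final call.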
- 'Cause that's the
nature of the example
that you do here, at least
that people imagine, right.
Where will AI create
jobs, will the person
in the community who's not
actually a skilled doctor
be able to offer medical
advice with the medical AI.
You won't need professors
anymore 'cause anybody
who communicates well can then
be complemented with the AI--
- The virtual room, okay.
- I often think
of, to me, one of
the most fascinating AI
implementations that we have
is Waze, and I know you
use it in your book,
but I've always, I feel
it's like magic, right.
Everything that they promised
me about AI 25 years ago,
this thing does.
One of my first jobs
as a grad student
was a Prolog program,
so I had to use Cyc,
the common sense
ontology, and I felt like
I rejected neural networks,
thinking all these things
can't really do much.
What did I know, right,
but I guess the computers
weren't powerful enough
to give them the power.
But I see Uber and
Lyft drivers every day
in some ways falling into
that second category 'cause
they can come in and pursue work
in a city that they have
little or no familiarity with
where they are doing
part of the job,
but then this really powerful
AI is facilitating them
being able to navigate,
it's telling them exactly
what to do, they're doing it.
And on the one hand,
that's worrying,
'cause it's setting them up
for automation down the road,
but I think, once
you look at the world
through your frame,
you start to notice
that there are more
contemporary examples
of this human-AI complementarity,
how many of you
feel like there's
some artificial
intelligence related system
that is helping you
every day in work
or that complements
what you do at work?
How many of you drive to work?
Okay, so this is a
bad question to ask
a Manhattan audience, how many
of you take an Uber to work?
- We're in Manhattan.
- Yeah.
This is really one of the
best parts of the book,
'cause there's, it moves us from
this imagined future
of job creation
to actually making
it tangible, and it,
I feel like it lays out
two different things
that I'm gonna
ask you about now.
One is something
resembling a blueprint
for what are the skills
that we need to give people.
Both mid career, but also
for us in a university,
what should we train
our students for
to prepare them
for this new world.
But it also starts
to give managers
more pragmatic guides
to making sense
of how AI is gonna
impact that business.
So I wanted to spend
a little bit of time
on the first thing for
sure, which is, yeah.
'Cause I talk to senior
HR executives a lot
as part of imagining
the future of work.
And they all seem to understand
that reskilling is needed, that
some of the people
who work for them,
a significant
fraction, are going
to have to upgrade their skills.
They also understand
that many of these people
are gonna have to
transition out.
But they're frustrated
by, they read the reports
and the reports
say data scientist
is a job of the
future, or machine learning,
technological roles,
and they're frustrated
'cause they're like, we
can't teach everybody
to be a Python
programmer or to learn R
or to take machine
learning courses.
And so, even though they
have an intent to transition,
they don't really know
what to imbue people with.
And so can you talk a bit
to what are the things
that contemporary, let's
start with people mid career
right now who need
to be transitioned,
what should we be
skilling them for?
- Yeah, I think the, I'll
give you the general answer,
then I'll give you an example.
I think the key thing
is, and first of all,
I don't have the exact
crystal ball on this,
I think this is the
big issue, is exactly
what you retrain people
on, but I'll tell you
what our current thinking is.
The general thing
you need is digital,
people have to have a base
foundation of digital skills
to be effective in this world.
And there's
professions right now
where people will be displaced
from purely physical roles,
and those will be the
people that are challenged
that we'll really have
to help with that,
we're doing a lot of work,
I can give you some examples
of how we're helping people
in a lot of ways with that,
if we wanna get into it.
But the kinds of things,
it's digital skills
and then the other
thing you need to do
is look at how
you amplify or how
you better utilize
people's human skills.
The skills we don't
think will be replaced
for the next, for decades,
are complex problem solving,
cross-domain, really
complex problem solving,
creativity in any profound
way, we can get into that,
there's a debate around
that, but I don't believe
true creativity will be
replaced any time soon,
it will be augmented,
as we talked about.
Sensory perception, which is
sensing and reacting quickly
to the world around us.
And social emotional
response, which is
the personal interaction,
those are the four human skills
that are best to double down on.
And if you look at
those categories of jobs
we went through earlier, it's
kinda looking at how do you
utilize those human capabilities
and then use AI around them.
So let me give you a
specific example of this
that I think brings it to
life, there's work we did
for one of the,
we're still doing,
for one of the large
energy companies.
Think drilling,
oil field services.
So you're drilling, you're
doing the messy work
of putting the
pipes into the drill.
That's been a business
that historically has been
about valves, turn the
valve, you need more water,
or increase the pace of
the torsion in the drill
as it's drilling,
very physical job.
And what happened is you
did your best to guess
what was happening underground,
and then something broke
at some point, and you'd
pull it all back up
and figure out what
happened, and you'd look
at a bunch of spreadsheets.
Well, now what you can
do is you can put sensors
on the drill bit,
so you know exactly
what's happening underground.
So what's changed is what the
drill operator needs to do,
the drill operator
still needs to do
all those physical things.
But now the drill operator
is doing something different,
instead of guessing
what's happening
and waiting for
something to break,
he's seeing a visualization
that's actually built
with a gaming engine,
it's built on Unity,
which is a gaming platform.
So that oil field
services technician
needs to use this gaming
platform where it's actually
a visualization of
what's happening
based on the sensor data.
You can see the, by
color coding, the torque,
the tension, the resistance,
the fluid density, et cetera,
everything you
need to understand,
so you can control
and adjust the drill
and steer the drill horizontal,
whatever you need to do
to operate it more effectively.
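The color-coded view described, each channel shown green, yellow, or red against its normal operating range, can be sketched very simply. The channel names, ranges, and the 10% warning margin here are invented numbers, not real drilling parameters:

```python
# Simplified sketch of the color-coded drill display: map each live
# sensor reading onto a green/yellow/red band against its assumed
# normal operating range. Channels and ranges are made-up values.
NORMAL_RANGES = {
    "torque_nm": (500, 900),
    "tension_kn": (50, 120),
    "fluid_density_sg": (1.0, 1.4),
}

def color_code(readings, margin=0.10):
    """Green inside the range, yellow within 10% of a bound, red beyond."""
    colors = {}
    for channel, value in readings.items():
        lo, hi = NORMAL_RANGES[channel]
        slack = (hi - lo) * margin
        if lo <= value <= hi:
            colors[channel] = "green"
        elif lo - slack <= value <= hi + slack:
            colors[channel] = "yellow"
        else:
            colors[channel] = "red"
    return colors

print(color_code({"torque_nm": 880,
                  "tension_kn": 125,
                  "fluid_density_sg": 1.7}))
```

In the real system this feeds a 3D visualization built on Unity, but the operator's new skill is the same either way: reading live sensor state instead of guessing and waiting for something to break.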
And what's happened is
the physical oil field
services technicians have
been trained, have learned
and are using this new
technology to operate drills.
So think about what's
happened there.
So somebody with no digital
skills is now playing a game
to operate the drill,
still needing to
intersperse it with
their physical work.
So you have a gaming
engine powered,
digital skills literate,
physically capable
oil field services
engineer, that's literally,
that would be the job
title if you wrote it.
And so the important
lesson to learn--
- That would make a lot
of people happy, right--
- Well, the lesson to
learn out of that is--
- Playing games late
at night instead
of doing their homework--
- Right, playing games--
- Prepare them for
the future of work.
- Gaming does have
a lot of redeeming benefits,
we can talk about that too,
in this future world.
So the big message
to companies in this,
and we're seeing this
generalized across many jobs
in many industries, is think
about that company now.
How are you gonna
develop that skill?
If your approach
to the workforce
is fire my physical oil
field services workers
'cause I need these
digital workers,
and then you go to the
market and try to hire
a gaming engine enabled,
digital literate
oil field services
technician, there aren't any.
The people don't exist.
So unless you invest
in your own people,
the roles are
changing so profoundly
and at such a rate that,
if you don't develop
your own capability to
move your own people ahead,
you'll be stuck.
So the reason I go through
that example is in this world,
is you look at the
human skills you need,
companies need to invest in
developing those new roles,
or you'll be stuck,
your competitors
who invest more in learning
and bringing people on
are the ones who
are gonna succeed,
and this is not a one
time change, these jobs
are gonna continue to
change over the next decade.
So in the survey
we did in the book,
65% of these 1500
executives agreed with that,
65% of organizations
said our workers
aren't ready for AI and these
changes that are coming,
which is probably about right.
We then asked how many of
you are training your people
to get them ready?
Anyone wanna guess how many?
- [Male Speaker] 10.
- 10, 5, go lower.
Three.
3% of executives said they're
training their people.
And part of that may
be the time issue,
that was a couple of
years ago, but I think
it was more an issue
of companies not
knowing what to do,
I don't think they're
evil in saying
we don't wanna invest
in people either.
Companies just don't
know what to do,
and the message,
that's why we're trying
to bring some clarity
to this and say
the only route through
this is to understand
your human skills and invest
in your own human talent
and develop your
learning platform now
and view it as lifelong learning
and your ability to
continue to move people on,
'cause that oil field
services technician
that's using that
technology I just described
is gonna be doing something
different two years
from now and five
years from now.
And you need to keep helping
that individual succeed
if you wanna be viable
at your business.
- Yeah.
So it seems like,
early in your career,
I wanna go to Q&A 'cause
we're about 40 minutes in,
but it also seems
to me that you're,
but I'm gonna keep talking
first because I can.
We get spoiled as
professors, that learning
how to learn seems to be an
important early career skill
to pick up, right, you can't
just learn and then move on.
- Yeah.
I think the issue in the
world we're moving into is,
I think people who want to learn
I think are gonna be okay in
this new world if we can find,
if companies do the right thing,
if we have the right government,
there's a lot of work
we're doing in Washington
and in other governments
around the world
on what the
governments need to do,
we can get into that too
if you're interested.
But if we do the
right thing societally
across public and
private sector,
I think we can put the
right mechanisms in place,
and the people who wanna learn,
I think we can give
the right backing
for them to be successful.
If there's people
who don't wanna learn
or don't view themselves
as needing to learn,
that's what I worry about
and those are gonna,
that's gonna create a really
left behind generation.
So the question is what
do we, how do we instill
the desire to learn in
people, because the people
who don't wanna learn in this
environment we're moving into
are already in trouble and
are gonna be increasingly
in trouble going forward.
- This is a question
I deal with every
day, how do I inspire,
instill in people
a desire to learn.
- You're in an environment
where people generally
wanna learn, so it's good.
- But I feel like
that really is one of
the biggest public
policy challenges
of the next couple
of decades, right.
'Cause we've got these
fantastic institutions
for early career learning
and we've created
this complex product
of the undergrad degree
and the MBA and so
on, but we don't have
a comparable set of institutions
for mid career transition
that gives people the
package, not just--
- Yeah, that's right.
The one thing I'll just, since
you mentioned that point,
the one other thing
I'll mention is
one of our strategic
areas in the book
is we put something
back on page,
it's on page 250, 253, I think.
Where we say that all
the proceeds of the book
are being donated to non-profits
who are around the world
who are focused on
mid career reskilling.
'Cause that is the issue,
there's a lot of these problems,
and I think we'll
solve them, we can put
a lot of attention on.
Mid career reskilling
is the big challenge
that could really create a
lot of political instability,
a lot of other big
problems, not to mention
the human tragedy of
people being stuck
in the middle of their careers.
So all the proceeds of the book,
it's in four languages,
coming out in five more,
and the book's
doing pretty well.
And we just this week had a call
to donate our first
year's proceeds to,
it was really exciting
to look at organizations
who are doing great work in
the US and around the world
to help those mid career people.
But there's a lot more money
that needs to go into it,
so that's what we're
doing with the proceeds.
- It's amazing, yeah.
So we're about a
little over 40 minutes,
and then we're gonna
open it up for questions.
Now, Paul has brought
five copies of his book,
and we've decided that the
first five good questions
that we get will...
(audience laughing)
So I'll take the question,
and then Paul will judge
whether the question is good.
- No, no, no, that
wasn't what we agreed.
- Yeah.
And so let's see, let's
start on the right here,
yeah, the gentleman in the tie.
Yeah, that's--
- Was it because he has a
tie that you picked him?
- Yeah.
That's why I did this.
There's bias here, yeah.
- [Male Speaker] I didn't
even ask the question.
- Oh yeah, that's what
we have to evaluate,
this better be a good
question since we--
- We'll take it back
if it's not good.
- We'll wrestle
it back from you.
- [Male Speaker] Can you
comment on machine learning
on a global basis,
and particularly,
what you see happening
in countries in Asia
such as China, and your
comments about that?
- Yeah, I'm interested
in what Arun
has to say about that as well.
Kai-Fu Lee, first of all, I'd
recommend Kai-Fu Lee's book
for those of you, are
you aware of Kai-Fu Lee?
It's a book called--
- AI Superpowers.
- AI Superpowers, it's
Kai-Fu's book, it's fantastic,
it came out a little
bit after our book,
and we've done a lot
of events together,
and he talks specifically
and in detail
about China and the US and
Europe and what's happening,
so I'd recommend that for
a real thorough answer
to the question.
And my view is generally
aligned with his,
although he's maybe a little
bit more extreme than my view.
So what I would say
is China's clearly
making amazing advances in AI.
They're doing it because
they have a concerted policy
through their five
year plan to do it.
It's a case where
coordinated planning helps,
because you can have
government investment,
government policies around data,
combined with private
sector action, that makes
a real difference, combined with
the educational work
they're doing at (mumbles)
and other outstanding
universities.
So China is going
like this in AI,
you can measure it in terms of
investment going into China,
more AI investment went
into China than into the US
or any other country
in the past year.
You can measure it in
terms of citations,
the citations of the papers
published, as well as the
papers themselves, are increasing
dramatically in China,
so the quality of the
research is better.
You can look at the
AI that's in platforms
like Tencent and Alibaba and
such that they're developing
and it's outstanding,
you can look
at the quality of startups
that are coming out,
and there are some very
high quality startups.
So China's making
very fast progress
on their stated goal to
be the leading AI economy
and leading AI country by,
I forget the year exactly,
I think it was 2028 or 2026.
I think they're on a
good path for that.
So the US is ahead
in terms of most of those measures,
I think the technology
still would be ahead in the
US, the research, et cetera,
the citations and such
would be ahead in terms
of quality of research.
But the question's
what happens from here.
And I think the reality is,
we'll have both countries
with amazing AI capability.
And I'll talk about
Europe in a minute,
and my hope is that we don't
have a lot of trade protections
and competitive
considerations come into play,
because we're gonna
develop great AI capability
in a number of
spots in the world,
the risk is if it gets
politicized and weaponized
in different ways, not just
weaponized in terms of weapons,
but competitively
and economically,
and I think that'll
be very challenging.
So we're an advocate
and I'm an advocate
for keeping the flow of
information, research
and everything else very
open, which it still is today,
but there's different
forces, as you know,
from the US and China
and other countries,
who are voicing
different views on that.
So I think advocating
for continuing open flow
of intellectual property and
research and everything else
and the technology is critical.
Europe I'll just
make one comment on,
'cause every time
I go to Europe,
I do a discussion like
this or I attend a panel
and everybody's
all sad in there.
And the first question is
typically rather (mumbles),
what's your book about, the
first question is typically,
now that we've lost, now
that Europe's lost in AI,
what do we do next?
And I think that's
the wrong view,
because I don't think
Europe's lost at all.
If you look at Germany and,
Yann LeCun was educated
in France even though
he's here at NYU now.
And great research in
France at many institutions,
some of the world leading
research in Germany
at a number of
institutions, Switzerland
and other places, UK as well.
And so great academic research,
the issue has been more
the economics of starting up
and scaling tech
companies in Europe.
But I think that's
fighting last year's,
last decade's battle.
I think the question
shouldn't be framed as
how do I create the
next Google of AI,
that still seems to be
what Europe's asking.
I think the question should be,
how do we dominate
industrial application of AI
in the industries we're good at?
In aerospace, in life
sciences, in manufacturing,
in the Mittelstand in Germany,
how do you develop
an excellence in AI
so that your industries
are more competitive
than anywhere else
in the industry?
But large scale
implementation of AI in China,
the US and other places isn't
gonna impact Europe's ability
to be competitive at that
if they really focus on it.
So we can talk more about that,
but I think the view of
Europe as having no chance
I think is a limited
view, it may be true
in fields of computer vision
and natural language processing
and things that have
been defined by large
scale investment
and application of
data, but there's many,
there's thousands and thousands
of other very important
problems to be solved
that any country in the
world can have a say in.
- Yeah, that's fascinating.
I'll just add two quick
points, one is that,
there's no doubt
that China is ahead
in the implementation of AI.
And in part because the sets
of data about human activity
that are digitally available
are just so much more immense
in China because of the
merging of online and offline
that is so much further ahead.
And I think the
appetite to experiment,
and this goes to one
of your earlier points
about letting deep learning
loose on things responsibly.
I think that there's actually
a much greater willingness
to simply implement, and
while that may not be best
for society in the long
run, it certainly generates
more learning, which
then feeds into itself.
And so it's a tough
thing to fight,
and this trade off between
competitive advantage
and ethical behavior
is going to become one
that is a signature
of the US and China.
But I'm not as optimistic as you
about the rest of the world.
I do feel that the US and China
will grow disproportionately
to the rest of the world
and will probably suck
a lot of value out of
the rest of the countries
as we go down this path.
But I hope I'm proved wrong.
Yes, sir.
The other gentleman
in the tie, yeah.
- The ties are winning.
- [Male Speaker] What's the
civic and governmental role
for companies that benefit
from AI and then have,
in effect, disproportionately
negative impact?
I'm thinking of, I buy a
lot of stuff on Amazon,
but that's closing down malls.
I look at a lot of
things on Netflix,
so I don't go to the
movies so much anymore.
So all of these AI applications,
'cause I think of all
of these new companies
or relatively new
companies as a form of AI,
they have, they're
providing great services,
but on the other hand, they're
having a negative impact
on certain parts of the economy.
So if I'm a business
guy, all I care about
is my bottom line.
Who stands up for society's
overall interests, and maybe
the interests of the people
who have been disrupted?
- There are two things there.
One question is
what's society's role,
and the other question is
who is society in this case?
So I think there is, I'll
answer the first one first.
So what is society's role?
So government, we've
been advocating for,
with any government
around the world,
but we're spending a
lot of time on this
in the US in particular,
there's four roles
that we believe the
government needs to step in
and step up its involvement
in with respect to AI,
the first is in a national plan
and vision around AI and
the R&D to support it.
The second is around a
workforce strategy for AI
for all the reasons
we've been talking about.
The third is a
data policy for AI,
because without a data policy,
I don't think any country
will be successful in AI.
And the fourth is around
responsible AI, which starts
getting into some of the
things you're talking about.
So government does have a role
to play in setting boundaries
of responsible behavior
around AI and some of
the things we're talking about.
I think the challenge
in some of the examples
you bring up though is
who decides what's good.
In spite of all the
debate now about,
you use Facebook as an example,
about the way
Facebook's using data
and what's happened
with Cambridge Analytica
and what's happening in news
feeds and everything else.
Not many people are
stopping using Facebook,
people have decided
that this value equation
of free social media
is just fine with them
and they're okay with it.
So should government
step in and change that
when people are voting
with their behavior?
The same is true of Amazon.
People love the
convenience or they love
the cheaper products, and
that's how they're voting.
Before Amazon, the
debate was Walmart.
Walmart shutting down
the local hardware stores.
So I think the real issue
is what do we want as consumers
in society, and
are we gonna change
our behavior?
And I think that, I don't
see a lot of promise
that that'll change
any time soon.
And so I think that's,
what we need to,
I think what will
change a little bit
is there is a changing
view, at least on the value
of privacy and
personal information.
I think people are starting
to take a different view
of that and I think
that's a positive thing.
We've written a
policy or a position
advocating a federal-level
privacy standard
that would allow
consumers to control
their own personal
information, which is different
than the way the
US operates today,
I believe that's important and
we believe that's important,
which is why we're advocating
for that at a federal level.
That starts to establish some
of these societal principles,
but it gets tough
when you get into
should Amazon be able to
sell at scale like they do,
'cause I think consumers tend
to vote with their wallets
on these things and have
shown that they support
those types of models.
- Yeah.
One hopes that, much
like the advantages
of a green company,
there's progressive growth of
consumers voting for companies
whose values they support.
That doesn't scale
easily, but it's--
- Well, I think
what's gonna happen,
I think transparency
is increasing though,
and I think the more of the
transparency, the problem is,
to answer on Amazon, would
people change their behavior
if they had transparency
into the carbon footprint,
the locally displaced
jobs and everything,
whatever might give
results (mumbles).
Then maybe they would,
but I think the issue is,
how do you get real
time transparency
to the impact of your decisions,
and that'll improve over time,
as we can use technology
more effectively.
- All right, question, yes.
- [Female Speaker] So
with the focus being
on lifelong learning, how
will that impact, thank you,
how will that impact education?
So specifically
graduate programs,
PhD programs at incredible
institutions like NYU Stern.
- That's a terrible question,
we should take back your book.
(audience laughing)
No, it's a very good question.
I'd be interested in
your view as an educator,
I was with the, I won't mention
the institution, I was with
the dean of another very
esteemed engineering institution
in this case, and it was
a private, off the record
meeting, just me and him,
and he was questioning
whether a four year degree
was viable in the world
we're moving into.
And I take the point,
I think it still is viable,
but I think it needs
to be different.
I think in the lifelong
learning, there's,
we're doing a lot of
work on apprenticeships.
We've got a pilot
program in St. Louis,
taking people who have
no digital skills,
subsidizing their learning
process for a year
to try to see if we can have
them come out at the end
with digital skills, not
turning them into machine
learning researchers,
but just giving them viable
skills for the new economy.
And that's showing
a lot of progress,
that's a public-private
partnership
that we and other
companies are involved with
with the city of St. Louis.
And that's
apprenticeships combined
with some
subsidization of people
to help them through the
process, that's a good example,
we need a lot of
shared investment,
public-private
investment, to do that.
I think we need to
look at allocation
of educational assistance.
Right now, our
educational assistance
is targeted more at
university entry to the workforce.
You take something
like Pell Grants,
which are billions of dollars,
should we spread more of
those over lifelong learning
at different points
in people's career
rather than workforce entry?
I think we probably
should, it's a policy thing
that is being looked
at in that case
in the federal government.
There's tax incentives
that I've talked
about publicly before and
that I've been advocating.
Right now...
Which gets into how you
afford the education.
Right now, companies are
incented for investing
in equipment, capital,
plant, equipment,
if any of you study accounting
or work in business,
that's, you can depreciate it.
You have to expense training.
What would happen if you
could depreciate training?
How would that change the
profile and investment
that we make in people,
so that's another change.
What's that?
- [Male Speaker]
For tax purposes,
you want the expense though.
- Depreciation versus expense,
so it depends on your incentive.
- [Male Speaker]
So you're talking
about financial incentives.
- Yes, right, yeah, exactly.
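The accounting point in this exchange, that training must be expensed while equipment can be depreciated, can be made concrete with a small worked example. The dollar figures and five-year straight-line life are invented for illustration:

```python
# Hypothetical illustration of the incentive discussed above: the same
# one-time investment hits reported earnings very differently depending
# on whether it is expensed immediately (as training must be) or
# capitalized and depreciated (as plant and equipment can be).

def earnings_impact(cost, years, capitalize):
    """Per-year hit to reported earnings from a one-time investment."""
    if capitalize:
        # straight-line depreciation spreads the cost over the asset's life
        return [round(cost / years, 2)] * years
    # immediate expensing: the whole cost lands in year one
    return [cost] + [0.0] * (years - 1)

training = earnings_impact(1_000_000, 5, capitalize=False)
equipment = earnings_impact(1_000_000, 5, capitalize=True)

print(training)   # [1000000, 0.0, 0.0, 0.0, 0.0]
print(equipment)  # [200000.0, 200000.0, 200000.0, 200000.0, 200000.0]
```

The total cost is identical either way; only the timing differs, which is why the questioner notes that for tax purposes the immediate expense can be preferable, while for reported earnings the spread-out profile makes the investment easier to commit to.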
So anyway, so there's
different incentives like that
that we need to look at.
So I think it's about
restructuring a lot of this,
I think there's a bigger
role for community colleges
as a different form of
training and education,
there's some work
that's going on
with community
colleges there as well.
I worry less about the K
through 12, we're supporting,
we support Code.org and
all sorts of organizations
in the K through 12 space and
there's a lot of focus there,
we got a lot of broken issues
with our education system.
But I, it feels like we're
doing a lot of things
to move that population
along, I think it's
that later stage
that I worry about, whether
we can find solutions for.
- I couldn't agree more, I
think that there will be more,
we're gonna feel
greater pressure to
think about reimagining
our model as a lifelong
learning model,
not without an
upfront investment,
but that upfront degree
may be a little shorter.
Larger number of
shorter degree programs
are likely to happen,
we have certainly,
and our dean, J.P.
Eggers, has been leading
the charge there on
creating new shorter
one year focused
degree programs,
that's certainly
part of the future.
But I agree completely, mid
career transition that scales.
Then the community
colleges were set up
in the 60s in part because
of worries of automation,
like when JFK set them up.
So I don't think that they
fulfilled their promise
of being the institutions
for transition,
but they're certainly at least
the physical world
infrastructure for that.
Okay, so I think we're gonna
get exactly five questions.
After the fifth question,
they're gonna be like yeah,
whatever, right.
So all the way in the back there
so that I'm not favoring people
who are sitting in the front.
- [Female Speaker] I just
wanted to ask, with the things
that you've seen in the
implementation attempts
that you've seen, what
are the tasks or domains
where you're most concerned
about implementing AI?
So the example that comes
to mind for me is Amazon
and their tool for hiring
being biased against women,
and then they scraped it.
Where should we be most
alarmed or most wary
when it comes to implementing
these technologies?
- I'll give you two, in the
business context, I think it's,
you mentioned a good
one with Amazon and AI,
but we should be
concerned about that,
and, to Amazon's credit,
Amazon is one of the most
sophisticated organizations
at using AI, the first
Amazon website way back
in 1997 or whatever it
was had machine learning
in the recommendation engine
behind the very
first version of it.
So they're one of the
longest-standing users of AI
and they're very sophisticated,
they're great at it.
So the fact that they can
even stumble into problems
is what scares me because
there's so many organizations
who are less sophisticated
using it in a different way.
So I think it's
those data issues
that I worry about,
humans are biased,
so any human data we've
collected reflects biases
or demographic differences.
I think the things we have
to be very careful about
are decisions that
involve people's access,
so like the lending
example I mentioned,
decisions like that are
something I'm worried about.
We're using AI,
we're developing AI
in a lot of embedded
technologies,
we're doing medical devices
that have learning capability
built in in different ways,
manufacturing environments
that are using process
controls, increasingly using
not just traditional technology,
but AI and machine learning.
And the implications
of wrong decisions
are much more profound,
so you have to be
much more careful
around the guard rails
you put around the systems
and the way they operate.
So I think more care
just needs to be taken
depending on the
risk of the processes
that you're looking at.
The broader concern I
have and a big risk area
that I think we need
to be concerned about
as a society is the way
it can be militarized.
So this gets into
autonomous lethal weapons,
and there's been a lot
of work done on this.
Stuart Russell at
Berkeley has led,
and others have led a lot of--
- Wendell Wallach.
- Wendell Wallach and yeah.
And great work on building
awareness around it,
but there hasn't been
enough action on it yet.
And I think that's
something that could have
a lot of implications
very quickly
if we don't get our arms around,
arms is the wrong word,
but if we don't get
some policy around controlling
the spread of autonomous
and use of autonomous
weapons, there are,
even the US has a policy
on authorizing use
of autonomous lethal weapons
in certain parts of the world.
And so it's something we
need to be very careful
about as a society.
- Yeah, it's a scary one,
'cause the UN has a group
that's trying to think about
what are the mechanisms
by which we can slow or
stop the proliferation
of lethal autonomous weapons.
And the examples that
they have to draw from,
say nuclear power, where
(mumbles) are only partial,
because there was a very
different detectable signature,
there was a different kind
of technological curve
that you had to jump over,
there was the ability
to constrain the supply
of critical resources,
none of which seems to apply
here. I sometimes feel that's
one of those problems where
we ignore problems
that are too enormous for us
to comprehend solutions to.
And I figure climate change
is maybe one of them.
So we have a fifth question,
which I'm gonna let you call on
'cause I don't wanna
be the person to--
- For the last one, we'll
go back to this side,
you've had your hand up
for a while, go ahead.
- [Male Speaker] So I
graduated from Stern
about 20 years ago.
The gentleman to my right
had mentioned something
about the bottom line, and I
guess I've been close enough
to the center of
power at big companies
and little companies
to appreciate
how difficult it is
to make the quarter.
With Stern grads, HBS
grads, Columbia grads
who wanna learn, who've
got kids in private school,
who need to stay.
And they're incented
and they can barely
get their jobs done.
And if you knew how
close some of these big
publicly traded companies
are to missing their quarter,
when they don't,
you'd think, Jesus,
how are these folks in
the C-suite going to take
half of their workforce
and help them augment AI
or vice versa as
trainers, as sustainers,
and what was the third?
- Explainers.
- [Male Speaker] Explainers,
when the companies themselves
need these folks to do
it for them so that they
can actually create some
sort of learning environment
for the companies themselves.
So there's
a macrocosmic problem,
and then equally,
and Arun, this is where
you'd be very helpful:
how does Wall Street,
which has a couple of
metrics which, quarter in,
quarter out, year in, year
out, they use to value
a company, and they've
been very resistant
to using anything other
than the metrics,
KPIs, et cetera, that they use.
Now, I don't
know, it's almost like
a proclivity for learning,
preparing for the future,
AI readiness, what
sort of metrics
will Wall Street use to be able
to assess a company for
its long term viability
when it too judges by the
quarter and nothing more?
And rewards,
essentially, as well.
- Do you have a--
- I'll let you start.
(audience laughing)
It's your event.
- I thought I was gonna
dish that one off.
- [Male Speaker] If you
give me another book,
I'll just go home and we can
pretend that I didn't ask.
- Bad question, we
need the book back.
No, there are two parts
there, I think.
One is the workforce, one is
the Wall Street measurement.
On the workforce
point, you're right.
It is not like you
can take 20,000 people
out of your workforce and
set them aside for a while
and still meet the business
numbers, so this is about
fitting in time for training,
finding the right moments
to do it, et cetera.
That's why we talk about these
lifelong learning platforms
and learning platforms
that are accessible
at the point when you need them.
And I think that's
what's gonna need to happen,
so you help people learn
as they're doing their job.
An example is the company
I mentioned earlier, ASAPP,
which is working with
these call center agents.
They're not pulling them
out and training them
to use this new technology,
it's a new interface
they're using that's
incrementally training
them every call, they're
learning something
a little bit new, and so
they're still doing things
they were doing, but
they're getting better
at it as time goes on.
So I think generally speaking,
that's gonna need
to be the approach,
'cause I don't think companies
can take significant
bodies of their workers
out of the workforce just
for a training purpose.
But I think that's doable if
you approach it the right way.
On the measurement
thing, I don't think
they'll ever get there,
other than maybe the way they
look at discounting
the company's future prospects
based on whether
they're prepared or not,
I don't know how you reduce
it to a set of metrics,
I don't have a good
answer to that.
- One promising
direction could be,
and this is still
in the design stage,
there's nothing close
to implementation.
But if you think about
the ways in which
some companies
have collectively decided
that they are going to signal
their environmental
responsibility.
There are metrics
that have now emerged
through industry
consortia that give us
some information about
this, and so I've spoken
to a lot of large companies
about them somehow inducing
the same, trying to induce
the same positive perceptions
of companies that are being
particularly responsible
about workforce transition.
And to use this as a lever,
'cause these companies
often suffer from not
being able to harness
the kinds of resources
that they need to
successfully accomplish this.
And so if you've got
something akin to
being environmentally
responsible
or running a
sustainable business
and having industry
consortia that then start
to rate companies based on that.
And this becomes a
signal of quality,
something that you
want to advertise,
something that you're
proud of, then I think
that that's one mechanism
by which we can start
to see some
information about it.
But I think we're a
few years away from it,
it's gonna be a
hard one to get to.
So let's do this, we've got,
we're running a little over,
but there are more questions,
so why don't we take
three questions
one after the other
and I'll write them down,
and then I'll read them back
to you and you can answer
any or all of them.
And then there's lots of
alcohol in the back,
and there's also
a book seller in the back
who probably hates us right
now for giving away free books.
But anybody who asks a
question who wants a book
who forgot to bring their
credit card with them,
just go and take a book, give
him my name and I'll square
it with him, and I'm happy
to contribute to you
having this book because
of where the proceeds
are going.
So there are more copies
of the book available
in the back and lots of
different ways to get them.
But let's take three questions,
answer them and then go
to the more informal part.
Yes, sir, you've had your hand
up for a while as well, yes.
Yeah.
- [Male Speaker]
So my question is
about universal basic income.
We talked about the future
of work, reskilling,
where does that
fit in all of this?
- Okay, great one,
short and sweet, UBI.
All the way in the
back, right corner.
- [Female Speaker] How is
Accenture applying some
of these concepts to how you run
your operations, how
you run your marketing,
how are you applying it
internally rather than
how you're advising
your customers?
- Okay.
And all right, final
question, okay.
Sorry I favored
this, I'll take one
from this side as
well, fourth question.
- [Female Speaker] My question's
around trust,
and what's the plan to
build and manage trust
between the trainers,
explainers, sustainers,
and the AI machine employees?
- What a great set of questions.
- And we'll take one
more from this side
since I didn't favor it, okay,
yes, the woman in the back.
- [Female Speaker]
Hi, I'm curious about,
when you're making the
business case to the C-suite,
what's the most effective way
to get them to adopt AI,
what works well?
- C-suite, making the business
case, what works well?
- Okay.
- Okay, so we've got UBI.
What do you think of UBI,
how is Accenture applying
the principles that
you've developed or
the stuff that
you guys are doing
around the AI
implementations internally?
How do we build trust between
the human-AI interface
in general, and how do you make
the case for AI
to top management?
- Yeah, great questions,
I'll answer them real
quick, I don't think
you want long answers to each,
so I'll just give quick answers,
and we can pick it up
more over a drink.
So the UBI question
is a great one,
I almost went there
earlier and just didn't,
'cause I didn't want to
make the answer longer.
I'm generally skeptical of UBI
because of the word universal.
I think we need targeted,
for that 15 to 20%
I talked about earlier that
are gonna be out of work
and really have a challenge
to, like the people
in St. Louis that we're helping,
those people are gonna
need significant assistance
and I think we need
new societal mechanisms
and investment to
help those people.
The idea of giving every
American $1000 a month,
which is Andrew Yang's platform,
I think it's too broad
and I don't think
it produces the desired results.
So Finland and other
countries are testing
universal basic income,
but I personally think
it needs to be more
focused and targeted.
But we do need more, much
more assistance quickly
to help the people who need it.
So that's the
answer on that one.
On the, what was next, the...
- How is Accenture applying--
- How is Accenture, we're
applying it aggressively,
because look, we're almost a
500,000 person organization,
so if AI is gonna
impact how people work,
it's gonna impact us
more than anybody else.
So we've been aggressive
at applying it
in every part of
our business we can,
so one area of our
business, for example,
in our operations business,
we employ 100,000 people
and we actually talked to
our employees, every
one of them, and we said,
help us figure out
what we can automate, and
we'll help you figure out
how to do a better job.
And so in the three years
we've been at that, I think
the number is now 37,000
of the 100,000 jobs
automated using AI,
bots, RPA, and a number
of other technologies.
All those people
are still employed
and our business has grown
because we've been able
to up-level everybody from,
say, mortgage loan data validation
to mortgage analysis
for our clients
who use that service,
as an example.
So we're applying it as
aggressively as we can,
we believe in the
future of consulting,
but if you think of
consulting as spreadsheets
and PowerPoint, that's dead.
Consulting is about
AI and models and data
and real time
iterative analysis,
so we're transforming
our consulting business
to operate in a
very different way.
So we're applying it
aggressively in every part
of our business 'cause we think
it impacts our industry
faster than others.
- Yeah, trust between the
humans and the machines.
- I'll end on trust, 'cause
that's, I love that question.
And then the C-suite--
- The business case for AI.
- It's really hard,
because there are often
multiple steps to achieve
the big picture.
And I think you need
to put together
a business case that
delivers as it goes.
So as an example, one
of the big things we find
is we go to a company,
work with them and identify
a great AI use case,
and it turns out, well, you need
to invest five million
dollars to fix your data
before you can even get
to the AI application,
and that doesn't work,
you're not gonna sell
a business case based on that.
But if, in that same process,
you can maybe start with some
better predictive analytics,
solve some of the
problems, create some
short-term business
value and work toward
the vision by doing
it incrementally,
that can work for
some organizations.
Or the business case is big
enough to go for the whole
thing, which a few
organizations have done.
A life sciences
company transformed
their whole R&D
process with a big bet
that included
acquiring companies
in addition to big
investments, moving
from just science-based R&D
with MDs and such to AI-
and science-based R&D.
So that was a big
bet in that case
with a very different
business case.
And then finally, should I just,
I'll make a closing comment
along the trust one, unless
you have anything else.
- No, that's good.
- On trust, I'll wrap up
by saying, what do you do
going out of here tomorrow
if you work at a company?
I think there are three things
that you need to think about.
One is technology, 'cause
this is different technology,
it's different
technology platforms,
if you're a big company,
you need to think about
who your partners are,
which ones you're trusting
and working with on AI.
And it affects your technology
and how you do things,
it impacts your data
architecture and
all these things.
So technology is
really important,
you're gonna have to
start thinking about that.
Talent is the second thing.
The really, really good
AI talent is a lot better
than the less good AI
talent, so how do you get
the right AI talent, and how
do you lay the foundation
for lifelong learning
for all the non-AI workers?
And trust, to your point,
I think is the best way
to end my comments,
because I think that's
the big differentiator of the
next period we're moving into.
Because in the environment
of a tech clash
and concerns about some
of the questions around
how these companies
are operating,
AI is offering every company
the potential to deliver
more invasive yet
individualized, valuable
services to their customers.
Those companies that
have higher degrees
of trust are gonna be
doing the right thing,
but they're also gonna be
getting a competitive advantage.
Think about what Walmart
experimented with recently
with their key-based
in-home delivery service.
You're gonna trust Walmart
to actually open the door
to your house, walk
inside and deliver goods?
And with the grocery service,
they would actually open
your refrigerator,
rearrange your shelves
and put stuff in
your refrigerator.
Who are you gonna
allow to do that?
You might not allow
your brother to do that.
Are you gonna allow
Walmart to do it?
Customers are, and Amazon
has a similar service.
In China, that's
very commonplace.
But the trust equation is
huge, and those companies that
are now destroying trust through
exploiting customer data,
not respecting the
data, not being able
to secure the data, will have
a tough time in the future.
So trust is essential,
and I think it's
a competitive
differentiator for companies
who can do it right
going forward,
but it requires a rethink
from the era of exploitation,
which has been
the last 10 years,
to an era of working with
the consumer to build trust.
- Okay, great.
Before we get to the brief
closing remarks from Russell,
I just wanted to
thank you personally,
this has been a rich
and deep discussion.
I wish we could go on
for more time, but
we've got alcohol
and one-on-one conversation.
I do also feel that there
are so many open questions.
The book lays a foundation
for a new way of thinking
about the impacts
of AI on business,
but also leaves open
a bunch of questions,
and for those of you who are
academics in the audience
or PhD students, I
know I see a couple,
Elizabeth from Columbia,
Prasanna from NYU,
there's still a ton
to be understood.
The impact of the kind
of inquiry that we need
at the scientific level,
to be able to plan for
a better future of work
and for the kind of
positive future of work
that Paul envisions,
will be immense, and
so go out and do it.
So with that, Russell.
- Thanks, Arun and Paul, for
a fascinating discussion,
and my name is Russell
Isaacson, I'm an MBA 2007,
I'm a member of
the alumni council.
And I think a theme tonight
was lifelong learning.
And your participation
exemplifies that,
and we encourage you to
come to these events.
And on that note, I wanna
just stress that tomorrow,
March 28th, is NYU One Day.
Maybe some of you have
heard of it or haven't,
but it's basically the
big day of the year
for NYU wide giving.
And last year, Stern
donated the most of
all the schools,
so we encourage you
to keep that up this year,
and if you have any questions
on that, please
talk to me or any
of the staff on
how you can donate.
Again, there's gonna
be books for sale
and Paul has graciously
offered to sign books,
so I encourage you to stick
around and meet each other.
And just as a token
of appreciation,
I'm gonna give you guys a gift.
- [Arun] Thank you.
Thank you.
(audience applauding)
(calm music)
