- We have a discussion on
regulating big tech companies.
With the 2018 General Data
Protection Regulation, or GDPR,
many people believe
that the European Union
has become the de facto global
regulator of technology.
One of the foremost voices in the EU
on behalf of tech regulation
is Marietje Schaake.
Marietje represented the Netherlands
as a member of the European
Parliament from 2009 to 2019,
where she focused on trade,
foreign policy and technology.
And she initiated the net neutrality law
that's now in effect throughout Europe.
Marietje is a member of the
Transatlantic Commission
on Election Integrity,
the Global Commission
on the Stability of Cyberspace
and the European Council
on Foreign Relations.
I'm also really, really
pleased to say that Marietje,
as of this week, has
joined Stanford University
in a full-time capacity
as HAI's Inaugural
International Policy Fellow.
She's also jointly appointed
at the new Cyber Policy
Center at Stanford.
Joining Marietje will be Eric Schmidt,
who is Technical Advisor and
Board Member at Alphabet.
Eric joined Google in 2001,
and served as CEO and Executive Chairman.
Prior to Google, he was the CEO of Novell,
Chief Technology Officer
at Sun Microsystems
and on the research staff at
Xerox Palo Alto Research Center
and Bell Laboratories.
Marietje and Eric, the floor is yours.
- So what we thought we would do is,
have me make a few comments
and then have you make
some comments and then
have you lead Q and A on sort
of everything we talked about
and anything else the
audience is interested in.
I'm really delighted to be a
member of the Advisory Board
for the HAI, I think that
what Stanford is doing here
is historic in many, many ways.
And I'm reminded that ethics
is a system of moral principles,
and that it's important that we debate now
the ethics of what we're doing,
and that we debate now the
impact that the technology
that we're building is
going to have on everything.
So we're here, fundamentally,
because we want to have an open debate,
not just for this stuff,
but for the things that will
be invented and so forth
in the next five or 10 years.
But we want to make sure that the systems
that we're building
are built on our values
on human values, right?
Which is why this conference exists.
It's why HAI exists.
And it's why the research
is so integrated.
A couple of updates of
things that are gonna happen
fairly soon are that I
happen to be fortunate enough
to be the chairman of a
National Security Commission
on Artificial Intelligence,
which will deliver its first report
on November 5th in D.C.,
at a big event in the Congress.
This particular group
was charged by the Congress to look at AI,
the impact of AI on society, on security,
and so forth and so on.
A few days earlier, the
Defense Innovation Board
of which I'm also the chairman,
is hopefully going to debate and approve
a proposal for the DOD
on ethics and the use of AI
in ethical ways by the DOD.
This is just a reminder
that this work is ongoing.
And I encourage you all to
look at it and discuss it
and give feedback.
But my goal in those two
initiatives is to sort
of build a national consensus
AI strategy for the US.
Roughly 30 countries
already have these, right?
But we don't seem to have one,
and yet it's so central to
what we do in universities,
among our students, among our business
and for our nation.
So one way to think about it
is that in the last few years,
we've gone from a
relatively simple AI stack,
to a very complicated one.
And this new AI stack,
which is a combination of deep learning
and reinforcement learning,
is extraordinarily powerful.
And I want to give you some
examples of some of the things
that you can do and talk
about the positive impacts
to some of this on the thing
I care a lot about now,
which is science, and much of this work
is going on here at Stanford.
It's clear, and this is
all published information,
that with the current technology,
which is largely labeled data
and sophisticated vision algorithms,
we've already detected lung cancer earlier,
we can predict heart attacks
and strokes from your retina,
we can detect the spread
of breast cancer tumors
much earlier.
This will save or help millions
of people over the next five
or 10 years as it deploys.
This stuff works.
It's done, it's in the bank.
People are building
companies and businesses
and so forth around it.
Google, for example,
automatically captions 1 billion
YouTube videos in 10 different languages,
something which would otherwise
not be economically possible.
You can imagine the benefit of that,
to global diffusion of information.
And indeed, TensorFlow is now open
and very, very broadly used.
I think folks here probably
know that TensorFlow
is a library, which
started off essentially as,
the way I think of it,
matrix multiplication,
which is what tensors are.
But it's really a set of
algorithms and procedures
to work with the powerful algorithms
that I'm going to talk about.
And it's been extended and so forth
and it's being used pretty
much by everybody now.
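To make that "matrix multiplication" framing concrete, here is a minimal sketch in TensorFlow 2.x; the values are made up for illustration and are not from the talk:

```python
# Tensors are n-dimensional arrays; the core primitive alluded to here
# is the matrix product. The values below are purely illustrative.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # a 2x2 tensor
b = tf.constant([[5.0], [6.0]])            # a 2x1 tensor
print(tf.matmul(a, b))                     # their matrix product, a 2x1 tensor
```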
And what's interesting
about TensorFlow is that
the extensions, and again,
this is due to open source
and other partnerships, in say, physics
and statistics, which are recent
libraries that have been added,
are changing the way people think about
using AI in these fields, right?
It's a new way of solving a problem.
This is new in the sense
that this stuff wasn't
available a year ago.
These are the newest
tools to do whatever it is
that you care about.
And one of the things that
Google did is reinvented
something called federated
learning, popularized it
and also open sourced it.
Federated learning
can be understood as
allowing multiple computers
to learn in pieces, so that collectively,
you can do things faster and
you can do things at scale
and with certain kinds
of privacy protections.
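As a hedged illustration of that idea, here is a toy federated-averaging sketch in plain Python; the linear model, the client data and the learning rate are all made up, and this shows the general technique, not Google's implementation:

```python
# Toy federated averaging: each client trains on its own local data, and
# only model weights (never the raw data) are shared and averaged.
import numpy as np

def local_step(weights, X, y, lr=0.1):
    # One gradient-descent step of linear regression on a client's data.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
global_w = np.zeros(3)

for round_ in range(50):                  # communication rounds
    local_ws = [local_step(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)  # the server averages the updates
```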
Now, along the way I had assumed
that natural language processing
was the sort of stepchild of all of this,
and I couldn't have been more wrong.
A team, again from Google, released
a technique called BERT,
which can be understood as
the first really scalable
self-supervision system.
What it does, so you
understand, is it runs around
and it learns from things like Wikipedia,
and books corpus and a few other things.
And it self-trains.
That's a very big deal.
And much to my surprise, right?
Credit to the inventors:
this self-training,
based on that sort of
rather disaggregated set of data,
has produced extremely interesting
insights in terms of, I don't know,
context and concepts in natural language,
and we can use this in
all sorts of new tasks,
question, answering
and so forth and so on.
A typical example is that
you can take some words out,
it will put them back in, it
figures all that stuff out.
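That masked-word trick is easy to see with the Hugging Face transformers library, which is an assumption here rather than the tooling used by the BERT team; a pretrained BERT puts the removed token back in:

```python
# BERT's self-supervised objective in miniature: mask a word and let the
# model fill it back in. The example sentence is made up.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill("The doctor removed the [MASK] from the patient."):
    print(guess["token_str"], round(guess["score"], 3))
```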
And it looks to me like
NLP, which in my view
had been sort of sitting
around for a while,
has now gotten a very significant boost
by virtue of the BERT approach.
And more importantly,
I think it illustrates
that there's much to do, that
self-supervised learning,
the notion of doing it
on your own as a computer
is relatively new.
Most of the things and
systems that you use today
in AI have been with labeled data,
and indeed the examples that I used with
breast cancer and vision
and things like that,
and healthcare, are
largely from labeled data,
but there's a scale problem
with labeling data and so forth.
So with this, right?
The combination of GANs, which
I'll talk about in a second,
and reinforcement learning,
means that you really can
begin to do things at scale,
that are sort of magical, right?
And that has a lot of implications
for the things I'll talk
about in a bit about ethics.
So for me,
there is a set of questions
that this begins to ask.
And I'll begin to develop
these by saying that,
we don't really understand
today how humans
and AI will coexist together, right?
I've given you very specific
task-oriented examples,
language understanding,
vision, and so forth.
But we don't fully understand
this interaction,
how people will deal with it.
We don't have exactly the user interfaces.
You know, 50 years ago, the windows, icons,
menus and pull-down interface
was invented at Xerox PARC.
And we use that today
without thinking about it;
it had an enormous impact.
What is the equivalent of that?
It's obviously not going
to be the WIMP interface
from way back when.
But how do we combine a human decision
and an AI decision into this?
So if you look at AI
in speech and images,
I would say that these at
least are first-pass
solved problems.
They're equal to or better
than human capability.
And that has a lot of implications.
So for example, that's why
self-driving cars make sense.
It's just much better to
have the car drive you,
rather than you because
it doesn't get tired
and it doesn't get drunk
and it doesn't have accidents like you do.
And that's happening again,
these are things which
we know will happen.
The question is what
additionally will happen?
So, we've made a lot of progress,
for example, here with object
recognition and so forth.
But we have a lot of things
that we're still working on there.
So for example, true
deep scene understanding;
understanding for everyone
in sort of low-resource languages,
there's lots of languages
that aren't the ones
that all of us speak; true
conversational handling of turns.
It turns out that one of the
hard research questions is,
how do you detect when a conversation
goes from one person to another
very back and forth quickly?
I'm familiar with a research project,
which has a thesis that the
rate of such turns with a child,
the velocity of turns,
is an indicator of, or perhaps driving,
their intellectual
development as a young child.
This is at very young ages;
we'll see if that's true.
But it turns out, it's
hard to detect that, right?
So there's an example of
where we are in that frontier.
But to me, the thing that
we're going to see now
is that the combination of the
things that I've talked about
is going to transform science.
And I want to give you some examples.
So, what happens in
science is they have...
I know I've talked to
enough people to understand,
they have an enormous amount of data
that's very, very confusing.
And if you look, pretty much all of
the interesting AI approaches
in science right now
have a GAN in the middle of them,
and for those of you who
aren't familiar with GANs,
GANs are where there are
essentially two networks,
one which is generating
and one which is approving
it or disapproving it.
And because the two work together,
eventually they can produce things
which are normalized, right?
And look similar to the underlying data.
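As a hedged sketch of that two-network setup, here is a toy GAN in TensorFlow that learns to match a made-up one-dimensional data distribution; the architecture and hyperparameters are illustrative only:

```python
# A generator proposes samples; a discriminator approves real data and
# disapproves fakes. Trained together, the fakes drift toward the data.
import tensorflow as tf

gen = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                           tf.keras.layers.Dense(1)])
disc = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                            tf.keras.layers.Dense(1)])  # logit: real vs fake
g_opt, d_opt = tf.keras.optimizers.Adam(1e-3), tf.keras.optimizers.Adam(1e-3)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

for step in range(2000):
    real = tf.random.normal([64, 1], mean=4.0)  # the "underlying data"
    noise = tf.random.normal([64, 1])
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = gen(noise)
        # Discriminator: approve the real batch, disapprove the fake one.
        d_loss = (bce(tf.ones([64, 1]), disc(real)) +
                  bce(tf.zeros([64, 1]), disc(fake)))
        # Generator: produce fakes the discriminator approves.
        g_loss = bce(tf.ones([64, 1]), disc(fake))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, disc.trainable_variables),
                              disc.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, gen.trainable_variables),
                              gen.trainable_variables))
```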
Pretty much any interesting application
that I've seen in science
has a GAN in the middle of it,
which is trying to take all the sort
of lossy, strange data
and turn it into something
which then can be manipulated.
The hard science is about finding robust
and reproducible patterns in this data.
And so there is an opportunity,
because these models are very,
very good at this.
I had originally thought when
this was first presented to me
that somehow you just
sort of run this stuff
and you would figure it out.
That's not true.
It turns out, you need a much
more sophisticated pipeline.
This is the work that is going
on at Stanford in research
and many, many other places in
the top universities in
the country, and so forth,
at Google or elsewhere.
So when you add these
generative models like GANs,
and then reinforcement
learning, which uses a simulator,
all of a sudden things go much faster.
So anywhere where data
is easily collected,
and then normalized,
or where there's very little
computational analysis
in the field which covers
a fair amount of science,
these things are appropriate.
Now the combination of these things
will produce historic results.
AlphaFold, which is from DeepMind,
has just recently won one of
the protein-folding competitions.
The algorithm is essentially
an energy management problem,
where you have to find
the lowest energy state
for a complicated set of proteins
as they move around and they fold.
And this is computationally
very, very difficult.
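To give a feel for that "lowest energy state" framing, here is a toy simulated-annealing search over a made-up one-dimensional energy landscape; this is not AlphaFold's actual method, just the generic idea of searching a rugged landscape for its minimum:

```python
# Simulated annealing: accept downhill moves always, uphill moves with a
# probability that decays as the "temperature" drops.
import math
import random

def energy(x):
    # A rugged, made-up landscape with many local minima.
    return x * x + 3 * math.sin(5 * x)

x, temp = random.uniform(-3, 3), 2.0
for step in range(10000):
    candidate = x + random.gauss(0, 0.1)
    delta = energy(candidate) - energy(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.999  # cool down gradually

print(f"low-energy state near x = {x:.3f}, energy = {energy(x):.3f}")
```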
And what it shows, if you
study AlphaFold as an example,
is that the generic solution is
gonna be something like this:
you'll take the data
into some sort of GAN,
you'll get some sort of normalized data,
and then you'll use reinforcement learning
to discover a function that
was not known to you before,
a transformation of A to B.
Where would this apply?
Pretty much anywhere
that's interesting and hard.
Quantum chemistry, molecular binding,
drug discovery, climate forecasting,
complicated energy flows,
anything involving Navier-Stokes,
those sorts of things.
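And for the reinforcement-learning half of that generic recipe, here is a toy tabular Q-learning sketch that discovers a state-to-action mapping on a made-up five-state chain; it illustrates the technique only, not any of the systems named above:

```python
# Tabular Q-learning on a 5-state chain: reward sits at the right end,
# and the learned policy is the discovered "transformation of A to B".
import random

n_states, actions = 5, [-1, +1]  # move left or right along the chain
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

for episode in range(500):
    s = 0
    while s != n_states - 1:
        if random.random() < 0.1:  # explore occasionally
            a = random.choice(actions)
        else:                      # otherwise act greedily
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in actions)
                            - Q[(s, a)])
        s = s2

print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states)})
```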
Why do I spend so much
time talking about this?
Because this
technology in the next decade
can allow us to build extraordinarily
powerful new materials, right?
To understand fundamentally
what's going on
with the climate in a
way we couldn't before,
much more efficient energy generation.
This is all good.
This is really, really powerful.
And I don't want us in
these complicated debates
about what we're doing to forget
that the scientists here at
Stanford and other places
are making progress on problems,
which were thought to be unsolvable
that have not been solved
for 50 years or 100 years,
'cause they couldn't do the math at scale,
simply because of the way physics works.
So I think that there's a set of issues.
And then there's a set
of really hard problems.
And I'm gonna define issues
as ones which we can,
I think, address relatively easily;
not easily, but with the
understanding that they can be addressed.
So one of the first
questions about AI systems
is we want them to be in a
situation where the end users
are in control, and they
get what they want, right?
This is the problem of you know,
I built it and it didn't do what I wanted.
And of course, movies are
full of these scenarios.
It's incredibly important,
that the AI systems
that we collectively
build, including the ones
that I've highlighted and many others,
have human control, right?
Our legal systems, our ethical systems
all are based on these
sorts of principles.
We don't want rogue things running around
in the vernacular of movies.
So how do we get it so that the end users
that are using the systems
have more control over them?
And I mean more than like and dislike,
how does that training actually work?
Right, this is an area of
extremely active research.
We just don't want the AI
exploiting our impulses
and obsessions and anxieties.
This is critical to the
sustainable adoption of AI.
To give another example: data.
Many people, including
myself, believe that if,
broadly speaking, healthcare
data was broadly available,
and we had the equivalent of
ImageNet for healthcare data,
we would make enormous
progress on diseases
that have bedeviled humans
since we've been around,
thousands of years.
We're not gonna do that,
because it's a violation of privacy,
it's not appropriate,
and so forth and so on.
So how do we solve that problem?
How do we get to the point
where we can get enough training data
while respecting privacy, right?
Again, I'm not suggesting
we would do anything
other than respect privacy,
I think it's a fundamental ethical value.
On the other hand, there really is a need
at least in the research community
to get this data to
solve a disease, right?
And if you have the
disease, all of a sudden,
your view of this gets
very, very motivated.
And then how do you get
access to the right data
and to the right people in such
a way that it's not misused?
There's a lot of work here
and elsewhere on bias.
And an example of a contribution
that Google has made
is something called TensorFlow Lattice.
It allows you to manage biased data.
There are tools, these are open source:
Facets to understand
the data, something
called Model Analysis,
and What-If to visualize the data, right,
in different forms of bias.
We know the data has bias in it.
You don't need to yell it
out as if it's a new fact, right?
'Cause humans have bias in them.
Our systems have bias in them.
It's like not a shock.
The question is, what do we do about it?
So, at least at the current state of AI,
we can help you understand
the bias and identify it.
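As a generic sketch of what that identification step can look like, not the specific Google tools named above, here is a minimal slice-and-compare audit over made-up outcomes:

```python
# Slice model outcomes by group and compare rates; a large gap flags
# possible bias worth investigating (it does not by itself prove bias).
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})
print(df.groupby("group")["approved"].mean())
```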
The real question to me, as an ethical matter, is
how do you address it?
Another thing that's important,
and I think we're getting
there as an industry,
is, I'm going to say it this way:
good judgment is required.
Two examples.
In Google's case, we've
been very, very careful
about sensitive things
like face recognition
for obvious reasons.
If you look at OpenAI,
which is essentially a partner with Google
on many of these things,
they withheld the public details of GPT-2,
which is a text system that
generated arbitrary text.
And they allowed it for research use,
but they didn't publish it.
So they didn't want it to be misused.
These are two examples of good judgment.
And I think there's gonna
be more such examples.
And it's important that
we establish the basis
for that good judgment.
And that the people who are doing
this sort of really think about it
before they sort of release these things.
I mentioned before the issue of control.
This is a subject of research,
as well as of sort of academic
interest and business interest;
you really need to know, right,
that in the corner cases
of these algorithms,
nothing weird happens.
There's a great deal of interest in China.
I think the China problem is solvable
with the following insight:
we need access to their
top scientists, right?
We are better off collectively,
when their top scientists
the incredible talent
that is there and in other countries,
is here in our country
working on these things.
And we also benefit
from common frameworks,
so TensorFlow being an example
and there are a number of other ones.
I would argue that even in a situation
where everybody hates each
other among all the countries,
and they can't get along and
they don't agree on anything.
They're still areas of common agreement.
The most obvious one,
is you have a country
that's doing experiments.
Let's say it has some horrific cyber thing
that it is experimenting with,
and not the US but somebody else.
And clearly, it's not in their
interest or in our interest
for this to escape from
its testing harness, right?
So there are all sorts of ways.
You can imagine treaties
and agreements among countries
to try to mitigate the
worst possible scenarios,
the ones that everybody
wants to talk about.
I think there's some really hard issues.
I think this fake video thing
and the impact of misinformation
is a really hard issue.
It's hard at a sociological level,
it's hard at the technical level.
Google made a data set of
visual fakes for detection
for the research community,
following one involving text,
excuse me, synthetic speech.
This is a case where the
researchers have to win.
It's important that we develop techniques
to detect these things, and
to be able to handle them.
We don't want a world which
is nothing but misinformation,
where everybody is an audience of one.
And where everything is marketed to us
by some evil government in some
other country and so forth,
all of the scenarios that we talk about.
It's important that the
technology that we have invented
be used in a way that increases trust,
increases ethical use of information
and doesn't dumb it down.
I'm very worried about
the issues of deterrence.
So,
I'm good friends with a number of people
who worked hard to make
the world that we have now
safe from nuclear weapons.
And when you talk to them
about how they did deterrence
and how they did all of the
negotiations at the time,
and these are heroes, in my view,
'cause we're all alive basically,
because of the work that they did.
They talk about the
scarcity of the weapons
and the ability to count them,
and the ability to sort of
know what others were doing.
And often one of the techniques would be
that one side would tell
the other what they knew.
But none of these principles apply
in what I would imagine
would be software negotiations,
for many, many reasons.
Software is diffuse; you
wouldn't tell the other side
what you had done.
I mean, again, I'm making these
things up in my own mind.
But we haven't had a proper regime
around how all that works.
So, if you have the secretary
of state of one country
and the secretary of
state of another country
having this conversation,
what are the ground rules?
And it's pretty clear, by the way,
that because AI is incredibly powerful,
that there will be negative issues,
there will be negative uses.
I'm not arguing against that.
So how do you talk about it?
Does one side disclose it to the other?
Well, no one would naturally do that.
What are the norms of this?
This area strikes me as one
which is at a nascent beginning,
and likely to become very important
as general intelligence
becomes more and more possible,
which is some time from now, of course.
Another hard issue is that the technologies
that I'm talking about
will largely first come out
in government form or in commercial form.
So for example, in China,
the surveillance technology,
which is, as a technical matter, well done,
has had a sort of terrible impact, right?
So that's the first use that those folks
have seen of the power of AI.
Well, that's not the first use
I would like them to see of AI.
How do we sort that out?
This notion of sort of
ubiquity of this technology,
and how do people experience it,
will also color how our
technology is treated.
And I think it's also important
that we establish, right here, right now,
that the liberal values of Stanford
and other universities, and Western values,
are the ones that should win, right?
That we shouldn't allow
other values, right,
which we can debate
what they are specifically,
that we need to be
unified and clear on that.
So there are lots and
lots of upcoming issues.
My favorite current one is the following.
So in the reasonable future,
not in the next year or two,
it'll be possible for you.
Let's say you're somewhat into
middle age kind of an adult,
and you're kind of bored
with your normal life.
And so what you do is you
put a headset on for the day,
and you live life of your younger self.
And that virtual world that you live in
is populated with the virtual
images of your friends
who are themselves younger,
more handsome, more beautiful,
more energetic, richer or whatever.
More hip.
What do you think about that?
What do you think about
the term I used here
it's called crossing over, right?
What do you think about people
who would choose to do that?
Is that a good thing or a bad thing,
as they sort of leave our
current physical world,
except for things like eating, right?
And they go into that.
Now, is that a likely scenario?
Take a look at 3D gaming today
and imagine 10 years from now.
You can do this with many,
many different technologies,
and we need to start having those debates.
What I wanna say here,
in finishing up, is this:
I start from the premise
that this technology
is extraordinarily beneficial.
I think the evidence is there,
the ability to deal with
disease alone, right?
Which has bedeviled all
of us and our ancestors
and our parents and our
children and so forth and so on.
I mean, what a gift, and I can go on.
Think of the number of
people who will be alive
because a car doesn't kill them.
I can just go on and on and on and on.
I also think that we're
just beginning to see
the impact of this technology,
on the really, really powerful
things in science, right?
Whether it's disease discovery,
understanding how the
basics of energy work,
and so forth and so on.
And we will see that
benefit in the same sense
that we saw the benefit of electricity,
You know, 100 years later.
I think that the optimism
that I would offer, from
a research perspective,
is that I think we know what
the next set of things are,
right?
That the combination of the
GANs that I talked about,
this sort of self-supervised
learning maneuvers,
reinforcement learning, and
the development of broad simulators,
or at least specific simulators,
will enable these extraordinary
gains in the next five years.
I cannot wait to see what next year
and the following year look like,
from the power of what is happening here.
So thank you very much.
(audience claps)
Congrats you guys.
Thank you.
- Thank you.
I'm going to talk about governance.
We can see if it's a problem or an issue.
One of the two, it's
certainly a hot topic.
Earlier this week, I was
doing a debate in New York
organized by Intelligence Squared,
where the proposition,
in an Oxford-style debate, was:
Europe has declared war
on the American tech companies.
And I was kind of wishing that,
the debate would have
taken place one week later,
because then the
proposition might have been:
the American Congress has declared war
on the American tech companies.
So in that light merely talking
about regulating big tech
already sounds like an olive branch.
But very clearly, in
Europe, in the United States
and frankly globally,
questions of governance to
safeguard the rule of law,
the public interest and the
protection of individual rights,
collective rights, amidst
technological change
and geopolitical power shifts,
are at the top of the agenda.
I think the biggest question is
how do we implement regulation?
And I would say, starting with principles
that we protect for very, very good reason
is a much more productive approach
than to suggest that
technologies are so exceptional
that they can only be regulated
by entirely new systems
or models.
Firstly, we don't have the time
or really the ability
to start from scratch
if there ever is such an
opportunity in governing,
but especially now in
these polarized times
and globally tense times,
I think it is just not
going to work that way.
But more importantly,
there's too much that is of enormous value
in the human rights frameworks
and other fundamental principles
that we have to simply discard them.
Now, an often heard argument,
especially in this part of the world,
is that governments should refrain
from regulating technology
or the Internet,
because it would stifle innovation.
Maybe this sounds familiar,
but I think the zero-sum
dichotomy is a caricature.
In fact, arguing that
implies that innovation
is more important than
democracy or the rule of law,
the foundations of our quality of life.
And I believe, actually,
that some of the most serious challenges
to our open societies, but also
to the open Internet today,
do not stem from over-
but rather under-
regulation of technologies.
Now the idea that tech
companies are categorically
against regulation is
paradoxical for many reasons
because they have directly and
also significantly benefited
from regulations such as Section 230
intermediary liability exemptions,
and actually companies themselves
are increasingly governing
very impactful parts
of our economies,
societies and democracies.
Terms of use are often
a stronger indicator than
legal articles of what
hundreds of millions of people
experience in terms of
content when they go online.
Google processes approximately
63,000 searches a second,
Verizon and MasterCard
verify your identity
and payments online.
Uber knows your every move,
Microsoft is now going to build
the Defense Department's cloud
while Facebook decides
who can and cannot be trusted
as a news source.
There's a lot of power in
the hands of very few actors.
And not only does that
make it very difficult
for newcomers to catch up in
terms of creating data volumes,
private companies are increasingly
taking over crucial parts
of the role of governments, but
without an explicit mandate,
and without democratic legitimacy,
without accountability
that's proportionate
to the powers that they assume.
So I believe principally,
and I really look forward
to being part of that
debate here at Stanford,
we need a deeper debate about which tasks
need to stay in the hands of the public,
out of the markets.
I think about questions around currency,
defensive but also offensive capabilities,
critical infrastructure, personal data,
identity, including your
own genetic structures.
We need to talk about what
should be in the public,
not in the market.
Now when the internet
was designed and shared,
many had hoped and even hinted
that access to it as such,
would harbor and spread democracy
and others thought that
the internet would actually
technically be ungovernable.
I think we must look at what
we've learned from the promise
of the open internet and where
we are today in practice.
Sure, you will all remember
the famous words by John Perry Barlow
in his declaration of
independence of Cyberspace.
I quote,
"We are forming our own social contract.
"This governance will arise
"according to the conditions
of our world, not yours.
"Our world is different.
"We are creating a world
where anyone, anywhere,
"may express his or her
beliefs, no matter how singular,
"without fear of being coerced
into silence or conformity.
"Your legal concepts of
property, expression,
"identity, movement and
context, do not apply to us."
It was around 20 years ago in Davos.
And I often think back
about the echo of John
Perry Barlow's words as
a bit of a reality check,
when I hear evangelists about
artificial intelligence,
but also blockchain which kind of
is more in the background, now it seems.
But the notion or the suggestion
that there is no time to lose,
or that in a G2 world
a race to AI dominance
will determine geopolitical
relations for decades to come.
Now on the significance
of AI, I don't disagree,
but the question is not
merely who dominates,
but very much the question of
which values and principles
will be underpinning this
kind of power and leadership.
Certainly a race for AI
power must not be an excuse
for a race to the
bottom, where innovation,
efficiency or competition,
trump the safeguarding of public
interest, fair competition,
human rights or democratic principles.
And then this is a question that really
has me thinking a lot these days:
if AI benefits disproportionately
from undemocratic and
centrally governed models,
such as the one we see in China,
but also other parts of the world,
where data can be massively hoovered up
without much restriction
and where human rights are not respected,
and AI in turn will make
that undemocratic
government more powerful,
why do we have such high expectations
of what this technology will bring us,
especially if we don't
have the proper rules,
checks and balances?
And if AI is inherently an accelerator
of top-down control, we
need to look at governance
and regulation even more ambitiously.
I believe that if we want
to preserve democracy,
we need to democratize the way
in which we govern technology itself.
And it's a little ironic, to put it mildly,
that the same companies
that are warning against
the dominance of Chinese
standards are in fact sending data
to Beijing themselves.
I heard Mark Zuckerberg warning lawmakers
on Capitol Hill about China,
as the alternative to Facebook's
proposed Libra currency.
But the company has data
sharing partnerships
with four Chinese
companies including Huawei.
And you cannot imagine, during the 10 years
that I spent in the European Parliament,
how often I heard from tech lobbyists:
do not regulate us, because
otherwise China dot dot dot.
They will use our laws as a
legitimization of theirs.
And we can safely conclude right now that,
that argument has not led
to successful outcomes
for democracy so far.
Inaction to regulate by
democracies has not stopped
Chinese leaders from instrumentalizing tech,
mirroring communist values
and political models.
In fact, the asymmetry in
governance becomes ever larger
when democratic countries
refrain from ensuring a values-
and rules-based framework
that creates benchmarks
to protect principles such
as the freedom of expression,
access to information, non-discrimination,
fair competition,
presumption of innocence.
And when we do not develop a vision
for our relation to developing economies
and trade relations,
as AI also impacts data
flows across the world.
We see China using
technology as an extension
of its governance model,
that is increasingly global.
While the US mainly lets technology
and thus business models
speak for themselves,
except, and I find this interesting
and puzzling sometimes,
when it comes to national security,
which always seems the exception
for Americans, where they do
see a significant role for government.
European privacy laws should
be seen as an intention
to protect people from
government intrusion
as much as overreach by private companies.
Since World War Two, the rules based order
was seen as a key priority
for Western democracies
from trade to human rights,
from democracy to war and peace norms.
And for norms to have meaning,
they need to be enforceable
and violators held to account.
So we need more guarantees,
institutions and processes
than just stated good intentions,
what I used to call
in the European Parliament "scout's honor",
you know, when companies say: we promise,
we're really going to do good.
And I'm not even sure
such explicit intentions
are made anymore.
I don't know if "do no evil"
is still Google's motto.
But it's really about more than promises.
We heard about the need for
redistribution of benefits,
for example, in the first panel.
Now, which government would say:
okay, if that's your
intention, big tech company,
that's fine, go ahead?
Or do we need taxation
and other kinds of
redistribution mechanisms
that apply equally in place?
I would say we do.
But what we can see so far is that,
in part led by the success
of Silicon Valley
businesses and their culture,
the US took a more libertarian approach
and certainly did not seek to
build and safeguard a rules-
based order in the digital sphere,
or let's say, an Internet
whole and at peace.
And we now know that
this hands-off approach
did not break monopolies,
but created new ones,
and empowered not only
individuals but also companies
and dictators, disrupted journalism
and electoral processes,
and did not prevent
the balkanization of the Internet.
It certainly did not nudge China
into following our example.
And I've not mentioned
explicitly inequality,
discrimination, job displacement
and the environmental damage,
which we also have to be
very mindful of, that AI
can put on steroids.
And I'm very glad Joy is here,
and we'll hear from her
later because she will talk
about some of the risks of
exacerbating discrimination.
That can happen when data
that is being put into
the training of algorithms
and AI is biased or flawed,
for many reasons.
But because digitization
often means privatization,
it means that the
outsourcing of governance
to tech companies,
technologies and algorithms
built for profit, efficiency,
competitive advantage,
time spent online, ads sold,
certainly not designed to
safeguard or strengthen democracy,
is a process that's happening.
I believe that the shift to
private and opaque governance
through technological standards,
is one of the most
significant consequences of AI
that we need to shed more light on.
Lawrence Lessig's work "Code is Law"
is in that sense more relevant than ever.
But the reality, and for lawmakers
this is an inconvenient truth,
is that the full impact of the
massive use of tech platforms
and AI remains largely unknown.
Academics, regulators, law
enforcement, lawmakers,
judges, civil society,
journalists and citizens alike
have an information deficit
compared to companies,
even if their impact is public,
both for good, and we've heard
a lot about opportunities,
but also for harm.
And also companies may
look at the same data
through a completely different lens.
And with the aim of achieving
completely different goals
than the stakeholders that I mentioned
may have.
And actually many AI engineers will admit
that no person really knows
anymore where the head and tail
of algorithms are after
endless iterations;
they're excited about
the fact that outcomes
are not predictable.
And I can understand that excitement.
But we can only know what
the unintended outcomes are,
when we know what was
intended in the first place.
When there's transparency of
training data and documentation
of intended outcomes and
variations of algorithms.
And on top of that,
regulators and auditors
as well as other public servants,
will need to get mandates
and capacity for meaningful
access to data and information.
I think the example of Cambridge
Analytica is interesting.
It is often seen as an abuse
of Facebook's platform.
But I believe it actually simply used
the platform the way that was possible
without restrictions on data
collection, micro targeting,
data sharing and the use of
political advertisements.
The same goes for multiple
other disinformation campaigns.
So in assessing all the
possibilities and opportunities
that AI offers, as well
as its potential harm,
we must explicitly look
both at use and abuse,
the intended and the unintended.
But the Cambridge Analytica
scandal anecdotally shows
how huge the accountability gap is.
And we see this with every data breach
or cyber attack over and over again,
because too often, no one
faces meaningful consequences.
Without transparency, there is
no accountability,
and there is a real risk of
disenfranchisement of citizens
who see powerless public
authorities in the face
of very powerful events happening
and impacting their lives.
Now, trade secrets
and other intellectual
property protections
cannot be the perpetual
shield against meaningful
access to information and oversight.
It's a fairly cynical
cycle, where companies claim
that politicians don't know
anything about technology,
so they can only propose
bad regulations and laws,
when in fact the most
important information
is carefully guarded.
This cycle has to be broken.
And if trade secrets stand
between us and scrutiny,
that has to change.
Another argument I often
hear is it's too early
to regulate artificial intelligence.
But at the same time many people agree
that we were too late to
regulate platform companies,
micro-targeting, political
ads, data protection
and privacy online.
And you know, perhaps the
timing is never perfect,
but I would prefer to be
proactive and not wait
until there are further harms
or other effects of AI.
Let's be proactive while we can be.
Now this conference will deal
a lot with ethics frameworks,
and it's a very popular topic.
It's also hard to be
against ethics, I believe,
and that may explain why
there are now around 128
frameworks of AI and
ethics in Europe alone.
But if everything is ethics, nothing is.
And the question is,
who designs and oversees ethics standards?
Who decides what is or who is
an ethically competent leader?
And what happens in cases of breach?
In other words, how do we make sure
that it is meaningful and enforceable,
and not just window
dressing and a distraction.
AI development should
promote fairness and justice,
protect the rights and
interests of stakeholders
and promote equality of opportunity.
AI should promote green developments
and meet the requirements of
environmental friendliness
and resource conservation.
AI system should continuously
improve transparency,
explainability, reliability
and controllability,
and gradually achieve
auditability, supervisability,
traceability and trustworthiness.
I can go on, there's a fairly
long list, and it's interesting,
because I was reading from
principles of AI governance
and responsible AI produced
by the National New Generation
of Artificial Intelligence
Governance Expert Committee
in China.
This was produced on June 17th 2019,
and I want to thank Lorand
Laskai and Graham Webster
from New America, because
they made available
the translation. I think it's worth a read,
because when you read
these ethics principles,
they sound quite accessible and agreeable.
But clearly, they have not quite solved
the challenges between, for
example, this country and China.
And I personally believe
we could do with more focus
on the rule of law than on ethics,
and on empowering the
institutions we have
to perform the tasks of
regulating antitrust,
the handling of personal
data, net neutrality,
the application of media
laws online, consumer rights,
safety and technical standards,
et cetera, et cetera.
Again, we don't have to start from scratch
and this is not about
regulating the Internet
or regulating against big tech companies.
This is about preserving
principles, standards and values
no matter what technological disruption.
It's certainly unrealistic to assume
a sort of broad societal
and political trust
in artificial intelligence,
especially after so much trust
has been lost by tech companies,
and in failed self-regulation efforts.
I think it's difficult for companies
to wanna have it both ways:
on the one hand, making
big promises to customers
or advertisers about the
extraordinary efficiency
with which ads can be
connected to Internet users,
and on the other hand, when you say,
well, we have to start thinking
about the potential harms
of machine learning, to say: oh,
we're not that far yet, AI is
really in its early stages.
One of the things that
continues to puzzle me
is how companies like YouTube or Facebook
can turn over billions
because of the ever more
precise way they handle
all the information gathered,
and yet it doesn't come much further than:
we're very sorry for mistakes we made
and we have a lot of learning to do,
when you ask them about
the various scandals,
and we have too many to tap into here,
to really talk about what happened
and how we can prevent them in the future.
And I've mentioned
Facebook a couple of times,
I know there are a lot of scrutiny,
but just because they're
visibly targeted now
doesn't mean that they're the only company
that is doing these kinds of things.
I frankly think that
this kind of negativity
stands in no proportion to
the power tech companies have,
and with great power should
come great responsibility,
or at least modesty.
Some of the outcomes
of pattern recognition
or machine learning are reason
for such serious concerns
that pauses are justified.
And I don't think that
everything that's possible
should also be put in the
world or into society,
as part of this often
quoted race for dominance,
we need to actually answer
the question collectively,
how much risk are we willing to take?
Here too we don't have
to start from scratch.
In Europe, we have a principle called
the precautionary principle.
And the idea is that, for
example, when you look at GMOs,
or new medicines, and other innovations
where the impact could
potentially be huge,
but the societal risks are still unclear,
that you wait a moment and
research further before it is,
for example, standardized or licensed.
And it's always ridiculed,
especially by Americans,
as unscientific, but
recently it was discovered,
two years after a big
announcement of the success
of a genetically manipulated cow,
that during the gene editing process,
bacteria that also cause
antibiotic resistance
entered into this cow, and it
was found out two years later.
Scientifically discovered, I should say.
So sometimes time helps,
and especially when risk
is significant, time and a bit of pause,
I think, is not ill-advised.
At least, there should be
systematic Impact Assessments,
parallel learning processes
in the public interest
when AI is developed.
For example, when data
cannot be anonymized,
or is very easily re-identifiable,
we should limit the use
until that problem is sorted.
Or if facial recognition
systems are irreconcilable
with the right to privacy,
then there's a legitimate
ground to ban their use,
not only by governments,
but also by companies
because we know how easily
technologies proliferate
and we don't want to create more asymmetry
between governments and companies here.
Now, Rob said it:
the EU has adopted a
number of regulations,
causing some to call it a super regulator,
which, when I was in the midst of it,
didn't always feel that way
when the sausage was being made.
But certainly, I think
it is very, very good
to take the approach when
regulating to see Internet users
not as products or
consumers, but as citizens.
And the General Data Protection
Regulation will hopefully
also lead to, for example,
higher standards of data
for artificial intelligence development,
as well as to data protection.
We have net neutrality,
cyber security laws
that are steps in the right direction,
I was not personally as happy
with the new copyright directive.
And there's multiple challenges
that Europe still faces,
for example, without growth,
it will be very difficult
to actually set standards
on the basis of these agreed
principles and values.
And I think this is where
the EU really has to step up.
Meanwhile, we see developments
where in the US there's a catching up
on the notion of regulation.
San Francisco, interestingly very close
to where these technologies are developed,
has banned facial recognition
as used by the government.
Uber and Lyft drivers can
no longer just be seen
as independent workers.
California has a privacy bill,
and the hearing of Mark Zuckerberg
looked like a serious grilling.
I think it is interesting
why some companies do
and other companies don't
appear before hearings.
But in any case, such hearings,
even though they're quite spectacular,
and it's very important that
lawmakers get the questions
that they want answered,
cannot substitute for regulation and laws.
So concluding, I see a clear momentum now
between the EU and the US, a significant part
of the thankfully democratic
parts of the world,
where we can catch up to
fill the regulatory gaps
for platforms and other digital services,
and anticipate the broader
use of artificial intelligence.
I'm convinced the question is not
whether there will be regulations
but who sets the rules.
And I hope that between the US and the EU,
but with partners like
Japan, hopefully India,
we can build and develop a
democratic governance model
for technology and AI.
And here, I think it's very
clear that tech companies
cannot stay on the fence,
in taking a position in
relation to values and rights.
I personally believe
that a rules based order
serves the public interest,
as well as individual
and collective rights and liberties
that companies benefit from,
but that everybody has a role to play
to also contribute to the common interest
and to strengthen the
resilience of our democracy.
Thank you.
(audience claps)
Thank you.
Thank (murmurs)
(audience cheers)
Wow.
Thank you.
It's okay, thank you.
Thanks, I'm glad if it rings
a bell here with people,
and I hope we can work on it further
as I start here at Stanford.
We have about a short half hour left
for discussion and Q and A.
And I wanted to kick off the
exchange with one question,
which is, you've talked a
lot about the possibilities
of the technology, you've
had a lot of leadership roles
in various capacities and
continue to have leading roles.
And what interests me
also from the perspective
of having served as a politician
and thinking about democracy is,
who do you see as your constituents?
From who do you want
to get a trust mandate,
if we can call it that?
Do you think about the American people
in your work with the
Department of Defense,
or a more global constituency,
or the employees of the companies
that you lead, or customers?
- Well, in the Google context,
the answer has always been
the customer came first.
And the reason that we disagreed
with a number of initiatives
that you mentioned
was that we disagreed, on
principle, that this was in
the best interest of customers, right?
And I can, it's a longer conversation,
but I did this for decades.
And the reality, if you're CEO
or a board member of a corporation,
is you have many constituents
that you have to serve:
the government, which has a
monopoly on regulation
and which you have to follow.
One of the bizarre problems
with running these large
multinationals is that
governments don't agree on
really fundamental things.
And so you end up with
these very complicated
geopolitical things.
So for example, France
will try to pass a law
which will cause censorship
of a certain particular
thing in Google, and they
want it to apply to the world
and then there's a
complicated legal process
where it's decided that it
applies to France or the EU,
and not the US.
- Yeah.
- We saw this, for example,
with the right to be forgotten.
But in any case, you asked
a constituency question.
So you've got your employees,
and employees today are
much, much more active
in the governance of the
company and what they wanna do.
- Yeah, we've seen it.
- And we've seen that quite a bit.
You have shareholders, right?
Who matter a lot.
But it used to be when I
was young as an executive,
the answer was the CEO should
serve the shareholders.
That's clearly no longer correct.
It's now shareholders plus governance
plus consumers plus
partners, plus employees
and the local community.
- All right, thank you for that.
I'm sure there's a lot of
questions in the audience.
You can ask them to either one of us,
and I think there's
people walking around with
microphones if I'm not mistaken.
Yes.
John.
Yeah, she's coming.
- Young girl.
First, I'd like to say thanks to John and Fei-Fei
for organizing all of this.
- Yes.
- Rob didn't work.
- Okay, thank you.
- So I guess this is a question from Richard.
I liked the talk very much.
But one of the things where you came down
at a different spot from
where I would come down
is on the issue of
regulating too soon
versus regulating too late.
It seems to me, I've
always been an advocate for
not trying to regulate before
we fully understand the technology,
or having legislators
that don't understand
the technology writing
the regulations.
I think that's not helpful.
I actually think that the
example of the San Francisco
total banning of the use
of facial recognition
by governments is not a
good example of regulation.
I think there are a lot
of uses that we should allow,
and a lot of uses that we should not allow,
I agree with that.
So it seems to me that the real question
comes down to this:
we want regulations
that are just right, sort of
the Goldilocks regulations.
We want them that prevent the bad things
and allow the good things,
roughly speaking, and we kind of agree
on what are the good things?
What are the bad things?
We're gonna get it wrong, either way,
if we hold off and don't regulate,
or if we regulate too soon,
we're gonna get some things wrong.
It's not gonna be perfect.
Seems to me that the question then is,
well, we will need correction.
And is it easier to correct
regulation that has been written
that is bad, that is over
regulating technology?
Or is it easier to introduce regulation
that prevents abuses
that have appeared and become manifest,
so prevents these bad things later on?
I don't know the answer to that.
But that seems to me to be
a really important question.
It would seem that
adjustment could be made,
we could easily step
in and say, well look,
Facebook or whoever.
You've been doing this and we think,
you now understand the technology,
we don't think we should
allow that anymore.
Now, the problem of course
is the huge pushback
that you get, can the
regulators or legislators,
actually make that happen?
So, here's my question:
what do you think about
where the adjustment
is most easy or most
likely to come about?
- Thanks, this is a very rich question
and let me try to just
touch upon a few points.
One thing is the notion
of getting it right.
You know, the thing is, of
course, nobody will always agree;
there will always be people
who disagree with any law.
And that's the good
thing about a democracy.
Not everybody agrees.
But what I think is a
hugely important momentum
now is also from governments
to innovate the way
they regulate because technology
is to generally developed very fast.
You know, it's clear
that they develop faster
than democratic decision making.
But the question is, where do you start?
Do you start with the technology?
Let's stay with your example
of facial recognition,
and do you then say we
have this technology,
how do we regulate it?
Or do you start with the principle?
Let's say the right to privacy,
and we empower the regulator
to assess whether facial
recognition at your university
or the supermarket violates
your right to privacy,
and that regulator is empowered,
in a way, as the appropriate regulator,
so it could be antitrust for
antitrust cases, it could be
the non-discrimination or
equal-treatment watchdog
or the civil rights
watchdog in any given country.
I think of this as a need
for framework regulation,
where the principles are
anchored very firmly.
And this also allows, for example,
for broader agreement to
be reached between the US
and the EU on free speech.
I mean, we are, compared
to the rest of the world,
very much aligned, but our
laws are very different.
Europeans don't have the First
Amendment, which is really
sacred for many people
in the United States.
We have some exceptions.
How can you come to,
let's say, standards on speech?
Standards on the processes of exceptions
and how they should be dealt with online,
and then empowerment of
regulators, so that even if next year
there's a new technology, or in five years
there's another new technology,
we don't have to rewrite
the laws all the time.
And the empowerment of
the regulators to do such,
I think is challenging enough.
And that's another challenge
that I don't want to leave unmentioned:
when Congress announced it was going to do
an antitrust investigation,
tech companies hired a
whole army of lobbyists,
75% of which, The New York Times discovered,
come from the very offices that
are going to be responsible
for the antitrust work.
So the revolving door, and the question
of how can governments stay up
to speed in terms of knowledge
in a highly competitive space
where the best AI engineers
and experts and
researchers are bought away
with huge salaries that
governments can never pay
is something that also is part
of this information deficit
that I think we have to address,
which in turn is necessary to overcome
this accountability deficit
or accountability gap.
So, these are just a few hints,
we can talk about it longer.
And I hope we'll have a chance to.
Let me go over there for a moment
and then come back to the
front, gentlemen with, yep.
- Good morning, I'm George K.
I'm Principal Research
Scientist at Mozilla.
One of the things I thought
was really interesting
was the parallels between
what both of you said in your talks
and Eric's points earlier around
evening the playing field
and things like that.
And I want to ask you, in particular,
what you thought about
that as applies to data.
We know that we're in this new world,
where data is how we make
new decisions, right?
Eric, your point about
taking the Uber here, right?
And the maps
data that enabled that
is really crucial.
And if you think about the old world
of algorithmic innovation,
the way that you make
new things in the world,
new patents whatever, is you
come up with new algorithms.
But we don't have that anymore.
You don't really come
up with the algorithms.
Everyone's using TensorFlow,
I mean, great job.
It works super, right?
But everyone is using
the data that they have.
And that means that we have a data regime
that reinforces the
existing power structures.
We're in an algorithmic research regime.
There's problems with patents,
and everyone in this room would agree
there's problems with patents,
but with patents, at least,
after some period of time,
that knowledge goes out
in the world, right?
That's the whole point, you have a patent
for a limited period of time.
We don't have that with data.
And so we have a system
now in which that data
is reinforcing the existing hierarchies.
And I'm worried about what
that does for innovation.
Is that something where you
see legislation coming,
issues coming, where we're
actually gonna address that,
or are we stuck in this
world in which the innovation
comes to those who have
the data right now?
- Do you want to come first?
- I want you to answer the
regulatory question.
- Sure?
(audience laughs)
Well, I mean, the notion that
some of these technologies
exacerbate already existing
inequalities of all kinds
within societies, globally speaking,
in the sense of competition, you know,
newcomers able to catch up,
which I briefly mentioned,
I think is a big challenge,
but specifically when it
comes to the public interest
is if I understand your
question correctly,
so the notion that the data
sets may not be perfect,
which we know they're not,
but let's just for the
sake of the example,
assume they're not perfect.
They will then be used
and iterated, and used
and iterated, and used and iterated,
and then how do we know
exactly what's going on
if that information never
becomes public, right?
Similar to the patent discussion,
so this is where I
think the question of...
Maybe parallel tracks of
research in the public interest
and the use of algorithms is legitimate.
So, especially when the
function and the outcomes
impact the public interest significantly,
let's say health care or traffic
or things that are very
much a part of society,
then I think it's absolutely
important that regulators
and others who have the role to safeguard
the public interest as
well and to enforce laws
have access to that information
into those outcomes.
So, what I think will happen
is depending on its use,
in some uses, this may not be required
and in some uses it may,
but the idea that companies
can take over more and
more vital functions
without having accountability
towards the public,
I think is unsustainable as we've seen,
but regulation has to catch up.
- You can imagine a number of data repositories that would be run by the government; they would be opt-in, and that would address some of the questions that you're asking. To me, the most interesting trend is not that data is becoming more valuable, but rather that algorithms are being developed that need less data. So in other words, if you think data is the new oil, your oil may become less valuable over time, as research is showing us that we can learn on much smaller datasets. That's a welcome innovation, and I think it will mitigate some of the concerns that you mentioned.
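To make that trend concrete: transfer learning is one common way to learn from far less data, by starting from a publicly available pretrained model. What follows is a minimal sketch, assuming TensorFlow/Keras and a hypothetical small labeled dataset; it illustrates the general technique, not any company's actual pipeline.

```python
import tensorflow as tf

# Start from a model pretrained on a large public corpus (ImageNet),
# dropping its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the expensively learned features

# Attach a small task-specific head; only these weights get trained.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. two classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A few hundred labeled examples can suffice, because most of the
# representation was already learned from the large public dataset.
# small_train_ds is a hypothetical tf.data.Dataset of (image, label):
# model.fit(small_train_ds, epochs=5)
```

The design point is that the bulk of the learning happened once, on a public corpus, so the marginal data needed for a new task shrinks dramatically.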
- Can I ask a follow-up question? This ability to do more with less data: is that a level playing field? Or is that true only for those who have been able to get to that point on the basis of having had access to a lot of data, like the big companies?
- Well, the stuff is open-sourced, and the models are typically made public. So they're open to anybody who's got a computer. I suppose you need some amount of training compute and so forth to do it, but I think it's pretty open. This technology, which we seem to have forgotten, is largely open source, largely available to everyone in the world. Your discussion is largely about the issues of regulation and companies and so forth. It's a double-edged sword, remember, because this stuff is all coming out, and there are gonna be an awful lot of ethics issues, because there are people that we don't necessarily like, right, in other countries, doing these things, right? So in your model, we would need some agreement over, for example, surveillance, or facial recognition as a better example; we can't agree on even the basics in that area across governments. So it's not obvious to me that we're going to end up with a common agreement on those issues. It's much more likely that some of this technology will be diffused and have negative outcomes. And to me, that's the more concerning ethics issue.
- Yeah, so that's another question that I think is really important. Have you seen, in any of the roles you had with Google, or in what you see now at Alphabet or in the broader markets, cases where a company has an opportunity to roll out in a market where, for example, human rights are violated, and those human rights violations could be exacerbated by the availability of the technology, and the decision is made: we're not going to go into that market, because we don't want to take the risk, we don't wanna be instrumental in that sort of dynamic?
- I shouldn't comment about companies other than the one that I'm familiar with, which is Google. Google has a complicated debate about every product in every country. I'll give you an example: the Innocence of Muslims video. People may remember it from 10 years ago; it was like a disaster. What happened was that this horrific video was uploaded, and then it had a huge impact in a number of countries. And so what we did, a decade ago, is we geo-restricted it, so it wasn't available. So that's an example where you say, oh yes, we love these American principles, but the fact of the matter is that this video, which was essentially a spoof, was causing real damage. And so we reacted. That happens every day in these large companies.
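For the mechanics, geo-restriction is conceptually simple: withhold an item from requests that resolve to a blocked country. Here is a minimal sketch, assuming a hypothetical ip_to_country() resolver such as a GeoIP database lookup; it illustrates the general technique, not Google's actual implementation.

```python
# Per-item blocklists: country codes where a video is withheld.
# The entries below are hypothetical examples.
GEO_BLOCKS = {
    "video-123": {"EG", "LY", "PK"},
}

def ip_to_country(ip: str) -> str:
    """Resolve a client IP to an ISO 3166 country code (stub).

    A real service would query a GeoIP database here.
    """
    raise NotImplementedError

def is_available(video_id: str, client_ip: str) -> bool:
    """Serve the video unless the viewer's country is blocklisted."""
    blocked = GEO_BLOCKS.get(video_id, set())
    return ip_to_country(client_ip) not in blocked
```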
- And then, is the benchmark physical harm? Which I believe, in the case of this example that you mentioned, the Innocence of Muslims, was also about violent protests where people were getting hurt. Or is it also about human rights more broadly?
- It has to do with any form of government policy or concern. In Google's case, we have a whole set of rules about these things. But you'd be surprised how much care goes into these things now, for the reasons that you described.
- And is there also contact
with public authorities on this?
- They call us.
(laughs)
- They still use phones, okay.
(audience laughs and claps)
Any other question?
Yes, sir.
- Ben Shneiderman, University of Maryland. Thank you; those were both exceptionally clear presentations, and thoughtfully nuanced ones. The focus, though, is on the prospect of regulation by government, which is an important force, and doing it right is important. But I'd say the ecological system of independent oversight is much richer. Don't we need to build other structures, for example for retrospective analysis of failures: a National Algorithm Safety Board to fly in and investigate disasters, flash crashes and real aviation crashes, or machine damage, et cetera?
- All right.
- Don't we also need the role of the traditional auditing companies, the PwCs, the KPMGs, the Lloyds? Don't they have a powerful role in influencing the companies through their annual independent audits? And don't the insurance companies play a necessary role, as they did so clearly in the building construction industry, supporting building codes and providing insurance only if demonstrated safety has been addressed? So it seems to me there's a wider ecology that would support, and maybe reduce the fears of, regulation, because others would pick up on the varieties of independent oversight that seem to be necessary to produce the trusted, reliable and safe systems we want.
- But in the last few years, pretty much every corporation I'm familiar with has gone through essentially a computer security audit. And that's typically driven by the insurers and the liability of the board. So I think that's a good example of your point. In this conversation, I sort of forgot that we, collectively as businesses, are subject to the laws of the countries that we operate in. And I can tell you, there's a lot of them, and we're subject to all of them. Every single one of them. And even if they disagree by country, we're subject to all of them in each of the countries. And so I would be careful about building any form of additional regulatory structure that's extra-legal, right? In other words, if you want to do what you suggested, then you'd better write it in a law that applies to everybody and can be codified in law. So the notion of, for example, hey, there was a crash of an algorithm and we should investigate it: that's a pretty big expansion of the government's role. And you might want to think a lot before you make that proposal, or it'll have unintended consequences.
- All right.
- Yeah.
- I think the agencies like the FDA and the FTC are moving towards doing that in a local way, which I think is the more natural way. I don't think a broad agency should have that role; rather, the ones with existing purview (murmurs) to retrospectively analyze that.
- No, but I wanna pick up on something Eric said, which I think is really important, because the agencies, especially in Europe, can typically, in the majority of cases, only assess whether existing laws have been respected. In the US, sometimes the agencies themselves set norms as well; the FDA, I believe, does, et cetera. But so, empowering the agencies of all kinds to reach into the new domains that the technology may create, with their same mandates, right? Public safety, or maybe even aviation, et cetera. That still requires looking against a standard. And then I think there's the sort of post-mortem on damage. I think it could be foreseen that a crash, I mean, a small-scale crash, you know, of an algorithm or an attack or whatnot, that didn't impact many people, could be a different case than negligence in the way software was built which has exposed, you know, the data of hundreds of millions of people, for example when the company was trusted to run, I don't know, voter registers or whatnot.
- So there is a group called the Federal Trade Commission, which largely regulates the tech industry, and which has found most of the companies in violation of one thing or another over the last 20 years that I've been dealing with them. And so I would be careful to first understand what mandate they have and what they're doing before asking for a broader expansion. I would be very careful about additional liabilities, those sorts of things, because of the many possible impacts. The regulatory structure that you describe in Europe, which Europeans like and which is legal and so forth and so on, has the property that it has increased the cost of operation for small companies in Europe, right? This is just a fact, and I know this 'cause my friends in Europe told me. And that's okay, right? That's a deliberate decision. But it's important to understand that all of these things have costs as well as benefits.
- Oh yeah, I fully agree. And I'm curious to see what's gonna happen in both the US and the EU when it comes to liability, for example for content on platforms; I think it's gonna be hotly debated in politics.
- No, but if you look at current liability for content: the vast majority of the social network systems rely on users to upload content. And, shockingly, there are users that upload bad content.
(audience laughs)
Wow! And furthermore, there are people who upload criminal content, which is illegal; that's why it's called criminal. They upload copyrighted content, which is a violation of copyright law. They upload information that's designed to misinform. So, from my perspective, all of those are undesirable. I think it's very difficult to then write a rule that coherently addresses all of those and that then becomes an Algorithmic Inspection Group, et cetera. I think it's a leap, and I'd be very careful. Understand that the companies, YouTube for example, deal with it every day in all sorts of ways. And they're not perfect, but they certainly understand these issues, and they make these decisions dynamically. And they're completely subject to the laws of the countries that they're operating under, and they're well aware of it.
- Yeah, I mean, I think I mostly agree that it's not a good idea to offload all these responsibilities onto the platforms and just say, figure out what is undesirable, illegal, harmful speech and whatnot. But on the other hand, the status quo, I think, is more often than not, not getting it right. I mean, look at the number one recommendations on Amazon when you search for a book on vaccinations; one example I found once, when I wrote an op-ed, was a book called Melanie's Marvelous Measles, which celebrated having measles and was obviously part of the whole anti-vaccination movement. We've seen conspiracies rising to the top of search results as well as other--
- If I may: do you think that the companies offering that like those answers? They probably think those are wrong answers, and they're probably trying to improve their algorithms to reduce that, right?
- Well, maybe, but not all companies are the same. And what a company doesn't like may be disliked for a different reason than what is harmful for society. So, you know, as a lawmaker, and I think that's good practice too, I've never gone with what companies told me they liked or didn't like; I've tried to look at how you create a level playing field,
(audience laughs and claps)
tried to create a level playing field. Because, frankly speaking, and this is to really underline it: there are now companies under enormous scrutiny that all of us know, that we have opinions about, and that we can choose to delete, or whatnot. But there are also massive companies, data brokers to name one example, as well as the creators of hacking and surveillance systems that are being exported to military and intelligence agencies all over the world, whose names we don't know, that are under very little public scrutiny but that are still causing a lot of harm. So I think it's a diverse field, and you happen to know a lot about a company that's very well known. But that's not all that we regulate for.
We have time for one more question.
- Way back?
- Yep. Behind you, I think, the gentleman with his arm up right there?
- Yeah.
- Yeah.
- Thank you both. I'm reminded of, I think, Jeff Goldblum in Jurassic Park: in the tech industry, we've spent all this time, from a scientist-and-research perspective, thinking about whether we can, not whether or not we should. And it seems that there's a collision as well with society, between the move-fast-and-break-things mentality and, from the tech industry, from, I would say, a capital markets perspective, the idea that it's all about whoever's fastest wins. How do you get to market fastest? The economics trump everything: growth at all costs. And from the corporate perspective, and this includes startups as well: startups are just trying to survive, they're trying to get to the next level. But once they get to the next level, then it's, hey, how do you grow fast enough? How do you gain market share? And how do you become the category leader? How do we balance that? How do we set the guardrails in place so that it isn't this growth at all costs, it isn't this move fast and break things? Just because we can doesn't mean that we should, because of the unintended consequences; technologists especially are generally not out there thinking about how the bad guys might deploy technologies. And harking back to Brad Smith's book Tools and Weapons: you can use a broom to sweep, or you can use a broom to hit someone over the head. So can you comment on that front, in terms of this collision, I would say, with the capital markets system of we've-got-to-move-fast?
- Well, so my answer is that you're describing companies that are not going to become great companies, if those are the principles that they operate under. Certainly you want to grow, but you have to grow responsibly and you have to do the right thing. Google certainly has had a belief system inside of it for a long time. You can debate it, but internally we understand it, we can express it, and we can express why our decisions have followed from that set of principles. And there are very much cases where we would make decisions that were not about absolute growth. A typical example: when I was CEO, we would have some complicated ad-system change, and we would give some of it to quality, some of it to revenue, right? We knew, we understood these balances, and we had the authority and the ability to build a greater company as a result, greater in terms of impact and scale and values. So the scenario that you described, startups that are sort of out of control for growth, that's just bad governance. Right? That's a bad CEO or a bad board or whatever; you're not gonna create great value with that approach.
- Very briefly, because we're out of time. But I think the question of growth at all costs is really being scrutinized everywhere. I think a lot of the talent that the tech companies want to attract are looking for more value than the value of money. They also see the homeless people in the streets in the areas where they live, even if they can afford homes themselves. And you know, with a lot of focus from the young generation on the environment, for example, the cost of growth for the earth is also a factor that is becoming more prominent. So, these are not static concepts, is what I wanted to end with. Where is the limit? Which laws do we think are legitimate, and who gets to decide? These are exercises that a society carries out by participating in democracy: changing laws if they don't work, updating them when necessary, creating new ones when needed. So I also hope that every one of you feels empowered to do something about the change you wanna see in the world. So thanks, all.
(audience claps)
- Thank you.
