>> From the Library of
Congress in Washington D.C.
>> John Haskell: Well, thank you all for coming. I know you're busy, and I know some of you serve Congress, which is still in session.
So we're glad to
have everybody here
from the Library and elsewhere.
And I'm John Haskell, director
of the Kluge Center
here at the Library.
And we're co-sponsoring
this event
with national digital
initiatives.
That's a group whose mission is to promote the Library's digital presence here in the US as well as around the world. Abby Potter and, I think, Kate Swart [phonetic] are associated with that.
Abby is the head of NDI,
National Digital Initiatives.
Is Abby going to raise her hand?
There she is.
If you want to know more about
it, just give her a shout.
Not during this event, though.
Let me tell you a little bit about the Kluge Center, since many of you probably don't know much about it.
I'll keep it to about a sentence, and I'm going to steal from our charter, which says that the Kluge Center's mission is to reinvigorate the connection between thought and action. The idea being to connect scholars with policymakers, which is one of the things we're doing today with Martin. And to be part of the conversation about addressing the challenges that democracies are facing here in the 21st century.
So that's what we're about.
Fundamentally, we have
scholars in residence
like Martin Hilbert who's a
distinguished visiting scholar
with us.
And he's a recidivist. This is his second go-round at the Kluge Center, and he's actually studying the Library this time around.
And we'll get more into that.
Martin is on the
faculty at the University
of California at Davis.
Let me tell you a little bit
more about him before I launch
into asking him difficult
questions.
First of all, he has two PhDs. It wasn't enough to have one. The German PhD is in economics and social sciences from the Friedrich-Alexander University. He got that in 2006.
And then he has one
in communication
from the University of
Southern California.
And that was at the Annenberg
School of Communication there.
Before joining UC
Davis, Martin created
and coordinated the
Information Society Program
of the UN Regional Commission
for Latin America
and the Caribbean.
In his 15 years as United
Nations economics officer,
he performed hands-on technical
assistance in the field
of digital development
to presidents,
other government experts,
legislators, diplomats,
NGOs and companies
in over 20 countries.
Not only is he fluent
in German, he's not bad
at English, as you'll see.
And what, three or four
other languages, I think.
To start off this conversation,
by the way, we'll leave plenty
of time for questions
at the end.
But to start off the
conversation, I'm going to quote
from how Martin describes
himself
when you fish around on the web.
He says, I am pursuing a
multidisciplinary approach
to understanding the role of
information, communication
and knowledge in the development
of complex socio-technological
systems.
He also claims to be a really good translator, so we're going to start with that.
What does it mean --
>> Martin Hilbert: To
German or to Python?
>> John Haskell:
We're hoping English.
>> Martin Hilbert: Okay. The translator. What does the sentence mean?
>> John Haskell: Yeah, what
does the sentence mean?
>> Martin Hilbert: Well, for me, yeah, both of them go very much together. I mean, the fact that it's complex systems or a multidisciplinary approach is kind of like the same thing, right. So for me, I see myself as a social scientist. And there are not really a lot of boundaries for me.
So I'm interested in how digital technology changes society. Whether it's how society satisfies its needs, which is economics; how society governs itself, political science; or the little quirks that develop, you know, anthropology, sociology. And that's where the complex systems approach comes in: the idea that there's an underlying mechanism that creates these emergent phenomena on a higher level.
>> John Haskell: Okay.
>> Martin Hilbert: This translator thing is more because I came from the United Nations; I've only been in academia for three or four years now. Before that, I was at the secretariat there for 15 years. And yeah, I did research at the UN Secretariat as well. It was quite a successful career.
At the end I had lifelong employment, with lifelong global diplomatic immunity, what they call golden handcuffs, right. So I found a little way out. I stepped out of that and retired early, in my mid-30s, and joined the University of California.
I mean, that's, you know, the most complete, most comprehensive tertiary educational system in the world. And it's a really nice playground, because I wanted to be part of this revolution, you know, the digital revolution, and to be closer to it, which in California you certainly are.
>> John Haskell: There's
probably a lot of people
in that system that think they
have golden handcuffs too.
>> Martin Hilbert: Yes.
>> John Haskell: So let's get into the substance, and we're going to start real simple, which suits me. I want you to define big data, just so we make sure we know exactly what we're talking about, particularly insofar as it's big data about us.
>> Martin Hilbert: Right.
>> John Haskell: You
know, that affects us
as citizens and as individuals.
>> Martin Hilbert: Yeah, so big data. For the social sciences, as far as it concerns us, I think the term is not very lucky. The data doesn't always have to be big in order to make a difference in the phenomena that we're talking about.
It's basically, for all the
social science purposes,
you can just replace the
word with digital footprint.
And then if you do that,
you know much better
what it's about.
And there are some other characteristics, but let's maybe just start with the digital footprint. Maybe I can pull my computer up.
If you have it here, or on your phone as well, and you have a Google account logged in, you can just Google for timeline, and then your Google timeline will come up.
If you didn't change
the default settings,
which I guess nobody
did really, right.
Anyone looked at that stuff?
No. All right.
So then you can see here exactly where you've been during the last three years. This is where I have been during the last three years, right.
So you can see here, I've
been quite all over the place.
You can also zoom in, go very closely, and look at a particular date. I don't know, let's take November of 2014.
Right. And you can
see here where I was.
I have no idea where I was.
I was in London.
Check that out.
And you can see exactly where you went and where you walked. You can even have a little animation and see where you were. You leave this digital footprint behind because basically you have this tracker, right, in your pocket, and it tracks you. And even if you don't remember, the digital footprint remembers.
For any practical purposes,
if you're not absolutely sure,
don't share this trick with
your significant other.
[Inaudible].
>> John Haskell: I noticed you were in South Beach there.
>> Martin Hilbert: So, yeah, that was a diversion. So that's the digital footprint. There are some other characteristics. Sometimes big data is described with the V's, the three or four V's that it has. So there's volume, there's... I'm not so good with the V's. I'm just going to give you four or five characteristics. We can turn this off again. The four or five characteristics: one is the digital footprint.
The other one is that the digital footprint is always messy. It's never complete, right. It's not like a traditional data form that may be complete, where every row and every column is filled out. So there comes this technique called data fusion, which is very characteristic for working with these kinds of digital footprints.
We have different sources that we basically mix together, and the main technology that drove all of this, Hadoop, is basically based on that as well: it processes the data decentralized and then brings it together again. So we can really nicely mix different sources. Which we need to.
Because not everybody is
going to be on Facebook.
And not everybody is going
to even be on Twitter.
But then you're going to have a credit card, or you're going to have something else; somehow we're going to get you. And then we try to fill out all these different holes and complement the different data sources.
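The data-fusion idea described here can be sketched in a few lines of code. This is a minimal illustration with invented records, identifiers, and fields, not the Hadoop machinery itself:

```python
# Minimal sketch of "data fusion": merging incomplete records from several
# sources into one fuller profile, keyed on a shared identifier.
# All records, IDs, and field names here are invented for illustration.

def fuse(*sources):
    """Merge per-person records; later sources fill gaps left by earlier ones."""
    fused = {}
    for source in sources:
        for person_id, record in source.items():
            profile = fused.setdefault(person_id, {})
            for field, value in record.items():
                profile.setdefault(field, value)  # keep the first value seen
    return fused

# Two partial, overlapping sources: neither covers everyone or every field.
social = {"u1": {"age": 34, "city": "DC"}}
credit = {"u1": {"city": "Washington", "score": 700}, "u2": {"score": 650}}

profiles = fuse(social, credit)
```

Each source fills in the holes the others leave, which is exactly the "patching together" of incomplete footprints described in the talk.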
>> John Haskell: So if you had to put it in a capsule, to distill it: what are the various ways it's used that might affect, say, any individual's life?
>> Martin Hilbert: So data fusion, for example, gives us much better predictions. You always had data about this stuff, for example, the FICO score. If you wanted to get credit, right, you had the FICO score, and you were always told: get a credit card, you have to build up your FICO score, and whatever.
Nowadays, we use this data fusion and mix it with many other indicators. For example, how you pay your cellphone bill, and even the way you move the mouse when you go over the bank's webpage, right, has very high predictive power for whether you're going to pay your loan back or not. And with that, they can predict default rates 30% better. That means they can offer credit 30% cheaper, right. So that's a big gain, from complementing these different sources.
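The idea of blending a traditional score with alternative footprint signals can be sketched as a simple weighted model. The features, weights, and bias below are invented for illustration; this is not any real scoring model:

```python
import math

# Hedged sketch: blend a normalized traditional credit score with
# alternative "digital footprint" signals (phone-bill punctuality, mouse
# behavior) into one default-risk estimate. All values are invented.

WEIGHTS = {"fico_norm": -3.0, "late_phone_bills": 1.5, "erratic_mouse": 0.8}
BIAS = 0.5

def default_probability(features):
    """Logistic combination of weighted risk signals (higher = riskier)."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

good = default_probability({"fico_norm": 0.9, "late_phone_bills": 0, "erratic_mouse": 0})
risky = default_probability({"fico_norm": 0.4, "late_phone_bills": 2, "erratic_mouse": 1})
```

The extra signals move the estimate in a way the FICO score alone could not, which is the gain from data fusion that the talk describes.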
Another characteristic of big data is that it's often in real time. So for example, when you call a call center, the call center of your choice, you often hear this message, right: this call might be recorded for quality and blah, blah, blah.
And you always think it's
the head of human resources
that listens in and makes
sure you're treated right.
No, it's not. In most call centers nowadays, it's up to 10 million algorithms that actually listen to you while you talk. And from the way you talk, they classify your personality into five different rubrics, whether you're actions-driven, emotions-driven and so forth, and then they match you with somebody on the other side of the call who has the same personality as you, right.
So if you're kind of emotions-driven, they match you with somebody emotions-driven. The people on the other side don't even know about it. You know, they don't know about that. They also just have the person already classified.
And if you're actions-driven, for example, I am very actions-driven, right. When I call in, I just want my coupon. I don't have a lot of time, right. They really messed up for two days, and I want them to correct my bill or give me something, right. So if I get somebody who's also actions-driven, that's great, you know.
We understand each other.
My wife, on the other hand,
she's very emotions-driven.
She just wants to be
really understood, right.
So when she calls in, I'm just like: so what did they say at the call center? Did they give us the money back or something? And she would say: no, you know what? No, no, he said he couldn't help us. But you know what? He really understood me. Like, you know, I told him everything: with the kids, two days without a cellphone, the doctor, how hard it was. And he really... great company. Really great, they really understand us, right.
So imagine if I had ended up with a person like that, right. It would have been really --
>> John Haskell: Aggravating.
>> Martin Hilbert: Yeah.
So, studies show that it
reduces call duration by half
and doubles customer
satisfaction.
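The routing idea in this anecdote can be sketched simply: once callers and agents carry a personality label, match each caller with an agent of the same label. The names, labels, and queue below are invented for illustration:

```python
# Hedged sketch of personality-based call routing. Agent names, labels,
# and the queue structure are invented; real systems classify from speech
# in real time, which is not shown here.

def route_call(caller_personality, available_agents):
    """Return the first agent whose label matches; fall back to any agent."""
    for agent, personality in available_agents:
        if personality == caller_personality:
            return agent
    return available_agents[0][0] if available_agents else None

agents = [("alice", "emotions-driven"), ("bob", "actions-driven")]
matched = route_call("actions-driven", agents)
```

The hard part in practice is the classifier that produces the labels from speech; the matching step itself is this simple.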
>> John Haskell: AT&T
always gives me somebody
with a southern accent.
And I don't know.
>> Martin Hilbert: Maybe
there's something there.
So the idea is that, you know, we can also adjust to that in real time if we do this data manipulation in real time. And all of that is often done with machine learning.
So these are the characteristics, you know. The digital footprint, which we leave behind basically for free. Data fusion, the complementary sources that we can bring together. The idea of having it often in real time. And then doing it with machine learning: just throwing machines at it and looking for patterns that we don't even understand, but the machines can pick up.
So machines just from
the way you talk can pick
up your personality, right.
By hand it would
be very difficult,
but they discover
these patterns.
>> John Haskell: So to pivot to politics and elections, dive right in. Some of us can remember back when there were different ways that politicians and their consultants would try to manipulate and influence us to vote for them.
In the contemporary era, I think
at least some people would say,
and maybe this isn't accurate,
and you can straighten me out,
but the Obama campaign in
'08 made great advances,
perhaps by plumbing our
social media habits.
I'm not sure.
But I'm curious. Clearly, in the last ten years or so there have been great advances in campaigns' efforts to influence us. So what kind of advances have been made since then? What's going on now in campaigns to try to influence us that's much more advanced than, let's say, the Obama for America campaign was ten years ago?
>> Martin Hilbert: Yeah, I think in two ways. One is it got a little bit deeper, meaning we've got more indicators and can make better predictions, and the data is more fine-grained. And second, the samples are more [inaudible].
So to pitch the Obama campaign against the election two years ago, in 2016: what Obama did there, he spent a billion dollars, right, to set up this project called Project [inaudible]. And the idea was to classify 16 million swing voters. With these 16 million swing voters, they did the data fusion, so they took everything from voter registers to TV set-top boxes, whatever they could get their hands on, right, put it together, and created this database, kind of patching together these 16 million.
They ran models, and with that, they pitched tailor-made messages to the 16 million, right.
So for example, on Facebook, just to give you an example of one of the ways this is done: in political campaigns it's very easy compared to marketing. Marketing is much more difficult, because in marketing you want people to buy one product. A politician has, like, 80 campaign promises, right. So it's very easy. Of the 80, it might be that you don't agree with 78 of them, but there will be two you do agree with, right.
So it's very easy to pitch, because they just see what your interests are, and then they pick the two you're in agreement with. Then you create these filter bubbles, right, that's the technical term, where you always show them just these two messages.
Or actually, one of your friends might have clicked on the Obama campaign, right. A post on their Facebook page, not even a message from the campaign, but, like, a New York Times article that talks about how Obama is the hero of whatever, green energy. And after three months of seeing that, you think: I don't like Obama, but he really seems... all my friends say this and that. And then, you know, you start to see the messages that agree with you through these filter bubbles and echo chambers. Your friends kind of talk about it, so there's a combined effect of the bubbles and echo chambers.
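The pick-the-two-promises idea can be sketched as a small ranking step. The promises and interest tags below are invented for illustration:

```python
# Hedged sketch of the targeting idea: out of many campaign promises, show
# each user only the few that overlap with their inferred interests.
# Promise texts and interest tags are invented.

def pick_messages(promises, user_interests, k=2):
    """Rank promises by interest overlap and keep the top k that overlap."""
    scored = sorted(promises, key=lambda p: -len(p["tags"] & user_interests))
    return [p["text"] for p in scored[:k] if p["tags"] & user_interests]

promises = [
    {"text": "green energy", "tags": {"environment"}},
    {"text": "lower taxes", "tags": {"economy"}},
    {"text": "gun rights", "tags": {"guns"}},
]
shown = pick_messages(promises, {"environment", "economy"})
```

Repeating only the matching messages is what builds the filter bubble the talk describes: the other 78 promises simply never appear.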
So what that created, what they achieved, is they changed the opinion of 78% of the voters they targeted, right. Which is big. I mean, almost 80% of the voters they targeted, they changed.
Now, what happened in the years since then? First of all, the coverage got bigger. Instead of going for 16 million, the companies now really go for the 240 million, right. So they have a profile of everyone. Or take Facebook. As you heard from Mr. Zuckerberg in Congress two weeks ago, right, the famous shadow profiles: he goes for a profile of everybody on planet Earth. There are these two billion people who are on Facebook, but he potentially has a profile for everybody on planet Earth, so 7-point-something billion, right. So that's what they are going for.
So you have much better coverage, first of all. And second of all, the information we have got deeper: machine learning picked up some patterns that allow us to make better predictions, for example, the famous psychological profiles that Cambridge Analytica supposedly worked with.
What that basically is: it came from a study that a researcher in Cambridge did, a researcher called Kosinski, and his team. It was a great study. Basically, what they did was give you a little survey on Facebook that said: test your personality in less than five minutes.
All right.
And then it said: sponsored by the University of Cambridge. You're like, great, you know. Tens of millions of people participated and filled out the survey, and then they knew whether they were extroverted, introverted or whatever. In between, you know, when you scroll down, in the stuff you never read, right, the terms of agreement, it also gave them permission to scrape their whole Facebook history.
So okay, now these researchers had the Facebook history and the psychological profile, because you gave them both. Then they had a machine learning algorithm learn from that, and the question was: how many Facebook likes do you need in order to predict the personality of a person? It's a machine learning problem, right. They have the personality. They have the Facebook history, and the question is just how many likes you need.
It turned out that with 100 likes, you could predict the personality extremely accurately.
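The likes-to-personality idea can be sketched as an evidence-summing classifier: each page like carries a learned weight toward a trait, and the weights accumulate over a user's likes. The likes and weights below are invented, not taken from the Kosinski study:

```python
# Hedged sketch of predicting a trait from page likes. The weights here
# are invented; in the real study they were learned from tens of millions
# of survey-labeled Facebook histories.

LIKE_WEIGHTS = {
    "skydiving": {"extravert": 2.0},
    "parties": {"extravert": 1.0},
    "poetry": {"introvert": 1.5},
    "chess": {"introvert": 1.0},
}

def predict_trait(likes):
    """Sum per-like evidence and return the trait with the higher score."""
    scores = {"introvert": 0.0, "extravert": 0.0}
    for like in likes:
        for trait, weight in LIKE_WEIGHTS.get(like, {}).items():
            scores[trait] += weight
    return max(scores, key=scores.get)

trait = predict_trait(["skydiving", "parties", "chess"])
```

The point of the study was how few likes such accumulated evidence needs before it outperforms the people who know you best.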
And then they asked the significant other, right. So your spouse, or your mother, or your father, or your siblings, right. Your family, your best friend. Fill out this personality profile in the name of that person. All right, I mean, your mother should know you, right. And it turns out that with 150 likes, the algorithm was better at predicting your personality than your mother, right.
And then they asked you yourself: what do you think? Are you introverted, extraverted, what do you think about your personality? It turns out that with 200, 250 likes, the algorithm was better than you yourself at predicting your personality, right.
And that was a big study. Kosinski got a tenure-track position at Stanford for that, so he moved from Cambridge to Stanford and went to California. And then this method was around, and some other people in Cambridge, unlinked to Kosinski, right, started to work with it, and that's where this company came from, Cambridge Analytica.
And they tried to
do the same thing.
Now, it's unclear how successful
they were in that actually,
in doing the psychological
profile.
But we know there have been some attempts, for example, during the campaign between Clinton and Trump. We know that Trump sent out some kind of message, for example, I think: I defend the right to bear arms, or whatever.
And they sent 175,000 different
versions of this sentence out.
So the sentence is the same, but you kind of personalize it according to people's fears, because that's the easiest thing, right. That's what hits home most.
So if there is a single mother, they would pitch this message with a picture of a burglar, right. And they could even show you the burglar in a house that is close by, which kind of subconsciously rings a bell, right. And if there's a sporty father with three sons, they will pitch this message with somebody who is hunting, right.
So they had 175,000
different versions,
and the idea is potentially you
have tailor-made messages per
person and can send that out.
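How one slogan becomes many versions can be sketched by combining the fixed sentence with per-segment options. The imagery, tone, and framing lists and the profile rule below are invented; the transcript mentions 175,000 real versions:

```python
from itertools import product

# Hedged sketch of slogan micro-targeting: the sentence stays fixed while
# imagery, tone, and framing vary per audience segment. All option lists
# and the profile-matching rule are invented for illustration.

SLOGAN = "I defend the right to bear arms."
images = ["burglar_at_night", "father_hunting", "shooting_range"]
tones = ["fear", "tradition", "freedom"]
framings = ["family_safety", "constitutional_right"]

variants = [
    {"text": SLOGAN, "image": i, "tone": t, "framing": f}
    for i, t, f in product(images, tones, framings)
]

def variant_for(profile):
    """Pick a variant for a (hypothetical) audience profile."""
    if profile == "single_mother":
        return next(v for v in variants if v["image"] == "burglar_at_night")
    if profile == "sporty_father":
        return next(v for v in variants if v["image"] == "father_hunting")
    return variants[0]
```

Because the options multiply, a handful of dimensions with a few choices each quickly yields tens of thousands of versions.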
>> John Haskell: Right, because, you know, I did a lot of research on consultants who were working in the 70s. And they had it broken down, at best, into 480 categories of people, and you tried to pitch that way. So there's --
>> Martin Hilbert: Right.
>> John Haskell: It would be interesting to know whether you could measure exactly how much better you're doing. But I'm sure it's a measurable amount better when you can be that refined.
>> Martin Hilbert: Yeah.
>> John Haskell: So a lot of what you're talking about isn't just figuring out who we are through the digital footprint, but then coming back at us. You've got this recursive model where they're coming back at us and hitting us with content.
>> Martin Hilbert: Right.
>> John Haskell: And
maybe even news, right,
to try to have an impact
on our vote in this case.
We'll talk about
institutions in a second,
but our vote in this case.
Is that a good way
to think about it?
>> Martin Hilbert:
Yeah, yeah, absolutely.
And that's basically what these social media companies do, right. To keep with this topic of Cambridge Analytica, since it's fresh on people's minds: people get very upset that they got, I don't know, 50 million or whatever, 80 million Facebook profiles. It doesn't really matter if Cambridge Analytica has 50 million, 80 million, 100 million. That's not even the discussion, you know.
Facebook has two billion profiles and does exactly the same thing, you know. The Trump campaign spent $70 million on Facebook doing the same thing officially, right. So you don't need Cambridge Analytica for that. Facebook was actually doing the election; that's what Facebook does, right. It gets to know you, and it's a commercial company, and it tries to sell you the ads.
And the clients can also be political parties. So actually, they set up a team during the election and went to the presidential candidates. All presidential campaigns, also from the primaries, spent $1.4 billion on these kinds of social media ads. So that's a very lucrative market, right. They sent out specific teams that went to the parties and to the candidates and said: we're going to help you.
The Trump campaign was a little less organized. That's why they spent $70 million on it and put out something like six million ads. The Clinton campaign said: no, no, we are covered, we'll do that ourselves. They only put 60,000 ads out, and, well, people say you know what happened.
So it's not that Facebook went to Trump. No, they just go to everyone who wants to do business with them, right. It's a very lucrative market, right, $70 million. And then they did exactly the same thing.
Now the question is: what's the difference if Cambridge Analytica does it or Facebook does it? There's absolutely no difference, right. The question is rather: this kind of model that was developed for marketing, you know, commercial marketing, should we allow it or not for political campaigns? Political campaigns in many countries are regulated; in this case, on social media, they are not regulated, right.
On TV, also here in this country, you have to say whether it's a political ad and who sponsored it. On social media, nobody even has to tell you. Nothing like this happens. And they just went in and made a whole lot of money; much more money is made there.
Even in the Obama 2012 campaign, more money was already spent on social media campaigning than on TV campaigning, right. More was spent on big data than on all of TV, and it's completely unregulated.
So the question is not whether it's Cambridge Analytica or not. The question is whether we want social media, with that kind of granularity of information about us, to get into the business of democratic campaigning.
>> John Haskell: So if you stipulate that as citizens we're dependent on some form of media, forms of media, to get the information to make decisions in voting, or to write a letter to a congressman, or whatever: how radical is the change in the way we're getting news, compared to however many years ago?
You know, because I can tell you
that my dad just got it
from Walter Cronkite.
Maybe a lot of you
haven't heard of him.
He was CBS News.
And that was it. You know, maybe he read a Cleveland newspaper, which limits what you can get out of that. And that was it.
But today, obviously,
that's an extreme.
But that isn't that many
years ago, you know,
in the greater sweep of things.
So how radical is the change
now in terms of where any
of us might be receiving news
that would have an impact
on our decision making
in politics.
>> Martin Hilbert: It's still changing a lot, even over the last year. Last year, 60% of Americans received news through social media, and this year it was 70%, right.
>> John Haskell: Is it
that they're getting
most of their news?
>> Martin Hilbert: Getting most of their news from there, that's about 50%. So the other 20% say not so much.
But you know, one glimpse of something is enough to change an opinion, right. Even if it's not much, you cannot get this image out of your head, or this video out of your head. The video might be fake or not fake, you know, it might be made by AI. But you cannot get it out of your head. So even if you don't do it a lot, it still has a lot of influence, right.
So 70%, and it's
still increasing.
And especially last year, we saw a big increase among people over 50 years of age, they increased a lot, and among nonwhites. For nonwhites it's over 75% that get their news on social media. And the less educated: for those without a college degree it is also still increasing. For those with a bachelor's degree or above, that means with a postsecondary degree, it was decreasing slightly, down to 62% or something. But it's all in the same range.
>> John Haskell: So it's not
that they're just
getting some news,
a lot of people are
getting most of their news.
>> Martin Hilbert: Yeah,
I think 50% get most
of the news from there, yeah.
>> John Haskell:
And that's going
to differ you're saying based
on kind of a socioeconomic --
>> Martin Hilbert: Based on socioeconomic factors, even so, I mean, as I said, these are small differences. It means that if you're highly educated and white, it's like 60% of you. And if you're nonwhite and less educated, it's 70%. But it's still in this range. It's not such a significant difference; it's a really mass phenomenon.
Now, it comes through different channels, and that actually makes more of a difference. The average American uses about eight social media platforms a day, right. And some of them are much more news-focused: Twitter, of course, right, the social media of the president. Then Facebook and Reddit. These are very political. For a lot of people, say 60, 70% of what's happening there is political. But even on others, 20, 30, 40% of what's on the other social media is news.
For example, YouTube made a big effort recently to bring up the news. And Instagram, Snapchat, Tumblr as well. Snapchat made a big move in recent months getting CNN and others involved; you know, they have special news shows on there. But even WhatsApp: I mean, 20% of people say that yes, they often get some kind of news from WhatsApp, right.
So you get it through all different kinds of channels. But it varies within these different social media; yeah, WhatsApp is maybe 80% conversation, 20% news.
>> John Haskell: So, have scholars gotten to the point where they're making judgments? I mean, clearly, we're being influenced through our, you know, digital footprint in a thoroughgoing way. Or people are trying to, at least.
>> Martin Hilbert: Right.
>> John Haskell: So have
scholars made any judgments
about whether that's
affecting how we're governed?
Or in what ways it's
affecting how we're governed.
>> Martin Hilbert: Yeah, I think it affects it in many different subtle ways. And it's difficult to actually point to any one thing. I mean, we saw that clearly in the aftermath of the 2016 election, even though we still don't really understand what's going on. But to put the answer in a nutshell, I think it's really like this.
In 500 years, you know, historians will come back to planet Earth and look at what happened during these decades when we digitalized the world's information stockpile in a generation, which is our generation, right.
They will find that it
profoundly changed the economy
and healthcare and
education and whatever.
But I think the most
profound change they will find
in retrospect has been the way
that we're governing ourselves.
So, to answer your
question, yes,
I think the most profound way.
But I think you can only see
it from this bird's eye view.
Because it's like we're a
part of this process, right.
So I can go in, and I
can digitalize a company,
or I can digitalize a
school or a university.
Or I can digitalize even
an entire government.
And I've been involved
in these projects.
It's basically a
digitalization project.
That's okay.
But digitalizing, let's say, you know, society and how it forms its [inaudible], we are part of that. It's not like a project, right. It's a process in the making, and we are part of this thing, right.
So we see some things that are happening there, and we see some things that actually go wrong. One recent thing that many people pointed to is exactly this, what's happening with Facebook. And that's why Mr. Zuckerberg was invited to Congress, to the Congress hearing, right.
It's basically these two
companies, Facebook and Google,
which are the two big
elephants in the room.
At the beginning of digitalization, there was almost a left-wing, socialist vision that they had to make information free for everybody, all right. So the idea and the vision in Silicon Valley, I mean, the entire discourse there is very much moved to the left. The idea was: we make this information free for everybody, and we have this brave new world where everybody can get it for free. And that was a great ambition. And yeah, you can use Google Maps and whatever for free, and WhatsApp, it's all for free. And Facebook, it's all for free.
And they implemented that. But at a cost: they made a pact with the devil, right. They sold themselves to advertising. So actually, they had this almost left-wing idea, and they created the tightest capitalistic machinery history has ever seen, right.
So everything that's going on on
these two platforms especially,
Facebook and Google, is
mediated by some kind
of commercial interest.
That is very different from back in the day, when we paid our monthly fixed-line bill. I was talking to you over the phone, and nobody was in between, right. Nowadays, if I talk to you over Facebook or over Google, there are countless commercial interests in between, which absolutely distort the message, right.
So by creating these communication platforms, they put the devil in the middle, kind of, right. Many people ask themselves: wouldn't it be better if we went back, right? We just pay a monthly fee. Some other companies did that, like Netflix, and Amazon with its Prime services. They've taken this approach of going back to how it was in the day: you just pay a monthly fee. And, you know, we're still going to do marketing, but we don't depend 100% on it.
Whereas Facebook and Google are so backed into a corner because they depend 100% on it. They don't have an Amazon Prime to fall back on, or whatever, you know.
>> John Haskell: So you started this with the whole sci-fi image of looking back from 500 years in the future. So put on your futurist hat, if you'd like.
>> Martin Hilbert: Oh my.
>> John Haskell: You know,
where might this be headed,
particularly when you think
about, you know, the advances
in artificial intelligence, in
terms of how we're governed.
>> Martin Hilbert: Yeah,
that's a big question.
It's difficult, it's
very difficult to answer.
Let me put that in perspective,
why it is so difficult
to answer.
It's because we don't
understand the technology yet.
Which is absolutely normal, that we don't understand it yet. It's always been like that.
So people say like oh, we don't
understand neural networks.
We never understood
the technology
that we were dealing
with, right.
But the end result is always that we made much quicker advances than we had hoped for, even in our [inaudible].
Let me give you a few examples, right.
So, take the industrial
revolution,
all technological
revolutions work like this.
That's why all the
theories are important.
So technological innovation
theory is very important
if you're in this field
because the fundamental theory
of innovation has not changed.
So, thermodynamics, right, the
first industrial revolution
or the second, depends
on how you count it.
When Carnot studied steam engines, right, trains were already running. We had no idea how that actually worked.
The equations of thermodynamics that Boltzmann wrote down, they came 50 years later, right.
The same with electricity. Faraday built the first electric motor, and electric motors were already among us before Maxwell wrote down the electromagnetic equations. That also came 50 years later.
Or take the Wright brothers. They flew the first time for 100 feet, nearly killing themselves. That's nothing. That's like a long jump, you know, like 30 meters.
We had no idea what
flying actually was.
We always thought it has to do with feathers, right. At least with flapping your wings, because everything we saw that flew had feathers or was flapping its wings.
And we always thought like wow,
that's how biology
came up with it.
That's what it has to do.
And since da Vinci we
had this confusion.
Then when the Wright brothers built the first flying machines, we understood, like, oh wow, it has to do with the curvature of the wing.
It kind of like sucks
you up, you know.
And then we developed
aerodynamics much later.
And 60 years later we
flew to the moon, right.
So that's, so we don't
understand what we're doing
when we have these
new technologies.
It's always been like that.
But we make these huge jumps,
and we create all
these different kinds
of alternatives then, right.
So not only did we discover it has nothing to do with feathers, we built helicopters that have nothing to do with [inaudible] blades, even though they do the same, they can stand in the air. Drones, satellites, rockets, jet planes, you know.
And right now we're doing the
same thing with intelligence.
So, kind of like this
information process
that Mother Nature came up with,
it's kind of like the feathers.
It's kind of like
the birds, right.
It's one solution to the
problem of intelligence.
Just like birds are one solution to the problem of aerodynamics and how to go about it.
But there are many other ways, just as there are rockets and hovercrafts and jet planes.
There are many other kinds of intelligence, and right now, we don't understand them.
And people freak out because we
don't understand neural nets,
but it's always been like that.
And what we understand is
like oh, we are just one part
of intelligence of
this larger picture of,
let's call it not the theory of
aerodynamics or thermodynamics,
the theory of intelligence.
And we see where we fall
in into that, right.
So as artificial intelligence comes in, we see that these other intelligences are much better at some things.
One thing they're much better at is executing laws. Because laws are algorithms, that's what they are, right.
The other thing, they are much better at being impartial and being really neutral.
That's what democracy
strives for, right.
So the rule of law, neutrality, democracy, the equality among each other, having the big picture, processing a lot of information, they're much better at that than this little, you know, nature-made solution we came up with; we cannot.
So on all these different factors that we have, we know this kind of intelligence is much better than ours.
And it will lend itself to creating a much more solid governance system than the one we have.
It's kind of like we are the
Wright Brothers and we try
to speculate about
flying to the moon.
>> John Haskell: So we
know a lot more today
than we did yesterday about
artificial intelligence
and how essentially you're
saying that as an individual,
we're not quite as bright as
some other way of, you know,
bringing information together
and making a decision.
You brought up the law.
I'd like to hear you talk a
little bit more about that.
But I wonder, as a practical matter, do we know enough now?
I think you're hinting at this.
Do we know enough now
to enhance policymaking
so that it would be
more evidence-based
and maybe less discriminatory,
something that would, you know,
be in tune with the
spirit of democracy?
>> Martin Hilbert: Yeah.
Yes, I mean, there's the promise. It's not there until now, because we haven't implemented it yet, just because most of AI has been developed to optimize ads.
Like honestly, right.
So, but we know theoretically
and also practically,
like academics are
working on it,
some other colleagues
are working on that.
And we can do that.
So one thing is to say for
example, the discrimination,
that's a very good point
that you mentioned.
So, you know, we strive to,
for example, execute the law
in a very nondiscriminatory
way, right.
If the law says that, we
should actually execute the law
like this.
And the colleagues next door, the Supreme Court, right, they've been trained for 50 years to be really impartial in doing that.
Now, we can show, and studies
show that even after 50 years
of training, you
will still be biased.
You will still have stereotypes.
You cannot get them
out of your head,
because these variables, like race, like gender, religious belief, these are all-encompassing variables that are so powerful and have so much [inaudible] power that this little processor that we have here loves to work with them.
Because it's like a big shortcut, you know. Like oh, you just take gender and race and I can have all these little inferences.
So we cannot get
it out of our mind.
Now algorithms can work
with much more fine-grained
indicators, you know.
So instead of working
with gender,
we can work with all these
underlying indicators.
Now in practice, they are not there until now.
So right now, there
have been studies done.
You take a neural net, and you
feed it with everything you find
on social media, on Wikipedia
and all the newspaper articles,
everything you find on the web.
You feed it and you
ask this algorithm
who to invite on
a job interview.
Somebody with a male
or female name
if all the qualifications
are the same.
It will recommend you
a male name, with,
I don't know, 60% likelihood.
If you ask it, when everything sounds the same except the first and last name, African-American or European, it will recommend you to go for the European.
And if you look into [inaudible], and you can, right, this multidimensional space, you see that African-American names are in a corner together with terms like prison and agony and violence.
And European names in a corner together with success and salary and whatever.
And who taught the AI
such a racist view?
Well, we did, you know.
It read 250 years of
our writings and says
like you guys say that
goes together with that,
and that goes together
with that.
That's all AI learned.
It learned it from us,
from learning 300 years
of our writings, right.
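The kind of association he describes can be probed directly in an embedding space. Below is a minimal sketch with hand-made toy vectors: the names, the associated concepts, and all the numbers are invented for illustration; a real test would use vectors learned from large corpora, as in the word-embedding association studies referenced here.

```python
import numpy as np

# Hand-made toy vectors standing in for a trained embedding space.
# Names, words, and numbers here are invented for illustration only.
vecs = {
    "emily":   np.array([0.9, 0.1, 0.1, 0.0]),
    "jamal":   np.array([0.1, 0.9, 0.1, 0.0]),
    "success": np.array([0.8, 0.2, 0.0, 0.1]),
    "prison":  np.array([0.2, 0.8, 0.0, 0.1]),
}

def cosine(a, b):
    # Cosine similarity: how close two directions are in the space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(name):
    # Positive score: the name sits closer to "success" than to "prison".
    return cosine(vecs[name], vecs["success"]) - cosine(vecs[name], vecs["prison"])

bias_emily = association("emily")   # > 0 in this toy space
bias_jamal = association("jamal")   # < 0 in this toy space
```

The asymmetry is purely a property of where the vectors sit, which is exactly the point made in the talk: the model places names near concepts because the training text did.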
So now what we can do as well with AI is say, since it is able to [inaudible] the small indicators below these big indicators, we can tell the AI: use all the information you can get, but don't use the variables gender, race and religious belief, right.
Do not discriminate on that. Don't use that information.
You will inevitably
lose accuracy,
because every variable gets
you more accuracy, right.
You will lose accuracy,
but it turns
out that you can design
the algorithm in such a way
that you lose accuracy
very minimally.
So if a person, for example, is 70% accurate, the machine is 86.6% accurate.
If you take these three, four variables out, it's 86.2.
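The trade-off described here, drop the protected variables and lose only a little accuracy, can be illustrated on synthetic data. Everything below is invented (the features, the protected proxy, the simple least-squares classifier); it is a sketch of the idea, not the studies cited in the talk, and the accuracy numbers will not match the 86.6/86.2 figures.

```python
import numpy as np

# Synthetic illustration: remove a protected variable from a classifier
# and measure how little accuracy is lost when the model must rely on
# the fine-grained legitimate indicators instead.
rng = np.random.default_rng(0)
n = 4000
x1 = rng.normal(size=n)                      # fine-grained legitimate indicator
x2 = rng.normal(size=n)                      # another legitimate indicator
# A protected attribute that is correlated with x1 (a proxy variable).
protected = (x1 + rng.normal(scale=0.5, size=n) > 0).astype(float)
# Outcome driven by the legitimate indicators plus noise.
y = (x1 + x2 + rng.normal(scale=0.3, size=n) > 0).astype(int)

def accuracy(X, y, split=n // 2):
    # Least-squares linear classifier: a deliberately simple stand-in model.
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb[:split], 2.0 * y[:split] - 1.0, rcond=None)
    pred = (Xb[split:] @ w > 0).astype(int)
    return float((pred == y[split:]).mean())

acc_full = accuracy(np.column_stack([x1, x2, protected]), y)  # uses protected variable
acc_fair = accuracy(np.column_stack([x1, x2]), y)             # protected variable removed
```

Because the protected variable is largely redundant given the underlying indicators, the restricted model loses almost nothing, which mirrors the point being made.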
>> John Haskell: Still
better than the person.
>> Martin Hilbert: Well, much better.
>> John Haskell: By a long shot.
That's the point, right.
>> Martin Hilbert: Yeah,
much better than the person.
And we know the person
will always be biased.
The person cannot get these variables out of their head, right.
The machine can, with absolutely no discrimination, and still be way superior to the human.
So in theory, yes, we can.
Now, we haven't developed that yet at a commercial scale or applicable scale because nobody put the dollars on the table.
I mean, we spend a lot of money,
we employ 40,000 mathematicians
in the NSA to keep
Americans safe.
We employ tens of thousands of
programmers in Silicon Valley
in order to get people
the ads that they want.
But nobody put the
money on the table
to say let's develop
some AI, you know,
to make democracy work.
And it wouldn't be a big thing
just hire 10,000 programmers.
Nobody has been doing
that until now.
Now these applications, in theory, they have a lot of potential to improve democracy. In practice, we haven't started to look into them.
>> John Haskell: So, being a Congress-focused person in my academic career, I think, well, members of Congress have a lot of different reasons to introduce bills.
We know that, right.
You know, some of them
have nothing to do
with actually passing a law.
That's a news flash, I
know, for a lot of people.
Because they have other
completely legitimate reasons
to introduce a bill.
But a lot of times they actually
do want to achieve something.
>> Martin Hilbert:
It's good to know that.
>> John Haskell: Right.
We're trying to be
snarky about that.
But seriously, I mean there
are times you introduce a bill
for other purposes that aren't
about legislative product
at the end of the day.
But there are times
when it is about that.
So would there be some way to
figure out how to craft a bill
that would take advantage of an
intelligence that's better than,
say I'm a congressman, and
you three were my staff
and us just having
a brainstorming.
There's got to be a better way,
right, based on what you said.
>> Martin Hilbert: Yeah, right.
Yeah. So I think one of the best things is this amazing thing, so for example, the bills, for example, how that works.
We have a one-dimensional scale, left and right.
And usually we divide opinions: are you left or are you right? Right. In most countries around the world.
So that's one dimension.
Opinion doesn't have
one dimension.
Opinion has many
dimensions, right.
So you have 800 dimensions,
2,000 dimensions,
that you have in a big space.
And you can see actually
where things hang.
Actually, that's another thing.
We can play around with that.
If you turn on the computer
again, you can go to that.
So that's an AI.
Most companies actually
open up the neural nets
so you can actually go there.
They opened up because they're
a little scared of it, you know.
We have a terminator
in the basement.
We just want you to
know we have one.
Have a look at it.
They don't give you
like the, they give you
like an empty brain, right.
They don't give you the
brain that they trained
for 20 years with their data.
It's kind of like they have
the Ivy League graduate,
and that's what they
make money with.
And they give you
like the newborn baby
and say here you go.
But you can train it yourself, right.
At least they give you the architecture, right.
So you can go here.
One, for example, is the TensorFlow Projector.
And that's one of these deep neural nets that allows you to represent words.
So this here is a
200-dimensional space,
well shown in a
three-dimensional space.
I once asked one of these
computer scientists how they
imagined 200 dimensions, because
I can only imagine three.
And he said well, that's easy.
You close your eyes and then
you scream very loudly 200.
All right, so they
also cannot, right.
But yeah. So in here
you can see, you know,
you can kind of like zoom in.
You see it's all these
words here, right,
and they hang together these
words and different corners.
And then you can see,
for example, here,
let's see if we have Congress.
Yeah. We can see Congress here,
and then we can see the 100
neighboring words of that.
Jefferson, Washington, Vienna,
interesting, house, presidency,
Senate, policy and so
forth, act, library.
Oh look, library is here
too next to Congress.
So, that depends on how
this network was trained,
how this neural network
was trained.
And you can project it now onto a three-dimensional space as well, and you get these kinds of shapes.
It calculates these three-dimensional shapes now and breaks those 200 dimensions down into three dimensions.
And you can really see the shape, like I told you before, that people with African-American names are in the corner with these kinds of concepts.
And that's basically what that is.
So now it breaks it down with this algorithm called t-SNE, which people swear is the best dimension reduction algorithm they've seen for these kinds of purposes.
And you can actually see kind
of like a shape forming
over time here.
And then see where it hangs out.
So, when you are here,
maybe we can stop that now.
And you can move
this around, right.
And you can actually see
this word cloud here.
And then you can look at
the different corners.
Let's, for example,
look at this corner.
Tim, Tom, Steve, Barry, so these all seem to be first names.
Maybe let's look at another corner.
Oops, where am I?
Here. Macintosh, IBM, PC, okay. This is also something.
Songwriter, writer, reformer, entrepreneurs, methodologist, scholar, basketball, so these names seem to be kind of like jobs or entertainment.
And you can now see like which
words kind of hang together.
And whatever you
train it with, right,
you will see different things
that oh here is Alabama,
Mississippi, Indiana.
So these seem to all be states
that have to do with each other.
Now nobody told this
network what actually to do.
Basically, how they work is they play a prediction game.
They read this entire text and try to predict future words or sentences based on previous words and sentences.
So it's all syntax. There's no semantics.
The semantics is the result of the syntax.
We were always told syntax was a different thing.
No, we can get meaning just from structure.
Which is an amazing thing
once we have the big data.
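A toy version of that prediction-game idea, meaning emerging from structure alone, can be shown with simple co-occurrence counts. The corpus and words below are invented for illustration; real systems learn from billions of words with far richer models, but the principle is the same: words used in similar contexts end up near each other, with no semantics ever supplied.

```python
import numpy as np

# Represent each word purely by the words it appears with (structure only),
# then check that words used in similar contexts get similar vectors.
corpus = [
    "paris is a big city", "london is a big city", "rome is an old city",
    "a dog is a loyal pet", "a cat is a quiet pet", "a bird is a small pet",
]
sentences = [s.split() for s in corpus]
vocab = sorted({w for sent in sentences for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Sentence-level co-occurrence counts: M[i, j] = how often words i and j co-occur.
M = np.zeros((len(vocab), len(vocab)))
for sent in sentences:
    for i, w in enumerate(sent):
        for j, v in enumerate(sent):
            if i != j:
                M[idx[w], idx[v]] += 1

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# No meaning was given to the model, yet city words cluster away from pet words.
sim_city = cosine(M[idx["paris"]], M[idx["london"]])   # same-context pair
sim_cross = cosine(M[idx["paris"]], M[idx["dog"]])     # cross-context pair
```

Here "paris" and "london" share identical contexts and get identical vectors, while "paris" and "dog" only partially overlap: semantics as a by-product of syntax.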
Now imagine we do a thing like that.
I was always thinking about what if you do a thing like that with the bills, for example, or with something like that.
Just throw everything in, and you will actually see in 200 dimensions how it maps out.
You know, what's the
opinion structure
of these different bills.
You have a few thousand bills,
and you can see like actually,
well these kind of
rubrics of the bills,
these concepts, these
hang together.
And then you can very quickly
figure out instead of having
like one dimension,
you know, left, right,
there are many concepts
in a bill.
And with that information
process,
you can actually quickly figure
out like how could I get a 50%,
how could I get a majority?
Like what do I have to put
together in a bill in order
to get a majority to
get it through, right.
Because it very quickly maps
it out on these kind of scales.
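That bill-mapping idea could be sketched the same way: put each bill's text into a vector space and read off which ones hang together. The bill titles below are invented, and plain normalized word counts stand in for a learned 200-dimensional embedding; this is only a sketch of the approach, not a real legislative tool.

```python
import numpy as np

# Toy sketch: embed invented "bill" texts as normalized word-count vectors
# and measure which bills sit close together in the space.
bills = {
    "farm-1": "crop insurance subsidies for farm producers",
    "farm-2": "farm credit and crop subsidies reform",
    "cyber-1": "network security standards for federal agencies",
    "cyber-2": "federal network breach reporting standards",
}
vocab = sorted({w for text in bills.values() for w in text.split()})

def vec(text):
    counts = np.array([text.split().count(w) for w in vocab], dtype=float)
    return counts / np.linalg.norm(counts)      # unit-length vector

V = {name: vec(text) for name, text in bills.items()}

def sim(a, b):
    # Cosine similarity (vectors are unit length, so a dot product suffices).
    return float(V[a] @ V[b])

# Same-topic bills land closer together than cross-topic bills: this is
# the structure a coalition-builder could read off such a map.
```

On real bill text one would use many more dimensions and a trained embedding, but the reading is the same: clusters of bills that "hang together" show which concepts could be combined to assemble a majority.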
>> John Haskell: And
then only the Senate gets
in the way at that point.
>> Martin Hilbert: Yeah.
>> John Haskell: So, let's
continue this conversation
with you all to see
what direction you'd
like Martin to take this.
So we've got a couple
people who have microphones
so that you can, so just raise
your hand and indicate to,
we've got somebody
way up in the front.
Get our exercise in.
This gentleman here
was the first
to raise his hand
in the green shirt.
>> Thank you.
One quick clarification,
when you mentioned the 78%
of the 17 million voters
that Obama swayed
of the independents.
I think those were the numbers.
And you said changed
their minds.
Did you actually mean just
impressed them in a certain way?
Because they were
undecided, weren't they?
>> Martin Hilbert: Yeah,
they were undecided.
So they swayed them over, yeah.
>> Swayed them in a way
that might have been the
way they already were.
So it didn't like, it
wasn't a 70% change of mind.
>> Martin Hilbert: Well, 50% were, like, undecided, so on average it would have been a 50/50 call, and at the end of this 16 million --
>> John Haskell: They could
have been on the fence, right?
They could have been
on the fence,
and this tipped them over.
>> Martin Hilbert:
Right, yeah, yeah.
>> John Haskell: I
mean that's what --
>> Martin Hilbert: If they
would have done nothing,
it would probably be again 50%.
Because if they're real
swing voters, you know,
it would have fallen 50/50.
But it turned out it was 78,
which was different
than 50/50, right.
>> John Haskell: That's good.
Who else has got something?
Lucy Ann over here.
>> Hey, this has been
very interesting.
So, I would like to press
back a little bit on this idea
that an algorithm can be
truly nondiscriminatory.
Because you use this example,
right, of taking European
versus African names, which
is very focused on, you know,
one kind of learning, something
that's focused on language.
But you're not touching
on the fact
that language itself can
be specific to ethnicity.
It can be specifically gendered.
And a lot of that
is baked already
into the way we use language.
And that is not something that
you can easily flag as being,
you know, anyone can try
this at home if you want
to do a Google search and phrase
your question using, you know,
hyper academic language
and then using slang,
you get different
results, right.
And this doesn't touch
on discrimination
in like image facial
recognition algorithms.
So I wonder if you can comment a
little bit on that, because it's
like very utopian to say
that these algorithms
can be nondiscriminatory
but also whose motivation
will it be to fix them
when the creators of algorithms are already a monoculture to themselves?
>> Martin Hilbert: Yeah, right.
Now that's a very
important point.
And a very good point.
It's not, well, yes, I think they can be, like I say, as or even less discriminatory than we are, right.
If we exactly know what we do when we say we are nondiscriminatory, right.
If a supreme court judge really says, these are the principles, then I can make sure it does exactly that, number one.
So it can be at least as
nondiscriminatory as we are.
And second, even
more, in a sense,
better because we can
take something out.
It doesn't mean that we still
have to define what we take out.
So when I said we make
nondiscriminatory algorithms,
it's we make them
nondiscriminatory
to these three variables.
You have to define these
three variables, right.
So I said gender, race
and religious belief.
And why these three?
Well, because, you know,
the law gave them to us.
Right. So that's just, we define
on these three we don't
want to discriminate.
And then I can say
okay, make a decision,
but don't discriminate
on these three.
Like what are these three?
That's really for
society to decide.
It might be that another society just says you can discriminate according to that, because it doesn't matter to them.
And then how and
what you discriminate
and what you don't discriminate
and to what degree and in
which sense, yes, that
is completely baked
into the decision algorithm.
Same as it is baked in
any decision algorithm
by a supreme court justice.
So, the difference is
then that if you have it
in a digital algorithm,
you can open it up.
And if it's open
source, also discuss it.
And I think it will
be very important
that these algorithms
will be open source.
So they have to be
completely open source.
Everybody should be
able to look into them.
Everybody should be able to
exactly know what they do.
And this new law in Europe that
comes into effect at the end
of the month, kind of like
goes into that direction.
I talked with the German
government last week,
and they have no idea how
they're going to implement it.
But this is this right
to know law, right.
So in Europe, starting next month, you can go and say to somebody, this algorithm did something, it took a decision.
I want to know exactly how this algorithm got to this decision.
So Facebook, you need
to explain to me,
or somebody who hires
you, you need to explain
to me how this algorithm
made the decision not
to invite me to a job interview.
And the person that used the
algorithm or the provider
of the algorithm then has
to lay that out to you.
So it goes in this
direction of being,
but you are completely right.
Yeah, these algorithms, these
should be exactly transparent.
Now the good thing is we
then know what they do.
For example, there's this famous study also done by the same Kosinski who started this psychometric stuff.
So he took, I think that's
what you are referring to.
He took images in Facebook
and trained the algorithm
to detect homosexuality.
And it was very successful; just from your profile shot, the algorithm could detect your sexual preferences.
Just from looking at the image.
Now the interesting thing was,
now once we had an algorithm,
and he just said, you know,
just show that it's possible.
And it is possible.
You can detect sexual
preferences just
by the picture, by
the face actually.
Which is really surprising to many.
Now, since it's an algorithm, what came of that is they can look at what the algorithm actually did, right.
And then they can see, like, what does the algorithm actually do when it gets to this or that decision?
Humans make these decisions all the time, right.
We just don't know how. It's tacit knowledge for us. We kind of keep that implicit.
Once we have an algorithm, we can learn more about why it discriminates, what the discrimination is.
Now, the results we take from that can be extremely ugly, or they can be very useful.
And that's still a decision that's completely up to us.
The technology doesn't care about that, about what you do with it.
>> John Haskell:
So the gentleman
with the yellow shirt right
next to you, Mike, was next.
>> Okay, thank you.
Thank you very much.
Just a very simple, well maybe not so simple, but very short question.
Is this a danger to participatory democracy?
Are we going in a direction
that could be construed
such that a country or
a government that is not
so democratically inclined
could take this information
and really control
just about everything?
I mean I'm looking
at news from China
where they have these facial
recognition situations,
and who knows?
I mean, prediction
can be dangerous.
And I just don't know if you've
had any thoughts or ideas
on how this type of thing
can be used by governments
that are not so benevolent.
Thanks.
>> John Haskell:
That's a great question.
>> Martin Hilbert: Yes.
Yeah, as I just said,
the technology is always
normatively neutral.
It's just that technology
is just a tool, you know.
It has to be socially
constructed, just like a hammer.
I mean if you want to build a
shelter to protect yourself,
you need something
equivalent to a hammer.
Now, everything equivalent
to a hammer can be
used to kill somebody.
It's not the hammer's fault.
But if you want to go
on evolution, you know,
if you want to be a
civilization, we need to,
need something like a hammer
in order to build a shelter.
You know, otherwise, you'll
be back with the animals.
You know, like, but the
hammer is just, you know,
and any technology is a
tool like that that has
to be socially constructed.
It's not technologically
deterministic in that sense.
So it can be used for
participation, it can be used
for participation for
a very good sense.
A colleague of mine at MIT, [inaudible], he talks about this idea; he just gave a TED Talk last week about that.
How actually, his idea would be
to kind of like have an Avatar
for each one of us that
basically reads all
of our digital footprints
all the time what we read,
what we do on Facebook,
who we're in contact with.
What our friends
think and so forth.
And this Avatar then basically
internalizes our political
views, right.
And then we create this Congress
supplement or complementary
to existing Congress, right,
where we send 250
million Avatars.
And if a bill goes through, you know, we then basically say, well, I'm busy, I have other things to do, you know.
So I send my Avatar to this hearing, and my Avatar is like, my opinion would be like this.
You have to change that or that, or it doesn't fly with me, right.
So, it would be a complete direct [inaudible] because they always update it in real time, like I update my Avatar in real time, as all of you do.
And we have this 250
million people assembly there
of Avatars assembly
there, right.
And, you know, these
digital footprints exist.
I mean Facebook has them.
Why, and Google has them.
Why shouldn't my
Avatar have them?
And I send them to
Congress, right.
So is there a big benefit?
And this is almost like
direct democracy then.
Now, remind ourselves that the founding fathers of this Constitution, of this country here, were very skeptical of direct democracy, right, because, you know, the mob killed Socrates, right.
So we need a mechanism to refine and to enlarge, and that's why representative democracy was created.
So we have a lot of tools for that, to make participation better.
But also, that's not the entire solution.
Representative democracy is more than just going back to Athens, you know, to direct democracy.
On the other hand, on the
other extreme of the question
with using abusing
that, yes, absolutely.
I mean the oldest vision
of this entire scenario
of what we call maybe
information society,
[inaudible] society is it
doesn't come from an academic.
It comes from George
Orwell in 1948, right.
>> John Haskell: '84, right.
He wrote it in '48, yeah.
>> Martin Hilbert: Yeah, in '48 he wrote '84, right.
So, the academics
started to talk
about the information
society in the '80s.
He talked about that
in '48, right.
And it was this kind
of vision, you know.
I mean nowadays he would
probably turn in his grave
if he knew what was
going on, right.
Because it's much more severe than he could have ever envisioned back then, what the government industrial complex, as he called it, knows about us, right.
>> John Haskell: Gentleman
two down is the next,
and then we'll come over here.
>> Excuse me.
I came in late, so I don't
know if you've touched on this.
But this is a question about
the, highly speculative question
about the future of
artificial intelligence.
This is the 50th anniversary of 2001: A Space Odyssey and HAL.
And I think we're mostly familiar with the outcome of that.
Scientists have seriously thought that true artificial intelligence can be achieved, not in the far distant future, but perhaps within the next 30, 40 years.
What would happen to all
these algorithms that we now,
we nicely plan and
put into computers,
when the computers
themselves think
and decide well,
about discrimination.
Well maybe democracy
isn't such a good idea.
Maybe it's more effective to
have say [inaudible] thought,
you know, a philosopher king.
Or us, the computers,
who know far more
than these petty human lives.
So I mean, I know this sounds like science fiction, but science fiction has become science fact rapidly in the last 50 years.
>> Martin Hilbert: Right.
Yeah, wow, that's
a big question.
I mean two, three
things coming to mind.
So obviously, the question is
with the singularity, right.
So, I do, I think
it's complementary.
So there are two things.
I don't think that artificial intelligence basically cares about replicating us.
When we say true intelligence, it's kind of like, you know, you don't have to figure out how a Colibri flies in order to build helicopters.
You know, it's just that helicopters can do things that the Colibri cannot do, and drones can do things that the Colibri cannot do.
So it's not, like you
don't have to like,
oh how does the brain work?
It doesn't really
matter how the brain,
it doesn't really matter how
birds fly in order for us
to fly to the moon, right.
So if you have infinite human
resources, get some people
to study how the brain works
and maybe you can learn
something from that.
But you don't really need to.
Once you understand the
general concept of aerodynamics,
you can build all kind
of flying machines.
You don't need to study birds.
You can still do, but it's just
this one solution nature came
up with, right.
So same with intelligence.
What is true intelligence
is actually an
interesting question.
Because we have only
like this one thing
that Mother Nature came
up with after tinkering,
but that's basically it, right.
So, yeah, I think it's not like,
I don't think it's the
terminators and so forth,
it's just a different
kind of intelligence.
And that brings me to the second point, the singularity.
It's not like we will
have a terminator
which will be like us.
No, it's more like a jet
plane compared to a Colibri,
or a helicopter compared
to a Colibri
and a jet plane compared
to a Condor, right.
It's more like the difference will be that they're complementary, doing different things, similar in the same concept, but complementary things.
And I think this
digital technology
where it's very good is
it brings us together,
especially talking
about governance, right.
It's kind of the glue
that stitches us together.
I'm in the department of
communication because, you know,
it's communication
that holds us together.
And that's what's
being digitalized.
Laws are the same; laws are communicative structures, algorithms, you know, on this level.
So laws are algorithms from this bird's eye view.
And then that level, we already
merged with the technology.
I mean singularity
already happened.
And 80% of the transactions on the stock market are decided by artificial intelligence.
Ninety-nine point nine
percent of the decisions
on the electric grid are made
by artificial intelligence.
And over half of marriages in
the United States are decided
by matching algorithms,
artificial intelligence guided
on dating sites, right.
So if you tell me, hey Martin,
we found this new species,
our extraterrestrial
species, right,
80% of the resource
distribution,
100% of the energy distribution,
and 50% of the procreation
decisions,
are taken by this
thing called AI.
I would say like hey,
yeah, that's one, right.
You guys already merged, right.
You already, it's
inseparable already
on a bird's eye view
societal level.
And that's how I see society, right.
Now, you could turn off your
cellphones, never again be
in touch with money, because
that's all digitalized, right.
Go to the mountains, and you
probably will also survive
without any contact
or digital whatsoever.
You would be able to
survive for a few years.
But under no circumstance
can you claim
that you coevolve with us.
Right. Human evolution
has already merged.
On an evolutionary level,
we already have a socially
technological system, right.
So you can step out, but
you're not evolving anymore
in a biologically evolutionary
sense with us as society.
Right, so I think singularity
has already happened.
And I think the biggest
effects are
from this bird's eye view level,
because who cares
about this brain.
It's just one brain.
It's kind of like, okay, you have birds, birds and planes, but, you know, these are just different things that make up a bigger whole.
>> John Haskell: So we're
going to do one quick question,
and then we, and the exciting
thing is there is like wine
over there when we're done.
The last one is the
lady in the middle here.
She'll have the last question.
And then people can
have at Martin.
He has no bodyguard.
>> Okay, I feel like when
I use Facebook I can get it
to make me aware
of a lot of things
that it might not naturally
choose to, by selectively looking
at other things and liking them.
Like if I read the Washington
Post, then I'll look
at the Washington Times.
And if I get things
about certain problems,
I'll try to look up.
And once I've done that,
it just keeps feeding me
from both sides.
I mean, if I make a few choices,
it'll just keep feeding me.
>> Martin Hilbert: Right.
Yeah, so, that's a question
how these algorithms work
and how those algorithms
are set up.
So Facebook has set up
the algorithm in order to,
like Facebook, what all
the social media sites try
to maximize is to keep you on
the site as long as possible.
All right, and they
have these engagement
optimization algorithms.
So YouTube and Facebook
and whatever,
they try to get every split
second out of you to stay longer
at the site because they
have more interaction,
more data about you, better
ads and so forth, right.
That's the business model.
So, the easiest way to
keep you on the site is
to show you what you like.
We know if we show you exactly
the opposite of what you
like you will run, right.
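The feed logic described here, keep people on the site by showing
them more of what they already like, can be sketched roughly as
follows. This is an illustrative toy model, not Facebook's actual
system; the function names, the data layout, and the topic-overlap
scoring rule are all assumptions made for the example.

```python
# Toy sketch of an engagement-maximizing feed ranker: score each
# candidate post by how much it resembles what the user has already
# interacted with, then show the highest scorers first.

def rank_feed(posts, user_history):
    """Order candidate posts by a crude predicted-engagement score."""
    # Topics the user has previously engaged with.
    liked_topics = {t for p in user_history for t in p["topics"]}

    def predicted_engagement(post):
        # Proxy for "show you what you like": count how many of the
        # post's topics overlap with the user's past interests.
        return len(set(post["topics"]) & liked_topics)

    return sorted(posts, key=predicted_engagement, reverse=True)

history = [{"topics": ["politics", "economics"]}]
candidates = [
    {"id": 1, "topics": ["cats"]},
    {"id": 2, "topics": ["politics"]},
]
ranked = rank_feed(candidates, history)  # post 2 ranks first
```

The feedback loop in the audience member's question falls out of this
design: whatever you engage with feeds back into `user_history`, so
the ranker keeps serving more of it.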
>> I say I like this and the
opposite thing [inaudible].
>> Martin Hilbert: Well, you can
try to confuse the algorithm,
but your digital footprint
is way too complete
as to really confuse it, right.
I mean bots sometimes
try to confuse that.
And that's been the
solution to it.
But your digital footprint
comes from all the
different sites.
It's not only on Facebook.
Facebook is everywhere,
right, even if you're not on it.
You can download
your Facebook data.
You will see that most of
it is actually not collected
on Facebook.
If you don't have
a Facebook profile,
Facebook still has a
dataset about you, right.
I invite everybody,
just go to your email.
Go to the search function
and type in Facebook.
You will see that almost
every email in your inbox has
something to do with
Facebook, right.
So most websites have the
Facebook pixel on them.
So independent sites, even
government sites, have
the Facebook pixel.
So they track you
when you get there.
So it's everywhere.
So they actually have a digital
footprint much too complete
to confuse with one
or two different likes, right.
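The tracking-pixel mechanism described above can be modeled in a few
lines: many unrelated sites embed the same tracker, and each embed
reports visits back to one central party, keyed to one visitor
identifier. This is a simplified illustration, not real Facebook
code; the class, the cookie-style ID, and the site names are all
hypothetical.

```python
# Illustrative model of cross-site pixel tracking: one tracker,
# embedded on many sites, assembles a single footprint per visitor.

from collections import defaultdict

class Tracker:
    def __init__(self):
        # visitor_id -> list of (site, page) visits reported by pixels
        self.footprints = defaultdict(list)

    def pixel_fired(self, visitor_id, site, page):
        # Each embedding site reports the visit to the tracker, even
        # though the visitor never opened the tracker's own site.
        self.footprints[visitor_id].append((site, page))

tracker = Tracker()
tracker.pixel_fired("cookie-123", "news.example", "/politics")
tracker.pixel_fired("cookie-123", "shop.example", "/shoes")
tracker.pixel_fired("cookie-123", "gov.example", "/benefits")
# The tracker now holds a three-site footprint for this visitor,
# without the visitor ever logging in to the tracker itself.
```

This is why, as Hilbert notes, a profile can exist for someone who
never created an account: the footprint accumulates on the tracker's
side, across every site that embeds the pixel.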
Now you can completely
try to be another person,
but then in a Goffmanian sense,
you will become this
other person, right,
because we are all actors.
>> John Haskell: All right, we
have to go to the reception.
Thank you all very much.
Thank you, Martin.
>> Martin Hilbert: Yeah.
>> John Haskell: This is great.
>> Martin Hilbert: Great.
>> This has been a presentation
of the Library of Congress.
Visit us at loc.gov.
