[APPLAUSE]
IRIS BOHNET: Thank
you for having me here
at Google and for
coming to my talk today.
I'm a fan of many of the
things that you're doing.
I'm a huge fan, in particular
because the company is
data-driven in many ways.
And also in the ways that
I particularly care about.
In its work on
diversity and inclusion,
and generally in its
talent management.
So with that, kind
of, I hope this
is going to resonate with you.
What I was trying to do with the book
was really to bring data and evidence
to bear on a question that many people
are very passionate about.
And I share that passion.
But what I'm trying
to argue is that we
have to bring the
same kind of rigor
that we use to
analyze unemployment
or inflation to questions
of gender equality.
So here's what I want to do.
I want to start out by kind of
talking a little bit about why
we might want to care.
Then I want to talk about
what we're up against.
And I know you
all or many of you
have gone through
unconscious bias training,
so you will be familiar
with much of that.
And then I want to spend
most of the talk discussing
how we can redesign our
organizations to level
the playing field and
make it easier for all
of us to do the right thing.
So some of you might
be here because you
care about this pyramid
here, because you
care about the absence
of women in leadership.
And then some
others might be here
because you care about
even bigger questions.
And I would actually
suggest to you
that this slide is a slide that
you should most care about,
even more so than the pyramid.
The UN, in fact, now estimates
that about 200 million women
and girls are missing
because of gendercide.
Gendercide through sex-selective abortion or neglect
in the first five years.
Now this in itself,
of course, is a tragedy.
But I'm starting with it because
a problem that many thought was
too difficult to even
address led to some really
inspiring research.
A colleague of mine, Rob
Jensen who's now at Wharton,
went into India and
exploited the fact
that many call centers had
moved into India in the '90s.
And they often hired women.
So what he did was he ran an
experiment, a field experiment,
where he had treatment villages
and control villages.
And in the treatment
villages, he
offered recruiting services and
training for women
to go and work in call centers.
And yes, he was interested
in whether that would affect
the likelihood that these
women would go and work
in the call center.
But more importantly, he was
interested in whether this
affects how parents
treat their zero
to five-year-old daughters.
Do even the poorest
of the poor care
about returns on investment?
That's what he set out to find out.
So he measured survival chances,
he measured body mass index.
He and his team.
Of course, he measured
whether these girls then
would be in school.
And really tried to understand
whether parents started
to treat their daughters
better when there
was a future for the daughters.
And that's what he found.
It was a long experiment,
over about 10 years.
And he could show that
economic opportunity actually
can change how parents think
about the value of having
daughters without negatively
affecting their sons.
And I think that's
why we should care.
We should really care because
for some, this quite literally
is a matter of life and death.
But bringing you
back to somebody
who I think many of you know.
This is Heidi Roizen.
If you have gone through
the unconscious bias
training at Google, you
might have encountered her.
She is an entrepreneur,
a venture capitalist
in Silicon Valley, and
she was famous before.
A case study made
her even more famous.
And the case study,
in an interesting way,
made her famous because
some colleagues of ours
at Columbia Business
School used this case
to teach their students
about bias in the moment.
How this is done now
across, really, this country
and some other schools
around the world
is that half of
the students would
get the case of the
protagonist being called
Howard and the
other half would get
the case with the protagonist
having her real name, namely
Heidi.
And then students read the case,
everything identical, and then
rate Howard and Heidi.
And again, you will
have been there
if you have done the training.
We generally do think that both
Heidi and Howard do a good job
and are competent, but
we do not like Heidi
and do not want to hire
Heidi because Heidi
violates gender norms.
That's what we're
up against when
we try to overcome some
of these patterns that
affect our thinking.
But let me ask you to
take a look at the pattern
here that you see.
Why don't you compare
squares a and b for me?
I presume most of
you will see square b
as being lighter than square a.
It turns out that
this is an illusion.
And what I'm going to
do next is I'm going
to cover the surroundings here.
And I presume that most
of you now see the squares
as having the identical color.
I'm going to go back
just quickly because you
look at me puzzled.
So here's what
happened in your minds.
Your mind immediately made sense
of the pattern that it saw.
A checkerboard.
And your mind knows that
the light square has
to be next to a dark square.
And your mind also takes
the shadow into account
and corrects for it.
So the question, really, in
front of us is the following.
What kind of patterns
do we see in the world
out there which keep
us from seeing square b
for what it really is?
Another dark square.
So some creative interventions
build quite directly on this.
And you might have
heard of orchestras,
the larger orchestras
in this country,
introducing blind auditions.
In the '70s, many major
orchestras in the United States
introduced curtains and had
their musicians audition
behind curtains.
That increased the
likelihood that women
would be hired by 50%.
Or put differently,
blind auditions
played a huge role in
increasing the fraction
of female musicians
from 5% to now
almost 40% in the major
orchestras in this country.
This is quite different
from the roughly 10%
female musicians, for
example, in Berlin
or in the Vienna Symphony.
So blind auditions
are an attractive tool
that, in fact,
many organizations
now increasingly rely on.
Of course, not in terms
of auditions, but in terms
of blinding themselves to
demographic characteristics
of the top applicants.
But I want to use the blindness
primarily as a metaphor for us.
Because what we're trying
to do here is really
learn from these curtains
for other types of design
interventions which could make
it easier for all of our minds
to get things right.
And that's where I
want to take you today.
In fact, before I
talk about gender,
I want to leave you with another
metaphor which surely must
be familiar to many of you.
Most of you must have
been in a hotel room
where the room key card serves not only
to open and close the door, but
also to turn the lights on and off.
This is another little
bit of technology,
a little bit of
design which makes
it easier for all of us who
actually think that we care
about the environment to follow
through and have the lights off
when we leave the room.
So that's where I'm going.
Trying to make it easier for
us to do the right thing.
And this, of course,
is very, very different
from other types of things
that we have been doing
and could be doing.
It's different from
diversity training.
Now clearly
diversity training is
important for raising awareness.
But as we all know, it is
often hard to follow through.
Because by definition, these
biases are unconscious.
And even though I
now realize that I
will treat the male kindergarten
teacher or the male nurse
differently from their
stereotypical counterparts who
happen to be women, I can't
guarantee that tomorrow
when I see a male nurse,
I will objectively
evaluate that person.
Seeing really is believing.
So diversity
training is a start.
But I am arguing that to
really move the needle
and make a difference, we
have to go deeper and do more.
And yes, trainings
enabling traditionally
disadvantaged groups
to succeed have
been shown to have some impact.
But again, I don't
think the solution can
be to fix women or people of
color or other underrepresented
groups.
But eventually we have to
move to fixing the system.
So that's where I
want to take you.
I want to talk about
three different topics
that I touch upon in the book.
The first one is
talent management,
something that everyone
in this room and everyone
across the globe, really, that
is listening to my talk today
has been involved in
in some shape or form.
Either because you've
interviewed for a job
or because you were one of
the people evaluating others.
Then I want to talk a
little bit about redesigning
school and work.
And give you some examples
of how that might be done.
And then finally, I'll talk
about possibly the hardest
topic, and that is how
to design diversity.
How to make diversity
really work.
OK, so any talent
management, of course,
starts by attracting the
right kinds of people.
And curiously
enough, we have been
thinking about, for example,
gendered advertisements
for a very long time.
Not just Coca-Cola, not
just Pepsi, not just
other soft drink
companies, but all of us.
We are kind of aware of the fact
that some colors, for example,
some shape, some names, appeal
more to women than to men,
or vice versa.
In Coke's case, it
was the word "diet."
Coke and other soft
drink companies
realized that men don't
seem to be buying Diet Coke.
It, of course, could
have lots of reasons.
Either men don't care
about the calories
they take, or they don't
have an issue with calories,
or they run more along
the Charles River.
Or diet is not their word.
So they replaced Diet Coke with
Coke Zero, which was for men.
Pepsi did the same thing.
Pepsi Max instead of Diet Pepsi.
And of course, Gillette
did the reverse
when introducing the Gillette
Venus, which comes in colors
that resonate with people
like me: pink.
I argue that-- I don't want
to defend this in any way,
I'm just describing
that this is happening.
But what I do want
to argue is that we
should use the same
kind of scrutiny
in our job advertisements.
So here is an ad
that a school posted
which wanted to increase the
fraction of its male teachers.
As you probably know,
in the United States,
we now have about 10%
to 15% male teachers
in our elementary schools,
which increasingly
poses a problem for our boys.
Because they no longer
have male role models.
So this ad looked
like this. "Looking
for a warm and caring teacher
with exceptional pedagogical
and interpersonal skills to work
in a supportive, collaborative
work environment."
The adjectives that we
highlighted, of course,
are gendered and typically
associated with women.
And research suggests that this
will, in fact, substantially
decrease the likelihood that
men will apply to these jobs.
So an alternative ad could have
looked something like this.
"Looking for an
excellent teacher
with exceptional
pedagogical skills."
Now of course, the
school might say,
but we really care
about the caring.
And if they do, that is OK.
I'm not prescribing what
schools or any organization,
for that matter,
should be doing.
But what I am arguing is that
we should do it consciously.
We should understand
what messages we send,
and send the messages
that we do want to send
and that are important
to us in a conscious way.
And that, of course,
often means that we
have to measure, that we
have to collect the data,
and evaluate the impact
of what we're doing.
Now let me move on to
somewhat higher-hanging fruit.
This is a hard one and I
know Google and your people
analytics groups have
worried about this
for quite a long time.
Of course what we're up
against in evaluating
people is that most of
us believe that we are
particularly good interviewers.
And those of you who've
read the book by Laszlo Bock
will recall that among
all the Googlers,
there is this one outlier who
is an amazing interviewer,
and everyone else
is just average.
And that's kind of
true for the world.
That we all think
we're very good
and we'll feel whether you
belong or not, when in fact
what we're building
our assessments on
are these stereotypes and are
these implicit assumptions
that we often even
can't articulate.
In my case, for example, a
real wake-up call
was when a job candidate--
this is a real story-- a job
candidate and I started
to talk about the fact
that we both had been
synchronized swimmers.
And I felt immediately
that, oh my God,
that will make her an
amazing Harvard professor.
Now we can't keep
that from happening.
That's the problem
with interviews.
It's not that everything's
just bad, but it's hard for us
to distill the useful
information from noise.
And so I have this somewhat
cheesy stock photo up here
to suggest to you that almost
everything that you see here
is wrong.
So the first thing that
is wrong is that we
shouldn't do panel interviews.
We shouldn't do panel
interviews because the sample
size of a panel interview
is basically one.
These three people will not come
up with independent assessments
and is, of course,
much better to have
three separate interviews with
three separate evaluations
going on independently.
A second myth is that diversity
on the evaluation committee
itself will solve our problems.
Now diversity can be helpful
in that we might reach out
to different networks and
invite different people to apply
to the jobs.
But it doesn't protect
us from implicit bias.
Seeing really is believing.
And if we don't see female
engineers or male kindergarten
teachers, we don't naturally
associate those jobs
with women or men, respectively.
So diversity itself
can't be the solution.
Then thirdly, and
now, of course,
reading a bit much
into this picture here,
what's really important
is that we always
try to calibrate our
judgments by forcing
our minds to make comparisons.
So why am I saying this?
A very basic insight
from behavior science
is that everything
that you judge,
everything that you evaluate,
the coffee that you now drink
or the water that you
have in front of you
has something to do with
the kinds of coffees
that you normally drink.
That is your reference point
or your reference coffee.
That helps you evaluate whether
that's a good or bad coffee.
Of course, we do the very same
thing when we evaluate people.
We tend to evaluate
them compared
to what we're used to in these
specific professions or jobs.
And so what we're
trying to do is
to overcome that need to rely on
this internal little reference
person sitting in our head
who looks like the stereotype.
And what we've been showing
with a number of experiments
is that when I force you
to compare at least two,
could be more, but
at least two job
candidates at the
same time, you will
be able to overcome
your stereotypes
and are much more likely
to focus on these people's
individual characteristics,
their ability,
and what they bring to the
table rather than the groups
that they belong to.
So comparisons can
be a powerful tool
to calibrate your judgments.
All of this, of course,
hints at the fact
that what we really should do is
use a more structured process.
I was very happy to
learn that Google
uses many of these
insights already
in that you predetermine the
questions that you want to ask.
You ask all of
your job candidates
the same kinds of questions.
You ask in the same order.
And ideally, what
we should also do
is we should rate every
question, every answer that we
get, after we've
asked the question,
and then move on to
the next question
so that we're not
biased by whatever
the candidate responded to the
first question that we asked.
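As a minimal sketch of that idea, the following captures the two design features just described: a fixed list of questions asked in a fixed order, with each answer scored before moving on, and with interviewers producing independent scores rather than a single panel judgment. The questions and the rating scale here are made up for illustration, not any company's actual process.

```python
# Illustrative sketch of a structured interview: a fixed question list in a
# fixed order, with each answer rated immediately, before the next question
# is asked. Questions and the rating scale are hypothetical examples.

QUESTIONS = [
    "Tell me about a time you resolved a disagreement on a team.",
    "Walk me through how you would approach a problem you've never seen.",
]

def run_structured_interview(rate_answer):
    """Return per-question ratings, each recorded before moving on."""
    ratings = []
    for question in QUESTIONS:
        ratings.append(rate_answer(question))  # score this answer now, in isolation
    return ratings

# Three separate interviewers produce three independent rating sheets,
# rather than one panel converging on a single shared impression.
sheets = [run_structured_interview(lambda q: 4),
          run_structured_interview(lambda q: 3),
          run_structured_interview(lambda q: 5)]
print(sheets)  # one independent list of ratings per interviewer
```

The point of rating question by question is that a strong or weak first answer cannot color how the later answers are scored, and keeping the three rating sheets separate preserves a sample size of three rather than one.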
There's a number of these tools
that I discuss in the book,
and I'm actually quite
excited because there
are now these start-up companies
using some of these insights
and translating them
into the technology which
will make it easier
for all of us to use
more structured approaches
to our hiring and evaluation
processes.
But of course,
behavioral insights
shouldn't stop at
the entry level.
And many of you will argue,
going back to the pyramid,
that the really big
challenges start once you
are in an organization.
And let me give you kind
of three quick thoughts
on the kind of things that we
might want to change there.
A first one is super
trivial and won't strike you
as a surprise at all.
And that is just
measure the support
that we give our employees
to help them succeed.
So just down the
road here, MIT was
one of the first
institutions, one
of the first academic
institutions,
I should say, actually measuring
the support that people got.
And given that they're
MIT, of course,
the data spoke for itself.
They literally used
the measuring rod
to measure people's office
spaces, the laboratories
they had available, the support
staff, research assistants,
resources, et cetera.
And they found what then later
was called performance support
bias which disadvantaged women.
Now that, of course,
again, is low-hanging fruit
that we can fix easily.
It gets a bit more
complicated when we think
about performance appraisals.
A first insight is that whenever
I work with organizations,
I typically find the bias is
not so much when organizations
evaluate past performance, which
many organizations literally
do on the x-axis.
But typically when organizations
also evaluate potential.
Because potential
by definition is
forward-looking
and by definition
is very hard to measure.
And that's where the Heidi bias,
the leadership bias, kicks in.
Because we cannot imagine
that women or other
under-represented groups who are
not typically in the leadership
positions would want to
climb up the career ladders.
So potential is
certainly something
that you should
be worried about.
And if you want to use
potential, what I typically
try to argue is that we should try
to define as precisely as we
possibly can what we
really mean by potential.
And force ourselves to quantify
it as much as we possibly can.
And then thirdly and
finally, we should
stop sharing self-evaluations
with our managers.
Many organizations
ask their employees
to evaluate themselves,
often on a rating scale,
let's say, from one to 10.
And then ask the employees
to share these evaluations
with their supervisors.
Now a little bit of
behavioral science
already suggests to us that this
will anchor the managers'
assessments.
Because any numbers
that I throw at you,
whether in a negotiation or
in performance appraisal,
will affect your judgments.
And if people differ in
their self-confidence,
that will affect the evaluations
that they end up with.
These are just some ideas of how
we can kind of fix and improve
how we do our talent management.
But let me go to some
bigger questions.
And this one might resonate
with you in particular.
Now I don't see too many
people who have recently
taken the SAT in this
room, but most of you
will have taken it at
some point and might
remember that part of the
SAT is a multiple choice
questionnaire.
Now think about the
following thought experiment.
If, in fact, people differ in
their willingness to take risk,
some people will be more willing
to guess or volunteer an answer
and others will be more
willing to skip the question.
So generally much,
much research suggests
that women tend to be
more risk averse than men.
And a former doctoral
student of mine,
[? Katie Baliga Kaufman ?],
in fact, took this to heart
and wanted to check whether that
might cost the skippers points
on the SAT because they
weren't willing to guess.
So she brought a large number
of subjects to the laboratory.
They participated in an SAT.
Only in the multiple
choice part of the SAT.
And then given that
this was the laboratory,
she could force everyone
to answer every question.
So she could take out
the skipping option,
and thereby measure what people
would have known had they
answered all the questions.
And what she found was that,
controlling for ability, women are
much more likely to skip
and men are more
likely to guess, which
costs women dearly on the SAT.
Now the happy
ending of the story
is that this month--
no, last month.
It's already April.
March 2016, as you probably
have read in the news,
the SAT has been redesigned.
And one of the new design
features is to de-bias the SAT.
To gender de-bias the SAT.
Really, in many ways, the
first time in 100 years.
The SAT now is trying to
provide a level playing field
and it could have done
many different things.
The College Board
ended up choosing
to completely take away the
penalties for wrong guesses.
In the old SAT, you got a
point for every right answer
and a quarter point
deduction for wrong answers.
So a little bit of math
suggested that if you had
five possible answers and could
exclude even one, then guessing
was the dominant strategy.
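The arithmetic behind that dominant strategy is quick to check. The scoring rule (+1 for a right answer, −1/4 for a wrong one, 0 for a skip) is from the talk; the little helper function is just an illustration.

```python
# Expected points from guessing on an old-style SAT question (+1 for a right
# answer, -1/4 for a wrong one), depending on how many of the 5 answer
# choices you cannot yet rule out. Skipping always yields exactly 0 points.

def expected_guess_value(choices_left, penalty=0.25):
    """Expected score of a uniform random guess among the remaining choices."""
    p_right = 1.0 / choices_left
    return p_right * 1.0 - (1.0 - p_right) * penalty

for n in (5, 4, 3, 2):
    print(n, expected_guess_value(n))
# A blind guess among all 5 choices breaks even with skipping (expected
# value 0); ruling out even one choice makes guessing strictly better.
```

So a test taker willing to guess loses nothing in expectation even with no knowledge, and gains whenever any choice can be eliminated, which is exactly why the penalty punished the risk averse rather than the ignorant.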
But if people differ
in their willingness
to take risk, of
course, the risk lovers
will be more likely to guess
than the risk avoiders.
So the new test takes the
penalty completely away.
At which point the
critics, of course, said,
oh my God, you're
enabling guessing now
and you're inviting wild
guessing by everyone.
At which point the
answer must be,
well, we have been
inviting guessing
by 50% of the population
for about 100 years.
And now we're making it
legal for everyone to do so.
So that's design.
Design can be powerful, can
really change how we do things
and level the playing
field quite dramatically.
Here's another example
that can be quite powerful
and that is literally
the power of role models.
Leaving the US for a moment,
because interestingly
enough and unbeknownst
to many people,
the longest running
quota experiment
has, in fact, been run in India.
Not in Norway or
some other countries
which recently have introduced
quotas on their corporate boards.
India amended its constitution
in 1993 with the provision
that a third of village
heads had to be female.
What was beautiful from
a research point of view,
the third was literally
picked out of a hat,
allowing researchers to
evaluate what difference
difference really makes.
And a number of papers have
been written in those 22 years,
roughly.
But the one that I want to
particularly focus on here
was recently
published in "Science"
suggesting that if a
village has been exposed
to two female leaders
in those 22 years,
mindsets are starting to change.
And parents and
girls are starting
to associate political
leadership with women.
That's pretty dramatic.
Again, suggesting that
seeing really is believing.
And that if we see
counter-stereotypical people
in those jobs, we can
actually imagine ourselves
in those jobs.
And it has quite
real implications.
So I'm sad to say, this is my
own institution, the Harvard
Kennedy School, that
only 11 years ago, we
realized that all the portraits
on our walls of leaders
were of men.
50% of our students are female.
And it wasn't our
conscious intention
to suggest to our
female students
that they were not
made to be leaders.
So we've changed that since.
This is Ellen Johnson Sirleaf,
the President of Liberia.
Also a graduate of the school.
We commissioned a portrait
of her, one of Abigail Adams,
and a number more, to change
the face, quite literally,
of the Kennedy
School and make it
a more inclusive environment.
Very serious research
suggests that even
what we see on our walls
can affect our beliefs.
And then, of course, there
is some really happy news.
That recently we've had a
female protagonist, Rey,
in "Star Wars."
And of course, that does
matter in what we think
is possible for ourselves.
Sadly and often, you probably
have read about this.
It didn't transpire everywhere.
Monopoly created a special
version of Monopoly
based on this particular
episode of "Star Wars"
and forgot to include
the female characters.
Now Monopoly, I should
say, has fixed this since,
but it's still remarkable
that something like this
is still possible.
Now let me move on
to our last topic.
How to design diversity,
which I already announced
is a really thorny topic.
On the one hand, much evidence
suggests that diverse groups
outperform homogeneous groups.
But the tricky part is that when
you ask people who participated
in a diverse team how well
they think their team performed
and how enjoyable the
task was, they will,
time and again, report that the
team probably didn't do so well
and it wasn't really fun.
Because diversity is
hard work and, of course,
because what we're
trying to achieve
by having diverse perspectives
represented in a group
is exactly what
makes us uncomfortable.
We want people to disagree.
We don't want people to
fall into groupthink
and just run in
the same direction
because somebody said
A was the right answer.
And that makes
diversity so hard.
So let me give you
kind of a few thoughts.
So some is really old news.
Yes, critical mass does matter.
It does matter whether
you're the only one of x.
The only woman, the only
Swiss, the only economist,
whatever it might be in a team.
It does matter.
You will be turned into
a token and you are also
much more likely to perceive
yourself as a token.
So having three of x in a team
or roughly 30% in many cases
is helpful.
But diversity is not
just a numbers game.
And I just want to end
by highlighting this.
It goes beyond numbers.
Numbers themselves are
very helpful and important,
but we have to think
about the decision rules
and rules of engagement
on our teams as well.
And here's one that was
a personal surprise to me
and that is political
correctness.
So I came from
Switzerland to the US.
And Germanic culture is
not a culture known for PC.
And I have to tell
you that initially, I
used all my stereotypes about
Americans thinking, oh God,
this is very superficial,
this whole PC thing.
Now it turns out,
really serious research
suggests it's actually working.
Why?
Let me show you a
picture here and ask you
the following question.
Where would you be more likely
to drop a piece of paper?
Probably on the dirty beach.
So what we see or what
we hear signals something
about the prevalent norms.
And the question
for us [INAUDIBLE]
really is, where
would we be more
likely to drop a dirty joke?
Not in a PC environment.
So norms can matter.
And how we present
information can matter.
And I have one of
my favorite slides
up here, which starts on
a happy note, that learning really
is possible, and a sad note: we
have been using this pyramid
for decades in this country
to help us make more
educated food choices.
Now here is the deep insight.
We do not eat off pyramids.
This is the new image.
It's a plate.
And I'm sure it resonates
with you that immediately, you
can see whether you eat too much
dairy, too little dairy, too
little protein, too much
protein, things of that sort.
All of us probably
have some reflection,
some reaction to what we see.
For me, it's the dairy.
I'm a dairy lover, and I'm
still disputing the fact
that this is so small.
But anyway.
So here's how we've
used this information
to reshape some of the norms
in the gender diversity space.
This is a cover from the UK.
Some of you might be familiar
that the UK decided in 2011
to increase gender diversity
on its corporate boards to 25%
by 2015 without the
introduction of quotas,
but instead by relying
on behavioral insights.
So we've worked a bit with
the various groups involved,
specifically for us, it was
Vince Cable, the Business
Secretary, in thinking about
how behavioral insights could
be helpful.
And this is the brochure that
they showed to us in 2013
when we were first approached,
showing that 17% of board
members were female.
Here's what concerned us.
Sometimes descriptive norms can
turn into prescriptive norms.
Not just describing
how the world is,
but suggesting how
the world should be.
And so we were nervous about
this depiction of reality
because it might suggest to us
that yes, the right thing to do
is to have a small fraction
of women on boards.
So we redesigned the
cover page and focused
instead on the
organizations which
already have diverse boards.
It's the same sample, the
[INAUDIBLE] 100
companies in the UK,
but what we were
focusing on was who
and what fraction of the
100 largest companies
are already diverse.
And at that time, that was 94%,
signaling that the thing to do
was to join the
club and be diverse.
So if you're interested
in learning more
about some of these
findings, we've
created an online platform,
the Gender Action Portal,
which is searchable,
where people can find out
more about what
works to close gender
gaps in economic opportunity,
but also in health, education,
and political participation.
[APPLAUSE]
I'm happy to take questions,
comments, thoughts.
MARTA: Hi.
IRIS BOHNET: Hi.
MARTA: My name is Marta.
I have a question
about, I just want
to hear your thoughts
on motherhood,
because I've been reading a lot
about how we can do a lot up
front to recruit more
women, but there's
a lot of bias associated with
women once they get to a point
where they're considering
having children.
I myself have not-- am
thinking I might not
want children, which is a
whole other conversation
about the reactions
I get for that.
But there's an immediate
assumption right
after a woman gets
married, I feel even here,
that their productivity
might decrease
because their
priorities will change.
And I'm trying to reconcile
that with women I hear saying,
in fact, their
priorities do change,
along with fathers
who say the same.
So I'd just love to hear
your thoughts on that.
IRIS BOHNET: Thank
you for the question.
In fact, I'm going to draw
on some of your own research
at Google.
So as you can tell, I am a
fan of Laszlo Bock's book,
"Work Rules!"
And when Google
realized that women
were more likely to leave than
men, they analyzed the data.
And the data told them
that it wasn't actually
women who were more likely to
quit, but it was young mothers.
And Google being
Google then could
increase its parental leave,
and in fact not just for mothers,
but also for fathers,
young fathers.
And now apparently doesn't
have a gender gap in likelihood
of leaving anymore.
So that's, I think,
the power of data
and the power of
something that is clearly
more than behavioral design.
And that is kind of taking into
consideration that people have
lives outside of their jobs.
And that we have to
accommodate those lives
and those needs to make sure
that the employees can also
thrive in our organizations.
So parental leave policies.
Again, this is beyond
behavioral design.
This is just now the
economist speaking based
on economic evidence on that.
Parental leave policies
are quite possibly
the most powerful
tool we can use
to decrease the 'motherhood
penalty' that you allude to.
Now what, of course,
it doesn't correct
for are the biases that we
have, the stereotypes, that
go with seeing, for
example, a pregnant woman.
And there is a lot of
research suggesting that there
is something like a 'motherhood
penalty.' And that yes,
mothers do earn
less than fathers.
And that, in fact,
the correlation
goes the other way around.
That men tend to
make more money when
they have children and women
tend to make less money when
they have children.
So I do think the
biases, the stereotypes
are absolutely alive and well.
And by becoming aware of them,
we won't solve the problem.
But in fact, I applaud Google.
I'm not just saying this here,
I say this in my book also.
I applaud Google for
going to the data
and really trying to understand
what is happening here and then
trying to fix what's
actually broken.
So generally, by the way, a
bigger answer to your question
is I am skeptical
that we will ever
be able to overcome our
biases as human beings
until we see
something different.
So for example, let me run the
following thought experiment
with you.
Maybe orchestras could,
the major orchestras
in this country, could
remove the curtains now.
Because now that we have almost
40% female musicians, maybe
we're starting to associate,
building on the India evidence,
we're starting to associate
playing music with women.
And maybe we don't need
the curtains anymore.
Now of course, I
might be wrong, right?
This is an experiment
yet to be run.
But the evidence that we have
so far from, and particularly
from India where numbers have
changed very quickly because
of quotas, makes me
kind of optimistic
that when we see the
change, eventually
our mindsets can change.
But I don't think that just
being aware, for example,
that there is a motherhood
penalty will make us perceive
mothers differently.
OK.
Oh no, one more question.
AUDIENCE: Can't-- can't let
only one question happen here.
About the resume bias.
That's pretty well-known by now.
Plenty of research
on that, especially
with both minorities and women.
But my understanding
is that there
may be similar bias at
the interview stage.
And I was reading an
article recently about this.
And apparently companies
now have sprung up to,
essentially what they
do is to do screening.
But the way they do
screening is they
give tests that are designed
by the hiring companies
and submit the test
results to the company
without any sort of identifying
information with them at all.
And have the companies
first select the candidates
that they will interview based
only on these test scores.
And then apparently,
according to what I've read,
this tends to also increase the
number of women who eventually
get hired because they
don't get screened out
at an individual
interview stage.
I wish I remembered more
of the details of this,
but I'm wondering what you know
or think about this part of it.
IRIS BOHNET: Thank
you for the question.
I discuss it at great
length in the book.
So absolutely.
The best predictor of
future performance,
and that's, again, not rocket
science, is a work sample test.
So when I hire a
research assistant,
that is not very hard for me.
I can give the
person a problem, ask
her to do some data analysis,
run some regressions,
write a short report, and
that's a very good predictor
of how well the person is
going to perform in the future.
So a work sample test
is the best predictor
of future performance.
Full stop.
One of the worst predictors
of future performance
are unstructured interviews.
Now in social science, that's
actually not news.
Social scientists
have been trying
to convince the world that
unstructured interviews are
bad predictors of future
performance for about 50 years.
So being a behavioral
scientist, I actually
don't believe that we'll ever
convince people to give up
the interviews, right?
In 50 years we haven't
succeeded, so either we
are bad communicators,
and that might be part of it.
Who knows?
But in any case, I think
we're clinging to interviews.
So I served as Academic
Dean of the Kennedy School
for a few years.
I could not imagine hiring
a new faculty member
without having talked to them.
So I'm totally guilty of that.
At least I used the
structured interview.
So that's why my
recommendation would
be to combine a work sample test
with a structured interview,
right?
Which at least is using
a structured process.
And structured
interviews are actually
better able to predict
future performance.
But there are companies now,
and it's super exciting, super
interesting to see, which do
away with resumes completely.
And instead just do
the work sample tests.
I mean, that's exactly right.
That's exactly right.
And then only the very
last stage of the process,
they actually see people
face-to-face and interview
the last 10 or the last five.
But using structured protocols.
So I think it's
very, very appealing
to think of the kinds of tests
that you could use to, in fact,
predict future performance.
Here's one thing where
the interview is helpful.
You're probably thinking
this right now, and
you're, of course, right.
In an interview, we're not just
evaluating a job candidate,
but we're also telling
something about our companies,
our organizations, right?
So it's also a bit of a
sales pitch on my end.
And that's OK as long as you're
done with your evaluations.
Right?
At the end, and I did
that too, I'm very
comfortable having
an unstructured part
and talk about
synchronized swimming
and talk about the
teaching load at Harvard
and our wonderful students,
or whatever else it might be.
That is different from me trying
to evaluate a job candidate.
Just one more
thing, and then I'm
going to call on
the next question.
But the best evidence,
if you need evidence
on interviews--
maybe not the best,
but one that drove
the point home to me
and was a bit of an eye
opener, comes out of Texas.
So a few years back,
the state of Texas
realized they didn't
have enough physicians.
And so what they did
is they went back
to their medical
schools and told them
that they have to increase
the intake of new students
by about a quarter.
Now one medical school in
Houston, which analyzed
the data, had already
admitted 150 students,
the top-ranked students.
And now in May, very late
in the academic year,
had to go back to the
rejected applicants
and admit 50 of the
initially rejected people.
In fact, the people they
had to admit at that point
were ranked between 700 and 800.
These are basically all
people whom nobody else wanted.
And they thought initially,
of course, catastrophe.
Anyway, they will
never make it here.
But, of course, it's turned out
into being a quasi-experiment,
allowing researchers to follow
the 150 top-ranked students
and the 50 lower-ranked
students over many years
to see whether that initial
evaluation system correlated
in any way with
how they performed
in medical school and
post-medical school.
I don't need to tell you, of course.
You know where I'm going.
Correlation, non-existent.
Doesn't matter whether you were
initially number 788 or number two,
you did equally well
in medical school
and post-medical school.
So something clearly was wrong
with their evaluation system.
So then they went
back to the evaluation
system, which, it
turns out, was heavily
based on interviews.
About a third of
the final score was
due to more
quantitative measures,
such as previous grades,
letters of recommendation,
work experience before you
went to medical school.
And 2/3 were based
on these interviews.
So if you take out
the interview score,
then at least you
get a little bit
of a correlation between
the quantitative scores
and future performance.
So the interview was
just making things worse.
In fact, the authors
of the research paper
concluded at the end
that a better mechanism
would be to just use a lottery
rather than interviews.
So that's just one study.
There's many of those.
But truly,
unstructured interviews
are kind of really
discredited in social science.
Yes, please.
AUDIENCE: Well,
I'd like to start
by saying that I do a lot
of interviewing myself.
And if we could talk Google
into dropping interviews,
I'd be thrilled.
So one of the
issues that we have
in hiring women for
engineering is the pipeline.
We're hiring right now, and
just the resumes coming in.
It's male, male, male,
male, male, female, male.
You know?
It's just very hard.
So I'm wondering
if you've thought
from a behavioral
psychology standpoint of how
do we address that?
Is there something
that we can do
to persuade all the
young women out there
that computers are fun, it's
a good job, it's well-paid?
Join us.
Come on in, the water's fine.
IRIS BOHNET: Yes.
Yeah, no, absolutely.
And pipeline issues are real.
And again, when I work
with organizations,
I quite literally look
at what is the pipeline
and when do we start
to see, for example,
under- or over-proportional
promotions, which would
then suggest that maybe
there's some bias going on.
But the pipeline
issues are real.
So last week I spoke at
two different conferences.
One was Women in STEM.
And I actually
chaired a panel.
And we had a number of very
interesting NGOs and start-ups
working with schools.
And the kind of things I learned
there were astonishing to me
as well.
That Algebra 2 is not taught
at most of our public schools
in the United States.
That most of our
teachers are not
equipped to teach Algebra 2.
So first of all,
many of our students
aren't even equipped
with the kind of tools
that companies like
Google, for example, need.
That doesn't explain the
gender bias yet, but just more
generally.
And so what they're
doing is many of them,
versions of providing
kind of help to teachers.
So one project was
Science from Scientists.
Just getting scientists
into schools,
helping teachers
teach science.
Many of them, focused
primarily or exclusively
on girls, also provide
mentoring, sponsorship,
and support systems.
So that's kind of one thing.
The other conference I just
spoke at was on Saturday.
It was Women in Math at Harvard.
And again, I learned
some interesting things.
And some are kind of really
ripe for some design changes.
Some of you might have
studied mathematics
and will remember that
it's super competitive.
To get into the best
schools, like Harvard or MIT,
apparently you
need to participate
in lots of competitions
already in high school.
And it turns out that much
research suggests that women
do not like competitions.
It's a bit similar
to willingness
to take a risk, self-confidence.
We tend to want our
work to be evaluated
for what it is and not
necessarily participate
in hyper competitive
environments.
So lots of research
suggesting that that
might actually
decrease the likelihood
that women will choose
those kinds of fields.
So I think it's a
combination of enabling
boys and girls to do the work.
And enabling our teachers to
teach those kinds of subjects,
of providing role models,
mentorships, same-sex teachers.
So lots of evidence
suggesting that same-sex
teachers, in particular in
counter-stereotypical subjects,
matter.
So this, of course, for girls
will be math and science.
For boys, reading and writing.
Which is equally serious.
So all of those kind
of we need to attack.
And then think about
kind of the designs
that people are in, such as
the hyper competition that
seems to be apparent
from mathematics,
and whether that's really
necessary for people
to succeed and become
good mathematicians.
Thank you.
AUDIENCE: Hi.
We were lucky enough to
have Geena Davis talk
at our headquarters
in Mountain View
a couple weeks back
about-- all right,
you know what I'm going to say.
Her take on how the media can
help make the world better by,
you know, you talked
about changing out
the portraits in the hallway.
And she's talking about,
can we change the things
that we see in movies and
TV to help solve this issue.
I was curious what your thoughts
on that, and whether you
can say anything about that.
IRIS BOHNET: Yeah.
So she and I were at the same
place in California in October,
spoke at the same place.
And she might have told
you this as well: some
research that I wasn't
actually aware of, showing
that when we represent
groups of people,
the typical group is
like one third or a quarter
female, and 2/3 or 3/4 male.
So yes, I completely
agree with her.
I think the evidence
on seeing is believing
is really overwhelming.
And kind of goes back to
the earlier question also.
The kinds of books that our
kids read, the kind of cartoons
that they watch.
And that's why the Rey example,
I mean, was half a joke
and half serious.
It does matter what we
see, what people wear,
how we represent
different characters,
whether on the screen or
in a book or on our walls.
So yes, I am completely,
completely aligned with her.
Yeah, OK.
OK, I just got the time.
I think we have to wrap up.
Thank you very much for coming.
