The Data on Diversity: It's Not Just About Being Fair (Google talk transcript, 53 minutes)
Farmer: Hi, I'm Sabrina Farmer, and I am an
engineering manager here at Google for Google
apps and site reliability. And a few years
ago, I became very active in Google's diversity
efforts and learning about all the various
programs that existed here at Google. And
I was pleasantly surprised, as I learned about why we were putting together these programs, that it was not just about doing something "nice"; it was really about the importance to our business and our company, and also to the field in which we're committed to innovating. One of the people
that I came to know during this process was
Beryl Nelson, who's an engineering manager
in knowledge. And she was the person who was
sharing a lot of the research and the background
with the executive team. And so I got the
pleasure to collaborate with her on multiple
occasions about putting together the information
about why we were doing this, why it was
important, and specifically how managers needed
to change the dynamic in their teams to be
more inclusive of all the different people
who want to contribute to technology in today's
world. So Beryl finally published her work
externally. It had been available internally at Google for some time, but the ACM just published it in the November issue. And she's here today
to do her presentation on the background about
why diversity is important to people who want
to be innovative, who want to drive and be
successful in this business. So it is my pleasure
to introduce Beryl Nelson.
Nelson: Thank you. So it's really
pleasant to see this in print because I submitted
it to ACM in December of 2012. So next time
you think that one of your code reviews is
taking a long time, just think about me, right.
It is better because of all of the peer reviews.
You can find the article in the November issue
of Communications of the ACM now. So let me
put this into presentation mode. And thank
you all for spending this time. So first I
want to tell you why I care about this, you
know. I was hired by Google five years ago
in India, and that's the first time I was
back in a Western society, really, since 1995
when I moved to Japan. And I saw that things
were a lot different than they were when I worked at Digital Equipment Corporation in the '80s until '95, when I moved to Japan. And it was different in a lot of ways, right. One is that there's a much narrower definition of what makes a successful computer scientist than before.
We used to have people from all ages and all
sorts of academic backgrounds. I also was
heavily influenced by living in Asia for 15
years and being a visible minority. I've
been working in academic settings that are
mostly mixed since I was ten years old. And
that is nothing compared to being a visible
minority in Japan and India. I really felt
it much more strongly. But the second thing
that happened was, in my first week at Google,
something appeared in my calendar, which said
something like, "Why women mean business."
And at the time, I thought, if something appears
in my calendar, it means they want you to
go. So I went. And it turned out to be a talk
by an author of a book. It was hosted by [Elisa
Noxin?], our Sydney office - was it Sydney?
No, she was in Singapore. But the point is
that it gave some information about the correlation
between successful companies financially and
having women in the executive team. And that
really opened my eyes to the fact, oh, it's
not just a conversation about being some kind
of numbers game that maybe we're missing 50%
of the population if we don't hire women,
or if we don't hire blacks. It's actually
really important to have a lot of different
kinds of points of view at the table if you
want to innovate. So I've read a whole lot
about this. It's not my day job. My day job
is, I work on Google search. It's just something
I'm interested in. And I feel like there are
a lot of people who are really interested
in this, and they just don't have the information
easily accessible. And that was my goal in
writing this paper. So the outline of this
talk is this, that teams whose members are
heterogeneous in meaningful ways have higher
potential for innovation. There are social science experiments, using quantitative, repeatable methods, that show bias, stereotype threat, and methods to combat them, right. And the effectiveness of diverse teams
depends on trusting and supportive cultures.
There are many things that can be done to
interfere with bias and stereotype threat
and so on. Publishing data is one of the important
things. And I'll go through some of the tools.
Cognitive illusions - they're actually not
covered in the paper. There was a space constraint.
But I would like to talk about them too. They
create another barrier to effective decision-making.
But there's hope. There are people who've
actually made substantive change, and there
are tools for this. And if we have time, I'll
give some positive examples.
So in terms of financial performance - oh,
and this experiment is about women. But I'd
like to say that there's just a lot more data
about women or about race than any other axis.
But we should really assume that almost all of these studies will apply to any kind of difference, okay? So in this particular
study, it's one of many that shows that companies
that have a higher percentage of women in
the management part - in the management part
of the organization do better financially.
In this particular study, they created a metric
of how far women were from the top of the company,
and they aggregated it over the entire company.
So companies with a higher metric are on the
left, in the darker color, and those - the
average is on the right. So they perform better: better return on equity, better earnings before interest and taxes, and better stock price growth. And
then a similar experiment was done, looking
at companies that were in the top quartile
for having women in the executive team compared
to zero women in the executive team. And you can see both return
on equity and average earnings before interest
and taxes are higher when there are more women.
A similar study was done, looking
at companies with at least three women on
the executive board. And again, this shows
a similar result. It's not just in the West.
In India, there was a study done that showed
that companies headed by women have an annual
growth that's much higher than the average.
The average was something like 21% to 35% over the course of three to five years, and women-led companies were 54% to 64% in the same period.
So it's not just in one geography. Multiple
geographies. Cedric Herring has shown similar
results for race. And it's true that these studies don't prove that the difference is the reason they're more effective.
But the experiments have been done in many
different geographies with different methodologies
with similar results. So we cannot ignore
this, right.
So similarly, diverse teams are better at
innovation. And the way you can think of it
is that if you have a routine task that's
repetitive, you really want communication
to be easy, and then you're more likely to
want people from the same background. It's
easy to communicate. But when you want to
innovate, you're better off having a team
that comes from multiple points of view so
you have more points of view at the table.
One of the results that's really interesting is that when you look at U.S. tech patents since 1990 by gender, more than 90% of them have male-only inventors. But the mixed-gender patents are cited 26% to 42% more than any of the single-gender patents. And that has
not been well explained. Another really interesting
study was done by some people at the Sloan
School. I think I misspelled the name here.
But I went to talk with Tom Malone about this
one time, and Anita Woolley is one of the
collaborators. What they did is, they did
a study of the collective intelligence of
teams. They took participants with a wide
range of ages, 18 to 60, put them in teams of three to five, and gave them a number of tasks to do,
one of which was just too large for a single
individual to perform. And what they found
was that the only predictor of team success
was whether there were women on the team.
So the way that you read this chart, the vertical axis is how well the team scored, and going from left to right, it's zero percent women to 100% women. Each of these bars represents a group of teams that scored similarly, with the blue circle marking the average for that group. And the red is the standard deviation.
The important thing to note is, all the high
scoring teams were near the middle. There
are some low scoring teams near the middle
also. But the teams at either end did not
score nearly as well as some of those in the
middle. So they tried to understand, why are
these teams scoring better? In fact, Tom Malone
told me, "Well, I didn't want to believe that
it has to be women to make the team smarter,"
right. And it was a surprise result.
That's not what they were looking for. So
they've come to believe that the reason that
these teams are smarter is that there's a
strong correlation between women and the social
intelligence that causes people to be able
to get all the ideas heard in the room. And
they have some more experiments planned, related
to this.
Now - so until now, I've been talking about
why it's important to have a diverse organization,
and now I want to talk about why it's hard
to make it work. It's really hard to make
it work. There are really significant barriers,
and these are just some of them. There's unconscious
bias. There's stereotype and identity threat.
There are cognitive illusions. There's exclusion
from critical social networks. So for example,
if you play golf, and somebody else plays
golf, you spend a lot of time together, and
you hear about things. But if you are not
really friends with another person you're
working with, you won't have the casual social
conversations that make you aware of things
that you would when you spend more time together.
A lack of role models and mentors, or unaware
managers. People have different career goals
and motivating factors. So all that you're
going to hear now is going to end up with
the conclusion that the organization's culture
is really important to determine whether it
can benefit from diversity. It has to be inclusive
and adopt practices that reinforce that inclusivity.
Now two of the popular books about this phenomenon
are called The Hidden Brain, by Shankar Vedantam,
and Blink, by Malcolm Gladwell. And they approach
the problem slightly differently. Blink talks
about judgments that you make quickly to good
or bad effect, and they study several of these
examples. So for example, there's a study
of a car salesperson. So there's one person
that's studied in the book that is really,
really good. Like, everyone who comes, practically,
will send a letter saying thank you to him.
And what he says he does is, he just doesn't
pay attention to how that person looks physically.
Like, when you came in this room, you saw
me, and you probably made some assumptions
about me based on how I look, probably within
the first few seconds. And he says, "I know
it's there. I totally ignore it. I try to
treat everyone the same, that I don't know
whether they're really going to buy a car."
But there was a guy named Ayres who
did an experiment in the 1990s where he sent
some young people to car dealerships in the
Chicago area. And he had white men, white
women, black women, and black men. They were all dressed similarly. They had
a script. They'd go in the dealership. They'd
ask for the cheapest car and start bargaining
until they wouldn't bargain anymore. And then
they came back, and they got some data. So
the average price given to white men was $725
above invoice - for starting price, right,
before the negotiation. For white women, it
was $935. For black women, it was $1,195.
For black men, it was $1,687. After negotiations,
the black men, on average, were told $1,551.
It's almost twice as much over invoice as
for white men at the starting position, before
bargaining. Now if you ask people, are you
biased, they'll say no. "Hell, no. I treat
everybody the same." But if you look at things
like the Obama election, right, the exit polls
say something different from what you actually
saw in the numbers. So why is that, right.
I'll get to a good study of that soon.
Another thing, which you saw in the talk announcement,
is, there's a disproportionately high number of very tall men among CEOs, right.
If you compare with the general population,
something like 52% are at least 6' tall compared
to 14% in the general population. And some large number, about a third, are 6'2" or more,
and that's less than 4% in the general population.
So when people talk about women or blacks,
they might say, "Oh, we have a pipeline issue."
But when it comes to short people, we don't
have a pipeline issue. They're everywhere,
right. And if you asked, well, do you
think that tall people are smarter? I mean,
there's only one person who ever argued with
me about that, and he was 6'2", right, an
engineer in one of our Poland offices, right.
We don't believe that we're biased that way,
but our actions show that we're making these
biased decisions, and we don't understand
why.
Now The Hidden Brain looks at this from a
different point of view, and that's - the
way we make decisions that have an opposite
effect to our intention. So for example, one
example that is given is, there was a study
done in which there are two job applicants
seen waiting in a waiting room, right. It's
a test situation. It's not a real job, right.
But what was found is that if an average weight
applicant is seen sitting next to an overweight
applicant, the average weight applicant is
rated lower in competence, hire-ability, everything.
It's like it's sticky, right. I mean, not only is there this bias against overweight people, but if you associate with that person, you're also associated with that bias. Another
really interesting study that many people
have cited is this one. You might have seen
it before. What happened was that Melissa
Bateson went to coffee stations at a workplace
in northern England, and they had an honor
system for paying for the milk you put into
your tea or coffee. So there was a sign that
said how much money to put in, right. So they
changed the sign. And what they did is put
photographs with the sign for the money, and
they changed them every week. And the amount
of money they collected each week, you can
see in this graph, right. So the weeks that
had flowers, like the second week, the fourth
week, and so on, they got less than half of
what they got in the weeks with eyes, see,
like the odd-numbered weeks. And in the weeks
where the eyes look straight at you, they
got the most money, right. And at the
end of the study they asked, "So did you notice
these pictures?" Nobody noticed. But their
behavior reflected that they were responding
to this. So the interpretation is that your
subconscious notices when you're being observed,
and you'll have a higher standard of ethics
when you feel like you're being observed.
So clearly, we're making lots of subconscious
decisions all the time, and it's really scary,
right, to know about this. So how can we understand
it better, right?
So this - some of you have heard of this,
a lot of you, probably, by now. There's a
study that started, I suppose, in a number
of universities at the same time. But one
of the most easily accessible is called Project
Implicit at Harvard. And what was observed
was that 100 years ago, if you asked, "Are
you biased against blacks or whatever," people
would say, "Of course, I am," right. But around
1990 or later, people would know that they
didn't really want to act that way, and they
wouldn't answer that way, but their behaviors
would still reflect the bias, right. So what
these people did was, initially, they took just a piece of paper, and they would show something like a flower or a bug and ask you to match it with something good or bad. And in the West, anyway, we think
of bugs as icky, bad things, right, although
in Japan, people like having these big beetles
as pets, you know. But there's even a temple
in Kyoto that's all covered with the wings
of a very interesting bug. But for
flowers, we associate them with really happy
occasions, you know, like weddings or birthdays
or things like that. So initially they did
a test like this just to show how long does
it take you to answer a question when you
put together flowers with something good and
bugs with something bad, versus the other
way around, right. And then the second test
that they developed had to do with race. They
showed photographs of people. There'd be a
white face or a black face, and then they
would put the face together with a word meaning
good or bad. And again they found that it
was much easier for people to answer questions
when the white was with good and black was
with bad than the other way around, right.
So now there's an online test, and they've
had thousands of people participate in this.
For any individual person, if you
take the test, you might not get exactly the
same answer two times, but over the course
of large numbers of experiments, it comes
out really, really reliably. And by measuring
your response time, they can find out many,
many different associations. They're called
implicit associations. So about 70% of people
prefer white people to African Americans.
But if you ask about your preference, only
15% will say they prefer whites. And that
also includes African Americans. They also
have this preference. 80% prefer young people.
Age is one of the most strongly measured biases
that there is, and that includes in Asia,
where people really revere and
respect the experience you have with age.
But still people test as preferring young
people. 50% prefer non-Arab names. 76% prefer
abled as opposed to disabled people. And so
on, right. I test as moderately biased against
women in science and technology, and that
is totally against my self-interest. I have
been working since 1981 in science and technology.
So there's a book called Blindspot, which
describes the development of this test and
some of the associations. One of the things
that's really interesting is, is there a predictive
link between the way you test on such an internet
test and the way that you behave? And there
isn't really conclusive work so far. There
was some research that indicated that in the
case of strong bias against race that there
was a predictive association in the way that
you would behave. But the research is not
very strong in that regard yet. The fact that
we can measure it at all, though, is a really
big step forward because it means we can examine
ourselves.
Now there's another concept, called stereotype
threat, and I'm sure many of you have heard
of this. So let me define what I'd like you
to think of it as. This graph indicates one
of the manifestations. So for example, if
you're talking about math scores for girls,
under high stereotype threat the girls will
test a lot lower than boys. Under low stereotype
threat, they'd be similar. So stereotype threat
is the phenomenon that has to do with the
fact that we have multiple identities. So
I am a woman. I am a mother. I am an MIT alumna.
I am a daughter. There are all sorts of characteristics
that I have. But when I'm put in a situation
where my identity is under threat, then that
identity becomes salient. So for example,
I'm an American. When I was in an auto rickshaw
late at night in Hyderabad with someone who
was fuming on and on about the Bush invasion
into Iraq, like, I was getting pretty nervous,
right. It's scary. And it became really salient
that I'm an American alone at night with this
angry man, right. Whenever you're in a situation
in which there is a stereotype
that you know about, that is disadvantageous
to you, and it's something that you care about,
it can affect your performance. Now Claude
Steele has written a wonderful book called
Whistling Vivaldi that talks about the history
of the research related to this. And the way
that he started was looking at the problem
of why college entrance scores were not predictive
of performance for black students. If you
looked at the slope, starting from entrance
score to where students would score after
four years of university, for Asian or white
students, it would go up at a similar slope.
But for blacks, it wasn't doing that. And
it wasn't just in one school. It was everywhere.
And it was even at a lot of schools that were
trying to have interventions. And people didn't
understand it. So the question is similar
to the Larry Summers question in which Larry
Summers asked, "Well, perhaps women just don't
have what it takes to excel at science," right.
He didn't survive his job after that. But
similarly, people were asking the question,
maybe there's some innate ability difference. And
what was beautiful about Claude Steele's experiments
was, he established that the same people under
different conditions can show opposite effects.
So let me tell you one of his experiments.
They took some black and white male college
students and had them play mini golf.
In one situation, they told them, "This is
a test of your athletic ability." And then
what happened is that the black men were fine,
and the white men needed three extra strokes.
So then they did it again. And they said,
"This is a test of your sports strategic intelligence."
Then the white men were fine, and the black
men had to do five extra strokes. So it showed
that the same population would have opposite
effects, depending simply on the way that the problem is posed and presented, basically. So some of the things
that are really important about this are that
it works to say things like, "This test does
not have a gender difference," if it's a credible
statement, right. That's a way to reduce stereotype
threat. It's also important to say that it
happens all the time, right. Like, you only have to know that the stereotype exists.
And it starts really young. Like five-year-old
girls, if they color a picture of a girl holding
a doll, they'll do less well in a math test
than if they color a picture of a child eating
with chopsticks, or a landscape. Five years
old, right. Another thing that's really interesting
is that positive stereotypes work, right.
So if you remind a woman who is Asian of the stereotype about Asians doing better in math, or remind her that she's a Stanford student, she'll do better. If you tell a white man something that reminds him of the Asian stereotype, he'll do worse on his math test. So, like, you can manipulate
the situation in some sense because of the
way that you phrase this.
Now another thing that's really important
to counter stereotype threat is critical mass.
One really good example was the Supreme Court
of the U.S. When the first woman was appointed,
Sandra Day O'Connor, she said the press hounded
her all the time. Every time there was a decision,
they would say, "What did the woman say?"
And it all went away when Ruth Bader Ginsburg
joined the court. Of course, that has nothing
to do with how biased it might be within the
court, the way they make the decisions. But
the external visibility had reduced her
from being a person to being a representative
of all women, right. And then when Sandra
Day O'Connor retired, all that attention went
on Ruth Bader Ginsburg, which surprised her.
So critical mass is really hard to define
for what will work best. But in general, somewhere
around 40% is where people start feeling comfortable.
And some experiments have been done, for example,
in setting up brochures for a flyer for a
company. And around 40% minorities in the
photographs is the point where minorities
start feeling safe and feeling like they're
willing to apply. It's also important for
an organization to say, "We value diversity."
An organization that says, "We're colorblind,"
people don't trust it. It's sort of like,
"Well, you have a deficiency, but we'll ignore
it," right. But saying that your difference
is important to us is a much stronger statement.
It makes a big difference.
It's also important to have a credible narrative.
And that means that you have a way of explaining
a story to someone that allows them to see
their own success. So for example, in university,
if you have a freshman, a first year student,
who can talk to someone who's in their third
or fourth year who looks like them, then they
can start to feel like "yes, this is something
I can do. There is somebody else who has faced
similar problems to what I face, and I'm able
to face them too." It's really important to
have an expandable view of intelligence, meaning
that knowledge is not just static. There are
things that we can learn. Self-affirmation
is an exercise that has been incredibly successful.
The experiment started with a
school in which students were given an exercise
for 15 minutes to write something about one
thing that they valued. In the control group,
they were supposed to write about something
that someone else might value but they didn't
particularly value. But students who wrote
about a value, two years later were still
scoring better in their classes. And this
has been repeated with women in college physics classes. It's amazing. It's a very small
intervention, and it has a positive effect.
It's important to improve critical feedback.
So people thought that it would have a positive
effect on students if you gave them positive
feedback, sort of like cheering them on, "You
can do it. I'll support you." It actually
doesn't help. What really helps is being very
engaged, saying, "I have very high standards.
But I'm going to work with you, and we're
going to make sure that you can reach this."
It's a non-intuitive result.
Having communities of support turned out to be a really important difference between Asian students, who also were often minorities, and black students at universities in the U.S., because Asian students would naturally, culturally, form study groups in which they would discuss problems with each other. And the African-Americans normally did not form these groups in which they'd study together. But
once those groups were formed, they had similar
positive effects. When someone feels really isolated - and I've felt it myself as an engineer at Google - sometimes you're just starting, and you feel like, "Oh, if I say something, I'll look dumb," right. And you kind of isolate
yourself, not wanting to show other people
that you conform to the stereotype. And sometimes
people even refuse to take advice which would
help them because they think taking the advice
would just confirm the fact that they're dumb,
whatever. But having a group of people in
which you can share your learnings really
increases the effectiveness of the groups.
And another really interesting result has
to do with intergroup conversations. So for
example, say that you have a white man taking a black history class in
the U.S. Often what will happen is, that white
man will be the only white person in the room.
Or maybe there'll be two. And then they feel
this kind of reverse minority effect, in which
they are afraid to say anything, because if
they say something that sounds like they're
insensitive or just unaware, that might put
them at one end of the spectrum, or they might
sound just really arrogant or uncaring on
the other end of the spectrum. And so it's
really difficult to find a balance where they
can actually participate and learn. So again,
Claude Steele studied this situation in an
interesting way. They asked participants to
put chairs out for an intergroup conversation
about race. But they didn't tell them that
putting the chairs out was the experiment.
And what they found was that if a white man
was putting the chairs out for a biracial
conversation, by default they would put their
chairs, subconsciously, a little bit farther
away from the others. What worked to change
that is to say, "We're going to help you have
a conversation in which you can have a meaningful
exchange about how to deal with these difficult
issues. Treat this as a learning opportunity."
And when they introduced the conversation
that way, they would put the chairs the same
distance that they would have in the other
standard case.
So one thing I forgot to mention is that you
can actually measure physiological effects
for stereotype threat. Like people's blood
pressure goes up. You can measure sweat. But
if you ask them, "Are you feeling under stress?"
they'll say, "No." Right, people are not really very self-aware about when they're
under these influences. And it's really important
to deal with them because it's something that's
constant, right. If you're the only one like
you in your group, it's not just a job interview
you're talking about. It's all the time, every
day, all your interactions, and finding a
situation in which you have a trusting environment,
it's really important to make the team effective.
Now there are a number of consequences of
stereotypes. One of them is these prescriptive
stereotypes. Now some of you have heard this
study. But there was a study done in which
a man and a woman were each asked to help
someone who had to fix a broken copier. And
afterwards, the man or the woman colleague
was rated on their effectiveness. If the man
helped out, then his score was higher than
the base score. If a woman helped out, it
stayed the same. If they didn't help, the
man's score stayed the same, but the woman's
score went down. So why is this? Because women
are expected to be helpful, altruistic. When
a woman doesn't conform to this prescriptive
stereotype, she pays a penalty. But if a man
does it, it's an extra, right. This is the
kind of bias we have to watch out for.
We have attribution error. When a man and
a woman together are performing a technical
task, if we report it as "this group had this accomplishment," then most people will
assume that the man has higher responsibility
for this. But if you separately report, "The
man did this, and the woman did this," all
that effect goes away.
There are differences in career advancement
strategies that work, and one of them is that
sponsorship is really more lacking for women
than for men. And it's a really important
factor to help them succeed. The difference between sponsorship and mentoring is that in mentoring, someone gives you advice.
In sponsorship, someone actually takes a risk
for you and puts your name out there for some
new activity. It's also been found that at
the high levels of an organization, changing
jobs can actually increase salary over the
long run for men, and it decreases it for
women. So the same strategies that work for
one situation don't work for everyone.
A small bias can have an unexpectedly large
effect in a large organization. A simulation
was done in which there were eight levels
of an organization, and it was assumed that
in each review cycle, some fixed percentage
of people would leave and that people would
get a randomly assigned performance score.
People with the highest performance score
would be promoted. And this was repeated until
the entire organization turned over. So when you start with the entire organization at 50/50 women and men and simulate with a small bias, by the end of the experiment, at the lowest levels it's 53% women, and at the highest level it's 35% women. With 5% bias, the effect is even stronger. And a 1% bias is so small you cannot even measure it directly, yet it still skews the organization. So that's one of the
reasons it's really important to pay attention
to bias in an organization. A small amount
can have a huge effect.
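The promotion-pyramid simulation described above can be sketched in a few dozen lines of Python. Everything concrete here - the level sizes, the 15% attrition rate, the score distribution, and the size of the bias bonus - is an illustrative assumption, not the original study's exact parameters; the point is only to show how a tiny per-review advantage compounds across levels.

```python
import random

def simulate(levels=(500, 350, 250, 150, 100, 75, 40, 10),
             attrition=0.15, bias=1.0, cycles=300, seed=42):
    """Sketch of a multi-level promotion simulation.

    Each review cycle: (1) a fixed fraction of people leave,
    (2) vacancies at each level are filled by the highest-scoring
    people from the level below, where scores are random but men
    get a small `bias` bonus, (3) the bottom level is refilled
    with 50/50 hires. Returns the time-averaged fraction of women
    at each level (bottom first), sampled after a burn-in period.
    """
    rng = random.Random(seed)
    # Start every level at roughly 50/50 women and men.
    org = [['F' if i % 2 else 'M' for i in range(n)] for n in levels]
    sums = [0.0] * len(levels)
    samples = 0
    for cycle in range(cycles):
        # 1. Attrition: each person leaves with probability `attrition`.
        for lvl in org:
            lvl[:] = [p for p in lvl if rng.random() > attrition]
        # 2. Promotion, top level first: random performance score
        #    (mean 50, sd 10) plus a small bonus for men.
        for i in range(len(levels) - 1, 0, -1):
            pool = org[i - 1]
            score = {j: rng.gauss(50, 10) + (bias if pool[j] == 'M' else 0)
                     for j in range(len(pool))}
            ranked = sorted(score, key=score.get, reverse=True)
            chosen = set(ranked[:levels[i] - len(org[i])])
            org[i].extend(pool[j] for j in chosen)
            org[i - 1] = [pool[j] for j in range(len(pool))
                          if j not in chosen]
        # 3. Hiring: refill the bottom level 50/50.
        org[0].extend('F' if rng.random() < 0.5 else 'M'
                      for _ in range(levels[0] - len(org[0])))
        # 4. After burn-in, record the share of women at each level.
        if cycle >= cycles // 2:
            samples += 1
            for i, lvl in enumerate(org):
                sums[i] += sum(p == 'F' for p in lvl) / len(lvl)
    return [s / samples for s in sums]
```

With a bias bonus of 1 point against a score standard deviation of 10 (roughly a 1%-of-variance effect), the women's share at the top level ends up well below 50% while the bottom level stays slightly above it, mirroring the pattern the talk describes.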
This is also a famous study. A couple years
ago, there was a study published in which
U.S. science faculty received resumes that
differed only in the name. They either received
John or Jennifer. And it was an application
for a lab manager. So they were asked to rate
them on their competence, their hire-ability,
if they were hired, whether they would mentor
them, the salary, and whether they were likeable.
There was a difference, a measurable difference,
between the way that men and women were rated
on every measure. And the men were offered
$4,000 or $5,000 a year more than the women.
The women faculty exhibited the same bias
that the men did. And the only
measure on which the women scored better was
likeability. So this bias actually exists.
Now, when I presented this in Poland, someone said,
"But what if it's conscious bias? What if
they..." but I think we don't want to go there.
It's possible, right. There was a case study
of a real person who was a successful VC.
And her bio was given to some MBA students,
some with "Heidi" and some with "Howard."
Everything else was identical. And the students
were asked, "Is the skill the same or different,
asked, "Who do you want to work with?" almost
and who would you like to work with?" So the
good news is that the skill level was assessed
similarly. But when
asked, "Who do you want to work with?" almost
everyone preferred Howard, like, 70%. "Heidi's
just too aggressive." So women at higher levels
actually face a penalty, right, that there's
a narrow band of behaviors that's acceptable.
By showing some of the leadership traits that
we associate with successful leaders, they're
going against some of the prescriptive stereotypes
for women. And so it becomes a difficult balancing
act.
In meetings - the studies are a little bit
more soft relating to meeting behaviors. But
in general, the studies agree that men tend
to talk more and interrupt more. Women are
more likely to be interrupted. They're more
likely to allow a successful interruption.
And if they interrupt someone, it's most likely
to be another woman, right. In formal settings,
men will gain the floor and keep it more.
Women take a lot less time in framing a question.
And this is important, because we want our
leaders to be able to frame the questions.
And by self-training women to not speak up,
or minorities, then they're less likely to
have the perceived skills needed to be an
effective leader.
In informal settings, women excelled at all
the collaborative skills and so on. I'm trying
to go faster 'cause I'm going to run out of
time. I always do.
So it also is really important to realize
that career needs are different, and what
we assume is not always the case. So this
represents a study of managers and why they
leave a job. And you see that men are almost
10% more likely than women to leave because
of a promotion or for a higher salary. Women
are something like 9% more likely to leave
because of a difficult manager than men. And
almost nobody leaves, really, just because
of childcare. What happens is that we perceive
it as a woman leaving because of a child-raising
issue. Oftentimes they'll just say that and
not really say what the real issue is. The
real issue is that if you feel like you don't
have a future, it's not worth the trouble
to try to keep a job and raise a family at
the same time. This happened to me. I was
told by my manager to quit when I was pregnant
with twins - just to be good to the twins,
you know. So that's part of why we moved to
Japan. I figured, well, why bother with this
manager, right. And similar results have been
found in Japan. And that was before Google
existed. It had nothing to do with Google,
okay?
So what if you could study the same person
being both a man and a woman? And it turns
out some people have done this, right. There
are studies of transgender people. And pretty consistently,
they find that when men become women, they
experience disadvantages that they didn't
have before. Initially, they might have their
incomes fall by 12% and over the long term
by 30%. When women become men, they get new
privileges. Their professional lives become
easier. Their incomes rise by 7 1/2% initially,
10% in the long term. But even better, there
are two biologists at Stanford that are transgender,
one from man to woman, and one from woman
to man. And they're willing to talk about
it. So the person who went from woman to man
is named Ben Barres. He was oblivious to sexism
in early life. He couldn't understand the
feminists. But afterwards, people would say,
"His work is much better than his sister's,"
people who didn't know it was the same person.
He says he gets better treatment in everyday
life. He can complete a whole sentence without
being interrupted. And now he starts remembering
all these things that happened to him as an
MIT student, things like the professor saying,
"Oh, your boyfriend must have done that assignment,"
things that at the time didn't seem that important,
right. The one who went from man to woman
is named Joan Roughgarden. As a man, she
made a controversial proposal - she works
in biological ecology, I think. And - but
she still had some authority, and people engaged.
But after becoming a woman, she made another.
And the establishment was totally livid. Nobody
engaged with her. And at workshops, people
even came and threatened her physically, like
jumping up on the stage or starting to shout
at her. And that had never happened as a man.
Like, she says, "Well, you feel like this
guy is going to come over and hit you. And
should I get ready," right. She's no longer
on any committee. Her salary has dropped.
She gets interrupted when talking. She can't
frame issues. And so it's
really stunning to hear such things.
Now I want to change gears a little bit. Daniel
Kahneman is not an economist. He's a social
psychologist. But he won the Nobel Prize for
his work in prospect theory in economics.
And that's a story of risk-taking when the
outcomes are known. So for example, if you
tell someone that "you have 10% chance of
dying if you take this shot," or if you tell
someone, "You have 90% chance of living if
you take this shot," then people make opposite
decisions - even experts. So people are very
influenced by the story. So I won't tell his
whole story, because we'll run out of time.
But he had a lot of really interesting anecdotes
that show why he was interested in this area.
And he's come up with this idea of a cognitive
illusion, in which the coherence of a story,
rather than data or truth, leads to an illusion
of confidence. So when people - basically,
people reject data when it doesn't match their
story. And that's a really important thing
to watch out for. He coined a term called
"What You See Is All There Is" too. For example,
people will infer anecdotally from their experience
that the world is like this. For example,
someone might think, "Well, look at this person
over there. She's a very successful engineer.
It means that any woman can be successful
if they're just like her," right. So there
is no problem. Or conversely, "My sister had
a terrible time. Therefore there's a terrible
environment out there, and nobody can succeed."
And either one is really inferring from insufficient
data. These cognitive illusions take greater
effect when it's something relating to your
own professional competence.
Now there are a few examples of very positive
change. One is the MIT science faculty study.
In the '90s, there were three senior women
faculty who believed that there was bias in
the way that lab space was allocated or grants.
They didn't know for sure, but they felt that
it was true. So they decided to talk to all
the senior women faculty in science, and it
wasn't hard 'cause there were only 15 of them.
And most of them agreed this was a problem.
They went to the dean. The dean decided that
they would do a study. The committee included
three men who were current or former department
chairs and one senior woman from each department.
They looked at all the data except salary,
and they found that indeed there was bias
in some of the departments. And it surprised
everyone because it did not look like what
they thought discrimination looked like. In
any individual case, they could argue this
person shouldn't get this, but when they
put it together, it described a pattern. And
they weren't even aware that they had it.
It really shocked the president of the university
when he figured out that all these women used
to think there was no problem, when they were
young faculty, but their experience changed
over time. What I think is really amazing
about this is, it's so public. You can just
search and find it. It was around 1997. You
know, this year a lot of tech companies
have started releasing data. This was done
back in 1997, and they made effective changes. There's
another report about 12 years later. Harvey
Mudd College is amazing. They went from 12%
women to about 40% women in computer science
in five years. And it wasn't rocket science
to make these changes. They took all the
first-year students who wanted to go to Grace Hopper,
which is a large women-in-computing conference.
And that provides role models, sort of like
"I can do this." They changed their introductory
computer science course to not require previous
programming experience and to become more
holistic. And they involved students in research
from their second year. And it not only increased
the number of women in computer science, it
increased the number of computer science graduates,
period, from around 25 to 30 per year to about
75 per year. This is amazing. It made the
whole field more attractive. They also increased
the percentage of women in engineering. Closed
captioning on YouTube was introduced because
there was a deaf engineer, Ken Harrenstien,
who decided we really needed to do this project.
So he worked together with Naomi Black, who's
an accessibility engineer. They're both still
here. But when it was released, it was a killer
feature that nobody had predicted. Nobody
had realized that this is a feature that would
be really critical to people even beyond the
hearing impaired community. So these are just
a few examples of cases when organizations
have made huge changes because of the composition
of the people that they have and the way that
they approach problems.
So this is a summary of some of the many things
that you should do if you want to have an
organization that really values diversity.
Data is probably the most important
thing. Sharing data makes it possible for
people to change, because otherwise you're
just working based on your intuition. Creating
an atmosphere of trust is important. Critical
mass is important. Another really nice experiment
was, women did math tests in groups of three
women or two women and a man or one woman
and two men. And their scores got progressively
worse as the number of women was reduced. Adopt
an expandable view of intelligence. Know yourself.
Find out what your own biases
are. Create an inclusive culture. Actively
value diversity. Sponsorship is important.
Support alternative career goals. And share
strategies that work. So I have a bunch of
resources listed here, some of the books.
My full book bibliography is on my blog, The
Rule of One. It's called Rule of One because
this is a phrase my aunt came up with. When
people infer from just a single instance,
then they come up with poor conclusions. I
call it "reasoning from insufficient data."
My aunt calls it the rule of one, right. So
my blog is called rule-of-one.blogspot.com.
And so the full book bibliography for the
article is there also. And this article is
in CACM. Well, thank you. You're troopers.
You stayed a long time. Thank you.
