>> It is my great pleasure
to welcome Tess Posner
to Microsoft today.
She is the CEO of AI4ALL,
an organization that is trying to increase
diversity, inclusion, and reliability
in AI systems in the world, and she's
going to be talking about
the wonderful work they
are doing in that space.
Before joining AI4ALL,
she was Managing Director of TechHire
at Opportunity@Work,
an organization that was launched
out of the White House to increase
diversity in the tech economy,
so very relevant.
Before that, she built and ran
a non-profit supporting
low-income people
to find work in
the digital economy.
Tess's work has been featured
in Business Insider,
TechCrunch and Fast Company and
she's going to be
talking about how we can
hopefully collaborate
together to make
AI much more inclusive
in the future.
>> Thank you so much.
Thank you all and I'm really so
excited to be here
today and about
both our existing partnership
with Microsoft as well as
new opportunities to really
grow and develop how
we work together.
To get started, none of
this is a secret to you,
but obviously artificial
intelligence is really one of
the most important technologies
of today and tomorrow.
It's been called
the new electricity,
the driver of the Fourth
Industrial Revolution,
and is clearly poised to make
incredible economic
impact on society,
as well as get embedded into
our daily lives and many
of the things that
Microsoft is working on.
Beyond that incredible economic impact,
there's also potential for
AI to create breakthroughs
and moonshots and solve some
of the world's most
pressing problems.
This actually came out of
The Future Computed, a report
published by Microsoft earlier this year,
which talks about some of
the ways that AI can address
water supply issues,
agriculture, climate change,
and biodiversity, and really
create potential moonshots to
solve some of these problems.
There's tremendous potential with
this exciting technology, which
has been around for a while, but
because of new developments
we're obviously in
an exciting AI moment
right now, and it's getting talked
about a lot in the media as well.
However, given that this
is such an important technology,
there are some key risks to
the responsible and
ethical development of AI,
areas that I know Microsoft
is really focused
on and has been leading in.
These include bias in
algorithms and
machine learning systems;
the diversity and
talent crisis in AI,
both in industry and
in research and academia;
education; and really the
lack of access to and
knowledge about AI.
I want to talk a little bit
more about these issues.
The first is around
the diversity crisis.
So, we have heard a lot about
the diversity crisis in
the tech sector more broadly.
In AI, as a sub-field
of computer science,
it's also pretty extreme,
and we call it an actual crisis.
So, some of the data
shows that only 13 percent of
AI and machine learning
companies are run by women,
and the numbers are even more
problematic and, frankly, scary.
If you look at research,
less than four percent of
tenure-track engineering faculty
are minorities.
We also see and hear a lot of
this working in
different communities,
talking to people and trying
to expose them to AI,
and just hearing what
their perceptions are and
what they actually know about it.
Some of the perceptions
out there are that it's
the Terminator,
it's coming to get you,
it's the robot apocalypse,
it's scary, it's dangerous
because it's going
to take over and
automate all of our jobs,
and what's going to happen then?
There are wildly different
predictions about how
it's going to affect
the economy and the workforce,
and lastly, there's the perception
that it's exclusive, that you need
a PhD to get involved, and
really that it's not for me.
This is what we hear from a lot
of our students at AI4All.
This is not for me.
So, this perception really
is a problem, because it
prevents people from getting
involved in this exciting field
and it shapes our imaginations
about what is possible
with the technology.
We also see bias creeping into
more and more decisions that
affect people's lives.
Some of you may be familiar with
the research that came
out several years ago
about software called
COMPAS, which is used in
the criminal justice system
to help judges make sentencing
and parole decisions.
This software was studied:
10,000 cases where it was
used in Florida were
actually looked at, and
bias showed up in these
risk-score algorithms,
which were shown to
be more favorable
towards white individuals
and to give higher risk scores
to African-Americans, even those
with lesser criminal histories.
So obviously, this is incredibly
problematic, given how
these systems are getting
embedded, already being
used out in the wild, and
affecting people's lives
in such extreme ways.
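The kind of disparity described above is the sort of thing a simple audit can surface. Here's a minimal sketch of that idea, comparing false positive rates (people flagged high-risk who did not reoffend) across two groups; the records and group names are invented for illustration, not the actual COMPAS study data:

```python
# Hypothetical fairness-audit sketch: compare false positive rates of a
# risk score across two groups. All data below is invented illustration
# data, not the real COMPAS dataset.
records = [
    # (group, flagged_high_risk, reoffended)
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, False), ("B", False, False), ("B", False, True), ("B", False, False),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were flagged high-risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives) if negatives else 0.0

for g in ("A", "B"):
    print(g, round(false_positive_rate(records, g), 2))
# Prints: A 0.67 / B 0.33 -- unequal error rates across groups,
# which is exactly the kind of disparity the study found.
```

Equal overall accuracy can still hide this kind of gap, which is why audits look at error rates per group rather than one aggregate number.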
There are other bias examples,
and I'm sure you all are
familiar with many more,
but this is one that obviously
came out of [inaudible]
research from Microsoft,
looking at facial recognition systems
and showing wildly
different levels of
accuracy when you look at
subgroups, including
gender and race.
Obviously, these
systems are getting
embedded into hiring processes,
into the criminal justice
system, into the TSA.
So, looking at accuracy
and bias is critical as
these systems are
getting more and
more prevalent in
our daily lives.
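Breaking accuracy down by subgroup, as those facial-recognition audits did, is straightforward to sketch. The subgroup names, labels, and predictions below are invented for illustration:

```python
# Sketch of a per-subgroup accuracy breakdown for a classifier, in the
# spirit of the facial-recognition audits mentioned above. All samples
# here are invented illustration data.
from collections import defaultdict

samples = [
    # (subgroup, true_label, predicted_label)
    ("lighter-skinned men", "m", "m"), ("lighter-skinned men", "m", "m"),
    ("darker-skinned women", "f", "m"), ("darker-skinned women", "f", "f"),
]

def accuracy_by_subgroup(samples):
    """Accuracy computed separately for each subgroup."""
    totals, correct = defaultdict(int), defaultdict(int)
    for subgroup, truth, pred in samples:
        totals[subgroup] += 1
        correct[subgroup] += (truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

print(accuracy_by_subgroup(samples))
# Prints: {'lighter-skinned men': 1.0, 'darker-skinned women': 0.5}
```

A single aggregate accuracy over these samples would be 75 percent and would hide the gap entirely; splitting by subgroup is what makes the disparity visible.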
So, this isn't just a problem for
the individuals in the subgroups
that are actually affected.
It's also a problem for everyone,
because these systems are
so widely used.
So, a recent Gallup
poll showed that
85 percent of Americans
use AI every day.
This is pretty impressive
and it's just going to grow.
So, how do we actually
address this?
There are a lot of things
that I'm really excited to learn
more about from all of you,
things that Microsoft is leading on,
including better systems
for how we test,
monitor, and audit AI
systems, both before deployment
and in the wild, so we can track
them and take action to
actually mitigate some of these risks.
There are also fairness and ethics standards,
and transparency and explainability,
and I know some of
you in the room are
actually working on this.
And of course, lastly, there is
diversity and inclusion.
All of these we at AI4ALL
consider really critical and
important to addressing
and mitigating these risks,
but we believe that one of
the most important things
is to actually
address diversity
and inclusion in AI.
So that's why we started AI4ALL,
which is a non-profit
organization founded to increase
diversity and inclusion
in AI development,
policy, and research.
So, we're not just looking at
AI development but also
at all the different parts that
this technology will touch,
including the education system
and government systems,
and at everyone needing
not just AI literacy but
more diversity in all of
these areas that are
going to be shaping
such important decisions.
So, I want to share
a little bit about our work.
So, why did we start AI4ALL?
There are real barriers to
under-represented populations
actually getting into the AI field
and touching the different ways that
this technology will impact
all these different areas,
from an interdisciplinary
perspective.
First, the current
homogeneous culture in AI, as well
as some of the perception issues
that we talked about earlier,
doesn't necessarily attract
new people into
the field and can be a barrier.
There's also a lack of exposure
to technical concepts,
and certainly to AI.
So, AI is currently not taught in
high schools and it's not part
of computer science education.
In fact, only 40 percent of
high schools even offer
computer science education,
not to mention that they're not
including new and emerging
technologies like AI.
There are also few relatable
role models.
A lot of the reason why
we choose different careers,
and why we want to even
go into these areas,
is because of who we
meet and get exposure to:
what they do, why they
do it, how they do it.
If we don't see individuals
that we can relate to,
it's much harder to picture
yourself going into that field.
A lot of these findings are
actually based on
a Microsoft research study
that came out about
barriers to getting
into the computer science field.
So, we actually built
AI4ALL to address each of
these barriers and
ensure that people
from all backgrounds are
able to get into the field.
There's also the idea of
peer community and
support, which is often lacking.
If that's not present at your
high school, for example,
it can be much more challenging
to feel like you can join
a computer science club
that's 100 percent male
if you're a young woman,
and that's what we
hear from a lot
of our students as well.
So, we address this with
three core initiatives
that are really
trying not only to
bring more people into
the field to begin with,
but also to support their
pathway and long-term success.
So, our first initiative is
a summer camp program that
exposes diverse
high school students to
artificial intelligence, and
we partner with some of
the top research organizations,
including Stanford University,
Princeton, and UC Berkeley.
We actually bring students into
these research labs over
the summer for
a two-to-three-week camp,
and they learn directly
from AI faculty and
researchers, get
technical exposure to AI,
get connected with
role models and mentors,
and then also get to
work on AI projects.
The second piece is
increasing awareness of and
access to AI, and this
really gets back to
the perception issues that
we talked about earlier:
AI literacy is really
becoming something
that needs to be
basic for almost everyone
participating in
our society in the future.
Then lastly, research.
I want to talk a little bit more
about each of these pieces.
The first is our
summer camp programs,
which I talked
a little bit about before.
So, last year we had
summer camps operating at
Stanford and UC Berkeley.
This summer, we're tripling
the number of schools
that we're at,
and as you can see, each of
these schools focuses on
a slightly different
population as
well as a different
geographic area.
I wanted to share with
each of you a little bit
more about the
curriculum that we've
developed in partnership with
these universities
as well as our team.
So the first piece is
technical exposure.
Our camps are pretty rigorous,
because we want to give students
hands-on exposure to some of
the key concepts.
Importantly, ethics
and the societal
implications of AI are
really a lens that runs throughout
all of the pieces that we
share with students. Even with
the technical concepts,
we're always talking about:
what kind of bias could
potentially creep in here,
what are some of
the mitigation strategies,
what implications will this
have, both positive and
negative, in the world,
and what are some
of the frameworks
that are getting developed?
Although there's not really a
consistent framework
out there right now,
we are exposing our students to
some of these frameworks
as they're being developed.
We're also giving them the
opportunity to get involved.
So on Monday, we're
going to be launching
the first-ever high school
committee with the IEEE's
AI and ethics of
autonomous systems initiative.
So, young people are
actually going to get
a chance to be involved in
this conversation,
shaping what some of
these standards are and getting
their perspectives in the mix.
Since this generation is really
going to be the one
most impacted by the
decisions that we make today,
it's critical that we involve
them at these early stages.
The second piece is soft skills
and career exploration.
So, again, this goes back to
the need for role models
and mentors.
We bring in individuals
from our industry partners and
all different parts
of AI, so students can really
understand what kinds of
careers are out there and
what they actually
look like day to day,
and we also develop that peer
and mentorship community
that's so important for students.
The last piece, which
is really exciting,
and which I think we've
found to be one of
the most successful ways of
engaging diverse groups
of young people in AI,
is to actually create
AI-for-good projects.
So, not only are students getting
hands-on practice with
some of the skills that
they've developed earlier on,
but they're also seeing
the exciting applications of
this technology for
good, to solve
some of the big challenges
in the world today.
We've seen that this has not
only captured
our students' imaginations,
but that they've gone on to work
on AI projects afterwards,
start AI clubs at their schools,
and even win awards.
>> So, just [inaudible].
>> Yes.
>> For the summer camps
happening at these universities,
what does the program look like?
Do some of them focus
on research and some
of them more on
the curriculum for learning
about these topics, or is it
a mix, or does it range from
university to university?
>> Yeah. That's a great question.
So, our summer camp is
structured in a way that we have
a core curriculum that focuses on
those technical concepts
that we've mentioned.
Each camp sticks to
that and tries to keep
that consistent across.
Then, we obviously have those
in-person activities
that might vary, in
terms of which individuals
are on the career
panels and all of that,
and each university
is responsible for
developing AI research projects
in partnership with us.
But those really are based on
some of the research
being done in
those labs, as well as what
the faculty and PhD students
are really passionate about.
So, it's very much
taking advantage and
leveraging the assets that are at
those universities for
the research projects.
>> How long is the camp?
Sorry to [inaudible]
>> Oh, no problem. They are
two to three weeks long,
depending on each site, and
they're pretty intensive.
So, it's all day
and then they do
fun activities at night.
So, one of the things
that we learned
this past year, in
expanding and tripling
the number of camps,
is that there's
incredibly high demand
for these types of skills.
One of our programs, which has
only 20 slots, got 900 applications.
So, we were like, "Wow,
this is amazing."
We also get a lot of interest
from all over the world,
from places on basically
every continent, asking
how to embed AI curriculum
into what they're already doing.
So, we decided that
we want to open
source our curriculum
and really try to
scale the learnings that
we've had and make this
available worldwide and we see
that there's tremendous
scale potential here.
This is very early stage,
and we look forward to
launching it later this year.
The third piece of what we do,
that's very exciting is
taking graduates from
those first two programs
and connecting them
with mentors to work on
more rigorous and
longer-term AI projects.
Obviously, having portfolios and
actual demonstrated work product
is so critical in
both your education and
your career these days.
So, we're really trying to
develop students' portfolios.
So, we just finished
a cohort two weeks ago,
where we had students working
with mentors from IBM,
Pandora, and OpenAI
on research projects,
and here are some of them:
using AI to detect wildfires,
and ranking the urgency of
ambulance calls.
The student on the right,
working with a mentor from
OpenAI, built a math tutor
that is adaptive and
personalized, which she's
actually planning to launch.
So, we see incredible
results when we pair
our young people
with their passions
and connect them to that support
and that mentorship.
Oh, yes please.
>> I am wondering for
the mentorship program,
what is the level
of the students?
Are they undergraduate or
graduate students, high school?
>> High school students.
>> High school students.
>> Yes. So, they're all
10th through 12th graders.
So, we talked a little bit about
AI4ALL's work and why we
think diversity is so
critical in mitigating
the negative impacts like
bias and the other ethical risks
in artificial intelligence.
But diversity is really
also critical for
maximizing the potential for
AI breakthroughs and moonshots.
So, if we increase
diversity in AI,
especially at these early stages,
we're going to see
more innovative products,
a more diverse set of
problems actually addressed,
as well as the network
effect of bringing
more people into the field.
I want to share several case
studies that showcase this.
So, we know that
diversity is just generally
good for business.
A recent Intel report
showed that if we
increase diversity
in the tech economy,
it could add $500 billion
to national GDP.
So, it's a good
business proposition.
Additionally, we'll see greater
innovation from
increasing diversity.
A recent study from
Raj Chetty showed that if we
increase early exposure
to innovation and
the innovation economy for
women, minorities, and
low-income students,
the rate of innovation in
America would quadruple.
This is no small thing.
So, it's really important to
have that early exposure,
so that we can maximize
innovation in our economy.
The second thing is that we might
miss out on untapped talent that
might develop the next big thing,
in AI or otherwise.
I'll share the example of Amy.
So, Amy graduated from
our Stanford AI4ALL
program in 2015.
She was so passionate about AI,
which she had never been
exposed to before,
that she kept
on with her research.
How many of you have heard of
the NIPS Conference?
Okay, everyone.
So, Amy won a Best Paper Award,
going up against hundreds of
adults, for her research on
improving surgical technique
using machine learning.
This is pretty amazing for
a student who's
still in high school.
So, we don't want to miss out on
talent like Amy that might
develop the next big thing.
If Amy had never heard of AI
or never been exposed to it,
she never would have
had that opportunity.
We also see that if we increase
inclusion and diversity,
a more diverse and
more creative set of
problems will actually
be addressed.
So, I want to share
the story of Stephanie.
So, Stephanie is from Salinas,
California; I'm not sure if
any of you are
familiar with Salinas.
It's the area where
she grew up, as
the daughter of farm workers in
a very low-income family,
a first-generation
Mexican-American.
Not only did she start
an AI club at her high school,
which is pretty impressive,
and she's teaching
younger students about AI.
She's doing research
to track the flow of
contaminated water using machine
learning with her
mentor from Accenture.
This is something
that she's still in
the early stages of
but she wants to bring
this back to her community
because it's something
that's faced
her both personally and directly.
So, here's Bekah.
Bekah went through
our AI4ALL program,
and she was really torn
between her passion for
social justice and
humanitarian work and
her interest in computer science.
But through exposure to
all the different ways that
AI can be applied for good
and to benefit humanity,
she became really,
really passionate
about combining
those two pieces together.
So I'll just share a quote:
"Seeing the humanitarian
applications of AI
at Stanford AI4ALL,
I realized that I didn't
have to sacrifice
fundamental aspects
of my identity
to pursue computer science.
I love computer science
and I see it as
a tool to utilize in art,
music, and political advocacy.
STEM can be really powerful
when applied to other fields."
So, we see that if we
increase diversity,
we're not only going to get
more creative uses of AI and
discover untapped talent that
we wouldn't have found before,
but also new interdisciplinary
ways to apply AI to
different fields,
really tapping into
people's unique passions
and interests.
We also see that when we
increase diversity and inclusion,
it has a network effect that
we've seen be
really, really powerful.
So, most of our students leave
AI4ALL programs wanting to pursue
AI and feeling like they're
part of this
incredible community.
What that's led to is that
they go on to educate
their peers as well as their
community, and we track this.
So, for every one student
that we educate,
they go on to educate 11 more.
So, when you bring
more people into the field,
it has this incredible
multiplier effect.
They bring their friends,
who bring their friends,
and just a small investment
in one person can be really
powerful in making
an impact.
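As a back-of-the-envelope illustration of that multiplier, here's a tiny sketch using the 1-educates-11 figure from above (it assumes the ratio holds and that the people reached don't overlap):

```python
# Back-of-the-envelope reach estimate based on the talk's figure that
# every directly-educated student goes on to educate 11 more people.
# Assumes the ratio holds and audiences don't overlap.
def total_reach(direct_students, multiplier=11):
    """People reached: the direct students plus everyone they educate."""
    return direct_students + direct_students * multiplier

print(total_reach(100))  # 100 direct students -> 1200 people reached
```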
We also see the same thing
with organizations and companies.
I think Microsoft is
certainly a great example
of this: when
one company steps up
and invests in a really
principled approach
to AI development,
looking at bias and
mitigating it
at these earliest stages,
as well as investing in
the next generation of talent,
it brings other
leaders into the field,
shows the way, and
influences other companies.
So, these are a lot of
our AI4ALL partners that we
currently work with and
the list is growing.
So, I think it's really
important not just
to have inclusion and
diversity bring
individuals into the field,
but also to show
other companies,
organizations, and institutions
that they really need
to step up, be
leaders in this way, and bring
their colleagues and peers along.
So, I want to talk a little bit
about how you all can
get involved with us.
So, we're currently
really excited
about our partnership
with Microsoft.
There are a couple of specific
ways, which I've set out
here, that we'd love to
continue deepening this work
that we're doing together.
But I also want to hear from
you all if there
are other things.
The first is recruiting.
So, we are obviously graduating
an amazing set of the next
generation of AI leaders.
So, if you have internships,
if you have an
apprenticeship program,
if you're hiring, consider
AI4ALL a source of
talent for these roles.
Secondly, we have many,
many volunteering opportunities
for folks to serve
as mentors for this next
generation of talent.
A mentor can absolutely
change someone's life and
change someone's
trajectory and we have
both one-day
opportunities as well as
these longer-term
opportunities to support
students on an actual
research project.
Share feedback; again, that's
such an important part.
We consider this a dialogue and
a two-way street:
we want to learn
what you all are finding and
the best practices
for mitigating bias.
What are some of
the challenges and
nuances that you're
seeing in your work and
how can we leverage
those learnings to prepare
the next generation to do
this well and to make
the right decisions?
So, we want to hear from you
and work closely with you on that.
I know you all also do
a lot of outreach and
education yourselves, and some
of you in the room
are doing that.
I want to hear more:
what's working, and
how can we partner and
collaborate on those efforts?
This problem is
obviously not something
that one organization or
one company can solve,
so it's really important
for us to come
together and really
solve this together.
So with that, if you are
interested in volunteering,
I have a link right here
that makes it really
easy to sign up;
you can let us know if
you're interested and
in what capacity, and there's
also our contact information.
But I'd love to hear from
the room if you have
questions either
about our work or
ideas for ways that we
can work together to
both support the next generation
of diverse AI leaders,
but also ensure
a responsible and ethical use
and deployment of
this important technology.
So, any questions,
thoughts, feedback? Yes.
>> Okay. I apologize,
I was a few minutes late
coming from another meeting.
>> No problem.
>> Your slide showed the number
of students touched,
which is extremely impressive.
How are you getting
that volume of touch,
when you say the programs
only accept 20 students?
>> Yeah, that's a great question.
>> So, where does
that number come from?
>> Yeah. So, right now,
our main programs are at
these universities and have
between 20 and 30 slots each.
I think what you're
referring to is
our open-sourcing project,
which is really our scaling plan.
So, we're getting so much demand
and so much interest.
We believe that AI tools,
education, and resources should
be widely accessible,
and that's not getting
addressed right now,
at least in the US
at the K-12 level.
So, we're taking the curriculum
and learnings that we've built for
our summer camps and open
sourcing them to make them
more widely available.
We have the goal of
reaching millions
through that model,
by working with
existing partners, schools,
and companies who are
already doing this work.
Thank you, good question.
>> So, in your summer
camp curriculum,
what are the kinds
of applications
you have the kids working on?
>> Yeah. So, we
focus on robotics,
computer vision,
natural language processing,
and each of the
universities develops
a set of projects that are
focused on those applications
but are all around AI for good.
So, for example, analyzing
social media data after
a natural disaster:
we're looking at
a Hurricane Sandy
dataset to see how
you would better funnel
resources in response,
to shorten response times and
make them more accurate.
Or making hospitals safer
using computer vision,
or how the
self-driving car revolution
affects mobility for
ageing populations.
So, those are some
of them and we have
about four to five
projects at each of
our universities
that are co-designed
with those researchers and
faculty. Great question.
>> Yeah, I love that,
because when I look at
most curricula,
especially for kids,
you see sort of
the same things come
up again and again, like robots.
For a kid like
the one you showed,
who was really interested
in social justice,
that's just not
going to do it.
So, I love the approach
you're taking.
>> Thank you. Yes?
>> What about datasets for these?
Does the program
have datasets at
all the universities, or are
researchers at the
different institutions
providing the datasets?
>> Yeah. Great question.
It's a combination.
Obviously getting good datasets
is one of the key
challenges, right?
So, that's part of what we
try to teach our students:
how do you find
the right dataset?
How do you analyze it?
What about its quality, does
the size matter,
where do you host it, all of that.
So, we certainly
provide some of them,
but we also have the students
try to go out and
identify where they can find
this kind of open-source data.
We also leverage tools like
Kaggle and other
open-source tools
that are out there,
and in some cases
the universities
will provide those datasets.
It's a great question. Yes?
>> Do you also have
opportunities for people who have
already studied in another field
and are looking
to move into this one?
Because these are, I think,
mostly focused on students,
high school students who are
interested in getting
into it and haven't had
their education
shaped in a particular way yet.
But do you also have
opportunities for
people switching fields?
>> That's a great question.
Currently, we don't.
Our summer camps are
mainly for high schoolers,
but our open-source
curriculum will
be available and open
for anyone to use,
because obviously
there is a gap in
basic AI training at all levels.
So, it's a good question.
Other questions? Yes?
>> I think there's a difference
between just creating and adding
an online summer course and the
in-person approach of the summer schools.
So, I feel like
the impact you're having,
you can't actually achieve it
if the students are
sitting at home and just
watching the material
in an online way.
It seems like there's
a human-touch part:
meeting these people,
working with them, and
getting their hands dirty.
So, in terms of scaling that,
open sourcing the content is
one part of it, but I
guess you still need
partners in different parts
of the country
and the world who are going to
physically host
these camps and be the mentors.
I'm trying to understand
how much of it is going
to be the open-source content,
which scales, and
how much of it is
still going to be
the organizational, hands-on,
one-to-one mentoring
kind of style.
>> Absolutely, yeah.
It's a good
question, and we find
that obviously a lot of
the barriers for
people who are
currently excluded from
these fields have to do
with a lack of role models,
mentors, and those pieces,
and we don't want to lose
that in these new models.
So, in terms of
our open-sourced curriculum,
it's not going to be just a MOOC.
There will be some materials
that are openly available,
but we're also working closely
with partners who are already
teaching, let's say,
CS classes or running
after-school STEM programs,
who can actually provide
that in-person component.
So, absolutely we
don't want to miss
out on that because
the peer community,
that human touch as you said,
is really a critical part of
why our programs have
been successful.
We'll also be offering,
again, that mentorship
and fellowship
opportunity to students who
want to continue their learning.
We also see that
the open-source piece
will really give people
exposure to what is
available in this field,
and then they can take
that knowledge further.
I think there's a big gap
right now in terms of understanding
what a career in AI even is.
So, the work that you all do
is absolutely so interesting,
and if more people could
understand like what are
the different pathways,
what are the things
that you can work on,
I think that will inspire a lot
of people to pursue this.
So, we're also going to have
a career exploration component,
an element where
people know: hey,
if I'm really interested in
research, here's what
that actually looks like,
and I can go down this pathway
in this direction.
So, I think that will give
them specific steps for how
to move on to the next phase
of their learning.
Good question. Other thoughts?
>> I think, in terms of
the other side of careers in AI,
it seems like the program covers
the education part and
the research part with academic partners,
but it seems to me
like there is a gap:
the industrial part
is missing here,
because AI is, or will become,
more common practice
in industry,
with the engineering segment as
well as data scientists
and analysts.
These parts are
missing from this program.
So, is there a specific reason
why you chose to
focus on research instead
of the engineering part?
>> That's a great question.
So, I think that's where
our industry partners come
in and are so critical.
We're actually
working with a lot of
different AI-focused companies
that can offer our students
internship opportunities,
job-shadow opportunities,
and mentor opportunities,
to really get
exposure to what
these different careers
are, not just in research.
We focused on research in
the class itself because
that's a fun and
hands-on way for students to get
exposure to some of
the applications.
We also do cover
some Python coding,
but when you look at
the computer science
education world,
there are actually a lot
of organizations that
are doing really amazing
work teaching
coding and engineering
in computer science.
So, we're not trying
to duplicate that;
we're actually trying
to be complementary.
If somebody wants
to continue and
deepen their Python skills,
we provide them with
resources, but we're
really not a coding program.
So, we're trying both to
provide that exposure to
industry through our partners
and to connect students.
So, at each camp we'll have
many guest speakers from
industry talking about
their work, their applications.
We have career round tables
and mentorship opportunities.
So, they are still getting
that exposure to
the different sides of it.
We're not trying to say you
should go into one or the other;
it's not really
a directive experience.
It's really like here are
the broad opportunities
and you need to select which
path is right for you,
and here's what you need to be
successful in that path
and we'll support you.
But we're certainly agnostic
in terms of what
the opportunities actually are.
Great question. Yes?
>> This is really amazing work.
Thank you for sharing it with us.
I have a question about
the innovation beyond
the education piece:
the innovation
that's coming out of
the projects that some of
these students are doing.
What mechanisms do you have
in based on what children
sustain or provide
some sort of path forward,
the example that you gave,
the contaminate water flow.
So, what falling mechanisms
have you put in place to do that?
>> That's a great question.
So, we're actually expanding out
our alumni program right now with
that exact goal in mind.
We found, and it was
pleasantly surprising to us,
that a lot of students
wanted to continue
that research while still
in high school.
So, we are putting
more formal mechanisms
into place as well as keeping
that community element going.
So, we have ongoing
educational opportunities.
We are now starting to
place students in
internships as they
graduate high school
and go into college, as
well as providing opportunities
to continue their research.
So, that's really the purpose
of the fellowship and
then again the relationship with
the industry and
academic partners.
Great question. It's really
important because
obviously we want
them to be able to take
the solutions further
and out into the world.
Other questions? Yes.
>> This will be
a very pithy question.
But AI is such a
broad technology,
and we're expecting
it to have such broad impact,
right, on every industry
and lots of different
aspects of life.
Yet so much of the discussion
still starts from
a technical perspective rather
than a different perspective,
like a social science perspective
or an agricultural perspective,
and from the solutions that are
needed within those communities
and how technology
can help. Those discussions
are a lot more likely to
be productive than ones
that start from the technology.
I've noticed that younger people
getting into the field
tend to have
a more natural affinity
toward thinking about
things more broadly,
but I'm wondering about
programs like this,
especially with the youngest
people who are getting in.
How can they be encouraged
to think about it from
multiple perspectives, or from
a very specific perspective
that they are super
into but that doesn't
necessarily start
with the technology?
>> I think that's
exactly what we see:
the application-first approach
is what's most effective at
engaging young people
in the field, and we
actually don't consider
just technical AI
jobs to be the goal.
It's really about
an interdisciplinary
approach which we know is
needed in terms of how
this technology is going
to impact all these
different industries,
all these different areas.
So, I'll give you one example.
Our Princeton program is
more focused on AI policy,
and they're actually taking
a field trip to Washington,
D.C. and connecting with
key leaders who are
thinking about that on
a broad level. It
still has that technical rigor,
because getting that technical
exposure and understanding it
in a deep way, not just glazing
over it, is really critical.
But we want to make sure
that students are getting
exposed to: how is this going to
affect the criminal
justice system,
how is this going to
affect social services,
how will it get embedded in healthcare,
all these different areas.
And so, I think that's
reflected both in how
we actually teach it,
starting with the application
and then moving into
the technical side, and in
the projects and other exposure
that our students are getting.
Thank you, that's
a great question
and really critical, I think.
>> Yes.
>> I have a multi-part question,
but I think I'll just
start with the easy one.
How much does the
summer camp cost?
>> Yeah, great question.
So, most of the camps
are free or very low cost
for the students,
so access is absolutely
critical and we
have full scholarships available
for those that can't pay,
even if there is a fee.
>> And then, do you
name the scholarships?
Like, let's say a company
gives a chunk of
money to your organization,
does that then end up as
branded scholarships for students?
What ways are there to
integrate industry partnership
donations with your organization?
>> That's a great question.
We haven't done that yet.
I think we'd be open to it.
For sure, I mean access
is absolutely critical
in providing this education
for students that need it most.
We currently work
with funders that
support general
operations, rather than
ones that have specifically done
something like a scholarship fund per se.
But yeah, definitely open
to those discussions.
>> Cool. I asked about
that opportunity because
all of the statistics
say it costs up to
5X as much to recruit a diverse employee.
Given those extreme costs,
and since you're cultivating
such top talent
in this field, it
makes sense for
an industry partner
to potentially give something
for a specific scholarship
or a specific opportunity.
So, I just think there's
a huge philanthropic play there.
Could you talk about
that piece a little bit,
or has that conversation started?
>> Yeah, and Microsoft is already
a financial supporter of AI4ALL.
So, that's great and we'd love
to talk about increasing
that, for sure.
>> Thanks.
>> Thank you. Other questions?
How are we doing on time?
>> We have another 40-45 minutes.
>> Wow. Well.
>> We can also keep it shorter,
so that people can spend
their time in different ways.
So, first of all,
we just started this partnership,
and on the MSR side,
the outreach team is in contact
with Tess's team about planning
and deciding next steps.
But I think we can do a lot
together, and if you have
feedback for the outreach team,
this is one of the things
they're thinking about.
We can either talk
to Tess directly or
plan internally with
the outreach team.
Those are, I think,
the possibilities, and that's one of
the reasons why Tess is here
today, to start a connection.
My question is: for
your program, where there are
like 900 applicants
for any spot,
what are the criteria for getting in?
What's the trend for
getting into the AI.
>> Yeah. So we have
an application process that
is mostly qualitative;
we ask key questions
about things like leadership
and their interest in AI.
They don't have to have
any coding experience
or AI experience.
This is specifically
trying to attract
students that are not
coming in with that.
So, it's a set of
essay questions mostly,
like maybe you'd see in
a college application process.
We do look at math and
what math background they have,
because if they're not
at a certain level,
it will be more
challenging to get through
some of the more technical
pieces of the course.
But most young people that are at
a ninth grade math level
and above are good.
And then, yeah, I
think in terms of
where we recruit from,
just to add onto that,
we actually found that doing
direct outreach within
high schools and
working with teachers
directly is the most effective.
Because there isn't a lot of
knowledge out there
about what AI is
or what those terms even mean,
and there are a lot of misperceptions
about what it is.
There's a lot of fear out there,
so making that human
connection with students
and sharing: what is this really?
And what are the opportunities?
And what kinds of jobs
can you get in this area?
That has been really successful at
bringing in students
that wouldn't
otherwise get exposure to this.
Great question.
Anything else? Yes.
>> And I don't know
who will speak to this.
But do you have
an ambition for a role
that MSR might be able
to play in this?
You said a lot of
industry partners
provide internships
and things like that,
but looking quickly
at the universities
that are involved,
none of them are
in this area.
Are there thoughts of bringing
camps to the Seattle area,
or what type of involvement
are you aiming for?
>> Yes. I mean, so
we're definitely
interested in expanding
the summer camps,
and I think given this group
and the focus here,
research is a really
exciting area.
So, we do have
this fellowship program that
focuses on connecting
researchers in the field with
our students and helping us
design the right research projects.
There are also opportunities for,
I think, shared thought
leadership around
what ethics frameworks we
should be including in our curriculum,
and how you all can
advise on that based on
what's actually
happening in this space.
I think there are
some education opportunities
that we're going to be talking
about later as well,
in terms of how we
collaborate on outreach or
education initiatives
that you all are
already doing and
share cross-learnings.
And we're also in contact
with the TEALS program;
we've actually connected with
them to recruit for
the summer camps this year.
So, how do we tighten
that partnership even more?
So, those are some
of the key areas.
I think there's expansion,
technical content, research,
mentorship, and
taking advantage of
all the amazing talent here
and connecting that with
the next generation.
And then of course,
us being a potential source
of talent for you
all as these young people
go into the field.
But we're certainly
open to other ideas.
I think again,
these conversations
are just at the early stages
and we're excited to
build a deep and
long-term relationship.
So, I'd be eager to get
folks feedback as well.
>> [inaudible].
>> Thank you, yes.
>> I wonder if you've
thought about also doing
some young-participant-to-industry
feedback?
It's not just that we need
to recruit people who
look different than
our current population does;
it's that we need to understand
what it is to be different
in order for that to be
a healthy thing to happen.
I also think that
these young people,
we have a lot of misperceptions
about them as we get older.
And how they think about privacy,
and how they think
about security,
and how they think about
their personal agency,
and how they think
about technology.
And we can learn so much
by having them talk to us
about those things, and I think
they could feel very
appreciated and
empowered by having
an opportunity to teach
us what they know that we don't.
>> Absolutely, I think that's
an amazing idea, and
it's very reflective
of what we're doing with
IEEE, which I mentioned
earlier, where we're actually
having young people give
substantial feedback on
some of the standards
and working groups that
they're developing.
So, I think that kind of
approach would be amazing,
and I don't know what format
exactly that could take,
but I think you would find,
if you met any of our students,
that they're skilled,
passionate, energetic,
and smart enough
to offer really
helpful feedback in
those ways. I've certainly learned
a lot from them,
and I'm sure everyone else
would too. Thank you for that.
>> We have two questions
that came in online.
>> Oh, great.
>> Yeah, one of them is asking:
rural populations are
under-represented in AI.
If AI is the new electricity,
do we need a TVA or an REA to
ensure it reaches rural America?
That's one of the questions.
>> Yeah. That's a really,
really great question.
Absolutely, we consider rural
populations under-represented,
and we actually
do outreach to make
sure of that. So,
a lot of our camps are
actually residential,
which means our students
stay overnight.
So, they can be
accessible to folks
not just coming from the areas
where the schools actually are,
which at least right now
are primarily in cities.
So, at AI4ALL,
we absolutely consider
those areas that
are most at risk of
getting left out of
this conversation, and of
being part of shaping it,
to be our target focus.
So, in fact, we're really
interested in also
expanding our camps
to other geographies,
to make the geographical
footprint more
diverse, as well as this
open-source piece,
because access is such
a huge, huge issue.
I think, though, one of the
challenges, having worked
in rural areas in the past,
is bandwidth and
Internet connectivity.
Fifty percent of
low-income households in
the U.S. don't have access
to the internet at home,
which is shocking
to a lot of people,
but even if you go
to a local library,
the bandwidth may not
be good enough to
even run a YouTube video.
So, how do we address
that at more
of an infrastructure level,
so that we can really
provide access to
rural areas that are
really left behind in
the digital divide?
>> The second question
is we keep hearing
about an AI ethical
framework and principles,
do you have a draft
on those principles?
What is the road map for having
a broader agreement on the
framework and principles?
>> Yeah. That's a great question.
So, we are not
a framework development
organization ourselves.
The best that we
can do right now is
really stay connected to
the conversations of who is
developing those frameworks.
So, for example, like
I mentioned, IEEE
and their AI and Autonomous
Systems Ethics Initiative
(it's a long name)
have great work being done on
creating those standards.
They actually just put out
a paper called
Ethically Aligned Design,
which has a set of
principles that they
put forth, developed in
these working groups
from very diverse areas,
and I definitely recommend
taking a look at that.
It's a really interesting
and in-depth-
>> [inaudible] ethics,
committee and framework.
>> Sorry, what?
>> ACM, [inaudible].
>> ACM. Yes. Exactly.
There's a group called
the Partnership on AI that
Microsoft is also involved with,
which is working on creating
working groups to create
standards as well.
So, I think there are a lot
of these initiatives
that we're seeing happening,
and we're not
putting forth a set of
standards of our own,
but rather exposing
our students to
these different initiatives
that are going on,
and to what some of the nuances
in the discussions are.
So, I hope that answers
the question.
Any other questions?
>> I have a broader
question, which is:
last year there was
this memo from
a Google employee talking
about women in the field overall,
and how there's positive
discrimination and
men are actually worse off
because of that, and so forth.
Being in this community,
how do you think we are
reacting towards these kind
of backlashes or how do
we change the culture?
I'm just curious
about your thoughts
about all of this discussion
that's going on in this space,
not so much about
high school students,
but about the industry
that we're in.
>> Yeah. No, it's a really,
really great question and I
think there's been a lot
of attention on it lately,
with the Me Too movement and
certainly that memo that
you're referring to.
I think more and
more people are speaking out,
and more and more research
is also being done
on retention
in the tech sector,
and on what it actually
takes to not just
ensure that we're hiring
a diverse workforce,
but that we're
retaining them,
and that they can be successful,
and advance, and move up.
Because obviously, as we
talked about earlier,
that's good for business,
and good for individual teams.
But, I think inclusion has
emerged as a really
important second leg.
So, at first, the conversation
was all around diversity,
and improving
your hiring numbers,
but you also need to
retain this talent and
so having an inclusive culture
is equally important.
I think we see companies
investing in mentorship programs,
having the right policies,
there are tools that are
cropping up to support
that both in the hiring
process, for example,
to reduce bias as well as
support systems for
employees that are
currently experiencing
harassment or discrimination.
I mean, I think ultimately,
whenever there's
more honest discussion,
and people are speaking
out about these issues,
and people that have
maybe been victims
of unfortunate circumstances
are able to have a voice,
there's always going to
be a backlash to that.
I think we need to have
the hard conversations.
We need to look at the data,
and we need to be really,
really proactive, and track
all of these metrics,
and hold ourselves
accountable to solving
these issues because it's not
just a problem for
the individuals.
It's a problem for the company,
and it's a problem for the teams,
and it's everyone's problem.
I think that's
my own perspective,
is that it's not just a problem
for women to solve.
It's a problem that affects
everyone and everyone loses
if certain groups are left out.
So, I hope that
the conversation continues,
I hope that we're able to have
the difficult and
challenging conversations
that might be uncomfortable.
I hope that we can address this.
I think with AI, it's
such an exciting moment
because, wow,
we're seeing a lot of challenges,
and certainly there's
a diversity crisis in the field.
There's also the opportunity
to address it at
these early stages and
steer the train in
a better direction.
So, I'm very optimistic
or I wouldn't be
doing what I'm doing,
but I also think we have to be
proactive in all sorts of ways.
It's not just going to
take one magic bullet,
and it's not just going
to take one organization.
It's going to take a community in
our whole society
coming together and
collaborating to solve
this challenge. Great question.
>> Okay. So, if that's it,
let's thank Tess.
>> Thank you. Thank you so much.
