Good afternoon, everyone.
It's great to be here.
I'm going to talk
to you a little bit
today about a company called
DeepMind Technologies that I
started six years ago
with some friends of mine.
Our mission is to solve
general intelligence.
And so this means
building general purpose
learning algorithms.
So today I want to give you a
little bit of a taste for what
that actually means in
practice and show you
some of the real
world applications
that we've been working on.
But first, let me tell you about
our company and our culture.
We are at heart a
research organization.
We're based in King's
Cross, up in North London.
We have about 200 or so of the
best AI and machine learning
researchers in the world.
We publish all of our research; we've published more than 100 peer-reviewed academic papers.
And a couple of years ago,
we were acquired by Google.
And we became Google DeepMind.
And these days, we've
been running independently
from the mothership, from
the Alphabet mothership.
And this is really
important to us.
It was important when we were acquired that we were able to carry on the independence and the kind of strong culture that we had developed as a British company here in London.
And we've managed to
preserve that I think
over the last couple years.
We also have at our heart a
very strong social mission.
And I think what
we've managed to do
is combine the very
best from academia,
with its focus on long-term hard
research questions that really
get you thinking about
what life should or could
be like in 10 years time, and
bring that to bear in a very
fast-paced culture that
has the kind of scale
and agility and, of
course, resources
of a large corporation,
taking the best of both worlds
and trying to combine
them together.
And all the while, we're very
much underpinned by, I think,
the kind of social impact
values of the public sector.
It's really important
to us that we're
able to work on
applications that
make the world a better place.
And at our heart,
that is what's really
motivating us to try to
solve general intelligence.
So as I said, our
mission is to distill
the essence of what
has created all
of this incredible civilisation
and everything of value
around us into an
algorithmic construct.
Our brains are
incredibly powerful.
They create our
social relations,
our economic systems,
our culture, our art.
And if we're somehow able to represent the characteristics that make us unique as human beings and distinct as a species in an algorithmic construct, we might be able to do a better job of planning, of classification, of search, and the other really important things that we do.
But we want to
solve intelligence
so that we can make the
world a better place.
And I think this is a really
fundamental part of our mission
that we've tried to integrate
into the heart of everything
that we work on.
So why do we think it's possible
now to build general purpose
learning systems?
Well, I guess there are two broad factors.
The first is to say that
we've made incredible progress
on the hardware.
And I think what
is less well known
is that we're making
incredible progress
on algorithmic efficiency.
So let's start with what
you'll all be familiar with,
massively faster computers.
Back in 1990, a laptop just like this one, costing about $5,000, if trained on ImageNet, one of the classic tasks for deep neural networks, would actually have taken around 270,000 years. A laptop built in 2015, costing roughly the same kind of money, trains on exactly the same task in around three weeks, which is kind of incredible. So that's precisely the same algorithm trained on different hardware over roughly a 25-year period.
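To make that concrete, here's a quick back-of-the-envelope check, using only the two figures quoted above:

```python
# Back-of-the-envelope: how much faster is 2015 hardware than 1990 hardware,
# using the figures quoted in the talk (270,000 years vs. about three weeks)?
HOURS_PER_YEAR = 24 * 365

train_1990_hours = 270_000 * HOURS_PER_YEAR   # ~270,000 years on a 1990 laptop
train_2015_hours = 3 * 7 * 24                 # ~3 weeks on a 2015 laptop

speedup = train_1990_hours / train_2015_hours
print(f"Effective hardware speedup: ~{speedup:,.0f}x")   # roughly 4.7 million x
```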
And what this actually means is that the search for new algorithms is getting much, much more tractable. It means that we can almost brute-force the search for the optimal algorithm, because hardware is so, so cheap.
We can train much bigger
algorithms much, much faster.
And so we can afford to
search through the space
of all possible
combinations of algorithms.
Now, this is really interesting,
because combined with this,
we've also seen a massive
improvement in the efficiency
of the algorithm itself.
So let's take the
classic chess computer
system developed by IBM back
in the day called Deep Blue.
And it beat Garry
Kasparov in 1997
searching 200 million
moves per second.
And this was a very
traditional system.
It's almost hard to call it an AI system, given what we're doing with AI these days.
It was a hand-crafted
set of rules written
by a bunch of
expert Grandmasters
who essentially said, if you
find yourself in this position,
then it's most likely
that you should
take this set of actions.
It's a very structured,
relational database.
But it is very powerful, and
it obviously used brute force
to search through 200
million moves a second
and beat the world
champion at chess.
But back in 2009, just over a decade later, Pocket Fritz, a computer program running on an early version of the iPhone, actually won a professional tournament nine wins to one against a Master, searching, most importantly, only 20,000 positions per second.
So that's a 10,000-fold improvement in the efficiency of the algorithm over a 12-year period.
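Again, you can sanity-check that claim with the positions-per-second figures quoted above:

```python
# Deep Blue (1997) vs. Pocket Fritz (2009): positions searched per second.
deep_blue_pps = 200_000_000   # 200 million positions/second in 1997
pocket_fritz_pps = 20_000     # 20,000 positions/second in 2009

ratio = deep_blue_pps / pocket_fritz_pps
print(f"Comparable strength while searching {ratio:,.0f}x fewer positions")
# -> 10,000x fewer: the gain comes from a smarter algorithm, not raw search.
```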
So when you combine
this with the efficiency
that we've seen in
hardware, these two things
make it possible to train
really very different
algorithmic systems today.
And what we're
ultimately interested in
is how we build general
purpose learning systems.
So I'm going to tell
you a little bit
about the kind of
philosophy or the framework
that we use to
train these systems.
Traditional AI approaches have focused, as I said, on very structured, rule-based learning that takes human intelligence as expressed in symbols or words and tries to represent it in a kind of Boolean logic: if this, then that.
And what we've been
very committed to do
is to try to train
an algorithm to learn
its own representation
of the world
through interaction
with some environment.
So all of our systems begin
with the training of a machine
learning system, or an agent.
That agent has some goal, which
is defined by the human being,
and it's able to interact
with some environment.
And you can think
of this environment
in very general terms.
It could be a robotic system
interacting with a kitchen
or a car manufacturing plant.
It could be YouTube,
where the environment is
to try to recommend
videos that people watch,
or it could be an energy grid.
And that agent is able
to take a set of actions
in the environment,
interact with it in some way
by recommending a
video on YouTube
or controlling a robotic arm.
And then the environment passes
back a set of state updates.
So after the interaction
with the environment,
the environment has
somehow changed.
And so all the sensors
that collect information
about that environment send
the updated observations
back to the agent.
And it's through this process of interaction, through reinforcement learning, that we're able to train agents to learn their own representation of what it takes to optimize some goal.
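Here's a minimal sketch of that agent-environment loop in Python. The `Environment` and `Agent` classes are toy stand-ins invented for illustration, not DeepMind code; the point is the shape of the loop: act, observe, get reward, learn.

```python
import random

class Environment:
    """A toy environment: the state is a counter the agent should drive up to 10."""
    def __init__(self):
        self.state = 0
    def step(self, action):                  # action is -1 or +1
        self.state = max(-10, min(10, self.state + action))
        reward = 1.0 if self.state == 10 else 0.0
        return self.state, reward            # updated observation + reward, passed back

class Agent:
    """A trivially simple agent: remember which action has paid off."""
    def __init__(self):
        self.best_action = random.choice([-1, 1])
    def act(self, observation, epsilon=0.1):
        if random.random() < epsilon:        # occasionally explore
            return random.choice([-1, 1])
        return self.best_action              # otherwise exploit what seems to work
    def learn(self, action, reward):
        if reward > 0:                       # reinforce actions that led to reward
            self.best_action = action

env, agent = Environment(), Agent()
observation = env.state
for _ in range(1000):                        # the interaction loop described above
    action = agent.act(observation)
    observation, reward = env.step(action)
    agent.learn(action, reward)
print("learned action:", agent.best_action)  # almost always +1
```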
And right in the
beginning, we were
very successful in
training our algorithms
on the old-school Atari testbed.
So many of you will
be familiar with this.
It's like 100 or so Atari
games from the '70s and '80s.
And the environment is
simply a set of raw pixels.
So at every frame, we're passing back a set of RGB, or red, green, blue, values to the algorithm, based on where each pixel sits in the frame, on an x and y axis.
And so really there's no
pre-programmed knowledge.
Everything is
learned from scratch.
So we're not hand coding any
insight into the algorithm.
We're not saying, this
is what a color is
or this is what motion
and dynamics are,
or this is a paddle,
this is an enemy.
We're really saying
given this set of pixels,
over time can you optimize
your goal function,
which is to get more reward,
by tweaking the action buttons?
And so that is really
all we give to the agent.
We just wire them up
to the action buttons
and give them the motivation of
trying to optimize for score.
So really the best intuition is
to think about a robot standing
in an arcade being fed the
pixels from the video screen
and fiddling around with the
joystick and the fire button
in order to find out
what the association is
between the set of actions it's
able to take, the set of pixels
that it saw in the
last few frames,
and the reward that it gets.
It's kind of just simple
associative learning
at one level.
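At its core, that kind of associative learning looks a lot like the classic Q-learning update: nudge the estimated value of a (state, action) pair toward the reward you actually saw plus the value of what came next. A minimal tabular sketch is below; note that DeepMind's actual Atari agent (DQN) replaced the table with a deep neural network over the raw pixels.

```python
from collections import defaultdict
import random

q = defaultdict(float)        # estimated value of each (state, action) pair
alpha, gamma = 0.1, 0.99      # learning rate and discount factor
actions = ["left", "right", "fire", "noop"]

def update(state, action, reward, next_state):
    # Q-learning: move Q(s, a) toward reward + discounted best future value.
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

def choose_action(state, epsilon=0.1):
    # Epsilon-greedy: mostly exploit what seems rewarding, sometimes explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])
```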
So let me give you a sense
for what this looks like.
This is the old
game of Breakout.
And here we are after 100 games; the agent has just arriveded in the environment, so it really has no idea what to do.
It's sort of moving
around left and right,
trying to explore the
space and randomly discover
what kind of actions
seem to be rewarding.
After 300 games, it seems to have learned pretty well that there's a strong association between moving the bat to the right place and hitting the ball back at the right time.
And this is pretty
much human performance.
This is better than I can play.
But interestingly, we left
it training for 500 games
overnight, and it developed
this really cool strategy
where it tunnels the
ball up the sides
to get maximum points
with minimum effort.
And this was super surprising to us as engineers, and really cool in some sense, because many humans never discover that strategy when playing the game, and it was something that emerged through the agent interacting with the game on its own.
So it was really surprising to see the first signs that complex behaviors, relatively complex behaviors, can emerge without us hand-crafting those features in.
So this was really
exciting to us two
and a half, three years ago.
Another example is the
game of Space Invaders.
And so this is a video
of before training.
You can see that it's getting
killed all of the time.
It's sort of randomly
firing around.
It doesn't realize that when it gets hit by one of those missiles, it's being denied score. It doesn't realize that when it hits one of the Space Invaders above, it's actually getting reward.
And after training, it's
learned a whole bunch
of really interesting dynamics.
So first of all,
you can see it often
hiding behind the orange
obstacle, because it knows
that if it gets hit by
one of those missiles
it's in big trouble.
Secondly, you can see
that it's actually
often aiming at the top.
So it's trying to
get those ones first
because it gets more reward.
Here, we see the
pink mothership.
And it does an incredible
predictive shot
to try and get that because it
gets more reward for hitting
that at the right time.
And the other thing is you can
see that it very rarely wastes
bullets.
It's actually very
precise and accurate
and it can do really
cool predictive shots,
just right at the end, as you
see, to get the last enemy.
So it plays the game and
indeed it plays 57 of the games
to human or superhuman
performance.
The same algorithm performing
in a very general way
across all of these
different environments.
And the environments
perceptually
have very different physics,
very different objectives,
and, in some cases, very long-term temporal dependencies.
So it's not really
just reacting;
it's actually having to
make a series of decisions
and get them right in
sequence over time.
We were lucky enough to have this work published as a Nature paper, and we managed to get the front cover back in 2015, which, as many of you in academia will know, is a prized possession for an academic research lab like ours.
We extended this work earlier in the year to the classic ancient game of Go.
So let me tell you a
little bit about the game.
Essentially, you place stones on a 19-by-19 board; if you're white, I'm black. And the objective is to try to place stones such that you surround your opponent's territory.
So there's really no
further rules than that.
It's very unstructured.
It's very abstract.
And there are no
sort of tricks that
help you to exploit
structure in the space.
So in chess, for
example, all the pieces
are worth different
amounts, and so you
can focus all your energy
on protecting your queen,
for example, because
it's really valuable.
Or you don't have to
bother about thinking,
can a pawn move
five squares ahead,
because you know
that's an illegal move.
Whereas, in the game of Go,
there's very, very little
structure and so
the search space
is really absolutely enormous.
It's a 3,000-year-old game.
And believe it or not,
it's incredibly popular
all over the world.
There are 40 million
players worldwide.
And the really extraordinary thing that makes it so difficult to play is that there are 10 to the power of 170 possible states on a 19-by-19 board. So just to give a sense of what this means, that's a 1 with 170 zeros after it describing the number of possible board configurations in the game.
And it's been estimated that there are something like 10 to the 70, so a 1 with 70 zeros after it, atoms in the known universe. Counting every liquid, solid, and gas in the known universe, there are many, many, many orders of magnitude fewer atoms than there are possible board positions in the game of Go. So it's just an impossibly large number to intuit or get your head around.
This is roughly what
the branching factor
looks like in chess, a
much, much smaller and more
structured space.
And so here there
are something like 10
to the power of 40
possible board positions.
And so for every possible move, there arises another set of possible moves, and then another set, and another set. And this branches out for about 100 or so moves, which roughly makes up 10 to the power of 40 or so possible board positions.
But this is what it
looks like for there
to be 10 to the power of 170
possible board positions.
At every moment there's
an enormous number
of positions that are
possible at the next move.
And so trying to model structure
in a space that is so large
is really very, very difficult.
Writing an evaluation function
to know if you're in a strong
position or a weak position
is incredibly hard.
And so all of the top players
essentially rely on intuition.
And that branching factor that you see there goes on for around 200 moves until you reach the end of the game.
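You can get close to that 10-to-the-170 figure yourself: each of the 361 intersections on a 19-by-19 board can be black, white, or empty, so 3 to the power of 361 is an upper bound on board configurations (the true count of legal positions is slightly lower, around 2 x 10^170):

```python
import math

# Each of the 361 intersections can be black, white, or empty,
# so 3^361 is an upper bound on the number of board configurations.
configurations = 3 ** 361
print(f"Upper bound: about 10^{math.log10(configurations):.0f}")   # -> 10^172

atoms = 10 ** 70   # rough estimate of atoms in the known universe, quoted above
print(f"Board configurations per atom: about 10^{math.log10(configurations) - 70:.0f}")
```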
So we trained two
different neural networks
that learn a representation of
what is effective in the space.
The first is to basically say: given a set of possible moves that might be valuable at this particular position, how do I identify which one I should take? So here what we see, in the green layer, is ten different candidate positions that have been identified. And that network has been trained on 100,000 or so past games.
So we're learning from the
very best of human knowledge
to bootstrap our
algorithm to start
from at least the best that
human knowledge has to provide.
And once a particular position has been identified, a second network is trained to estimate the value of that particular position: we're trying to figure out the probability of winning if we roll forward from this kind of position.
And so both of these networks work in tandem with one another to make a guess at which move should be taken.
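Here's a heavily simplified sketch of how the two networks fit together. The `policy_net` and `value_net` functions and their numbers are hypothetical stand-ins for illustration; the real AlphaGo combined the two networks inside Monte Carlo tree search rather than in a single greedy step like this.

```python
def policy_net(position):
    """Stand-in for the policy network: candidate moves with probabilities,
    learned from ~100,000 past expert games."""
    return {"D4": 0.35, "Q16": 0.30, "C3": 0.10}          # illustrative numbers

def value_net(position, move):
    """Stand-in for the value network: estimated probability of winning
    if we play this move and roll the game forward."""
    return {"D4": 0.52, "Q16": 0.57, "C3": 0.41}[move]    # illustrative numbers

def choose_move(position):
    candidates = policy_net(position)   # network 1: which moves look promising?
    # network 2: of the promising moves, which gives the best chance of winning?
    return max(candidates, key=lambda m: value_net(position, m))

print(choose_move("current board"))     # -> "Q16"
```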
Earlier this year, we played
the world champion, Lee Sedol,
over in Seoul in Korea for
a $1 million live match.
Our system, AlphaGo,
played Lee Sedol.
He is still a legend of the game, with 18 world titles, and is considered the greatest player of the last decade. We actually won the match, which was super exciting; we won four to one. And it was widely recognized to have come a decade earlier than expected.
And it created an
incredible buzz.
So there were 280 million live viewers, which is more than watch the Super Bowl, and which is quite surreal given each game is about four or five hours long. So it's pretty intense viewing. There were 35,000 press articles.
And, incredibly, it caused a shortage of Go boards, which was also quite remarkable when you consider they're kind of bits of wood with some stones on top.
So we're really excited
about the buzz and energy
that it created.
And, again, we were lucky enough to get a Nature paper and, of course, the front cover, making us the first machine learning AI lab to get two Nature front covers in 18 months.
So it was really cool to
be able to do that work.
But, of course,
our real motivation
is to develop these systems
so that we can actually
make progress on some of our
most intractable problems.
That's what motivates
us at DeepMind.
That's what we're
really driven by.
And, in fact, I think
that in some ways
is what makes our
organization so exceptional.
I think the really smartest
people in the world
want to work on the
hardest problems with some
of the other smartest people.
But I think what's
also really important
and becoming really clear I
think in the last five years
at least is that those
people have the freedom
to work on any
problem in the world.
They can go to any
lab or to any company,
they can take their pick.
And so what really makes a
difference is the opportunity
to have meaningful social
impact in the world,
and know that you are
actually working hard
to try to change the model
of running an organization
and change the
motivation and purpose
of our typical
institutional structures
and do things quite differently.
So we took the same
kind of methods
and we actually
deployed them to see
if we could make Google's data
centers much more efficient.
And what we did is we trained a model to look at all the past, retrospective sensor data that describes how the data center operates. This includes things like air pressure and temperature, incoming server load, and so on and so forth.
And what we're able
to do is train a model
to use power more
efficiently to deliver
exactly the same performance.
So there were, I hope, no glitches in any of your experience of watching YouTube or in any of your search results. All the while, we managed to deliver exactly the same performance while using 40% less of the energy used to cool the data centers.
And so this is quite exciting
because it gives us a hint
that these kinds of very
large learning methods
can be used for very
hard real-world problems.
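To give a flavor of that setup, here's a minimal sketch: a supervised model fitted to historical sensor readings to predict cooling energy, which you could then query to find more efficient settings. The feature names, numbers, and linear model are all illustrative assumptions, standing in for the much larger neural-network-based system actually deployed.

```python
import numpy as np

# Hypothetical historical sensor log: [air_temp_C, air_pressure_bar, server_load]
X = np.array([[18.2, 1.01, 0.62],
              [21.5, 1.00, 0.88],
              [19.7, 1.02, 0.74],
              [23.1, 0.99, 0.95]])
y = np.array([0.41, 0.58, 0.47, 0.66])   # cooling energy used (normalized)

# Fit a simple linear model by least squares as a stand-in for the real model.
X1 = np.hstack([X, np.ones((len(X), 1))])        # add a bias column
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)

def predicted_cooling_energy(temp, pressure, load):
    """Query the fitted model for a candidate operating point."""
    return np.array([temp, pressure, load, 1.0]) @ weights

print(predicted_cooling_energy(20.0, 1.01, 0.80))
```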
And I think one of the things
we're really excited about now
is how we might
be able to deploy
these very general
algorithms for load balancing
on the national grid.
And so these are really
difficult problems
in data centers where there's
a massive action space
and the real-world data
is really, really noisy.
And what we really need
to do is design algorithms
that can safely explore
different parameters in order
to control very complex systems.
And so, ultimately, what we're excited about is the potential to use these same systems in any building: to predict incoming load and adjust the energy required to operate that piece of infrastructure accordingly, so that it consumes only the energy that is necessary, at the right time, to run, say, a massive cooling facility, an industrial plant, or potentially even a national grid.
The other thing I
want to talk to you
about is DeepMind Health.
Earlier this year, in February, we launched our first outward-facing business, looking at how we can deploy our technologies to radically transform the NHS: to digitize, and then help better organize and run, the National Health Service.
And at the moment, the most remarkable thing that we've discovered is that there is actually an incredible amount of really rich data that describes your experience when you're admitted to a hospital.
It's collected very diligently.
It's very comprehensive.
But the challenge
is that it sits
in charts like this at
the bottom of your bed.
And so this is, for
example, a fluid chart.
This is a record of everything
that you've eaten that day.
And there's all sorts of very
rich data that in some sense
describes you and your
experience in admission.
This is your stool chart.
This is a catheter
chart coming up here.
And all kinds of really
useful information
that unfortunately is
only really ever accessed
by the clinician when they
arrive at your bedside.
And so they spend the first couple of minutes flicking through this information and basically asking you to repeat the stuff that you've probably repeated many times before to the previous specialties: when you were admitted, what's wrong with you, who else you have seen.
And that's really
difficult for you
as a patient to try to
remember all of those things
and have to repeat those things
and present your own case
history.
The other kind of
data that exists
is transmitted using pagers.
It would be surprising
I think to many people
to realize that actually
this technology, which
is many decades
old, is still in use
as the primary mechanism
of communication
today in the hospital.
And the downside of these systems, apart from the fact that they're very clunky, is that when you do get a message, you have to go and find a telephone and phone the person back, and they may or may not be on the receiving end of that phone, and then you end up playing pager ping-pong with people. That's one of the downsides.
But the other is that none of the information that's exchanged when clinicians are communicating with one another is actually recorded. And yet that's the most valuable information: it describes what decision was taken, and when, to diagnose you with a particular condition, or to decide that you need to undergo a particular procedure or a particular pathway of care.
And so that's kind
of lost to the ether.
It's not recorded and it's not
part of the information that
can be used then to audit or
to improve or to strengthen
the quality of service
that you receive.
And I think these two things
contribute significantly
to some really
shocking statistics.
10% of patients
experience some kind
of harm, some kind of
unnecessary harm, when
they're admitted to hospital.
And it's been estimated that around half of those patients suffer that harm largely because there's been a delay in the escalation of their care.
And that essentially
means that there's
been poor communication,
poor coordination,
and very limited
access to the data
that should otherwise describe
the kind of deteriorating
condition that you're facing.
And actually a really interesting study came out earlier this year in the BMJ estimating that medical error is actually the third leading cause of death in the US. And everybody thinks that the equivalent numbers in the UK are very, very underestimated. Something like 250,000 people a year die preventably because of medical error in the US. It's the third biggest killer.
And so largely there are two things driving the patient experience in hospital. The first is how we identify which patients are at risk of deterioration at any moment.
And there are two or three sources of data here. There's the stuff that already exists on the very clunky, painful, old-school digital systems, as well as the paper-based record.
And I think it would be very
surprising to most people,
although not to many
of the junior doctors
and consultants and nurses
who work in the systems,
to realize that actually
a lot of these systems,
these desktop-based old school
systems that are in deployment
at the moment, were actually
developed for completely
different ecosystems.
So firstly, they were largely built in the US. And secondly, often those systems were originally built for logistics or accounting.
So not really designed for
health care from the ground up,
thinking about the user
need all the way through.
And so the first job is to
take advantage of all the data
that currently exists, both
that which is digitized
and that which needs
to be digitized.
And then secondly, use that to
escalate care more effectively.
So once you've
identified which patients
are at risk of deterioration
by looking at all the data,
how do you manage
the coordination
of that intervention
more effectively.
And so let me take you
through an example of what
a truly digitized health care
experience might actually
look like.
Here we have our patient, Robert, who was admitted yesterday complaining of severe abdominal pain, and so he was kept in overnight for observations.
And our ward nurse, Jamie,
is taking his temperature.
He's able to record
the temperature
on his mobile phone,
on his iPhone.
And it's really pretty simple.
He's able to swipe
across, update the number,
and immediately we see that
the temperature is actually
39.2 degrees.
And so this is obviously
super concerning.
And it's generated what's called
an Early Warning Score, which
immediately triggers an
alert to the doctor on call.
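Early-warning scores of this kind are typically computed by bucketing each vital sign into a points band and summing across vitals; here's a minimal sketch of the idea. The thresholds below are illustrative only, not the actual clinical scoring rules used in any deployed system.

```python
def temperature_points(temp_c):
    # Illustrative bands only; real early-warning scores (e.g. NEWS) define
    # clinically validated thresholds for each vital sign.
    if temp_c >= 39.1: return 2
    if temp_c >= 38.1: return 1
    if temp_c >= 36.1: return 0
    return 3   # abnormally low temperature

def early_warning_score(vitals):
    score = temperature_points(vitals["temp_c"])
    # ... plus points for heart rate, respiration, blood pressure, etc.
    return score

if early_warning_score({"temp_c": 39.2}) >= 2:
    print("Alert: page the doctor on call")   # the alert described above
```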
She is able to read that information in the lift and immediately realize that this patient needs to be seen very, very quickly.
She scrubs through
on her mobile phone
and looks up the
name of the patient.
So there's structured search
across all of the key features
in the hospital.
She can search by consultant, by ward, by specialty; she can search by medical record number or date of birth.
And here she's typing in
the name of the patient.
So she can pull up Robert
Jones and immediately view
his profile.
So this for the
first time aggregates
all of the data that is being
collected across the hospital
system from the
patient's perspective.
And so what she's presented
with immediately is an overview.
What are this patient's allergies? Why have they been admitted in this current episode? What are their underlying diagnoses? She can see here that this patient has had tonsillitis, also suffers from asthma, and in the past had a tonsillectomy.
She can scrub down and see
what medication they're on
and she can see all
of the observations
and any other details that are
relevant and most immediately
necessary in the overview.
If she swipes across, she can
see a different stream of data.
She can see the linear
timeline of everything
that's happened to
this patient in order.
So in the last few
minutes, she can
see that this patient's actually
triggered a sepsis alert.
It looks like they
may be at risk
of a very serious infection.
They've got an acute kidney injury alert, which means that they're probably dehydrated and their kidneys are at great risk.
We can see that they've
been sent for bloods.
The bloods have been ordered
and they've been taken,
but they've not
yet been collected.
So she doesn't have to call down
to the pathology lab to say,
is this blood test result ready.
She can immediately
see on her mobile phone
that actually this
is a test that needs
to be reviewed and examined.
She can also see other
radiology reports
and any other
information she needs.
She can click on more detail
to see every single blood
test that's been taken
about this patient,
look at trend analysis,
look at previous graphs,
compare that with all of the
other important statistics.
So here she can see that the potassium has spiked: this patient has a dangerously high and rising potassium level, at 6.2.
She can see that the white
blood cell count is high,
which probably means there's
an infection and was probably
why the sepsis alert was
triggered in the first place.
She can also now, knowing that there's a sepsis risk, go and look in more detail at that radiology exam, where she can see that the report says there's a right lobe pneumonia.
She can zoom in on that, compare
that to what she's seen before,
and share it with a colleague
and get a second opinion.
And this has all happened before
she's arrived at the bedside.
She's not even met
the patient yet.
There's no need to arrive
and look through the charts
and then take a case history.
She's done all of this in
the corridor, in the lift,
and on her way
there so that she is
prepped and ready to give
her full human attention
to that patient at the bedside.
Now she's here and she's been able to examine Robert's condition. She can text her consultant and say that this patient looks like they might need an appendectomy; basically, they probably just need their appendix out.
So now what we've enabled is effective escalation through very fast, real-time communication.
And her consultant can
text back and confirm
that actually this patient
does need an appendectomy
and we should start
instigating all of the jobs
and procedures and tasks,
both automatic and human, that
need to take place.
So here we see Jamie,
our ward nurse,
texting back saying I'm
going to go ahead and do
the blood cultures and
so on and so forth.
So this is our consultant who's
preparing for the operation.
She's got her own
phone and she's
able to tap on admit the patient
and prepare them for surgery.
And automatically we see that
a whole series of subtasks
get dispatched across
the whole hospital.
So we can see that this patient
needs to be nil by mouth,
they need intravenous
fluids, someone
needs to write up
the antibiotics,
they need to be consented,
and so on and so forth.
And so all these jobs get
sent to the right person
so that they can be carried
out at the right time.
And obviously, following
Robert's operation,
we can still follow up with
all of the live observations
and results.
Our clinician, our
surgeon, can still
read through all
of the information
that she needs to see from home
and keep tabs and communicate
with her team so
that she can take
care of the patient in a
remote and ongoing way.
But ultimately, what we really want to do is put the patient in control. The patient owns the data about them. And we want to make that available on a personal mobile device so that they can play a much more active role in administering their care, and potentially connect up other data sources to better manage their own conditions.
And I really believe that
this kind of digitization
has the potential to truly
transform health care.
One of the big challenges that
we face in the world today
is of obesity and diabetes.
So the prevalence of diabetes among adults in England is massively rising.
And you're much more
likely, in fact, you're
25 times more likely, to
suffer some kind of sight loss
if you do have diabetes.
And yet 98% of that sight
loss is completely preventable
if you can get in there
early and detect it.
So these are
preventable problems
if we can get access to
the data in real time
and identify who
is at risk of that.
So what we've been doing is
working with the Moorfields Eye
Hospital to look
at these OCT scans.
So these are 3D videos
of your eye, if you like.
And one of the challenges is that we have a 10x shortage of radiologists to actually review these images. And so there are massive delays in detecting disease, and this can lead to blindness.
And so what we
really want to do is
build algorithms that
can learn to identify
the early signs of
age-related macular
degeneration,
diabetic retinopathy,
and so on and so forth.
And so this is an example
of the kind of segmentation
that we train our
algorithms to do.
And this, I think,
will enable much more
standardized analysis,
instant results
with a continuously
improving methodology,
and a scalable technology
that can be deployed
across the entire system.
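When training segmentation models like this, a standard way to measure how well a predicted tissue map matches the expert's annotation is an overlap metric such as the Dice coefficient. This is a minimal illustrative sketch, not the Moorfields evaluation pipeline:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Overlap between predicted and reference segmentation masks (1.0 = perfect)."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return 2.0 * intersection / (pred.sum() + true.sum())

pred = np.array([[0, 1, 1], [0, 1, 0]])   # model's segmentation of one scan slice
true = np.array([[0, 1, 1], [1, 1, 0]])   # expert-annotated reference
print(f"Dice: {dice_coefficient(pred, true):.2f}")   # -> 0.86
```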
We're also doing the same kind of thing to improve radiotherapy planning, so that we can do a better job of delivering cancer care.
The same principle here
is that we're really
trying to make sure that
the right radiotherapy
treatment is given to the right
patient at the right time.
And we can do that much more
efficiently with algorithms.
But, of course, for us, AI is not just about making remarkable progress; we really do need to think about the ethics and impact.
And so we've been
working really hard
to try to propagate
a set of principles
about how we develop
these technologies.
And the first is to say that
we think the benefits of AI
should empower everybody, be
available to everybody, not
some elite few who have
access to the resources
to buy these technologies.
We think that our
research and development
should be open and responsible
and socially engaged.
And we believe
that we really want
to lead the way in establishing
technical and governance best
practices so that we can avoid
the most undesirable outcomes
and we can create new kinds of organizations, in some sense with a semi-permeable membrane, where members of the public, people who are on the receiving end of our technologies, can be involved in designing and governing those technologies and hold us accountable as we do that.
So if you're interested in
contacting us and getting
involved, I really
want to hear from you.
I'm actually hiring for
my policy and ethics
team at the moment.
My email is Moustafa,
M-O-U-S-T-A-F-A, @DeepMind.com.
Please email me and come
and join our mission.
Thank you.
