PREM RAMASWAMI: So my name
is Prem Ramaswami.
I'm a product manager with
Google.org, which is Google's
technology-driven
philanthropy.
And I'm here today with my
colleague, Steve Hakusa, who's
a software engineer.
And Steve and I together work
on Google.org's crisis
response team, which looks to
make crucial information
available in the aftermath
of a natural disaster.
Now, we're here today to talk
about how developers like
yourselves can feel empowered in
the aftermath of a crisis.
We're always interested in your
feedback, which we'd love
for you to share, either on the
goo.gl links or any of the
Twitter hashtags behind me.
And with that, I'd like to get
started with our story.
As a team, our story starts on
January 12 of last year.
Now on this day, you might
remember, there was a 7.0
magnitude earthquake that hit
the island nation of Haiti
outside of their capital
city of Port-au-Prince.
Now this earthquake ended
up killing over a
quarter million people.
It left another million
homeless, and two million in
need of food and water aid.
As images started escaping
from Haiti, a group of us
Googlers got together and wanted
to see what we could do
to possibly help out.
Now Google has been responding
to crises since about
Hurricane Katrina.
And the way we do that is by
collecting updated satellite
imagery and publishing it to
our properties like Google
Maps and Google Earth.
Aid agencies then use
that imagery.
So in the aftermath of the
Haiti earthquake, we were
actually able to collect a
cloud-free shot over Haiti
with our partner GeoEye.
And we were able to publish this
imagery to Google Maps
and Google Earth within
the first 24 hours.
Now this imagery really gave
us an understanding of the
level of devastation and
destruction that occurred over
the country.
Now I said that aid
organizations
make use of this imagery.
It's not just there for the shock value of it, so I want to give some key examples of how aid organizations take advantage of it.
The World Bank, for example,
used this to conduct wide area
damage assessments.
Other organizations used it to
identify the location of
refugee camps.
It was also a nice way to visualize on a map the locations of all the clinics that were popping up in the aftermath of this disaster.
Now you can imagine why having
this information
geographically placed
is so important.
For example, you want to make
sure these clinics are set up
next to your refugee camps so
you're properly servicing the
right populations.
And I want to give another
specific example of where
satellite imagery is
super helpful.
This is the Petionville Golf
Course, which is located about
an hour south of the city
of Port-au-Prince.
And this is a satellite image
of the golf course about six
months before the earthquake.
A day after the earthquake, you
see little white specks
popping up on the golf course.
And these are actually tents.
Individuals who live in the
surrounding areas, whose
houses are either destroyed
or they feel unsafe with a
concrete roof over their
head, have now moved
onto this golf course.
And 13 days later the entire
golf course has become a
refugee camp, one of the
largest in Haiti.
Now camps like this are easily
identifiable by viewing
time-lapse satellite imagery.
Not just that, by knowing where
these refugee camps are,
aid organizations can make sure
they're appropriately
allocating their resources.
But as I said, we've been using
the satellite imagery
since about Hurricane Katrina.
So when this group of Googlers
got together, we asked
ourselves, what else
could we do here?
Now, Google's mission, as a
company, is to organize the
world's information and
make it universally
accessible and useful.
And so we decided we'd
organize all of the
information related to
this disaster in one
place, on a web page.
And we call this
a landing page.
And so we started organizing
this information, which
included things such as where
to donate, along with actual
ways that individuals could
help, news and updates that
were coming out of the region,
relevant maps, as well as
user-generated content such
as YouTube videos.
This page was launched in 12
different languages, and it
was linked to from the
Google.com main page as well
as from what we call Enhanced
Search Results.
So if you look for something
related to Haiti earthquake
from Google.com, you'd be
linked to this page.
I wanted to spend the next few
minutes going through some of
the specific features that
were on this page.
And one of those products that
we actually launched was a
tool called Person Finder.
See, after the Haiti earthquake,
there were more
than 14 separate sites that
popped up to take care of
missing persons information
online.
Now all of these sites were
referring to missing people
with different standards.
They were using different
infrastructure.
None of them were integrated.
Which meant that, if I were
missing a relative after the
earthquake, I would have to go
to each one of the 14 sites,
search for them, enter their
information, and keep
continuously looking at
all 14 to try to find
updates about them.
Now this was a place where
Googlers felt we
could really help out.
See, we wanted to build a simple
web application that
was super easy to use.
We wanted to use an open
standard, specifically the
PFIF standard, or the Person
Finder Interchange Format.
And we wanted to make the tool
open, so we'd give open access
via an API so all of these
different sites could push and
pull their data to
and from it.
And so we had this 72-hour
hack-a-thon that occurred
globally, in Mountain View, New
York, Zurich, and Israel.
And engineers around the globe
worked and launched the Person
Finder web application
on App Engine.
And we noticed something
interesting happen, sites like
CNN, NPR, The New York Times
integrated to Person Finder.
And as a victim, I no longer
needed to go to all 14
separate sites.
I could go to any one of them
and now see that same
information mirrored across.
Now another area where we
thought we could help out was
with relevant mapping
information.
See, there were a lot of maps that
were being created in the
aftermath of the earthquake.
And this included stuff like
the World Bank's damage
assessment I showed earlier.
Or even things as simple
as earthquake
locations from the USGS.
Now the problem was, this
information was
strewn across the web.
And it was in different
formats, often not
machine-readable formats.
And so our engineers crawled the
web, found a lot of this
information, scraped it,
converted it to a common
format like KML, and then placed
it on this common map
that we then link to from
our landing page.
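To make that concrete, here is a minimal, hand-written KML file of the sort one of those scraped records might be converted into. This is only an illustration: the clinic name, description, and coordinates are invented.

```typescript
// Illustrative only: a minimal KML document for a single scraped data
// point (here, a hypothetical clinic). Values are made up.
const clinicKml: string = `<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Field Clinic (example)</name>
      <description>Scraped from a partner report; status: open</description>
      <Point>
        <!-- KML coordinates are longitude,latitude[,altitude] -->
        <coordinates>-72.34,18.54,0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>`;

console.log(clinicKml);
```

A file like this can be loaded directly as a layer in Google Maps or Google Earth, which is what made KML a convenient common format for the map.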
Now you can just click on and
off any of the layers here,
and as an aid organization,
or someone interested in
understanding what's happening
on the ground, you could get
more of a complete picture
because you could layer on
different layers from different
organizations.
And it wasn't just Google
that was helping
out in the tech space.
There were organizations
like Open Street Map.
So Open Street Map is a
collaborative mapping platform
very similar to Google's
own Map Maker.
Now what Open Street Map did was
they took advantage of the
local Haitian diaspora that
was around the world with
knowledge, with relevant
knowledge about Haiti's roads
to actually digitally draw
them out on a map.
Now why is this actually
useful?
As an aid organization, when I
come to Haiti and I want to
deliver my aid from point A to point B, I need to
know how to get there.
And the fact is, there were no
good maps to help people out.
The work done by the Haitian
diaspora community and Open
Street Map ended up being cited as the most complete digital map of Haiti's roads ever created.
And it was hugely important.
Ushahidi also emerged as a very
important platform after
the earthquake.
Ushahidi was initially created
to report post-election
violence in Kenya.
But it's a simple incident
reporting platform.
Now what that means is,
individuals come to it and
they say something
has happened.
That gets categorized by a group
of volunteers and then
placed on this map.
Now why this is helpful is you
can start seeing clusters of
information start appearing
in a map like this.
For example, as an aid
organization, I can come to a
site like Ushahidi and start
seeing areas that are in need
of immediate food aid.
Or I can see areas which have
outbreaks of violence
occurring so I can keep
my staff away from it.
Now it's important to know that
Ushahidi required you to
be able to submit these reports
via web, but not every
Haitian had access to a
computer, especially after the
earthquake.
But everyone did have
mobile phones.
So a large coalition of
nonprofits, including
Ushahidi, FrontlineSMS, and
InSTEDD worked to launch the
4636 project.
Now 4636 was a free SMS short code.
Individuals on the ground could
SMS a message to the
short code that got picked up
by a group of volunteers on
the back end.
Those messages would then be categorized and placed on a map.
Now this not only made maps
like Ushahidi richer, it
actually helped save lives.
There's an example from Haiti
where individuals stuck in
rubble would SMS messages with their location.
They would then get picked up by
these volunteers who would
place it on a map and relay that
message to the US Coast
Guard or US Southern Command.
Using this method, over a
hundred lives were saved.
Now, this isn't to understate
the importance of technologies
like radio and TV, but the
fact is, the internet was
having a key role in Haiti.
And we have some metrics
to prove this.
So Google's landing page had
over four million page views.
There were over 80,000 text
messages that were handled by
project 4636.
There were 2,500 volunteers
around the world managing
those messages.
And there were over a hundred lives saved.
There were over 55,000 missing
persons records stored in
Person Finder by the end of
the Haiti earthquake.
And in addition, the American
Red Cross collected $32
million with an SMS text
campaign in the US.
This is the largest amount
ever collected by one
organization for a
single campaign.
Finally thousands of
contributions were made to
both Open Street Map and Google
Map Maker, and as we
said, Open Street Map ended up
being the most complete map of
Haiti's roads ever.
And so a group of us decided
to travel to Haiti, to
actually understand the
situation on the ground and
see where else technology
could have a key role.
And I wanted to give two
small examples from
our trip down there.
This is a doctor writing patient
care instructions on
the floor outside of
a patient's room.
And this is a map of a field
hospital's resources.
Now, these were areas where we
believe that technology really
could help out.
So upon returning back to
Google, we actually spoke to
Larry and Sergey--
Google's founders--
and we pitched this
idea of a team.
A Google.org Crisis
Response Team.
And it would be modeled very similarly to other Google.com
project teams, with core
engineering and product support.
And our mission as a team would
be to make this crucial
information more accessible
during natural disasters and
humanitarian crises.
Now Larry and Sergey were not
only supportive, they were
extremely enthusiastic at the
possibilities that technology
could have in this space.
We knew that we'd never be
domain experts here.
But what we did know is
that we understood web
technologies, and we wanted to
bring that understanding to
assist aid organizations and
victims after a disaster.
So before I go into any more
detail, I wanted to invite my
colleague Steve up to stage to
talk about how our team has
evolved over the last year and
a half, and how the Crisis
Response ecosystem
has developed.
Steve.
STEVE HAKUSA: Thanks Prem.
So Prem just showed you a number
of internet-based tools
that had a measurable impact
on the response in Haiti.
We think this is just
the beginning.
At Google, we're excited that a number of organizations see
the Haiti earthquake as the
start of crisis response 2.0,
as a turning point in using
widely accepted consumer
technology to mitigate crises.
Google is looking to do much
more in this space, working
closely with a number of other
organizations that are doing
similar great things.
But this isn't enough.
We also need your help.
Because crises aren't
going away.
We've responded at Google to more crises in the first quarter of this year than in all of last year combined.
Now is the time to jump
in and get started.
I want to give you an idea of
some of the problems and
challenges that we see in this
area, both for us and for
developers like yourselves.
And then show you how we're
using openly available Google
technology and web standards
to be able to help.
The first question you might
have, though, is does the
internet even work
after a disaster?
And it might be surprising, but
in every single crisis--
major crisis-- that we've
seen since Haiti, the
internet has stayed up.
And how do we know this?
In three different ways.
The first is local ISPs, which
continue to broadcast routes
from their edge routers.
We pick these up.
And after a disaster, we
still see them happen.
The second is traffic
to Google sites from
the affected areas.
And third is the stories that
we hear of people on the
ground using the internet
as a critical tool for
communication in the immediate
aftermath of a disaster.
The internet was designed
to be resilient, to take
advantage of robustness and
route packets along the path
of least resistance.
We see people taking
advantage of that.
Haiti was a worst case scenario,
an impoverished
island with poor infrastructure
and low
internet penetration.
The earthquake actually severed
the only fiber cable
coming into the country, leaving
just a microwave radio
link to the Dominican
Republic.
And with the scope of
the disaster we
couldn't hope for much.
But fortunately, internet and
telecom companies were better
prepared, and they survived
pretty much intact.
This is a graph of traffic
to Google from Haiti
on January 12, 2010.
You can see that when the
earthquake hit, there was a
large power outage and traffic
to Google dropped.
But you'll notice,
importantly, it
didn't drop to zero.
The internet stayed up.
We heard numerous stories of
people on the ground using
Smartphones, getting on Facebook
and Twitter, telling
friends and family that they
were OK, and providing some of
the first on-the-ground news
reports of the level of
destruction there.
Because of the situation in Haiti, it took four months for traffic to Google to return to pre-earthquake levels.
But the internet
was always up.
Just a couple of weeks after the
disaster, we heard people
in the Petionville golf course
camp, that Prem showed
you pictures of before, watching
YouTube videos.
The internet was there.
This is a similar graph from Chile, February 27, 2010. This graph looks a little different. Each peak and valley represents a single day: traffic rises during the day and then drops at night.
The red line represents an 8.8
magnitude earthquake that
struck there in the middle
of the night.
There was a resulting tsunami
that hit a major network hub
in Santiago.
We actually did stop seeing routes being broadcast from the ISPs. But only for 15 minutes.
After that, the internet was
back up, and you see within a
week, traffic had returned
to pre-earthquake levels.
Here's the situation from
New Zealand, after the
Christchurch earthquake just
this past February.
You can see when the earthquake
hit there was
barely an effect on
internet traffic.
We had conducted numerous
interviews with people on the
ground in Christchurch.
And they said that after the earthquake, phones were saturated. They would not work.
SMS wouldn't be delivered
at all, or be
delivered hours late.
But the internet, the
internet was fine.
People said email was the only
way to communicate after the
disaster in Christchurch.
Vint Cerf, one of the founders
of the internet, says, "The
Internet was designed with
a significant degree of
resilience built in.
And I am pleased to know that
it has often served as an
important platform for
communication during
emergencies." Now Vint also
wanted me to point out that
this resilience shouldn't be
confused with the ability to
resist determined
attacks, right?
But I think the point here is
clear, that the immense value
that the internet brings in
communication and data sharing
also applies to disaster
situations.
So given that, the next question
to ask is, what
problems are there?
We look at that and say, who
are our users and what are
their needs?
We see three main
groups of users.
First are those directly
affected by the crisis.
Then their friends, families,
neighbors,
those indirectly affected.
And finally, the crisis
responders and aid workers who
arrive to help.
We're looking at the needs of
each of these groups, and
particularly where
they overlap.
For those directly affected by
the crisis, the most important
thing is the status of their
friends and family.
Getting access to information
in a crisis is challenging,
specifically trustworthy
information.
They also want to know, how do
they tell other people that
they're OK?
Where can they go for help?
And what's the status of
resources, like medical
facilities, power,
roads, public
transportation, shelters?
And finally, those directly
affected by the crisis are
often the first line of
crisis responders.
They want to be able to help,
but I think technology can do
a better job of telling them
where they should be to help.
For those indirectly affected by
the crisis, again, the most
important thing is what is the
status of my friends and
family that are affected?
They also want more detailed
information from the ground,
reports from victims.
What is happening?
These people also want
to be able to help.
They each often have unique
knowledge of the language, the
culture, the geography
of the area.
You saw that Open Street Map
example, tapping into the
Haitian Diaspora to improve
the base maps in Haiti.
And we think there's a lot of
other ways to leverage this
highly-motivated group of
individuals using technology.
And the needs of crisis responders are actually where developer tools might help the most. Most crisis responders
don't have technology
backgrounds.
But at the same time, they have
to be able to collect and
analyze a lot of data to
figure out how best to
distribute aid.
They need better tools to visualize survey data, determine what other responders in the area are doing and how they can share data with them, figure out where their own staff and supplies are, and make sense of the torrents of crowdsourced and social media data that's coming in. I think there are better tools that we can design to help with these needs.
You might already be thinking: what could someone with my background do? What are some of the things that you could do?
How could you build some tools
to help these people and help
meet these needs?
I want to walk you through some
of the standards and some
of the tools that our
team is working on.
You'll notice three common themes
in the standards that I'm
talking about.
First, simple.
Crises obviously are stressful
situations.
People aren't thinking
straight.
As much as possible, the tools
that you work on and that get
used in a crisis have
to be simple.
And even better if
they're familiar.
They use things that people
already use in
their everyday lives.
Second is standard.
As much as possible, we want
to use and promote common
standards to enable different
organizations to
inter-operate.
Finally, open: open data licenses, open APIs, open source code. Open systems really win in this space.
Person Finder is a great example
of these three themes.
It also meets one of the key
needs that we saw from
multiple user groups:
finding the status
of friends and family.
Now Person Finder is an App
Engine application, but more
important is the data standard
that underlies it, Person
Finder Interchange
Format, or PFIF.
PFIF was created after Hurricane
Katrina by Ka-Ping
Yee, one of Google.org's
first engineers.
And again, as Prem mentioned,
the major design principle
behind PFIF was convergence,
bringing data together from
multiple different sources
and systems.
For something like this to work,
data has to be traceable
back to its original source
because data might have
unknown reliability
or accountability.
PFIF also assumes that there's
no central authority, so if
you're an aggregator of this
data, you can decide which
data sources to trust.
But in a system like this,
duplicates are inevitable.
You're going to have missing
person records
for the same person.
So the format has to be able
to handle that and resolve
duplicates.
The data model for Person
Finder is very simple.
There's just two types
of records.
First is, a person record.
This is entered by someone who's
looking for a missing
person, or you can actually
enter a person record for
yourself if you want other
people to know that you're OK.
Then there's a note record.
And this is entered by anyone
else that has a status update
about that person.
There can be multiple note
records for every person
record, and note records are
also used to resolve
duplicates between different
persons.
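To make the data model concrete, here is a sketch of roughly what a PFIF person record with one attached note looks like, based on my reading of the PFIF 1.2 specification hosted at zesty.ca/pfif. The IDs, names, and dates are invented; treat the published spec as authoritative.

```typescript
// A hand-written PFIF 1.2 example (illustrative values only).
// A <pfif:note> carries a status update and points back at its
// person via person_record_id, which is how notes from many sources
// attach to one person and how duplicates get linked.
const pfifExample: string = `<?xml version="1.0" encoding="UTF-8"?>
<pfif:pfif xmlns:pfif="http://zesty.ca/pfif/1.2">
  <pfif:person>
    <pfif:person_record_id>example.org/person.1001</pfif:person_record_id>
    <pfif:source_date>2010-01-16T08:30:00Z</pfif:source_date>
    <pfif:author_name>A. Relative</pfif:author_name>
    <pfif:first_name>Jean</pfif:first_name>
    <pfif:last_name>Example</pfif:last_name>
    <pfif:note>
      <pfif:note_record_id>example.org/note.2001</pfif:note_record_id>
      <pfif:person_record_id>example.org/person.1001</pfif:person_record_id>
      <pfif:entry_date>2010-01-17T12:00:00Z</pfif:entry_date>
      <pfif:author_name>A. Neighbor</pfif:author_name>
      <pfif:found>true</pfif:found>
      <pfif:status>believed_alive</pfif:status>
      <pfif:text>Saw Jean at the Petionville camp this morning.</pfif:text>
    </pfif:note>
  </pfif:person>
</pfif:pfif>`;
```

Note how the record IDs are prefixed with the originating domain: that is what keeps every record traceable back to its original source when data from many sites is aggregated.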
Person Finder has a data API.
It's built on XML,
REST, and Atom.
Simple, powerful standards
I'm sure many of you
are familiar with.
And they're used in many Google tools.
The search API allows you to
enter all or part of a
person's name and get back the
person and note records for
that person.
There's a read API where
you can enter the
identifier for a person.
Or get an Atom feed of all
the person records or
all the note records.
And then there's a write API.
You can send person records and note records as XML in an HTTPS POST, along with an authorization token.
Any of you can try out this API
right now by going to our
test instance at
googlepersonfinder.appspot.com.
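Here's a minimal sketch of what calling the search API might look like. The exact endpoint path, the repository name, and the parameter names here are assumptions based on the documentation of the time, so verify them against the project site before relying on them; you'd also need to provision your own API key.

```typescript
// A minimal sketch of querying the Person Finder search API.
// Path shape and parameter names are assumptions -- check the docs.
const BASE = "https://googlepersonfinder.appspot.com";
const REPO = "haiti";        // assumed repository name
const API_KEY = "YOUR_KEY";  // search and write access require a key

async function searchPerson(name: string): Promise<string> {
  const url =
    `${BASE}/${REPO}/api/search?key=${API_KEY}&q=${encodeURIComponent(name)}`;
  const resp = await fetch(url);
  if (!resp.ok) throw new Error(`Person Finder returned ${resp.status}`);
  // The response body is PFIF XML: person records with nested notes.
  return resp.text();
}

searchPerson("Jean Example").then((xml) => console.log(xml));
```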
The user interface to Person
Finder is fast and simple.
If you're searching for someone, just say "I'm searching for someone."
You enter their name.
You'll get all the records that
match this person's name.
Find the correct record.
On the left-hand side you'll
see the person record.
On the right-hand side you'll
see all the note records for
that person.
You can also enter a new record,
a new note, if you
have a status update
for that person.
Person Finder is an open source
project hosted at
code.google.com at Google
Person Finder.
If you'd like to help, you can
go to the site, check out the
code, play with the API, or
maybe pick up an issue from
our issues list. There's also
a mailing list that's
available if you have
any questions.
The next set of user needs that
we're looking at is the
status of resources.
And in Haiti, we saw a number
of different organizations
trying to collect this
information.
The US Army was surveying
the status of roads.
The World Health Organization
was trying to figure out which
hospitals were open.
And the Red Cross was tracking
the status of what shelters
were being built and where.
This information was useful
to crisis responders.
And we found that they were sharing it by emailing each other spreadsheets.
With this method, crisis responders spent a lot of time merging rows together, copying and pasting between spreadsheets, doing a lot of things that a database could do.
Not only is this inefficient,
but it doesn't keep up with
the rapid pace of change
in a crisis situation.
So what if we could
automate this?
Both the merging of data in
spreadsheets and also the
exchange of data between
organizations.
We can make it a lot faster
and simpler for people to
understand the current status
of resources in a disaster.
We envisioned a project called Resource Finder, with the goal of enabling structured updates to information about resources in near real-time from multiple sources.
So what's the simplest possible
way to do this?
We start with multiple sources,
each with information
about one specific
type of resource.
If anyone wants to add something, remove something, or change something, they would publish that change.
The change could then be
broadcast to anyone who's
interested in receiving it.
Now for this to work, everyone
needs to have the same
identifier for a resource,
call that a record.
And users will also want to
know which source made the
change, right?
So you also have to tag each
change with an author.
Finally, these changes may come
out of order, so they
also need a time stamp.
That's all you need for a change: you just know what changed, who changed it, and when it changed.
Imagine a stream of
these changes.
If you have that, you should
be able to just merge them
together to create the current
status of all the resources.
Take a specific example of hospital facilities. Say the Army knows the status of road access from a survey they did in March. The Red Cross knows which hospitals are open and how many beds are available from a survey they did last month. And maybe a specific doctor knows a road washed out yesterday, and now more patients are arriving.
Using the author information and the timestamp information, you can merge all the columns together and merge all the rows together to get the current status of all the resources.
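Here's a toy sketch of that merge idea; this is not the actual Resource Finder code, just an illustration under the assumptions above, with the hospital example's values invented.

```typescript
// Toy illustration: each change names a record, a field, a new value,
// an author, and a timestamp. Replaying changes in time order yields
// the current status of each resource.
interface Change {
  recordId: string;        // shared identifier for the resource
  field: string;           // e.g. "beds_available", "road_access"
  value: string | number;  // the new value
  author: string;          // which source made the change
  timestampMs: number;     // when the change was made
}

type FieldState = { value: string | number; author: string; timestampMs: number };

function mergeChanges(changes: Change[]): Map<string, Map<string, FieldState>> {
  const state = new Map<string, Map<string, FieldState>>();
  // Apply oldest first, so the newest change to each field wins even
  // when changes arrive out of order.
  for (const c of [...changes].sort((a, b) => a.timestampMs - b.timestampMs)) {
    const record = state.get(c.recordId) ?? new Map<string, FieldState>();
    record.set(c.field, { value: c.value, author: c.author, timestampMs: c.timestampMs });
    state.set(c.recordId, record);
  }
  return state;
}

// The hospital example from the talk, with made-up values:
const current = mergeChanges([
  { recordId: "hospital-17", field: "road_access", value: "open",
    author: "us-army", timestampMs: Date.parse("2010-03-05") },
  { recordId: "hospital-17", field: "beds_available", value: 40,
    author: "red-cross", timestampMs: Date.parse("2010-04-10") },
  { recordId: "hospital-17", field: "road_access", value: "washed out",
    author: "dr-example", timestampMs: Date.parse("2010-05-09") },
]);
console.log(current.get("hospital-17")); // road_access is now "washed out"
```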
And imagine multiple
applications publishing and
subscribing to this information,
using an open
protocol like PubSubHubbub, which was announced at Google I/O last year.
Some of these subscribing
applications could send email
or SMS when a hospital
is out of beds.
Or maybe it's just an online
spreadsheet that your
organization wants to
keep up-to-date.
Maybe it's an instance of
Ushahidi that I want to keep
up-to-date.
We built a project called
Resource Finder on App Engine.
And we focused initially on
hospital facilities in Haiti.
It turns out this actually
wasn't that well adopted.
Organizations liked receiving
this information in near
real-time, but they didn't
want to have
to learn a new tool.
So we're rethinking
our work here.
We're trying to extract the
core ideas from Resource
Finder into a protocol that we
can integrate into tools that
are already being used
on the ground.
We're calling this Tablecast.
The name comes from the fact
that most data is in tables,
and we want to be able
to broadcast it.
It's an Atom extension.
It represents a stream of
changes to a data set.
And since it's Atom, it also
integrates very well with a
PubSub protocol like
PubSubHubbub.
And just like before, it represents what changed on what record, who changed it, and when it changed.
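To give a feel for the shape of it, here is a sketch of a single change as an Atom entry. Atom's own author and updated elements carry the "who" and "when"; the extension elements shown here are hypothetical stand-ins for "what changed on what record," since the actual element names and namespace are defined by the spec at Tablecast.org.

```typescript
// Purely illustrative: one Atom entry for one change. The tc:*
// elements and their namespace are invented placeholders -- consult
// tablecast.org for the real extension vocabulary.
const tablecastEntry: string = `<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:tc="http://example.org/tablecast">
  <id>urn:example:change:123</id>
  <updated>2011-05-11T14:03:00Z</updated>      <!-- when it changed -->
  <author><name>redcross.org</name></author>   <!-- who changed it -->
  <tc:row>example.org/hospital-17</tc:row>     <!-- which record -->
  <tc:field name="beds_available">12</tc:field> <!-- what changed -->
</entry>`;
```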
We're still actively developing
this application.
We're looking to build libraries
and tools to
integrate this into
what's already
being used on the ground.
If you're interested in this
specification, or want to
participate in the working
group, you can check out
Tablecast.org.
The next group of user needs we're looking at is access to public alerts and warnings.
The problem we see here is that
many organizations that
produce these alerts use their
own format and their own
distribution mechanisms.
Some publish alerts to their website.
Some have a Twitter feed,
or an RSS feed.
Some offer email
subscriptions.
Some don't have information
like this online at all.
It would obviously be much
better if all this information
was online in a standard
format.
It turns out, there is such
a format called the Common
Alerting Protocol.
It's an XML specification that
normalizes formats across many
different types of
alert messages.
It offers flexible targeting.
Things like language, category,
geographic area.
CAP was developed in conjunction
with over a
hundred emergency managers.
And it's been adopted as an
international standard by
OASIS and the ITU.
Organizations like the National
Weather Service and
US Geological Survey already
produce alerts--
feeds of alerts--
in the CAP format.
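For a sense of what CAP looks like, here is a minimal alert trimmed down to the core fields; the identifier, sender, times, and area are invented for illustration, and real alerts would come from authorized senders.

```typescript
// A minimal CAP 1.2 alert with invented values. The required top-level
// fields identify the message; the <info> block describes the event
// and supports targeting by category, urgency, severity, and area.
const capAlert: string = `<?xml version="1.0" encoding="UTF-8"?>
<alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
  <identifier>example.org-2011-0001</identifier>
  <sender>alerts@example.org</sender>
  <sent>2011-05-11T08:00:00-10:00</sent>
  <status>Actual</status>
  <msgType>Alert</msgType>
  <scope>Public</scope>
  <info>
    <category>Geo</category>
    <event>Tsunami Warning</event>
    <urgency>Immediate</urgency>
    <severity>Extreme</severity>
    <certainty>Likely</certainty>
    <headline>Tsunami warning for coastal areas</headline>
    <area>
      <areaDesc>Island of Hawaii coastline</areaDesc>
    </area>
  </info>
</alert>`;
```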
We're working with many other
organizations to standardize
their alerts as CAP.
And we've open sourced a Java library and some tools to help
get alerts into the
CAP format.
We're also trying to standardize
how alerts are
distributed across the web.
We're encouraging providers of
alerts to produce Atom feeds
in the CAP format.
We're going to aggregate them
at a common instance of
PubSubHubbub that we're
calling Alert Hub.
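If you want to receive those alerts, subscribing over PubSubHubbub is a short request. Here's a sketch of a 0.3-era subscription; the feed and callback URLs are placeholders (pubsubhubbub.appspot.com is the public reference hub, and an Alert Hub endpoint would be published separately).

```typescript
// A sketch of a PubSubHubbub subscription request. After this, the hub
// verifies the subscription by calling back hub.callback, then pushes
// new feed entries to that URL as they arrive.
async function subscribe(hubUrl: string, topicFeed: string, callback: string) {
  const body = new URLSearchParams({
    "hub.mode": "subscribe",
    "hub.topic": topicFeed,    // the CAP Atom feed to follow
    "hub.callback": callback,  // where the hub pushes new entries
    "hub.verify": "async",
  });
  const resp = await fetch(hubUrl, { method: "POST", body });
  // 202 Accepted is the expected "verification pending" response.
  console.log(`Hub answered ${resp.status}`);
}

subscribe(
  "https://pubsubhubbub.appspot.com/",        // public reference hub
  "https://alerts.example.org/cap-feed.xml",  // placeholder feed URL
  "https://my-app.example.org/push-endpoint", // placeholder callback
).catch(console.error);
```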
The final standard I want
to mention is KML.
I'm sure many of you are
familiar with it.
Our goal here is to get
information related to a
disaster on the web in open
formats before the next
disaster strikes.
One tool that we found to be
particularly helpful with this
is Google's Fusion Tables.
There's a session in this room
at 1:15 about Fusion Tables
that I would encourage
you all to attend.
Fusion Tables have some
awesome features.
There are two things that we like about it the most. The
first is that it's
really simple.
If you have a spreadsheet that has a column for latitude and a column for longitude, or just one column for addresses, you just upload that spreadsheet to Fusion Tables, and you immediately have a map that you can share across the web.
The next thing we like about Fusion Tables is that it scales.
It scales to tens of thousands,
hundreds of
thousands of rows
in your data.
And it also scales in terms of
queries per second for people
accessing your data.
We see this kind of traffic
on our landing pages.
And we've had situations where we haven't been able to link to data on somebody else's website, even if it was really good, because we were afraid of bringing that site down.
If the data is on Fusion Tables,
we don't have to worry
about that.
Fusion Tables also works
really well with
the Google Maps API.
This code's actually from one of
the pages that we launched
responding to a crisis.
You can see it's actually
really easy.
In Fusion Tables, the URL has
an ID for a Fusion Table.
And inside the Maps API, there's
something called a
Fusion Table Layer.
It's a first-class
citizen there.
You just plug in the ID to
create a Fusion Table Layer,
set the map to that layer, and
you automatically can render
thousands of points,
server side, just
like Google Maps does.
This also works really well
for polygon data.
So you can display heat maps
and things like that.
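Here's a sketch of the pattern just described, in the Maps API v3 of that era. The table ID is a placeholder, the constructor signature varied across API versions, and FusionTablesLayer has since been retired along with Fusion Tables, so treat this as illustrative rather than copy-paste.

```typescript
// Sketch only: render a Fusion Table as a layer on a Google Map.
declare const google: any; // loaded via the Maps JavaScript API <script> tag

function showShelters(mapDiv: HTMLElement): void {
  const map = new google.maps.Map(mapDiv, {
    center: new google.maps.LatLng(18.54, -72.34),
    zoom: 8,
    mapTypeId: google.maps.MapTypeId.ROADMAP,
  });
  // Plug the Fusion Table's ID into a layer. Points are rendered
  // server side, which is what keeps thousands of rows fast.
  const layer = new google.maps.FusionTablesLayer({
    query: { select: "location", from: "YOUR_TABLE_ID" },
  });
  layer.setMap(map);
}
```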
Fusion Tables is really
powerful and really
well-integrated into
our tools.
So to sum up some of the
standards that we've been
talking about.
Person Finder Interchange
Format for
missing person's data.
Tablecast for enabling real-time updates to tabular data. The Common Alerting Protocol for public alerts and warnings.
And KML for geographic data.
We want to keep things simple.
We want to promote
these standards.
And we want to promote
openness.
Open APIs, open data,
open source code.
Open systems win.
[APPLAUSE]
PREM RAMASWAMI: Thanks Steve.
So Steve just gave us a deep
dive into a lot of the tools
we've been working on over
the last year and a half.
And what I wanted to talk about
was how these were all
very useful during a recent
crisis response.
On March 11 of this year, there was an 8.9 magnitude
earthquake that struck
northern Japan in
their Sendai Province.
Now this earthquake was so large
that the earth literally
moved four inches on its axis.
And the island of Japan moved
six feet closer to California.
We immediately sprang into action and wanted to collect imagery over the affected region.
The video you're seeing playing,
the top part is
imagery from 2009.
The imagery at the bottom is
aerial imagery we collected
with a fleet of planes.
Now this imagery had enough
resolution to
see individual cars.
But more importantly, it showed
us the level of damage
and destruction that
had been created.
Entire cities were completely
wiped off the map.
Now seeing this level
of devastation,
we had a real question.
Would the internet be
able to withstand a
disaster of this magnitude?
And so we turned to our traffic graphs again.
Now you can see a slight dip in traffic occur at the time of the earthquake. But this rebounds back to pre-earthquake levels almost that same day.
And it wasn't just graphs.
There were stories from the
ground that told us this.
This is an open letter of thanks
we received from Ruth
Shiraishi, a Google user
in Tokyo, Japan.
And Ruth told us that on March
11, our technology helped
change her life.
So in her words, at 2:46 PM
when the earthquake struck
Japan, our phones would
not connect.
Our desktop email systems
would not go through.
Our phones' SMS would not work.
But Gmail connected.
Gmail connected us with each
other in the company, to our
loved ones around Tokyo, and
worried family and friends
around the world.
And it wasn't just Gmail.
It was other internet sites.
It was Twitter.
It was Facebook.
It was the power of the internet
to connect people in
the aftermath.
Now we knew people were going online to find information, and we knew there was also a tsunami alert that had been issued for most of the Pacific Rim countries. And so Google decided to use Google.com-- which gets billions of eyeballs every day-- as an alerting platform.
So we launched this line of text
that was a tsunami alert
in four different languages--
Spanish, English, Japanese,
and Russian--
to warn users that a tsunami was
approaching their areas.
Now we said that we knew people
come online to find
this information.
I wanted to prove
that with data.
So these are our search queries
for the specific term
tsunami coming from the island
of Hawaii over the last year
and a half.
And you see two very distinct
peaks in this graph.
The first one occurs on February
27, after the Chile
earthquake and subsequent
tsunami warning.
And the second one occurs on
March 11, after the Japan
earthquake.
Let's zoom in on March 11.
So again, these are specific
queries related to tsunami
coming from the island
of Hawaii.
At around 7:56 PM, the Pacific
Tsunami Warning Center issues
a tsunami warning.
And you start seeing a slight uptick in queries, which turns into an extremely large peak.
Now this dies down as people
fall asleep, but you see a
secondary peak occur
around 3:00 AM.
And the reason this is important
is because at 3:11
AM the first tsunami
wave hits the
westernmost island of Kauai.
So people are coming online to
find information and see how
this affects them.
Now this peak dies down again
at 4:08 AM after the tsunami
wave leaves Hawaii and starts
heading towards the west
coast of the US.
Now this peak represents over
20% of queries from the island
of Hawaii, which is an extremely
large number.
And it's not just graphs
that tell the story.
This is a Reddit post that said, "I don't know if this is new, but this is a Google feature I can really get behind."
And there's a comment on the
bottom from a user whose
mother lives in Hawaii.
And they note that, because they
opened their browser in
the morning, they were aware
that there was a
tsunami alert issued.
Now Steve pointed out one of
the key needs from people
after a disaster is
getting in touch
with friends and relatives.
And so we once again launched
Google Person Finder.
Now over the course of the last year and a half, the tool had been pre-translated into 40-plus languages.
So we were able to launch it in
Japanese within hours after
the earthquake.
Over the course of the first few
weeks, Person Finder ended
up aggregating over 600,000
missing persons records.
We partnered with major organizations like NHK, the national broadcaster in Japan, as well as government bodies like the Japanese National Police.
In the first seven days alone,
we were able to push 22
different releases
to Person Finder.
And this included features such
as the ability to search
in local Japanese
character sets.
The ability to be compatible
with the local tier two phones
in Japan, as well as features
like being able to subscribe
to a specific record.
And this is because App Engine
gives you this great ability
to rapidly prototype.
And web applications also allow
you to quickly make
these changes on live tools.
This is actually our App Engine
dashboard a few hours
after the tool was launched.
And what you're seeing
is a peak in users
coming to the site.
This actually peaks
at a little below
1,500 queries per second.
In terms of analytics, that's
about 36 million page views in
the first two days alone.
Now what's important here is
App Engine can elastically
grow to support this peak.
Our engineers didn't have to build a tool that could scale like this.
And the fact is, crisis tools need this sort of elastic capability. The tools often lie dormant for a long time until a crisis occurs.
We again launched the landing
page to aggregate information
like Person Finder, donations,
relevant maps, as well as
real-time user generated
content.
But we also had alerting information on this page, such as the rolling power outages that were occurring throughout Japan, as well as transit outages.
Now, I wanted to point out that one of the sites we linked to from this page-- again, it's not just Google-- was sinsai.info. And this is the local Ushahidi platform. Again, Ushahidi was that incident reporting platform. What's interesting, though, is that this instance was launched by a group of local volunteer developers.
Because the tool was simple to
use and open source, local
developers could pick it up,
translate it, and launch it
for the Japanese public.
And we also wanted to make other mapping information available. So as one example, we decided to put shelter information on a common map.
Now shelter information was
strewn across numerous
government websites, state
government websites in formats
such as PDF.
So we scraped this information,
pushed it to a
Fusion Table, and then made it
available in one location.
Now it's really important to see
information like shelter
information on a map so you
know which shelter is
geographically closest to you.
There was something else
interesting that was
occurring, though, at
these shelters.
People were maintaining written
lists of people who
had come to the shelters, and
posting it on common walls at
these shelters.
Now this was essentially their
version of Person Finder, and
it was again siloed
per shelter.
And we wanted to be able to
sync this up to the online
Person Finder tool.
So a group of engineers and
product managers in our office
in Japan decided to get
volunteers to go to the
shelters, take pictures of the
lists, and upload them to a
simple-to-use common
Picasa album.
And then we had volunteers
come and transcribe these
specific photographs and the
names on them, and post them
to the comments, which
we then pushed
back into Person Finder.
Over the course of a few weeks,
we ended up processing
over 9,000 photographs and
updating 137,000 names in
Person Finder.
This was possible because we
had thousands of volunteers
from outside of Google who
were able to help us out.
And we were able to manage
that queue by building a
simple tool that handed out
tasks to them over App Engine.
Again, I said this was possible
because we had this
great team of product managers
and engineers in Tokyo who
worked tirelessly
for three weeks.
This is them taking a pizza
break a few days after the
earthquake.
And it was really inspirational
work.
They did over 40 launches.
But at the end of the day, any
of you as developers could've
taken part.
Because most of the work that they did was simple, using standards and open source tools.
But we'll get back to that
point in a second.
I wanted to point out what it is that Google is looking to do next.
So this is an image from Google Maps of me searching for "tsunami Hawaii" in the aftermath of the earthquake.
And you can see the first result
is the Pacific Tsunami
Museum Incorporated.
Not very useful.
So we want to be able to get
this alerting information
across all the properties on the
web that people go to to
find information.
This is that same Reddit post.
And this is a comment at the
bottom of someone saying that
they didn't see the alert on
their mobile phone.
Why didn't their mobile phone
wake them up if they were
asleep on the beach?
Our landing page required
contributions from dozens of
Googlers around the globe
to get all this alerting
information online and
in a common format.
How can we get this information in advance of the disaster, so that we don't have to do all the work after the disaster to get it out there?
And so this is where I wanted
to pose a challenge to the
audience out there.
At the end of the day, you're all developers with a great understanding of web tools, whether that be mobile, the Maps API, Fusion Tables, or even App Engine.
And you really do have an
ability here to take part and
make a difference.
And I wanted to give a small
story of two web
developers in Pakistan.
So these two guys were a little
annoyed that India had
better base maps than Pakistan
did on Google Maps, and they
decided they were going to
do something about it.
So they got a bunch of friends
together, and they said we're
going to start mapping
out Pakistan, and
we'll start with Lahore.
Now the time-lapse behind me gives you an idea of how this map grew, starting in June 2008. It gets filled in as volunteers add data.
This is completely crowd-sourced
data collection.
Now these two developers ended
up being the top Google Map
Maker mappers in the world.
But why this is important is
last August, Pakistan was
inundated with floods
that covered
over 20% of the country.
And Google Map Maker had better
base maps than the
Pakistani military did.
We were able to share this
data with [? UNICAP ?],
the UN's mapping organization,
which was then able to help
aid organizations deliver aid from point A to point B.
These are two web developers
with an idea to get friends
together that ended up
saving tons of lives.
And it's not just Google
in this space.
You can actually join the Crisis Mappers Network, which is a group of volunteers that gets together and maps out areas in need of assistance.
We talked about how
useful Open Street
Map was after Haiti.
There are also specific
tech organizations.
We talked about Ushahidi,
InSTEDD, and FrontlineSMS. All
three of these organizations
have open source code bases
and very large issue trackers
that could use developers like
you to help them out.
And to get your feet wet, you can always join a local crisis camp, held by Crisis Commons in cities around the world.
Now these crisis camps bring
together aid organizations as
well as developers like yourself
to discuss problems
and possible solutions.
Google, Microsoft, the World
Bank, Yahoo, and NASA all run
the Random Hacks of
Kindness together.
And RHoK is a 72-hour global hack-a-thon, run in cities across the globe, that looks to bring subject matter experts and aid organizations to the same location to pitch problems, so developers can rapidly prototype and create solutions.
Now, don't worry-- over those 72 hours, there will be a lot of pizza and Red Bull there, too.
And we actually have RHoK #3
running on June 4 and 5.
This is RHoK #3.
It's actually the fourth RHoK,
but as every good geek knows,
you start counting from zero,
so that's why it's RHoK #3.
I wanted to give an example
though of where a RHoK hack
was super useful.
So there's this hack called
Chasm that came out of RHoK,
and it's to identify areas that
are prone to landslides.
There was this huge equation
that a World
Bank engineer had.
And you either needed to have
a PhD in math or a PhD in
geology to actually understand
how to use it.
Now what a bunch of developers
did was, they took that
equation and they created
a simple web
application around it.
So aid workers could actually
put this information in and
know where to dig trenches to
avoid future landslides.
It's a great example
of something you
can do in 72 hours.
Finally, there are a lot of non-tech organizations out there that could use your help.
And all crises are local.
I would encourage you to go talk
to your local Red Cross,
your local fire department,
or your local sheriff's
department and see where they
could utilize your help.
Understand their processes.
Help them switch from using
pen and pencil to using
internet technologies to solve
some of their problems.
Maybe you could help get local
shelter information into
Fusion Tables, local evacuation
route data into KML
before the disaster
actually strikes.
Maybe you can help your local
sheriff's department send out
alerts using a format like the
Common Alerting Protocol.
I wanted to conclude by giving
you one final example of
helping out in a
local disaster.
First Responder is a tool that
helps fire chiefs understand
the location of their staff
before, during, and after a
fire response.
Now it was built by John Riley,
who's a volunteer
firefighter.
And John definitely doesn't
consider himself a web
developer, but he's dangerous
enough to code.
So what John did was, he built
a tool that uses Google Voice
to accept incoming calls, then
uses Google Latitude to track
the location of all
the firefighters.
It uses the Google Maps API
to give them turn by turn
directions.
It uses App Engine to log
when everyone shows up.
And it also actually makes
sure that everyone is
accounted for after
the fire response.
It's this great example of using
simple, standard, and
open to solve a very important
local problem.
First Responder is actually an
open source tool that you can
find on code.google.com.
So I wanted to conclude by
conveying to you my real
excitement in this space, and
the possibilities that
technology has here.
I really believe we are at
this turning point where
technology--
widely accepted consumer
technologies--
can really help alleviate suffering after a natural disaster.
I think that all of us, whether
we're geeks, or
developers, or citizens just
interested in doing good,
really have a chance to make
a difference here.
I wanted to thank you
all for your time.
Steve and I will be here for
the next 10 minutes if you
have any questions.
There are two mics set up
on the right and left.
We'd be happy to answer them.
Thanks again.
[APPLAUSE]
PREM RAMASWAMI: No questions?
AUDIENCE: Hello.
I was wondering if you could
just talk a little bit about
how you sort of alluded to
things like Ushahidi and other
organizations.
Talk a little bit about
verification.
You know you have this great
crowd source data.
You mentioned the Open
Street Maps in Haiti.
All of this stuff, you know, I'm
from a media organization,
and the issue of how do you
know how reliable this
information is, you know,
at different stages.
You have the source and the
output, but translating that,
just, how do you guys
approach that?
PREM RAMASWAMI: So this
is a great question.
So the question is basically,
how do you make sure that your
sources are correct
and verifiable?
And the fact is, this is
an open area right now.
So Ushahidi for example tries to
get more than one record so
that they can verify that
something is happening there.
Open Street Map and Map Maker
use a version of moderation so
other people with local
knowledge can say yeah, this
is probably right.
Now what's interesting, though, is that the power of the crowd is really able to reduce the amount of spam or false reports that you see in a lot of these cases.
Another good thing is, humans
tend to be good in these
situations and you don't see
many of these cases.
But they do exist. And I think
this is also a case where
local aid organizations do
play an important role of
providing that verification,
to make sure what's being
provided is truthful.
AUDIENCE: This is a
short followup.
I'm wondering if you automate
the threshold for then
publicizing information.
In other words, you said
Ushahidi does multiple things.
Like at what stage do you say,
OK, and is that programmatic?
You know, how do you define, you
know, sort of in advance
of it, of how do you find, OK
what's our threshold for
saying, we think this is a valid
entry, we think this is
not a valid entry?
PREM RAMASWAMI: So you
can use a lot of
signals from the internet.
So there's actually this great story of this group from MIT that won the Red Balloon Competition that was posted by DARPA.
And this was to find red
balloons that showed up
randomly around the country.
And they used things like the location where the information was coming from, to see whether, you know, the guy who says he knows where the balloon is in California was actually sending it from Oklahoma and was probably lying to them or not.
So you can use signals like
this, and I think each
organization uses different
levels of signals and a
different threshold on where
they consider it
OK to publish this.
Now I should point out that,
in the case of publishing
things like alerting data,
that we're interested in
doing, we want to make sure that
comes from an authorized
source-- a government agency
or someone who actually has
knowledge of the situation.
And that we're just a
dissemination channel for that
information.
Thanks.
AUDIENCE: It seems like your
tools are after the fact.
What are you doing before the
fact, in terms of having
people preregister, or a
Facebook app where you can
register yourself with the
Google database so when a
disaster does hit your area,
your friends know how to reach
you and know that they have
the right record?
PREM RAMASWAMI: So there is a
lot of work done by different
groups on things like making
sure you have an in case of
emergency contact in your
phones, knowing what to do, or
pre-education in
the aftermath--
before a crisis actually
happens, like what to do when
an earthquake happens,
things like that.
One of the things that we're
doing specifically as Google
is we're trying to get alerting
information out there.
So a key example of this would
be the tornadoes that hit at
night in the southern part
of the US recently.
We want to be able to give
alerts to people who are in
that region who might
be asleep.
Maybe we can eventually wake
them up to let them know that
they should head to
their basement.
So these are things that we want to try to do-- pass this information out in easier formats so people can read them regardless of what they're using.
AUDIENCE: Do we know the names of those Pakistani developers?
PREM RAMASWAMI: I do, but
I'm forgetting them.
We can follow up later
though, and I can try
to get them to you.
AUDIENCE: You mostly talked
about natural
disasters until now.
Can you relate this to humanitarian issues, or things like the political issues we've seen? What's Google's position on what you guys are doing for things like this?
PREM RAMASWAMI: So that's
a great question.
The fact is that our team
is focusing right
now on natural disasters.
There have been a lot of them
lately, and it's keeping us
pretty busy.
The fact is, we have left the door open, though, to work on humanitarian crises also.
We have done some work in the
past, for example, with Map
Maker, to map out the southern
Sudanese region.
There is a program with
George Clooney called
Paparazzi in the Sky.
The idea was have a satellite
over the area to prevent
future violence.
So we're looking at
areas like that.
We think right now most of our
tools, though, do focus on
natural disasters.
AUDIENCE: I was wondering if you guys have reached out to the wildland firefighting organizations.
In my experience, they
constantly deal with these
kind of disasters and have
some pretty good tools to
support them.
PREM RAMASWAMI: So actually
this is a very
important area, too.
And so I assume we're talking about the wildland fires and the groups that fight those.
So we do have some tools that they use-- using Google Earth, for example-- to model out wind patterns and how the fires are moving, so that they can use these without needing an internet connection as they're literally jumping out of helicopters and things like that to fight these fires.
And so we have some specialized tools for these different groups.
The tools that we're building, though, as a team, tend to focus more on general consumers who are interested in finding information, and making sure that information is available to them.
AUDIENCE: Just kind of going back-- you had talked about the responders as a group where there wasn't great uptake during Haiti.
And I know, as responders, they
have a bunch of tools
that would probably give really
good requirements.
So, something to
maybe look at.
PREM RAMASWAMI: Definitely.
We definitely will.
Thanks.
AUDIENCE: Have you been
playing with the--
the timescale of a model that
you're working on, like in
crisis, everything's
up in the air.
And everybody needs to
fix it in a hurry.
And when the crisis is over,
do you get to throw
away all the data?
Or you don't have
to worry about
curating it quite as much.
But if you spread out the
timeline, any curation becomes
a bit more important.
And the needs become
a bit more diverse.
So like, chaos mapping, instead
of just crisis mapping.
And also have you gotten
a lot of call to
participate in war zones?
PREM RAMASWAMI: So, going back
to the war zones issues.
So we have been asked to help
out numerous times.
Right now we're not experts
in those areas.
And we don't really understand
them as well.
So as I said, we're focusing on
natural disasters currently.
As for your previous question about wider mapping, I think a lot of the tools that we are building are applicable to general crises or in a broader context.
We might be focusing just on
this, but we try to keep kind
of the vision a little broader,
so mapping is mapping
at the end of the day.
AUDIENCE: So I just was sitting
here, and it was real
interesting that there were so
many people in this session.
And wondered, and maybe I'm
making some assumptions that
we all kind of care about
people and we care about
helping people in crisis, is
there kind of a general group
where-- because I was thinking
it would be awesome to be able
to exchange business cards with
all of you, or make some
sort of contact or connection
where we'd collaborate
together generally.
Not about a specific thing like
the People Finder or the
alerts, but in general?
[INAUDIBLE]
PREM RAMASWAMI: So let
me repeat that
on the mic for everyone.
Crisis Commons is the
group to join.
So Crisis Commons at
googlegroups.com.
It's a great group.
I mentioned them earlier.
They're the ones who host the
crisis camps in cities.
But it's a great mailing list to join. For example, state organizations might email that group and say, I have all this data in an Excel spreadsheet. Can you help me put it on a map?
STEVE HAKUSA: Crisis Mappers
is also another
group that we mentioned.
If you have specific mapping expertise, or GIS expertise,
they're also another great
group to join.
AUDIENCE: These are some really
awesome tools you guys
put together.
I'm a developer for a
news organization.
When an event like an earthquake
takes place, is
there a particular place we can
link to so I can tell the
editorial team where
to go, just
directly to Person Finder?
Or do you guys actually have to
go create an event to say
this earthquake has happened,
for example.
[UNINTELLIGIBLE]
PREM RAMASWAMI: So let me talk
about the specific events.
So usually, if the event is
big enough, if you go to
Google.com, you should find
the information there.
And so you should be able
to search for it on our
properties or you'll see a
little link on google.com if
it's a large event.
The other place you can go is google.org/crisisresponse. That's our site, and we have recent responses listed there, so you can kind of get an idea of some of the stuff we do.
STEVE HAKUSA: We'll
also be Tweeting
anytime we do a big response.
We'll definitely get it out
there [UNINTELLIGIBLE]
like what we're doing.
PREM RAMASWAMI: And this is
really interesting, because
the last time, when we launched
Person Finder, we
Tweeted about it.
We Tweeted about it a few
minutes after it launched.
And we just saw the QPS spike.
And I mean, I guess I should
know the power of Twitter
because I work at Google.
But I was again shocked to
see just how many people
picked up on it.
Before we even had a landing
page aggregated and ready to
push, people were
using the tool.
AUDIENCE: So with Maps,
there's a lot of IP
involved with it.
How do you guys deal with
that during the crisis?
So, for example, the Pakistani
people have made the great
base map that wasn't pushed
out into the community
afterwards.
It kind of just lives
in Google now.
And for example, probably at
OSM, you guys didn't get that
data, because it
was encumbered.
So I deal with it every day.
How do you guys deal with nonprofit organizations?
Very carefully is probably
the solution.
PREM RAMASWAMI: Yes, very
carefully is a good answer.
But I want to give you a more
honest answer to that which is
that we're always looking
to get better.
And so this is definitely
a place where I
think we can get better.
Now, in addition to that, I
wanted to add though that this
data is available over the
Maps API, for example.
And that when there is a
disaster in a region, we do
provide this data to
organizations like
[? UNICAP ?]
so they can make it
available to all aid
organizations.
And so we'll continue to work on
this, and we'll continue to
see if we can help
in this area.
AUDIENCE: Just to follow
up, you guys probably
know this as well.
But you guys have been pushing
the aggregation of data as one
of your goals.
And when it comes to mapping
data, it's kind of frustrating
because you can't aggregate
that data because someone
might have--
OSM might have a road that
someone got lazy and didn't
put it into Map Maker
and vice versa.
There's no answer, correct?
PREM RAMASWAMI: We're
very aware of this.
This is somewhere where we want
to be able to help out.
AUDIENCE: Maybe the engineer
can answer this, but you've
got something that your
table data might be
able to use as non--
STEVE HAKUSA: Tablecast? Yeah, that's actually probably a really good example.
You can try to get these
different organizations to be
sharing these updates between
these groups.
Yeah, definitely.
AUDIENCE: Middle-aged men like
myself will have a heart
attack and get very interested
in their health because the
crisis is acute.
You go to the hospital.
You go on the cholesterol
reducing drugs.
But what about the chronic
crises, like ocean
acidification, low-oxygen zones,
over-fishing, things
that have the potential to
kill billions of people?
There are certain scientific
organizations--
I'm one of them-- we sit on
petabytes of data that could
be used in front of crises in
a more offensive position,
rather than defensive, waiting
for something to happen.
Is your team at all interested
in those types of problems?
PREM RAMASWAMI: So there's
actually a great Google.org
project called Earth Engine.
And what Earth Engine is trying to do is take these petabytes of data about the globe, bring them together, and use Google's computing power to run algorithms over them.
And one of the examples is one
of these chronic crises, which
is deforestation.
So using this globe of satellite
imagery from the
last 30 or 40 years, you can
actually see deforestation
occur in different areas.
So there are different
Google.org teams that focus in
these different areas.
I wanted to thank everyone
for their time today.
We're unfortunately
out of time.
Steve and I will be around, kind
of off stage for a bit,
if you have any followup
questions.
You can feel free to find us.
Thanks again for coming
out today.
STEVE HAKUSA: Thanks a lot.
