[MUSIC PLAYING]
STEPHAN LINZNER: Hey, everybody.
As they say, the best is saved for last.
And here we are to talk about
testing Android apps at scale.
My name is Stephan
Linzner, and I'm
a software engineer at Google.
And with me backstage
is Visnal Sethia,
who's also a software
engineer at Google.
So at Google we believe in
diversity and inclusion.
And we build our
products for everyone.
But for us developers
this means we also
have to test for everyone.
So let me tell you a little
bit how we develop and test
software at Google.
But before I start, I
want to talk a little bit
about the scale that we do
Android development here
at Google.
We have about 100
plus Android apps.
This includes all
the billion-user
apps, such
as Google Photos,
Maps, YouTube, Gmail, and Search.
We have a combined 2
billion lines of code,
and we run 20,000 Android
builds every single day.
And we have a
staggering 27 million
test invocations per day.
So how do we create
these high quality apps,
and how do we maintain this
quality over such a long time?
I think one of the
key things here
is our engineering culture.
A typical developer
workflow at Google
looks a little bit like this.
We have a strong code
reviewing culture.
Code reviews are very,
very thorough,
and before you can
actually submit your change
or pull request you have
to get at least reviewed
by one of your peers.
Another important thing is
that all development happens
at head, and everything
is built from source.
And we have a large
monorepo,
which allows us to easily
search for code, reuse code,
but it also allows us to keep
the repo healthy by sending
so-called large scale changes.
We also have a very
strong testing culture.
At Google if you have a
change you have to have tests.
Even more importantly,
all of those tests
have to pass before
you submit your change.
To run tests we use a
large scale distributed
CI system, which does
not only run your tests,
but also all of
the tests from code
that depends on your change.
Another thing that is
very unique about Google
is that we have a strong
engineering productivity
culture.
So that means we
have dedicated teams
that only work on
infrastructure tools and APIs
to make developers productive.
We are part of such
a team, and we've
been working on
testing Android apps,
or Android app
testing at Google.
So I want to take
you on a walk down
memory lane of what we have done
at Google to scale Android
testing.
So in about 2011 a
lot of teams at Google
actually started to
build for Android,
because Android was becoming
more and more popular.
So at the time they were just
using the standard tool chain.
They were using Ant
to build their apps,
and they were using
Eclipse as an editor.
But with a growing
number of teams,
we also added support to
our internal build system.
One of the problems that
became obvious very early on
was the need for
scalable testing.
And so we actually
started off very
simple by building a small
host-side test runner that
would run on the host.
And in fact it was just
a JUnit 3 test suite
that would literally scan
an APK, list the tests,
and give it to our
instrumentation test
runner that was running on the
device to execute the test.
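That early runner can be sketched very simply. In JUnit 3 there were no annotations; test cases were public methods whose names start with "test". The real infrastructure scanned the APK's dex files, but this illustrative sketch only models the naming convention:

```java
import java.util.ArrayList;
import java.util.List;

public class JUnit3Scanner {
    // Enumerate JUnit 3 style tests: any method whose name starts with "test".
    public static List<String> listTests(String className, List<String> methods) {
        List<String> tests = new ArrayList<>();
        for (String m : methods) {
            if (m.startsWith("test")) {
                tests.add(className + "#" + m);  // e.g. "FooTest#testLogin"
            }
        }
        return tests;
    }
}
```

The resulting list is what would have been handed to the instrumentation runner on the device.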
Once we had that,
we actually built it
into our continuous
integration system.
One of the key decisions
that we made very early on
was to use emulators, we
called them virtual devices,
to run tests at scale.
Because obviously
it makes more sense,
because you can
scale a data center,
but you can probably not
scale a [INAUDIBLE] so easily.
So we wrote this
little Python script,
probably just 20 lines of code--
and I'm sure many of you have
been there--
that puts up an emulator for us
that allows us to run the tests
and shut it down afterwards.
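A wrapper like that boils down to building an emulator command line, running the tests, and tearing the emulator down. This sketch only builds the command; the flags shown are public Android Emulator CLI flags, but the AVD name and port are placeholders, and this is not Google's internal script:

```java
import java.util.List;

public class EmulatorCommand {
    // Build the command line to boot a headless emulator for CI.
    public static List<String> build(String avd, int port) {
        return List.of(
            "emulator",
            "-avd", avd,
            "-port", String.valueOf(port),
            "-no-window",   // headless: no UI needed in a data center
            "-no-audio",
            "-wipe-data");  // start from a clean state for every run
    }
}
```

A real wrapper would pass this to a process launcher, wait for boot, run the tests, and then kill the process.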
So while we were working
on infrastructure,
our engineers actually
started to write tests.
And they wrote a lot of them.
A key problem here
was especially
around functional UI testing.
And as many of you will remember
in the early days you only
had the low level
framework APIs,
you had ActivityMonitor
to track activities.
You had runOnUiThread, or
the infamous waitForIdleSync.
And even though these methods,
these APIs were easy to use,
developers struggled a lot
writing reliable UI tests.
And at the time we
thought, OK, maybe we
could find something better,
and we actually found that
in the community with Robotium.
So we brought Robotium into
Google, and it improved things.
And we used it for about a
year until the end of 2012.
But it had its own issues
with the API surface,
and it didn't solve one
of our key issues, which
was synchronization.
And that's when we started to
work on Espresso, because we
wanted a framework that was
easy to use for developers,
but more importantly was
hiding all the complexity
of instrumentation testing
from the developer.
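The synchronization idea behind Espresso can be illustrated in miniature: instead of sleeping a fixed time and hoping the UI has settled, you poll an "is the app idle?" predicate and only proceed (or time out) when it stabilizes. The names here are illustrative, not Espresso's actual API:

```java
import java.util.function.BooleanSupplier;

public class IdleSync {
    // Poll until the app reports idle, or give up after timeoutMs.
    public static boolean waitUntilIdle(BooleanSupplier isIdle, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (isIdle.getAsBoolean()) {
                return true;  // app settled: safe to interact with the UI
            }
            try {
                Thread.sleep(10);  // back off briefly between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;  // never became idle within the timeout
    }
}
```

Espresso generalizes this with idling resources and message-queue inspection, so the test author never writes the wait themselves.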
At that point we kind
of had a decent setup
for instrumentation
tests, but we still
had to solve the
unit testing problem,
because as you remember at the
time, all of your unit tests
usually ran
on the device.
But that is expensive,
and they tend
to be slower than
running on the JVM.
So again we
reached for a solution
that the community had already
built at the time, which
was Robolectric.
And Robolectric
allowed our developers
to do fast, iterative,
local development,
and it's actually still one
of the most popular frameworks
for unit testing within Google.
So in 2014 we actually had
built a lot of experience
in testing APIs,
but we were seeing
that the community
was struggling
with the same
problems that we did.
That's why we decided to
bundle all of our libraries
together in the Android testing
support library, which then
quickly became the default
library for developers to write
instrumentation tests.
Fast forward to today, we just
launched AndroidX Test 1.0.
It's not only our
first stable release,
it's also the
first time where we
ship unified APIs that allow
you to write tests once, and run
them anywhere.
And by the way, we just
achieved a major milestone here
at Google.
We now run 10 billion unit
and instrumentation tests
every year on our
infrastructure.
So looking back at
those seven years,
what would we do differently?
There's a couple of things
I want to mention here.
So we would probably design
for any build system.
We made some key decisions very
early on that tightly coupled
us to Google's
internal build system,
but it quickly became a problem
because even at Google not
everybody's using Google's
internal build system,
and we weren't able to
share our host-side
infrastructure with them, but
also not with the community,
and we couldn't open source it.
Similarly, we didn't build--
some of the tools that we
built weren't cross platform.
So they only worked on Linux
but not Mac and Windows.
Another thing that we would
probably do differently, even
though retrospectively
it probably
was a good thing that we
started off small, and then
scaled up our testing.
But while the apps grew
and the ecosystem grew,
there were more and
more requirements,
and we usually just built
them into our infrastructure,
but we didn't have a mechanism
for teams to customize
this infrastructure.
This led to a point
where we suffered
from high code complexity.
It was hard to maintain,
and some features
couldn't be removed, even
though they weren't used anymore.
The other thing I want to
mention here is configuration.
Our host-side infrastructure
was getting configuration
from many different sources.
So we had flags, system
environment variables,
and config files.
And that made it very
hard to track down bugs
in the infrastructure itself.
So about a year ago our team sat
down with app teams at Google,
and we wanted to learn about
the past and the future,
and especially how the Android
testing landscape had changed.
So what we came up with to
solve some of the problems that
came out of the discussion
was Project Nitrogen.
Project Nitrogen is our new
unified testing platform,
which we first talked
about at IO this year,
and which we will
ship to you in 2019.
Project Nitrogen
is currently used
by a small number of
apps inside of Google,
and we're slowly scaling it
up to some of the biggest apps
in the world.
And the reason why
we're doing this
is simply because we want
it battle tested first
before we ship it to you.
But the point being
here is, we want
to give all this
infrastructure that we use
to run 10 billion tests to you.
So Nitrogen solves
many problems,
but two of the key
issues that we're
trying to solve
with Nitrogen is,
first, we want to
create a unified entry
point into Android
development, and secondly, we
want to enable you to write
tests with a unified API
and move them between layers.
If you think about
Android testing today,
it looks a little
bit like this, right?
You have tools on
the left hand side,
such as Android Studio, Gradle.
You have a CI server,
and maybe even another
build system such as Bazel.
On the other end
of the spectrum,
you have all the
different runtimes
that you want to run on.
We call runtimes
"devices" in Nitrogen.
So you want to run your
test on a simulated device,
or a virtual or
physical device, or even
on a remote device that
runs in a device lab
such as Firebase Test Lab.
But in order to do so, you have
many different entry points,
and it looks a
little bit like this.
You have a different
configuration for every tool.
You have different rules,
you have different tasks.
And it just becomes a
nightmare to maintain.
And actually what
we see at Google
is, because it's so hard to
move from one to the other,
teams would skew towards one
type of test or another.
What we want to do
with Nitrogen is,
we want to have a
unified entry point.
And Nitrogen itself is
just a standalone binary.
A standalone tool which
infrastructure developers
can use to really customize
their infrastructure.
But obviously there's also
all these other developers
who don't work on
infrastructure,
and work on actual app code.
For them we want to
provide integrations
into all the tools
on the left hand side
to make it easy to run tests.
And at that point, if
we have a single entry
point and a unified
test, it fits very well
within your developer
workflow, because you
can do local, fast,
iterative development
on a simulated device.
Then in presubmit, before you
actually submit your change,
you can run on an
emulator matrix.
And lastly in postsubmit
you can
run on a remote device, a
physical device in Firebase
Test Lab.
And that's really
what we're trying
to do with Nitrogen.
Nitrogen allows
you to run tests at scale.
It is highly configurable.
It was built with customization
and extensibility in mind.
You can execute unit and
instrumentation tests.
It vastly improves reporting,
and therefore debugging.
And maybe one of the
most exciting things
is, it ships with its own
virtual device management
solution that manages
devices for you.
And that's actually
something I think
a lot of people in the
community have been asking
us for, for quite a while.
Nitrogen is cross platform.
And we really built
it from the ground up
with all the experience that we
have, seven years in host-side
and device-side infrastructure.
It will support Mac,
Windows, and Linux,
and is written in Kotlin.
And we really built it
in a way such that we
hopefully-- that it's
hopefully going to be
good for the next seven years.
Nitrogen, as I was saying,
it's just a standalone tool.
So it can be easily integrated
into any build system.
And we're working on
integrations for Gradle,
and Bazel.
We're adding sharding and
trial-level test execution,
and continuous
integration support
will be there from the start.
On the device side,
we are initially
planning to have support for
at least simulated, virtual,
and physical devices,
as well as device labs,
such as Firebase Test Lab.
You can even add
your custom devices,
if you have custom hardware.
Let's switch gears a little
bit and talk a little bit
about the high level
architecture of Nitrogen.
So Nitrogen is basically
split into two parts.
We have a host-side
infrastructure,
that is all the code
that runs on a host.
And we've done
something new, we also
have an on-device
infrastructure.
Which basically
means we've moved
some of our infrastructure
onto the device, which
is a much saner environment
to reason about.
And the device is also
the main abstraction
that we use in Nitrogen
for different run times.
So the host-side
runner is mostly
responsible for finding
a device for you,
setting up the device
for test execution,
and then requesting a test run.
It can be easily configured with
a protocol buffer configuration,
and it allows you
to customize things
like the test executor and
the whole test harness.
To decouple the
host from the device
we have a new
orchestrator service.
You can think of it as the
brain of test execution
that runs on a device, and
it's responsible for discovery,
filtering and sorting,
and execution.
An orchestrator service
is just a gRPC service
that can be implemented
by any device.
And we in fact use
gRPC to communicate
between the host and the
device, which not only gives us
performance and speed, it also
gives us a lot of stability,
and it allows us to
stream test results
back to the host in real time.
We also have a lot
of extension points.
So we'll have host
plugins that allow
you to run code on a host.
And we'll also have
device plugins,
that allow you to
run code on a device.
So let's dive into
each of these sections.
As I mentioned before, we use
a single proto configuration,
with a declarative
set of protos.
This allows you to define
devices and your test fixtures.
So you can define things like
APKs that you want to install,
data dependencies that you
want to push on the device.
And you can declare your
host and device plugins.
We initially will have support
for single-device executors, and
parallel-device executors,
to run on multiple devices
in parallel.
And we'll also have a new
multi-device executor,
which will allow you to do
things like orchestrating
a test run between two phones,
or a phone and a watch.
Which is something
that we increasingly
see as a requirement.
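To make the declarative configuration idea concrete, here is a hedged sketch of what such a config might contain, modeled as plain Java records rather than the real protos; every field and type name here is an assumption, since the actual schema is not public:

```java
import java.util.List;

public class NitrogenConfig {
    // Hypothetical shapes mirroring the concepts described in the talk:
    // a device to run on, a test fixture, and host/device plugin lists.
    public record Device(String type, int apiLevel) {}
    public record Fixture(List<String> apksToInstall, List<String> dataDeps) {}
    public record TestSetup(Device device, Fixture fixture,
                            List<String> hostPlugins, List<String> devicePlugins) {}

    public static TestSetup example() {
        return new TestSetup(
            new Device("virtual", 28),
            new Fixture(List.of("app.apk", "app-test.apk"),
                        List.of("testdata/golden.png")),
            List.of("data_staging"),
            List.of("logcat", "screenshot"));
    }
}
```

The point is that everything (devices, fixtures, executors, plugins) lives in one declarative description, rather than being scattered across flags and environment variables.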
The good news is, if you're
just an app developer,
you usually don't have to deal
with any of this configuration,
because it's built into
the tool integration.
But if you're an
infrastructure developer,
this is where it gets
really interesting for you,
because you can customize every
single bit of Nitrogen. Let's
talk a little bit about plugins.
So host plugins
can execute code on the host.
One plugin that we've already
built is the Android plugin,
which just encapsulates
all the code
that allows us to run
Android tests on a device.
We have a data plugin that
allows us to stage data
onto the device,
or a fixture script
plugin which allows
us to execute
fixture scripts on a device.
And you can have
your custom plugins.
Custom plugins can have
their own configuration.
And with host plugins
you can actually
run before the
test suite starts,
and after the test
suite is finished.
The reason why we
do it this way is
because we want to avoid the
chattiness between the host
and the device.
If you look at the
after all method,
you will also get
access to the whole test
suite results, which
is great if you
want to do any post-processing
of your test results.
And you can even submit
an edit request back to us
if you want to attach new
artifacts to the test suite
result.
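The host-plugin shape described above can be sketched as follows; the hook names are assumptions, not the real API, but they capture the design: one callback before the whole suite and one after, so the host and the device never have to talk per test method:

```java
import java.util.ArrayList;
import java.util.List;

public class HostPluginDemo {
    // Hypothetical host-plugin interface: suite-level hooks only.
    interface HostPlugin {
        void beforeSuite();
        List<String> afterSuite(List<String> artifacts);  // may attach artifacts
    }

    // Example plugin: post-processes results by attaching one more artifact.
    static class ZipArtifacts implements HostPlugin {
        public void beforeSuite() { /* e.g. stage shared data once */ }
        public List<String> afterSuite(List<String> artifacts) {
            List<String> out = new ArrayList<>(artifacts);
            out.add("artifacts.zip");
            return out;
        }
    }

    // How a runner might drive a host plugin around a suite.
    public static List<String> run(HostPlugin p, List<String> suiteArtifacts) {
        p.beforeSuite();
        return p.afterSuite(suiteArtifacts);
    }
}
```

The suite-level granularity is the key design choice: it keeps host-device chattiness out of the per-test hot path.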
Device plugins, on the other
hand, as the name says,
run on an
actual device, which
is a much more sane
environment to reason about.
And in fact, most of
our host-side code
that we used to
configure the device
is now moved to the device
with a device plugin.
So plugins that
we've already built
are a logcat plugin
that gives you a scoped
logcat for each test method,
a screenshot plugin that
takes screenshots in
case your tests fail,
or a permission plugin.
Which is pretty awesome, because
you can now grant and revoke
runtime permissions,
which was not possible before.
And you can obviously also
have your custom plugins.
So the difference between a device
plugin and a host plugin
is that it runs on a device.
But this allows us to
do things like that.
We can give you a callback
before a single test
method is executed, and
after it's finished.
And this is great, because we
can avoid all the chattiness
between the host and the
device, and it gives you
a lot of control.
And if you think about it, I
don't know how you set up your test
fixtures now, but usually
you would basically
use something like @Before and
@After, or @BeforeClass and @AfterClass.
If you want something
more reusable,
you would probably
reach for a [INAUDIBLE]
but there are some things you
can't do with these APIs.
And then you have to
have your custom runner.
And I think the great
thing about this
is, we give you a whole
new way of writing plugins
that actually run on
a device, and allow
you to execute code on it.
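A device plugin can be sketched the same way, but with per-test-method hooks that run on the device itself, so no host round-trip is needed between methods. Again, the hook names are hypothetical, not the real API:

```java
import java.util.ArrayList;
import java.util.List;

public class DevicePluginDemo {
    // Hypothetical device-side plugin: per-test-method hooks.
    interface DevicePlugin {
        void beforeTest(String testName);
        void afterTest(String testName, boolean passed);
    }

    // Example: capture a screenshot only when a test fails.
    static class ScreenshotOnFailure implements DevicePlugin {
        final List<String> captured = new ArrayList<>();
        public void beforeTest(String testName) {}
        public void afterTest(String testName, boolean passed) {
            if (!passed) {
                captured.add(testName + ".png");  // pretend we took a shot
            }
        }
    }
}
```

Compared to @Before/@After methods or a custom runner, the runner-owned hooks let infrastructure code (screenshots, logcat scoping, permissions) stay out of the test class entirely.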
So let's move on to execution.
So as I was saying, we moved the
execution to the actual device.
We created a whole new
orchestrator service
and protocol.
What this does is, it standardizes
the communication
between the host and
the device, and it can
be implemented by any device.
Which means if you have a
custom device you can implement
the same protocol, and
you can still integrate
with the host side easily.
On Android the
orchestrator service
is implemented by the
Android Test Orchestrator.
And once you request the
test run on the host,
it will then go, discover all
the tests, apply any filters
and sorting that you
want, and then it
will do either isolated
or batched test execution.
It will also call all
your device plugins,
and it will stream results
back in real time to the host.
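The discover-filter-sort part of that pipeline can be modeled in a few lines. This is a toy model, not the Android Test Orchestrator itself; real execution then happens per test on the device, streaming results back over gRPC:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class RunPlanner {
    // Turn a discovered test list into an ordered run plan.
    public static List<String> plan(List<String> discovered, Predicate<String> filter) {
        return discovered.stream()
            .filter(filter)   // e.g. only a certain class, or skip flaky tests
            .sorted()         // deterministic order across runs
            .collect(Collectors.toList());
    }
}
```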
So the last thing that I want
to talk about is reporting.
With Nitrogen we will give
unified and consistent
reporting.
I'm sure many of you have
seen this command at the top.
What it does is, it runs
an instrumentation test
from the command line.
If you use the -r option,
which is raw mode,
you'll get an output like this.
And as you can see, it's
not very human readable,
I would say.
And it's also quite chatty,
because this is just
showing a single test.
And this is a passing test.
If it fails the only thing
that it gives you in addition
is a stack trace.
So there is not really a lot of
information or actionable data
here as to why the test failed.
With Nitrogen we want to
move to something like this.
A structured data
format which gives you
access to the properties
of the test case,
the status of the test, and
the list of artifacts that were
collected during a test run.
Things like screenshots, video,
logcat, and any custom
artifacts that you add
in your post processing.
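The contrast between the two formats can be shown with a small parser. The `am instrument -r` protocol emits flat "INSTRUMENTATION_STATUS: key=value" lines plus status codes (1 = test started, 0 = passed, -2 = failed, per the public protocol); turning that into a structured record is the spirit of the reporting change. This parser is illustrative, not Nitrogen's code:

```java
import java.util.HashMap;
import java.util.Map;

public class InstrumentationParser {
    // Parse a raw "am instrument -r" stream into a key/value record
    // plus a human-readable status derived from the last status code.
    public static Map<String, String> parse(String raw) {
        Map<String, String> result = new HashMap<>();
        int lastCode = 1;  // 1 = started, until told otherwise
        for (String line : raw.split("\n")) {
            if (line.startsWith("INSTRUMENTATION_STATUS_CODE:")) {
                lastCode = Integer.parseInt(line.substring(line.indexOf(':') + 1).trim());
            } else if (line.startsWith("INSTRUMENTATION_STATUS:")) {
                String kv = line.substring(line.indexOf(':') + 1).trim();
                int eq = kv.indexOf('=');
                if (eq > 0) {
                    result.put(kv.substring(0, eq), kv.substring(eq + 1));
                }
            }
        }
        result.put("status", lastCode == 0 ? "PASSED"
                : (lastCode == 1 ? "STARTED" : "FAILED"));
        return result;
    }
}
```

A structured result like this is what makes it possible to attach artifacts (screenshots, video, logcat) per test case instead of grepping a text dump.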
Again, this will also be
integrated in Android Studio,
and we will surface this
in the Android Studio UI
if you run tests.
The last thing I want
to mention before I wrap up
is, we also have support
for custom reports.
So you can do things
like [INAUDIBLE] or even
your custom report
that integrates better
with your own infrastructure.
And with that I
want to hand over
to Visnal, who's going to
talk about device management.
All right.
VISNAL SETHIA: Thanks, Stephan.
Running any kind
of Android UI test
generally happens on devices.
There are two
different device types
where you could run your test,
either on a physical device
or a virtual device.
Regardless of which device
type you run your test on,
each of them has its own
sets of pros and cons.
Let's just do a
quick show of hands.
How many people around here have
had a setup like this,
testing on physical devices?
Looks like quite a few.
Follow up question, how
easy was it to manage them?
Hard?
Another follow up question.
Did you ever end up using a
fire extinguisher next to it?
I seriously hope not.
I have a funny story to share
that happened a few years ago
at Google, when one
of the teams decided
that they wanted to test their
stuff on physical devices.
They procured a bunch
of devices, glued them
onto the wall, and integrated
with their CI infrastructure.
Everything was running
reasonably well until one
fine day when the engineers came
back to work on Monday morning,
and things were timing out.
If you were to guess
what went wrong,
what would your
guess look like?
[INAUDIBLE]
So it turned out to be an
air conditioner problem.
So what apparently
happened was the air
conditioners in the building
in San Francisco went bad.
And because the air
conditioners went bad,
the facilities decided that
they want to switch off the air
conditioner, so that
they could fix it,
but tests were continuously
running on those devices,
and the heat produced
in those devices
caused the glue to
peel off from the wall,
and all the devices
fell off to the ground.
Managing physical
devices is hard.
We just want to give out a huge
shout out to the Firebase Test
Lab team, that makes
testing on Firebase Test Lab
so much easier for you folks.
How do we solve this at Google?
At Google we use the virtual
device infrastructure.
The test environment that
we use is extremely stable.
The number that you
see on the right
is the stability ratio
of our test environment,
and that's right, it's 99.9999%.
The virtual device
infrastructure that
we use has the ability
to run locally or
in a CI environment.
And it supports over
500 different device
configurations.
Let's dig in a little
deeper to see what
is its current state at Google.
It's used by over
100 first-party
apps such as Google
Photos, Search, YouTube,
and so on and so forth.
Just in 2018 it had a staggering
2.4 billion invocations,
and that number is
growing year over year.
There are over 120,000 targets
that use this infrastructure.
Having a great test
infrastructure is a must
if you want to release
high quality apps.
You might be thinking, this
is great infrastructure.
How does this fit
in with Nitrogen?
If you remember from slides that
Stephan presented a little bit
earlier, Nitrogen has this
concept of device providers.
So if you want to
run a UI test, you
invoke Nitrogen,
Nitrogen in turn
would invoke a device
provider, which in this case
is going to be the virtual
device provider, which launches
a device, does a bunch
of smart setup, returns
the control back to
Nitrogen, which actually
goes and executes the test.
And once the test is done it
goes and tears down the device.
So in that case you get a
completely stable environment
which is launched by
Nitrogen, runs the test,
and shuts it down.
So while designing this
particular infrastructure,
there were 4 key things
that we kept in mind.
The virtual device
infrastructure
needs to be simple to use,
needs to be extremely stable,
should be reproducible
regardless of which environment
it runs in, whether
you're running it locally
or whether you're running
it in a CI infrastructure.
And it needs to
be extremely fast.
Let's dig in a
little deeper as to how
we achieved each
of these four goals
in building a virtual
device management solution.
So the virtual
device infrastructure
has a very simple
proto configuration.
What does that mean?
It's just a
configuration file where
you could go and add the
characteristics of the device.
For example, what's the
horizontal screen resolution?
What's the vertical
screen resolution?
What's the memory of the device?
So for each of these
different device types,
like Nexus and Pixel, the
virtual device management
solution already has
pretty [INAUDIBLE]
in all of these different
device configurations.
So you don't have to go and
figure out the different device
resolutions for each
of those devices.
It supports over 500 different
device configurations.
And because it's a
configuration file
it's a matter of just adding
or removing the changes
to the configuration file.
And it supports several
different form factors
such as phones, tablets, TV
devices, and Wear devices.
But how is it simple?
Launching it is as simple as
calling the virtual device
binary, and specifying
the name of the device.
If you want to launch a Pixel
2, you just say virtual device,
device equals Pixel 2,
and on what API level.
You don't have to worry
about creating AVDs,
specifying configurations
and things like that.
That makes things
extremely simple.
Stability.
This is probably one of
the biggest problems most
of the Android app
developers face,
like you're running your
test, and an ANR pops up.
And that ANR might
not even necessarily
be from the app that you're testing.
We had the same problem
internally as well.
How did we solve this?
Well, Android has a nifty
service called ActivityController
that lets you suppress
ANRs whenever it sees them.
This is the exact same service
that Android Monkey uses
while it runs Monkey tests.
This increased the stability
of our test [INAUDIBLE]
like one of the things
that I forgot to say,
when we started with this
particular infrastructure,
our stability was around 95%.
But that's no good when you're
running things at scale.
So the first thing
that we saw were
ANRs, and once we fixed that our
stability increased, but still
not to the level that we wanted.
The next [INAUDIBLE]
things that we saw
was, we boot up a device, but
the screen is not unlocked.
And if the screen
is not unlocked,
all the key events
that you inject
do not even reach your app.
And if the key events
don't reach your app,
your app is actually
not getting tested,
and your test started to fail.
And it turned out when
the device boots up,
the screen is not unlocked.
So when a screen is not
unlocked-- in API level 23,
I believe, Android added an
API on the window manager,
where you could
dismiss the keyguard,
and that would
unlock the screen.
So every time you
boot up the device,
we would call the Window Manager
API to unlock the screen.
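The post-boot sequence amounts to a couple of commands. This is an illustrative sketch, not Google's internal code: `adb shell wm dismiss-keyguard` is a real command backed by the window manager API mentioned above, but the overall sequence and serial handling here are assumptions:

```java
import java.util.List;

public class PostBootSetup {
    // Commands a launcher might issue once the device reports boot complete.
    public static List<List<String>> commands(String serial) {
        return List.of(
            // block until the device is visible to adb
            List.of("adb", "-s", serial, "wait-for-device"),
            // unlock the screen so injected key events reach the app under test
            List.of("adb", "-s", serial, "shell", "wm", "dismiss-keyguard"));
    }
}
```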
And this increased our
stability further.
A few years ago Android
changed the file system
from YAFFS, which stands for
Yet Another Flash File
System, to ext4.
This was a great
improvement, but it
had its own set of problems.
ext4 was known to be
prone to disk corruption
during a hard shutdown.
So whenever we would
shut down the device,
if it was not correctly shut
down and it had disk errors,
your subsequent boot of the
virtual device would fail,
leading to test flakiness.
How did we solve this problem?
Well, all we had to do was
run [INAUDIBLE] on the disk
image after it was
unmounted, and this
guaranteed that when
the disk was unmounted
it had no disk errors, and
if there were no disk errors
your subsequent boot
would come up just fine.
This increased the stability
of our test environment
to close to 99%, but
that's still no good.
When you're running
2.4 billion invocations,
a one percent failure rate
is 24 million failures.
That's still a huge number.
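The arithmetic behind that point is worth making explicit: at 2.4 billion invocations a year, even tiny failure rates are huge absolute numbers.

```java
public class FailureMath {
    // Expected failing invocations per year for a given failure rate.
    public static long failuresPerYear(long invocations, double failureRate) {
        return Math.round(invocations * failureRate);
    }
}
```

At 99% stability (1% failure rate) that is 24 million failures a year; at 99.9999% it drops to roughly 2,400.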
As you can see, there were
a bunch of optimizations
that we did to
increase the stability.
I'm not going to talk
about all of them,
but there's one final thing
that I want to talk about.
You would launch the
device, and the virtual device
would set a boot property saying
that the device is
completely booted up.
But for whatever reasons the
launcher would not kick in.
So how did we
solve that problem?
Well, all we had to do was send
out an intent to the launcher.
If it was already launched,
it was a no-op;
if it wasn't launched, then
it started the launcher,
and then we would
return the control
back to Nitrogen, which would
then go and run the test.
Doing a bunch of
optimizations like this helped
us get to 99.9999% stability.
The next big pillar that we had
in mind was reproducibility.
So a lot of times when users
were running their test
in [INAUDIBLE] environment,
if their test failed
for whatever reasons, they had
no way of debugging it locally.
So our virtual device
environment that we built
had to make sure that the
environment was reproducible
regardless of where
they are running.
So the virtual device
management solution
helps you launch things
locally, or on the cloud.
And one of the big things
about this environment
is the device starts in
a clean, pristine state
for every invocation.
So there is no state
carried forward
between different
invocations, making sure
tests are going
to be extremely stable,
and not fail because
of the device itself.
Android Shell.
There are several teams within
Google that write NDK code.
Like when you're
writing native code.
And they wanted to
test their native code.
But to test their
native code, they
wanted to boot up ARM devices.
And booting up ARM devices
was extremely slow.
For example on Nougat,
booting up an ARM device
takes about 10 minutes.
And this was slowing
things down tremendously.
This made us go back to the
drawing board to see what could
we do to increase the--
to decrease the time it takes
to boot up those devices.
So we ended up going and
creating a mini boot mode
in the virtual device.
What does mini boot mode mean?
Well, for
testing native code,
you don't need
the entire Android
stack to be up and running.
All you need is technically
the Linux kernel,
and if the kernel is
up and running you
could test your native API.
So we ended up adding a mini
boot mode to our virtual device
launcher which would come
up in less than 30 seconds,
and that would
help the NDK developers
to test their native
code much more quickly.
At Google we make a lot
of data driven decisions.
So because we were
running things at scale,
we looked at where we were
spending the bulk of our time
while running our tests.
And it turned out
50% of our time
was spent in booting up the
emulator, 30% of the time
was spent in installing an app,
because of a process called
dex2oat.
And 20% of the time was spent
in running the test itself.
Android made a change
between Lollipop and Nougat
where they wanted to do
ahead-of-time compilation
using a tool called dex2oat.
So because the app installation
times were so huge,
what we ended up noticing was,
you have the exact same device,
the exact same app under
test being tested,
and the exact same
oat file being generated
by dex2oat
for every test invocation.
What we said was, what if we
model this as a single action
on the Bazel build
graph, and reuse
the oat file that was generated
for all your test runs?
This significantly reduced
the app install time
from over 3 minutes for one
of the apps to under a minute.
If you were here earlier
today when the emulator
team presented about snapshots,
where you could boot up
an emulator, save a
snapshot, and then shut down
the emulator, and
when you restart it,
it restarts back
from the same state.
Well, we integrated the
snapshot feature back
into the virtual device launcher
where you boot up a device,
take a snapshot, shut it
down, and then reuse it
when the test actually runs.
This significantly reduced
our test runtime by over 30%.
Just imagine when you run tests
at 2.4 billion invocations,
reducing test times
by 30% would yield
a huge number of--
like you'd save
huge amounts of CPU resources.
One of the other features that
we want to work on probably
next year is cloud
snapshots, which
is a combination of
running dex2oat on the cloud
and snapshots, called
cloud snapshots.
With this, we come to
the end of our talk,
where with Nitrogen you'd be
able to run your tests at scale
in a completely
stable environment,
with all of these
different pillars.
This is our next
generation platform
that will help you test.
In this talk we did
cover a lot of technical
stuff, like dex2oat,
[INAUDIBLE], and
ActivityController.
You don't have to
worry about any
of those things, because
all of these things
are already incorporated
in Nitrogen as well
as the virtual device
management solution,
and all you have to
do is like use this.
So we are hoping to release
a Nitrogen alpha in Q1
of next year, and the virtual
device management solution
is going to be released
around the same time
as well, with an alpha release.
Firebase Test Lab is actually
integrating with Nitrogen
as well to run your tests.
One of the things that
Stephan pointed out
earlier was the integration
of Android Studio and Gradle
with Nitrogen.
Just imagine you're
sitting in front of
your Android Studio,
you hit the run test button,
which actually invokes
Nitrogen, which could actually
launch the virtual device,
run your test, and give you
results back on your Android
Studio itself.
And that's about it.
Thank you very much.
DAN GALPIN: Is that it?
All right.
Well, hey everyone.
I wanted to thank you all
for coming and attending
the Android Dev Summit.
It is really, really amazing
to be able to do this again.
We really want to
know what you think,
and this is really important.
So all of you should
by now, or very soon,
get a survey in your inbox.
Please, please,
please, fill it out.
Because it is--
we so much wanted
to make this event amazing.
Should we ever do it again we
really want to know what worked
and what didn't work, what
you like what you didn't like.
Less Dan Galpin, maybe.
Second thing is, we are going--
we have these QR codes which
we might be able to put up
one more time, hopefully.
And these are how you
rate the sessions.
And so we want to know
what sessions you loved,
what sessions you only liked.
What sessions you sat through
because they were there
in the same room,
and you were kind of
comfortable in your seat.
So please also fill
out these surveys
and let us know what worked.
I know it's a lot of work,
but I really appreciate it.
And ultimately if you missed
anything, all of these talks
are actually up
today right now--
OK I guess that's gone.
All the talks that we have are
up on the Android Developers
YouTube channel.
And so again, all the talks
from yesterday
and most of them from
today are already going up.
And by I think the end
of tonight all of them
will be up on the channel.
So you'll be able to
even go home tonight,
if you haven't had enough
Android Dev Summit by now,
you can even have
more from the comfort
of your very own History Museum.
And finally, we have a
little bit of a final reel
here of just some of what was
going on here that we will
share with you as you think about
wandering out here and going
back to the real world.
So thank you so much
for coming, again.
[APPLAUSE]
[VIDEO PLAYBACK]
- Welcome to the 2018
Android Developer Summit.
This is an event for
developers, by developers.
- With 2 billion devices,
3/4 of a trillion apps
downloaded every year,
the Android developer community
is growing hugely.
We saw it more than double
just in the past few years.
- So the Android App Bundle
makes it much simpler.
With no additional
developer work needed,
the App Bundle now
makes apps an average
of 8% smaller for download
for M-plus devices.
- We simply could not
do this without you.
So thank you.
[END PLAYBACK]
[MUSIC PLAYING]
