STEPHAN LINZNER: Hey, guys.
Welcome to our talk on
Android Testing APIs.
And today we're
going to show you
how you can write higher quality apps using automated tests.
My name is Stephan Linzner.
JOSE ALCERRECA:
I'm Jose Alcerreca.
JAN-FELIX SCHMAKEIT: And
I'm Jan-Felix Schmakeit.
And we are developer program
engineers on the developer
platform team here at Google.
And you might have seen some of the sample code and some of the libraries that we've been working on, in particular the Android Testing Support Library, which is something that we have worked on.
And with that, I will
leave you with Stephan,
who's going to talk
with you more about
how Android testing has evolved
over the last few years.
STEPHAN LINZNER:
Thank you, guys.
So almost two years
ago when we started
thinking about how we can make
developers more productive,
how we can enable you guys to write higher quality
apps, one thing that repeatedly
came up was automated testing.
But if you look at the testing APIs, what you will find is that some of them last changed in API level 3, which was Cupcake, and most of them have been around since API level 1.
So what that means
is, in the meantime,
Android has evolved
a lot, right?
And I'm so proud of what we've
achieved in the last five
or six years in Android.
And I'm pretty sure
everyone in this room is.
But we haven't evolved in the testing space,
and that's what we
wanted to change.
We wanted to make
testing easier,
and we wanted to
have a better testing
experience across the stack.
So since then, we've
made a lot of progress
on many levels of the stack.
And we've created a new
suite of tools and frameworks
to enable a better
testing experience
and make it even
fun to write tests.
So we have Android
Studio and Gradle,
which added amazing new
features like the new unit test
support which allows for
faster development cycles
and makes you more productive.
We have new ways to
write and run tests.
And we also now have compelling
ways to display test results
and code coverage reports
in Android Studio.
And now, with
Android Studio 2.0,
you can even refactor across
unit and instrumentation
tests, which is amazing.
But we went even further.
So we ended up creating
a full support library
for testing called the Android
Testing Support Library.
This library contains
all of our testing APIs.
And you can just apply
it to your project
and get easily
started with testing.
But even more importantly, this library is unbundled from the platform, which means
we can update it at any time.
And we can iterate
faster, and we
can fix bugs more quickly,
which was a problem before.
On top of that, we also
created a new testing
library called Espresso.
And this addresses actually
one of the biggest pain
points that we had in Android
testing for a while, which
is UI testing.
With Espresso, you will see, once you start using it, that it has a really nice, beautiful, concise, and fluent API, which makes for frictionless testing.
And with, that I'm going
to head back to JF.
He's going to talk
about the code lab app
that we've built to show you
how you can use our testing APIs in action in your project.
JAN-FELIX SCHMAKEIT: Cool.
Thank you, Stephan.
So the app I'm
talking about here
is actually part of
a code lab that we're
going to be referring to
throughout this presentation.
At the end, I have some
links, some resources,
you can actually
check it out and see
exactly what we have done.
So the app we
wanted to build was
meant to showcase the best
practice for testing today.
You know, how you should be
using testing in your Android
application.
So what we built is a very simple, very standard note-taking application.
You can view a list of the
notes that you have in your app.
You can add a new note, which
means you can also take a photo
and attach it to your note.
You can click on a
note and it opens up,
and you can see
its full content.
And we have some navigation
in there as well.
And as you can see, this is very much a stock-standard Android application, probably quite similar to apps you've built before.
We're using a RecyclerView to display the list of notes, and we're using a system intent to open the camera app, so photos are taken with the system camera application.
And we have a navigation
drawer in there
as well with the menu options.
Just remember the key here was
to create an application that
shows you the best practices
for Android testing today.
So as part of that, we were
thinking very carefully
about the architecture that
we use for our application.
Let me be very clear here: MVP is the Model-View-Presenter architecture.
But there are many other
great architectures out there.
And it really depends on your
use cases, and your application
to select something
that works for you.
Just remember the key
part for any architecture
is that you can separate
the different parts,
the different components, of
your application from one another.
So you can test
them independently,
and you can maintain them
independently as well.
So in our case for
our application,
that was the
model-view-presenter
architecture.
And the way this works is that we have the data storage, the list of notes, as the model. We have the view, which is where the actual Android magic happens: it sets up the [INAUDIBLE] with the RecyclerView and displays the actual notes.
So for example, if you want
to load in a list of notes
and display them on the
screen, the presenter
talks to the model, loads
in the list of notes,
and tells the view
to display it.
And then the view
talks to RecyclerView,
and then displays it.
So just remember
the key part here
is to separate the different
areas and different components
of your application.
This makes it really easy and
really useful for testing.
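That separation can be sketched in plain Java. This is a minimal, illustrative version of the MVP contract: the names (NotesModel, NotesView, NotesPresenter) are stand-ins, and the Android view layer is reduced to an interface so the presenter can be exercised without any framework code.

```java
// Minimal MVP sketch with illustrative names; the Android view layer
// is reduced to an interface so the presenter is framework-free.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

interface NotesView {
    void showNotes(List<String> notes);
}

class NotesModel {
    // Stands in for the data storage (the list of notes).
    List<String> loadNotes() {
        return Arrays.asList("Buy milk", "Write tests");
    }
}

class NotesPresenter {
    private final NotesModel model;
    private final NotesView view;

    NotesPresenter(NotesModel model, NotesView view) {
        this.model = model;
        this.view = view;
    }

    // Loads the notes from the model and tells the view to display them.
    void loadNotes() {
        view.showNotes(model.loadNotes());
    }
}

class MvpSketch {
    public static void main(String[] args) {
        // A hand-rolled fake view records what the presenter asked it to show,
        // which is exactly what makes the presenter testable in isolation.
        List<List<String>> shown = new ArrayList<>();
        NotesView fakeView = shown::add;
        new NotesPresenter(new NotesModel(), fakeView).loadNotes();
        System.out.println(shown.get(0));
    }
}
```

Because the view is just an interface, a test can hand the presenter a fake view and assert on what it was told to display, with no device involved.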
Now we have Jose
coming up next telling
you a bit more about testing
and independent components.
JOSE ALCERRECA: Thank you, JF.
So JF mentioned
testing in isolation.
In order to test
in isolation, you
need to be able to create
a hermetic environment.
So before talking about
the types of tests,
let's talk about
hermetic testing.
Why do we need hermetic testing?
Because there's something
worse than having no tests,
that's having flaky tests.
I've seen this
slide so many times,
I didn't remember it was funny.
A test is flaky when it fails sometimes.
So if a test fails just 1% of the time you run it, you will submit your code, submit your test, and in three weeks you'll get a call or get an email saying something went wrong, because this test failed.
So you'll start
your investigation,
you'll log into your Jenkins
instance, if you have one.
And after 15 minutes you will
say the famous last words,
it's probably just a flaky test.
That's something that you should
never have to say or hear.
So in this code lab,
in this project,
we had an objective to reduce
flakiness as much as possible.
So the first thing
we did was to isolate
from external dependencies.
External dependencies are one of the most important sources of flakiness, especially the network, for example, because the network can fail.
Network calls, when you talk to your back-end API, can fail at many steps: your Wi-Fi, your ISP connection. The server has to be on and working. But there are also other external dependencies of our code, like storage, other devices, sensors, the camera, et cetera.
So what we're going to do
is replace the components
that talk to these
external dependencies
with fake
implementations that are
going to intercept that and
return fake data immediately.
For that, we could use something like a dependency injection framework, such as Dagger or Dagger 2.
But we found something
that is simpler, flavors.
Product flavors are a feature of the Android Gradle plugin. You're probably familiar with them, because this is what you use if you want to create different versions of your app that share a code base.
So it's very common to see
free versus paid dimension
for flavors.
In this case we
have prod and mock.
Prod is your production version,
the one that you distribute.
And mock is the one that is
going to use this fake data.
If you open the project
with Android Studio
and you look at the
build variants window,
you'll see that we
have three variants.
That's because we
are filtering out
mock release, because it makes
no sense, and we don't need it.
If you look at the Gradle task list, you'll see that there is no installDebug anymore. So now we have to choose which version we want to install. We can install mockDebug or prodDebug.
To run instrumentation tests, we usually use connectedAndroidTest. But we can now also choose which version to test against, with connectedMockDebugAndroidTest and connectedProdDebugAndroidTest.
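In the Gradle build file, a flavor setup along these lines produces those variants and tasks. This is a sketch assuming the 2015-era Android Gradle plugin DSL that the code lab uses; consult the code lab itself for the exact configuration.

```groovy
android {
    productFlavors {
        // The version you ship.
        prod {}
        // The version wired up with fake data for hermetic tests.
        mock {}
    }
}

// Drop the mockRelease variant, which makes no sense and we don't need.
android.variantFilter { variant ->
    if (variant.buildType.name == 'release'
            && variant.getFlavors().get(0).name == 'mock') {
        variant.setIgnore(true)
    }
}
```

With this in place, tasks like installMockDebug, installProdDebug, connectedMockDebugAndroidTest, and connectedProdDebugAndroidTest become available.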
So let's see how it's done.
First, you need a source set per flavor. So we have prod and mock. And we also have another interesting source set, androidTestMock. This is where you put the tests that only make sense for the mock version of your app.
Check out the code to
see why we use that.
This is where the actual
replacement happens.
The injection class lives in
both mock and prod source sets.
So that is the
class that is going
to be replaced, depending on
the version that you're using.
Also, we put the
fake implementations
in the mock source set.
This is great because
this is actually
going to hide these classes
from the production app,
so that you can't
use them by mistake.
Zooming in a little bit,
this is the injection class
of the mock source set.
It has two methods,
both injection classes
look the same from the outside.
They have the same
public methods.
provideImageFile is the method that we use from the component that talks to the camera.
So we're going to return a new
fake image file implementation.
That class, the only thing it's
doing is returning a string,
a path to an image
file that we preloaded.
So it's a fake image, really.
provideNotesRepository is the method that we use to create a repository. So if you are using the mock flavor, it's going to create an actual, real in-memory repository.
But we're going to
inject a fake dependency.
In this case, the fake notes
service API implementation.
That's a very Java name, I know.
It's not going to start any HTTP connections; it's just going to immediately return, in this case, a JSON file.
And that's it.
That's how we set up a
hermetic environment.
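The injection pattern Jose describes can be sketched in plain Java. The class names approximate the code lab's (Injection, FakeNotesServiceApiImpl); the Android types are omitted here so the idea stands on its own, so treat this as an illustrative sketch rather than the code lab's actual source.

```java
// Sketch of the mock-flavor injection pattern, with names approximating
// the code lab's classes and Android types omitted.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// The service API the repository talks to; the prod implementation
// would make real HTTP calls here.
interface NotesServiceApi {
    List<String> getAllNotes();
}

// Lives only in the mock source set: no network, canned data returned
// immediately, so tests stay hermetic and fast.
class FakeNotesServiceApiImpl implements NotesServiceApi {
    public List<String> getAllNotes() {
        return Arrays.asList("First fake note", "Second fake note");
    }
}

// A real in-memory repository; only its service dependency is faked.
class InMemoryNotesRepository {
    private final NotesServiceApi api;

    InMemoryNotesRepository(NotesServiceApi api) {
        this.api = api;
    }

    public List<String> getNotes() {
        return new ArrayList<>(api.getAllNotes());
    }
}

// The mock flavor's injection class; the prod one exposes the same
// public methods but wires in the real service API instead.
class Injection {
    public static InMemoryNotesRepository provideNotesRepository() {
        return new InMemoryNotesRepository(new FakeNotesServiceApiImpl());
    }
}
```

Because both flavors' Injection classes have identical public methods, the rest of the app calls Injection.provideNotesRepository() without knowing which flavor it got.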
Is it actually three to 12:00?
That would be pretty cool.
Yeah, more or less?
OK.
This has some
interesting side effects.
Mock mode, using the mock
flavor is interesting
if you are developing your app.
You usually do manual testing, right, if you're developing UI, specifically.
You change something in the
UI, you deploy, and then
you test manually that
what you've done works.
If you use fake
data, this iteration
is going to be shorter,
so your development
is going to be faster.
It's also good for
concurrent development.
So if you don't have
a back end API yet,
you can use fake
data in the meantime.
We're not only going to run instrumentation tests against the mock flavor. We can also use the production app, because it's going to give us very nice end-to-end tests if we want to test the whole project, from the back end to the app.
These tests are going to be, obviously, more flaky. But you don't have to run them very often. You can run them every 24 hours, or they actually make a very nice pre-release check.
So before uploading to Google
Play or your distribution
channel, you run this
test against prod
to make sure that
everything works well.
So now that we have a
hermetic environment,
we can start
talking about tests.
And the first type
is unit tests.
They are fundamental for
a good testing strategy.
That, then, can be completed with integration and UI tests, a small number of end-to-end tests, and other tests like monkey runs, Robo runs, and the performance tests that JF is going to talk about later.
The unit tests are
also called local tests
because they run on
your local workstation.
So they are really fast.
But they're also fast
because they are small.
Unit testing is
about making sure
that the individual parts of
your code work as expected.
So the unit tests
must be small, and you
should be able to run
thousands of tests in seconds.
The problem is that, because we
are running on our workstation,
we don't have access
to the framework.
We don't have access to Android.
So the problem with Android
is that we see these a lot.
We just have huge activities
with all the code typed
into them, or huge fragments.
So this is horrible
for unit testing.
Don't be this developer.
Be this other guy.
He unit tests his
business logic,
and he's really happy about it.
Business logic means what your app actually does.
So if you have a
photo filtering app,
the business logic
would probably
be the algorithms that you
use to filter the images.
If you only have small dependencies on the framework, you can use this nice feature that we added to the unit testing support in 1.1, the mockable Android jar.
The Android Jar is a
file that you download
with the SDK manager,
and it looks like Android
from the outside.
It has all the public methods
and all the public classes,
but it's actually empty.
So if you have a small
dependency on Android,
you can use a mocking
framework like Mockito
to mock these classes.
We have an example
on GitHub that
is mocking shared preferences.
So if you don't want
to wrap that class,
you can use Mockito
on your unit tests
and run them on your
local workstation.
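Along the lines of that GitHub sample, mocking SharedPreferences in a local unit test might look like this. This is a sketch: it assumes JUnit and Mockito are on the test classpath, the mockable android.jar is enabled, and the "name" key is purely illustrative.

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import android.content.SharedPreferences;

import org.junit.Test;

public class SharedPreferencesHelperTest {

    @Test
    public void readsNameFromPreferences() {
        // The mockable android.jar lets this class load on the local JVM;
        // Mockito supplies the behavior that the empty jar doesn't have.
        SharedPreferences prefs = mock(SharedPreferences.class);
        when(prefs.getString("name", null)).thenReturn("Ada");

        // Code under test would read from prefs; here we just check the stub.
        assertEquals("Ada", prefs.getString("name", null));
    }
}
```

This runs with the plain `test` task on your workstation, no device required.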
So let's see what the unit tests look like.
They live in the test folder.
We have five classes.
The first thing
you have to do is
go to the build variants
window and choose
unit test as the test artifact.
This is going to enable the unit tests, but also refactorings and more.
But this is actually going away.
In Android Studio 2.0, you
don't have to do this anymore,
because both
instrumentation and unit
tests are going to be active.
So that you'll be able to
refactor across both test
artifacts.
If you know what I'm talking
about, you know this is huge.
This is what a normal unit test looks like. It's a JUnit4 test. It has the @Test annotation, the name, and it's as simple as it gets.
The newNote method is called on the real presenter. And then we are verifying that the mocked view, in this case, was called via the showAddNote method.
Verify is Mockito
API, by the way.
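Assembled from that description, the test might look like the sketch below. NotesPresenter, NotesView, and the method names are approximations of the code lab's classes, and JUnit4 plus Mockito are assumed on the unit test classpath.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Before;
import org.junit.Test;

public class NotesPresenterTest {

    private NotesView mockView;
    private NotesPresenter presenter;

    @Before
    public void setUp() {
        // The view is mocked; the presenter under test is real.
        mockView = mock(NotesView.class);
        presenter = new NotesPresenter(mockView);
    }

    @Test
    public void newNote_showsAddNoteUi() {
        // Exercise the real presenter.
        presenter.newNote();

        // Mockito's verify checks that the presenter told the mocked
        // view to show the add-note screen.
        verify(mockView).showAddNote();
    }
}
```

The test acts as a contract: it states how the presenter must drive the view, without touching any Android code.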
In order to execute it, you can right-click on the method or right-click on the class, and in the context menu you just click on Run.
From Gradle, we simply use "test", because your unit tests are supposed to be fast, and they should pass really fast. So you don't need to filter them. We just run all of them all the time.
In the code lab, we use
a test-driven development
approach.
This is about creating
the unit tests first.
And the unit test will act
as a contract that is going
to say how your app behaves.
So the first thing you do,
you create the unit test.
You see that it fails,
and you implement
the behavior on your app
until the test passes.
And then you move
on to the next test
until you have something like
this, a glorious list of tests,
all passing.
So with all that
we've talked about,
the fact that we have an
architecture that in this case
is MVP, but you can
use whatever you want.
The fact that we have the
hermetic testing in place,
and that we are using
unit tests and TDD
is going to lead you to
a very, very healthy code
base, where adding new
features is super easy.
It's a matter of adding a
unit test, a couple of methods
in an interface.
And then Android
Studio is actually
going to tell you where
to fill in the gaps.
Great for maintenance as well.
You don't need to be afraid
of refactorings any more,
because you are
covered by tests.
And you'll see that instead of
adding to your technical debt
every time you modify
your code, all the pieces
are going to fall into place.
So to finish off, I want to talk about a hybrid type of test.
These are the unit tests that
run on a device or an emulator.
They are, by definition,
integration tests, by the way.
So we call them
Unit Android Tests,
because we are good at naming.
These tests allow you to test things like your Parcelable implementation, or your SQLite integration.
And they're actually invisible.
You just upload the
tests, you run them,
and then the results come back.
So you don't actually
see anything,
because they don't
open activities,
they don't open fragments.
If you want to test those
UI elements and the UI
interactions, Stephan is
here to talk about UI testing
with Espresso.
Thanks.
STEPHAN LINZNER: Thanks,
Jose, that was great.
[APPLAUSE]
Yeah, so let's talk a little bit about UI testing.
So, yeah, I think the previous part was great, because it showed that we can implement all our business logic and verify its correctness using unit tests.
And then we can move on to
a higher level of testing
and write some UI tests for it.
But the other thing that it
shows, and you will actually
see that if you do the code
lab-- which I, by the way,
hope you all do right after
this session, go down,
they're downstairs-- is that
we used the IDE to generate
most of the code for us.
So, because, if you look at
the test as your specification,
and if you do TDD, you can
use the IDE to generate
almost all the code for you.
And you just fill in the gaps.
But it also shows how you can
use meaningful abstractions,
and use a unit test to spec
the behavior of your system,
how your objects interact, how they send messages to each other, how they behave.
But we can apply some of these
same patterns to UI testing.
And we'll look at this now.
So UI tests should be a crucial
part of your development
strategy.
Like, essentially, they
test your application
through its user interface.
It's already in the name.
And what that also means
is that these tests
have to run on an actual
emulator or device.
And the great
thing about them is
they will give you a lot of
confidence in your application.
Because you can run on a wide variety of configurations,
on emulators, and
you can now even
use Cloud Test Lab to run on
real devices in the cloud.
And so for the next
release, you can just
sleep well, because you
know your app is just
going to work across all these
different configurations.
But I think I lied a
bit, because it turns out
that UI testing is
actually quite hard to do.
And writing a reliable and non-flaky UI test before Espresso was a challenge.
And many of you I'm sure
have experienced this.
And this is essentially
why we created Espresso,
because we want you guys to
focus on being productive,
on writing code, implementing
new features, maybe even
focus on a test.
But we don't want you
to fix your flaky tests.
And what Espresso will give you
is a nice, fluent, and concise
API, which you can
use to hide almost all
of the complexity that
comes with writing UI tests.
So when we started
creating Espresso,
we tried to look at UI testing
from a different angle, right?
We didn't want it to focus so
much on implementation details,
like activities and fragments.
Instead we took a step back
and we thought, OK, what would
a user do?
And if you think
about it, what you
do every day if you
interact with your device,
you pick it up.
You'll find some
view on the screen.
Then you will perform
some action on it.
You might click on
a button, swipe.
And then you observe
some UI state change.
And this is essentially what our Espresso API looks like.
So we have the onView method
as the main entry point.
And then we can just
use a ViewMatcher
to tell Espresso to find us
a view in the current view
hierarchy.
And then once we
have that view, we
can either perform
a ViewAction on it.
Or we can verify a ViewAssertion
like a state change in a UI.
And a ViewAction in this
case would be something
like a click or a scroll.
The good news is that we created all the ViewMatchers, ViewActions, and ViewAssertions for you.
And I'm pretty sure
the ones that we have
cover, like, 90% of the cases.
But the great thing is all these
three are extension points.
And they make Espresso
very customizable.
And you can actually
tailor it to your needs
by writing your own matchers, actions, and assertions.
But now let's look at
how you would actually
write an Espresso test
for your application.
But before we dive in
the implementation,
let's have a look at the add
note feature from the notes
app.
So this is the UI flow I'm going to show you.
And we're going to
implement it afterwards.
So we start on the
main note screen.
Then the next thing we do is click on a button.
This will bring up the
add notes fragments,
where we can type in a
title and a description.
Then we can save the note.
And this will bring us back
to the previous screen.
And as you can see, the note, the new one, is displayed on the main notes screen.
And we want to verify
that in our UI test.
So, yeah let's write a test.
So the first thing
that you have to do
is you have to create the notes
screen test in your Android
test source set.
And then you have
to do two things.
The first thing is
you have to tell JUnit
that you actually want to use the AndroidJUnit4 runner.
And then the second thing
is you have to assign
the tests to a bucket.
This is something that
you don't have to do,
but we recommend to do it,
because, especially if you
run on a build server, you don't want to run all the tests at once.
You just want to run either
the small, the medium,
or the large ones.
And this is particularly important for the large ones, because they will take a long time to run.
So once we've done that, we
need to set up the stage.
We have to set up
our test fixture.
And we do that using a new API
in the Android Testing Support
Library called
ActivityTestRule.
And so you might
have heard of rules.
They're not really
a new concept.
They have been around
awhile in JUnit4.
But this is
essentially an API you
can use to create
reusable components,
which you can use in
all of your tests.
And they reduce boilerplate code.
And that's exactly
what we can see here.
So, in order to use ActivityTestRule, the only thing that you have to do is create an instance in a public field and annotate it with the @Rule annotation. And then what this will do is start the activity before each test and finish it after each test run, which is great.
And now we're actually
ready to exercise our UI.
And here's how you
do it with Espresso.
If you remember what
I just showed you,
the first thing
that we want to do
is we want to click this
button at the bottom right
of the screen.
So the way we do
it in Espresso, we
ask Espresso to give us the view
for the corresponding add note
ID from the current
view hierarchy.
And we do that by using
a withId View Matcher.
And Espresso will
return us that view
and then we want to perform
a click action on it.
And Espresso will get
the view, click on it,
and then add notes
fragments will show up.
And then we want to type
a title and a description
in the corresponding EditText fields.
And again, we use the
same entry point on view.
We use a withId
Matcher, again, to get
a hold of the title
and description views
from the view hierarchy.
But this time, we
don't click on it.
We just use a typeText action to type some text in the title and the description.
And then we want
to save the note.
And this works pretty much the same as the first step. We get a hold of the view using a withId matcher,
and then we perform
a Click Action.
And the last thing
that we want to do
is we want to actually verify
that the note that we have just
added to our model is
displayed on screen.
And this time, we're going
to do things a little bit
differently.
We use the onView method.
But instead of matching
a view by its ID,
we can also use text
and tell Espresso,
give me the view from
the view hierarchy
which contains this text.
And then we can use
the check method
to verify that the view is
actually displayed on screen.
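Put together, the whole flow might look like the sketch below. The view IDs, activity name, and typed strings are illustrative stand-ins for the code lab's actual identifiers, and the imports reflect the 2015-era Android Testing Support Library packages.

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import android.test.suitebuilder.annotation.LargeTest;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
@LargeTest // assigns the test to the "large" bucket
public class NotesScreenTest {

    // Launches the activity before each test and finishes it afterwards.
    @Rule
    public ActivityTestRule<NotesActivity> activityRule =
            new ActivityTestRule<>(NotesActivity.class);

    @Test
    public void addNote_isDisplayedInList() {
        // Click the add-note button to open the add-note screen.
        onView(withId(R.id.fab_add_note)).perform(click());

        // Type a title and a description into the EditText fields.
        onView(withId(R.id.add_note_title)).perform(typeText("New note"));
        onView(withId(R.id.add_note_description)).perform(typeText("Description"));

        // Save the note, which navigates back to the notes list.
        onView(withId(R.id.fab_save_note)).perform(click());

        // Verify the new note is displayed on screen.
        onView(withText("New note")).check(matches(isDisplayed()));
    }
}
```

Note how the test reads like the UI spec from the slides: find a view, act on it, assert the resulting state.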
So if you look at this, this UI
flow, it's not an easy UI flow.
But if you look at the
code, it looks really easy.
It's really readable.
And there's no
implementation details.
There's no activities,
no fragments.
You essentially don't have
to deal with those details
anymore.
And in fact, Espresso only
cares about views and windows.
That's all it cares about.
And it hides most
of the complexity
and most of the UI
synchronization from you.
And now, of course, you
want to run your test.
This works pretty much the same as Jose showed you with the unit tests.
So you can either right-click on a test class and click on the Play button, or you can also do it at the method level from the test class.
But what this will do is it will
create two APKs for you, a test
APK and an app APK.
It will deploy both of
them to your device,
and then the test
APK will instrument
the app using instrumentation
and exercise its UI.
So you can, of course,
do this from Gradle.
This is something
that you wouldn't use
in your local development flow.
But it's something that comes
in very handy if you actually
run from a build server, and if you use a CI server,
is something that I think
most of you guys do anyways.
And now at this point, we need
to wire up our Android code
with our architecture.
Right?
And the good news about
this is, because we already
implemented all the
logic in a presenter,
our Android implementation
will be much simpler.
Often it's just setting some text on a TextView, and it's very simple.
And the other good thing is, because we know that everything already works, because we have the unit tests for the presenter, this is a great way to approach testing in general and combine unit testing with UI testing.
And at one point you'll
actually go green.
Your test will pass.
And then you will see
something like you
can see here on the right.
And I've been working on this for many years now, and I'm still fascinated when I see those tests run on a device.
And you should
really try it out,
because it's a lot of fun to
write UI tests with Espresso.
So let me summarize.
Espresso gives you
frictionless UI testing.
Espresso is really reliable.
You will see that.
If you switch to Espresso,
or if you start using it,
you will see your test
will be much more reliable.
They're also more readable.
They're almost like a UI spec.
So think about on-boarding
a new member to your team.
You can just point
them to the test.
They can read through the tests.
They can almost figure
out the whole UI flow just
from your test.
And maybe they get excited
when they see how easy it
is to write a UI test.
And it's easier to
on-board them to write
UI tests in the first place.
The last thing I
wanted to mention
is Espresso is blazing fast.
We do all the
synchronization for you,
and we know when to execute the
next view action immediately
after the previous
one has finished.
And you will immediately notice
this if you have a lot of tests
and if you switch
over to Espresso.
So to summarize, Espresso
makes for a nonflaky test,
and you should
really try it out,
because it's an
amazing framework,
and it's a step forward
in UI testing on Android.
And with that, I'm very excited
to have Jay to talk about some
of the newer stuff that
we've been working on.
Because now that we have
all the low level APIs,
we really want to take a
smarter approach to testing.
And we want to build the
high level tools, which
sits on the lower level APIs to
enable more powerful use cases.
JAN-FELIX SCHMAKEIT: Cool.
Thank you very much Stephan.
So we have had Jose
talking about unit testing
as the fundamental way we should
be testing our applications.
Then we had Stephan talking
about integration and UI tests,
being able to test
the actual Android
part of our application.
And as you've probably
all realized by now,
these testing tools
have matured quite a bit
over the last few years.
So now's the time
to start thinking
beyond simply verifying
the functionality
of our application.
Let's think about
performance testing.
And this very much sits at the
top of our testing pyramid.
You know, we take advantage
of all the great testing
tools, and all the
platform features that
are already out there to
build our performance tests.
So performance testing
today-- I'm sure many of you
have experienced this
yourself-- is very painful.
Traditionally, you have
a few lower spec devices.
For example, you
have a QA team that
has or has access to those.
Maybe you have a
slower network at home
where you just try to
run the application
and see if it performs
OK or if anything is
a bit slower than it should be.
And the problem with
that is you can't easily
reproduce any of this.
It's really hard
for you to track
trends to see how your
application is performing,
right?
And especially if you've
automated the rest
of your tests
already, why can't we
automate our performance tests?
And why can't we take
advantage of the existing tests
that you have already written
for this particular purpose?
Let me introduce the
Performance Testing Harness.
This is actually part
of a great code lab
that we've put together
that I highly encourage you
to check out after our talk.
The way this works is that we have a new custom Gradle plugin that contains a task which you use to run your performance tests.
The trick comes from a
special test listener
that sits in your test APK.
The basic idea is
that we capture
some additional statistics
and some additional log
files as you are
executing the tests.
So for example, we can track
the rendering performance,
the LogCat outputs for each of
the tests as you're running it,
capture it on the
device, copy it back
to the development machine, and
then do some analysis on it.
The key parts of this are
some custom JUnit test rules
that we have added.
Stephan talked
about rules already.
These allow you to add some additional functionality to your existing tests.
So here we have three rules.
The first one basically runs the dumpsys gfxinfo command, which you might have already used if you've tried to track down some jank issues in your application. It allows you to gather some statistics on, for example, the number of janky frames that you had and the rendering performance.
You can do exactly the same for the netstats command, which allows you to track your network performance, the number of received and sent packets, for example.
And we can also capture the
LogCat outputs individually.
So just remember,
these rules are
applied to each test
as it's being executed.
So for each test
we can now capture
the graphics performance,
network performance,
and the LogCat output.
And this is pretty powerful.
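In the spirit of those rules, a graphics-stats rule might be sketched like this. The package name com.example.notes is a placeholder, and the code lab's actual rules differ in detail; this only illustrates the reset-before, capture-after pattern around each test.

```java
import android.os.ParcelFileDescriptor;
import android.support.test.InstrumentationRegistry;

import org.junit.rules.ExternalResource;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Resets the frame stats before each test and dumps them afterwards,
// so every test gets its own rendering numbers.
public class GraphicsStatsRule extends ExternalResource {

    private String output = "";

    @Override
    protected void before() {
        // Clear previously collected frame stats for our package.
        exec("dumpsys gfxinfo com.example.notes reset");
    }

    @Override
    protected void after() {
        // Capture the frame stats produced while the test ran.
        output = exec("dumpsys gfxinfo com.example.notes");
    }

    public String getOutput() {
        return output;
    }

    // Runs a shell command through UiAutomation and returns its output.
    private String exec(String command) {
        ParcelFileDescriptor pfd = InstrumentationRegistry.getInstrumentation()
                .getUiAutomation().executeShellCommand(command);
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new FileReader(pfd.getFileDescriptor()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append('\n');
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return sb.toString();
    }
}
```

Applied with @Rule, the captured output can then be parsed for janky-frame counts or copied back to the development machine for analysis.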
And this is all being
executed through this new test
listener that gets added to
our test APK on the device.
And that's how we can capture
some additional statistics as
well, such as a Systrace output
if you want to track down
any other performance issues.
This is what it looks
like if you're actually
running the test.
First of all, we run our new custom task, runLocalPerfTest.
And then, as the test
is being executed,
our listener captures the
logs for each of the tests.
In our case, we only executed one test here.
But you can see that for
each test, we have now a log
file we can go back to
and then analyze further.
And of course, there's
the Systrace file
that gets captured for
the entire test run.
And this all happens
automatically for you.
And I think this
is very powerful.
So here's an example where
our test actually failed.
And you can see that we
had an excessive number
of janky frames, 91%.
And this is clearly a problem.
This is one of our
existing tests that we had.
And we simply marked
as a perf test
and ran our test harness
over it and captured
all these statistics.
So we had 91% of janky frames.
A janky frame means that the frame took too long to render, so the UI appeared a bit janky on the device.
So our test failed.
Instead of having to go back, get the device out, reproduce it, and try to see what's going on there, we have captured all the log files already.
We have the Systrace
file there as well.
So we can go straight
back to the logs
we have captured to see
what the problem was,
and maybe even fix
it straight away.
And this is a great way to scale
up your performance testing
if you think about it.
Just imagine where you can
take it from here, right?
Running it on one
device, yeah, that's OK.
You can probably
already do that.
But imagine scaling
this up and running this
as part of your continuous
integration tests, right?
What if you can run
all these performance
tests for each
commit or for each
build that you're creating.
And this is very
powerful, because this now
allows you to capture statistics
and trends as they develop.
You might even be able
to go back all the way
to one commit that introduced
some performance issues that
didn't come up until later on.
So no longer do you have
to rely on just simple
manual performance testing.
You can use our testing
harness to automate
some of the work for you.
And I'd like to think, you know, that this is just the beginning, right?
There's so much more we
can do with this, right?
Now that our testing
tools have matured,
we can start thinking
about other ways
we can test and
improve our performance
in our applications.
You can look at battery
usage, for example.
You can look at the network
performance as well.
This is very much
just the beginning.
So my point here is really
that we should now be starting
to think about smarter testing.
Smarter testing, performance
testing, and there
are many other ways
we can automate
some of the things we're
currently doing manually
to build better apps, make it
much easier for us to verify
and test our applications,
to delight our users
and give them a great,
great experience
as we work on our applications.
With that, I highly encourage
you to check out our code labs.
They're great.
First of all, the Android testing code lab.
This is the one
that you have seen
throughout this
entire presentation.
So we have a great code lab that we've put together that shows you how we have used the MVP architecture,
used the MVP architecture,
how we're using
test-driven development,
and how we are doing the
unit testing and the UI
testing in a great,
concise application.
You can check this out in
the code lab downstairs,
and they are online as well.
If you're more curious about the
automated performance testing
that I've talked about and
you want to see it in action,
and have a play with it to
see how you could use it
to actually track down
a performance issue,
check out the automated
performance testing code lab.
Here are some amazing resources we've put together.
The code labs are
all available online
for you to try out at home
or later on your laptop
as well, if you like.
We also have some
great samples out there
that show you everything
from basic unit testing
to much more complicated
Espresso testing.
So definitely check
this out on GitHub.
We also have a great project
called the Android Testing
Blueprint, and
that shows you how
you can integrate many
different ways of testing
into one great project.
For example, the unit and the UI
testing that you've seen today.
And of course, we have the ATSL,
the Android Testing Support
Library that we've
all been working on.
So definitely check out
the great documentation
for that as well.
And with that, I would like
to thank you very much.
It's great to see so many people
interested in testing here.
We'll be around the
office hours area,
and we have Stephan
joining the Android Tools
Panel, the fireside chat
this afternoon as well.
So you can definitely find
us if you have any questions.
And I think with that,
it's time for lunch.
So happy testing!
