[MUSIC PLAYING]
LIZA MA: Thank you for taking
the time to join us today.
My name is Liza.
I'm a product manager
on the Weave Team.
My colleague, Alex, here
is a software engineer
on the Weave Team.
And we're here today to talk
to you about Weave and mobile.
In our last session
on Weave, we learned
about how Weave enables an
interoperable device ecosystem.
We are excited by the
future of IoT devices.
And we can't wait to see
how this space evolves.
One intersection point that
is already obvious to us
is the interaction users will
have with their IoT devices
and their mobile phones.
In this presentation,
we will talk more
about the app ecosystem
that we envision supporting
for future Weave devices.
And we will walk
you through what
building an experience
for a Weave device
will look like on Android.
Interoperability is one of
the most important tenets
of the Weave ecosystem.
We want consumers to have
confidence when they purchase
a Weave device,
that it will work
with all the existing Weave
devices they already have.
And we want to
make sure that they
have compelling and compatible
choices available to them
from a wide range of
device manufacturers.
We actually expect device
makers to challenge each other
over time to build increasingly
compelling device experiences.
The Weave mobile
platform will enable
them to easily interconnect
not just their own devices,
but devices of their partners.
We also anticipate
that app developers
will be able to add a lot
of value to this ecosystem
and build unique experiences
that further increase the value
users get from
their Weave devices.
We recognize that
most IoT devices,
unlike our mobile phones,
are often shared devices.
And not everyone who
uses the same IoT
device will necessarily use the
same mobile operating system.
Weave will support
development platforms
on Android, iOS, and web.
This will allow developers to
build both apps and services
on top of Weave devices.
The diagram behind us
paints a very simple picture
of why we think the
Weave platform can
help accelerate and
simplify development
for these developers.
For an app developer
today who wants
to integrate with
three devices--
and let's just say, for
simplicity, that these are
devices of the same
type-- they have
to work with three
different APIs from three
different manufacturers.
Or, in the worst-case scenario,
they want to integrate with a device
that does not have an API available,
because the device developer
has not yet devoted resources
to that, or doesn't have an existing
partnership with this
developer to make that happen.
With Weave, developers will have
a single API interaction point
with all Weave devices,
independent of type.
And Weave enables this
by working very closely
with device developers
to develop public schemas
for functionality that we know
devices of a particular class
share.
For example, every single
light bulb you have at home
supports the capability
to turn on and off.
There should be no
reason why sending
a command to this kind
of device is different
depending on the OEM.
Optional functionality,
like setting brightness,
for example, should
also be standardized.
This does not
preclude, of course,
developers from adding unique
functionality to their devices.
And you can learn more
about our device schemas
on the Weave developer website.
I'm going to hand
things over to Alex
now, who's actually going to
walk us through what building
an app on Android will
look like for Weave.
We will have a code
lab session tomorrow
where you will get the opportunity
to try an upcoming update
of our Weave APIs on Android.
Over to you.
ALEXEY SEMENOV: Thank you, Liza.
So in this section,
we'll focus on
how you would use Weave
Android APIs to build
an application that
works with a Weave device
and which steps you need to
take in order to get it up
and running.
Now, when you work
with Weave devices,
we would like you
to focus on what
you would like to achieve, and
not on how you would do that.
So the semantics of the device
are what you care about, right?
Weave is a very,
very broad platform.
So we support cloud devices,
we support local devices,
we support different
authentication mechanisms,
and so on.
So when using our
APIs, you will not
need to care about any of that.
As long as the device is what
you care about, you can ask,
hey, does this
device have a feature
I would like to work with?
And if so, you can
start working with it.
Before I move on
to the actual code,
I'd like to spend a few
minutes talking about the life
cycle of the Weave API.
So which steps do you need to
take to get the APIs working for you?
Now, any of you who have ever
used Google Play services
will feel immediately
familiar with the Weave APIs.
We have a concept of
a WeaveApiClient,
which is very similar to
that of GoogleApiClient.
And working with it would
be really familiar to you,
if you ever used
GoogleApiClient.
Now, WeaveApiClient is
required for every API call.
And it has a life cycle.
And by that, I mean
that, before you
try to use any of
Weave APIs, you
would need to connect
to the API client.
And after you're done using it,
you need to disconnect from it.
Now, one important
property of WeaveApiClient
is that we require that
the Weave application is
installed on the user's device.
We believe that this
brings two benefits,
both for developers
and for users.
For developers, you will not
need to care about API updates.
We will do that for you.
So all the fixes and so on
will be pushed by us,
along with Weave app updates.
And secondly, users will
have a consistent
experience across the board.
So there will not be a
case where the user has
two applications that target
the same types of devices,
but one of them updated
our API library
and can now see more
devices than the other.
So if the Weave application
is not installed,
be prepared to handle an error.
The API call will fail, and we
will return a special status
code, RESOLUTION_REQUIRED.
This will also
include an intent
that you'll need to launch.
And we'll fix things
for you, so there is nothing else
that you need to do here.
So before you start
using WeaveApiClient,
you need to configure it.
When you build the
API client, there are
two important properties
that you need to specify.
First is you need to
add APIs that you're
going to be working with.
Now, there are
three APIs that we
have, app access, device, and
command APIs, all of which
I'm going to talk
about in just a minute.
You also need to supply
ConnectionCallbacks
to the API client.
And ConnectionCallbacks
is our way
of telling you that APIs
are ready for consumption.
Once you're done with
that, and once you
have your API client set up,
you need to connect to it.
When you do so, the
onConnected method of your
ConnectionCallbacks will be called.
And at this point, this
means that the Weave APIs
are ready for consumption.
Now again, all of this will be
familiar to those of you who
are using Google Play services.
And so the life cycle
is exactly the same
as you would expect
from Google APIs.
Once you're done
using WeaveApiClient,
please disconnect from it.
We do this in onPause
in this example.
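To make this concrete, here is a minimal, runnable sketch of the lifecycle just described. The builder shape, the ConnectionCallbacks interface, and the API names are assumptions modeled after this talk with small stand-in classes, not the published SDK surface.

```java
// A stand-in model of the WeaveApiClient lifecycle described in the talk:
// configure the client with APIs and ConnectionCallbacks, connect before
// making any API call, and disconnect when done. All names are illustrative.
import java.util.ArrayList;
import java.util.List;

public class LifecycleSketch {

    interface ConnectionCallbacks {
        void onConnected();      // our signal that APIs are ready for consumption
        void onDisconnected();
    }

    static class WeaveApiClient {
        private final ConnectionCallbacks callbacks;
        private boolean connected;

        WeaveApiClient(ConnectionCallbacks callbacks) { this.callbacks = callbacks; }

        void connect() {         // required before any Weave API call
            connected = true;
            callbacks.onConnected();
        }

        void disconnect() {      // e.g. from Activity.onPause()
            connected = false;
            callbacks.onDisconnected();
        }

        boolean isConnected() { return connected; }

        static class Builder {
            private final List<String> apis = new ArrayList<>();
            private ConnectionCallbacks callbacks;
            Builder addApi(String api) { apis.add(api); return this; }
            Builder setConnectionCallbacks(ConnectionCallbacks cb) { callbacks = cb; return this; }
            WeaveApiClient build() { return new WeaveApiClient(callbacks); }
        }
    }

    static final List<String> events = new ArrayList<>();

    public static void main(String[] args) {
        events.clear();
        WeaveApiClient client = new WeaveApiClient.Builder()
                .addApi("APP_ACCESS_API")   // the three APIs named in the talk
                .addApi("DEVICE_API")
                .addApi("COMMAND_API")
                .setConnectionCallbacks(new ConnectionCallbacks() {
                    public void onConnected() { events.add("connected"); }
                    public void onDisconnected() { events.add("disconnected"); }
                })
                .build();

        client.connect();        // only now may API calls be made
        // ... use Weave APIs here ...
        client.disconnect();     // mirror the onPause example from the talk
        System.out.println(events);
    }
}
```

In a real Activity, you would typically connect when the screen comes to the foreground and disconnect in onPause, matching the example on the slide.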
So working with Weave
devices, what does it involve?
It's actually four steps.
First, you need to
ask the user to grant
your application access
to their Weave devices.
Secondly, you need to
find the device you're
going to be working with.
Third, you need to
get to know your device.
Is it the right type of
device, is it ready for use,
does it have the right
features, and so on.
And fourth is act, which
is sending commands
to your Weave device.
Now, we're going
to walk over each
of these steps in more details.
So first is asking
for permission.
Now, we defined a
permission model
that puts the user in control.
And so before any
application tries
to access any of the user's devices,
it will need to request access.
You will need to
specify the device types,
as you'll see in a moment,
and what kind of role you'd
like to have on the device.
And the user will be able
to choose which devices he's
willing to grant access to.
So the way you do that is you
create an AppAccessRequest.
And there are three
important fields
that you need to fill in.
First is the role on the device
that you would like to get,
whether it's a user, a
viewer, or a manager.
The second is the types of devices
that your application is
aware of and can interact with.
And third, you need to
supply the project number
that you get from the
Google Developer Console.
And this is our way of
identifying your application
when sending out requests.
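As a sketch of the request just described, here is a stand-in AppAccessRequest with the three fields from the talk: a role, the device types your app understands, and the Developer Console project number. The builder methods, the enum, and the project number shown are hypothetical, modeled on the talk rather than copied from real SDK code.

```java
// A stand-in model of building the AppAccessRequest described in the talk.
// Role, device types, and project number come from the talk; the builder
// shape and all names here are illustrative assumptions.
import java.util.ArrayList;
import java.util.List;

public class AccessRequestSketch {

    enum Role { VIEWER, USER, MANAGER }

    static class AppAccessRequest {
        final Role role;
        final List<String> deviceTypes;
        final long projectNumber;

        AppAccessRequest(Role role, List<String> types, long project) {
            this.role = role;
            this.deviceTypes = types;
            this.projectNumber = project;
        }

        static class Builder {
            private Role role = Role.VIEWER;  // request the minimum role you need
            private final List<String> types = new ArrayList<>();
            private long projectNumber;

            Builder setRole(Role r) { role = r; return this; }
            Builder addDeviceType(String t) { types.add(t); return this; }
            Builder setProjectNumber(long p) { projectNumber = p; return this; }
            AppAccessRequest build() { return new AppAccessRequest(role, types, projectNumber); }
        }
    }

    static AppAccessRequest thermometerRequest() {
        // A temperature-display app only reads state, so VIEWER is enough;
        // requesting MANAGER here would surprise the user.
        return new AppAccessRequest.Builder()
                .setRole(Role.VIEWER)
                .addDeviceType("thermometer")
                .setProjectNumber(123456789L)  // hypothetical project number
                .build();
    }

    public static void main(String[] args) {
        AppAccessRequest req = thermometerRequest();
        System.out.println(req.role + " " + req.deviceTypes + " " + req.projectNumber);
    }
}
```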
Now, there are a few caveats
here, so please request
the minimum role that
would allow you
to cover all your use cases.
Now, suppose you have
an application that
shows the temperature inside your
house, or reads sensor data
and displays the
temperature to the user.
You probably do not want
to request manager access
level to the user's
devices, because it
might surprise the user.
And as a rule of thumb,
do not surprise the user.
Once you have the request, you
would use our app access APIs
to request access.
And if this succeeds,
we will return an intent
that you need to launch
in order for the user
to go through the authentication
flow for your application.
At this point, please
make sure that you
use startActivityForResult,
because that's
the only way for us to know
the identity of the calling
application, and for you to see
the result of the user's action.
Otherwise, you'll
be flying blind.
Once you have that, what you see
on the left side of this slide
is the authentication flow.
The user will see that
such-and-such application
is requesting access to
these types of devices
with this kind of role.
Now, the user can selectively
choose which devices he's
willing to grant access to.
Suppose the use case of
a surveillance camera
application that is
requesting access
to view the user's cameras through
some cloud solution.
The user might be OK with
trying this new application
with his outside
cameras, but not
so sure about
whether he wants
to grant access to his
bedroom cameras, and so on.
So he will be able
to selectively choose
which devices he's
willing to grant access
to for your application.
And also, if you
request manager-level access
for, like I said,
a monitoring app,
the user will be able
to say, that's not
what I would like to do.
I would like to give only
a viewer access level,
because that's what it seems
like this application does.
And that's a sufficient level.
At any point in
the future,
the user will be able to go to
the Weave application and update
the settings for any
particular application.
So he will be
able to add devices,
update the access level
for existing devices,
and so on.
Once the user
acknowledges his choice,
your application will
receive RESULT_OK
as a result of the
[INAUDIBLE] execution.
And this means that
the user made his choice,
and you're free to
start device discovery.
And the selected devices will
appear in your application.
Now, we built the application
permission model
in a similar way to
Android's runtime permissions.
And so it has the
same usage caveats
that you would expect
from runtime permissions.
First one is, if the user said
no, do not just request it
again.
Make sure the user
understands the choice.
If your application
works with Weave devices,
but the feature is not
really obvious at that point,
try to educate the
user so that, when
he sees the permission
screen,
it's not a surprise for him.
If he says no-- and this
is a critical feature
of your application-- again,
don't just request access
again.
Try to explain to
the user why you
would need to request
access to his device again.
The only exception to that rule
of not asking multiple times
is when you update
your application
with some new features.
Now, take the
example of a thermometer app:
suppose it can
now not only read
the temperature
inside your house,
but also set the
temperature on a thermostat.
So all of a sudden, your
application cannot be a viewer
of the device.
It needs to be a user,
because it needs to set
the temperature on the device.
At this point, when you
need an elevated permission,
you might ask for access again.
And again, use
your best judgment,
whether you would like
to educate the user why
you're doing that or not.
And the second case
is, again, when
you add a feature
that extends your
support of Weave devices
to device types that
were not previously present.
So at this point,
you can say, hey,
I would like to request
access to those new device
types that are now supported.
Now, OK, we got
that out of the way,
and the user said, yes,
please use my devices.
You need to discover those.
So for this, we
have discovery APIs
as part of our device APIs.
To use those, you would
call the startLoading method
and supply a callback.
And this callback
will be receiving
a snapshot of all devices
that are available to you.
Now, those could be cloud
devices or local devices.
And you should never care
about which type of device
you're working with, as long as
the semantics are right for you.
When you're done with
discovery, you call stopLoading.
And this will free up some
resources for the user.
Now, to reiterate, discovery
is a very expensive operation.
We might use multiple
sensors on the device
to discover devices.
We might send requests.
We might use a cloud connection.
We might use a local connection.
We might use
Bluetooth, and so on.
So as soon as you don't need
it, please stop the discovery
and save some resources.
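The discovery flow above can be sketched as follows. The startLoading and stopLoading names come from the talk; the loader class, the callback type, and the device names are stand-ins so the flow can run on its own, and in the real client the snapshot would arrive asynchronously over cloud, Wi-Fi, or Bluetooth rather than synchronously as modeled here.

```java
// A stand-in model of discovery: startLoading registers a callback that
// receives a snapshot of all available devices (cloud or local alike), and
// stopLoading frees the radios and other resources as soon as possible.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class DiscoverySketch {

    static class DeviceLoader {
        private Consumer<List<String>> callback;
        private boolean loading;

        void startLoading(Consumer<List<String>> cb) {
            callback = cb;
            loading = true;
            // Modeled synchronously for the sketch; real snapshots arrive
            // asynchronously from multiple transports.
            cb.accept(Arrays.asList("light-1", "camera-2"));
        }

        void stopLoading() {   // discovery is expensive: stop as soon as you can
            loading = false;
            callback = null;
        }

        boolean isLoading() { return loading; }
    }

    static List<String> discoverOnce() {
        List<String> found = new ArrayList<>();
        DeviceLoader loader = new DeviceLoader();
        loader.startLoading(found::addAll);  // receive a snapshot of devices
        loader.stopLoading();                // done: free sensors and radios
        return found;
    }

    public static void main(String[] args) {
        System.out.println(discoverOnce());  // [light-1, camera-2]
    }
}
```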
Devices are also
returned in a buffer.
And one thing I'd
like to point out here
is that, as soon as you
don't need the buffer,
please release it.
It will save some resources.
But before you do so,
make sure that you
freeze all devices
that you would like
to still maintain access to.
So suppose you have
thousands of devices
that you can have access
to, but you only care
about one particular device.
It might be a good idea to
just freeze this one device
and release the buffer with
the rest of the devices,
so you can save some memory
and provide a better experience
for all users.
Now that you have your device,
what you would like to know
is what this device
can do for you.
Does it have the right features?
Can it execute the
commands you need?
Is it the right type?
Is it ready, and so on?
And for this, you would
need to get some information
about a device.
There are two main
components to that,
which we define as part
of the Weave protocol.
One is command definitions, and
the other one is device state.
I'm going to talk
about both of those,
starting with
command definitions.
So command definitions
are our way
of describing what a
device can do for you:
what kinds of
commands it supports,
what parameters
those commands take,
and whether you have
a sufficient access level on
the device in order to execute
a given command.
Let's take the jump command
that you can see on this slide
as an example.
So as you can see,
this jump command
requires you to be at
least a user on the device.
And it has a single
parameter, called height,
which is an integer.
And it takes values
from 1 to 100.
Now, based on the
command definition alone,
you can build up a UI
to execute any command
on any Weave-enabled
device that you would like.
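The jump definition just described can be sketched as data plus a validity check. Only the example itself comes from the talk: a minimum role of user and an integer height parameter from 1 to 100. The CommandDefinition class, the Role enum, and the accepts method are hypothetical stand-ins for whatever the real definitions look like.

```java
// A stand-in model of a command definition like the "jump" example: it names
// a minimum role and an integer parameter with a range, and lets you check a
// proposed invocation before sending it to the device.
public class CommandDefinitionSketch {

    enum Role { VIEWER, USER, MANAGER }   // ordered from least to most access

    static class CommandDefinition {
        final String name;
        final Role minimumRole;
        final String paramName;
        final int min, max;

        CommandDefinition(String name, Role minimumRole,
                          String paramName, int min, int max) {
            this.name = name;
            this.minimumRole = minimumRole;
            this.paramName = paramName;
            this.min = min;
            this.max = max;
        }

        // Can a caller with this role execute the command with this value?
        boolean accepts(Role callerRole, int value) {
            return callerRole.ordinal() >= minimumRole.ordinal()
                    && value >= min && value <= max;
        }
    }

    // The jump command from the slide: at least a user, height in 1..100.
    static final CommandDefinition JUMP =
            new CommandDefinition("jump", Role.USER, "height", 1, 100);

    public static void main(String[] args) {
        System.out.println(JUMP.accepts(Role.USER, 50));    // in range: true
        System.out.println(JUMP.accepts(Role.VIEWER, 50));  // role too low: false
        System.out.println(JUMP.accepts(Role.USER, 500));   // out of range: false
    }
}
```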
But to help you a
little bit, in here,
we're grouping command
definitions into packages.
And so packages are
per device type.
Now, there can be, say,
a base package,
which is applicable to all
device types out there.
And it would contain
commands that absolutely
must be implemented
by any device type,
say a rename command, or
something like this.
For you camera
manufacturers, there
can be a camera
package that will
contain specific commands that
can be executed on a camera,
say an ISO setting, or file size,
or [INAUDIBLE] versus JPEG
output, and so on.
Now, for any
package, there would
be some standard commands.
And there would be
custom commands.
So any package is
extensible, but there's
a core set of commands that
any device of a particular type
needs to implement in order
to be called, say, a camera.
So based on that, you might
be able to build a UI that
works for your application.
And if you are a
manufacturer, and you
know of some advanced
capabilities of your devices,
we would provide those as well,
as part of the command definitions,
and you would be able
to build a better
experience for the users.
So to get those
command definitions,
you would use the command APIs,
call GetCommandDefinitions,
and supply the device ID.
Now, you might notice that any
API that works with devices,
or gets some info
about a device,
takes just the device ID.
So you don't need to keep the
entire device resource alive
in order to execute
further API calls.
So the second part
is DeviceState.
And DeviceState describes
what the device looks like
and what's going on with it right now.
Now, this can be as simple as
some statistics about the device,
such as the on/off state
or battery percentage.
Or this can be more
complicated, more structured
information about a
particular device type.
Say, for cameras, there
would be a light sensitivity.
And you could use, again, device
APIs to get this device state,
using the getDeviceState method.
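As a rough sketch, the state a getDeviceState call hands back might look like the map below: flat statistics such as on/off and battery level, plus a nested, per-device-type structure such as the camera's light sensitivity. The map shape and every key shown here are illustrative assumptions; the real API presumably returns a typed object.

```java
// A stand-in model of getDeviceState: device state as a bag of entries that
// can be simple statistics or nested, per-device-type structures.
import java.util.LinkedHashMap;
import java.util.Map;

public class DeviceStateSketch {

    static Map<String, Object> getDeviceState(String deviceId) {
        // In the real client this would be a device API call taking just the
        // device ID; here we return a fixed example state for the sketch.
        Map<String, Object> state = new LinkedHashMap<>();
        state.put("on", true);                 // simple on/off statistic
        state.put("batteryPercent", 87);       // simple battery statistic
        Map<String, Object> camera = new LinkedHashMap<>();
        camera.put("lightSensitivity", 400);   // structured, camera-specific state
        state.put("camera", camera);
        return state;
    }

    public static void main(String[] args) {
        Map<String, Object> state = getDeviceState("camera-2");
        System.out.println(state.get("on") + " " + state.get("batteryPercent"));
    }
}
```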
So now, on to the fun part.
So you found your device.
You know what it's capable of.
You know it fits your criteria.
And you would like to
execute a command on it.
For this, we have
command execution APIs.
Now, when you want
to execute a command,
you would build it based
on the command definition
that you read earlier.
If you know that you are targeting
a particular type of device,
say a light, you know it
does have an on/off command.
So you technically don't need to
call GetCommandDefinitions
if you only want to build
a very simple application.
You could just
construct a command,
because you know there is
a standard set of commands
that every light
bulb will support.
You would do so
by setting a name.
So in our example that was
jump, or setLight for a light.
And you would add
a parameter, a
parameter name and a
parameter value, to the command
and build it.
Once you have the command,
you will use command APIs
in order to execute that.
You will supply the device
ID and the command
that you just constructed,
and send it over to the device.
So command execution is
not an immediate process.
And so things will take time.
Some things can go wrong.
It might be due to user error.
It might be due to
some device state.
It might be due to
connectivity, and so on.
And so you might
want to check what's
going on with the
command that you just sent.
And for this, we
supply a get method
that will give you command
information about any command
that you sent for execution.
Now, it takes a command ID.
And you will receive that ID
when you insert a command
for execution onto a device.
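The build, execute, and status-check steps can be sketched end to end. Only the setName and addParameter builder calls and the general flow come from the talk; the Command class, the execute signature, the command IDs, and the status strings below are all stand-ins invented for this runnable sketch.

```java
// A stand-in model of command execution: build a command from its definition
// (a name plus parameters), submit it with a device ID, and later poll its
// status by the command ID you got back, since execution is not immediate.
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class CommandExecutionSketch {

    static class Command {
        final String name;
        final Map<String, Object> params;

        Command(String name, Map<String, Object> params) {
            this.name = name;
            this.params = params;
        }

        static class Builder {
            private String name;
            private final Map<String, Object> params = new LinkedHashMap<>();
            Builder setName(String n) { name = n; return this; }
            Builder addParameter(String key, Object value) { params.put(key, value); return this; }
            Command build() { return new Command(name, params); }
        }
    }

    // Commands run asynchronously on the device, so we track them by ID.
    static final Map<String, String> statusById = new HashMap<>();
    static int nextId = 0;

    static String execute(String deviceId, Command command) {
        String commandId = deviceId + "/cmd-" + (nextId++);
        statusById.put(commandId, "QUEUED");  // could later become DONE or ERROR
        return commandId;
    }

    static String getCommandInfo(String commandId) {
        return statusById.getOrDefault(commandId, "UNKNOWN");
    }

    public static void main(String[] args) {
        Command jump = new Command.Builder()
                .setName("jump")
                .addParameter("height", 50)   // must satisfy the definition's range
                .build();
        String id = execute("robot-1", jump);
        System.out.println(getCommandInfo(id));  // QUEUED
    }
}
```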
OK.
So to reiterate,
there are four things
that you need to do
in order to build
a functional application that
works with any Weave device.
First, you need to
ask the user for permission,
and only do it once.
Secondly, you need
to find the device
you're going to work with,
and shut down discovery as soon
as you don't need it.
Third, get to
know your device.
Is it ready?
Does it have the right features?
What can I do with it?
And fourth, act on
your device, which is just
creating a command
and sending it over
for execution on your device.
Now with that, I think
that's all for our session.
As Liza mentioned, we'll
have code labs tomorrow,
where you'll be able to get
your hands on our new APIs,
try them yourself, and build
a functional Weave application.
So within an hour, you should
be up and running and working
with any Weave device out there.
So I encourage you to attend.
And thank you for
your attention.
[APPLAUSE]
[MUSIC PLAYING]
