[MUSIC PLAYING]
LAURENCE MORONEY: Hi, everybody.
And welcome to this session,
the last one of the day.
Everybody having
a good IO so far?
Nice.
Excited for the concert tonight?
So thanks for spending some
time with us beforehand.
I'm Laurence Moroney.
KAZ SATO: I'm Kaz.
LAURENCE MORONEY:
And we're here today
to talk about machine
learning plus IoT equals
a smarter world.
Now that's a little bit of
a riff on a famous computer
science book.
Anybody know what that is?
"Algorithms Plus Data
Structures Equals Programs"
by Nikolaos Worth.
It's one of the seminal
works in CompSci.
So we said hey, we want to do
a little bit of a riff on that.
And it's also really about
what's going on in the world
today, that there's two
major trends going on.
Number one is the
rapid rise of IoT.
And number two is the rapid
rise of AI and machine learning.
And in many ways, these are--
they're intersecting
with each other
to create some really
cool new scenarios.
So we wanted to talk through
a few of the scenarios
and give some demonstrations.
First, we'll take a look
at really cheap IoT sensors--
in this case, ones that
measure air quality--
and how they can push
data to the Cloud,
and how we can use that
for machine learning.
KAZ SATO: And then
we'll also look at how
the Cloud can be empowering
for IoT and AI--
the products that help
you collect data,
analyze it, and train
some intelligent models.
LAURENCE MORONEY: And
then the third one:
some of our devices are
practically mini computers,
like Raspberry Pis and
our handheld devices,
so what can we do about
on-device inference
with them as well?
So we'll take a look at
some of the examples of that
and some pretty cool stuff
that's happening there.
But first of all, some numbers.
According to IHS
research, there are
expected to be 31 billion
connected IoT devices
by the end of this year.
That's a lot of devices.
And the same research shows
the data those devices are
pushing to the Cloud growing
at a 27.6% compound
annual growth rate.
So 31 billion by the end
of this year and every year
the amount of data that they're
pushing is growing by 27.6%.
And that's a lot of data.
Now what are we going
to do with that data?
Well obviously, there's
the usual things--
filtering, sorting, querying,
all that kind of stuff.
But we can also use that to
start training some models.
And all of that data and all of
those models, machine learning
models can then be used
to generate intelligence
and to help us to start making
intelligent decisions based
on the data that's being
generated by our devices.
So the first scenario
I wanted to look at
was thinking about
simple sensors
that give us smart solutions.
I just wanted something
with four S's on it--
easier to remember that way.
And an example of this is
like these little devices.
And if you've ever done any
kind of hardware hacking,
this is like a device--
the one on the left
here is called an ESP8266.
And this little chip
actually has GPIO pins on it--
those are input/output pins.
And it also has a
Wi-Fi circuit on it.
And when you combine
that with something
like the one on the right--
that's an air quality
sensor and a gas sensor--
now you can start putting
together something
very quickly and very
cheaply that will push data
to the Cloud.
And I'm going to go
off script for a second
with an interesting story.
Kaz and I had
prepared this talk,
and we were rehearsing
it last week
with the VP of our division.
And one piece of feedback
he gave us was like, hey,
you don't have any small
simple sensors on there.
You're doing lots
of Cloud stuff.
And you're doing
lots of mobile stuff.
But you don't have
any IoT sensors.
So you should really build
one and put it in there.
And this was last Wednesday.
And the environment that we live
in now is so cool that we were
able to-- thanks to Arduino and
thanks to the maker community--
build the demo
that I'm going to show shortly
in just a couple of days.
And the longest part of that--
can anybody guess
what the longest part
of building the demo was?
AUDIENCE: [INAUDIBLE]
LAURENCE MORONEY: No, it
wasn't actually testing.
It was waiting for the parts
to be shipped from Amazon.
I'm an Amazon Prime subscriber,
and it took two days.
And it only took three days
to build the whole thing.
So it's an amazing time
to be alive, I think.
The amount of things
that you can do,
the amount of
creativity, and the kind
of things that you can build--
I just find it so cool.
And I'm really happy
to be alive right now.
So let's take a
look at an example
of how you'd use one of these.
Consider a building
like this one,
and say in a
building like this we
could put sensors like
this for air quality
around the building.
And when they're
around the building,
then we can start tracking the
air quality in that building.
But you know, that's
just an instant status.
We can tell that here on
the south side of the building,
the air quality isn't great.
And then on the north
side of the building,
the air quality is very nice.
But what would
usually happen if you
work in an office like this?
People on the south
side are going
to go and adjust
the thermostat,
and the fans will kick in.
And then people on the north
side are going to be too cold.
Wouldn't it be
great if we could intelligently
adapt the building?
Or what if there was a disaster--
maybe a carbon
monoxide leak--
and our sensors
are going to show
that carbon monoxide leak?
And I don't know how
well you can see it here,
but the leak is primarily
centered around the exit.
And if people are being
driven towards the exit,
they might be driven
towards danger.
But when we've got cheap
sensors like this, we could say,
hey look, instantly we can
have an alternative exit.
But what if we can
go even further--
sorry.
What if we could go
even further and
think about scenarios where
machine learning gets involved
and we can start
predicting things?
Like maybe we can predict
the best times of the day
to keep the fans on to
maximize air quality
and reduce our energy footprint,
good for the environment.
Or maybe we could, if there
was a gas leak or something
like that, we could
predict the path
of the gas leak based on testing
and based on machine learning
models.
So emergency responders would
know where it's safe to go
and where it's dangerous to go.
And we all love our
emergency responders,
and we want to make
sure that they're safe.
Or maybe we could model
what the impact of
rearranging our office would be.
What if we have more people
sitting in the north and less
in the south?
Or what happens if
we host a big event?
And this building is
actually one of the offices
in Google's Seattle.
And up on the
top left-hand side there,
that room called
Center of the Universe
is actually where we
host a lot of events.
And it'd be good for us to
understand the air quality when
we do so.
So if you've got a lot
of devices like this,
there's a product
called Cloud IoT Core.
And the idea behind Cloud IoT
Core is it's fully managed
and allows you to easily secure,
connect, manage, and ingest
data from all of your devices.
And then once you
have that, then you
can start doing interesting
things, like pulling
the data out using
BigQuery, using Cloud ML,
using Cloud Datalab,
using Data Studio to be
able to then build models,
and then run inference
on those models and have
analytics behind that.
But let me give a little demo
of environmental monitoring
and what it would look like.
So if we can switch
to the Wolfvision.
So this is the device
I was talking about.
So this is a simple
Arduino device.
I'm not using the
Wi-Fi in this case.
I'm just using a wired one.
So I have a shield on top of the
Arduino with a wired internet
just so we can make sure that
the demo would work here.
And then this little device
is the air quality sensor.
And so right now, it's measuring
the air quality in the room.
And so that's measuring
for particulates.
It's measuring for gases,
carbon dioxide, carbon monoxide,
natural gas, all
that kind of thing.
So we kind of want to
trigger it by polluting
the air a little bit.
Any ideas how we can do that?
I'll just breathe on
it, because we all
breathe out carbon dioxide.
So I'm going to breathe on it.
I've always wanted to
do that in a session.
And if we switch to
my laptop, we
can now see I have
Firebase running here.
And I'm using Firebase
Cloud Firestore.
And on Cloud Firestore,
I'm actually
getting the readings
from this little device.
So we can see the readings
up here once they come up.
Look here at the reading--
57, that was a moment ago
when I breathed on it.
54-- and I've actually
gotten it up to 70.
I must have eaten something bad.
The background
here is about 42.
So what's going on here?
How is this being done?
Well, with Arduino, it's super
simple to write code and see--
and can we switch
back to the slides?
It will be a little
easier to see the code.
So if I go back
to the slides, we
can see this is the kind
of code that you would have
written in C on an Arduino.
And all I'm doing is
I'm reading pin 0--
this is plugged
into data pin 0 on there.
I'm reading that.
And then every two seconds,
I'm reading it again.
And then what I'm doing with
that is I'm posting it to a URL,
and that URL is writing it
to Firebase for me.
And that's all you have to do.
It's really that simple
to build a sensor
to start monitoring air
quality very, very cheap,
very, very simple.
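If you'd rather prototype that loop off-device first, here's a rough Python sketch of the same logic as the Arduino C code: read a sensor value, package it up, and POST it every two seconds. The proxy URL, field names, and the fake sensor read are all made-up stand-ins for illustration, not the actual demo code.

```python
import json
import time
import urllib.request

# Hypothetical internal proxy endpoint -- not the real demo URL.
PROXY_URL = "http://192.168.1.10:8080/reading"

def read_air_quality():
    """Stand-in for analogRead(0) on the Arduino; returns a sensor value."""
    return 42  # replace with a real sensor read

def build_payload(value):
    """Package one reading as JSON, the way the sketch posts it."""
    return json.dumps({"airQuality": value, "timestamp": int(time.time())})

def post_reading(value):
    """POST a single reading to the proxy endpoint."""
    data = build_payload(value).encode("utf-8")
    req = urllib.request.Request(
        PROXY_URL, data=data, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def run(poll_seconds=2):
    """Read and post forever, every two seconds, like the Arduino loop()."""
    while True:
        post_reading(read_air_quality())
        time.sleep(poll_seconds)
```

Calling `run()` would start the polling loop; swap in a real sensor read and your own endpoint first.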
And then when you aggregate the
data from all of these sensors
together, if you've got
thousands of them in the Cloud,
then you can start building some
really intelligent solutions.
But there's something
to take into account
when you're doing IoT.
And that's to think
in terms of security.
So data security is
very, very important.
Obviously, these things are just
generating data and pushing it
up to my database
but a smart hacker
might be able to figure
out how I'm doing that
and then get access
to my database.
So you've got to think in
terms of data security.
And a pattern that I usually
like to follow for this--
and for good reason
too-- is instead
of pushing from the devices
directly up to your databases,
have something
within your firewall
that they are pushing to.
And let that act as a proxy.
And then that proxy will proxy
out to your databases for you.
So then you'll have a secure
connection between that proxy
and the databases.
And then hopefully, that secure
connection will be enough.
And the reason to do it
as an internal thing is
the power limitations
on these devices--
a lot of them
will only support HTTP.
They won't support HTTPS.
So that's why in my
code a moment ago,
if you looked closely, you
would have seen that I was just
calling client.connect(server, 80)
in that case,
because it's just an
open HTTP connection.
So that was to my internal proxy
and then the internal proxy
was sending it out to the Cloud.
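That internal-proxy pattern can be sketched like this in Python: devices POST plain HTTP to a box inside the firewall, which re-sends each reading over HTTPS. The upstream URL is a hypothetical placeholder, and the relay function takes an injectable post function so the logic can be exercised without a network.

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical secure upstream endpoint -- not the real database URL.
UPSTREAM = "https://example-project.firebaseio.com/readings.json"

def forward(body, post_fn=None):
    """Relay one device reading to the secure upstream endpoint.

    post_fn(url, data) is injectable so the relay logic can be tested
    without opening a real connection."""
    if post_fn is None:
        def post_fn(url, data):
            req = urllib.request.Request(
                url, data=data, headers={"Content-Type": "application/json"})
            return urllib.request.urlopen(req).status
    return post_fn(UPSTREAM, body)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Devices speak plain HTTP to us; we re-send over HTTPS upstream.
        length = int(self.headers.get("Content-Length", 0))
        forward(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()

def serve(port=8080):
    """Listen for plain-HTTP posts from the sensors inside the firewall."""
    HTTPServer(("", port), ProxyHandler).serve_forever()
```

Only the proxy-to-database leg is encrypted here, which is the point of the pattern: the unencrypted hop never leaves your own network.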
So things to consider--
very cheap, very easy for you
to build systems like this one.
Coding with something like
Arduino and C is super simple.
There's tons of APIs and
there's tons of examples
that you can just adapt.
And that's what I
did in this case.
I've never touched an
Arduino before last Friday--
and I'm not particularly smart.
And it took just
a couple of hours
to figure out how to
hack some of the code
that they had provided to be
able to build this and write
my data up to Firebase.
So that's on a
small simple sensor.
Now let's take a
look at what happens
when you go Cloud scale.
KAZ SATO: OK.
Thank you.
So that was a very
simple scenario.
And let's take a look at
another scenario where
we have smarter
devices, such as a camera.
If you take a
picture with a camera,
you may want to detect
what kinds of objects
you have in the image.
And TensorFlow
provides an API to do that:
the Object Detection API.
So by using the
Object Detection API,
you can get the labels
of each object in the image,
the bounding boxes,
and the scores.
And it's really
simple to use the API.
For example, you can just
download the model file
for the Object Detection API.
And you can build a
dictionary of tensors
from the model file.
Then you pass your images to
that dictionary, and that's it.
So you get the
output dictionary, which
has the detection
results, such as the number
of objects you have,
the classes of the objects,
the bounding
boxes, and the scores.
It's so simple.
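As a sketch of what you might do with that output dictionary, here's a small helper that filters it down to confident detections. The key names follow the Object Detection API's output format as described above; the category index and the 0.5 score threshold are illustrative assumptions.

```python
def confident_detections(output_dict, category_index, min_score=0.5):
    """Keep only detections scoring at least min_score, and resolve
    numeric class ids to human-readable labels via category_index."""
    results = []
    n = int(output_dict["num_detections"])
    for i in range(n):
        score = output_dict["detection_scores"][i]
        if score < min_score:
            continue
        class_id = int(output_dict["detection_classes"][i])
        results.append({
            "label": category_index.get(class_id, "unknown"),
            "score": score,
            # Boxes are [ymin, xmin, ymax, xmax] in the API's convention.
            "box": output_dict["detection_boxes"][i],
        })
    return results
```

Feeding this the raw output dictionary gives you exactly the kind of "what items, and how many" answer the shopping-cart demo needs.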
So for example, you could
have the Raspberry Pi
with a camera attached
to a shopping cart
so that you could take a picture
of the inside of the shopping
cart and apply the Object
Detection API to detect
what kind of items you
have, or how many of them
you have in the cart.
And we can use Cloud IoT
Core combined with Cloud
Machine Learning Engine
to build a system that
provides production-level
scalability and availability.
For example, if you want
to build a smart camera
system for the
shopping cart, you
can have a Raspberry Pi and a
camera attached to the shopping
cart that uses the Cloud IoT
Core to collect all your data,
store the data on
the Cloud Pub/Sub,
which could be the back
end for your server.
And we could use the
Kubernetes Engine or GKE
as an orchestrator for
orchestrating everything
happening at the Cloud side.
And finally, GKE
sends the data
to ML Engine.
That's where we run the
predictions with the Object
Detection API.
Let's take a look at
the demonstration.
So I'll try to use
the webcam here to show--
LAURENCE MORONEY: Switch
to the laptop please.
KAZ SATO: Yeah.
So this is how it looks.
It's a very small
cart, a miniature cart.
And we have a Raspberry Pi
and a display and a camera.
So if you put something like--
this is a fake eggplant.
I bought it in a Japanese store.
LAURENCE MORONEY: Also known as
an aubergine for us Europeans.
KAZ SATO: Or a tomato.
LAURENCE MORONEY: Also
known as a tomato.
KAZ SATO: And this is how the
Object Detection API would look.
So please switch--
I should ask him.
Yeah.
So let's wait a while.
LAURENCE MORONEY:
I need to refresh?
Should I refresh?
KAZ SATO: Is it working?
Maybe a network issue.
Oh yeah, there you go.
So--
[APPLAUSE]
KAZ SATO: Thank you.
So it's so easy to detect
what kind of objects
and how many of them you
have in the shopping cart.
So please go back to the slide.
So that worked.
So that was a very
simple scenario
where we just counted the
items and classified them.
But you may wonder, what's
the point of using the Cloud
for this kind of detection?
And actually, that is true.
You can run the Object
Detection API just
inside the Raspberry Pi box.
And you don't have
to use the Cloud.
But if you can collect all
the data on the Cloud side,
with thousands
of the records,
then you can extract
some collective intelligence
from thousands of
those past records.
For example, you can train
a machine learning model
with all the shopping
items you are adding
to the cart, thousands of them.
You can train a
machine learning model
that can predict what will
be the next item you will be
putting into the shopping cart.
That would reflect
the location proximity--
starting from the vegetables
and fruit, then the meat--
like a chicken-- and then
the spices and pasta--
based on all the past
history of the shoppers.
And you can also
have a small display
with it showing you
a recommendation
for the next item to add.
That would work as a
navigator for the shoppers.
So let's call it a
Smart Shopping Navigator.
And what kind of
machine learning model
should we design to
implement the Navigator?
At first, we have to
represent what kind of items
you have in the cart.
So if you have a
tomato in the cart,
we may have a one-hot vector
that represents the
tomato in your cart.
In this case, you would have
a 1.0 as a value in the vector.
And if you put
eggplant in the cart,
you would have another
1.0 value in the vector.
If you have the chicken, then
you would have another 1.0
for chicken.
So these multiple one-hot
vectors represent the cart item
history.
And with TensorFlow, you
can write code like this.
So you have a dictionary
for shopping items.
And you can call a
one-hot function
to encode the shopping
items as vectors.
And then you have
the multiple vectors
that represent the
cart item history.
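The encoding described here can be sketched in plain Python. The item dictionary below is a made-up stand-in for the real one, and this is the bare logic rather than the TensorFlow one-hot call shown on the slide.

```python
# Hypothetical item dictionary, in a fixed order.
ITEMS = ["tomato", "eggplant", "chicken", "pasta"]

def one_hot(item):
    """Encode a single shopping item as a one-hot vector over ITEMS."""
    vec = [0.0] * len(ITEMS)
    vec[ITEMS.index(item)] = 1.0
    return vec

def cart_history(items_over_time):
    """Stack one one-hot vector per added item: the cart item history."""
    return [one_hot(item) for item in items_over_time]
```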
And now we have the history.
And how do you detect the
changes in the history?
In this case, you can use a
single-dimension
convolution, or 1D convolution.
And convolution in
machine learning
is usually used to
detect certain patterns
in a local group of the data.
For example, if you
have a 2D image,
you can apply a 2D convolution
to find some shape,
some patterns in the image.
That is how a CNN, or
convolutional neural network,
works for image recognition.
But you can also
apply the convolution
to single dimensional data,
such as time series data.
For example, you can apply 1D
convolution on the cart item
history
to detect what kind of changes
have happened in the cart items.
And then you can flatten the
output to get the result.
And with TensorFlow, you can
write the code like this.
You call the conv1d
function to apply the single-
dimension convolution, and
then call flatten to get
the result. So now we have
the cart item change history.
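To make the 1D convolution concrete, here is the bare arithmetic in plain Python, not the TensorFlow conv1d call from the slide. The [-1, 1] kernel is an arbitrary example that responds wherever a value changes between consecutive steps, which is the kind of cart-change pattern being described.

```python
def conv1d(sequence, kernel):
    """Slide kernel across sequence ("valid" padding: no edge padding)
    and return the dot product at each position."""
    k = len(kernel)
    return [sum(kernel[j] * sequence[i + j] for j in range(k))
            for i in range(len(sequence) - k + 1)]
```

Applied per item channel, a difference kernel like `[-1, 1]` fires exactly at the timestep where an item's count in the cart goes from 0 to 1.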
And we may also want to
take in other factors,
such as seasonality-- whether
it's winter or summer--
or time of day
because shoppers may
want to choose food items
based on the seasonality--
whether it's a summer hot day
or whether it's a cold day.
So we put everything into
a single MLP-- a multi-layer
perceptron, which is a
classic old neural network with
three layers--
to predict the next items to add.
With TensorFlow,
you can write code
to concatenate everything
into one tensor.
And you define the
three layers of the MLP.
And that's it.
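As a sketch of that last step, here is a tiny three-layer MLP forward pass in plain Python: concatenate the cart features, seasonality, and time of day into one vector, then run it through the layers. The layer shapes and weights are placeholders for illustration, not the trained Navigator model.

```python
import math

def dense(x, weights, bias, activation=None):
    """One fully connected layer: out_j = f(sum_i x_i * W[i][j] + b_j).
    weights is a list of rows, one row per input feature."""
    out = [sum(xi * w for xi, w in zip(x, col)) + b
           for col, b in zip(zip(*weights), bias)]
    if activation == "relu":
        out = [max(0.0, v) for v in out]
    return out

def softmax(logits):
    """Turn raw scores into probabilities over the next-item candidates."""
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def predict_next_item(cart_features, season, time_of_day, params):
    """Concatenate every factor into one vector, then run the three layers."""
    x = cart_features + season + time_of_day
    h1 = dense(x, params["w1"], params["b1"], "relu")
    h2 = dense(h1, params["w2"], params["b2"], "relu")
    return softmax(dense(h2, params["w3"], params["b3"]))
```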
Let's take a look at
how this Smart Shopping
Navigator works.
LAURENCE MORONEY: Switch
to the laptop please.
KAZ SATO: Let's
switch to the laptop.
If you put the eggplant, then
it should detect the eggplant.
LAURENCE MORONEY:
I see a shadow.
KAZ SATO: So you are
watching the same screen
I'm watching here.
So at the right side, you'll see
what kind of recipes and items
are recommended.
So it looks like the system
recommends the pasta eggplant
bake.
And I wanted to make a pasta.
So I would put the tomato.
Then the Navigator would
show the other things
you have to add would be--
I already have it.
But eggplant, tomato,
and I want to make the--
I want to make a pasta.
LAURENCE MORONEY: The
mouse is also working.
KAZ SATO: The mouse is working.
Well, somehow
doesn't show pasta.
But anyway--
LAURENCE MORONEY: Refresh?
KAZ SATO: Yeah, I could
try putting just chicken.
So you can just follow
the items on the screen
and find all the items to
make the eggplant tomato pasta.
That's how it works.
Thank you.
So please go back to
the slides-- all right.
So as you have seen
in the demonstration,
by showing you recipe
ideas and the next item to add,
it works just like a car
navigator for the shoppers.
So the shoppers can just
follow the next item
to add to fill the cart
with all the items required
to cook a certain recipe.
And that was an example of how
the Cloud can be empowering
for IoT devices-- not only
for collecting data, but also
for analyzing it and learning
some collective intelligence
from it.
It's not just an internet
of things anymore.
It's an internet
of smart things.
And this demonstration
was actually
built by a Google Cloud
partner called Groovenauts.
And they have open sourced
everything on GitHub.
So if you're interested,
please go to GitHub
and search for Smart
Shopping Navigator
to find out what kind
of code you would write.
And they also provide
their production solution
called Magellan Blocks, where
you have a user interface
to build a whole data pipeline
for collecting the IoT data
and analyzing it and
training the model.
So with that, I'll hand it
back to Laurence.
LAURENCE MORONEY: All
right, thank you, Kaz.
Pretty cool stuff, right?
[APPLAUSE]
So now the third scenario
that we wanted to look at
was on-device
inference and training.
And there's a few
different types
of devices you can do
inference and training on.
So for example, you can do it
on a mobile phone or a tablet.
You can do it on a Raspberry Pi.
Or you can do it on an
Android Things device.
And there's a really
cool video of doing it
on a mobile phone or tablet
that I've put at this QR code.
So take a look at this QR
code, watch this video.
I'm not going to show it here.
But the idea behind
this video is--
if you went to
the TensorFlow Dev Summit,
you would have seen it.
But it's farmers in
Tanzania in Africa
who don't have
Cloud connectivity
and who rely on a
plant called cassava,
that if the cassava
plant gets diseased,
it can destroy
an entire crop.
And it's hard for humans
to eyeball it and see it.
But they built a
machine-learned system
where all they do is wave
their phone over the leaves,
and they can diagnose
diseases early.
It's really, really cool.
So that kind of on-device
inference without any Cloud
connectivity is possible
through something
called TensorFlow Lite.
With TensorFlow Lite you can
also do inference and training
on a Raspberry Pi.
And I have a car
here that I'm going
to show in a moment that's
driven off of a Raspberry Pi.
And if you're in
the Sandbox area,
you would have seen cars like
this one self-driving around.
But basically, there's a little
camera at the front here.
And what I would do is
I would manually drive
the car, record what I'm doing.
And then the
telemetry that I'm
sending to the car
to drive it around,
and the corresponding video,
give us the features
and labels that we want,
which the car will then
learn from and use to self-drive.
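The recording step described here pairs each camera frame with the driving commands sent at the same moment, behavioral-cloning style. A sketch of assembling those records into features and labels might look like this; the field names are illustrative, not the actual Donkey Car schema.

```python
def build_training_set(frames, telemetry):
    """Pair each recorded camera frame (the features) with the driving
    commands captured at the same timestep (the labels)."""
    assert len(frames) == len(telemetry), "one command per frame expected"
    dataset = []
    for frame, command in zip(frames, telemetry):
        dataset.append({
            "features": frame,  # e.g. a pixel array from the webcam
            "labels": (command["steering"], command["throttle"]),
        })
    return dataset
```

A model trained on these pairs learns to emit steering and throttle from a camera frame alone, which is exactly what drive mode then does.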
If we switch back to
slides for a moment.
And then finally,
there's the third one,
which we'll show
in a moment that's
on an Android Things device.
All this is made possible
because of TensorFlow Lite.
And I strongly recommend you
check out TensorFlow Lite,
and check out the
talks on TensorFlow Lite.
But the idea behind
TensorFlow Lite
is that once you've trained
a machine-learned model,
we have a bunch of
converters that will then
flatten that model
and shrink it to make
it more mobile friendly.
There's an interpreter
core then which
is used to execute that
model so you can do inference
on the model.
So for example,
things like if you
want to do an image
classification,
it'll do that
locally on the device
without round-tripping
to the Cloud.
And then there's all
the stuff that's
needed, like operation kernels
and hardware acceleration,
to take advantage of the
mobile hardware that you have.
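An image classifier like this returns a score for every label, and the demos later in the talk show the top three guesses. A sketch of that ranking step follows; the interpreter call itself is omitted here, this just sorts the scores it would produce.

```python
def top_k_labels(scores, labels, k=3):
    """Return the k highest-scoring (label, score) pairs, best first."""
    ranked = sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]
```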
It runs on Raspberry Pi.
It runs on iOS.
And of course, it
runs on Android.
And it works really
great on a Pixel.
I have a Pixel 2 up here.
And if you want to see a demo
of it, come see me afterwards,
and I can show real time
inference on the Pixel.
Doesn't really work on
the stage that well.
Now here's a video that I
shot of me doing real time
inference.
And you can see it saw
the coffee mug correctly.
And here, this
coffee mug kind of
looks a little bit like
a mixing bowl so it got--
at some points, it thought it
was a mixing bowl.
And I know that's
a horrible product
placement for my show
"Coffee with a Googler",
but I had to do it.
And it was able to recognize
a mouse and stuff like that.
So this is what it would
look like running on Android.
And so for the self-driving car
that I've been talking about,
you can see on the right
is the Raspberry Pi.
On the left is something
called a pulse-width modulation
controller.
And that's what actually
drives the car itself.
So I have TensorFlow Lite
running on the Raspberry Pi.
And that's what will
actually drive the car.
And this is built on a
project called Donkey Car.
And if you want to go and take
a look at Donkey Car and details
about Donkey Car,
they're at this URL.
So should we take a look
at what it would look
like to actually train the car?
So if we can switch
to my laptop please.
So I've actually started up
the Raspberry Pi on the car.
And there's a little web
server running on that.
So let me go and see
if I can access it.
Hopefully, it'll obey.
Oh no, it's refusing to connect.
Hang on.
Oh sorry, CD D2.
So what I'm doing is
it runs on Python.
So I'm just running
the Python code.
Sorry, Kaz, to make
you hold it so long.
So this is actually
booting up the web server
that's on the car that I would
then use to train and drive it.
So it'll take a moment
just to boot that up.
But what will happen is, if
I want to control the car
when I'm training it--
obviously, the car can't
do that on its own.
It has that little web server.
The web server is connected
to a hotspot on my Pixel.
My laptop's connected
to the same one.
So it's starting up that server.
It looks like that
server's done.
And here it is.
So as Kaz moves it around,
you'll see that.
And if I were to try
and drive the car--
I made a bit of a mistake.
If you've been over
to the Sandbox,
you would have seen
the cars they're
using are really small ones.
I didn't read the
specs properly.
And I bought this one, and
it's an absolute beast.
It goes really, really fast.
I don't dare put it on the
floor and start driving,
because it will
take off and it will
land about three rows back.
So watch out, Kaz.
And you see this is what
it looks like if I'm
driving and training it.
I can steer.
And I can control it like this.
So what it's doing
right now is recording
what it sees on
the webcam as the car
actually moves, and storing
all that stuff, which we then
use to train a TensorFlow
model for that specific area.
So when you see the cars
over in the Sandbox,
they're on that
figure-of-eight track.
So we drove them around
that figure of eight.
We train them in that way.
And then instead of
launching it in training mode
as I have it now, you
just launch in drive mode,
and they go and they
drive themselves.
So you can build one of
these as a Donkey Car--
OK, we can put it down.
You can build one of these
using the Donkey Car project.
This one is a little
bit more expensive,
but they generally cost
maybe about $200 total
for everything.
And you can build your own
little self-driving car.
And it's all open source.
It's really, really cool.
So thank you.
Can we go back to
the slides, please?
The motor on the
car is really loud
so I'm going to turn it off
because we hear it whizzing.
OK so that's what it's
like to train a Donkey Car.
And with on-device inference,
you can then have a--
believe it or not--
model self-driving car.
I think that's really cool.
Then the next thing was Android
Things that I mentioned.
And did everybody
get one of these?
All right.
So these are just so cool.
So this is just a little--
this one isn't actually
a Raspberry Pi.
It's a different developer
board but a similar concept.
And with Android Things
being able to execute stuff
and with TensorFlow Lite
running on Android Things,
you can start doing inference.
And has anybody
done the Code Lab?
Have you tried it out to
do inference on these?
Try it out and
build it and you'll
be able to do things like this.
So this afternoon back in
my hotel on this device,
I kind of pointed it
at a water bottle.
And I thought it was interesting
that it gives you three things
that it thinks it is.
It thought it was a water
bottle, a pop bottle, or toilet
tissue.
I don't know if it's because
of the circular parts of it
or anything like that.
And then I tried it on this one.
And it's said a coffee
cup, a coffee mug, a cup,
or a measuring cup.
And particularly with the way
the handle is placed,
I thought it would be really
interesting to try and fool it.
With the handle the
way it's placed,
I thought it might
think it was a pitcher.
Because sometimes when
I do mugs like that,
it kind of classifies
as a pitcher.
And then of course, this
one, I tried on my Android,
my little French Android.
And it thought he was a
piggy bank, a teapot--
I guess he's a little teapot,
the way he's standing--
or maybe even a spotlight.
And maybe it's the shape of
it or something like that.
But that's the stuff
that's built in.
And when you assemble this
device that you've got,
that app is actually
already running on here.
And that app is open source.
And one of the really
cool things you can do
is retrain the image model
that the app uses
for your own images.
I've been working
on a version of it
that I'm going to
add to the car.
And maybe we'll be able to
talk about it at a future I/O
where I'm retraining the
car to recognize things
like traffic lights.
So if it sees a red
light, it will stop.
If it sees a green
light, it will go.
And all the code for you to do
that is available with this kit
when you get it.
So give it a try.
And for those of
you watching online,
just check out Android Things.
You can buy these kits.
They're relatively cheap,
and they're so much
fun to play with.
So the Code Lab that I
mentioned-- if you're here
at I/O, you can go over
and try it out.
If you want to try it on
your machines at home,
the URL for it
is at this QR code.
Go give it a try--
have a play with it.
It's a lot of fun.
So in recap, we
spoke a little bit
about the Internet of Things
and AI, and about the trends
that are happening-- the explosive
growth that is going on,
the growth of actual devices and
the amount of data that they're
producing, and then the things
that you can do with that.
We looked at sensor
data on an Arduino.
We looked at Cloud
Scale AI, and we
looked at on-device inference.
Now there's a whole
bunch of things
that you can do to
go take a look at.
Come on slide, animate.
There we go.
So things you can try out--
number one, writing data
from your Internet of Things
device to Cloud Storage.
There are great details
on it at this URL.
And if you want to explore the
TensorFlow Object Detection
API--
which is what Kaz was using here
when it was detecting tomatoes
and eggplants or aubergines
and that kind of stuff--
you can find details
on that at this link.
IoT and the Cloud ML
engine, details for that
are at this link and this one.
And TensorFlow and
TensorFlow Lite--
all of the mobile scenarios
that I was showing here
were based on TensorFlow Lite--
you can learn about
that at this link
or attend the talks
if you're here.
And finally, exploring
Android Things itself,
I find it super cool.
And it's really nice because
if you're already an Android
developer, you can
start building stuff
using your existing skill sets.
When I was doing the thing
for the Arduino here,
I had to give myself a crash
course in C. But like I said,
if you're already
an Android developer
and you're used to using Java
or Kotlin to build your Android
apps, then you can start
building Things apps with that
too.
So with all of
that, we just want
to say thank you very much.
Feedback is always welcome.
We'd love to hear it,
and it's at this URL.
Thank you.
[MUSIC PLAYING]
