INDRANIL CHAKRABORTY:
Hello everyone.
Good afternoon.
And thank you for coming to this session, IoT and Cloud for Industrial Applications.
I am Indranil Chakraborty.
I'm a product guy at Google
Cloud Platform, working on IoT.
What we're going to cover today: we're going to cover a couple of industrial applications which are enabled by IoT and Google Cloud, and give you some demos from our partners and customers on how they have leveraged the power of Google Cloud with IoT to really solve some interesting industrial problems.
Let's get started.
I'm sure you all have heard
about the fourth Industrial
Revolution.
If you look it up on Wikipedia, it says that we've been in this fourth Industrial Revolution for quite a few years.
The first revolution started back in the 18th century, when the steam engine was first invented. Powered by water and steam, it was used to mechanize and automate a lot of production as part of Industrial Revolution 1.0.
The second revolution started with mass production on assembly lines, using automation in the manufacturing field. It was primarily started by Henry Ford, when he introduced the assembly line for his Ford car manufacturing.
And it said that the third one started with computerization and automation of IT.
So as we started installing
robots and connecting
those robots to the
IT system, that's
what started the third
Industrial Revolution.
And the fourth revolution
has been going on
for over 50 years, thanks
to the massive advances
in technologies.
And we believe that
with IoT and Cloud,
we can make further advancement
in the fourth Industrial
Revolution stage, which
can help businesses derive
a lot more efficiency, and
get a large amount of value
from IoT and Cloud.
What we're seeing today in the industry are a couple of key trends that are driving the advancement of industrial applications using IoT.
First, we're seeing an increasing desire by industry players to combine data from their operational sites-- so factories, utility grids, and other sites-- with their IT systems, so that they can really get the value from both of these data sets, derive a lot more efficiency, and improve their overall production as well.
So that has been one
of the key drivers
that we're finding
in the industry.
The second is that it's really become much easier to extract data from factories and other industrial sites as well.
If you think about it, when you say IoT in an industrial scenario, a lot of the manufacturers have already installed sensors in their machines. So these machines were already instrumented with sensors, and they're collecting a large amount of data. But only a very small percentage of this data is actually getting utilized for many of these applications.
And as the prices of sensors keep dropping, thanks to the mobile phone revolution, we're seeing a large number of these industries either using these existing machines, which are instrumented with sensors, or retrofitting legacy machines with sensors, which can then be connected to the Cloud to derive a lot of value.
And finally, over the last couple of years, Cloud technology providers, such as Google Cloud Platform, have made a lot of improvements to ingest data at massive scale, and then store it in a very cost-effective way.
In addition, the Google
Cloud platform and others
have also been providing
machine learning capabilities,
so you can really extract
very meaningful insights
from this data.
And so as we think about
industrial application,
there are a couple of use-cases
we need to think about.
What are the key
use-cases we want
to address as we start
building IoT applications
for the industrial use-cases?
So you may want to improve the overall efficiency of your production line. There are players and customers who really suffer a lot if there is any downtime in their machinery. And so they want to use data, and apply analysis on that data to predict failures, so they can avoid any such downtime.
Real time tracking of inventory
is also a key use-case,
because you can use that to not
just avoid inventory stock-out,
but you can also use
it to predict demand.
So you can make
sure that there's
enough inventory as the
customer demand increases.
And of course, product quality is one of the key areas that a lot of our customers and partners are very interested in as well-- and asset-tracking in the supply chain, among others.
At Google, as we think about these use-cases-- and as a developer or an industrial partner, as you think about building IoT applications for industries-- there are really two key areas which you need to solve in order to address any of these use-cases.
One is we have to be able to
ingest massive amounts of data
across different sites, and then
be able to process it at scale,
and then tease out key
insights so you can really
apply those learnings and
get some business value.
And in Google Cloud, we have a number of services which allow us to do that.
And if you think
about it, Google
has been working on big data
for over a decade, right?
So we have been working on
ingesting data, processing data
at scale over a decade now.
And we think we can
apply the same services
for industrial
applications as well, when
it comes to the context of IoT.
So I'm going to spend some time
on a couple of these product
offerings, which we have for
ingesting and processing--
analyzing data at scale.
The first thing to understand, if you think about IoT and the data you get from sensors, is this: individual sensors instrumented in the machines may be generating a couple of bytes of data per second, or every few seconds. But when you look at it from a cumulative perspective, it generates a massive volume of data. And so you want a system which can ingest massive amounts of data at scale.
And Pub/Sub is a great
product for this.
Essentially, what we do here is you have IoT devices or sensors which can publish data, either directly or via a gateway, to a particular topic. And then you can have other services consume data from that topic for further processing.
A couple of key benefits of Pub/Sub. One is durable message persistence. And what does that mean? Since sensor devices are constrained, you need to be able to send the data without much overhead, and they shouldn't have to try to resend it every time. So Pub/Sub makes sure that once the data is published, it is durably stored until the consuming party sends an ACK back.
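That ack-based durable delivery model can be sketched in plain Python. This is a toy model, not the real Pub/Sub client API; all class and method names here are invented for illustration.

```python
import collections
import itertools

class DurableTopic:
    """Toy model of durable, ack-based delivery (NOT the real
    Pub/Sub API). A published message is held by the topic until
    the consumer explicitly acknowledges it."""

    def __init__(self):
        self._queue = collections.deque()   # messages awaiting delivery
        self._outstanding = {}              # delivered but not yet ACKed
        self._ids = itertools.count(1)

    def publish(self, message):
        # The sender's job ends here; the topic now owns the message.
        self._queue.append(message)

    def pull(self):
        # Deliver the oldest message, but keep holding it until ack().
        if not self._queue:
            return None
        ack_id = next(self._ids)
        self._outstanding[ack_id] = self._queue.popleft()
        return ack_id, self._outstanding[ack_id]

    def ack(self, ack_id):
        # Only an explicit ACK lets the message be dropped for good.
        self._outstanding.pop(ack_id)

    def nack(self, ack_id):
        # Consumer failed: put the message back for redelivery.
        self._queue.appendleft(self._outstanding.pop(ack_id))

topic = DurableTopic()
topic.publish({"sensor": "press-7", "temp_c": 41.2})
ack_id, msg = topic.pull()   # delivered, but still durably held
topic.ack(ack_id)            # dropped only after the ACK
```

The point of the sketch is the redelivery path: if the consumer crashes before acknowledging, the message is still there and gets delivered again.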
The second advantage of Pub/Sub is that it's a global service. While your sensors and industrial sites might be distributed globally, you want your consumption to happen in a central location. Since Pub/Sub is a global service, you don't have to worry about what happens if my New York service goes down, or what happens if my San Jose service goes down. It's a global service, and it scales automatically.
In fact, Pub/Sub can scale from handling a few thousand messages per second to millions of messages per second within just a couple of minutes, without the user or the developer needing to provision any additional services.
And finally, it also allows you to decouple your upstream device deployment from your downstream application, so you can continue to build and iterate on the application that is consuming the data from all these different IoT devices, without needing to change or modify your upstream deployment.
In fact, you can use it to route the data to some of the Cloud services, such as Dataflow or Cloud Functions, or even to your own existing ETL pipelines as well.
Once you've ingested
the data at scale,
the next thing you want to
do is really process it.
And for industrial IoT applications, real-time analysis and real-time insights become really critical. And so Dataflow is a great product, which allows you to constantly receive streams of data from Pub/Sub and then continuously analyze them.
So you could compute average
over a stream of data
within a certain window.
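That kind of windowed average can be sketched in plain Python. This is a toy stand-in for what a streaming pipeline would do with a time window; the class and its names are invented for illustration, not Dataflow/Beam code.

```python
from collections import deque

class WindowedAverage:
    """Average over the readings inside a fixed-length time window
    (a toy stand-in for a streaming window computation)."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.readings = deque()          # (timestamp, value) pairs

    def add(self, timestamp, value):
        self.readings.append((timestamp, value))
        # Evict readings that have fallen out of the window.
        while self.readings and self.readings[0][0] < timestamp - self.window:
            self.readings.popleft()

    def average(self):
        if not self.readings:
            return None
        return sum(v for _, v in self.readings) / len(self.readings)

w = WindowedAverage(window_seconds=60)
w.add(0, 10.0)
w.add(30, 20.0)
w.add(90, 30.0)     # the reading at t=0 is now outside the window
print(w.average())  # (20.0 + 30.0) / 2 = 25.0
```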
You can even use it to convert data formats. You have devices across different industrial sites, and most of the time those devices have different data formats. If you want to convert the data into a standard format before you store it, Dataflow is a great product for that as well.
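A minimal sketch of that kind of format normalization. The payload shapes and field names below are invented for illustration; real device payloads would differ.

```python
def normalize(raw):
    """Map each site's payload shape onto one standard record.
    Payload shapes and field names are invented for illustration."""
    if "temp_f" in raw:                      # site A reports Fahrenheit
        celsius = (raw["temp_f"] - 32) * 5 / 9
        return {"device": raw["id"], "temp_c": round(celsius, 2)}
    if "temperature" in raw:                 # site B already uses Celsius
        return {"device": raw["device_id"], "temp_c": raw["temperature"]}
    raise ValueError("unknown payload format: %r" % raw)

print(normalize({"id": "a-1", "temp_f": 212.0}))
# {'device': 'a-1', 'temp_c': 100.0}
print(normalize({"device_id": "b-9", "temperature": 37.5}))
# {'device': 'b-9', 'temp_c': 37.5}
```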
And in addition, Dataflow can also be used to get data from other services, such as weather data or traffic data as well.
And BigQuery allows you to store massive volumes of data and run near real-time analysis on that data as well.
In fact, you can run
a query on BigQuery,
which has got 100
billion rows of data,
and get a result within
a couple of seconds.
It's that fast.
And finally, once you
have stored all this data,
and have run some
basic analysis,
you can then apply
Cloud Machine Learning
to get some more interesting
and more sophisticated
insights out of it as well.
So the Industrial Revolution
is already upon us.
And what we have built so far is a set of Cloud services which can be applied to build really interesting applications.
Now what I'm going to do is invite our friends and partners to showcase some of the example demos they have built using our Cloud platform, which are helping to solve many of these industrial applications.
First I want to invite Willem, who is CEO of Odin Technologies.
And they've built really
interesting applications
for manufacturing.
Willem?
[APPLAUSE]
WILLEM SUNDBLAD: Thank you.
My name is Willem.
And as Indranil said,
I'm CEO and co-founder
of Odin Technologies.
How many here have heard the term Industry 4.0 before?
Oh.
Quite a lot.
Thanks for that introduction.
There was a great survey done of German manufacturers-- 1,000 of them--
to gauge the level
of know-how and excitement about
this new trend and buzzword.
About 50% of them said that
Industry 4.0 was actively
being planned and
discussed in their plants.
Around 80% said it
was going to have
a long and lasting
dramatic impact
on the industry as a whole.
About the same 80% said
it was going to give them
a competitive edge.
But only 27% said that they
felt like they knew what it was.
So half of the people are talking about it. Everyone thinks it's going to be awesome, especially for themselves. But very few actually feel like they know what it is.
In our opinion, Industry 4.0 and the industrial internet of things-- all it is, is about leveraging data and technology to make better decisions faster.
Because the problems facing manufacturers, their industrial challenges, and their business outcomes and goals haven't changed. They haven't changed in the past 10, 20, or 100 years.
They still want to make more
as efficiently as possible.
Use as little
energy as possible,
use as little material
or time as possible.
If you look at US
manufacturing as a whole,
it's easy to think of only the
top brand name manufacturers
when you think of manufacturing.
The Teslas of the world, the Rolls-Royces of the world.
But the US factories
had an output in 2015
of 6.2 trillion dollars.
That's 36% of GDP.
Most of those companies are
not Rolls-Royce or Tesla.
Most of them are material
processing companies
making components for
these larger manufacturers.
A normal car is built up of
30,000 components all coming
from a complex
network of suppliers.
And take the US
plastics industry
as an example, where
we've gotten our start,
and we have our biggest
focus right now.
16,000 factories
in the US, where
most of the data that they
collect and analyze and use
for continuous improvement
is done with pen and paper,
clipboards and timers, to try
to understand and optimize
their production.
So if they have a problem with
a rejection, a quality problem,
or they're trying
to optimize material
consumption of the settings, or
trying to understand a machine
failure--
When that problem happens, they send engineers out there to monitor it with pen and paper, take the data in, formulate a hypothesis, and try to do the root cause analysis.
So problem-solving can
take weeks to months.
I'm not saying that there is
no technology in manufacturing,
but the technology
that's out there
has been only accessible to
those 1% of manufacturers.
Hardware has always been built on programmable logic controllers, you know, stemming from the 70s. Not a lot of people are studying ladder logic anymore.
The design of
software usually looks
like it's from the 90s at best.
Sometimes from 80s,
more resembling MS-DOS.
Software is usually
hosted locally.
And it's always extremely
expensive and difficult
to implement.
It can take months to years.
But it doesn't have to
be that way anymore.
Through the ubiquity of networks, the drastically decreasing cost of electronics, and Cloud computing, coupled with the tremendous tools and power that are available now, superior industrial technology-- or superior technology in general-- is now available to manufacturers large and small, in all different verticals.
The technology that has helped
us in our consumer lives
to click more links and
buy more stuff on Amazon
can now be used to eliminate
waste in manufacturing.
That's our core mission.
We founded Odin to
empower manufacturers
with data and tools so that
they can eliminate their waste
in their production.
Decrease or eliminate
scrap rates.
Decrease or eliminate
machine failures
and unplanned downtime.
Optimize settings and
material consumption,
so they can make more with less.
And we do this by delivering a device built on Raspberry Pi that can communicate with most industrial machinery or external sensors, capture data in real time from the machines, and send it wirelessly to our Cloud-based analytics platform.
And there, our customers-- process engineers, quality engineers, industrial engineers, manufacturing engineers, or executives at these facilities-- are using the platform daily to solve quality problems and machine failures.
A process that normally
took weeks, or months,
now takes minutes.
And we do this completely
on Google Cloud.
So the millions and millions of data points that we take in per production line, per day, are ingested with Pub/Sub.
Our real time stream
processing framework
lives in Container Engine.
We're in the process of moving that to Dataflow right now.
As an example of the stuff that we can do there: we can take laser measurements of the output coming from the machine and calculate, in real time, the volume of that product. With the speed of production, we know the flow rate of material. We also know what density that plastic has, so we can know how many pounds of material they're using per foot, per hour, per RPM, per percent horsepower-- in real time and historically.
Their set points and their actuals mean that you know how much waste there is and where it's coming from.
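A rough sketch of that material-consumption arithmetic for a round extruded product. All numbers and the function name are invented for illustration; the real platform's calculation isn't shown in the talk.

```python
import math

def material_rate(diameter_in, line_speed_ft_per_hr, density_lb_per_in3):
    """Pounds of material used per hour for a round extruded product:
    cross-section area (in^2) x inches produced per hour x density."""
    area_in2 = math.pi * (diameter_in / 2) ** 2
    return area_in2 * line_speed_ft_per_hr * 12 * density_lb_per_in3

# Set point vs. actual gives the waste per hour (numbers invented).
target = material_rate(0.50, 3000, 0.034)   # what the line should use
actual = material_rate(0.52, 3000, 0.034)   # what it actually uses
waste_lb_per_hr = actual - target           # oversized product = wasted plastic
```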
Let's switch laptops.
So [INAUDIBLE] a
real time stream
of everything that's happening
out on the production line.
The purple one is the target
of where they should be.
The green one is the output
coming from the machines.
You have the yellow one, the melt pressure-- so, how the material is actually behaving as it's leaving the machine.
This can be put as an
HMI, as a human machine
interface, down on the
production line for people.
You can add whatever metrics you
want, original or calculated.
You get a high-level overview of your throughput utilization, across lines, across factories, for execs to understand more about how they're running.
You have complete traceability.
So you can search through your
production, based on product,
based on time, based on
events that have happened.
So if an engineer is trying to
solve a problem, what they do
is they search for
whatever product
they were running, whatever
time frame it happened,
or for the event itself.
They can then open it up.
I was slow when I
was recording this.
And they can open it up here. Problem-solving in manufacturing is like peeling the layers of an onion.
You don't really
know where to start.
But here they start with the
most important process variable
for them, the output diameter.
This is a cable
extrusion example.
They looked at the melt
pressure of the plastic.
They look at the melt
temperature of the plastic.
They look at the nominal to
see where they actually were.
You can zoom down to a second
level granularity on the data.
So here they've identified
a certain problem
with the fluctuation
of the cable.
The diameter was
out of tolerance,
which causes return loss
in the cable, which means
that the customer can't use it.
So you can then, A,
identify exactly where
that happened, so you can
cut it out and sell the rest.
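A toy version of that idea, locating the out-of-tolerance stretch so it can be cut out, might look like the following. The function, sample data, and tolerances are all invented for illustration.

```python
def out_of_tolerance_runs(samples, nominal, tol):
    """Find contiguous runs of samples where |value - nominal| > tol,
    returned as (start_index, end_index) pairs, so the bad stretch
    of cable can be located and cut out."""
    runs, start = [], None
    for i, v in enumerate(samples):
        bad = abs(v - nominal) > tol
        if bad and start is None:
            start = i                       # a bad run begins
        elif not bad and start is not None:
            runs.append((start, i - 1))     # the run just ended
            start = None
    if start is not None:
        runs.append((start, len(samples) - 1))
    return runs

# One diameter sample per second; nominal 10.0 mm, tolerance +/- 0.2 mm.
diameters = [10.0, 10.1, 10.5, 10.6, 10.1, 9.7, 10.0]
print(out_of_tolerance_runs(diameters, nominal=10.0, tol=0.2))
# [(2, 3), (5, 5)]
```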
You can also understand
why it happens.
So in this case,
it looked like it
was the motor load that all of
a sudden changed the pressure.
It changed the amount
of plastic that ended up
going out on that cable.
Once you've identified something, you can also select certain parts of it, either for statistical processing or to label it.
Labeling allows you to find it again, share and collaborate, but also to train up machine learning models, so that different states of your production can be classified automatically, and different failures can be predicted in the future.
I think that's it.
I think we can switch over
to the presentation again.
So in the same way that we build tools and capabilities that allow our customers to run their factories more efficiently and build better products, Google Cloud Platform does that for us.
We can move a lot faster
and build better products
for our customers by leveraging
the tools that Google Cloud
Platform provides.
We also know that we can scale comfortably throughout this year, going to tens of thousands of machines and thousands of factories, without having to worry about scaling up Bigtable or our data pipeline itself.
And that's it for me.
Thank you very much.
INDRANIL CHAKRABORTY:
Thank you, Willem.
So you saw how Willem and his company are using Google Cloud to really solve a critical problem in manufacturing, where they can now have a view of exactly how their machines are operating.
And they can take more
predictive action,
proactive action, rather
than waiting for days
and using clipboards
to figure out
where things are going wrong.
The other area where we see a lot of opportunity, in terms of where Cloud and IoT can really help, is in asset-tracking.
And we've been
working very closely
with our friends and
partners at Intel
to build some interesting applications and demos to share with you guys. So let me invite Giby Raphael from Intel, who's going to talk about asset-tracking.
Giby?
[APPLAUSE]
GIBY RAPHAEL: Good afternoon.
So I run the largest
segment at Intel.
Let me begin the presentation with three stories. Let's go with story one.
So the first story comes from
our own backyard, from Intel.
Intel has the world's fourth largest supply chain.
We ship more than a
billion products a year
across hundreds of countries
and thousands of places.
About 30% to 40% of that
is fabrication equipment.
And we have problems.
We ship this box-sized equipment, and this is 2016-- we still use ink sensors, analog ink sensors, to sense tilts. These boxes cannot be tilted. These boxes have a vibration threshold, and the equipment loses calibration if it exceeds the threshold. So we stick on a tilt sensor with two compartments: if the ink leaks from one compartment to the other, we know it tilted. But we don't know how many times it tilted, where it tilted, or how it tilted, so that we can use that information next time.
Even worse, since a person is involved in looking at it and reporting it, oftentimes it doesn't get reported.
We have had stories
where we ship these boxes
across international
borders, across countries,
and we have to bring it
back for calibration.
Obviously, Intel can
afford shipping costs
to bring the box back.
But what we cannot afford
is if the fab gets shut down
for a day or two
because of this,
that's millions or tens of
millions of dollars in loss
for us.
The second story comes
from a flower retailer
in New Hampshire.
So last year during
Valentine's Day,
he got a truckload of
flowers from Florida.
The flowers got spoiled
around Texas, midway.
Nobody knew about it.
The night before
Valentine's Day,
he got a truckload
of rotten flowers.
$50,000 written off.
It's probably no big deal.
It's a big retailer.
But what he couldn't afford
is the loss of reputation.
Right?
The single most
important day when
you're supposed to sell
flowers, he ran out of flowers.
Those customers are
not coming back.
The third story comes-- and hits
a little closer to the heart.
The 2014 Ebola epidemic
in West Africa--
a plane load of
vaccines got spoiled.
Nobody noticed it.
And by the time they
realized it, it was too late.
There was no time for plan
B or a contingency plan.
People died.
So these are not isolated incidents; this happens every day across the world. 70% of all companies in the world experienced a supply-chain disruption over the last 12 months, and 20% of them went out of business because of that.
Last year, approximately
76 billion packages
were shipped across the world.
And 30%-- that's more
than 20 billion packages--
were damaged, delayed, or lost.
$60 billion of cargo value
was stolen last year.
On average-- this is mind-blowing-- every single day in the US, two cargo thefts happen. And the average value of the theft is in excess of $200,000.
I grew up in India. 65% of fruits and vegetables in India don't make it out of the farm, right? At that magnitude, we're looking at making a dent in world hunger.
We believe that using real-time technology-- real-time tracking of the location, the condition, and the security at the package level-- can solve this, can bring it down.
But why hasn't it
been done before?
Why is nobody doing it?
Three reasons.
Cost, cost, cost.
Right?
Once I put decent compute, memory, and a modem in a box, it now costs me $100 to $400. I cannot put that on every package. It's too expensive.
But, unfortunately,
all these problems
happen at the package level.
Theft happens at
the package level.
We send armed guards
in certain countries
when we ship our
server processors.
People run away
with the package.
Each package is $100,000.
Tampering happens at
the package level.
Tilts, vibration has to be
sensed at the package level.
That's just the Capex cost.
And then there's the Opex cost.
Shipping is one way,
point A to point B.
But these things have to
be brought back, right?
So you need to close
the loop there.
And especially if it's
across international borders,
40% of the time it's
not coming back.
Even when it comes back,
you have breakage, delay,
you have to pay
for the shipping.
It's a mess.
That's when we looked
into IoT and Cloud.
Right?
How can we resolve it?
So we invented a platform with the vision of bringing down the unit cost of tracking to sub-ten dollars to begin with. And, obviously, as with everything else, the price drops with scale.
Let me highlight three
features of this platform.
After that, Josh will come and
demonstrate our prototype here.
Number one, we looked at the package level: what do you need to sense there? So we need basic compute at the package level-- MCU-level compute-- and sensors.
Then we looked at,
hey, in order to take
the data from the package to
the Cloud, what do you need?
I cannot put a long-range radio or a modem in there-- it's too expensive. So can we put a short-range radio in there? But, by definition, the problem is that it's short range.
So we invented what is probably one of the lowest-power mesh networks out there, with multi-hop. That will now solve the range problem to a large extent.
It also solves the
density problem,
where I can now have 1,000
sensors talking to one modem
to upload the data.
And finally, taking a page from the cell phone analogy, we have software IP wherein all the sensors can move from one modem to the other, which basically means
I can have fixed modems, fixed
gateways in trucks, in planes,
in trains, in
warehouses, in airports.
And once I build
this infrastructure,
I'm only paying for the
sensor at the package level.
Josh will now demonstrate
a prototype we have here.
JOSH: So if we go to
the slide here-- so just
to provide you with a
little bit of context
around what Giby
was talking about,
and what we'll be demoing
today, this slide here
shows you, on the left-hand
side, a pallet of goods.
So in this demo
we'll be showing you,
on the far side over there,
you can see our mobile gateway.
These are prototypes.
So these are meant
to be big and bulky,
because they're prototypes.
And I'll show you a little bit
of what the production form
factor will look like
here in a minute.
On the left-hand side, you
can see a pallet of goods.
So we have a mobile gateway that
would be placed on a pallet.
And then we have a whole
bunch of boxes on that pallet.
Each one of those boxes would be instrumented with a sensor.
Giby talked a little
bit about this concept
of having a fixed gateway
and a mobile gateway.
What we're going to
be demonstrating today
is the mobile gateway.
So there, in the center
of the picture, what
we're going to show
live onstage today
is these prototype
sensors, which have
a 3-axis accelerometer in them.
And they also have a
temperature sensor.
So with that 3-axis
accelerometer,
we can tell things like
tilt, and we can tell things
like shock or vibration.
And depending on the algorithm, you could determine the difference between a fall and a collision.
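A simple sketch of how tilt and shock might be derived from 3-axis accelerometer samples. The thresholds and the classification rule here are invented for illustration; the real platform's algorithm isn't shown in the talk.

```python
import math

def classify_sample(ax, ay, az, tilt_limit_deg=30.0, shock_g=2.0):
    """Classify one 3-axis accelerometer sample (in g) as a tilt,
    a shock, or normal. Thresholds are invented for illustration."""
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)
    if magnitude > shock_g:
        return "shock"                      # sudden impact or fall
    # At rest the only acceleration is gravity; the angle between
    # the z axis and the 1 g vector gives the tilt of the package.
    tilt_deg = math.degrees(math.acos(max(-1.0, min(1.0, az / magnitude))))
    if tilt_deg > tilt_limit_deg:
        return "tilt"
    return "normal"

print(classify_sample(0.0, 0.0, 1.0))   # upright at rest -> "normal"
print(classify_sample(0.7, 0.0, 0.7))   # leaning ~45 degrees -> "tilt"
print(classify_sample(0.0, 2.5, 1.0))   # hard impact -> "shock"
```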
So in this architecture,
we're using GCP.
So we are publishing the data
to the Cloud using Pub/Sub.
You'll see the UI
here in a minute.
You'll see we're storing aggregate data in Cloud SQL; we're persisting it there to look at aggregate historical data.
And then, I'll show
you a real time example
as well, where we're going
to use Firebase to broadcast
the data out to the UI,
where we're consuming it
in real time.
So if you want to go ahead
and switch on over to the UI.
So we're seeing here this
is the dashboard page.
So what this is
meant to represent
is a high level
overview of the data.
So this is looking at a map
of a shipment, a real shipment
that we took from
Chandler, Arizona
out into the middle
of the desert.
On the left-hand
side there, you can
see a history of a temperature
exception that occurred.
So when we got out into
the middle of the desert,
we set a threshold
on these sensors
that said, if it drops
below a certain temperature,
let us know.
And you can see,
in this example,
it dropped down to
21 degrees Celsius.
But you get the geolocation
where that occurred,
and you get the time
that it happened as well.
Down below, you can
see a historical trend
of temperature data.
So this is real
temperature data coming off
this real array of sensors
over the last few days,
as we've been demoing
out in the demo booths.
So that's showing
you an aggregate view
of all the temperature for
all the sensors on average.
Below that, we have
some [? ardly ?] capable
slides or charts.
So this is a 4-up chart that shows you min/max temperature over the life of a shipment for each one of the sensors.
So you can see what
that looks like.
On the right is a tilt occurrence, so you can understand which axis the package was tilted on,
so that you could do something
with that information,
like issue a reorder
if it was tilted,
and it shouldn't have been.
Down below, you
can see exceptions,
kind of a view of
all exceptions that
have happened with
this shipment,
as well as that
same kind of data,
but as a stacked bar chart
for each individual sensor.
So you can see if you have
a particular package that's
been getting a lot of
temperature exceptions or tilt
exceptions.
You can see most of those exceptions are temperature exceptions, because we've been running this for real out in the demo booth.
And we've been
putting these sensors
in the cooler a whole bunch,
showing it off to people.
So now we're going to
jump into the real time
portion of the demo.
On the previous page, we were using Cloud SQL to persist all that data and consume it on the UI. Here we're using Firebase to actually push the data out to the UI.
So you can see that
these cards are
helping you visualize what's
going on with your packages.
So each one of these cards
represents a package, right?
It's a sensor, but the
sensor represents a package.
And you can see
one of them is red.
It's red because we
have a sensor that we
put in the beginning of
this demo into the cooler,
and it dropped down below
the preset threshold
that we had set
for that package.
That we cared about.
So when that happens, you can
see it here on the screen.
Or more likely what you would
do in a real world scenario
is you'd use an API
to push that data
and go take some sort
of action on that.
You can also see any tilt occurrences that occur.
So you can see a few of those
are having tilt occurrences
right now.
So we have this
concept of a microframe
and a macroframe in
this architecture.
So the microframe is for demo
purposes set up at 20 seconds.
So what that means
is every 20 seconds--
if there's any sort of
exception that occurs,
something that's outside of the
control limits that you set--
we're going to broadcast
that data up to the Cloud.
In this case, using Pub/Sub.
And that allows you to
do something with it.
A macroframe for demo
purposes is set at one minute.
So what that means is
if everything is OK--
if everything is within
tolerance-- once a minute we're
going to send that data up
to the Cloud on a one minute
basis.
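The microframe/macroframe policy described above can be sketched as follows. This is a toy version; the function name and signature are invented, and the real platform's logic isn't shown in the talk.

```python
def should_transmit(now, last_sent, value, low, high,
                    microframe_s=20, macroframe_s=60):
    """Decide whether a sensor reading is uploaded at this moment:
    out-of-limits readings go out on the fast microframe cadence,
    in-tolerance readings only on the slower macroframe heartbeat."""
    out_of_limits = not (low <= value <= high)
    interval = microframe_s if out_of_limits else macroframe_s
    return (now - last_sent) >= interval

# Temperature within a 2-8 C tolerance: wait for the 60 s heartbeat.
print(should_transmit(now=30, last_sent=0, value=5.0, low=2, high=8))  # False
print(should_transmit(now=60, last_sent=0, value=5.0, low=2, high=8))  # True
# Below tolerance: report on the 20 s microframe instead.
print(should_transmit(now=30, last_sent=0, value=1.0, low=2, high=8))  # True
```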
So if you go ahead and click
on the Give Me the Deeds
button for the sensor that's
below temperature threshold,
you can see here where we've put
it in the cooler, taken it out
of the cooler.
Put it in the cooler,
and taken it out.
This is the same kind of
data that I showed you
on the dashboard.
But this is for an
individual sensor.
So now we're looking at an
individual sensor's data.
So you can see all
the exceptions that
have occurred with that sensor.
And you can see the
majority of those
are temperature exceptions.
And then below that, you can
see a historical trend of what's
gone on with this sensor.
So that's temperature
trend over time.
And then below that,
you can see that we
have all the historical
data stored, which
is being served up on the page.
And the data is being
stored again in Cloud SQL.
If there's any exception that
occurs to these, while they're
live here on stage, you
would see that here as well.
They would pop up to
the top of the chart,
and you would see those
here, in real time,
as they happen on that
every 20 second interval.
So that's basically the demo of
the Intel Connected Logistics
platform.
And I think I'm going to hand it back to you, Giby, so you can kind of wrap us up.
GIBY RAPHAEL: Thank you, Josh.
Do we have to switch computers?
OK, good.
So where we are going with this is what I call the visibility cube: real-time mitigation; trend analytics, so you can learn from your mistakes; and the Holy Grail, which is predictive analytics, right?
If a bad thing happens,
it's good learning.
But if you can prevent
it from happening,
that's a whole different level.
We want to create the glass pipeline, as they call it, for end-to-end visibility.
And as the picture here shows, a world of autonomous freight-- in our view, that's where this is heading-- where the supply is automatically filled in by the demand.
And we believe that supply chain
is a trillion dollar market.
There is at least $2
trillion of efficiency
to be gained in
supply chain today.
And it's ramping up to a $10 trillion market by 2020.
Thank you.
INDRANIL CHAKRABORTY:
Thank you, Giby and Josh
for this fantastic demo.
So as you can see, using sensor technology and Cloud, you can not only track in real time the tilts and the other changes which are happening to your assets, but you can also plot a complete historical view of it.
And that can be used for a
lot of predictive analysis
as well moving forward.
So you've seen manufacturing--
how that can be transformed.
You've seen how
asset-tracking and logistics
can be transformed.
At home, we've been getting
used to Nest thermostat,
and how you can have
smart meters, which
can really, automatically
adjust based on your need
and based on your
power consumption
so that you can
optimize it over time.
But there's a huge
industry out there
around utilities, which
is still struggling,
and in need to benefit
from the Cloud technology
and overall data
analysis as well.
We have a partner
with us, Energy Works,
who has been
working very closely
and working hard with us to
help improve the utilities,
to understand and get more
insights from their data,
and to help their overall
efficiency as well.
So let me invite Edwin
Poot, CEO of Energy Works,
to show you some
demo on utilities.
Edwin?
EDWIN POOT: Thanks, Indranil.
Great to be here and great to
talk to you about Energy Works.
I was wondering here.
Are there any people
here in the audience
from the energy industry?
No?
Oh, a few.
OK.
Well, everyone in
their daily lives
is affected by what I'm
going to explain to you.
So the energy industry
is in a transition.
And instead of looking only
at the grid and the sensors
and the smart meters that
we are ingesting data from,
there are other
data sources that
are coming into that
same infrastructure
that the utilities currently
have, mostly on-premises.
If you look at this
slide, for example,
you see spatial data,
open data, weather data.
Most of the time, only smart
meter data is being used.
For example, correlation with
weather would be very simple.
Some of them are doing it,
but most of them are not.
It's not only about
looking backwards,
it's also about looking
forward, which you can do with
this data in the here and now.
So it's more about streaming.
So most of the utility
customers we are working with
are currently in
the process of going
from [INAUDIBLE]
to streaming data.
So you're probably familiar
with the smart meter.
Indranil already mentioned it.
The smart meter itself
reports at around 15-minute
or five-minute intervals.
So this is not
that bad to ingest.
But we are working
with customers--
for example, on the
smart grid sensor.
It's where we work with
a microstate interface
for sensors.
They produce up to
512 values per cycle,
at maybe 30 cycles per second,
so data is being ingested
all the time.
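To get a feel for why conventional systems struggle with this, a bit of back-of-the-envelope arithmetic using the figures just quoted (512 values per cycle, roughly 30 cycles per second; the 4-byte sample size is an assumption):

```python
# Rough ingest-rate arithmetic for a single grid sensor of the kind
# described above. 512 values/cycle and ~30 cycles/s are the talk's
# figures; 4 bytes per value is an assumed sample size.
values_per_cycle = 512
cycles_per_second = 30
bytes_per_value = 4

values_per_second = values_per_cycle * cycles_per_second
bytes_per_day = values_per_second * bytes_per_value * 86400

print(values_per_second)    # values per second, per sensor
print(bytes_per_day / 1e9)  # gigabytes per day, per sensor
```

That is over 15,000 values per second and several gigabytes per day from one sensor, compared to 96 readings per day from a 15-minute smart meter.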
So we want to discover
information from that data
while it's streaming in.
So conventional
utility systems are not
able to keep up with
this data diversity.
So it's not only about
the sensory data itself,
but how can you add context
and arrange the data itself
that's coming from the sensor.
Or by using data, for example,
from the asset management
system, data from other sensors
nearby, or maybe social data.
Even weather data.
Sociological,
demographical information.
Anything that's
available that can
enrich the context
of your device,
or of your time series,
of the data sets,
would increase the value.
So utilities are
facing a change.
Maybe I mentioned it before,
the energy transformation.
It means they have to change
from a commodity-driven
approach-- business model--
to a more data-driven
business model.
So it means they have to
research other possibilities
of how to work with data.
Imagine, top of mind,
energy will be free.
So you don't pay for your
kilowatt hours anymore.
How are they going to
build a business model?
So that's the key.
That's what they're
struggling with currently.
So that's why Energy
Works made it our mission
to enable this energy evolution
by uncovering and monetizing
this hidden value in the data.
And we have a process
in place to handle this.
So it starts with the
ingestion of data.
Because we want to control
the entire process,
from ingestion all the
way up to monetization.
Because data quality is
most of the time very poor,
so poor that you cannot work
with certain business cases
or use-cases, and the
product cannot become viable
because of that.
So ingestion of data from
sensors, from actuators,
from smart meters is either
pushed towards our platform,
or we read it in by
connecting to those devices,
either via our IoT
gateways or head-end systems,
or by directly connecting to
those devices-- like, for
example, solar inverters, or
even charging points, or smart
meters directly.
And then we crunch the data.
We add value to the data.
We try to correlate the data.
We try to discover
patterns in this data.
So we can learn from this data.
So one of the things we're
going to show you today
is the interactive mode of our
platform, where we can actually
explore the data,
work with the data,
see what's happening
by zooming in,
interactively,
looking at the data.
Having said this, we
can also run the same thing
at cloud scale.
So once you have found
that perfect model,
that perfect algorithm--
maybe your data scientist
is working on this--
then with one
click of a button,
you can run this at cloud scale
across millions of devices,
while data is
streaming in.
And finally, of course,
you can monetize
this perfect application
or algorithm that you
want to launch in this market.
Before I go into
that, first I want
to explain to you the
energy value chain.
So, starting from generation
all the way up to supply,
Energy Works facilitates
this entire process
by enabling our
data intelligence
across the entire value chain.
So for example, on
the generation side,
we work with providers that
provide renewable energy.
So they have to
manage that energy.
They want to see
what's being generated.
What's happening
with the generation?
Et cetera.
They want to predict it.
On the transmission side
and the distribution side,
grid and balances.
Congestion
management, et cetera.
There's a lot of data
coming there as well.
Some of the data is not real
time-- it may be meter-read
oriented, or maybe even a week old.
But some of the data
can be streamed in live.
Then we're going to talk
about energy deal analytics
use-case today, which
we're going to demonstrate.
So let me skip that part.
And then last but not
least, the supply side,
which involves
consumer engagement,
any kind of applications,
and time of use rates.
Energy Works is built on top
of Google Cloud Platform.
Energy Works believes in the
server-less architecture,
which means we don't want to
work with servers directly
ourselves.
And of course, in
the end, our software
runs on servers
somewhere in the Cloud,
but we don't want to
be bothered with it.
So that's why we don't
use, for example, Compute
Engine or Container Engine.
Everything is running on
platform as a service.
So on the collect
side, we use App Engine
for our REST-based API.
If customers want to integrate
directly with our platform.
And sometimes, we
also use Cloud Pub/Sub,
especially for
utilities that want
to have a streaming
possibility
for streaming in data.
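A minimal sketch of what publishing a meter reading to such a Pub/Sub topic might look like. The message field names here are assumptions, not Energy Works' actual schema, and the real client call is shown only in a comment:

```python
import json

def make_reading_message(meter_id, ts, kwh):
    """Encode one smart-meter reading as a Pub/Sub message payload.
    Field names (meter_id, ts, kwh) are illustrative assumptions."""
    payload = {"meter_id": meter_id, "ts": ts, "kwh": kwh}
    return json.dumps(payload).encode("utf-8")

# With the real client library this would be roughly:
#   from google.cloud import pubsub_v1
#   publisher = pubsub_v1.PublisherClient()
#   topic_path = publisher.topic_path("my-project", "meter-readings")
#   publisher.publish(topic_path, make_reading_message("m-42", 1700000000, 1.25))
msg = make_reading_message("m-42", 1700000000, 1.25)
print(msg)
```

Pub/Sub carries opaque bytes, so a compact JSON (or Avro/protobuf) encoding like this is a common choice for streaming meter data in.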
So the second step,
the processing side--
we use Dataflow
very intensively.
We use it for
ingestion of data, we
use it for cleansing of the
data, manipulating the data,
checking or applying
validation rules, et cetera.
We use Dataproc as well for
short and small analytics.
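The per-record validation logic such a Dataflow job applies can be sketched in plain Python. The thresholds and field names below are made up for illustration; in a real Beam pipeline the same function body would sit inside a ParDo:

```python
def validate(readings, max_kwh=100.0):
    """Split readings into valid and flagged ones using simple
    validation rules (thresholds here are illustrative only).
    A Dataflow/Beam job would apply the same per-record logic
    inside a ParDo, at cloud scale, on the streaming data."""
    good, flagged = [], []
    for r in readings:
        if r["kwh"] is None or r["kwh"] < 0 or r["kwh"] > max_kwh:
            flagged.append(r)   # gap, negative value, or spike
        else:
            good.append(r)
    return good, flagged

good, flagged = validate([
    {"meter_id": "m-1", "kwh": 1.2},
    {"meter_id": "m-1", "kwh": None},   # gap in the data
    {"meter_id": "m-1", "kwh": 950.0},  # implausible spike
])
print(len(good), len(flagged))
```

The flagged records are the ones that later feed the estimation step, where gaps are filled before a forecast is generated.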
Then on the storage
side, you see
a lot of different cloud
storage platforms that we use.
You might wonder, why is this?
So we believe that data,
with certain characteristics,
should be stored in
a data store that
is best suitable for
those kind of data sets.
For example, relational data
you shouldn't store in Bigtable.
You could do it in
Datastore, maybe,
if you split it
up into key values.
But credibility of
data is very important.
So we store time series
data in Bigtable.
We store relational information
in Cloud SQL, which we
are upgrading to Cloud Spanner.
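Storing time series in Bigtable works well because rows are kept sorted by key, so a meter's history becomes a cheap prefix scan. A common key layout is sketched below; this is a general Bigtable pattern, not Energy Works' actual schema:

```python
# Bigtable keeps rows sorted lexicographically by row key, so encoding
# "<meter_id>#<zero-padded timestamp>" makes one meter's time series a
# contiguous prefix scan. (Illustrative layout, not the real schema.)
def row_key(meter_id, ts):
    return f"{meter_id}#{ts:012d}"  # zero-pad so lexical order == time order

keys = sorted(row_key("m-42", t)
              for t in (1700000300, 1700000000, 1700000600))
print(keys[0])  # the meter's oldest reading sorts first under this layout
```

For write-heavy workloads the timestamp is sometimes reversed or salted to avoid hotspotting a single tablet; which variant fits depends on the query pattern.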
We use Cloud Storage as an
intermediate staging store
as well.
And for example, if you
are streaming in data,
and you want to apply
logic on data itself,
it's very important
to realize which
data store you're going to use.
Not to mention the
cost, of course,
if you use certain products.
For example, with Datastore,
you pay per operation.
With Bigtable, you pay
for the nodes you use
and the amount of
concurrent connections.
Then last, but not
least-- the analyze side--
we use App Engine again
to expose this information
to the outside world.
So customers can authenticate
against our API, which we
normally integrate tightly
with their existing platforms.
We use Cloud ML for
analyzing the data itself.
We have some sophisticated
TensorFlow models--
for example, for
anomaly detection.
And we use BigQuery for
certain reporting capabilities.
So my colleague, Erik, is going
to step on stage in a minute.
He's going to explain
some of the use-cases
we have been doing.
And one of the use-cases
we've been doing
is a risk management
case for a Fortune 500
retailer in the US, based in Texas.
We automated their-- this is
critical-- process for pricing
and forecasting, so they could
get better insight into risk
and risk reduction.
So risks that they
sometimes run into
could be 100 million
or 200 million.
So they serve, like, a
hundred thousand, or maybe
a million customers.
C&I-- commercial and industrial--
but also residential customers.
And they are buying
energy from the market.
Short-term, mid-term, long-term.
Could be hundreds of
millions of dollars.
So we provided them
better insights
with pluggable validation,
estimation, and editing
capabilities.
That's a specific energy-industry
term, by the way.
But also with benchmarking
and scoring capabilities.
So during the entire
process, we are
able to see what the quality
is of the data itself,
with some scoring indexes
and scoring figures.
So we could provide an
accurate energy demand and load
forecast.
We were able to reduce their
process from eight hours,
or sometimes even longer,
to minutes or even seconds,
from six solutions to
one centralized view.
So, for example, the load
analysts [? system ?]
that they're working with
there, with 40 people doing
10 customers a month--
they could suddenly
do more than 1,000 a month.
So from inaccurate data to
measurable data quality,
with a bottom line impact
of $50 million plus.
And by the way, the
screenshots you see here
are of the business application.
And my colleague, Erik Van
Wijk-- he's going to go
show the demo in a minute.
He's going to show
the interactive view
of this business application.
So I'll introduce my
colleague, Erik Van Wijk.
He's going to do the demo.
ERIK VAN WIJK: OK,
thank you, Edwin.
Hi, my name is
Erik, and I'm going
to show you a little
demonstration about Energy
Works Deal Analytics.
But before I do
that, I will first
give you a little bit of
context about Energy Works Deal
Analytics.
Energy Works Deal Analytics is
used in the pre-deal process
of a utility.
And the pre-deal
process is inspecting
data from industrial
smart meters
with a resolution varying
from five minutes to an hour.
And, while inspecting
that data, they
often need to improve
the data quality.
So they need to apply validation
rules on these data sets coming
from the smart meters.
And they have to normalize it.
And then, if the validation
detects that certain KPIs are
not met, then we
can automatically
estimate it and fill
gaps and things like that.
And in the end, we
aggregate all the data,
and then the final result will
be generating a forecast based
on the improved quality data.
And a better forecast
means less risk
for the utility and a better
proposition to their customer.
And like Edwin explained,
the commercial offering--
we brought it back from
several days to minutes.
And the nice thing,
we completely
automated this deal analytics
process for the utility
pre-deal process.
What this looks like
on a high-level view,
in our Google Cloud
ecosystem,
is completely server-less.
It's all based on
Pub/Sub and Dataflow
and different storage types,
like Edwin explained before.
So data from smart meters
is published on topics.
And Dataflow processes
start validating
all that data, estimating
wherever required,
and generating forecasts.
And one of the validation
rules that we use
is making use of a
TensorFlow model.
We're experimenting with that.
We created a TensorFlow model
to detect specific outliers
in the consumption data
coming from the smart meters.
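Their detector is a TensorFlow model; as a stand-in that shows the shape of the task, here is a simple z-score flagger over a consumption series containing one engine-breakdown dip (a deliberately crude substitute, not their actual method):

```python
import statistics

def flag_outliers(series, z=3.0):
    """Flag points far from the series mean.
    A simple z-score rule, used here only as a stand-in for the
    TensorFlow outlier model described in the talk."""
    mu = statistics.mean(series)
    sd = statistics.pstdev(series) or 1.0  # avoid division by zero
    return [abs(x - mu) / sd > z for x in series]

# Synthetic consumption readings with one breakdown dip in the middle:
data = [5.0] * 20 + [0.1] + [5.0] * 20
flags = flag_outliers(data, z=3.0)
print(flags.index(True))  # position of the flagged dip
```

A learned model earns its keep over a rule like this when "normal" varies by hour, weekday, and season, which is exactly the situation with meter data.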
And in the demo that I'm going
to show right now,
I'm going to zoom in a little
bit on how we flag the data.
So raw data is coming in--
it's poor quality-- and
I'm going to show you
how we flag the data,
and how a data analyst
in this pre-deal process
can drill down and inspect the
data and play around with it.
So please switch
to the demo screen.
OK.
I have to start
running it again.
And what we did
for the utilities,
we have a server-less
architecture.
And the good thing is that all
our utility customers are all
running on the same codebase.
And that's also a really
big advantage for us,
that we can make use of
GCP [? plots ?] from tools.
We don't have to worry
about scale at all.
We can connect a thousand
utilities if we want,
and the ecosystem
will scale as it goes.
And in the meantime, it's
now rendering the data.
We're zooming in on the data
of a particular smart meter.
What we have here is the
raw data as it is coming in,
as it is published on the topic.
And if I zoom in
on this data, you
see that it contains
all kinds of gaps,
and it's really looking
like bad quality.
There's a lot of gaps.
And this data, in this
interactive mode--
we can run all the
validation rules
like we run on Cloud Scale.
We can run it in
this interactive tool
that we created for the data
analysts of the utilities.
And we can enable particular
outlier detections
that we created in
our TensorFlow model.
And this is a live demo.
A problem.
I'm running it again.
So what it's doing now,
it's reading the data again.
It's applying all
the validation rules,
and it's estimating where
the KPIs are not met.
And then, it will
visualize the results.
And we see it here.
OK.
The TensorFlow
model detected outliers
in this data flow.
And I can drill down on
this particular analysis.
I can zoom in on this data.
And this is an outlier.
This is caused, probably, by
an industrial engine that
broke down for several
days and didn't
consume the energy that it
would in normal operation.
And to be able to
create a forecast
based on this smart meter data,
we want to get rid of all
the outliers, because
they're going to influence
the forecast, and then we cannot
create the right commercial
offer for the utility customer.
So what an analyst can do now--
it's detected this outlier.
And we can now
estimate the value
as if this engine
never broke down.
So we are going to use
a like-day estimation.
So it's searching for similar
days for the consumption
in this particular meter.
And now, the orange line
is reflecting the situation
like this engine
never broke down.
So this dip is reduced,
and the gaps are filled.
We have a linear estimation to
fill out missing data, to fill
out the gaps in the data.
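The linear-estimation step just mentioned can be sketched in a few lines: walk the series, and when a run of missing values is found, interpolate between the last known value before it and the first known value after it. A minimal version, assuming gaps are interior to the series:

```python
def fill_gaps_linear(series):
    """Fill runs of None with linearly interpolated values.
    Minimal sketch of the linear estimation described in the talk;
    assumes every gap has known values on both sides."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while out[j] is None:  # find the end of the gap
                j += 1
            lo, hi = out[i - 1], out[j]
            for k in range(i, j):  # interpolate across the gap
                out[k] = lo + (hi - lo) * (k - i + 1) / (j - i + 1)
            i = j
        i += 1
    return out

print(fill_gaps_linear([2.0, None, None, 5.0]))  # [2.0, 3.0, 4.0, 5.0]
```

Like-day estimation replaces the interpolation step with values copied from a similar day's profile for the same meter, which handles longer gaps where a straight line would flatten out the daily shape.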
And in the end, we can show a
merge of all the estimations
that we have done for
this particular data set.
And if I zoom out, and
only show the merged line,
then this will be
the end result--
the improved quality of the
raw data that was fed in.
And this can be used to
generate a forecast that can be
used for commercial products.
The analysts can also,
on the right side,
view the KPIs that belong to
this estimation and these
validation rules.
So they can tweak around with
different validation rules,
and try different estimation
methods, to make sure
that the KPIs are met
and the right forecast
can be generated.
So basically, this is what
I wanted to show you guys.
If you want more
information, just speak out.
You'll recognize us on
our Energy Works t-shirts.
Ask us questions,
if you run into us.
I guess we're back to Indranil.
[APPLAUSE]
INDRANIL CHAKRABORTY:
Thank you, Erik.
And thank you, Edwin.
This was fresh out of the oven.
So what you saw
today was how we use
IoT and Cloud to monitor real
time data in the factories,
and to use that to solve
a lot of the problems that
can be done in a proactive way.
You also saw how IoT and
sensors, and our Cloud Platform
can be used for logistics
and asset-tracking as well.
And as Giby mentioned,
there is an opportunity
to save around $2
trillion or so.
And finally, you saw
how, using smart meter data
and utilities data,
you can really help the
utilities get much more
efficient over time.
So we think we are
just getting started.
We think that industrial IoT
is at the point of inflection.
And with IoT and
Cloud, we can really
transform multiple industries
over the next couple of years.
So we're very excited,
and we want to thank you
for joining us at this session.
Thanks a lot.
