[MUSIC PLAYING]
DAN AHARON: Dialogflow is a tool
to help automate conversations.
Typically, it can be used for
three main groups of use cases.
It could be businesses that
want to talk to their customers
and automate
those conversations.
It could be around customer
service or commerce.
The second is powering
connected devices.
So this could be connected
cars, connected TVs,
and being able to intelligently
talk to the people that
want to use those devices.
And the third is connecting
employees to their employers
and accessing information,
business intelligence.
We're going to focus most
of today on the first set,
but a lot of those tools
are going to be available
and are going to be useful
for the second and third group
of use cases, as well.
So, you know, the space
is super exciting.
We just chose, like,
four different statistics
to highlight here.
There is, of course, much more.
80% of customer interactions
could be resolved
with well-designed bots.
That's mind-boggling
if it actually plays out.
My favorite are the
two on the bottom.
So if you look at the bottom
left, a lot of businesses
today come to start using
bots to reduce costs
so they can serve users
and their customers
more efficiently.
But, actually, in surveys,
the interesting thing
is customers actually
prefer self-serve tools when
they're asked what companies can
do to improve customer service.
60% say that self-serve tools
are the biggest thing
that they can do.
And so there's, like, all
of these customers that
are kind of clamoring for
better self-serve tools,
but they're forced
to be, you know,
talking to agents
for simple tasks
that don't really
justify agents.
And that last one--
50% of enterprises
will spend more
on bots than traditional
mobile development.
That's also mind-boggling if it
actually becomes true by 2021.
So with all of that
excitement, the kind of problem
is, today, most of
the virtual agents
that are out there
are not very good.
If you Google "why chat
bots," "what chat bots fail"
is the top result
that shows up, right?
It's pretty crazy.
And why is that?
So here on the left
you can see an example.
There's all of these
bots that were built kind
of at the peak of the hype.
A lot of companies
didn't really invest
in building them out properly.
They're not very intelligent,
and it results in conversations
that are broken.
The good news is we
think, with Dialogflow,
there's a really
great opportunity
to build bots that
are intelligent.
And so the way it
typically works,
in terms of the architecture
we see customers deploying,
is there are all these
channels on the left.
So it could be
text channels, chat
on the web, Facebook Messenger,
Google Assistant, or it
could be digital
voice in a mobile app,
in a car, in a TV, or phone
calls through a phone gateway.
They come to Dialogflow,
which does the conversation
management and the natural
language understanding,
basically to break down, what
is the intent and the entities
in that natural language?
And then Dialogflow sends
that structured information
to the fulfillment layer.
You know, we think Google
Cloud is a great place
to host that fulfillment layer.
It can be on Cloud
Functions or Compute Engine.
But you could also
host it anywhere else.
You could put it on-prem
or wherever you want.
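The flow described here, where Dialogflow sends structured intent and entity data to a fulfillment layer, can be sketched as a minimal webhook handler. This is an illustrative sketch, not the demo's actual code: the intent name "order.create" is hypothetical, while the `queryResult` and `fulfillmentText` fields follow Dialogflow's v2 webhook format.

```python
# Minimal sketch of a Dialogflow v2 fulfillment webhook handler.
# The intent name "order.create" is hypothetical; the queryResult /
# fulfillmentText field names follow Dialogflow's v2 webhook format.

def handle_webhook(request_json):
    """Take a Dialogflow webhook request dict, return a fulfillment response."""
    query_result = request_json.get("queryResult", {})
    intent = query_result.get("intent", {}).get("displayName", "")
    params = query_result.get("parameters", {})

    if intent == "order.create":
        reply = "Ordering {quantity} x {product}.".format(**params)
    else:
        reply = "Sorry, I can't help with that yet."

    # Dialogflow reads the bot's reply out of the fulfillmentText field.
    return {"fulfillmentText": reply}
```

A handler like this could be deployed on Cloud Functions, Compute Engine, or on-prem, as described above; Dialogflow only needs an HTTPS endpoint that accepts the webhook JSON.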
How many people
here in the audience
were in the session about
Dialogflow last year
in Google Cloud Next?
OK.
A few of you.
So at Google Cloud Next
last year, in our session,
I announced that we
crossed 150,000 developers
for Dialogflow.
So this morning,
we just announced
that there's now over 600,000
developers on the Dialogflow
platform, which is crazy, right?
This is in just over a year
we crossed that landmark.
So it's not only growing,
it's accelerating,
and we're very, very
excited about the momentum.
And then when we ask developers
why they choose Dialogflow--
because there are a
lot of other solutions
out there for bot
development today.
It's a very exciting space.
The answer we hear
time and time again,
it's not the bells and whistles.
Other solutions out
there in the market
have things that
look good on paper.
But at the end of the day,
the most important thing
we hear most often from developers
is the quality of the natural
language understanding.
The machine learning that
Dialogflow uses is head
and shoulders above anything
else that's out there,
and it really makes
a difference, right?
Because what you
want to do is you
want to avoid things like that.
Right?
And so it's much more
important than anything
else you could want
in a bot platform.
So the exciting
thing is most of that
is due to the natural
language technology that
came from the API.AI
acquisition that we did a year
and a half ago.
And so what makes us
really, really excited
is that we're just
starting the journey
of making Dialogflow better.
And now that it's
part of Google--
we, at Google, have actually
been solving this problem
with natural language
understanding
for more than a decade in things
like Google Search, Google
Assistant, Gmail Smart
Reply, and Translation.
And we've built all of these
assets and capabilities,
like natural language
understanding engines, speech
recognition, TPUs.
And all of these are now
available to Dialogflow
to build better tools.
And so some of
these announcements
you're going to hear
about today are basically
a result of this new Google
technology becoming available.
So putting together this slide
kind of hammers the point.
Just for me, personally,
I just realized how much
we've done in the last year.
It's pretty crazy.
If you look at all of
the different launches
and feature enhancements
since our session
here last year at
Google Cloud Next '17,
we've added so much
to the product,
from built-in analytics
to multi-language support,
inline code editor, Stackdriver
integration, versioning,
and lots more.
And, today, I'm excited to
announce five new features that
make Dialogflow really well
suited to power the Enterprise
Contact Center.
So what we've heard
from a lot of you
are there are a few things that
are still difficult to manage
in chat bot conversations.
The first is connecting
bots to phones.
So we're going to talk a little
bit more about that today.
We're going to make it much
easier to connect bots to phones.
Second is answering questions
that touch the long tail.
Building Intents--
it's time consuming.
Knowledge connectors are
here to help with that.
Third is automatic
spell correction.
So, as you guys know,
in chats, users often
make typos and mistakes.
This uses Google
technology to solve that.
Built-in sentiment analysis,
we are introducing today.
That helps you
understand when you want
to transition to human agents.
And last but not least,
built-in text-to-speech.
That also powers
our phone gateway,
and it's built on
WaveNet technology.
So I'll zoom
through these slides
pretty quickly so we
can go to the demo,
and I'll show you guys how
some of these actually look.
So our Google Phone
Gateway is built
on the same technology used
by Project Fi, Google Voice,
and Google Hangouts.
It already powers over 20
million phone numbers today,
so it's already
operating at scale.
Now it's available to
any Dialogflow user.
You can add a phone
number to your agent
in less than a minute.
And the beauty of it-- it
wraps all of the technology
needed to make automated
phone calls happen,
including speech recognition,
natural language understanding,
speech synthesis, orchestration.
It's all handled
by Google Cloud.
Knowledge connectors-- we're
going to go deeper in the demo.
So I'm going to skip that.
Spelling correction--
so it looks pretty easy,
but it's really, really hard.
If we had to build something
that helps automate spelling
correction for bots
for Dialogflow,
it would have taken
us years and years
to create something
that's reasonably good,
and even then it's
hard to get it right.
Luckily, because Dialogflow
is now part of Google,
we were able to use
some of the tools
that Google uses for other
products like Search--
and I'm sure you guys have
all used spelling correction
in Search and in other places.
And now Dialogflow
can take advantage
of those same capabilities.
And this applies both to
Intents, like you can see here
on the right, and to Entities,
like you can see in the middle.
So built-in sentiment analysis--
a very common concern customers
have is, OK, I created a bot,
but how do I ensure there is
a great end user experience?
Built-in sentiment
analysis lets you
identify the sentiment
score for every query
that the user
sends, and then you
can use that in
your business logic
to decide when you want
to switch to human agents.
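That hand-off decision could be sketched as a small piece of business logic. This is an assumption-laden illustration: the threshold value is arbitrary, and the field path mirrors the `sentimentAnalysisResult.queryTextSentiment.score` shape Dialogflow returns (a score in [-1.0, 1.0], where negative means negative sentiment).

```python
# Sketch of business logic that escalates to a human agent based on the
# per-query sentiment score. The threshold is an arbitrary example; the
# field path mirrors Dialogflow's sentimentAnalysisResult response shape.

ESCALATION_THRESHOLD = -0.5  # hand off when the user sounds this unhappy

def should_escalate(query_result):
    """Return True when the query's sentiment score crosses the threshold."""
    sentiment = (query_result
                 .get("sentimentAnalysisResult", {})
                 .get("queryTextSentiment", {}))
    return sentiment.get("score", 0.0) <= ESCALATION_THRESHOLD
```

In practice you would tune the threshold (and perhaps also look at the sentiment magnitude, or a run of consecutive negative queries) before routing to a live agent.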
And last but not
least, text-to-speech
powered by DeepMind's
WaveNet gives you
the closest to human speech
that is possible for a bot,
and it's built in both
in our phone gateway
but also into Dialogflow
so you can use it
for IT and other uses.
And the one other cool
thing that we just
announced this morning is we
also added device profiles.
So what it does is we
actually shape the waveform
differently for our
speech synthesis
depending on which
speaker you use.
So if you play it
on a phone line,
we'll produce speech
that's optimized for phone.
If you play it on
a large speaker,
we'll produce speech that's
suited for a large speaker.
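A device profile selection like the one described could look roughly like this as a synthesis request. The field names follow the Cloud Text-to-Speech REST API; the voice name and the profile IDs shown (for example "telephony-class-application" for phone lines) are illustrative choices, not the only options.

```python
# Sketch of a Cloud Text-to-Speech request using a device profile, which
# shapes the synthesized waveform for the target speaker. Field names
# follow the Text-to-Speech REST API; the voice and profile IDs are
# example values.

def build_tts_request(text, profile_id):
    return {
        "input": {"text": text},
        "voice": {"languageCode": "en-US", "name": "en-US-Wavenet-D"},
        "audioConfig": {
            "audioEncoding": "MP3",
            # The profile applies post-processing tuned to the playback device.
            "effectsProfileId": [profile_id],
        },
    }

# Optimize the same text for a phone line vs. a large home speaker:
phone_req = build_tts_request("How can I help?", "telephony-class-application")
speaker_req = build_tts_request("How can I help?",
                                "large-home-entertainment-class-device")
```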
OK.
So let's try and see how this
all looks in a demo.
So those of you that were
in the session last year,
you'll remember what we did
was we built on stage here--
or, actually, in Moscone--
a bot that handles chats for
an imaginary Google hardware
store.
And it has an entity
called Products,
like Chromecast, Google Home,
Google Pixel, things like that.
And then it has Intents, one
for service, one for commerce.
So if we look at the one
for commerce, for example,
it requires a product,
address, and quantity.
And so if I say I'd
like to buy Chromecasts,
you can see it identifies
action as "buy."
The product is
Chromecast, but there
is no quantity and no address.
And now it asks, what
address would you
like me to ship it to?
I can say 3 Third Street,
San Francisco, California.
How many units do you want?
Five units.
And then it's done.
It's ordering five Chromecasts,
shipping to 3 Third Street,
San Francisco, California.
And there's a full JSON--
I think this window
is not showing up.
OK, there it is.
There is a full JSON you can
use in your fulfillment layer
to act on it.
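The slot-filling behavior in this demo (asking for the address, then the quantity) is handled by Dialogflow's required parameters, but the same check can be sketched against the structured JSON a fulfillment layer receives. The parameter names match the demo's commerce intent; the `queryResult` shape follows Dialogflow's v2 webhook format, and this is an illustration rather than the demo's real code.

```python
# Sketch of acting on the structured commerce JSON in a fulfillment layer.
# Parameter names (product, address, quantity) match the demo's intent;
# Dialogflow's built-in slot filling does this prompting itself, so this
# just illustrates the equivalent check.

REQUIRED_SLOTS = ["product", "address", "quantity"]

def next_step(query_result):
    """Prompt for the first missing slot, or confirm a complete order."""
    params = query_result.get("parameters", {})
    for slot in REQUIRED_SLOTS:
        if not params.get(slot):
            return "What {} would you like?".format(slot)
    return "Ordering {quantity} {product}, shipping to {address}.".format(**params)
```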
So what I'm going
to do now is, what
if you wanted to not just
help people buy stuff,
but also answer questions people
have about Google products?
So let's look for a
Chromecast FAQ web page.
And we're just going to choose
the first result that's here.
Let's take this URL and copy
it over, and let's add it
as a Knowledge Connector.
So we need to create
a Knowledge Base.
Let's call it My Knowledge Base,
and let's create a Knowledge
Item.
We call it Store FAQ.
This is going to be HTML.
It's going to be an FAQ.
And I'm going to
paste in the URL here.
And what it's doing now,
it's going to this URL.
It's downloading
this article or FAQ,
and it's going through the
questions and sorting them out.
There.
It just finished.
So I'll show you guys
the detail in a second.
But it basically
extracted the information.
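The same knowledge-item setup shown in the console can be expressed as an API payload. The field names here follow Dialogflow's beta knowledge documents resource (as I understand it), and the URL is a placeholder for the kind of public FAQ page used in the demo, not a real endpoint.

```python
# Sketch of the knowledge-item setup as an API payload. Field names follow
# Dialogflow's beta knowledge documents resource; the URL is a placeholder.

def build_faq_document(display_name, url):
    return {
        "displayName": display_name,
        "mimeType": "text/html",    # the demo ingests an HTML page
        "knowledgeTypes": ["FAQ"],  # parse it into question/answer pairs
        "contentUri": url,          # Dialogflow fetches and extracts it
    }

doc = build_faq_document("Store FAQ", "https://example.com/chromecast-faq")
```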
And then the next thing we
want to do is add a response.
So the default response
here is Knowledge Answer 1.
What happens many
times with FAQs,
though, is the customer is
going to ask a question.
You may not have the exact
same questions in the FAQ.
You may have something similar.
And there actually
might be more than one.
There might be two or
three that are similar.
So what we'll do is
we'll copy it over.
Let's choose Facebook
Messenger, for example.
If a user is on
Facebook Messenger,
they have the ability to
see more than one option.
So what we'll do is we'll
add cards in a carousel here
that show two different answers.
And we'll use question
1 here, and, here,
it will be question
2 and answer 2.
So we're going to show them both
the question we matched them
with, as well as the answer
so they know what we wanted.
OK.
I'm going to click Save.
And now, just so you
guys see the full detail,
you can see this
is what we scanned
from that web site, all the
questions and all the answers.
They're all here, and you
can even disable some of them
if you don't want to
cover them in your bot.
OK.
So let's go back.
We'll try it out in a minute.
But one other thing
I wanted to show you
guys is our new Phone Gateway.
So you can see this is
a new square we just
added this morning, the
Dialogflow Phone Gateway.
And what it does, it lets
you add a new phone line
to your agent in seconds.
So all I need to do--
I click the Dialogflow
Phone Gateway,
and now I choose an
area code if I want to.
I could also leave this
blank, but let's say
I want to choose Ohio.
I click Next.
Now I choose one
of these numbers.
I click Create.
And that's done.
So we now have a phone
number in this agent,
and it always starts
from the Welcome Intent.
So we'll go to the Welcome
Intent in a second.
Let's first test our
knowledge connectors.
So if, for example, let's
say there's a question here,
how is this different
from Apple's AirPlay?
So let's try this
question in Dialogflow,
and let's see if it works.
It needs a minute to update.
OK.
There it is.
So you can see it
gives the same--
it retrieves this whole
answer from the article,
even though the words
I use are a little
different from the article.
So I didn't use the same
words for the question.
It knows how to match it.
Now, it looks very easy,
and for humans, this
is a very easy task.
Those of you in
the audience that
have done a little bit of
natural language understanding
know it's actually
pretty hard for a computer
to understand how
to match things
like that in natural
language, because things
sound very similar when you
look at it programmatically.
So let's look at it in Facebook
Messenger, and let's try--
now there's two
different questions
here about remote controls.
Do I need a separate
remote control
for volume and other things?
Can you use Chromecast with
a physical remote control?
If I ask, how do
remote controls work,
it could be matched to either
one of those questions,
so it needs to update.
OK.
So you can see it matches
to both of those questions,
so it detects that
both of them work.
Now, this is really, really
hard, because, remember,
this is the same agent we're
using that also has the bot
technology.
So if I say I'd like
to buy a Chromecast,
it needs to both handle
the commerce actions
that we gave it, as
well as the FAQs,
and know when to
use each of them.
So it's non-trivial,
and there's a lot
of natural language work that's
happening behind the scenes.
So let's see it all coming
together in our phone demo.
So this is the phone number.
I'm just going to try it out.
Now, speech recognition
demos are always risky,
so bear with me here.
Hopefully, this
is going to work.
I'm going to try it on speaker
phone, which is not the best,
but let's see.
OK.
So if this works, what
it should be doing--
it should be going
to our Welcome Intent
and then reading what we
have in the Welcome Intent.
Let's try.
PHONE BOT 1: --to the
Dialogflow phone demo.
How can I help you today?
You can order new Google
products or request service
on existing ones.
DAN AHARON: What's the
difference between Chromecast
and Apple AirPlay?
PHONE BOT 1: In
most cases, AirPlay
works by beaming the
video or audio directly
from an iOS device to
the television over WiFi.
Your iPhone or
iPad must remain--
DAN AHARON: So everything
you're hearing right
now is not typed into
Dialogflow or anything.
It was ingested automatically
from a random article
that someone wrote
on the internet.
PHONE BOT 1: --to
the television,
which then starts streaming
straight from the internet.
[APPLAUSE]
DAN AHARON: Thank you.
PHONE BOT 1: --for beaming
the actual content.
If there's an advantage
to AirPlay, it's that--
DAN AHARON: Let's see if the
regular Intents still work.
PHONE BOT 1: --but
even this is becoming
less of a differentiator.
DAN AHARON: It's a long answer.
Yeah.
Can I buy a few Google Pixels?
PHONE BOT 1: What address would
you like me to ship it to?
DAN AHARON: 3 Seventh
Street, New York, New York.
PHONE BOT 1: How many
units do you want?
DAN AHARON: Seven.
PHONE BOT 1: Ordering seven
Google Pixel and shipping to 3
Seventh Street,
New York, New York.
[APPLAUSE]
DAN AHARON: So I just want
to emphasize all of that
looked really simple
to set up, and we
were able to do it in
less than a minute.
Some of those things, you're
able to do with other services,
but other services that are
out there in the market today
require you to set up a
phone gateway, and that's
something that will take you
probably a few hours to set up
with everything involved,
and who knows how much you
need to pay for it.
And then you need to connect
it to sort of the relevant bot
service.
And then if you want to
set up a knowledge service,
you need to set
that up separately,
and then you need to figure out
how many kind of gigabytes you
need for it, because it's
usually server based instead
of being serverless.
And then you need to connect
all of these components
and do a lot of coding.
We basically did all of
that with zero coding
in less than a
minute, and you have
kind of a full automated
bot that works on the phone.
So we're very excited
with this, and we
can't wait to see what
you guys all do with it.
Thank you.
[APPLAUSE]
So with that, let
me invite Tariq.
TARIQ EL-KHATIB: Hi, everybody.
I'm Tariq El-Khatib.
I'm the product
manager at Ticketmaster
within the Global Contact Center
and Technology Department.
If you don't already know,
Ticketmaster sells tickets.
A lot of tickets.
In 2017, Ticketmaster
sold 292 million tickets.
2% of those were
over the telephone
and through our contact centers.
So it's safe to say we receive
several million calls per year.
Ticketmaster is a division
of Live Nation Entertainment.
Because of this, we
receive a large number
of calls about all
sorts of events.
So on top of our
common questions
or ticket questions, such
as, I lost my tickets,
or how do I print my tickets,
we receive a lot of questions
that we categorize as general
questions, and many of those
are event and venue
specific, such as,
where do I find the will call
box office at Wrigley Field,
or how big is the camping
area at Paradiso Festival?
Before I jump in
and show you how
we're using Dialogflow
and Knowledge Base
Connectors integrated into
our customer service IVR,
I wanted to go
through and show you
two slides that really
highlight the impact
that an NLP like Dialogflow
could make on our sales IVR.
So over here is the transcript
of Dan calling the Ticketmaster
sales IVR.
It's a speech-enabled IVR
that was built 10 years ago.
At that time, it
was top of the line.
But the technology then
required short phrases
and didn't have any intent
or entity extraction.
So between Dan talking and
the system understanding,
it takes 10 attempts for--
and one missed
recognition-- for the system
to understand that
Dan wants to buy
two tickets to see the
Chainsmokers in San
Francisco on May 5.
Now, seeing what that
would look like with an NLP
like Dialogflow, this
can be completed in one
to two transactions.
This is obviously quite an
upgrade in customer experience
or user experience,
but the system also
has the flexibility to walk the
customer through each prompt
if they still prefer to do so.
All right.
Now I'm going to go ahead
and jump into our integration
with our customer service IVR.
I'm going to be
showing you two calls.
This first call is what I would
categorize as a common ticket
type call.
PHONE SPEAKER 1: A
couple months back, I
had bought tickets to
the Yankees game, which
is happening now in two days.
I still haven't received
my actual tickets.
TARIQ EL-KHATIB: So the
caller is calling and saying
they haven't received
their Yankees tickets.
This recording is captured, then
encoded and sent into our Intent
Service.
Our Intent Service houses a lot
of our complex business rules.
That then connects to
our external proxy,
which acts as a traffic
router between all
of our external services and
also translates all of that API
traffic into a generic
API to feed back
into the Intent Service.
So once the call
reaches external proxy,
it's then sent to
Dialogflow right here
on the right, Agent 1
that is our main NLP,
and that triggers an intent
of tickets not received
with the entity of
sports team, the Yankees.
That information is then sent
back through the pipeline
to the external proxy, then
back to the Intent Service,
where, in the Intent
Service, there's
a rule that's triggered
based off of that intent that
plays a landmark
prompt, let me see if I
can help you find your tickets.
And then based off of
the entity of the Yankees
and the customer's
incoming phone number,
we're able to tell that this
customer had a mobile ticket
order and inform them,
looks like your tickets
are available right now within
your Ticketmaster mobile app.
So now I'm going to go
ahead and show a more event
specific question.
PHONE SPEAKER 2: I
was calling to just--
I have a general question.
I'm trying to get an idea of how
big these camping spaces would
be, just the general size
at the Paradiso Festival.
TARIQ EL-KHATIB: So
the question was,
how big is the
area of the camping
space at Paradiso Festival?
Just like call
one, the recording
is captured and
encoded and then sent
to our Intent Service
and then external proxy.
And then it hits that same
Agent 1, our main NLP.
That's where it triggers
an intent that we labeled
as event and venue
specific question,
and it also yields the
entity of Paradiso Festival.
So that information
is then routed back
through the pipeline, where it
triggers a rule on the Intent
Service to tell the
external proxy to reprocess
the same transcribed
text from the first call,
but this time, based off
of the entity of Paradiso,
to hit an agent that is event
specific with the Knowledge
Base Connector tied to the
Paradiso Festival FAQs.
So, there, the answer's then
retrieved from the FAQ site,
goes back through the
pipeline, and that's
where the customer
will hear the camping
area at Paradiso Festival
is 25 by 15 feet,
enough to fit one
car and one tent.
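The two calls just described follow a two-stage routing pattern: the main agent classifies the transcript, and event or venue questions get reprocessed against an event-specific knowledge agent chosen by entity. This is a hypothetical sketch of that Intent Service rule logic; all intent labels, agent names, and the entity map are illustrative, not Ticketmaster's actual identifiers.

```python
# Hypothetical sketch of the two-stage routing described above. All intent
# labels, agent names, and entity mappings are illustrative.

EVENT_AGENTS = {"Paradiso Festival": "paradiso-faq-agent"}

def route(intent, entity, transcript):
    """Map a classified call to the next action in the pipeline."""
    if intent == "tickets_not_received":
        # Common ticket-type call: play a landmark prompt, then look up
        # the order from the entity and the caller's phone number.
        return ("landmark_prompt",
                "Let me see if I can help you find your tickets.")
    if intent == "event_venue_question":
        # Reprocess the same transcribed text against the event-specific
        # agent with the right Knowledge Base Connector.
        agent = EVENT_AGENTS.get(entity, "default-faq-agent")
        return ("reprocess", agent)
    # Anything else falls through to a human agent.
    return ("transfer", "human-agent")
```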
So I'm going to go ahead
and show you what this looks
like in Swagger, the back end.
So, right here, I've got
the encoded recording,
and go ahead and
click Try It Out.
And so what's going on now
is it's making that full trip
through the Intent Service to
external proxy, Dialogflow,
and back.
And then here's the result.
As we see, there's no error.
That's good.
The actual transcript
of the call--
the caller just had
a general question.
And then the intent is
correctly identified
as event venue specific
with the festival
name of Paradiso Festival.
And below here, you'll
also see the additional data
that we appended,
additional business logic,
like landmark and
confirmation prompt,
or if there's a specific action
that needs to happen based off
of this intent type.
So the second leg of the call
where it actually goes and hits
the Knowledge Base
Connector is something
we are currently developing.
So for the purposes
of this demo,
I'm going to go ahead and
act as a courier pigeon
and show what that looks like.
So you could see
the question is now
returned as a general standing
camping site is about 15
by 25 feet, enough for
a tent and vehicle.
And so it can actually answer
these event- and venue-specific
questions now in our
self-service IVR,
which previously would
need to route to an agent.
So to wrap things up, here are
our key takeaways and lessons
learned.
With a wide spectrum
of potential intents,
you really should consider using
multiple agents and knowledge
bases.
This can really help with
intents that are very related
but are still
somewhat different.
Also, consider creating
an external proxy
with a generic API for product
and feature scalability.
So, when we built
this external proxy,
part of my future roadmap was
to also include a sentiment
analyzer, but that was
before Google would
include that in Dialogflow.
So I at least now have
the ability-- with any
additional advances in AI--
to easily integrate
them with our system.
Next is conversation matters.
Put yourself in
the user's shoes,
and anticipate what
the next question is.
And, especially with IVRs,
you've got to break the old,
rigid prompting habits.
Those are not conversational, so
these services are only as good
as the effort and
the conversation
that you actually put into them.
Finally, think big with AI.
Every day, we see new
advances in the AI space.
For example, looking at the
phone connectors that were just
announced today,
in theory, we could
utilize those to create a
unique IVR for every major event
that we have or even
every major venue.
So based on your
everyday business,
you should really be
evaluating how these new AI
advances really can make a
difference to your business.
So with that, I'm
going to hand it off
to Akash for Marks and Spencer.
[APPLAUSE]
AKASH PARMAR: Hello, everyone.
I'm Akash Parmar.
I'm the enterprise architect
at Marks and Spencer.
Anyone here who's never
heard of Marks and Spencer?
Excellent.
That slide for you.
Established in 1884,
we have 1,500 stores,
and we have 81,000 employees,
32 million customers
across the world.
It's one of the most
recognized brands,
and it's a very popular
British household name.
And just to emphasize how
important Marks and Spencer is
to its customers, one
in three women in the UK
buy their bras from M&S, and we
sell 45 of them every minute.
In terms of our
contact center, we
take about 15 million
interactions over voice, chat,
and email, and that's handled
by [? 1,500 ?] advisors
and by our store staff, as well.
So our leadership team
gave us a challenge
where we were
asked to understand
our customers better.
We were asked to provide
more self-service options,
and we were also
asked to save cost.
And to achieve that, the
first thing we had to do
was to get rid of
our rigid DTMF IVRs,
and we replaced that with
a natural language based
solution, which
would ask a very open
ended question like,
how may we help you?
The response from
the customers was
digested into Dialogflow,
which would then come back
with an actionable intent.
This solution-- it's
not a deep solution
going into the depths
of Dialogflow, which is
our aspiration in the future.
But it is a solution which
was implemented widely
across our organization.
And this platform would
take over 12 million calls
over the next 12 months.
What this did was it freed
up around 100 employees, who
were just busy taking calls
and moving the calls around,
to actually not handle those
calls and be on the shop floor
and actually work with the
customers who are walking
in and out of our shops.
As you can see on
the side, my boss
calls it the stairway to
our customer service heaven.
We are still on step two,
so we have a long way to go.
Quick view of our current
solution-- what I'll do
is I'll try and make a call just
to give an experience of what
it sounds like.
PHONE BOT 2:
Welcome to M&S. Just
so you know, we record our
calls to help with training.
So that we can get you to
the best person to help,
please, can you tell
us in a few words
why you are calling today?
AKASH PARMAR: I want to
buy some blue shirts.
[PHONE RINGING]
PHONE BOT 2: We'd
really like to know
what you think of the
service you received today.
AKASH PARMAR: So the call--
just going to the flow, then.
What happened was,
I made a call.
It went into the
Twilio platform, which
answered the call, and it then
invoked our application, which
is in our secure environment.
The application said,
how may we help you?
The customer said, I want
to buy some blue shirts.
It then used Google's Speech
API to convert the speech
into text.
That text was handed back to our
application, which then sent it
across to Dialogflow.
Dialogflow came
back with an intent,
and we then use that intent to
route the call to our contact
center.
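That call flow can be sketched end to end with the external services stubbed out: Twilio hands the audio to the application, speech-to-text produces an utterance, Dialogflow maps it to an intent, and the intent picks a contact-center queue. The stubs, intent labels, and queue names below are hypothetical, standing in for the Speech API and Dialogflow calls described here.

```python
# Sketch of the M&S call flow with external services stubbed out.
# Intent labels and queue names are hypothetical.

INTENT_TO_QUEUE = {"order": "sales-team", "service": "service-team"}

def handle_call(audio, transcribe, detect_intent):
    """transcribe / detect_intent stand in for Speech API / Dialogflow."""
    utterance = transcribe(audio)          # speech -> text
    intent = detect_intent(utterance)      # text -> actionable intent
    queue = INTENT_TO_QUEUE.get(intent, "general-queue")
    return {"utterance": utterance, "intent": intent, "route_to": queue}
```

With the direct Twilio-Dialogflow integration mentioned next, the first two steps collapse: the application receives the intent directly rather than wiring up the speech and NLU calls itself.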
So that's how it
looks today.
The partnership between
Twilio and Dialogflow
is going to make
this much simpler.
I wish they had this partnership
six to eight months ago.
It would make my life much
easier, but it's now here,
and I think it is a very,
very great opportunity.
So what happens now is Twilio
would integrate directly
with Dialogflow.
So rather than just coming back
with the utterance or the text,
it will actually come
back with an intent.
And then the organizations
can act on that intent very,
very quickly.
So it will, A,
make things faster,
and B, it will help
us experiment quicker, as well.
So it's a very, very exciting
opportunity, this partnership
between Dialogflow and Twilio.
This is our current
dashboard and how it looks.
Let me see if I can flip
across to the live dashboard.
And, hopefully, we'll find
the call which I just made.
So, basically, the dashboard--
it's a quick view on how
many calls we are taking,
what are our top 10 intents,
what the date is today.
Oh, my blue shirts,
they are there.
So the top call--
that's my call, so I want
to buy some blue shirts.
So very clearly, you
can see, calls come in.
The utterance was transcribed
very, very accurately
by Google Speech API.
Went to Dialogflow.
Dialogflow said it's an order
call, and then based on that,
we said we're going to
send it to the sales team
so that they can take the order.
Two key things
which I'll take away
or which we have achieved--
A, a tool like Dialogflow
was simple enough to be
used by the business users.
So from day one,
they were the ones
who built it and trained the agent,
and they did it within days,
and we now achieve 90%
accuracy on our intents.
The second benefit,
which we didn't even
realize we'd get at the
start of this journey,
was the fact that we could now
ask the advisors not to fill
in any reason for contact.
Rather than the
advisor telling us
what they think the
call was all about,
we actually let the
customer intent do that.
So it's what the customer
wants, and they have said it
in their own words, and we
then use that as our reason
for contact.
And it's saved, like, 10 seconds
from every call in [? AHT. ?]
So we have some big
plans with Dialogflow,
so now we're going to
start going deeper.
First thing we want
to try to achieve
is an end-to-end conversational
order fulfillment journey.
So identify the customers,
integrate to the back end,
and also take payments.
So that's now a big
aspiration of ours.
We also want to use it for any
kind of appointment booking
system.
So we get a lot of
calls around bra fitting
appointments in our stores.
So we want to automate that.
We want to also go out and
use home devices, where
we have a dine in for 10
pounds offer in M&S, which
is very, very popular.
So, Google Home, Alexa can
be used by the customers
to find out whether we
have an offer ongoing.
If yes, what's this all about?
And also some complex journeys
like flower ordering, which
is, again, a very
big business for M&S,
but a complex
journey, because there
could be many reasons for which
you'll be ordering flowers--
happy, sad-- so
sentiments get involved.
So that's a journey we
want to go after, as well.
Key challenges--
so one challenge
we had was we had to integrate
Dialogflow separately,
and now that would go
away with the partnership
between Dialogflow and Twilio.
So that's a step in
the right direction.
Experiment rapidly,
but remember,
there is no prize if
you can't productionize.
It's all great to have all
these tools and technology,
but then if we can't
productionize them and send
customer calls through
it, it's of no use.
We need to also be aware of our
internal governance challenges.
So it took us, like,
five weeks to get
our first [? MVP, ?] but it took
us five months to productionize
it.
So procurement,
legal, finance, you
need to make sure you
take them on the journey
from the very start.
Otherwise, you would be
hitting lots of blockers.
To have a lean team
with the right attitude.
So I think my boss, Chris
McGraw, who's in the audience,
takes the phrase "learn
from your mistakes"
very, very seriously,
and he gets
upset if you don't make
five mistakes every week.
So that's what we do.
We make mistakes, and we
learn from our mistakes,
and it has worked for us so far.
Get the right
technology partners.
We had that in
Twilio and in Google.
Twilio was at every step with us
in our initial implementation.
It helped us recognize the
potential of this platform.
And we thank them for that.
The accuracy of Google Speech
API, the accuracy of Dialogflow
gave us confidence to go back
to the business and tell them,
this is something which is
definitely going to work.
It's not a question.
We know it's going to
work, and we went there
with total confidence.
Get partners where we don't
have internal resources.
We didn't have a
development team,
but we wanted to move fast.
So we partnered with
a company called
DVELP, whose CEO, Tom Mullen,
is in the audience, as well.
So the key message
there is make sure you
have the right team
behind you when
you set off on this journey.
And, also, the technology
landscape is changing so fast.
You have to make sure that
your architecture and your
applications are open to that.
New things are
coming in every day.
And we have to adapt to it
very, very quickly, as well.
So just to conclude, can
Dialogflow virtual agents
help contact centers?
We at Marks and Spencer are
definitely in the yes camp.
Thank you.
I'll pass it on to Jared now.
[APPLAUSE]
JARED MOORE: Thanks, Akash.
Hey, everyone.
So let me get a
quick show of hands.
How many of you have been
to a Home Depot before?
Awesome.
Yes.
So in case you weren't aware,
we are the world's largest
home improvement retailer.
We have over 400,000 associates
and over 2,000 stores
across the United States,
Canada, and Mexico.
And we also have one
of the world's largest
e-commerce websites.
So today what I'm going to do--
by the way, my name
is Jared Moore.
I'm from the Voice and
Conversational Search Team
at Home Depot.
And we're going to show
you a new feature that's
available in our Home Depot app.
So now what I'm showing
you here is our beta app.
And I actually just got word
that this feature is live on
our Android app, and
it's going to be live
pretty soon on our iOS app.
So if you all don't
have it already,
go ahead, download the
latest update of our app,
and give it a try
after the talk.
So what we developed is a
new version of voice search.
So now you can go and hit
the microphone button.
I'm looking for a hammer.
And it still worked.
So, you see, Dialogflow was able
to parse out that what I'm
actually looking for is a hammer.
We don't actually want to search
for the whole phrase "I'm looking
for a hammer," so it pulls out
just "hammer" and puts that
into the search box for us.
And, in the future,
we're also going
to use Dialogflow to have
an audio response confirming
to the user that what
we're searching for
is actually what they
want to look for.
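As a rough sketch, the "parse out the hammer" step described above comes down to preferring the matched entity over the raw transcript. The intent name "product.search", the "product" parameter, and the simplified result shape below are illustrative assumptions, not Home Depot's actual agent configuration.

```javascript
// Sketch: choose the search term from a Dialogflow result.
// The intent and parameter names are hypothetical, and the result
// shape is simplified from what the API actually returns.
function extractSearchTerm(queryResult) {
  if (queryResult.intent === 'product.search' && queryResult.parameters.product) {
    // Dialogflow strips the carrier phrase ("I'm looking for a ...")
    // and matches just the entity value.
    return queryResult.parameters.product;
  }
  // Fall back to the raw transcript if no product intent matched.
  return queryResult.queryText;
}

const result = {
  queryText: "I'm looking for a hammer",
  intent: 'product.search',
  parameters: { product: 'hammer' },
};

console.log(extractSearchTerm(result)); // -> "hammer"
```

The fallback matters in practice: if the agent can't match an intent, sending the raw transcript to the search box is no worse than the old behavior.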
Cool.
So how did we do it?
So we designed a five layer
architecture to enable this,
and the first layer
is pretty simple.
It's just a client, so anything
that a customer or associate
is going to talk to.
The clients then connect
to a proxy layer.
So if we want to have
an optional layer
to do authentication,
then we add it there.
And then the real
intelligence is
when we go to the intent layer.
This is Dialogflow, and it
provides the automatic speech
recognition, as you saw,
intent matching, as well
as, in the future, TTS.
Dialogflow then goes
to our routing layer.
And what we noticed
was, if I say
I'm looking for a hammer
to my mobile phone,
on the desktop site,
on a Google Home,
no matter what, I'm
looking for something.
And the way that
you respond to me
might be different
depending on what channel
I'm talking to you on.
But no matter what,
it's the same intent.
So what we do is we route
everything to a routing layer
and then route based
on where the client is.
So if you were talking
to the desktop site,
we'll get the
response specifically
for the desktop back end.
All right.
So in our example, we showed
you the page for hammers,
but if you were
on a Google Home,
then we couldn't
actually show you a page.
So we'd have a different
response for that case,
and we want to be able
to handle that here.
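The routing idea can be sketched in a few lines: one intent, different responses per channel, with the routing layer picking a handler from where the client came from. The channel names and response shapes here are made up for illustration, not Home Depot's actual routing code.

```javascript
// Sketch: one "find a product" intent, routed per channel.
// Channel names and response formats are illustrative assumptions.
const handlers = {
  // Desktop/mobile web can show a page...
  web:  (product) => ({ redirect: `/search?q=${encodeURIComponent(product)}` }),
  // ...but a Google Home has no screen, so it speaks instead.
  home: (product) => ({ speech: `Here is what I found for ${product}.` }),
};

function route(channel, product) {
  const handler = handlers[channel] || handlers.web; // default to web
  return handler(product);
}

console.log(route('web', 'hammer'));  // -> { redirect: '/search?q=hammer' }
console.log(route('home', 'hammer')); // -> { speech: 'Here is what I found for hammer.' }
```

Adding a new channel is then just a new entry in the handler map; the intent layer never changes.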
All right.
So if I had to narrow this
down into three main things
that you should focus on,
first of all, like I just said,
separate your
actions from intents.
You're really going to see that
intents are pretty much going
to be the same, no matter
where you're talking to.
If you want to
find something, you
are going to say it the same
way, no matter what you're
talking to, but the way
you want to be responded to
could change.
The next one is definitely
design your architecture
for every use case.
And from what you could
see in my previous slides,
the architecture can
be easily expanded
to add more and more clients.
And that's something
we wanted to be
able to do so that we weren't
stuck a couple of months
down the road when we had some
new channel we wanted to add,
and then we had to completely
re-architect our solution
in order to support it.
And then the last point is
share code between teams.
So if you look at our last
layer, the action layer,
we actually export
all of our code.
It's all on Google Cloud
Functions, so it's all on Node.
We export all of our code to
an internal NPM repository.
And then we can just have--
different channels can
just import that code,
add three lines of
code to their channel,
and then they can have
the exact same experience
in two different channels.
So that's one big takeaway.
And that's going to save
everybody a whole lot of time,
right?
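A minimal sketch of that shared-code pattern, under stated assumptions: the package name "@homedepot/product-search" and its handler API are hypothetical stand-ins for the internal NPM package the talk describes. The action layer publishes one fulfillment handler, and each channel adopts it with a few lines.

```javascript
// --- in the shared package (published to an internal NPM repository) ---
// One fulfillment handler, reused verbatim by every channel.
const productSearch = {
  fulfill(product) {
    return { query: product, url: `/s/${encodeURIComponent(product)}` };
  },
};
module.exports = productSearch;

// --- in a channel: roughly the "three lines" it takes to adopt it ---
// const productSearch = require('@homedepot/product-search'); // hypothetical name
const result = productSearch.fulfill('hammer');
console.log(result.url); // -> "/s/hammer"
```

Because the handler lives in one package, fixing a bug or adding a feature there updates every channel on its next dependency bump, which is where the time savings come from.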
So where are we going next?
First of all, we want
to see what we can
do with Knowledge Connectors.
I had a little bit of time to
play around with it myself,
and it seemed very, very easy
and very quick to set up,
so we want to see what we
can do to productionize
that in the future.
And we also want to look
into enabling new intents,
like where's my order,
add to shopping list,
or finding products
inside of a store.
And we're also looking
into new channels,
like the IVR system and,
also, the Google Home.
[MUSIC PLAYING]
