[MUSIC PLAYING]
ALAN HO: Hello everybody.
AMIR SHEVAT: Hello.
ALAN HO: So you're probably
here because you heard something
about bots taking
over the world.
You might have even
spent a weekend
hacking yourself a bot--
a Slack bot, or Alexa Skill,
or something of that sort.
But there's a lot of difference
between a weekend hack project
and building a bot that really
works, especially in the enterprise.
So today, we're very lucky
to have Amir from Slack.
He heads developer
relations at Slack
and he's seen over 17,000 bots
built on the Slack platform.
And he has a lot of best
practices on how to build bots.
And after that, we're
going to go into a session
and show you how to use Google
Cloud Platform to implement
some of these best practices.
And we're going to end with
a really cool and fun demo.
All right?
AMIR SHEVAT: Awesome.
ALAN HO: So with that,
I'm about to begin,
but I want to give it a
little bit of context first.
There are actually
three presentations related
to conversational apps.
Today is going to focus
mostly on serverless
and some of the
application architecture.
And tomorrow we have a
session on a deep dive
into natural language
processing with API.AI.
So if you like this talk--
or even if you don't
like this talk--
go to the next one.
And then third one is how you
actually build a voice bot.
So today we're going to
talk mostly about chat bots,
and tomorrow we'll
talk about voice bots.
So with that I will
give it to Amir.
AMIR SHEVAT: Thank you.
Thank you for having me.
Hi, everybody.
My name is Amir Shevat, I lead
developer relations in Slack.
Let's start by
defining what bots are.
So bots are a new
user interface.
They help you expose
tools, and services,
and workflows through
a messaging interface.
Think of them as users that
live inside your messaging
app, but powered by software
rather than by humans.
So in the next few
minutes, I'm going
to walk you through
a few use cases and best
practices for how to
use and implement bots.
Let's see a few examples.
This is one of my favorite bots.
It's called Amy Ingram.
It's by a startup
called X.AI in New York.
And it's actually
an email bot.
X.AI developed this
bot, and what it does
is schedule your meetings.
So if you email me and say, hey
Amir, I want to meet next week,
I add Amy to the email.
It has access to
my calendar and it
starts setting up my
meetings through email.
It sends about 400
emails a month for me.
So it saves me a lot
of business process work
that I'd otherwise have to do.
People send thank-you
letters to Amy and CC me.
They're not aware
that it's software.
So it's pretty
awesome and amazing
to have this companion
that helps you do things.
Another bot that I really
like is the Stats bot.
It actually connects to
the Google Analytics API
and surfaces reports
inside Slack.
So you can actually--
this is my blog.
As you can see it's not
performing very well.
But I can actually
talk to the bot
and ask it things like,
where are my users, where
are they coming from--
have a conversation
about my statistics.
And I don't need to do the
context switching between Slack
and Google Analytics.
I get everything in a
conversational interface.
At the technical level,
there are a few differences
between bots and the
applications you know from,
for example, Android.
The contrast is that
bots are not binaries that
are installed on your client.
We don't install
anything on your client.
We connect services.
So your bot actually lives
on Google Cloud functions,
or any other hosting
provider, and it
gets events about what's
happening in Slack
for your users.
And you can actually interact
with the client using our web
API.
And then the Slack servers
talk to the Slack client
using a socket.
So you can actually abstract
and provide a lot of security
using this model.
But your code is not installed
on the Slack client--
just want to make that clear.
So now let's talk a little
bit about bot interactions.
When people say bots,
they think of an entity
that has a conversation with them.
They start thinking NLP, and
that's not necessarily the case.
So we'll go through
a few use cases
and see where NLP and AI might
make sense, and some simple use
cases where they might not.
So the first type of bot
interactions are notifications.
Notifications are an easy way
to pipe content into Slack.
In this example, we're
using the calendar API
to pipe content
about my meetings.
Why is this useful?
Because I don't need
to do the context
switching between my
Google Calendar and Slack.
I get my notifications in Slack.
It also improves transparency
and actionability
because, think about it-- if my team
sees this notification,
they know not to ping me.
They know that I'm in a meeting.
Think of a message that
is sent by email
to all of your DevOps team versus
a message that goes into Slack.
A single message makes
it actionable and more
transparent about what happened
and what actions
we need to take in order to
resolve an issue, for example.
Does that make sense?
Awesome.
At the technical level,
notifications are very simple.
They are a URL that
we expose to you.
You hit that URL with
a certain payload
and we resurface that
information inside Slack.
So as you can see,
you can actually
use a curl command
to call and create
a notification inside Slack.
It's as easy as a curl command.
The second type of
interaction that we've seen
is slash commands.
Slash commands are an easy way
to use Slack as the command
line for a third party service.
In this example, we
work with Foursquare
to surface locations
for business lunches.
As you can see, you do slash
Foursquare business lunch
in Miami.
We hit the Foursquare servers,
and what they return to us
is what we show in Slack.
So at the technical level, it's
an API that you expose to us.
So the user hits
the slash command,
and we hit your endpoint
and send you all the
information, like who
the user is, what
team they're on,
and what query string they
added to the slash command.
And what you reply to us is
what we display in Slack.
And this is a simple
example of how to show
"hello" with the username.
The last type of
interaction is actually
a conversational interface.
And this is a full
bot conversation.
And this is very interesting
because you can actually
facilitate a lot of interesting
business use cases using bots.
In this example,
we work with Howdy
to facilitate stand up meetings.
Are you familiar with
stand up meetings?
So we stand up and we say,
what are we working on today?
What are we working on tomorrow?
What are we blocked on?
And that happens all the time.
And it's really easy in small
teams that work together,
but it becomes much,
much harder when
you have big teams that are
distributed around the world.
So Howdy actually goes to
each of the team members
around the world in
Slack, and asks them,
what are you doing today,
what are you doing tomorrow,
what are you blocked on, and
reports that back into Slack.
So taking something very
simple and automating it
through a conversation
actually makes
a lot of sense in this case.
Building bots is not as
easy as building a slash
command or notification.
So we partnered with Howdy
to create an open source Node
framework for building bots.
And even if you're not
familiar with Node,
you can see more or less
what the bot is trying to do.
In this example, it's
looking for hello, hi,
or a greeting delivered as a
direct mention, mention,
or direct message.
And what it does is
reply with Hello.
So we actually took the
aspects of conversation
and made them much easier
using this Node framework.
And I highly recommend
you check out BotKit.
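BotKit itself is an npm package; as a self-contained illustration of the hears/reply pattern it exposes (a sketch of the pattern, not BotKit's actual internals), a minimal dispatcher might look like:

```javascript
// Minimal, self-contained sketch of a BotKit-style hears/reply pattern,
// so you can see the shape without installing anything.
function makeBot() {
  const handlers = [];
  return {
    // hears(patterns, types, handler): run handler when a message of one
    // of the given types matches one of the regex patterns.
    hears(patterns, types, handler) {
      handlers.push({ patterns, types, handler });
    },
    receive(message) {
      for (const h of handlers) {
        if (h.types.includes(message.type) &&
            h.patterns.some(p => new RegExp(p, 'i').test(message.text))) {
          h.handler(this, message);
          return;
        }
      }
    },
    reply(message, text) { this.lastReply = text; }
  };
}

const bot = makeBot();
bot.hears(['hello', 'hi', 'greetings'],
          ['direct_mention', 'mention', 'direct_message'],
          (b, message) => b.reply(message, 'Hello!'));

bot.receive({ type: 'direct_message', text: 'hi there' });
console.log(bot.lastReply); // -> Hello!
```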
On top of that, we
released buttons.
And I've never been so
excited about adding buttons
to a product as I was with this.
Buttons are amazing
because they let
users take action inside Slack.
In this example from
Greenhouse, you
want to approve a
request for a candidate.
You click on the button
and the message changes.
So you can actually create
this illusion of an app--
or not illusion-- the actual
app that lives inside Slack.
And users can take
action inside Slack.
Think how awesome it is to get
a vacation approved
just by clicking on a button,
or to do your expenses just
by clicking on another button.
And these apply to all of
the types of interactions
that we talked about.
You can have notifications
with buttons.
You can have slash
commands with buttons.
And you can have full fledged
conversation with buttons.
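As a sketch of how buttons attach to a message -- the field names follow Slack's attachments/actions message format, but the `callback_id` and values here are made up for illustration:

```javascript
// Sketch of a message payload with buttons. Buttons live in an
// attachment's "actions" array; when a user clicks one, Slack POSTs
// the callback_id and the chosen value back to your app.
function buildApprovalMessage(candidate) {
  return {
    text: 'Approve request for ' + candidate + '?',
    attachments: [{
      fallback: 'Your client does not support buttons.',
      callback_id: 'candidate_approval', // identifies this interaction server-side
      actions: [
        { name: 'decision', text: 'Approve', type: 'button', value: 'approve' },
        { name: 'decision', text: 'Reject',  type: 'button', value: 'reject' }
      ]
    }]
  };
}

const msg = buildApprovalMessage('Jane Doe');
console.log(msg.attachments[0].actions.map(a => a.text).join(', '));
// -> Approve, Reject
```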
To summarize, we have
three types of interactions
that we've seen this year.
Notifications-- easy way
to pipe content into Slack;
slash commands-- using
Slack as the command line
for other services to
query and take action;
and then full conversation,
and that facilitates workflows.
And you can augment all of these
with actionable buttons
so users can take action
inside the conversational
interface.
So let's talk a little bit
about use cases that we've seen.
The first use case is the CRM.
That was the most
requested use case
we've seen from our clients--
the most sought-after
integration.
And as you can see,
the bot actually
augments the human conversation.
So you can talk in
the sales channel
and have the bot actually
provide good information
inside sales, in context.
So you don't need to go
from Salesforce to Slack.
It actually augments
the conversation.
The second type of
interaction that we've seen
is with task management.
So Trello created
an awesome integration
where you can actually
assign a time to tasks.
And you can do
everything-- all the
small everyday interactions
that you have in Trello--
inside Slack.
But what if you want to
build your own integration?
What if you have
your own process?
What if you don't connect to
a third-party service
but have your own process?
So in this example, Meetup.com
has created their own process--
in this case, for capturing
backlog items from a conversation.
So we're talking in Slack
about all these cool features
and ideas.
They've implemented
a slash command
to add ideas to a backlog--
to add tasks to a backlog.
And this is actually
powered by Cloud Functions.
So not only can you use apps
and integrations that come out
of the box, you can easily
create your own workflows
that are specific to your
organization and enterprise.
And that's it.
Thank you.
[APPLAUSE]
ALAN HO: So I just want
a show of hands here.
Who has actually tried
building some sort of bot?
Oh wow.
OK.
This is great.
This is really good.
OK.
So we're going to go over
how you can build bots
using Google Cloud Services.
And the first thing
I want to talk about
is the concept
of serverless,
and then I want to introduce
some of our machine learning
services.
Serverless
is a big buzzword, right?
So I want to start
off with a definition
of serverless.
So what is serverless computing?
Serverless computing
eliminates the need
for provisioning and
managing individual servers.
It's nothing more and
nothing less, right?
But the reason why
this is important
is because an
inordinate amount
of developer
and operations time
is spent on this activity.
You know, no
manager or CEO said,
thank you for managing my
servers last year, right?
So serverless, even
though it's a buzzword,
it actually has a lot of
financial implications.
The other thing about serverless
that people don't understand
is that serverless
is fundamentally
your stack re-imagined
and simplified, right?
In this scenario, you
don't care about things
like load balancers.
You let higher level
services take care of it.
Even how you write
your applications
is very different.
Code that used to live within
your business application
suddenly moves out.
So why don't I just
give you a little overview.
If you have asynchronous
events, they go through
some sort of Pub/Sub mechanism
and get picked up and sent
to the function as a service.
Synchronous requests
go through some sort
of API gateway.
Now, the API gateway is
optional, but the reality
is that if you build
any production system,
you're probably going to need
the API gateway in front of it.
The function as a service
is where your actual code
execution occurs.
Again, there are a lot
of terms here.
A backend as a service
or database as a service
is where your application
storage happens.
So I want to deep dive into
a function as a service
and backend as a service.
So a function as a
service is very simple.
It's basically a place
that you can upload code--
not a virtual machine, right?
You're not uploading
a virtual machine.
You're just uploading code.
It's event based.
There's monitoring built in.
And you're paying
by every request.
Now, I want to first talk about
the first part-- uploading
code.
So uploading code versus
a VM is very important
because it allows the cloud
provider to optimize the system
to run your application.
So today, maybe your
application might
be running on an Intel processor
and tomorrow it might actually
be running on an ARM processor.
That's only possible if you
give the cloud provider
the ability to optimize your code
for the given compute platform.
Now there are some gotchas.
The biggest gotcha is you don't
have access to the file system.
You have to make your
application stateless.
And the reason why is
because it enables the cloud
provider to spin up your
function more quickly--
spin up a new instance
quickly, which enables
things like auto scaling.
But not having access
to the file system
is kind of tricky, because
code that used to work--
things such as loading
configuration files from
the file system,
disk-based caches,
or installing
your own logging system--
suddenly doesn't work
within a function-as-a-service
environment.
So you have to rely on
different mechanisms
to do things that
you used to do.
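One common replacement mechanism is environment variables: configuration that used to live in a file on disk moves into the deployment environment instead. A minimal sketch -- the variable names here are hypothetical:

```javascript
// In a stateless function-as-a-service environment there is no durable
// file system, so configuration moves from files on disk to environment
// variables set at deploy time. Variable names below are made up.
function loadConfig() {
  return {
    slackToken: process.env.SLACK_TOKEN || '',          // secret, set at deploy time
    logLevel:   process.env.LOG_LEVEL   || 'info',      // fall back to a sane default
    maxRetries: parseInt(process.env.MAX_RETRIES || '3', 10)
  };
}

const config = loadConfig();
console.log(config.logLevel, config.maxRetries);
```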
The second concept that
I want to introduce
is a backend as a service,
or what some people call
a database as a service.
What differentiates
a truly serverless database
as a service from something
like Cassandra or
MongoDB is that the
databases underneath
the hood are multi-tenant.
These are usually
NoSQL, though if you
looked at our announcement
about Spanner--
one of the first
horizontally scalable
relational databases--
that's rapidly changing.
And there's going to
be logging built in.
And you pay by
request and storage.
There are some gotchas, though.
Usually with these kinds
of database-as-a-service
or backend-as-a-service offerings,
you have limited ability
to configure your
indexes, right?
So this is going
to impact you
if you have extremely
low latency requirements
or a very,
very complex query.
There are times when you
won't be able to use
a backend as a service to
fulfill that capability.
At Google, we have
three different types.
We have Cloud Datastore;
we have Firebase,
which is our mobile
backend as a service,
which actually leverages
Cloud Datastore underneath
the hood; and if you need to
run a backend as a service
in your own data center, we
also have Apache Usergrid.
And that's being used to run
a lot of enterprises as well.
So let's give you
an example of this
from a scalability standpoint.
When Pokemon first
came to Google, they said,
OK, we're going to
launch this game--
and it turned out that
after a couple of months
they were doing 50 times the
traffic that they originally
projected.
So these systems are very,
very scalable and very good,
especially for apps that
you don't know if they're
going to take off, right?
All right, so recapping the
benefits of serverless-- number
one, much lower
infrastructure costs,
because you're getting
economies of scale.
You will have much
lower operational costs,
because routine things like
auto-scaling your application,
or adding more servers, things
of that sort, are handled
by the cloud provider.
But most importantly, I think
there's a part of serverless
that people don't think about.
It's a much more simplified
programming model.
That means that even though
your code has limitations,
it lets you run faster
and execute and build
your code faster.
You focus on the code.
So I want to talk a little
bit about the machine learning
services for building your bot.
So Google has a lot of different
machine learning services you've
probably already heard about.
And if you think
about this, it's
actually segmented
into two categories--
machine learning services that
already have the model built in,
so you don't actually have
to train a new model;
and machine learning services
where you bring your own data
and your own model--
namely Cloud Machine Learning.
But before I dive
into which particular service
you need, the first
question to ask yourself
is, do you really need AI?
Earlier, you saw that
there were three types
of interactions-- notifications,
slash commands, and then
full conversational bots.
The reality is, for
notifications, you
don't need AI, right?
I mean, it's a one-way
communication, triggered
by an event, that
tells a person
something has happened.
You really don't need AI.
For slash commands, if
it's a very simple command,
you probably don't need AI.
But if you have a command
that is taking in multiple
parameters-- for
example, in this case,
it says business lunch in Miami,
and your system needs to be
able to react to that--
having AI, namely natural
language processing, helps.
It also helps with dealing with
things like spelling mistakes--
fuzzy matching.
And then for full on
conversational bots
you definitely need AI, because
people could say anything.
You can't create some massive
if-then-else statement
and expect it to work.
So what does natural
language processing do?
It's very simple.
What it does is take
unstructured text--
unstructured conversations--
and turn it into
machine-understandable user intents.
And the very aptly named API.AI
uses AI to turn a human
conversation into an API.
So that's a very, very good name
for that particular product.
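A sketch of what that looks like in code, using a mock response shaped like an API.AI-style query result -- treat the exact field names as illustrative:

```javascript
// Sketch of turning an NLP service response into a machine-usable
// intent. The shape below mirrors an API.AI-style query response
// (intent name plus a confidence score); field names are illustrative.
function extractIntent(response) {
  return {
    intent: response.result.metadata.intentName,
    confidence: response.result.score,
    reply: response.result.fulfillment.speech
  };
}

// A mock response, as the service might return for a greeting:
const mockResponse = {
  result: {
    metadata: { intentName: 'greeting' },
    score: 0.87,
    fulfillment: { speech: 'Hello!' }
  }
};
console.log(extractIntent(mockResponse).intent); // -> greeting
```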
So when you pull
the architecture all together,
you have your clients on
the very left-hand side.
They're going to be
talking, probably,
to some API management
gateway that
takes care of things like
security and authorization,
things of that sort.
Your application code lives
in your cloud functions.
Your application data lives in
something like Cloud Datastore.
And the business logic would
talk to the Machine Learning
Services to interpret
what the user is saying
and give back user intents.
So we're going to have
a quick demo here.
And for this demo, Amir
is going to help me.
I decided that I want to build
a bot that kind of solves
my problem.
And one of the things that
I really hate about bots--
I mean, sorry-- about
conversations via text
is that sometimes when you're
in chat you don't quite
get a lot of emotion.
It's not fun.
You just do your daily work.
It's not fun.
Sometimes people like me, we're
not that great with emojis.
So you sacrifice a
little bit of fun.
The second thing that really
hurts me when I'm in chat
is when I'm on a deadline,
something I say to my teammate
may come off as really
harsh criticism.
So for example,
if someone is
missing a deadline,
I just say, you
missed the deadline,
something of that sort.
I often say things that
are extremely curt
and can be
misinterpreted as criticism.
So can I create a bot that
makes these conversations
a little bit more fun?
Can I create a bot
that would tell me
if I've said some boneheaded
thing that comes off
as criticism?
So I decided to create
something called KeanuBot.
And what KeanuBot
does is that he
looks at all the conversations
and tries to figure out
a phrase from a movie--
you know, classics,
like "Point Break"--
that would either
liven up the conversation
or take the sting
out of curt remarks.
So I'm going to-- actually
can you switch to the demo
and Amir will demo with me?
All right.
So I might say something
like, hi, Amir.
AMIR SHEVAT: Hello.
ALAN HO: How are things going?
AMIR SHEVAT: I'm going to
answer my typical answer--
amazing.
Whoa.
Or I might say,
my car broke down.
Oh, bogus.
ALAN HO: All right, so let's
take another example. Say Amir
and I were collaborating
on this next presentation and
I'm like, oh my gosh, Amir,
you missed the deadline for
submitting your draft to legal.
Yeah, KeanuBot is
just reminding me
that, OK, maybe that's
a little bit too harsh.
I got to tone
things down, right?
OK.
So let's go back to the slides.
I want to actually show
you the architecture
and how that works.
So what I just showed
you is the Slack client
talking to the Slack server,
talking to an API gateway--
which is Apigee's API gateway--
talking to Cloud Functions,
talking to API.AI,
and giving back a response.
All right.
So let's go back to the demo.
So I want to show you--
within Slack there's a UI
that you can use
to create a bot.
And within the
bot, what you do is
that there is a section
called Event Subscriptions.
So I am going to set the Event
Subscriptions to send events
to this Apigee gateway.
And within this
UI I can also set
which events to subscribe to.
So events might be like
posting new messages,
but it could also
be other things
like a new user joins
the room, or a new user
got kicked out of a room.
So your bots can react in
different ways to that.
So in this kind of
scenario, I basically
subscribed to all the
messages going to the channel.
The next part I want to
show you is the API gateway.
And in the API gateway you
can create one or more API
proxies.
In this scenario, I created
this KeanuBot event handler.
And if I go into it--
oh.
Let's redeploy.
Nothing like trying
things out in production.
So what you have here is that
it shows the URLs over here
and points them to a
particular endpoint target.
What also you can do too--
and I hope that this is
not going to break on me.
Give me one second here.
Oh, actually, I'm not going
to play around with that.
So what I'm going to
actually try doing
is I'm going to try
deploying a filter.
So also, what the API
management system does
is let you do things
like spike arrest filters, which
prevent your bot from getting
flooded with too much traffic,
or apply security
policies like OAuth2.
So this is an example
of application security
code that may have
lived in your app
before being
moved out of your app
into higher-level services.
Let's put a spike arrest
filter on here and let's
set it to 5, OK, 10
messages per minute.
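For reference, an Apigee spike arrest policy is a small XML fragment. A minimal sketch -- the policy name is made up, and `10pm` means 10 requests per minute:

```xml
<!-- Sketch of an Apigee SpikeArrest policy attached to a proxy.
     "10pm" throttles traffic to 10 requests per minute. -->
<SpikeArrest name="SA-KeanuBot">
  <Rate>10pm</Rate>
</SpikeArrest>
```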
OK.
All right.
So it's submitting it.
While that's
happening, I'm going
to show you cloud functions.
So cloud functions--
this is the console.
Any cloud functions you
deploy will show up here.
You could literally have
hundreds of functions.
That's the great
thing about this.
And you're not going to be
charged for functions that
are not running.
They'll be spun up on demand.
We could take a
look at the logs.
All cloud
functions, by default,
have a logger built in.
And you can see here
the last messages
that came through--
how are things going,
things of that sort--
all show up
in the log.
Let's just double check to
make sure this stuff works.
OK.
All right.
Let's see-- let's
hope it still works.
I'm going to actually
introduce a bug in the system
and show you how
a deployment goes.
So inside this cloud function--
writing a cloud function
is very simple.
You basically write
a function over here.
It looks exactly
like an Express app
because underneath the hood it's
actually leveraging Express.
And then you basically export
that particular function.
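A minimal sketch of such a function -- the function name and response text are made up; the challenge echo is the one-time URL verification Slack's Events API performs when you register an endpoint:

```javascript
// Minimal sketch of an HTTP Cloud Function. The signature is
// Express-style: you get (req, res) and export the function.
function keanubot(req, res) {
  const event = (req.body && req.body.event) || {};

  // Slack's Events API sends a one-time URL-verification challenge
  // when you register the endpoint; echo it back to prove ownership.
  if (req.body && req.body.challenge) {
    return res.send(req.body.challenge);
  }

  res.send('Received: ' + (event.text || ''));
}

exports.keanubot = keanubot;

// Exercise it with mock request/response objects:
const mockResponse2 = { send(body) { this.body = body; } };
keanubot({ body: { event: { text: 'hi Amir' } } }, mockResponse2);
console.log(mockResponse2.body); // -> Received: hi Amir
```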
Now, I'm going to put a little
code in here that actually
causes KeanuBot to ignore all
the requests that are coming in
from itself, because
KeanuBot is only supposed
to monitor human responses.
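The guard itself can be a one-liner. This sketch assumes the Events API convention that bot-authored messages carry a `bot_id` or a `bot_message` subtype:

```javascript
// Sketch of the self-message guard: messages authored by bots carry a
// bot_id (and often subtype "bot_message") in the event payload, so
// skipping them prevents the bot from replying to itself in a loop.
function isBotMessage(event) {
  return Boolean(event.bot_id) || event.subtype === 'bot_message';
}

console.log(isBotMessage({ text: 'whoa', bot_id: 'B123' })); // -> true
console.log(isBotMessage({ text: 'whoa', user: 'U456' }));   // -> false
```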
And I'm going to introduce
this bug over here.
And then I'm going to
deploy it to just show you
how a deployment works.
And it typically takes
under two minutes
to actually get a
deployment happening.
So while that's
happening, I want
to quickly show you what's
going to happen with this bug.
So this bug
is going to cause the bot
to go into an infinite loop.
And the reason why
is because
I created a set of
intents in API.AI.
And one of the intents
is this whoa intent.
So this whoa intent,
basically what it says
is that if it ever sees
a message saying whoa,
it responds back
with whoa as well,
along with a picture of Keanu Reeves.
So let's try this out and
let's see if it works.
Start tracing the session.
And let's just double check
to make sure it's deployed.
OK.
And then--
All right.
So now it's going
into its infinite--
it went into an infinite loop.
So let's go take a
look at what happened.
So one of the cool things--
oh, give me one second here.
Again.
Well, I'm having some issues
with the UI for the API
management system.
But what's happening
here is, actually,
if you look in the
logs, you can see
that it was sending whoa,
whoa, multiple times.
And then eventually it stopped,
because what was happening
was at the API gateway, we
had set a spike arrest filter
and it basically was able to
block that flood of messages
coming in.
So let's
go fix the code and redeploy.
And while that's
happening, I want
to show you how you can
use API.AI to create
a new intent on the fly.
So I'm going to
create an intent.
Inside a Slack channel,
a lot of times people
ask, how do you do this,
or I don't know how to do that.
So I might say something
like does anyone
know how to do something?
Right?
And whenever
somebody says this, I
think about the Keanu
Reeves quote in "Matrix"
where he says, oh
I know Kung Fu.
So we will take this
picture and we can put it
in the text response.
And we'll save this--
and we'll save it here.
And once we create
the intent, we
can actually try it out
within the API.AI UI.
So it doesn't have to
be exactly the same.
I'll say something
like, Amir, do
you know how to
fix my Slack code--
actually, does anyone?
Oh, OK.
So it did get-- it actually
matched a different one.
Let's try something else.
Does anyone know
how to fix Slack?
OK.
Oh, you know what?
I forgot to press Save.
[LAUGHS]
Oh, what's going on here?
OK, it just took a little bit
of time to actually get through.
So you can actually
test within here.
And what you see
here is that it's
able to match the
particular intent.
And because it's
not an exact match,
it actually gives you a score.
So this gives you a measure
of how confident the match
for that particular
phrase actually is.
And within your code, you could
basically put a match filter.
And I said if the score is
higher than 0.5, then it
would actually return
a response.
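That match filter is just a threshold check. A minimal sketch, with a made-up intent name:

```javascript
// Sketch of the confidence-score filter: only act on an NLP match
// when the score clears a threshold (0.5 here).
const MATCH_THRESHOLD = 0.5;

function shouldRespond(result) {
  return result.score > MATCH_THRESHOLD;
}

console.log(shouldRespond({ intent: 'i-know-kung-fu', score: 0.87 })); // -> true
console.log(shouldRespond({ intent: 'i-know-kung-fu', score: 0.31 })); // -> false
```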
So if we actually
go to the code--
say, does anyone know
how to debug Visual Basic?
And it pops up.
AMIR SHEVAT: Awesome.
ALAN HO: So Amir, what do
you think about this bot
that I created?
And just give me some
comments and feedback on it.
AMIR SHEVAT: So
first of all, I think
it's awesome because it
augments conversation.
So it actually
addresses your problem,
which is that
sometimes we say things
in text that other people
don't know how to interpret,
right?
And this adds a layer
of this awesomeness.
What I would suggest,
moving forward,
is maybe having the bot
answer these awesome replies
in threads.
So you can actually have
the conversation going,
but actually have the
bot respond in threads
so that it doesn't
disturb our conversation.
We can have our
conversation in line
and if we want to have this
interpretation of the bot
and what it does say
about what we are saying,
we can click on the threads
and see what the bot is saying.
Do you understand?
So that could be a nice
addition to your bot.
ALAN HO: And you
know, I just showed
there were a lot of little
errors here and there.
What are some practices
that bot builders
deal with if there's
a misinterpretation
or what are some
strategies around that?
AMIR SHEVAT: That's
a great question.
I think one of the key
things is read your logs.
You will see that users
interact with your bot in ways
that you didn't anticipate--
use cases
you didn't think of.
People ask bots about Trump.
They ask bots
about the weather.
They ask bots about anything.
I even found two
founders who actually
tracked the number
of times people
said I love you to their bots.
Think about it-- because this
is an interaction that
is very human and very
intimate, people really
tend to have emotional
responses to the bot.
So don't be surprised--
if your bot actually
delivers on what it does,
it will get a lot of
love from the users.
ALAN HO: OK.
All right.
So let's go back to slides.
We're almost at our end.
So I'd just like to remind
you that whether you liked this
presentation or
you didn't like it,
we have two more
presentations-- one on API.AI,
and then one on how,
instead of building a Slack bot
or a chat bot,
you can build a voice bot.
So with that, let's
have some questions.
[NO AUDIO]
So the question was, why did I
choose Cloud Functions instead
of App Engine, or Endpoints?
So the reason why I chose
Cloud Functions is because it
was just so darn easy to do.
And the truth of the
matter is App Engine--
actually, the code
that I just showed you,
that same code runs on App
Engine directly as well.
Some of the nice things
is that if you run it
on Cloud Functions,
you don't have
to pay for when the
app is not running.
And also it scales very,
very quickly as well.
Underneath the
hood, they've done
some optimizations to make
it scale, and it's cheap.
Endpoints, actually,
is also an API gateway.
And I would actually say that
if you have enterprise use
cases, especially if you have
multi-cloud enterprise use
cases, Apigee is the way to go.
And Apigee also provides other
API management capabilities,
like creating developer
portals, and it has
more complex
policies as well.
Endpoints, on the other
hand, is cheaper.
I'll be honest with
you right here.
But it's also designed for
Google's infrastructure.
So if you have an app and your
APIs are all within Google
itself, Cloud Endpoints is
a great solution.
AUDIENCE: Hi there.
We tried to build a
small bot in our company
and one of the issues we
faced was name recognition.
In particular, with API.AI,
we wanted it to
recognize people's names.
And the issue is that
names can be foreign,
they can be objects that
already exist in the system.
How are you-- are there
any strategies for actually
distinguishing names?
Do you aggregate everything
from the corporate directory
and stick that in as a name?
I mean, what are
some things that you
can do to help solve that?
AMIR SHEVAT: So there's a few
strategies to handle that.
We have sign-in with Slack.
So when a user
installs the app,
it can actually get their
email, their team,
and their name.
You could also use our API
to list the users in the team
and get all the information--
all their profile information
on them.
AUDIENCE: Oh sorry.
I'm sorry I didn't clarify--
voice-based.
Voice-based.
ALAN HO: Oh,
voiced-based application.
AUDIENCE: Yeah, voice-based.
AMIR SHEVAT: That's
a good question.
ALAN HO: That's a hard question.
So if we could actually
go back to the demo.
So this might not clarify
everything that you said,
but when you create an
intent, say a name intent--
my name is Alan.
What it does is
automatically
recognize the name.
Now, what you can do is you
can create multiple variations
of this and basically--
Alan is my name.
What ends up happening
is that it gets smarter.
It gets smarter by having
multiple of these phrases
in there.
So you just basically
have to figure out
the corpus of phrases that
may have a name in there.
And you can add them
in and then that way it
will be able to better
pull out the name.
I know that doesn't
work all the time.
But that's maybe one
strategy you can use.
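Picking up the questioner's corporate-directory idea, one complementary strategy is to fuzzy-match whatever the recognizer produced against the list of known names. This is a minimal sketch using only Python's standard library; the directory contents and the 0.6 cutoff are made-up assumptions, not anything from the talk.

```python
from difflib import get_close_matches

# Hypothetical corporate directory, e.g. aggregated from Slack's users.list.
DIRECTORY = ["Alan Ho", "Amir Shevat", "Alana Hope"]

def resolve_name(heard, cutoff=0.6):
    """Map a possibly misrecognized name to the closest directory entry,
    or None if nothing is similar enough."""
    matches = get_close_matches(heard, DIRECTORY, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(resolve_name("Allan Ho"))    # Alan Ho
print(resolve_name("Qwerty Xyz"))  # None
```

The idea is that the speech recognizer only has to get close; the directory constrains the answer to names that actually exist in the company.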
AUDIENCE: All right.
Thanks.
ALAN HO: Any other questions?
[NO AUDIO]
AMIR SHEVAT: So the question
is, are engineers biased?
Are we using
engineering speech when
we're doing speech recognition?
And the answer is yes.
I'm an engineer so I can say it.
But I think we, as
engineers, have natural
language understanding problems.
[LAUGHTER]
Right?
It's true.
It's something that we have.
So I think having someone
who's an actual designer,
a conversational designer,
is very important.
Having a scriptwriter-- so,
for example, Hi Poncho,
which is a bot that
helps you with the weather,
actually hires people
who do conversational design.
They have scriptwriters,
and they have a CMS--
a content management system
that actually maps the conversation
and what the bot will say.
So yeah.
Let's not have engineers
create the scripts.
Let's have designers
create the scripts.
[NO AUDIO]
AMIR SHEVAT: OK.
Yeah.
ALAN HO: Can you
repeat the question?
AMIR SHEVAT: So the
question was, how do you
set the expectations?
What can the bot do or not do?
I think the key, in Slack,
is around onboarding.
So when you onboard
an app or a bot,
the key is to let the team
know what the bot could do.
Think of-- and it's the
same thing with humans.
So when you onboard a legal
aide to the team, what do you say?
You say, hey everybody, we
have this new legal aide.
They could help you
with legal questions.
And you can use
the legal channel
to communicate with her or him.
It's the same thing with bots.
So you can actually script
an onboarding script
that lets the team know how to
use the bot and what to expect.
The second thing is to
handle errors and feedback.
So there's a lot of things that
people will say to the bot.
And they will ask it,
they will play with it.
Because this is a new user
interface, people like to play.
So just manage expectations.
For example, don't
develop a chitchat model.
A lot of developers I've
met try to handle-- hello.
How are you?
What's the weather?
How have you been doing?
That's a total waste of time.
Be very, very clear
with what the bot does
and have a single
purpose bot that
solves one thing and people
will hopefully love it.
ALAN HO: Another suggestion,
especially if you're trying
to deploy into the enterprise:
you might have a
50-50 model where
the bot answers 50%
of the questions,
and a human answers
the other 50%.
As an example-- another use
case, on the Apigee website
itself-- we created a bot where,
the majority of the time,
questions would actually be
answered by an inside
salesperson, right?
Like if somebody's asking
for a pricing sheet
or needs to sign up for a demo.
But for the things
that are very, very
common, like how do I implement
OAuth, the bot would actually--
the application would actually
intercept that message
that's going to the
inside salesperson,
reply back immediately
to that person
with an answer that's suitable.
And to the user, it looks
like the inside salesperson
is actually giving the answer.
So in that scenario, you want to
make your AI system intercept
very, very specific
messages and fall back
to the human the majority
of the time,
because a lot of the time we
just haven't planned for it.
And that way you can,
over time, gradually
train up the bot to add more
and more new functionality.
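The intercept-or-fallback routing Alan describes can be sketched in a few lines. This assumes an NLU step (such as API.AI) that returns an intent name and a confidence score; the intent names, canned answer, URL, and 0.9 threshold are all made-up placeholders.

```python
# Hypothetical canned answers for the very common, well-understood intents.
CANNED_ANSWERS = {
    "implement_oauth": "See our OAuth guide: https://example.com/oauth",
}
# Intercept only very specific messages; everything else goes to a human.
CONFIDENCE_THRESHOLD = 0.9

def route(intent, confidence):
    """Return ('bot', answer) for high-confidence known intents,
    otherwise ('human', None) so an inside salesperson replies."""
    if confidence >= CONFIDENCE_THRESHOLD and intent in CANNED_ANSWERS:
        return ("bot", CANNED_ANSWERS[intent])
    return ("human", None)

print(route("implement_oauth", 0.95))  # bot intercepts and answers
print(route("pricing_sheet", 0.95))    # falls back to the salesperson
```

Starting with a small canned-answer table and a high threshold is what lets you gradually train up the bot over time, as the talk suggests.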
AMIR SHEVAT: I agree with that.
Maybe last comment on
that is that not all bots
are conversational.
Again, this is very important.
Most of the bots I've seen
are notifications and business
workflows.
These do not require
a lot of conversation.
It's just getting
things done.
You can facilitate
a lot with buttons
and rich interactions,
and then the process
becomes much more
intuitive for your users.
[NO AUDIO]
AMIR SHEVAT: Yeah, definitely.
So this is called DM,
Direct Message, and a bot
could DM any user.
So what I would see
is like you start
doing these sentences
inside the channel
and the bot analyzes this.
This would actually-- we
should implement that idea
for the KeanuBot, right?
And then the KeanuBot
could give you in a DM
like an interpretation
of what you said, right?
You did-- this was
too harsh, right?
So you can actually
implement a way
where the bot listens
to the channel
but gives you feedback
in a Direct Message.
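The listen-in-channel, reply-in-DM pattern Amir sketches could look something like this. The event shape loosely follows a Slack Events API message event, but the harshness check is a crude stand-in for real analysis, and `send_dm` stands in for a real `chat.postMessage` call; the word list and user ID are made up.

```python
# Made-up list of words the hypothetical KeanuBot would flag.
HARSH_WORDS = {"stupid", "useless"}

def feedback_for(text):
    """Toy analysis: flag messages containing harsh words."""
    if any(w in text.lower().split() for w in HARSH_WORDS):
        return "That might come across as too harsh."
    return None

def handle_message(event, send_dm):
    """On a channel message, DM private feedback to the author if needed."""
    feedback = feedback_for(event["text"])
    if feedback:
        send_dm(event["user"], feedback)

# Collect DMs in a list instead of actually calling Slack.
sent = []
handle_message({"user": "U123", "text": "This idea is stupid"},
               lambda user, msg: sent.append((user, msg)))
print(sent)  # [('U123', 'That might come across as too harsh.')]
```

The key design point is the split: the bot reads the public channel but delivers its feedback privately, so nobody is called out in front of the team.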
AUDIENCE: [INAUDIBLE].
AMIR SHEVAT: No.
No, no, no, no.
Yeah.
Sorry.
Yeah and sorry.
[LAUGHS]
AUDIENCE: [INAUDIBLE].
AMIR SHEVAT: Wow.
ALAN HO: Why don't you
repeat the question.
AMIR SHEVAT: I love all my bots.
I love bots that facilitate
business processes.
So for example, I ask
this in every event
so I'm going to ask this here.
How many of you like
to do expense reports?
Not a lot of people
raised their hands.
And why is that?
Because if you think about
doing expense reports,
it should actually be
an awesome experience.
I paid for something
and now the company
is paying me back, right?
We should wake up every morning
wanting to do expense reports.
But if you ask VPs, they don't
mind doing expense reports.
Why?
I paid for something and
now my personal assistant
helps me and makes that pain
go away and then I get paid.
So think of all the business
processes that are sucky--
for lack of a better
word-- in your business life
and see how you can implement
them using simple workflows.
Most of our business processes
are in spreadsheets or VB
script--
God forbid.
You can actually take a
lot of these processes
and implement them in a
conversation interface
and that makes a whole
lot of difference.
I hope I answered your
question without favoring
one son over the other--
who do I like, and
stuff like that.
ALAN HO: My favorite
is actually Audible.
I like listening to
audio books and I just
find it very easy
to just ask Audible
to open a particular chapter and
have it start reading it to me.
So that's my favorite.
AUDIENCE: [INAUDIBLE]?
ALAN HO: Repeat the question.
AMIR SHEVAT: So
the question was,
what do you do with
multilingual bots?
And I would say,
as an engineer, I
don't know what the right
answer is right now.
I think this is
uncharted territory.
I played a little
bit with API.AI
and you can train it
with different languages
for the same intent.
And that works to some extent.
But think of a place where
different people in the channel
speak different languages.
So I'm saying that I hear you.
I don't have a good answer.
But I think as an industry
we're still learning this thing.
You could use API.AI to do
the trivial mapping of like,
this sentence in
English, and this
is how it is in
French, and in Russian.
So that's a good
way of going at it.
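The "trivial mapping" Amir describes-- the same intent trained with phrases in several languages-- can be sketched like this. The phrase lists are tiny, made-up examples and the matching is exact-match only; API.AI does this with far better generalization over unseen phrasings.

```python
# One intent, with made-up training phrases in several languages.
INTENT_PHRASES = {
    "get_weather": {
        "en": ["what's the weather", "weather today"],
        "fr": ["quel temps fait-il"],
        "ru": ["какая погода"],
    },
}

def match_intent(utterance):
    """Exact-match lookup across all languages for the same intent.
    Returns (intent, language) or (None, None)."""
    text = utterance.lower().strip()
    for intent, by_lang in INTENT_PHRASES.items():
        for lang, phrases in by_lang.items():
            if text in phrases:
                return intent, lang
    return None, None

print(match_intent("Quel temps fait-il"))  # ('get_weather', 'fr')
```

Because every language maps into the same intent name, the rest of the bot's logic stays language-agnostic-- only the phrase tables grow.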
ALAN HO: Yeah.
We also have a translation
service as well.
There's like amazing
AI in the last year
that's helping these
translations get really good.
So I think it's going
to be interesting
because these translation
services are, today, basically
reading a lot of human
text from articles.
But as Google sees more and
more bots built on its platform
and it's seeing a lot
of chat coming through,
you're going to start seeing--
I predict that these
translation services will even
be very good for chat as well.
So that-- we're at day zero.
AMIR SHEVAT: Yep.
AUDIENCE: [INAUDIBLE]?
ALAN HO: So the
question was, especially
in a kind of a
serverless architecture,
how do you go about testing--
what is the application
lifecycle going to be like?
Well actually, a lot of this
is-- you think about it--
this is really more towards
a microservices architecture,
where a lot of the concerns that
used to be bunched up together
in one big monolithic
application
get broken up
into a lot of apps.
So I would say the
first thing you
do is have a lot of unit
tests against every single
microservice, and
really figure out
what the bounded context
of each microservice
actually is.
So I think that's like step one.
Figure out the
right architecture
for your microservices.
And then step two, around
your lifecycle of it,
I would just treat it like any--
and there's tons of
books on best practices
for microservices lifecycle.
I would treat it just like that.
For every single
microservice that
you put into production,
you want to make sure
you have a rapid process
to deploy new versions,
the ability to roll back,
blue-green deployments--
all that good stuff
that we've been
developing over the years.
AUDIENCE: [INAUDIBLE]?
AMIR SHEVAT: Thank you.
AUDIENCE: [INAUDIBLE]?
AMIR SHEVAT: So the
question is, how do you
see the bot world evolving with
bots not being in your face,
or like making too much
noise inside Slack?
So I think, again, it's
a learning experience.
If you think about
the first web pages,
they had all this flashing
text-- you remember
the HTML blink tag?
Everybody used animated GIFs
and flashing text on every page.
I think we're a little
bit in that era.
And I think we're learning.
So in the last year, we've worked
with a lot of bot builders
to actually make the onboarding
great-- doing less chat,
not DMing everyone.
The other thing is that you
can turn on admin approval
for apps, which means that only
administrators can install
bots.
So users can still request bots.
So they could go into the app
directory and request the bot,
but that goes into a queue where
the admin can approve those.
So if you're seeing too much
chatter in your Slack channel,
then I would turn that on.
Last thing is company culture.
For example, in Slack, when
I talk about something
off topic in a channel--
if I go to the marketing channel
and start talking about code--
people react with an
emoji that is a raccoon.
That tells me that I'm
speaking about the wrong thing
in the wrong channel.
So having cultural
things like that
to signal, hey, why did
you install this bot,
is a good best practice.
AUDIENCE: [INAUDIBLE]?
AMIR SHEVAT: No, no, no, no, no.
We're very--
ALAN HO: Repeat the question.
AMIR SHEVAT: So
the question was,
how intelligent can the bot be?
Can the bot have a
discussion with you
about the architecture, and
that connection to design?
And the answer
is, right now, no.
The word intelligent in
artificial intelligence
is marketing speech.
[LAUGHS]
AI is an amazing
tool, but bots are not sentient,
and bots are not
intelligent in the way
we perceive intelligence.
They're just smart.
I would call it
artificial smartness,
because once you teach
something to a smart person,
they know how to do it better.
But there's no intelligence.
At least not yet-- and I'm sorry--
maybe soon the overlords
will take over the world
and then I'll be
punished for this.
But this is my opinion.
What you can do is actually
facilitate workflows.
And that's where the
sweet spot of bots are.
So I need to approve a design.
Having the design
posted in channel,
and then having a button
that says approve,
instead of sending 10 emails--
that's where the bot excels.
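The approve-a-design workflow could be posted as a Slack interactive message along these lines. This follows the (legacy) attachment-with-button-actions shape Slack messages used at the time of this talk; the `callback_id`, image URL, and button values are made-up placeholders.

```python
import json

def approval_message(design_url):
    """Build a channel message with the design and Approve/Reject buttons,
    instead of a reply-all email thread."""
    return {
        "text": "New design ready for review",
        "attachments": [
            {
                "image_url": design_url,           # the design to approve
                "callback_id": "design_approval",  # routes the button click
                "actions": [
                    {"name": "approve", "text": "Approve",
                     "type": "button", "value": "approve"},
                    {"name": "reject", "text": "Request changes",
                     "type": "button", "value": "reject"},
                ],
            }
        ],
    }

msg = approval_message("https://example.com/design.png")
print(json.dumps(msg, indent=2))
```

When someone clicks a button, Slack posts the `callback_id` and the chosen value back to your app, so one click replaces the whole reply-all thread.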
So you know the
reply all nightmare?
You send an email to
a team and everybody
starts replying all.
You can actually
make that go away
with a bot in a very easy way.
So these are the use cases.
Think of the small
use cases you can
implement that are contextual
and save a lot of time.
ALAN HO: Your job is safe.
[LAUGHS]
All right.
AMIR SHEVAT: Thank
you very much.
ALAN HO: Thank you very much.
AMIR SHEVAT: Thank you.
[MUSIC PLAYING]
