>> Hey folks.
Service Fabric Mesh is
a fully managed Azure service
for developers to
build and deploy
mission critical applications
without managing any of
the infrastructure like VMs
and storage or networking.
Chacko Daniel is here to show us how effortless it is to scale a service from three replicas to thousands, today on Azure Friday.
Hi folks, I'm
Scott Hanselman and it's
another episode of Azure Friday.
I'm here with Chacko Daniel and we're going to talk about Service Fabric Mesh.
Last time, and people can look up that episode, you took a service and scaled it from one instance up to three, but you've come back to do even more.
>> Yes, just to show you what the true power of serverless really means: when infrastructure becomes something you don't have to think about, you're able to scale up.
So, I started with this particular slide out here. This is our pivot slide, which says we give you complete lifecycle management, we give you intelligent traffic routing, and it is the orchestration platform for microservices and containers.
>> Like Azure itself uses
Service Fabric for things.
>> Yes, Azure core uses it, and a lot of other large offerings like SQL Azure or Cosmos DB use it.
This particular one is in public preview now, so we don't have any internal customers using it at this point, but we have a lot of external usage.
>> Right. Service Fabric Mesh is different from Service Fabric in that I don't have to think about a lot of the administrative work; it's handled for me.
>> Correct. Correct. So, it can be used for the same things we use Service Fabric for: it can be used for modernization, or for your born-in-the-cloud applications.
You can bring your own network to
connect to other Azure
networks and so forth.
With minimal changes you
can actually put it into
Service Fabric Mesh.
Then you can also use
any language or any framework
which you like.
Whether it is in Java,
whether it is ASP.NET,
whatever else you think can be
containerized, you
can bring it to us.
>> I don't have to think about
the underlying virtual machine
or the underlying
operating system,
that is just part of the fabric?
>> Yes. So, based on your model, we will figure out where you will run. For example, if it's an Alpine image, we will put you on a VM which supports that. That's how it works; you don't need to think about it at all. So, basically, let me show you a demo.
>> Okay.
>> So, let me set it up a little bit so that you can see it. This demo has four services, all of them listening on the same network: there is the web front end, and then three worker services. Each of these services is meant to represent fireworks.
>> Sure.
>> So, at this point there is one container each, and each of them is just tracking the position of a firework and reporting it to the front end, and the front end is just mapping it.
>> Okay. So, to be clear, this isn't just an animation and some graphics. The fireworks have a lifecycle: they come and they go, they move and they change, they mutate their state. There is real work going on here.
>> Absolutely. It is just that the position being tracked is what the back end is telling the front end.
>> Oh interesting.
>> So that's what it is. It all comes together using REST. So, here's the JSON for it, so that you can also try it out at a smaller scale. Of course, the quota for an external customer is 12 cores; I have a larger quota, so I can run at this scale. So, the very first thing, and let me zoom in with Ctrl+Plus here a little bit.
Okay. So, there is a network which I define, which is what all four of the services will be listening on. Then I have an ingress endpoint which just says: here is the public port I will listen on, it's 8080.
>> Yeah.
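For readers following along, here is a rough sketch of what that network-plus-ingress resource looks like in a Mesh ARM template. This is based on the 2018-07-01-preview schema; the resource, application, service, and endpoint names here are illustrative, not the demo's actual values:

```json
{
  "apiVersion": "2018-07-01-preview",
  "name": "fireworksNetwork",
  "type": "Microsoft.ServiceFabricMesh/networks",
  "location": "eastus",
  "properties": {
    "description": "Private network that all four services join.",
    "addressPrefix": "10.0.0.4/22",
    "ingressConfig": {
      "layer4": [
        {
          "publicPort": "8080",
          "applicationName": "fireworksApp",
          "serviceName": "WebFrontEnd",
          "endpointName": "WebFrontEndListener"
        }
      ]
    }
  }
}
```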
>> Okay. Then this is the application itself. Customers who are used to Service Fabric will see similar notions: I have an application which has multiple services, and the services are using a network to talk to each other. That's exactly what's here as well.
This particular one, this is the web service. As you can see, the OS type is Linux, and it is deriving from this particular container image. It is listening on this particular endpoint, which is what we just defined in the network. And as part of the environment variables, this is another feature which we have, basically you're passing in something which the website understands.
>> Cool. So this is an ASP.NET Core application, listening on one port, running on Alpine, running within the Mesh itself.
>> Yes.
>> Awesome.
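The front-end service being described might look roughly like this inside the application resource. Again, this follows the 2018-07-01-preview Mesh schema, and the image name, endpoint name, and environment variable are illustrative assumptions rather than the demo's actual values:

```json
{
  "name": "WebFrontEnd",
  "properties": {
    "description": "ASP.NET Core front end that maps the fireworks.",
    "osType": "linux",
    "codePackages": [
      {
        "name": "WebFrontEnd",
        "image": "myregistry.azurecr.io/fireworks-web:1.0",
        "endpoints": [
          { "name": "WebFrontEndListener", "port": 8080 }
        ],
        "environmentVariables": [
          { "name": "ASPNETCORE_URLS", "value": "http://+:8080" }
        ],
        "resources": {
          "requests": { "cpu": 1, "memoryInGB": 4 }
        }
      }
    ],
    "replicaCount": 1,
    "networkRefs": [
      { "name": "fireworksNetwork" }
    ]
  }
}
```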
>> Yes. So, of course, it uses a full CPU here, and then four gigs of memory. This is the front end; the reason why it had to be that way is that it has to keep up with the traffic which the three worker services will pump in now. So, it has a replica count of one, of course, and it's listening.
Then now let's go to
the back end services.
So, each of these back end services is another one of the same container definitions which we have. I'm passing in an environment variable, ObjectType, which is set to Red. There is red, blue and green, of course, and that's how I change it: it's the same image, but the environment variables are what make the difference in terms of what colour of firework I have.
This one is a small container, of course; it is just 0.25 cores and 0.5 gigs.
>> Okay. So, a little bit of CPU and just a couple of hundred megs of RAM.
>> Yes.
>> All right.
>> Now, it used to be one; I'm going to make it 500, because that is going to be fun. And then I did the same for all three.
>> The three colors, red, green, and blue, at 500 replicas each, so 1,500.
>> Yes. That's what
we want to hit.
>> Wow.
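A worker service along the lines described, scaled to 500 replicas, might look roughly like this. The same 2018-07-01-preview schema caveat applies, and the service and image names are illustrative assumptions:

```json
{
  "name": "RedWorkerService",
  "properties": {
    "description": "Worker that simulates red fireworks.",
    "osType": "linux",
    "codePackages": [
      {
        "name": "Worker",
        "image": "myregistry.azurecr.io/fireworks-worker:1.0",
        "environmentVariables": [
          { "name": "ObjectType", "value": "Red" }
        ],
        "resources": {
          "requests": { "cpu": 0.25, "memoryInGB": 0.5 }
        }
      }
    ],
    "replicaCount": 500,
    "networkRefs": [
      { "name": "fireworksNetwork" }
    ]
  }
}
```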
>> So, let me just go to Azure, open up the CLI, copy this, paste, and then it will start to run.
>> Okay.
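The deployment step shown here would have gone through the preview "mesh" extension for the Azure CLI. A sketch of the commands, assuming a template file named fireworks.json and illustrative resource-group and location names:

```shell
# Install the Service Fabric Mesh preview extension for the Azure CLI.
az extension add --name mesh

# Create a resource group to hold the Mesh resources (names are illustrative).
az group create --name fireworksRG --location eastus

# Deploy the ARM template that defines the network and the four services.
az mesh deployment create \
  --resource-group fireworksRG \
  --template-file fireworks.json
```

These commands require an authenticated Azure subscription with the Mesh preview enabled, so they are shown for orientation rather than as a copy-paste recipe.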
>> So, once it starts to run, it will take a few seconds. Basically, it is now copying those images onto the different VMs on the back end so that the images are available. So, till now it's one, one, one.
So, while this is coming up, let me talk about Ignite. In the Ignite timeframe we will have a few more resources coming up, like routing rules. Like you saw last time, I can't always say the exact timing; routing rules are something which we are working on, and we don't know exactly whether they will show up at that point or later, but we'll certainly have some demos. We'll actually have secrets.
>> Ignite is at the end of September.
>> Yes, yes, yes. We will have secrets as well, so that you can terminate SSL at the front end.
>> It's just going to
integrate with Key Vault?
>> Yes, Key Vault will be by-reference. Initially we will have inline secrets, and then by-reference. So now.
>> Something's happening.
>> Yes. There it is.
>> Happening. All the numbers
are changing too.
>> Yes. So now we have
started to scale up.
>> Look at that chart. I wonder if your laptop's graphics card is going to be able to handle it.
>> We will see that.
>> Again, those are stateful objects, and the state is changing. It's not just an animation; it's a true representation on the front end of what's happening on the server side, on the back end.
>> So, there you have [inaudible] containers.
>> It's a pretty visceral
example of scale.
>> Yes, and the customer doesn't have to think about it. They just say what they need, and they have it. So, within around 45 seconds you have a thousand containers running. This basically goes to demonstrate that if traffic hits because your site becomes popular, you don't have to worry that you're going to get overloaded; Azure will cover you.
>> Yeah. I mean, if you think about that in the context of games or large social networks, or anything that has unpredictable traffic, where you just don't know what the traffic is going to be like from moment to moment, that level of real compute scale is amazing. So, you said that you have special abilities, because you can run 1,500 in preview while we can only go up to 12 cores. But at some point, is there really an end to how much Service Fabric Mesh can scale?
>> Actually, at this point we have not hit a limit. There will still be a quota which customers will be constrained to; they can say, I need 500 cores, 600 cores, 700 cores. Today in public preview it is 12 cores, but you can do partial cores for your containers.
>> That's a great point.
Because in this case here,
you were doing quarter cores
for some of those objects.
>> And we're thinking about micro-cores as well, so that's something which will happen later on. This really is from an application perspective; we always approach it from an application perspective. So, unlike bursty workloads, for which you would most likely use Functions and so forth, this is something for the long term.
So anyway, that's what we're planning to do. This will actually be available for you as a sample to look at. The samples will be available [inaudible].
>> Try aka.ms/servicefabricmeshsamples. Those are going to be up on GitHub, and of course you can find documentation in the Service Fabric Mesh section of the Microsoft docs. This can be played with immediately, and it's in preview. Big stuff happening at Ignite, and even more exciting features planned beyond that.
>> Will you come back and share these new features with me next time?
>> Absolutely.
Absolutely. Next time
I come here maybe we should talk
about the resource model and
the registry integration.
>> That will be great.
>> Then also maybe Spring Boot as well.
>> All right, let's
do it. I'm learning
all sorts of great stuff about
Azure Service Fabric Mesh
today on Azure Friday.
