>> Hi and welcome back to
another video in the Azure
Cloud native series.
My name is Gary Hope,
I'm a Program Manager on
the Azure Cosmos DB Team.
In this video, I'll briefly touch
on what makes Azure Cosmos DB
a great database for your
Cloud native applications,
and have a look at some of the levers you can use to optimize cost when leveraging it to build your highly scalable Cloud applications.
Azure Cosmos DB is a fully
managed NoSQL database for
modern app development with
SLA-backed speed and availability,
automatic and instant scalability,
and open source APIs for MongoDB,
Cassandra, and other NoSQL engines.
As a NoSQL database, it provides developers with the agility to efficiently support the modern app development life cycle, handling all the data as it evolves over the lifetime of the application, and it maps effortlessly to the object-oriented programming models most commonly used.
Using the default SQL Core API and JSON documents to store your application data provides you with a schema-agnostic way of storing your data, so you're not faced with the overhead of having to first model your data into tables and columns, and then understand the relationships between them, even before you start building your application.
As a fully managed Azure service, Azure Cosmos DB provides your application with unmatched throughput, latency, and consistency guarantees.
Through its by-design support for scale-out architecture, it provides cost-effective, linear application scalability as your application grows, compared to the scale-up architectures typically required by relational database systems.
This is great when you are using capabilities like the AKS cluster autoscaler to right-size your AKS cluster: your application can now scale to meet demand without having to worry about any limits inherent in the database layer.
The Azure Cosmos DB latency guarantee I previously mentioned provides you with single-digit millisecond reads and writes, so your application will always be snappy and provide your users with a consistent, high-performance experience.
With its turnkey global data distribution capabilities, Azure Cosmos DB can automatically replicate your data around the world, with support for every Azure region. This also provides your application with continuous service availability in the unlikely event of an Azure region becoming unavailable, and backs the five-nines availability SLA.
The ability to spread database writes across multiple regions enables you to place your data, for both reads and writes, as close to your users as their nearest Azure data center.
There are many ways to reduce costs and easily manage Cloud native applications with Azure Cosmos DB. Let me walk through the concepts we've talked about in a bit more detail.
Let's jump into the portal.
Here I have a pre-provisioned Cosmos DB account.
The account is named cloudnativedemo.
Within the account, we have a database called, unimaginatively, demodatabase,
and within that database we have
a container called LogisticsData.
You guessed it, the app
is a logistics app,
tracking and managing vehicles
and the packages they transport.
We will have a look at the data
in the container in a minute.
Let's have a look first at
the global replication
feature I mentioned earlier.
If you look to the right of the screen, you can see the data currently resides in West US 2. Enabling a copy of that data to be automatically and transparently replicated to the East US 2 region is just a matter of one click. Configuring multi-region writes is also just a matter of enabling the option.
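For reference, the same configuration can be scripted rather than clicked through. A minimal Azure CLI sketch, assuming our demo account name and a placeholder resource group, would look something like this:

    # Add East US 2 as an additional region alongside West US 2
    az cosmosdb update \
        --name cloudnativedemo \
        --resource-group myResourceGroup \
        --locations regionName=westus2 failoverPriority=0 \
        --locations regionName=eastus2 failoverPriority=1

    # Enable multi-region writes
    az cosmosdb update \
        --name cloudnativedemo \
        --resource-group myResourceGroup \
        --enable-multiple-write-locations true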
These are great capabilities,
however, they come at a cost.
So think carefully about
data availability and locality
requirements for your application.
If you don't need
them, turn them off.
I'm going to discard
these changes as this is
a development environment
and I don't need them now.
Let's have a quick look at
how the data is stored.
Within our demodatabase,
I'm going to drill
into the LogisticsData container.
Here you can see a list of JSON documents stored in our database. If you click on a document, you will see the data within it; in this case, vehicle data with attributes such as the vehicle identification number, the age of the battery, and what state the vehicle is registered in.
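To give a sense of the shape, a vehicle document might look something like this; the attribute names here are illustrative rather than the exact ones in the demo:

    {
        "id": "vehicle-001",
        "vin": "1FTFW1ET5DFC12345",
        "batteryAgeMonths": 14,
        "registeredState": "WA"
    }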
Azure Cosmos DB provides a rich query language to filter and project the data into a format tailored to your app's requirements. Here's a select statement over the container.
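In the Cosmos DB SQL syntax, the basic form is simply:

    SELECT * FROM c

where c is the conventional alias for the items in the container.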
These first records are the
same as what we saw earlier.
There is slightly more interesting data in this container too. It's a NoSQL database, so I don't need to create a separate container to store different types of data. Here's a query pulling back the trip data. It has a more interesting structure: some standard attributes, but here we can also see package data embedded in the trip document.
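A sketch of such a query, again with illustrative attribute names since we haven't seen the exact document shape, might look like:

    SELECT c.id, c.vehicleId, c.packages
    FROM c
    WHERE c.type = "trip"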
While Azure Cosmos DB
supports flexible schema,
it's important to think
about how you model
your data to support
efficient queries,
and in turn reduce
the database resource requirements
for your application.
I'll link to some
additional resources
on this topic for you to explore.
Let's talk a little more about the granular performance and cost controls we mentioned earlier.
When using Azure Cosmos DB,
you pay for the storage
used to store your data
and the operations you
perform against the database.
Any operation you perform consumes some amount of database resources; these are expressed in request units, or RUs. The more demanding the operation, the more request units it'll cost; for example, a point read of a 1 KB item costs one request unit. So choosing the right offer is all about selecting how Cosmos DB will deliver these request units while you are using the database.
The first offer, which has been available on Azure Cosmos DB since the service launched, is provisioned throughput. This offer lets you configure the amount of throughput you expect the database to deliver, and requires you to plan ahead for the right performance level to cover your application's needs.
Note that this offer delivers throughput expressed in request units per second. By configuring my database for 4,000 request units per second, I'm guaranteed that this amount of throughput will be available, which is great for applications that receive sustained traffic.
I also get very strong
guarantees over
latency and availability
of my database.
Now, it's pretty rare that request traffic is as steady as shown here, as application traffic often varies over time. This can lead to unused capacity if the amount of throughput we provisioned is substantially higher than the throughput your application needs.
When traffic patterns are predictable, Azure Cosmos DB lets you manually change the amount of provisioned throughput to make sure that the capacity you provision matches the capacity you need, either through the portal, by writing an automation script, or programmatically in your code.
Let's have a look at what provisioning throughput looks like in practice.
Here is the same account and
database and container
we spoke about earlier.
Let's provision a new database and
container for us to test with.
I'm going to call it MyTestDB,
imagine it [inaudible].
Cosmos DB provides you with
the ability to provision
throughput at both the
database and container level.
Provisioning throughput at the database level, like we're doing now, allows the throughput to be shared amongst multiple containers, up to 25 of them. This supports separate parts of your application and data, and potentially even separate applications' data, that you don't want to store in a single container, but for which you're happy to share throughput.
Here we choose the number
of request units per
second we want to provision
for our database.
I'll leave it at the default 400
request units per second for now.
I'll call our first container,
container 1, set the partition
key and click "Create."
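If you'd rather script these steps than click through the portal, the equivalent Azure CLI calls would look roughly like this, assuming our demo account name, a placeholder resource group, and a placeholder partition key:

    # Create a database with 400 RU/s of shared throughput
    az cosmosdb sql database create \
        --account-name cloudnativedemo \
        --resource-group myResourceGroup \
        --name MyTestDB \
        --throughput 400

    # Create a container that shares the database-level throughput
    az cosmosdb sql container create \
        --account-name cloudnativedemo \
        --resource-group myResourceGroup \
        --database-name MyTestDB \
        --name container1 \
        --partition-key-path "/id"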
I won't talk much here about partitioning and how to choose the right partition key to support the sharding of your data across multiple physical resources as we scale the database out. However, I will share a link to additional materials at the end of the video, given that choosing an appropriate partition key is also important to ensuring good performance as you scale your application up.
There we go, we have a new
database and container
within it using shared
database throughput.
It's not sharing it with any
other container at the moment,
so let's create another container.
I'm going to select
the existing database,
provide a container name,
you guessed it, specify the
partition key and click "Create."
Here we have our two containers
now sharing the same throughput.
Maybe that 400 request units per second is not large enough; let's scale that up to, say, 2,000.
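Scripted, those two steps would look something like this, with the same placeholder names as before:

    # A second container sharing the database-level throughput
    az cosmosdb sql container create \
        --account-name cloudnativedemo \
        --resource-group myResourceGroup \
        --database-name MyTestDB \
        --name container2 \
        --partition-key-path "/id"

    # Scale the shared database throughput from 400 up to 2,000 RU/s
    az cosmosdb sql database throughput update \
        --account-name cloudnativedemo \
        --resource-group myResourceGroup \
        --name MyTestDB \
        --throughput 2000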
Here we have it, our original
database and container,
and our new database with
two new containers sharing
database level throughput.
But what happens if I have
specific throughput requirements
for part of my application data?
No problem, we can create a
container for that data with
dedicated throughput
at the container level
within the same database.
I choose our existing database, let's call this one our dedicated container, and specify a partition key as we did before. However, now I'm going to select the option to provision dedicated throughput, specify the amount of throughput required, and click "Okay."
Here you can see our
dedicated container.
If we go back to the Overview,
you can see that it has its own
2,000 request units per second,
and we can now adjust that throughput
for this container independently.
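In the CLI, dedicated throughput is just a matter of passing a throughput value when creating the container; a sketch with the same placeholder names:

    # A container with its own dedicated 2,000 RU/s
    az cosmosdb sql container create \
        --account-name cloudnativedemo \
        --resource-group myResourceGroup \
        --database-name MyTestDB \
        --name dedicatedcontainer \
        --partition-key-path "/id" \
        --throughput 2000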
That covers the manual provisioning scenario using the portal.
Let's have a look at
adjusting the throughput to
match your application's
needs through automation.
Here I'm back in the portal,
let's open up Cloud Shell
and have a look at what's
involved in setting up
throughput using the Azure CLI.
Using the az cosmosdb CLI command, I can show the throughput of the dedicated container we've just configured: 4,000 RUs. Using the az cosmosdb CLI command again, I can now set the throughput, in this case to 2,500 request units per second.
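Concretely, these are the throughput show and throughput update subcommands; with our placeholder resource group, they look like this:

    # Show the current throughput of the dedicated container
    az cosmosdb sql container throughput show \
        --account-name cloudnativedemo \
        --resource-group myResourceGroup \
        --database-name MyTestDB \
        --name dedicatedcontainer

    # Lower it to 2,500 RU/s
    az cosmosdb sql container throughput update \
        --account-name cloudnativedemo \
        --resource-group myResourceGroup \
        --database-name MyTestDB \
        --name dedicatedcontainer \
        --throughput 2500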
A quick refresh of the portal, and here you can see that the throughput is adjusted almost instantaneously.
We can do the same
thing using PowerShell.
Using the Get-AzCosmosDBSqlContainerThroughput cmdlet, we can see the provisioned throughput. We can then set the throughput using the Update-AzCosmosDBSqlContainerThroughput cmdlet, in this case down to 2,500 again.
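The PowerShell equivalent, again with our placeholder names, looks like this:

    # Read the current provisioned throughput
    Get-AzCosmosDBSqlContainerThroughput `
        -ResourceGroupName "myResourceGroup" `
        -AccountName "cloudnativedemo" `
        -DatabaseName "MyTestDB" `
        -Name "dedicatedcontainer"

    # Set it back down to 2,500 RU/s
    Update-AzCosmosDBSqlContainerThroughput `
        -ResourceGroupName "myResourceGroup" `
        -AccountName "cloudnativedemo" `
        -DatabaseName "MyTestDB" `
        -Name "dedicatedcontainer" `
        -Throughput 2500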
As you can see, modifying the amount of throughput using PowerShell or the Azure CLI is super easy, and it allows you to match the provisioned throughput to your application's predicted database capacity requirements.
This automation can be added to
the existing automation used
to scale your application.
