[MUSIC PLAYING]
NEHA PATTAN: We're really glad
that all of you could make it.
Hope you're enjoying
Cloud Next 2019.
Now, over the past
few years, you've
heard us talk a lot
about how Google Cloud is
built on top of our global
network infrastructure.
This year, we wanted to take
the opportunity to dive deeper
into this aspect of the
global network design,
talk about how it
enables features
in global virtual
networks, or VPCs,
and hopefully encourage
you to think big
and to think global
when you're planning
for your own
application deployments.
We have a packed talk for you today.
My name is Neha Pattan, and I'm
a software engineer at Google.
I'm going to be joined by
Marshall Vale, who is a product
manager on the Cloud DNS
team, and by Ed Hammond, who
is a senior enterprise architect
on the Cardinal Health team.
And he's going to be sharing
with us the inside scoop on how
Cardinal Health deploys VPCs.
So this is basically
what cloud consists of.
Cloud is divided into regions,
which further get subdivided
into zones.
Now, a region is a
geographic location
in a certain continent
where the Round-Trip
Time-- or the RTT--
from one VM to another
is typically under
1 millisecond.
A region is typically divided
into three or four zones.
And a zone is a
geographic location
within a region which
has its own fully
isolated and independent
failure domain.
And so no two machines
that are in different zones
or in different regions
share the same fate
when it comes to failure.
So they're definitely in
different data centers.
In GCP at this time, we have
19 regions and 58 zones.
So how does Google's network
infrastructure power this?
Now, Google's network
infrastructure
basically consists of three
main types of networks.
The first is the
data center network.
This is what connects all
the machines in a data center
together.
The second is a
software-based private WAN
that we call before that
connects all the data
centers together.
And the third is also
an SDN-based public WAN
for user-facing traffic
entering our network.
So a machine basically gets
connected from the internet
via the public WAN, and gets
connected to other machines
and other data centers
over the private WAN.
So when you send a packet from
your virtual machine running
in cloud, let's say,
in North America,
to a GCS bucket in Asia, for example,
then the packet doesn't
leave the network backbone.
It basically doesn't
traverse over the internet,
it traverses over
Google's network.
Now one of the things that
I would like to mention
here is that, in addition
to the peering routers
at the edge of our
network, we also
deploy network load balancers
and layer 7 reverse proxies.
The layer 7 reverse
proxies basically
help us in terminating
the user's TCP or SSL
connection at a location
close to the user.
Now this is really important
because, as you know,
establishing an HTTPS
connection requires
two network round-trips between
the client and the server.
And so it's really
important to reduce
the RTT between the
client and the server.
And we do this by bringing the
server closer to the client.
And so we basically terminate
the end user's TCP or SSL
connection at a location that
is closest to the end user.
So this is basically what
comprises the global Google
network.
With 134 network edge locations,
presence in over 200 countries
and territories, and content
delivered through Google Cloud
CDN, you get access to
the same functionality
that also powers all
of Google's services,
like Search, Maps,
Gmail, YouTube.
Each of these are used by over
a billion users worldwide.
This is a snapshot
of our footprint.
In GCP, as I mentioned
before, we have 19 regions.
We have also announced two new
regions that will be turning up
before the end of this year.
So we are turning
up regions in Japan
and South Korea this year.
Before the end of
next year, we'll
also be turning up
two more regions,
in Salt Lake City, USA,
and in Jakarta, Indonesia.
As I mentioned before,
we have 134 PoPs,
or Points of Presence,
and 96 CDN locations.
Our data centers are connected
to each other through hundreds
of thousands of miles
of fiber optic cable.
We also have 13 sub-sea
cable investments.
We have 81
interconnect locations.
So these are basically sites
where you can physically
peer with us directly in
order to use either dedicated
or partner interconnect.
So why is all of this
important to you?
Why do you care that the
network is actually global?
Now, if you were to deploy your applications,
you would do so by deploying
the applications to multiple VMs
in a single zone in order
to protect yourself against single-VM
or single-machine failures.
In order to protect yourself
against single-zone failures,
you can replicate
your application
across multiple
zones in the region.
So this basically gives
you added redundancy,
and so increases
the availability
of your application.
In order to further
protect yourself
against regional outages--
so things like natural disasters
or other such rare events--
you can then replicate
your application
across multiple regions.
And this is how you get
a global deployment.
Now what is critical here is
that with global deployments,
you are assured that
all the traffic stays
on Google's network backbone and
does not leave Google's network
backbone.
So you get access to all of your
compute and storage resources,
globally and
privately, which means
that you don't need to assign
public IPs to your VMs,
and you don't need to set
up expensive VPN or peering
connections in order to
stitch regional VPCs together.
Now, another way of
designing for robustness
is to use load balancing.
Due to the global nature
of Google's network,
we are able to offer global
load balancing so that you'll
get a global IP address
assigned to your load-balanced
application.
So a user in Taiwan sees
the same global IP address
for your application
as a user in Texas,
and they both get routed to
the closest healthy backends.
So the user in Texas
may get routed
to the closest healthy
backend that may
be running in Portland, Oregon.
But if there are no
healthy backends that
are running in any
region in North America,
they may get routed to the next
closest healthy backend, which
may be across the Atlantic.
But the important thing here is
that the end user's TCP or SSL
connection will get
terminated close
to the location of the user
before getting encrypted
and then routed over the
Google network backbone
to the data center in Europe.
We use the same DDoS
protection system in Cloud
as we do for the rest of
Google, and so you benefit
from the same perimeter
security that the rest of Google
services have.
We are able to assign
a single global IP
address to our load-balanced
applications using Stabilized
Anycast.
Now, Anycast, as many of you would know,
is an addressing and routing methodology
that allows you to route datagrams
from a single sender to the closest
of a set of multiple receivers that are
all configured to receive traffic
on the same IP address.
We use Stabilized Anycast in order
to preserve the TCP session
despite BGP instability.
So if an end user's ISP is
recalculating the routes
that they are
announcing, then there
may be a certain amount of
instability in the TCP sessions
that the end user experiences.
As a result of this,
the end user's request
may get routed to a
network load balancer
in a location that is not
closest to the end user.
Now when this happens,
the network load balancer
in that location figures out
that the client IP is actually
closer to a different location,
and then forwards that
request to that location.
So this is how we basically
preserve TCP session stability
despite BGP instability.
So that's all about
the physical network.
How does this enable features
in the global virtual network?
So as you know, virtual network
services in Google Cloud
are global in nature.
So you can create
your network once,
you can associate network
policies once in your network.
So things like firewall
rules and routing policies
can be applied once
to your network,
and these policies
will work seamlessly
as you expand to
multiple regions
and you place your compute
resources in new regions.
If you're an enterprise
network administrator,
then you can use
Shared VPC in order
to centrally create and
manage your network,
while allowing full
autonomy to your developers
and allowing your
organization to scale
to hundreds or even thousands
of developers or development
teams.
So let's take a
look at an example.
Now let's say, in
your organization,
you would like to create
two types of applications.
One is a web application
that is internet-facing,
and the other is a
billing application that
is not facing the internet.
With Shared VPC, you can
create a load balancer
in your web app project.
For reasons that we
discussed earlier,
you'll want to
create redundancy,
and so you create compute
resources in multiple regions.
And then you can set these as
backends of your load balancer.
You can then create internal
load-balanced applications
inside the billing
project, and you
can use your billing VMs as
backends to these internal load
balancers.
And thus you can create
multiple tiers of applications
in your Shared VPC.
You can then associate
firewall rules within the VPC
in order to restrict traffic.
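To make that setup concrete, here is a minimal sketch of how a network administrator might enable Shared VPC and attach a service project, using the Compute Engine API via the google-api-python-client library; all project IDs here are hypothetical:

```python
# Hypothetical sketch: make one project a Shared VPC host and attach a
# service project to it, using the Compute Engine API.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Designate the centrally managed network project as the Shared VPC host.
compute.projects().enableXpnHost(project="host-project").execute()

# Attach the web app project so its VMs can use the host's subnets.
compute.projects().enableXpnResource(
    project="host-project",
    body={"xpnResource": {"id": "webapp-project", "type": "PROJECT"}},
).execute()
```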
Due to the global
nature of our network,
you also get access to
Google-managed API services
privately, which means that you
don't need to assign public IP
addresses to your VMs
in order to access API
services like Cloud Storage, Machine
Learning, Bigtable, Spanner, and many others.
This functionality of privately
accessing Google-managed API
services is now
extended to the on-prem
so that you are able to
access Google-managed API
services privately from
your on-prem network via VPN
or interconnect.
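As a rough illustration, private access to Google APIs is switched on per subnet; a sketch with a hypothetical subnet in the Shared VPC host project:

```python
# Hypothetical sketch: let VMs with no external IP on this subnet reach
# Google-managed APIs privately.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

compute.subnetworks().setPrivateIpGoogleAccess(
    project="host-project",
    region="us-central1",
    subnetwork="web-subnet",
    body={"privateIpGoogleAccess": True},
).execute()
```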
In order to secure your VPC, you
can use network layer firewall
rules.
Network layer firewall
rules are stateful and
connection-tracked.
Now, you can create allow
or deny firewall rules,
either in the ingress
or the egress direction,
by specifying source IP ranges
or destination IP ranges.
You can also use tags
or service accounts
in order to easily
group resources that you
are applying firewall rules on.
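For example, here is a minimal sketch of a service-account-scoped ingress rule; the names are hypothetical throughout:

```python
# Hypothetical sketch: allow only the web tier's service account to reach
# the billing tier's VMs on TCP 8080.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

firewall_body = {
    "name": "allow-web-to-billing",
    "network": "global/networks/shared-vpc",
    "direction": "INGRESS",
    "allowed": [{"IPProtocol": "tcp", "ports": ["8080"]}],
    # Traffic is matched by the identity the sender VM runs as...
    "sourceServiceAccounts": ["web-tier@host-project.iam.gserviceaccount.com"],
    # ...and the rule applies to VMs running as the billing service account.
    "targetServiceAccounts": ["billing@host-project.iam.gserviceaccount.com"],
}
compute.firewalls().insert(project="host-project", body=firewall_body).execute()
```

Scoping by service account rather than IP range keeps the rule valid as VMs come and go.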
Now, another really
important feature
is the ability to enable and
disable firewall rules.
This is something that we have
launched in the past year.
So if you're
troubleshooting your network
and you would like to find
out the root cause of an issue
by temporarily disabling
a firewall rule
and seeing what effect
it has on the network,
now you don't have to delete the
firewall rule and recreate it.
You can simply disable and then
re-enable the firewall rule.
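A quick sketch of that troubleshooting flow, reusing the hypothetical rule from above:

```python
# Hypothetical sketch: toggle a firewall rule off and back on while
# debugging, instead of deleting and recreating it.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Temporarily disable the rule and observe the effect on traffic.
compute.firewalls().patch(
    project="host-project",
    firewall="allow-web-to-billing",
    body={"disabled": True},
).execute()

# Once the root cause is found, restore the rule.
compute.firewalls().patch(
    project="host-project",
    firewall="allow-web-to-billing",
    body={"disabled": False},
).execute()
```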
Another really important
feature that we
have launched to GA over the
past year is firewall logging.
Firewall logging
basically allows
you to see reports
of connection records
that get created every time
a firewall rule gets applied
to a connection.
Firewall logs are not
sampled, unlike VPC Flow Logs,
but there is a limit on the
number of connection records
that get exported in a
five-second interval,
and this limit depends
on your machine size.
The firewall logs get exported
to the shared VPC host project
so that the security
administrator
of your organization can
view the firewall rules
and verify that the firewall
rules in the organization
are administered and are
getting applied correctly.
You can export firewall logs
via Stackdriver to Cloud Pub/Sub,
Cloud Storage, or BigQuery.
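Enabling logging is a per-rule setting; a minimal sketch, again with hypothetical names:

```python
# Hypothetical sketch: turn on logging for one firewall rule, so that a
# connection record is emitted whenever the rule is applied.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

compute.firewalls().patch(
    project="host-project",
    firewall="allow-web-to-billing",
    body={"logConfig": {"enable": True}},
).execute()
```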
Now, as I mentioned,
firewall logs
consist of connection records,
where a connection record
gets created every time a
firewall rule gets applied.
So for VM-to-VM traffic, there
would be a connection record
that gets created for the
egress rule on the sender VM,
and for the ingress rule that
gets applied on the receiver
VM.
This is also true for VM-to-VM traffic
in the VPC even if the VMs belong
to different service projects,
and for traffic that is entering
or leaving your VPC, whether to a peered VPC,
to the internet, or to your on-prem
through a VPN connection.
Now, if you would like to
apply security policies
at the edge of our
network, then you
can do so by creating security
policies using Cloud Armor.
And what this
allows you to do is
to specify the rules that
should get applied at the edge.
So this is perimeter
security that
gets applied on your
load-balanced applications.
So let's say you create a
load-balanced application.
You can now associate security
policies, using Cloud Armor,
with the load balancer.
And this ensures
that the traffic that
is permitted to the
load-balanced application
will conform with the rules
that you have specified.
So your VMs basically don't
see traffic from the senders
that you have blacklisted.
You can specify Cloud Armor
rules using IP blacklists
or whitelists.
This set of rules,
this functionality
is basically generally
available now.
You can also specify the
rules using [INAUDIBLE] rules
or geospecific rules, or you can
use a flexible rules language
in order to customize the rules
that you would like to specify.
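As an illustrative sketch, a Cloud Armor policy with one deny rule and a default allow might be created like this; the names, priorities, and ranges are hypothetical:

```python
# Hypothetical sketch: a Cloud Armor security policy that blocks one
# source range at the edge and allows everything else.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

policy_body = {
    "name": "edge-policy",
    "rules": [
        {
            # Deny the blacklisted range before it reaches your backends.
            "action": "deny(403)",
            "priority": 1000,
            "match": {
                "versionedExpr": "SRC_IPS_V1",
                "config": {"srcIpRanges": ["198.51.100.0/24"]},
            },
        },
        {
            # Lowest-priority default rule: allow all remaining traffic.
            "action": "allow",
            "priority": 2147483647,
            "match": {
                "versionedExpr": "SRC_IPS_V1",
                "config": {"srcIpRanges": ["*"]},
            },
        },
    ],
}
compute.securityPolicies().insert(
    project="webapp-project", body=policy_body
).execute()
```

The policy is then attached to the load balancer's backend service.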
So what does a typical
deployment look like?
Now, in a typical deployment,
you would create an HTTP
or HTTPS load balancer-- and
this is a global load-balanced
application--
and you would automatically get
defense against DDoS attacks.
This is because, at the edge,
we implement DDoS protection,
and so any traffic that
is entering our network
gets DDoS defense for free.
If you would like to
apply custom rules
for your load-balanced
application,
then you can use
Cloud Armor, and you
can associate the
Cloud Armor security
policy with your load balancer.
If you would like
to allow access
to your load-balanced
application
only for users that have been
granted this access using
IAM policies, then you can use
a product called Cloud IAP.
That's short for
Identity-Aware Proxy.
So what the identity-aware proxy
does is check the end user's
credentials against IAM policy
to verify whether the end user
has been granted IAM access
to this load-balanced application,
and if they have, it will
allow the traffic to come in.
So traffic will
basically enter your VPC.
It will be received
by the VMs only
if it is allowed both
by Cloud Armor as well
as the identity-aware proxy.
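From the client side, a programmatic caller of an IAP-protected application presents an OIDC token minted for the IAP OAuth client ID. A sketch using the current google-auth library, with a hypothetical audience and URL:

```python
# Hypothetical sketch: call an IAP-protected, load-balanced application.
import requests
from google.auth.transport.requests import Request
from google.oauth2 import id_token

# The OAuth client ID of the IAP-protected resource (hypothetical).
AUDIENCE = "1234567890-abc.apps.googleusercontent.com"

# Mint an OIDC token for that audience from ambient credentials.
token = id_token.fetch_id_token(Request(), AUDIENCE)

resp = requests.get(
    "https://app.example.com/",
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code)
```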
Within the VPC, you can
use network layer firewall
rules to specify the
security on your VMs,
and you can specify them so
that they allow traffic only
from the load balancer proxy,
so you don't have to open the ports
on your VMs to the internet.
Another really important
aspect of security design
is ensuring that you mitigate
the risk of data exfiltration.
You can do this now using
VPC service controls
so that you define a service
perimeter or a security
perimeter outside of which your
data cannot be accessed or copied.
With the functionality of
allowing private access
to Google services from
your on-prem network
via VPN or interconnect,
the definition
of your security
perimeter can also
be extended to your on-prem.
So that brings us to
the next question--
how do you access on-prem?
Now, there are multiple ways of
connecting your virtual network
running in the cloud to
your on-prem network.
You can do so using
VPN, and you can either
configure a VPN
using static routes
or you can use Cloud
Router in order
to dynamically exchange
routes using BGP.
Or you can use
interconnect, in which case
you're directly peering with us.
And you can use either
dedicated interconnect,
where you control
the peering, or you
can use partner interconnect,
where the partner peers with us
and you pay the partner for
the bandwidth that you use.
You can create VPN connections
or interconnect attachments
in the shared VPC host project,
and the virtual machines
and all the service projects
that are attached to the shared
network will then get access
to your on-prem via the VPN
connection or the interconnect.
We're really
excited to announce,
now, the option to create
highly-available VPN
connections.
With highly-available VPN,
you can create two interfaces
on your VPN gateway,
and you can connect
the peer gateway on your
on-prem to these two
different interfaces.
Each of these interfaces
gets a different IP address,
and this allows redundancy
in your connection
to your on-prem network.
You can either use
highly-available VPN
in active-active
mode, in which case
you're advertising the routes
from your on-prem using
the same [INAUDIBLE] value
or the same priority,
and you would need
to use the same base
priority as well
when advertising
those routes to your VPC.
And in this case, both
the tunnels will be used,
and the traffic will be ECMP-hashed
over both the tunnels.
Or you can use it in
active-passive mode, where
one of the tunnels is
used while it is up,
and when we figure out that
the connectivity is down,
then we fall back
to the other tunnel.
This basically increases
the availability of your VPN connection
to your on-prem, reducing downtime by 90%.
And so in a single region, you
get four-nines availability
for connecting to
your on-prem network.
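A rough sketch of creating such a gateway, assuming the HA VPN API surface (which was still new at the time) and hypothetical names:

```python
# Hypothetical sketch: create an HA VPN gateway; it comes with two
# interfaces, each with its own external IP, for redundancy.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

compute.vpnGateways().insert(
    project="host-project",
    region="us-central1",
    body={"name": "ha-vpn-gw", "network": "global/networks/shared-vpc"},
).execute()

# Tunnels are then created against interface 0 and interface 1 of the
# gateway to get the two redundant paths to the on-prem peer.
```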
Now, the next thing
that we have launched
to [INAUDIBLE] over the
past year is Cloud NAT.
Now, Cloud NAT is a feature
that a lot of our enterprise
customers have been asking
for, and they're really
excited to use.
Cloud NAT is basically
a managed NAT solution
that allows you to
configure access
to the internet on
your virtual machines
without having to give your
virtual machines public IPs.
We implement outbound NAT, we
don't implement inbound NAT.
And so this increases
the security of your VPC
by ensuring that connections
cannot be initiated from
a malicious user on the internet
to your virtual machines
running in the Cloud.
Cloud NAT basically
scales seamlessly
across VMs and
across connections
by handling both static as
well as dynamic IP allocation.
You can configure NAT as well
in the shared VPC host project
in the regions where you have
virtual machines that need
connectivity to the internet.
So let's imagine that you have
a package server that you would
like to download packages from,
onto your virtual machines
running in your VPC, before
you can bootstrap them.
Now, after you've configured the
NAT gateways in your shared VPC
host project, the VMs in
all the service projects
that are attached
to the shared VPC
will be able to
access the package
server using the IPs that are
managed by the NAT gateway.
One thing that is really
important to mention here
is that NAT is a
control plane component,
it's not a proxy-based solution.
And so you get the same
bandwidth and performance
for your internet
connection as you
would if you were to give
public IPs to your VMs.
So NAT basically
scales really well.
You need only a single NAT
gateway, because it's not
a proxy-based solution,
in order to manage
the NAT IPs for thousands of
VMs in that region in your VPC.
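A minimal sketch of that configuration, assuming an existing Cloud Router in the host project and hypothetical names:

```python
# Hypothetical sketch: add a NAT config to an existing Cloud Router so
# VMs without external IPs can reach the package server.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

compute.routers().patch(
    project="host-project",
    region="us-central1",
    router="shared-router",
    body={
        "nats": [{
            "name": "shared-nat",
            # Let Google allocate and manage the NAT IPs automatically.
            "natIpAllocateOption": "AUTO_ONLY",
            # NAT every subnet in the region behind this gateway.
            "sourceSubnetworkIpRangesToNat": "ALL_SUBNETWORKS_ALL_IP_RANGES",
        }]
    },
).execute()
```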
Now, if you assign public IP
addresses to your VMs primarily
for getting connectivity
to the internet,
but you also had the added
advantage of being able to SSH
to your VMs using
those public IPs,
then you lose that capability
when you remove public IPs
and when you switch
to Cloud NAT,
because connections cannot be
initiated from outside the VPC
using Cloud NAT.
Cloud IAP-- which we discussed briefly
in the context of global load-balanced
applications-- is now extended
to have functionality
for TCP forwarding.
And what this
allows you to do is
to specify IAM policies on who
has access to SSH to your VMs.
And when a request
comes in, the proxy
will basically check
whether the user
has been granted permission
to SSH to the VM.
And then if that check passes,
then the SSH connection
will be wrapped inside HTTPS
and then, using TCP forwarding,
will be sent to the remote
instance running in your VPC.
Now, coming to the
next topic, how do you
access managed services?
Managed services can be
accessed by using VPC peering.
So if you want to get full-mesh
connectivity between two VPCs
that are running in cloud--
they may be in different
organizations or the
same organization--
then you can use VPC peering.
We're really excited to announce
the general availability
of private service access.
Now, private service access
is a managed solution
that allows you to get a
private connection to a managed
service.
And the other really
cool thing that it does
is it allows you, as
a service consumer,
to specify a global IP range
and hand this off to the service
provider so that all the
sub-networks in the service
provider's VPC get carved
out of this global IP range.
And so, as a consumer, you're
able to better plan for the IP
ranges that are used for
your managed services.
One of the really
important features
that we have added as
functionality to VPC peering
is the ability to access peer
VPCs from your on-prem network.
And so if you have a VPC, and
you are accessing a managed
service through VPC peering,
and you have a VPN connection
from your on-prem to
your VPC, then you
are able to also access the managed
services from your on-prem,
via VPN or interconnect,
through your VPC.
The other feature
that we have launched,
which is now available
in beta, is the ability
to control custom
route exchanges.
So let's take a look at all
of these in more detail.
So let's say you
have two networks.
The one on the left
is a consumer VPC
and the one on the
right is a producer VPC.
These two are in
different organizations.
And once you peer
these two VPCs,
you get full-mesh
connectivity between all
the virtual machines
in both these VPCs.
And so you're able to
access the managed service
VMs from virtual machines
in different regions
in your consumer VPC, as well as
in different service projects.
So basically, you get
full-mesh connectivity.
Now let's imagine an
example where the consumer
VPC basically has added custom
routes in their routing table.
So on the consumer
VPC's routing table,
you can see the default local
routes for the subnets that
are created in the consumer
VPC, you can see the peer route
for the subnet that is
created in the peer VPC,
and you see two
additional static routes.
So one of them is a static
route to the VPN tunnel.
So that's basically
a route going
to the consumer's on-prem.
And there's another route
that is configured to next hop
to a VM.
So let's say you have
an appliance that
is running on the VM.
On the peer side, you're able
to see the default local route
to the subnet on the
peer VPC, and you're also
able to see the peer
routes to the subnets that
are added in the consumer VPC.
However, by default, you cannot
see the custom routes that get
added in the consumer VPC.
And so the route to the VPN
as well as the route to the VM
appliance are not visible in the
routing table of the producer
VPC, by default.
With the ability to exchange
custom routes over VPC peering,
these routes will now be
visible in the producer VPC.
And so if you enable
export in the consumer
and you enable import of
custom routes on the producer,
the producer's
routing table will
be populated with the
static routes that are
defined on the consumer VPC.
So here you can see that the 10.4/16 route,
which goes to the consumer's on-prem
via VPN, as well as the static route
10.5/16, which goes to the VM appliance,
are now visible in the routing
table of the producer VPC.
You can disable this by either
disabling export of custom
routes or by disabling import of
custom routes on the receiving
VPC.
Now, both export of
custom routes as well as
import of custom routes
are disabled by default.
And so in order to
exchange custom routes,
you need to enable
export on the sender VPC
and enable import
on the receiver VPC.
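Concretely, that is one flag on each side of the peering; a sketch with hypothetical project, network, and peering names:

```python
# Hypothetical sketch: exchange custom routes over an existing peering.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Consumer side: export the static/dynamic custom routes to the peer.
compute.networks().updatePeering(
    project="consumer-project",
    network="consumer-vpc",
    body={"networkPeering": {"name": "to-producer",
                             "exportCustomRoutes": True}},
).execute()

# Producer side: import the routes the consumer is exporting.
compute.networks().updatePeering(
    project="producer-project",
    network="producer-vpc",
    body={"networkPeering": {"name": "to-consumer",
                             "importCustomRoutes": True}},
).execute()
```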
Now, the next topic
I'll be talking about
is that of VPC flow logs.
Now, VPC flow logs are
basically a feature
that we announced and
launched last year,
and we have enhanced
this product in order
to allow you to have more
control over the flow log size
that gets generated.
The default aggregation
interval for VPC flow logs
is five seconds.
But now you can
configure this and change
it to be anywhere from five
seconds up to 15 minutes.
You can also configure
the flow report sampling.
By default, the sampling
is 50%, and you can now
configure it to be anywhere
between 0% and 100%.
So this basically allows
you to control the flow log
size that gets generated.
We also add certain
metadata to flow logs,
and there is now an option
to exclude this metadata.
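A sketch of tuning those knobs on one subnet; the interval and sampling values here are illustrative, and the patch needs the subnet's current fingerprint:

```python
# Hypothetical sketch: lower the flow log volume on a busy subnet.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Read the subnet first, because patch requires its fingerprint.
subnet = compute.subnetworks().get(
    project="host-project", region="us-central1", subnetwork="web-subnet"
).execute()

compute.subnetworks().patch(
    project="host-project",
    region="us-central1",
    subnetwork="web-subnet",
    body={
        "fingerprint": subnet["fingerprint"],
        "logConfig": {
            "enable": True,
            "aggregationInterval": "INTERVAL_10_MIN",  # 5 sec up to 15 min
            "flowSampling": 0.25,                      # between 0.0 and 1.0
            "metadata": "EXCLUDE_ALL_METADATA",        # drop the added metadata
        },
    },
).execute()
```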
And with that, I
would like to invite
Marshall onstage to share with
us a few recent announcements
on Cloud DNS.
MARSHALL VALE: Thank you, Neha.
So my name is Marshall Vale.
I'm the product manager for
Cloud DNS here at Google Cloud.
And so one of the key elements
of connecting your resources
together in your
VPC is, of course,
DNS, or the Domain Name System.
Today I'm going to
give you a summary
of those types of capabilities
that Cloud DNS provides
for your VPC,
along with a couple
of exciting new announcements.
Of course it all starts with
Cloud DNS private zones.
Private zones allow
internal DNS resolution
within your private network.
Now, it's important to keep
your internal resolution
to your private networks
because that helps exclude
external parties from
discovering information
about the
resources, of course,
on your private networks.
Private zones can be
attached to a single network
or multiple networks
in your VPC.
Private zones also
support what's called split horizon,
which allows you to have overlapping
public and private zones.
So for example, you
may have a portal
that looks different
for your employees
than it does for your customers.
Private zones also
support IAM policies.
So you can delegate
administration capabilities,
edit, or view capabilities
for your zones.
Pleased to announce that
private zones is now
in general availability.
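As a sketch, creating a private zone attached to one network might look like this, assuming the Cloud DNS API surface and hypothetical names:

```python
# Hypothetical sketch: a private zone visible only inside one VPC.
from googleapiclient import discovery

dns = discovery.build("dns", "v1")

network_url = (
    "https://www.googleapis.com/compute/v1/projects/"
    "host-project/global/networks/shared-vpc"
)

zone_body = {
    "name": "internal-zone",
    "dnsName": "corp.example.com.",
    "description": "Private zone for internal services",
    "visibility": "private",
    # Only resolvers inside this network can see the zone.
    "privateVisibilityConfig": {"networks": [{"networkUrl": network_url}]},
}
dns.managedZones().create(project="host-project", body=zone_body).execute()
```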
Also very excited
to announce, here
at Next, the availability
of Stackdriver logging
and monitoring for
your private zones.
This allows query logs
and counts about responses
to be logged to Stackdriver.
From there, you can store
it long term in Stackdriver,
but you can also use
Stackdriver's Pub/Sub
capabilities to send that
along to other storage
locations, such as
BigQuery, but also on-prem
for your own storage
and analysis tools.
The query logs that are
recorded are probably
very similar to BIND logs
that you're familiar with.
They store information such
as QNAME or RDATA, but also
specific GCP Cloud DNS things,
such as project ID or DNS
policies.
So for the metrics
and the monitoring,
that records information such as
SERVFAIL or NXDOMAIN counts.
And this is all really
important because it
helps you debug your DNS
situations on your VPC.
But also it's important
for security analysis,
for threats in your system.
Pleased to announce
that, here at Next,
it's now available
in public beta.
So I've been talking about Cloud
DNS services within your VPC,
but you also may need to
connect your DNS services
from your VPC to
other locations,
such as on-prem or
even another VPC.
So for connecting to
on-prem, we have a capability
called DNS forwarding.
This allows bi-directional
DNS resolution
from your GCP resources
to your on-prem resources.
Now, DNS outbound forwarding
allows your GCP resources
to resolve your hostnames
using an on-prem authoritative
server, such as Bind
or Active Directory.
DNS inbound forwarding
does the opposite.
It allows your on-prem
sources to use Cloud DNS
as your authoritative server.
You can learn a little bit
more about DNS forwarding,
and of course the variety
of hybrid connection options,
in the NET204 session.
DNS forwarding is
currently in public beta.
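A sketch of an outbound forwarding zone, with a hypothetical on-prem resolver address:

```python
# Hypothetical sketch: forward queries for a corp domain from GCP to an
# on-prem authoritative server (e.g. BIND or Active Directory).
from googleapiclient import discovery

dns = discovery.build("dns", "v1")

network_url = (
    "https://www.googleapis.com/compute/v1/projects/"
    "host-project/global/networks/shared-vpc"
)

zone_body = {
    "name": "onprem-forwarding",
    "dnsName": "corp.internal.example.com.",
    "description": "Forward to the on-prem resolvers",
    "visibility": "private",
    "privateVisibilityConfig": {"networks": [{"networkUrl": network_url}]},
    # Send queries for this domain to the on-prem name server.
    "forwardingConfig": {"targetNameServers": [{"ipv4Address": "10.10.0.53"}]},
}
dns.managedZones().create(project="host-project", body=zone_body).execute()
```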
I'm also very
excited to announce
availability of DNS peering.
This allows you to share DNS
resolution across multiple VPCs.
Let's look at a
couple scenarios.
First, say you are a SaaS provider
and you want to connect your VPC
with your consumer's.
The consumer would create
a special DNS peering zone
that connects to the network
in the service provider's VPC,
and the provider would have their own zone
that the final resolution
would happen from.
Or you might want to combine
this with DNS forwarding
to create an
architecture where you
have multiple VPCs that can
use an on-prem authoritative
resolver.
From there, you would
make a single VPC that
would be a hub VPC
doing DNS forwarding,
and you have multiple
spoke VPCs that
would use DNS peering to
connect into the hub VPC.
DNS peering even supports
resolution for your .internal
addresses.
Here at Next, DNS peering is
now available in public beta.
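A sketch of the SaaS scenario above: the consumer creates a peering zone whose resolution is delegated to the producer's VPC. This assumes the beta API surface of the time and hypothetical names:

```python
# Hypothetical sketch: a DNS peering zone in the consumer VPC that
# resolves a domain via the producer's VPC.
from googleapiclient import discovery

dns = discovery.build("dns", "v1beta2")  # peering config was in beta

consumer_net = (
    "https://www.googleapis.com/compute/v1/projects/"
    "consumer-project/global/networks/consumer-vpc"
)
producer_net = (
    "https://www.googleapis.com/compute/v1/projects/"
    "producer-project/global/networks/producer-vpc"
)

zone_body = {
    "name": "peering-zone",
    "dnsName": "svc.example.com.",
    "description": "Resolve via the producer VPC",
    "visibility": "private",
    "privateVisibilityConfig": {"networks": [{"networkUrl": consumer_net}]},
    # Delegate the actual resolution to the producer's network.
    "peeringConfig": {"targetNetwork": {"networkUrl": producer_net}},
}
dns.managedZones().create(project="consumer-project", body=zone_body).execute()
```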
So with that, you can see
a summary of the Cloud DNS
services for your private
zones, and the wide variety
of flexibility that it supports
in your VPC architectures.
So with that, I'll
pass it back to Neha
to give you a summary of
what you've heard today.
NEHA PATTAN: Thanks, Marshall.
So to summarize, the three main takeaways
that we would like you to
focus on from our presentation
are that VPCs are
global in nature,
they're built on top of a
global network backbone,
and you're assured that the
traffic basically never leaves
the network backbone.
You get access to all of your
compute and storage resources,
globally and
privately, so you don't
have to assign public IPs
to your virtual machines
in order to access
managed Google
services like Cloud storage.
You're able to configure
your global network once.
You are able to associate
network policies
with this network,
and you're also
able to centrally
create and manage
the network in your
organization while allowing
your organization
to scale to hundreds
or even thousands of developers
or development teams.
Security comes first
with Cloud VPC.
You can apply network
layer firewall rules
for specifying security
within your VPC.
These firewall rules
are stateful and
connection-tracked.
You can specify Cloud
Armor security policies
in order to apply
rules that get enforced
at the edge of our network
for your load-balanced
applications, and you can
also use VPC service controls
in order to mitigate the
risk of data exfiltration.
VPC features are
basically integrated
with Stackdriver logging.
And now, with VPC flow
logs and firewall logging,
it is really easy
to monitor VPCs.
With that, I would like
to invite Ed on stage
to share with us Cardinal
Health's story on how
they deploy and use VPCs.
Thanks.
JONATHAN EDWARD
HAMMOND: I should
start by saying this is
a really great session.
I've actually watched
it a couple times.
So for those on
video, you might want
to go over it a couple times.
At Cardinal Health, we're
a Fortune 500 company,
and we have used a
lot of the features
that you've heard today.
We're a global
enterprise, and so we
have services running all over.
Primarily, our first
deployments in GCP
had been within
the United States,
but we have plans and
designs to fully go global,
and we've provisioned a lot
of that in our networks.
Our business objectives
in adopting cloud
are largely to be
quick to market,
be able to adopt different
technologies very quickly,
and be very cost-effective.
Obviously, in all
our businesses,
we want to try and save money,
and so we're always interested
in trying to be the most
effective with every dollar
we spend.
The other thing
that we have to do
is we have to make sure
that we're protecting all
our health care information.
There's lots of regulations
in various different countries
about health care information,
so protecting our customer
information is very key to us.
As far as having
agility, and being
able to be flexible,
and speed to market,
we want to try and move
our shared services'
traditional structure
more into a DevOps
model, where
development teams can
provision their own services.
But we still need to
make sure it's secure.
So we need to provide
techniques and capabilities
where they can be
very agile, very
flexible, without
having to say mother
may I, if you will,
but still keep
all those guardrails in place
so that they don't do something
that would be compromising
to our customer data
or to the enterprise.
And of course I mentioned
the cost-effectiveness.
Obviously that's a key aspect.
So what we chose
to do early on was
we created two host projects.
So we have two host
projects that basically
do all the network things.
So that what that does is,
for our application teams,
they don't have to think about
all those network pieces.
All the interconnects and
all that kind of stuff
is hidden from them.
And then we have hundreds
of service projects
that sit on top of that,
and they use these two host
projects.
We have one for production,
one for non-production.
Each one of those
have multiple VPCs.
And those service projects are
then allocated or authorized
to be used by various
different development teams.
Those development teams then
are able to utilize the network.
One thing that-- if you're
in the network world
and you've talked to
a lot of developers
for a length of
time, you'll find
that you start talking about
routes, and BGP, and firewalls,
and their eyes
glaze over and they
don't want to hear about it.
I don't know about you, but
I've run into a lot of that.
So we try and make it very,
very easy for our consumers,
our consumers being the
development staff that are
actually helping the business.
And in that regard, there's lots of
need for training,
there's lots of need
for documentation,
and that is a continual process.
And our company, it's
very large, and we
have lots of people coming
and going all the time,
so we're constantly doing this
education over and over again--
documentation, training
videos, in-house webcasts,
and that kind of
thing that we do
are very, very key to
our strategy there.
We also have to make sure
we have all the governance
controls in place,
both proactive
and reactive
governance controls.
That's really critical to
success in a highly-regulated
industry such as ours. So
the value of the host projects,
again, is that I have a small
number of network experts--
very small number
of experts that
know how all this
networking thing works.
So all the interconnects,
VPN tunnels,
VPC peering, how all
that gets put together,
all that physics that's
behind the scenes,
we try and make it simple
and easy to consume.
We have a few centralized
appliances that we use.
I gave a presentation
earlier this week
about how we used some
centralized appliances
to do some of the more
traditional security perimeter
work that you might expect
in a large enterprise that's
been around for a
long period of time.
And we're moving to the cloud.
We have a lot of technologies
that we have to bring with us.
We also have needed, at times,
to create a separate subnet.
Say, for instance,
there's a service project
that they have a
very specific need.
So we can create a
subnet just for them,
and allow just that service
project access to that subnet.
So therefore, nobody else knows
about it, just this one service
project.
That's pretty powerful.
We've used that a
couple of times.
As far as firewall rule
management-- again,
this is one of those
governance controls.
We use firewall
logging extensively.
That was a great feature that
we looked forward to, and have
been using it since the
day it was released--
or available to us.
We've been using it for a while.
But we use service accounts.
So the goal here is that
I have a VM instance, say,
and another VM instance, and
it's in my own service project.
We allow the target and
the source to be the same,
and we allow ingress and
egress rules for those
so they can talk to each other.
So it's a wide-open "any" rule, so I
have a collection, if you will.
Within my service
project, I can grant that.
If I want to go between
two service projects,
then I go service account
to service account.
What that does is it gives
us pretty good segmentation
across the different projects,
and pretty tight controls
in that regard.
When we have a standard pattern,
like a north-south pattern,
for example, an
internal web server.
An internal web server
is a standard 80 and 443,
allowed in from the
RFC 1918 address space.
And so if that's
the case, then we
allow them to put
on a network tag.
This has been
preapproved by security.
And you apply that network
tag only to the VM instances
where that applies.
So not everybody gets it,
but now, as a DevOps team,
I don't have to go
to the network team
to do a firewall rule.
I just apply that tag, I
get those firewall rules.
And in some cases
we have network tags
that apply to ingress,
egress, and routing rules.
For example, for
our egress, we use
that to get out through
some security appliances
for some certain use cases.
So we do it based on routing
and firewall rules as well.
And then we also
have cloud functions
that actually monitor some
of these firewall rules
so that we don't do
anything really dangerous.
This is the reactive.
And so if somebody,
for some reason,
creates a firewall rule that
was not supposed to be the case,
we shut that down
within seconds.
I'm guilty of doing that.
I created a firewall rule.
It was a 0.0.0.0/0 rule.
I was doing it in a
very specific use case.
The function caught
me, I closed it down, I
re-enabled it a couple times.
I'm like, oh, I
know what's wrong,
I didn't add myself
to the exclude list.
So that was a good test.
It actually worked, just
what we wanted it to do.
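A simplified sketch of that reactive guard, with hypothetical names; a real deployment would trigger it from an audit log feed rather than call it directly:

```python
# Hypothetical sketch: disable any ingress firewall rule that is open to
# the whole internet, unless it is on a pre-approved exclude list.
from googleapiclient import discovery

EXCLUDE_LIST = {"approved-break-glass-rule"}

def check_firewall_rule(project: str, rule_name: str) -> None:
    compute = discovery.build("compute", "v1")
    rule = compute.firewalls().get(
        project=project, firewall=rule_name
    ).execute()

    if rule_name in EXCLUDE_LIST:
        return  # pre-approved exception, leave it alone

    if rule.get("direction") == "INGRESS" and \
            "0.0.0.0/0" in rule.get("sourceRanges", []):
        # Shut the rule down within seconds of it appearing.
        compute.firewalls().patch(
            project=project, firewall=rule_name, body={"disabled": True}
        ).execute()
```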
The value of the
service projects,
again, is for the DevOps teams.
So a DevOps team can actually
build all their resources.
As you know, a project is a boundary
for authorization as well
as for costing, in our case.
We have costing
based on projects.
So with that, we can delegate
the accountability down
to a DevOps team, and let
them move on their own.
Again, we advertise only--
I say advertise, it's
grant the permission to--
only selected subnets
to these projects.
So rather than seeing all
the VPCs that we've built
and seeing all the
subnets that we've built,
we only advertise
selected subnets.
So say, for instance, your
project wants to run in Europe
and only in Europe,
I'm not going
to show you any of the subnets
that are in the United States.
Same way in the United
States, or in Asia, whatever.
So that's very,
very powerful to us.
So then the
developers don't have
to start thinking about
all these network things.
It's us-central1 or us-east1.
That's a pretty simple thing.
Developers can deal with that.
We also have an
organizational policy
that prohibits creating any instances
with an external IP address.
Obviously the NAT capability
was a powerful feature
in that regard as well.
OK so to sort of summarize,
showing on this picture,
we have multiple different VPCs.
Each one is global network X, Y,
and Z. We have different VPCs,
and those are provisioned
in the host project.
Within any service
project, I may
expose only one or two subnets.
And in this case, service
project B and service project A
are communicating to each
other from different regions--
they can be global--
over the Google backbone,
and that all works.
And the firewall rules
that you see on there,
I have illustration
of three of them.
The first one is the
internal web server.
So that's a network
tag-based rule.
So if I put that
on my server, I can
receive 80 and 443
traffic in-- my decision
as a DevOps person.
And then the middle one
is the "service account
to service account."
So if you're in that
same service account,
you have ingress/egress
to all your peers,
and that's an
"any" kind of rule.
And then the last one
is "service project
to service project."
So we're using,
as much as we can,
service account to
service account.
And that gives us that
point-to-point communication.
Well, that concludes my
portion of the presentation.
I'll turn it back over to Neha.
[APPLAUSE]
NEHA PATTAN: Thanks, Ed.
So we might not have too
much time for questions,
but we'll be available outside
this room for Q&A. So thanks
so much, everyone, for coming,
and see you again next year.
[MUSIC PLAYING]
